
Version 51 (modified by bmy25, 4 months ago)

AI For Behavioral Discovery

Team: Adarsh Narayanan (UG), Benjamin Yu (UG), Elias Xu (HS), Shreyas Musuku (HS)

Advisors: Dr. Richard Martin and Dr. Richard Howard


Project Description & Goals:

The past 40 years have seen enormous increases in man-made Radio Frequency (RF) radiation. However, the possible small and long-term impacts of RF radiation are not well understood. This project seeks to discover whether RF exposure affects animal behavior. In this experimental paradigm, animals are subjected to RF exposure while their behavior is video recorded. Deep Neural Networks (DNNs) are then tasked with correctly classifying whether a video contains RF exposure or not. This uses DNNs as powerful pattern-discovery tools, in contrast to their traditional uses in classification and generation. The project involves evaluating the accuracy of a number of DNN architectures on pre-recorded videos, as well as describing any behavioral patterns found by the DNNs.


Weekly Progress:

Week 1 - https://docs.google.com/presentation/d/1oZaaNaLMyTjMO3_yzCU-_ruVrwQr0rTuVrc27HG6WAo/edit?usp=share_link

  • Created synthetic data to train a model to perform binary classification based on whether a single bee's flight home follows a linear or a curved path (curvature caused by the presence of a distortion field).
    • Gathered further insight into how data preparation and model training can work for the real dataset.
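A minimal sketch of how such synthetic trajectory data might be generated. The function names, noise levels, and the sinusoidal bend used to model the distortion field are all hypothetical; the project's actual generator is not shown on this page.

```python
import numpy as np

def make_trajectory(n_points=50, field_on=False, rng=None):
    """Synthesize one bee flight toward a home at the origin.

    Without a distortion field the path is a straight line plus noise;
    with the field on, a perpendicular sinusoidal bend curves the path.
    (Hypothetical parameters, not the project's actual generator.)
    """
    rng = rng if rng is not None else np.random.default_rng()
    start = rng.uniform(-1.0, 1.0, size=2)
    t = np.linspace(0.0, 1.0, n_points)[:, None]
    path = (1.0 - t) * start                  # straight line from start to (0, 0)
    if field_on:
        # Bend perpendicular to the straight-line heading.
        direction = -start / np.linalg.norm(start)
        perp = np.array([-direction[1], direction[0]])
        path = path + 0.3 * np.sin(np.pi * t) * perp
    return path + rng.normal(scale=0.01, size=path.shape)

def make_dataset(n_per_class=200, seed=0):
    """Stack equal numbers of class-0 (no field) and class-1 (field) flights."""
    rng = np.random.default_rng(seed)
    X = [make_trajectory(field_on=(label == 1), rng=rng)
         for label in (0, 1) for _ in range(n_per_class)]
    y = [0] * n_per_class + [1] * n_per_class
    return np.stack(X), np.array(y)
```

A classifier then only needs to separate straight paths from bent ones, which is the oversimplified scenario the early weeks' models were trained on.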

Week 2 - https://docs.google.com/presentation/d/1BebSXbCDB7Z3yCCVYtAP1WkcEWYFKNNOK1TcxBEtSrs/edit?usp=share_link

[Figures: software_pipeline, no_distortion_field, yes_distortion_field]


Week 3 - https://docs.google.com/presentation/d/1gv2Mb9vWc3VottF-gdk-rGUH5jPb285WYxMsCsw6OnI/edit?usp=share_link

1 frame per sample:

[Figures: a_fold_1, a_fold_2]

4 frames per sample:

[Figures: b_fold_1, b_fold_2]

Confusion matrices (biased toward class 1, where the field is on); the model overfit, as expected, since the scenario the data emulates is oversimplified for such a complex model:

[Figures: confusion_1, confusion_2]


Week 4 - https://docs.google.com/presentation/d/1v5lVYUB6YdxdCAED8_bN5KRCGSDFeQI9UTT7IYCR_as/edit?usp=sharing

Calculated a hypothetical decision boundary.

If our hypothetical recall ≈ actual recall, then the hypothetical boundary might actually represent the model's decision boundary.

Class 0 hypothetical recall (samples between the curves / total actual class 0): ≈0.607
Actual class 0 recall (tested on 500 samples): ≈0.896
[Figure: up-down-trajectories]
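As a sanity check, per-class recall can be read directly off a confusion matrix. A minimal numpy sketch with illustrative labels (not the project's 500-sample evaluation):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes=2):
    """Rows = actual class, columns = predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def recall(cm, cls):
    """Recall for one class: correct predictions / actual members of the class."""
    return cm[cls, cls] / cm[cls].sum()

# Illustrative labels only -- not the project's evaluation data.
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_pred = np.array([0, 0, 0, 1, 1, 1, 1, 1])
cm = confusion_matrix(y_true, y_pred)
print(recall(cm, 0))  # 3 of 4 actual class-0 samples recovered -> 0.75
```

Comparing this measured recall against the geometrically derived "hypothetical" recall is the check described above.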

Gathered Grad-CAM heat maps

→ Gave us insight and confirmation that the home and the bee were being used as features

[Figure: gradcam]
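The core Grad-CAM computation behind these heat maps can be sketched on mock tensors. This follows the standard Grad-CAM recipe (global-average-pool the gradients to get channel weights, take a weighted sum of feature maps, then ReLU); the arrays below are placeholders for a real model's conv-layer activations and gradients.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heat map from one conv layer.

    activations: (K, H, W) feature maps from the forward pass
    gradients:   (K, H, W) gradients of the target class score with
                 respect to those feature maps (from a backward pass)
    """
    # Channel importance weights: global-average-pool the gradients.
    weights = gradients.mean(axis=(1, 2))                          # (K,)
    # Weighted sum of feature maps, then ReLU to keep positive evidence.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalize to [0, 1] for overlaying on the input frame.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Mock tensors standing in for a real conv layer (8 channels, 7x7 maps).
rng = np.random.default_rng(0)
acts = rng.random((8, 7, 7))
grads = rng.normal(size=(8, 7, 7))
heatmap = grad_cam(acts, grads)
print(heatmap.shape)  # (7, 7)
```

Upsampled to the input resolution, the bright regions of `heatmap` indicate which pixels (here, the home and the bee) drove the classification.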


Week 5 - https://docs.google.com/presentation/d/1zLVgiYbtt4TZ1SGsb9uq4mg7eFy8aJ9Eb-sj8c6GtM4/edit?usp=sharing

Week 4's dataset results (left - 1 frame/sample, right - 4 frames/sample)

[Figures: week4-1frm-result, week4-4frm-result]

First iteration of the radial dataset - radial entries, 200 entries per side, normalized vectors, fixed center home, fixed field magnitude, 4 possible field directions → significantly improved the location distribution of both classes

[Figure: 1st iteration of radial dataset]
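A hypothetical sketch of a generator matching the radial-dataset description above: starts drawn on a circle around a fixed center home, normalized step vectors toward home, and a fixed-magnitude field chosen from 4 axis-aligned directions. All constants and names are illustrative, not the project's actual code.

```python
import numpy as np

# Four possible field directions (axis-aligned), per the dataset description.
FIELD_DIRS = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)

def radial_sample(field_on, n_points=50, radius=1.0, field_mag=0.3, rng=None):
    """One trajectory starting on a circle, heading to the home at the center.

    Each step is a normalized vector toward home; when field_on is True,
    a fixed-magnitude field from one of 4 directions deflects every step.
    """
    rng = rng if rng is not None else np.random.default_rng()
    angle = rng.uniform(0.0, 2.0 * np.pi)       # uniform start angle on circle
    pos = radius * np.array([np.cos(angle), np.sin(angle)])
    field = field_mag * FIELD_DIRS[rng.integers(4)] if field_on else np.zeros(2)
    path = [pos.copy()]
    step = radius / n_points
    for _ in range(n_points - 1):
        to_home = -pos
        to_home = to_home / np.linalg.norm(to_home)   # normalized heading
        pos = pos + step * (to_home + field)          # field deflects each step
        path.append(pos.copy())
    return np.stack(path)
```

Because starts are uniform on the circle for both classes, the class label is no longer correlated with location, which is the distribution improvement noted above.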

First radial iteration's training results (left - 1 frame/sample, right - 4 frames/sample)

[Figures: 1st radial 1 frm, 1st radial 4 frm]

Grad-CAM plot for one of the layers - the model seems to be focused on the background instead of the bee

→ Similar accuracies between 1 and 4 frames/sample could suggest that the model is not learning motion from the sequence of frames
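One way to see the 1 vs. 4 frames/sample distinction is in how samples are shaped before they reach the model. A hypothetical numpy sketch (the project's actual preprocessing may differ): stacking consecutive frames along a leading axis gives the model access to motion, so matching accuracies across the two shapes hint that the extra frames are being ignored.

```python
import numpy as np

def to_frame_windows(video, frames_per_sample=4):
    """Split a (T, H, W) video into non-overlapping multi-frame samples.

    With frames_per_sample=1 each sample is a single frame; with 4, the
    frames are stacked along a leading axis (e.g. as CNN input channels),
    giving the model a chance to learn motion across frames.
    """
    n = video.shape[0] // frames_per_sample
    return video[:n * frames_per_sample].reshape(
        n, frames_per_sample, *video.shape[1:])

video = np.zeros((10, 32, 32))
print(to_frame_windows(video, 1).shape)  # (10, 1, 32, 32)
print(to_frame_windows(video, 4).shape)  # (2, 4, 32, 32)
```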

