1. Trained a small and a large neural network (DenseNet and MobileNetV2) on the CIFAR-10 dataset (training sketch after this list)
2. Performed PCA and SVM on the NNs to familiarize ourselves with PyTorch (PCA/SVM sketch below)
3. Loaded the MNIST image dataset onto an ORBIT node
4. Connected two nodes with a client-server architecture and extracted data for time and accuracy measurements (client-server sketch below)
5. Compared the performance of both neural networks on the CIFAR-10 dataset
    * Established a connection between the two nodes
    * Sent test data between the nodes to compare the accuracy and delay of our NN models
6. Worked with professors/mentors and read papers to understand the concepts of early exit, split computing, the accuracy/latency tradeoff, and distributed deep neural networks over the edge cloud
7. Split ResNet-18 across two devices using split computing and ran an inference across the network (split-computing sketch below)
8. Used the Network Time Protocol (NTP) to synchronize the nodes' clocks and sent data in "packages" (chunks) to collect latency and delay data (NTP sketch below)
9. Explored different research questions with the data collected: __________
10. Limited CPU power in the terminal to imitate mobile devices (CPU-throttling sketch below)
11. Implemented different confidence-based threshold values for sending data to the edge and server for inference (early-exit sketch below)
    * Generated graphs of threshold vs. latency, accuracy vs. latency, ____ (LINK TO GRAPHS)
12. Retrained the neural network to achieve 88% accuracy and collected new graphs (LINK)
13. Introduced a delay in both inference and data transfer to simulate a queue (queue-delay sketch below)
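
A minimal sketch of the kind of training loop behind step 1, assuming torchvision's CIFAR-10 loader and a MobileNetV2 with its head resized to 10 classes; the batch size, learning rate, and epoch count here are illustrative, not our exact settings.

```python
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T
from torch.utils.data import DataLoader

# Standard CIFAR-10 normalization constants.
transform = T.Compose([T.ToTensor(),
                       T.Normalize((0.4914, 0.4822, 0.4465),
                                   (0.2470, 0.2435, 0.2616))])
train_set = torchvision.datasets.CIFAR10(root="./data", train=True,
                                         download=True, transform=transform)
loader = DataLoader(train_set, batch_size=128, shuffle=True)

# MobileNetV2 with its classifier sized for CIFAR-10's 10 classes;
# swapping in DenseNet works the same way.
model = torchvision.models.mobilenet_v2(num_classes=10)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

for epoch in range(10):  # illustrative epoch count
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```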
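
For step 2, a sketch of one way to combine PCA and an SVM on CIFAR-10 with scikit-learn; the subsample size, component count, and kernel are assumptions for illustration, and the features could equally come from a trained NN's penultimate layer.

```python
import numpy as np
import torchvision
from sklearn.decomposition import PCA
from sklearn.svm import SVC

# Raw pixels, subsampled for speed (SVM training scales poorly with sample count).
train = torchvision.datasets.CIFAR10(root="./data", train=True, download=True)
X = train.data[:5000].reshape(5000, -1) / 255.0
y = np.array(train.targets[:5000])

# PCA compresses 3072 pixel features down to 50 components (assumed value).
X_reduced = PCA(n_components=50).fit_transform(X)
clf = SVC(kernel="rbf").fit(X_reduced, y)
print("train accuracy:", clf.score(X_reduced, y))
```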
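
For steps 4-5, a sketch of the two-node client-server exchange, assuming a length-prefixed pickle framing and an arbitrary port; the real node hostnames, wire format, and batch contents differed, and the server's inference call is a placeholder.

```python
import pickle
import socket
import struct
import sys
import time

PORT = 50007  # assumed port

def send_msg(sock, obj):
    """Length-prefix a pickled object so the receiver knows how much to read."""
    payload = pickle.dumps(obj)
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock, n):
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed early")
        buf += chunk
    return buf

def recv_msg(sock):
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return pickle.loads(recv_exact(sock, length))

if sys.argv[1] == "server":            # usage: python node.py server
    srv = socket.socket()
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    batch = recv_msg(conn)             # test data from the client node
    preds = [0] * len(batch)           # placeholder: run the NN on `batch` here
    send_msg(conn, preds)
else:                                  # usage: python node.py client <server-host>
    sock = socket.create_connection((sys.argv[2], PORT))
    start = time.time()
    send_msg(sock, list(range(1000)))  # stand-in for a batch of test images
    preds = recv_msg(sock)
    print(f"round trip: {time.time() - start:.3f}s")
```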
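
For step 7, a sketch of splitting torchvision's ResNet-18 into a head for the mobile device and a tail for the server; the split point after layer2 is an assumed choice, and in the real setup the intermediate tensor was serialized and sent over the network between the two calls.

```python
import torch
import torchvision

model = torchvision.models.resnet18(num_classes=10).eval()

# ResNet-18's top-level children: conv1, bn1, relu, maxpool,
# layer1..layer4, avgpool, fc. Split after layer2 (assumed split point).
children = list(model.children())
head = torch.nn.Sequential(*children[:6])      # conv1 .. layer2
tail = torch.nn.Sequential(*children[6:-1])    # layer3, layer4, avgpool
fc = children[-1]

@torch.no_grad()
def mobile_side(x):
    return head(x)          # intermediate activation to ship to the server

@torch.no_grad()
def server_side(z):
    return fc(torch.flatten(tail(z), 1))

x = torch.randn(1, 3, 32, 32)   # CIFAR-sized dummy input
logits = server_side(mobile_side(x))
print(logits.shape)             # torch.Size([1, 10])
```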
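
For step 8, a sketch of chunked ("package") sends with NTP-corrected timestamps; `ntplib` is one third-party way to query NTP from Python, and the chunk size and pool address are assumptions. Applying the same offset correction on both nodes lets one-way delay be estimated from the per-chunk timestamps.

```python
import socket
import time

import ntplib  # pip install ntplib (assumed client library)

CHUNK = 4096  # assumed "package" size in bytes

# Offset between the local clock and the NTP server's clock.
offset = ntplib.NTPClient().request("pool.ntp.org").offset

def send_in_chunks(sock: socket.socket, payload: bytes):
    """Send payload chunk by chunk, recording an NTP-corrected send time for each."""
    timestamps = []
    for i in range(0, len(payload), CHUNK):
        sock.sendall(payload[i:i + CHUNK])
        timestamps.append(time.time() + offset)
    return timestamps
```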
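
For step 10, we throttled the CPU from the terminal; a Python-side alternative that achieves a similar effect on a Linux node is sketched below (shell tools such as cpulimit or cgroups are terminal-side equivalents).

```python
import os
import torch

# Imitate a weaker mobile CPU: pin this process to a single core
# (Linux-only call) and cap PyTorch's intra-op thread pool.
os.sched_setaffinity(0, {0})
torch.set_num_threads(1)
```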
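
For step 11, a sketch of the confidence-threshold decision, assuming a max-softmax confidence rule; `send_to_server` is a hypothetical stand-in for the network call to the larger remote model, and the threshold value is illustrative.

```python
import torch
import torch.nn.functional as F

THRESHOLD = 0.8  # illustrative; we swept a range of values for the graphs

@torch.no_grad()
def classify(x, local_model, send_to_server):
    """Early-exit locally when confident; otherwise offload for inference."""
    probs = F.softmax(local_model(x), dim=1)
    confidence, pred = probs.max(dim=1)
    if confidence.item() >= THRESHOLD:
        return pred.item(), "local"      # confident enough: exit early
    return send_to_server(x), "server"   # defer to the edge/server model
```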
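
For step 13, a sketch of injecting a simulated queueing delay before inference or transfer, assuming exponentially distributed waits; the mean delay and the distribution are modeling assumptions.

```python
import random
import time

MEAN_QUEUE_DELAY_S = 0.05  # assumed mean waiting time

def with_queue_delay(fn, *args):
    """Sleep for a random queueing delay, then run the wrapped operation."""
    time.sleep(random.expovariate(1.0 / MEAN_QUEUE_DELAY_S))
    return fn(*args)
```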