== Overview
As machine learning models grow more advanced and complex, running them on less powerful devices becomes increasingly difficult, especially when low latency is required. Running everything in the cloud is not efficient either, since it generates too much network traffic. A viable solution to this problem is edge computing, where we use the edge (the network infrastructure between user devices and the cloud) for computation.

Over the course of the project, we:
1. Trained a small and a large neural network (DenseNet and MobileNetV2) on the CIFAR-10 dataset (see the training sketch after this list)
2. Performed PCA and SVM experiments alongside the NNs to familiarize ourselves with PyTorch
3. Loaded the MNIST image dataset onto an ORBIT node
4. Connected two nodes with a client-server architecture and extracted timing and accuracy measurements (see the socket sketch after this list)
5. Compared the performance of both neural networks on the CIFAR-10 dataset:
* Established a connection between the two nodes
* Communicated test data between the nodes to compare the accuracy and delay of our NN models
6. Worked with professors/mentors and read papers to understand the concepts of early exit, split computing, the accuracy/latency tradeoff, and distributed DNNs over the edge cloud
7. Split ResNet-18 across two devices using split computing and ran inference over the network (see the split-computing sketch below)
8. Used the Network Time Protocol (NTP) to align clocks and sent data in "packages" (chunks) to collect latency and delay data (see the timing sketch below)
9. Explored different research questions with the data collected: __________
10. Limited CPU power from the terminal to imitate mobile devices (see the CPU-limiting sketch below)
11. Implemented different confidence thresholds for deciding when to send data to the edge or server for inference (see the threshold sketch below)
* Generated graphs of threshold vs. latency, accuracy vs. latency, etc.
12. Retrained the neural network to reach 88% accuracy and collected new graphs
13. Introduced a delay in both inference and data transfer to simulate a queue (see the queueing sketch below)
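The sketches below illustrate several of the steps above. They are minimal reconstructions under stated assumptions, not the exact project code.

A minimal training sketch for item 1, assuming the stock torchvision MobileNetV2 and standard CIFAR-10 hyperparameters (a DenseNet run would look the same with a different constructor):

```python
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader

# CIFAR-10 with standard normalization statistics.
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])
train_set = torchvision.datasets.CIFAR10(root="./data", train=True,
                                         download=True, transform=transform)
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)

# MobileNetV2 with the classifier sized for CIFAR-10's 10 classes.
model = torchvision.models.mobilenet_v2(num_classes=10)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

for epoch in range(10):                       # epoch count chosen arbitrarily here
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last-batch loss {loss.item():.3f}")

torch.save(model.state_dict(), "mobilenetv2_cifar10.pt")
```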
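A sketch of the two-node client-server setup from items 4 and 5, assuming plain TCP sockets carrying pickled tensors with a 4-byte length prefix; the host, port, and framing are illustrative choices, not necessarily what ran on the ORBIT nodes:

```python
import pickle
import socket
import struct
import time

import torch


def send_msg(sock, obj):
    """Pickle an object and send it with a 4-byte length prefix."""
    payload = pickle.dumps(obj)
    sock.sendall(struct.pack("!I", len(payload)) + payload)


def recv_msg(sock):
    """Receive one length-prefixed pickled object (None when the peer closes)."""
    header = sock.recv(4)
    if not header:
        return None
    (length,) = struct.unpack("!I", header)
    buf = b""
    while len(buf) < length:
        buf += sock.recv(length - len(buf))
    return pickle.loads(buf)


def serve(model, host="0.0.0.0", port=5000):
    """Run on the second node: answer inference requests until the client disconnects."""
    model.eval()
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            while True:
                batch = recv_msg(conn)            # a tensor of test images
                if batch is None:
                    break
                with torch.no_grad():
                    preds = model(batch).argmax(dim=1)
                send_msg(conn, preds)


def query(images, host="10.0.0.2", port=5000):
    """Run on the first node: send images, time the round trip, return predictions."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((host, port))
        start = time.time()
        send_msg(cli, images)
        preds = recv_msg(cli)
        print(f"round-trip time: {time.time() - start:.3f}s")
        return preds
```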
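A sketch of the split-computing step from item 7, using torchvision's ResNet-18. The split point after `layer2` is an assumed example; in the actual experiment the intermediate tensor is what travels over the socket link shown above:

```python
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet18(num_classes=10)
model.eval()

# Children come out in order: conv1, bn1, relu, maxpool, layer1..layer4, avgpool, fc.
layers = list(model.children())
head = nn.Sequential(*layers[:6])        # runs on the mobile device (through layer2)
tail = nn.Sequential(*layers[6:-1])      # runs on the edge/server (layer3, layer4, avgpool)
fc = layers[-1]                          # final classifier, also on the server

x = torch.randn(1, 3, 224, 224)          # dummy input standing in for a test image
with torch.no_grad():
    intermediate = head(x)               # this tensor is what crosses the network
    features = tail(intermediate)
    logits = fc(torch.flatten(features, 1))
print(intermediate.shape, logits.shape)
```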
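A sketch of the timing setup from item 8, assuming the `ntplib` package is used to estimate the clock offset and a fixed chunk size for the "packages":

```python
import time

import ntplib                                    # third-party: pip install ntplib

ntp = ntplib.NTPClient()
# Seconds to add to the local clock to match NTP time.
offset = ntp.request("pool.ntp.org", version=3).offset


def now():
    """Local time corrected by the NTP offset so both nodes share a common timebase."""
    return time.time() + offset


CHUNK = 4096                                     # "package" size in bytes (assumed)


def send_in_chunks(sock, payload: bytes):
    """Send a payload in fixed-size chunks and report the NTP-corrected send time."""
    t0 = now()
    for i in range(0, len(payload), CHUNK):
        sock.sendall(payload[i:i + CHUNK])
    print(f"sent {len(payload)} bytes in {now() - t0:.4f}s")
```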
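For item 10 the CPU was limited from the terminal; a comparable effect can be approximated from Python as sketched below, where the thread count and core mask are illustrative and may differ from what we actually used:

```python
import os

import torch

torch.set_num_threads(1)          # cap PyTorch's intra-op parallelism to one thread
os.sched_setaffinity(0, {0})      # pin this process to a single CPU core (Linux only)
```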
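A sketch of the confidence-threshold rule from item 11, assuming the decision comes from the softmax confidence of the local (small) model and that `send_to_server` wraps the socket client above; the 0.8 value is only an example threshold:

```python
import torch
import torch.nn.functional as F

THRESHOLD = 0.8          # example value; several thresholds were swept in practice


def classify(x, local_model, send_to_server):
    """Classify a single-image batch locally, offloading when confidence is low."""
    with torch.no_grad():
        probs = F.softmax(local_model(x), dim=1)
        confidence, pred = probs.max(dim=1)
    if confidence.item() >= THRESHOLD:
        return pred.item(), "local"            # confident enough: answer on-device
    return send_to_server(x), "offloaded"      # otherwise pay the network round trip
```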
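A sketch of the artificial queueing delay from item 13, assuming an exponential service-time model; the experiment may just as well have used a fixed delay:

```python
import random
import time


def simulate_queue_delay(mean_s=0.05):
    """Sleep for an exponentially distributed wait to mimic queueing before service."""
    time.sleep(random.expovariate(1.0 / mean_s))


# Example usage alongside the sketches above (names from those sketches):
# simulate_queue_delay(); send_msg(sock, images)      # delay before data transfer
# simulate_queue_delay(); preds = model(images)       # delay before inference
```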