Changes between Version 102 and Version 103 of Other/Summer/2023/Inference
Timestamp: Aug 21, 2023, 9:13:35 PM

== Networking Setup for Our Experiment
We have developed an experiment environment that explores the nature of mobile-edge computing. Our solution involves a client-server architecture that enables communication between devices for predictive analysis. In this section, we provide an overview of the technologies and processes we employ in our networking setup.

=== Socket Programming: Establishing Connectivity
Our networking framework relies on socket programming to establish connections between the client and server devices. This connectivity is pivotal for the real-time exchange of data, enabling us to experiment and analyze performance in a dynamic environment. We used the **socket** library to accomplish this connectivity.
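As a minimal sketch of that connectivity (the address and port are hypothetical placeholders for the devices in our setup), the server listens on a TCP socket and the client connects to it:

{{{#!python
import socket

# Hypothetical address and port for the server device
HOST, PORT = "192.168.1.10", 5000

# Server side: accept a single client connection
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("", PORT))
server.listen(1)
conn, addr = server.accept()   # blocks until the client connects

# Client side, run on the other device:
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((HOST, PORT))
}}}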

=== Data Serialization and Deserialization: Ensuring Accuracy
To ensure precise data transmission, we employ data serialization and deserialization methods. The **struct** library plays a crucial role in packing and unpacking data, guaranteeing that information traverses between devices without corruption.
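A common framing convention with **struct** is a fixed-size length prefix in front of each message; the sketch below assumes that convention (the exact format string in our code may differ):

{{{#!python
import struct

def recv_exactly(sock, n: int) -> bytes:
    # TCP delivers a byte stream, so keep reading until n bytes arrive
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

def send_msg(sock, payload: bytes) -> None:
    # Prefix the payload with its length as a 4-byte big-endian integer
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_msg(sock) -> bytes:
    (length,) = struct.unpack("!I", recv_exactly(sock, 4))
    return recv_exactly(sock, length)
}}}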

=== Tracking Metrics with Pandas
Efficient data management forms the foundation of our endeavors. Leveraging the **pandas** library, we organize and log data in CSV format. This meticulous record-keeping is essential for tracking timing metrics and prediction results, facilitating in-depth analysis and iterative improvement.
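For illustration, the log can be accumulated as rows and written out with **pandas**; the column names below are hypothetical stand-ins for the timing and prediction fields we track:

{{{#!python
import pandas as pd

rows = []
# One row per inference; fields are illustrative
rows.append({"image_id": 0, "send_time": 0.013, "recv_time": 0.057,
             "prediction": "cat", "correct": True})

pd.DataFrame(rows).to_csv("inference_log.csv", index=False)
}}}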

=== Unraveling the Networking Workflow
1. **Image Transmission:** Images are prepared and transmitted from the client to the server, initiating the analysis process.

…

4. **Client's Analytical Role:** Armed with the prediction outcomes and timing insights, the client takes the reins. It logs data, evaluates accuracy, and extracts valuable insights from the entire process.

=== Our Pursuit's Significance
Our networking ecosystem holds profound significance for several reasons:

…

== Unveiling the Power of Training and Optimization
Not only did we learn how to set up the networking in our experiment, but we were equally dedicated to mastering the art of training, optimization, and making the most of the resources at hand. This section sheds light on our training methodologies and the tools and techniques we harnessed to hone our models.

=== Foundational Training
Our training journey commenced by delving into the fundamentals of PyTorch and other essential libraries. We took inspiration from tutorials and insights gathered from previous classes to build neural networks from the ground up. Our exploration led us to harness the capabilities of diverse libraries, including TensorFlow, Keras, Optuna, torchvision, torch, pandas, numpy, matplotlib, and time.

=== Strategic Techniques for Optimization
To tackle the challenge of overfitting and ensure robustness, we embraced a range of techniques that strengthened our models (a sketch combining several of them follows this list). Here are the key strategies we employed:
* Image Cutouts: We utilized image cutouts, an augmentation technique that randomly masks out portions of input images during training. This approach fortifies the model's resilience and acts as a buffer against overfitting.

…

* Random Rotations: Our models underwent random rotations during training, a technique designed to boost accuracy by imparting robustness.
* Random Erasing: This technique randomly erases rectangular regions of the input images during training, preventing over-reliance on any single region of the input and fostering balanced learning.
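The sketch below shows how such an augmentation pipeline could be assembled with torchvision transforms; the rotation angle, erasing parameters, and normalization statistics are illustrative, and RandomErasing stands in for cutout-style masking:

{{{#!python
import torchvision.transforms as T

train_transform = T.Compose([
    T.RandomRotation(degrees=15),               # random rotations for robustness
    T.ToTensor(),
    T.Normalize((0.4914, 0.4822, 0.4465),       # commonly used CIFAR10 statistics
                (0.2470, 0.2435, 0.2616)),
    T.RandomErasing(p=0.5, scale=(0.02, 0.2)),  # masks out a random patch
])
}}}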

=== Hyperparameter Tuning
To unlock the full potential of our models, we turned to hyperparameter tuning. Optuna, a hyperparameter optimization library, became our ally. We designed experiments with 20 training trials, fine-tuning parameters like normalization transforms, patience levels, and starting learning rates. This approach helped our models reach strong performance across various scenarios.
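A study along those lines could look like the following; the search space and the train_and_validate helper are hypothetical stand-ins for our actual training loop:

{{{#!python
import optuna

def train_and_validate(lr: float, patience: int) -> float:
    # Placeholder for the real training loop: train with early stopping
    # governed by `patience`, then return validation accuracy.
    return 0.0

def objective(trial):
    # Illustrative search space mirroring the parameters we tuned
    lr = trial.suggest_float("lr", 1e-4, 1e-1, log=True)
    patience = trial.suggest_int("patience", 2, 10)
    return train_and_validate(lr=lr, patience=patience)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)  # the 20 trials mentioned above
print(study.best_params)
}}}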

=== Navigating Model Limitations
In a race against time, we confronted the limitations posed by our chosen models: Mobilenet_v2, Densenet121, and Resnet18. These models were initially tailored for ImageNet, a rich dataset comprising high-quality images and an extensive class spectrum. Within our 10-week timeline, however, employing ImageNet for training and testing was impractical due to the prolonged time requirement. Instead, we focused on training the last layers of these models specifically for CIFAR10. This strategic adaptation allowed us to attain meaningful results in a constrained timeframe.
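In torchvision, this kind of last-layer retraining can be sketched as follows (shown for Resnet18; the optimizer and learning rate are illustrative):

{{{#!python
import torch.nn as nn
import torch.optim as optim
from torchvision import models

# Load an ImageNet-pretrained backbone and freeze its weights
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a fresh head for the 10 CIFAR10 classes
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new layer's parameters are handed to the optimizer
optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)
}}}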

=== The Significance of Training and Optimization
Our training and optimization endeavors hold exceptional significance:

…

== Weekly Updates

=== Week 1
**Summary**
* Understood the goal of the project and broke down its objectives

…

* Attempt to simulate the difference between their performances at inference

=== Week 2
**Summary**
* Performed basics of pattern recognition and Machine Learning (PPA - Patterns, Predictions, Actions) using PyTorch

…

* Attempt to simulate the difference between their performances at inference

=== Week 3
**Summary**
* Debugged issues with work we had done in previous weeks

…

* Think about other implementations - Early Exiting, model compression, data compression, … Mixture?

=== Week 4
**Summary**
* Compared performances of both neural networks on the CIFAR10 dataset

…

* Track and add Age of Information Metrics

=== Week 5
* We learned how to use the Network Time Protocol (NTP) while we waited for the more accurate Precision Time Protocol to be implemented in the technology we were using.
* Implemented NTP in our code files so that we can measure time and evaluate the trade-offs that we came up with.
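One way to apply NTP from Python is the third-party ntplib package, which reports the offset between the local clock and an NTP server so that timestamps from both devices can be corrected to a shared reference; this is a sketch of the idea, not necessarily the exact mechanism we used:

{{{#!python
import time
import ntplib  # third-party NTP client library

client = ntplib.NTPClient()
response = client.request("pool.ntp.org", version=3)

# response.offset estimates how far the local clock is from NTP time,
# so a corrected timestamp is local time plus that offset
corrected_now = time.time() + response.offset
}}}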

=== Week 6
**Summary**
* Figured out how to properly split a NN using split computing

…

Transfer to Precision Time Protocol (PTP)
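As a sketch of what splitting a network can look like (the split point and model are illustrative, and in the experiment the intermediate tensor would travel over the socket connection described above):

{{{#!python
import torch
import torch.nn as nn
from torchvision import models

full = models.mobilenet_v2(pretrained=True)

# Client-side head: the early feature blocks (split point is illustrative)
head = nn.Sequential(*full.features[:7])
# Server-side tail: remaining blocks plus the pooling and classifier steps
tail = nn.Sequential(*full.features[7:],
                     nn.AdaptiveAvgPool2d(1), nn.Flatten(), full.classifier)

x = torch.randn(1, 3, 224, 224)   # dummy input image batch
intermediate = head(x)            # computed on the client, then transmitted
logits = tail(intermediate)       # computed on the server
}}}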

=== Week 7
**Summary**
* Explored different research questions with the data collected