**Week 2 (6/03 - 6/06)**

We studied the background of HyPhyLearn and explored Domain Adversarial Neural Networks (DANN) in depth. \\

Ran the reference code for DANN and examined the source, target, and domain accuracies. \\

Analyzed graphs demonstrating the model's ability to learn domain-invariant features (a sketch of the architecture is given below).\\
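As a companion to the DANN experiments above, here is a minimal PyTorch sketch of the gradient reversal layer and the shared feature extractor with label and domain heads. The module names, layer sizes, and the `lambd` scaling are illustrative assumptions, not the exact reference code we ran.

```python
import torch
import torch.nn as nn
from torch.autograd import Function


class GradReverse(Function):
    """Identity on the forward pass; reverses and scales gradients on backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Negated gradient flows into the feature extractor; no gradient for lambd.
        return -ctx.lambd * grad_output, None


class DANN(nn.Module):
    """Feature extractor shared by a label classifier and a domain classifier."""

    def __init__(self, in_dim=2, hidden=64, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.label_head = nn.Linear(hidden, n_classes)  # predicts class labels
        self.domain_head = nn.Linear(hidden, 2)         # predicts source vs. target

    def forward(self, x, lambd=1.0):
        z = self.features(x)
        class_logits = self.label_head(z)
        # Gradient reversal pushes the shared features toward domain invariance.
        domain_logits = self.domain_head(GradReverse.apply(z, lambd))
        return class_logits, domain_logits
```

The source, target, and domain accuracies we examined correspond to evaluating `class_logits` on source and target batches and `domain_logits` on the combined batch.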
**Week 3 (6/03 - 6/06)**

We explored the TensorFlow code of the HyPhyLearn model, which classifies 2D Gaussian datasets. \\

Ported the code from TensorFlow 1.0 to PyTorch and validated the experiments (a sketch of the setup follows below). \\

Additionally, we surveyed related work and reviewed research papers to learn how to set up a physical model for generating synthetic data.\\
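Below is a minimal sketch of the kind of 2D Gaussian classification setup the ported code handles, written directly as a PyTorch module and training loop. The class means, network sizes, and optimizer settings are illustrative assumptions rather than the HyPhyLearn configuration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic 2D Gaussian data: two classes with different means (illustrative values).
n_per_class = 500
x0 = torch.randn(n_per_class, 2) + torch.tensor([-2.0, 0.0])
x1 = torch.randn(n_per_class, 2) + torch.tensor([2.0, 0.0])
x = torch.cat([x0, x1])
y = torch.cat([torch.zeros(n_per_class, dtype=torch.long),
               torch.ones(n_per_class, dtype=torch.long)])

# Small classifier, analogous to rewriting a TF 1.0 graph as a PyTorch module.
model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

acc = (model(x).argmax(dim=1) == y).float().mean()
print(f"training accuracy: {acc:.3f}")
```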