- We have successfully completed the setup of the '''SMO for OAM framework''' and the '''O-RU framework'''.
- We are now working on running the test cases. We implemented a "make run" command: a Makefile (with shell scripting) covers our three test cases, each of which has its own Python script, so all three can be executed with the single command "make run".
[[Image(https://www.orbit-lab.org/raw-attachment/wiki/Other/Summer/2025/Cloud-Native-O-RAN/TestCase_Result.png, 70%)]]
- In practice, "make run" covers only Test Case 002 and Test Case 003, since Test Case 001 is a purely real-time application. Both Test Case 002 and Test Case 003 returned a "PASS" result; a sketch of the kind of per-test-case script the Makefile could invoke is shown below.
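To make the "make run" flow concrete, here is a minimal sketch of a per-test-case Python script of the sort the Makefile could call. The host, credentials, pass criterion, and the use of the ncclient NETCONF library are assumptions for illustration only, not the actual TC-HMP-002/003 code.
{{{#!python
import sys

from ncclient import manager  # NETCONF client library (assumed, not confirmed by the project)

ORU_HOST = "127.0.0.1"        # placeholder O-RU NETCONF endpoint
ORU_PORT = 830                # default NETCONF-over-SSH port


def run_test_case() -> bool:
    """Open a NETCONF session to the O-RU and read its running configuration."""
    with manager.connect(host=ORU_HOST, port=ORU_PORT,
                         username="netconf", password="netconf",
                         hostkey_verify=False) as session:
        # Minimal (illustrative) pass criterion: the session comes up and
        # <get-config> on the running datastore returns without RPC errors.
        reply = session.get_config(source="running")
        return reply.ok


if __name__ == "__main__":
    passed = run_test_case()
    print("PASS" if passed else "FAIL")
    sys.exit(0 if passed else 1)  # a non-zero exit code lets "make run" flag the failure
}}}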
- The O-RU framework includes Grafana (for visualizing, analyzing, and interpreting test results and system metrics) and the MinIO console (an S3-compatible object store, analogous to Amazon S3 in AWS).
[[Image(https://www.orbit-lab.org/raw-attachment/wiki/Other/Summer/2025/Cloud-Native-O-RAN/Grafana.png, 70%)]]
'''Grafana Dashboards'''
- Grafana is integrated into the testing framework to provide real-time observability into the performance and results of O-RU and SMO interactions, specifically during the execution of Hybrid M-Plane test cases.
- The key functions of Grafana in this project are:
  - Visualizing test execution data collected during the running of TC-HMP-002 and TC-HMP-003.
  - Providing clear insights into NETCONF session status, configuration data flow, and interface behavior.
  - Helping developers and testers verify protocol compliance, troubleshoot issues, and validate expected results.
  - Supporting long-term analysis and historical tracking of test case results.
- The framework offers pre-configured dashboards, accessible via the Grafana web interface (http://localhost:3000) after the visualization stack is started with docker compose; a quick connectivity check is sketched after the dashboard list below.
- The two most relevant dashboards are:
  - Test Artifacts Dashboard
  - Test Results Dashboard
- '''Test Artifacts Dashboard:'''
  - Displays detailed metrics and logs related to individual test cases.
  - Allows viewing test IDs, execution timestamps, pass/fail status, and error messages.
  - Enables deep-dive analysis into specific test artifacts and debug traces.
- '''Test Results Dashboard:'''
  - Provides a summary view of all test runs executed via the framework.
  - Useful for tracking overall testing trends, result history, and performance metrics.
  - Supports filtering by test case, timestamp, or execution environment.
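As a quick sanity check (not part of the framework itself), the dashboard stack can be confirmed reachable from Python once docker compose has started it; the snippet below is a minimal sketch that polls Grafana's standard health endpoint at the address given above.
{{{#!python
import requests  # helper library assumed for this sketch

GRAFANA_URL = "http://localhost:3000"  # Grafana address from the framework setup

# Grafana exposes a lightweight health endpoint at /api/health.
resp = requests.get(f"{GRAFANA_URL}/api/health", timeout=5)
resp.raise_for_status()
print("Grafana database status:", resp.json().get("database"))  # "ok" when the backing store is reachable
}}}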
'''Backend Data Source – TimescaleDB:'''
- In this project, Grafana reads its data from TimescaleDB, a time-series database built on PostgreSQL.
- Test results and metrics are stored in TimescaleDB by the test framework during execution, and Grafana connects to it as its primary data source.
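A rough sketch of that flow in Python is shown below. The connection parameters and the test_results table/columns are assumptions for illustration only, since the framework's actual schema is not reproduced here.
{{{#!python
from datetime import datetime, timezone

import psycopg2  # standard PostgreSQL driver; TimescaleDB speaks the same protocol

# Placeholder connection details, not the framework's real credentials.
conn = psycopg2.connect(host="localhost", port=5432,
                        dbname="testresults", user="postgres", password="postgres")
with conn, conn.cursor() as cur:
    # The test framework would insert one row per executed test case.
    cur.execute(
        "INSERT INTO test_results (run_time, test_case, status) VALUES (%s, %s, %s)",
        (datetime.now(timezone.utc), "TC-HMP-002", "PASS"),
    )
    # A Grafana panel would then read the same table, roughly like this:
    cur.execute("SELECT run_time, test_case, status "
                "FROM test_results ORDER BY run_time DESC LIMIT 5")
    for row in cur.fetchall():
        print(row)
conn.close()
}}}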

'''MinIO Console'''
[[Image(https://www.orbit-lab.org/raw-attachment/wiki/Other/Summer/2025/Cloud-Native-O-RAN/MINIO_Console.png, 70%)]]
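Since MinIO exposes an S3-compatible API, the artifacts visible in the console can also be written and read programmatically; the sketch below uses the MinIO Python SDK with a placeholder endpoint, credentials, and bucket name (none of which are taken from the project configuration).
{{{#!python
from minio import Minio  # MinIO Python SDK (assumed available for this sketch)

# Placeholder endpoint and credentials, not the framework's real settings.
client = Minio("localhost:9000",
               access_key="minioadmin",
               secret_key="minioadmin",
               secure=False)

bucket = "test-artifacts"  # illustrative bucket name
if not client.bucket_exists(bucket):
    client.make_bucket(bucket)

# Upload a test artifact, e.g. a log produced by "make run".
client.fput_object(bucket, "tc-hmp-002/run.log", "run.log")

# List stored objects the same way the MinIO console displays them.
for obj in client.list_objects(bucket, recursive=True):
    print(obj.object_name, obj.size)
}}}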