== Progress ==

* We used OpenCV and ArUco markers to detect points in the intersection. This lets us find common points that are in view of multiple cameras (see the detection sketch after this list).
* Using the depth streams from the cameras, we converted these points into 3D coordinates (deprojection sketch below).
* We created a server and client using Python sockets to send pickled data containing the points and their labels between nodes (socket sketch below).
* By comparing the IDs of the markers detected by each camera, we matched common points and used the Kabsch algorithm to compute the rotation and translation that maps one set of points onto the other (sketch below).
* We had to convert the rotation matrix from the Kabsch algorithm into yaw, pitch, and roll in order to apply the rotation (conversion sketch below).
* Using ROS, we initialized each camera with point clouds enabled and exported the streams to one node.
* We then opened RViz (ROS visualization) to view the point clouds.
* We used the ROS static_transform_publisher to translate and rotate the point clouds relative to each camera's depth_optical_frame.
* The resulting alignment was inconsistent and still showed error even at its best. Suspecting that our transformation/visualization method was the issue, we set ROS aside for now.
* Using OpenCV and matplotlib, we plotted the detected 3D points from two of the cameras and transformed one camera's points according to our Kabsch results (plotting sketch below). The points line up closely, which shows the algorithm is working correctly, and this holds for every pairing of cameras. [[Image(Calibration.png)]]
* Without ROS, we needed another way to stream the cameras to one node, and the EtherSense package seemed promising: it uses Python sockets to transfer live camera streams over Ethernet. After some reformatting and minor changes the camera views are now streaming; we still need to work out how to view and manipulate these streams on the host node.
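
Below is a minimal sketch of the ArUco detection step on a single color frame. The dictionary choice (DICT_4X4_50) and the file name are assumptions for illustration, and this uses the pre-4.7 OpenCV aruco API; newer OpenCV versions rename some of these calls.

{{{#!python
import cv2

# Predefined marker dictionary (the exact dictionary we print/use is an assumption here).
aruco_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters_create()

frame = cv2.imread("frame.png")                      # hypothetical saved color frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# corners: one 4x2 array of pixel coordinates per marker; ids: the marker ID per detection
corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict, parameters=params)

if ids is not None:
    for marker_corners, marker_id in zip(corners, ids.flatten()):
        center = marker_corners[0].mean(axis=0)      # pixel center of the marker
        print(marker_id, center)
}}}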
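
A sketch of turning a detected pixel into a 3D point from the depth stream, assuming Intel RealSense cameras and the pyrealsense2 SDK; the pixel coordinates are placeholder values.

{{{#!python
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()

frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()
intrinsics = depth_frame.profile.as_video_stream_profile().get_intrinsics()

# Pixel from the ArUco detection (example values). In practice the color frame should be
# aligned to depth (rs.align) before reusing pixel coordinates across streams.
u, v = 320, 240
depth_m = depth_frame.get_distance(u, v)             # depth at that pixel, in meters

# Deproject the pixel + depth into a 3D point [x, y, z] in the camera's own frame.
point_3d = rs.rs2_deproject_pixel_to_point(intrinsics, [u, v], depth_m)
print(point_3d)

pipeline.stop()
}}}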
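
A minimal sketch of the pickle-over-sockets exchange between a camera node and the host node; the address, port, and the length-prefix framing are assumptions.

{{{#!python
import pickle
import socket
import struct

HOST, PORT = "192.168.1.10", 5005        # hypothetical host node address

def send_points(points_by_id):
    # points_by_id is e.g. {marker_id: (x, y, z)}
    payload = pickle.dumps(points_by_id)
    with socket.create_connection((HOST, PORT)) as sock:
        # Prefix with a 4-byte length so the receiver knows how much to read.
        sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exact(conn, n):
    data = b""
    while len(data) < n:
        chunk = conn.recv(n - len(data))
        if not chunk:
            raise ConnectionError("socket closed early")
        data += chunk
    return data

def recv_points(conn):
    size = struct.unpack(">I", recv_exact(conn, 4))[0]
    return pickle.loads(recv_exact(conn, size))
}}}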
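
A sketch of the Kabsch step on two N x 3 arrays of matched points (same marker IDs in the same order); function and variable names are illustrative.

{{{#!python
import numpy as np

def kabsch(points_a, points_b):
    """Return R, t mapping camera B's points onto camera A's."""
    centroid_a = points_a.mean(axis=0)
    centroid_b = points_b.mean(axis=0)

    # Center both point sets, then build the 3x3 cross-covariance matrix.
    H = (points_b - centroid_b).T @ (points_a - centroid_a)
    U, _, Vt = np.linalg.svd(H)

    # Correct for a possible reflection so R is a proper rotation (det = +1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

    t = centroid_a - R @ centroid_b
    return R, t

# Usage: (R @ points_b.T).T + t should land on points_a.
}}}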
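
A sketch of converting the resulting rotation matrix into yaw, pitch, and roll, assuming a Z-Y-X (yaw-pitch-roll) convention; other conventions reorder these formulas.

{{{#!python
import numpy as np

def rotation_to_ypr(R):
    # For R = Rz(yaw) @ Ry(pitch) @ Rx(roll):
    yaw = np.arctan2(R[1, 0], R[0, 0])
    pitch = np.arcsin(-R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    return yaw, pitch, roll
}}}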
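
A sketch of the matplotlib check: plot one camera's points alongside the other camera's points after applying the Kabsch rotation and translation, so any misalignment is visible at a glance.

{{{#!python
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the 3d projection on older matplotlib

def plot_alignment(points_a, points_b, R, t):
    # Apply the Kabsch result to camera B's points.
    points_b_aligned = (R @ points_b.T).T + t

    fig = plt.figure()
    ax = fig.add_subplot(111, projection="3d")
    ax.scatter(*points_a.T, label="camera A")
    ax.scatter(*points_b_aligned.T, label="camera B (transformed)")
    ax.legend()
    plt.show()
}}}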