Changes between Version 13 and Version 14 of Other/Summer/2023/Awareness
Timestamp: Jul 25, 2023, 4:17:35 PM
 * We also ran the segmentation model, which outlines the precise figures that are detected. These points can be used in the future to separate the objects in the point clouds.
 [[Image(yolo segment.gif)]]
 * We were requested to extend the model so that it can detect the DIY cars that other groups were building, so we took 19 images and labeled them with the shapes of the cars using Roboflow.
 [[Image(manual label.png)]]
 * We were able to train and deploy this model on the Ultralytics Hub mobile app for real-time detection. It was very accurate; however, it had many false positives when no cars were in the area.
 [[Image(high accuracy car.png)]]
 * This model was extremely slow on the phone, and especially so when using the RealSense camera on the node, but it was still impressive considering only 19 images were used.
 [[Image(detected car.png)]]
 * We plan to combine our dataset with the COCO17 dataset, which YOLOv8 uses for its training, so that we have a model that can detect everything we want. Then we want to start combining the detections from multiple cameras. This may include drawing multiple rectangles from the boxes created by each camera, resulting in an image like the one below.
 [[Image(3d car.png)]]
 * Ideally, we will update our model to add the segmentation feature and combine the "masks" of the objects to separate the point cloud of the detected object. This would be used later to send object information to smart cars and to track the movements of these objects within the intersection.
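One practical detail behind merging a custom dataset with COCO is remapping class ids: YOLO-format label files store the class id as the first field of each line, so a custom class that is id 0 locally must be shifted past COCO's 80 classes in the merged dataset. The sketch below is a minimal, hedged illustration of that remapping (the label line and `diy_car` class are hypothetical examples, not taken from the actual dataset):

```python
# Hedged sketch: appending a custom class (e.g. a hypothetical "diy_car")
# after the 80 COCO classes means the custom label files, which number
# their classes from 0, must have their ids shifted by 80.
def remap_yolo_labels(lines, offset):
    """Shift the class id (first field) of each YOLO-format label line.

    Each line is "class x_center y_center width height" with normalized
    coordinates; only the class id changes.
    """
    out = []
    for line in lines:
        cls, *coords = line.split()
        out.append(" ".join([str(int(cls) + offset), *coords]))
    return out

custom = ["0 0.5 0.5 0.2 0.1"]        # one DIY-car box, class 0 locally
print(remap_yolo_labels(custom, 80))  # → ['80 0.5 0.5 0.2 0.1']
```

The merged dataset's names list would then list the 80 COCO classes first and the custom class last, so the remapped ids line up.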
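The mask-to-point-cloud step in the last bullet can be sketched with plain array indexing: if the camera's point cloud is organized pixel-for-pixel with the RGB frame (as with a depth-to-color aligned RealSense stream), a boolean segmentation mask selects exactly the 3D points that belong to one object. This is a minimal sketch under those assumptions; the input arrays here are synthetic stand-ins, not the actual camera output:

```python
import numpy as np

def extract_object_points(points, mask):
    """Select the 3D points belonging to one detected object.

    points: (H, W, 3) organized point cloud aligned with the RGB frame
            (assumed, e.g., from depth-to-color alignment on the node).
    mask:   (H, W) boolean segmentation mask for the object (assumed to
            be a model mask resized to the frame resolution).
    Returns an (N, 3) array of the object's points, dropping pixels
    that have no depth reading (all-zero points).
    """
    obj = points[mask]                # keep only the masked pixels
    valid = np.any(obj != 0, axis=1)  # discard pixels with no depth
    return obj[valid]

# Tiny synthetic demo: a 4x4 "point cloud" with a 2x2 object mask.
pts = np.zeros((4, 4, 3))
pts[1:3, 1:3] = [1.0, 2.0, 3.0]       # the "object" region has depth
m = np.zeros((4, 4), dtype=bool)
m[1:3, 1:3] = True
print(extract_object_points(pts, m).shape)  # → (4, 3)
```

Running this per detected object yields one small point cloud per object, which is the piece needed before tracking their movement through the intersection.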