
Low Latency Camera Feed Development

Project members: Brayden Casaren, Sebastian Clarke, Ayush Iyer, Rohit Kartickeyan

Advisors: Ivan Seskar and Jennifer Shane


Project Goal: To identify methods of minimizing latency in a unicast camera-to-computer connection over a network.

Project Design

https://cdn.discordapp.com/attachments/840223980811714590/1138500481149313065/camera_diagram_resized.png

Our camera and LED light are sealed in a box with minimal light.
The camera is continuously recording to capture the LED light turning on and off.

https://cdn.discordapp.com/attachments/840223980811714590/1138531408663552152/TM1000-e1490300173183_reszied.png

The Time Machine is equipped with a GPS and uses PPS (Pulse Per Second) and PTP (Precision Time Protocol).
PPS sends a precise voltage pulse that turns the LED light on at the start of every second.

As the camera streams footage to our computer, we can save the incoming video packets using a tool called TCPDUMP. These packets contain the data for the captured camera video. When the LED light turns on, the data inside the transmitted video packets changes, indicating that light is now being captured.
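
For illustration, a capture of this kind could be started with a command along these lines (the interface name and port are placeholders, not our exact values):

    # Save the camera's incoming UDP video packets to a pcap file
    # ("eth0" and port 5004 are placeholder assumptions)
    sudo tcpdump -i eth0 -w camera_feed.pcap udp port 5004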

The GPS in the device provides a standard time source. PTP uses this time to timestamp packets with the time the computer receives them, and to sync all the local clocks on our computer network to the GPS standard time.
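
For reference, PTP synchronization on a Linux host is commonly handled by linuxptp; a minimal client invocation might look like the following (a sketch with a placeholder interface name, not our exact setup):

    # Sync this machine's clock to the PTP grandmaster (the Time Machine)
    # -i: network interface (placeholder), -m: log to stdout, -s: slave-only mode
    ptp4l -i eth0 -m -s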

https://cdn.discordapp.com/attachments/840223980811714590/1138860049666678825/test_area.png https://cdn.discordapp.com/attachments/840223980811714590/1138860900296699954/camera_light.png

Tools Used

  • Wireshark
  • TCPDUMP
  • xxd (HEXDUMP)
  • VLC Media Player
  • FFMPEG
  • Python3
    • OpenCV
    • Matplotlib

Weeks 1-2

Week 1 presentation: https://docs.google.com/presentation/d/1WgLttl-gL1IPvtBatBWwfke9rf4txnYGf04_Hy7mANE/edit#slide=id.p

Week 2 presentation: https://docs.google.com/presentation/d/1rk4QhjhJOKQ2Q0PooV-JtUru1SGpYBeEUab7y3EtSyQ/edit#slide=id.p

Weeks 3-4

Week 3 presentation: https://docs.google.com/presentation/d/1PPJB0SCb0Y8G04pPQRvMS-2uoILxmxJBSVdZgO7B73c/edit#slide=id.p

Week 5

Summary

  • Created a Python script that generates artificial videos with adjustable noise

Week Run-through

The bulk of this week was spent developing a Python script that creates artificial videos of black and white frames alternating every half second. The program includes adjustable noise to mimic the camera's transition from black to colored frames with noise. Using OpenCV, we were able to create this successfully.
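
A minimal sketch of this kind of generator is below; the resolution, frame rate, noise level, and output filename are illustrative placeholders rather than the exact values we used.

    # Generate a video of black/white frames alternating every half second,
    # with adjustable Gaussian noise (all parameters are placeholders).
    import cv2
    import numpy as np

    WIDTH, HEIGHT, FPS = 640, 480, 30   # placeholder resolution and frame rate
    DURATION_S = 5
    NOISE_STDDEV = 10                   # adjust to control the amount of noise

    fourcc = cv2.VideoWriter_fourcc(*"MJPG")
    writer = cv2.VideoWriter("artificial.avi", fourcc, FPS, (WIDTH, HEIGHT))

    for i in range(FPS * DURATION_S):
        # Alternate between black and white every half second
        white = (i // (FPS // 2)) % 2 == 1
        frame = np.full((HEIGHT, WIDTH, 3), 255 if white else 0, dtype=np.uint8)

        # Add Gaussian noise, then clip back to the valid 0-255 pixel range
        noise = np.random.normal(0, NOISE_STDDEV, frame.shape)
        frame = np.clip(frame.astype(np.int16) + noise, 0, 255).astype(np.uint8)

        writer.write(frame)

    writer.release()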

Week 5 presentation: https://docs.google.com/presentation/d/1eFaGVwyJf6AOELmQVRvLJxMywu_rgNEc2hIgDxk3BDA/edit#slide=id.p

Week 6

Summary

  • Created artificial videos
  • Planned to create a histogram using Matplotlib

Week Run-through

This week was significantly shortened, to the point where we couldn't make much progress. However, we were able to prepare for the following weeks by creating many artificial videos with varying amounts of noise, and by planning a histogram, built with the Python library Matplotlib, that plots the number of occurrences of each hex byte in order to visualize how noise affects the packets.

Week 6 presentation: https://docs.google.com/presentation/d/1IwPrkkpvbTHO3LuBZ2Umuxp5rz9jwKTqmml3EoJ7wAA/edit

Weeks 7-8

Summary

  • Visualized data through histograms (Matplotlib)
  • Uploaded individual frames of the artificial video to the node
  • Used little to no noise in the test frames
  • Changed the LED setup to take up more of the camera's view
  • Used FFMPEG to convert the video into one MJPEG file or many JPEG files

Weeks Run-through

Following up on our plan from week 6, we worked on plotting histograms of the hex byte data. This had a rough start: when we uploaded the artificial videos to the node, the node corrupted the video, making it impossible to use. As a result, we uploaded individual frames from the video to the node instead. After this, several histogram scripts were developed, each functioning in a different way. Ultimately, we decided to create three separate graphs, one per color channel (RGB), showing how often each value occurred. After testing these on the JPEG images, we used them on the camera footage, changing the LED setup to take up more of the camera's view and using FFMPEG to convert the video into one MJPEG file or many JPEG files, as sketched below.
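
The FFMPEG step can be sketched with commands like the following (the filenames are placeholders):

    # Re-wrap the capture as a single MJPEG stream (filenames are placeholder assumptions)
    ffmpeg -i capture.avi -c:v mjpeg capture.mjpeg

    # Or split the capture into individual JPEG frames
    ffmpeg -i capture.avi frame_%04d.jpg

The resulting histograms were the following: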

https://cdn.discordapp.com/attachments/840223980811714590/1138851957713412226/red_values_resized.png https://cdn.discordapp.com/attachments/840223980811714590/1138852476502691840/green_values_resized.png https://cdn.discordapp.com/attachments/840223980811714590/1138852959699083394/blue_values_resized.png

Noise Histogram

https://cdn.discordapp.com/attachments/840223980811714590/1138853467121782784/noise_histogram_resized.png

These histograms show the number of occurrences of each pixel value, per color channel, in the 5-second camera footage with no noise.

  • Red/Green
    • Occurrences are concentrated at pixel value 0 because most of the camera footage is dark, though there are some high red/green pixel values due to some white light being present when the LED flashes on.
  • Blue
    • The occurrences of blue pixel values are significantly higher because the LED emits blue light.

The RGB histograms were made without noise, in contrast to the histogram at the bottom, where a significant amount of noise was present. These results show that the amount of noise in our camera can heavily affect our data set.
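
For reference, here is a minimal sketch of this kind of per-channel histogram script, assuming the frames have already been extracted as JPEG files (the file pattern is a placeholder):

    # Plot per-channel histograms of pixel values across extracted JPEG frames
    # (the frame_*.jpg pattern is a placeholder assumption).
    import glob

    import cv2
    import matplotlib.pyplot as plt
    import numpy as np

    channels = {"blue": 0, "green": 1, "red": 2}  # OpenCV loads images as BGR
    counts = {name: np.zeros(256, dtype=np.int64) for name in channels}

    for path in glob.glob("frame_*.jpg"):
        img = cv2.imread(path)
        for name, idx in channels.items():
            # Count how often each 0-255 value occurs in this channel
            counts[name] += np.bincount(img[:, :, idx].ravel(), minlength=256)

    for name in channels:
        plt.figure()
        plt.bar(range(256), counts[name], color=name)
        plt.xlabel("Pixel value")
        plt.ylabel("Occurrences")
        plt.title(f"{name.capitalize()} values")
    plt.show()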

Week 7 presentation: https://docs.google.com/presentation/d/1oPe0z3FzBUPzzFf-zcCJgz43jQIHqMHRg1BsnYJdaCU/edit

Week 8 presentation: https://docs.google.com/presentation/d/1qZLdapSWYrYTItAQqAmdhgF34x_hxIMOjZ_adv8LDIg/edit

Week 9

Summary

  • Learned about JPEG hex data indicators
  • Wrote Python scripts to isolate frames
  • Calculated latency

Week Run-through

During this week, we decided to take a step back to understand what the values in the JPEG hex data meant. After analyzing the hex files from the video, we quickly discovered certain indicators: FFD8 and FFD9. FFD8 marks the start of a frame and FFD9 marks its end, with everything in between containing the frame contents. Knowing this, we were able to isolate specific frames by searching for these indicators with a Python script.

https://cdn.discordapp.com/attachments/750525976105975828/1138556198992494683/Hexdata.png
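
A minimal sketch of this kind of frame isolation, assuming the stream has been saved as a raw MJPEG file (the filenames are placeholders):

    # Split an MJPEG byte stream into individual JPEG frames using the
    # FFD8 (start of image) and FFD9 (end of image) markers.
    data = open("capture.mjpeg", "rb").read()  # placeholder filename

    frames = []
    start = data.find(b"\xff\xd8")
    while start != -1:
        end = data.find(b"\xff\xd9", start)
        if end == -1:
            break  # incomplete final frame
        frames.append(data[start:end + 2])  # keep the FFD9 marker
        start = data.find(b"\xff\xd8", end + 2)

    # Write each isolated frame out as its own JPEG file
    for i, frame in enumerate(frames):
        with open(f"frame_{i:04d}.jpg", "wb") as f:
            f.write(frame)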

After this step, we were finally able to calculate latency. By comparing the frame data to the packet data, we could find exactly which packet corresponds to the frame. Using that packet's timestamp, we could then compute the latency.

However, this didn't come without challenges. For one, both the frame data and the packet data contain many values that all need to match. Moreover, much of the packet data can be similar to the frame data with only a few differences, making it extremely difficult to determine whether they truly match. As a result of these difficulties, we developed a Python script that determines which packet matches a given frame. With these challenges overcome, we found the latency of the camera to be about 45 milliseconds.
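
The matching idea can be sketched with scapy as follows; the filenames, the 64-byte signature, and the simple substring match are simplifications of our actual script, which had to handle near-matches:

    # Find the packet whose payload contains the start of a given frame, then
    # estimate latency from its timestamp. Because PPS turns the LED on exactly
    # at the top of each second, the fractional part of the matching packet's
    # PTP-synced timestamp approximates the latency.
    from scapy.all import rdpcap, UDP

    frame = open("frame_0000.jpg", "rb").read()   # placeholder frame file
    signature = frame[:64]                        # leading bytes identify the frame

    for pkt in rdpcap("camera_feed.pcap"):        # placeholder capture file
        if UDP in pkt:
            payload = bytes(pkt[UDP].payload)
            if signature in payload:
                latency_ms = (float(pkt.time) % 1.0) * 1000
                print(f"matched packet at t={float(pkt.time):.6f}s, "
                      f"latency ~ {latency_ms:.1f} ms")
                break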

Week 9 presentation: https://docs.google.com/presentation/d/1p3A7ZTJOdkUAqZujGuvOyyJHKzd3qgGuxxUHSaZVm9k/edit#slide=id.p

Week 10

Summary

  • Finalized poster
  • Finalized final presentation
  • Reduced latency

Week Run-through

Our final week of the project mainly revolved around finalizing the presentation and poster. However, there were other things to be done, chiefly finding a way to reduce latency. We could reduce latency by changing camera settings such as the codec, data compression rate, or contrast.

Week 10 presentation: https://docs.google.com/presentation/d/1si5gw012hevYePNOeTQiqYPRlw_s4Ao4pM_46wUkApk/edit#slide=id.p
