= Privacy Leakage Study and Protection for Virtual Reality Devices

**Advisor:** [https://eceweb1.rutgers.edu/~daisylab/people.html# Dr. Yingying (Jennifer) Chen]

**Mentors:** [https://eceweb1.rutgers.edu/~daisylab/people.html# Changming Li]^GR^, [https://eceweb1.rutgers.edu/~daisylab/people.html# Honglu Li]^GR^, and [https://eceweb1.rutgers.edu/~daisylab/people.html# Tianfang Zhang]^GR^

**Team:** [https://www.linkedin.com/in/dirky9000 Dirk Catpo Risco]^GR^, [https://www.linkedin.com/in/genericnameforjinu/ Brandon (Jinu) Son]^UG^, Brody^HS^, Emily Yao^HS^

[https://google.com Final Poster With Results]


== Project Overview
Augmented reality/virtual reality (AR/VR) devices are used for purposes ranging from communication and tourism to healthcare. Accessing their built-in motion sensors does not require user permission, since most VR applications need this information in order to function. However, this introduces a privacy vulnerability: zero-permission motion sensors can be used to infer live speech, which is a problem when that speech includes sensitive information.

== Project Goal
The purpose of this project is to extract motion data from an AR/VR device's inertial measurement unit (IMU) and feed that data to a large language model (LLM) to predict what the user is doing.
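The pipeline above can be sketched in a few lines. This is a minimal illustration in plain Python, not the project's actual code: the `imu_to_prompt` helper, the 72 Hz sample rate, and the synthetic readings are all assumptions for demonstration, and the resulting text would then be sent to whichever LLM the project settles on.

```python
import random
import statistics

def imu_to_prompt(samples, rate_hz=72):
    """Summarize raw IMU readings into a text prompt for an LLM.

    samples: list of 6-tuples -- accelerometer (x, y, z) then gyroscope (x, y, z).
    """
    cols = list(zip(*samples))
    names = ["accel_x", "accel_y", "accel_z", "gyro_x", "gyro_y", "gyro_z"]
    lines = [
        f"{name}: mean={statistics.mean(col):.3f}, std={statistics.pstdev(col):.3f}"
        for name, col in zip(names, cols)
    ]
    return (
        f"The following summarizes {len(samples)} IMU samples "
        f"recorded at {rate_hz} Hz on a VR headset:\n"
        + "\n".join(lines)
        + "\nWhat activity is the user most likely performing?"
    )

# Example: 720 synthetic samples (~10 s at 72 Hz)
random.seed(0)
samples = [tuple(random.gauss(0, 1) for _ in range(6)) for _ in range(720)]
prompt = imu_to_prompt(samples)
print(prompt.splitlines()[0])
```

Summarizing each axis into simple statistics keeps the prompt short; raw sample dumps would quickly exceed an LLM's context window.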

== Weekly Updates
=== Week 1
**Progress**
* Read research paper ![1] regarding an eavesdropping attack called Face-Mic

**Next Week Goals**
* Meet with our mentors to learn the duties and expectations of the project
=== Week 2
**Progress**
* Read research paper ![2] regarding LLMs comprehending the physical world
* Connected the paper's findings to the privacy concerns of AR/VR devices

**Next Week Goals**
* Get familiar with the AR/VR device (Meta Quest):
  * How to use the device
  * Configure settings on the host computer
* Extract motion data from the IMU:
  * Connect to the motion sensor application program interface (API) to access the data
  * Choose a data processing method
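A common first step when processing a continuous IMU stream is sliding-window segmentation, so that each fixed-length window can be analyzed or labeled independently. The sketch below is a hypothetical illustration in plain Python, under the assumption that samples arrive as timestamped tuples; the `sliding_windows` helper is not part of any Meta Quest API.

```python
def sliding_windows(samples, window_size, hop):
    """Split a stream of IMU samples into fixed-length, overlapping windows.

    samples: sequence of (timestamp, ax, ay, az, gx, gy, gz) tuples.
    window_size: number of samples per window; hop: stride between window starts.
    """
    windows = []
    for start in range(0, len(samples) - window_size + 1, hop):
        windows.append(samples[start:start + window_size])
    return windows

# Example: 10 dummy samples, windows of 4 with 50% overlap
stream = [(t, 0.0, 0.0, 9.8, 0.0, 0.0, 0.0) for t in range(10)]
wins = sliding_windows(stream, window_size=4, hop=2)
print(len(wins))  # → 4
```

Overlapping windows (hop smaller than window size) trade extra computation for smoother coverage of events that straddle window boundaries.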

=== Week 3
**Progress**
* Placeholder

**Next Week Goals**
* Placeholder

=== Week 4
**Progress**
* Placeholder

**Next Week Goals**
* Placeholder

=== Week 5
**Progress**
* Placeholder

**Next Week Goals**
* Placeholder

=== Week 6
**Progress**
* Placeholder

**Next Week Goals**
* Placeholder

=== Week 7
**Progress**
* Placeholder

**Next Week Goals**
* Placeholder
== Links to Presentations

{{{#!html
<p style="display: inline-block;">
<a href="https://google.com">Week 1</a>&nbsp;&nbsp;
<a href="https://google.com">Week 2</a>&nbsp;&nbsp;
<a href="https://google.com">Week 3</a>&nbsp;&nbsp;
<a href="https://google.com">Week 4</a>&nbsp;&nbsp;
<a href="https://google.com">Week 5</a>&nbsp;&nbsp;
<a href="https://google.com">Week 6</a>&nbsp;&nbsp;
<a href="https://google.com">Week 7</a>&nbsp;&nbsp;
<a href="https://google.com">Final Presentation</a>&nbsp;&nbsp;
</p>
}}}
== References

![1] Shi, C., Xu, X., Zhang, T., Walker, P., Wu, Y., Liu, J., Saxena, N., Chen, Y. and Yu, J., 2021, October. Face-Mic: Inferring live speech and speaker identity via subtle facial dynamics captured by AR/VR motion sensors. In Proceedings of the 27th Annual International Conference on Mobile Computing and Networking (pp. 478-490).

![2] Xu, H., Han, L., Yang, Q., Li, M. and Srivastava, M., 2024, February. Penetrative AI: Making LLMs comprehend the physical world. In Proceedings of the 25th International Workshop on Mobile Computing Systems and Applications (pp. 1-7).