wiki:Other/Summer/2024/lLM


HS Team Members: Jose Rubio, Sathvik Samant, Jonathan Duran, Vedant Talati


Project Goal: To create a functional smart space for Internet of Things (IoT) and Machine Learning (ML) research. "Maestros," which are Raspberry Pi 3B+ devices with various sensors, will collect a variety of multi-modal data (e.g., temperature, humidity, RGB, motion, video, and audio). This data will then be stored on the WINLAB server, which can be connected to remotely over SSH for experimentation.

After collecting this data, the Maestros will support LLM creation and model development, with the goal of building a multi-modal, machine learning-powered smart space for cities and a variety of other environments.


Week 1

Slideshow Link: https://docs.google.com/presentation/d/1tB2aXgDJbP6hyVbTjhTdmyO4sL-t9mIQVLcBbdQiM44/edit?usp=sharing

What we did this week:

  1. GitHub: We familiarized ourselves with the Cy-Phy Lab GitHub, understanding the system architecture and existing code, and created our credentials. We then practiced pushing and pulling commits using the following commands:
git clone https://github.com/URL-TO-REPO-HERE  #clone the repository onto your local machine
git status  #show which files are modified or staged
git add <FILE>  #move changes from the working directory to the staging area
git commit  #record the staged changes in your local repository
git commit -m "ADD_COMMIT_MESSAGE_HERE"  #include a message so the GitHub history stays readable
git push origin main  #push your local commits to the main branch of the remote repository
  2. SSH Connection to Maestros: Since this project requires us to first connect to the testbed and then to the Raspberry Pis (Maestros), we had to familiarize ourselves with SSH (Secure Shell Protocol) to securely send instructions to the Maestros. We first created ORBIT accounts to access the testbed, and then followed these instructions to connect:
    #have two command prompts open, and complete the following in both
    ssh testbed@10.61.1.235 -L 5000:localhost:5000 #after, type in the password
    #in one of the windows, run the following:
    conda activate smart-box #activating a specific environment with dependencies
    cd maestro/Maestro-Testbed #access directory
    ./testbed.sh 
    #in the other, run the following
    cd /home/testbed/maestro/Maestro-Testbed/app
    python start_experiment.py --ids 1 --video_model None --sensor_model None --chunk_size 10 --frequency 10 --directory storage  #--ids can be changed to match the Maestro being run
    #to end, run the following in the same tab:
    python end_experiment.py
    

After familiarizing ourselves with this workflow, we were better prepared to run experiments ourselves and to respond to connection and backend issues reported on the GitHub.

  3. Calibrating Maestros: To ensure that experiments can be run on most or all Maestros, we needed to install the required dependencies; we found that imaging the SD cards was the most practical way to do this.
  4. Creating a Frontend UI for the SSH Connection: To simplify the testbed/Maestro connection workflow, we are working on a user-friendly frontend that can run experiments without keeping two terminal windows open (a minimal sketch of the idea is below).
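
As a starting point, here is a minimal sketch of that idea, assuming a small Flask app running on the testbed behind the already-forwarded port 5000; the route name, form field, and the way --ids is passed are our assumptions, not the final design:

    import subprocess
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/start", methods=["POST"])
    def start_experiment_route():
        ids = request.form.get("ids", "1")  # e.g., "1" or "1,6,19" from a text box
        # Launch the existing start_experiment.py script in the background so the
        # user does not need a second terminal window.
        subprocess.Popen(
            ["python", "start_experiment.py",
             "--ids", ids,
             "--video_model", "None",
             "--sensor_model", "None",
             "--chunk_size", "10",
             "--frequency", "10",
             "--directory", "storage"],
            cwd="/home/testbed/maestro/Maestro-Testbed/app",
        )
        return f"Started experiment on Maestro(s) {ids}"

    if __name__ == "__main__":
        app.run(port=5000)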

Week 2

Slideshow Link: https://docs.google.com/presentation/d/1KVkkivbBrbGRZX3KOFKji4vJhOQXKhP2bg0rk4jRsdE/edit#slide=id.p

What we did this week:

  1. Maestro Tracking (GitHub Issue): We worked on creating an updatable JSON file for the experiment that shows the status of the experiment as well as which specific Maestros are running. The file updates in real time as Maestros are turned on and off. This was accomplished by adding a check_json function to app.py.
    import json
    import os

    def check_json():
        """Create experiment.json with default contents if it does not already exist."""
        path = "experiment.json"
        if not os.path.exists(path):
            experiment_dict = {
                "status": "off",    # no experiment running yet
                "maestros": []      # IDs of Maestros currently collecting data
            }
            with open(path, "w") as outfile:
                json.dump(experiment_dict, outfile, indent=4)
    
  2. Frontend Development:

We developed a rough outline of what our dashboard should look like and which features it should include. Dashboard Outline image:https://online.publuu.com/594674/1333216

  3. Maestro Setup: We got six Maestros set up and running within our group's local workspace. These Maestros are all ready to run experiments and start collecting data. We plan to roll out more Maestro devices throughout the rest of WINLAB in the coming two weeks, with the end goal of having ~25 Maestro devices throughout the facility collecting data.

Week 3

Slideshow Link: https://docs.google.com/presentation/d/14wa3S-AsHlxlYRIgaAETMUtxAuQyDLT258NQ9iMTzWw/edit#slide=id.gbd6c00e730_0_89

What we did this week:

  1. Updatable JSON File (GitHub Issue): We worked on fixing bugs in the updatable JSON file from last week. Previously, Maestros could be listed as running without any confirmation that they actually were, which led to issues with data collection. As a result, we added a check that verifies a Maestro has actually started before it is recorded:
        for pi in selected_ids:
            # Only add Maestros that have confirmed their scripts are running
            if pi not in res["maestros"]:  # prevents duplicate IDs in the case of a disconnect
                res["maestros"].append(pi)
        with open("experiment.json", "w") as f:
            json.dump(res, f, indent=2)

With only verified Maestros recorded, experiment.json looks like this while an experiment is running:

        {
            "status": "running",
            "maestros": [1, 6, 19]
        }
  2. Frontend Development: We developed a script to turn off all Pis at once or shut off specific Pis using comma-separated IDs. This was done using Paramiko to SSH into each Pi and run the "sudo shutdown -h now" command (a brief usage sketch follows the code below). We also created a dashboard GUI for this process so that a user can do it with ease, and we plan to integrate it into the barebones HTML file soon.
    import socket
    import paramiko

    def shutdown_pi(ip, username, password):
        """Connect to a single Pi over SSH and shut it down (function wrapper added for clarity)."""
        client = paramiko.client.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        try:
            print(f"Connecting to {ip}...")
            client.connect(ip, username=username, password=password, timeout=1)
            print(f"Connected to {ip}. Shutting down...")
            client.exec_command("sudo shutdown -h now")
            print(f"{ip} is shutting down.")
        except paramiko.AuthenticationException:
            print(f"Authentication failed for {ip}. Check username and password.")
        except paramiko.SSHException as ssh_err:
            print(f"SSH error occurred for {ip}: {str(ssh_err)}")
        except socket.timeout:
            print(f"Connection to {ip} timed out. Check if the Pi is plugged in.")
        except Exception as e:
            print(f"Failed to connect to {ip}: {str(e)}")
        finally:
            client.close()
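
For context, here is a hedged usage sketch of how the comma-separated input from the dashboard could drive this shutdown routine; the ID-to-IP mapping, credentials, and input string are placeholders, not the lab's real values:

    # Hypothetical driver for the shutdown routine above: parse comma-separated
    # Maestro IDs from the GUI and shut down each corresponding Pi.
    MAESTRO_IPS = {1: "10.61.1.101", 6: "10.61.1.106", 19: "10.61.1.119"}  # placeholder mapping

    ids_input = "1,6,19"  # e.g., the text entered in the dashboard's shutdown box
    for pi_id in (int(s.strip()) for s in ids_input.split(",")):
        shutdown_pi(MAESTRO_IPS[pi_id], username="pi", password="raspberry")  # placeholder credentials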

Shutdown Maestros image:https://online.publuu.com/594674/1333239

  3. Termination Control: We also worked on adding more specific termination control for the Maestros. Previously, running the end_experiment script would attempt to terminate all Maestros; even if only one was running, it would try to connect to every Maestro, of which there are more than 30 in total, which was inefficient. We therefore want to add a feature that shuts off only the specified Maestros, giving the user more control and a more efficient workflow when interacting with the Maestros. It is still a work in progress and should be finalized in the upcoming week (see the sketch after this list).
  4. GitHub Documentation: We finished installing the dependencies for a Maestro and documented the process. Even though imaging is considered the best way to provision all Maestros with the required dependencies, the other method, installing dependencies on a fresh Raspberry Pi SD card, is still important for understanding how the working Maestros came to be and serves as a reference for creating your own working Maestro for experimentation.
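
Since selective termination is still in progress, the following is only a rough sketch of the idea under our own assumptions (the stop_maestro helper, file layout, and command-line interface are hypothetical): end a run only on the Maestros the user lists, and update experiment.json accordingly.

    # Hypothetical sketch of selective termination; not the project's final implementation.
    import argparse
    import json

    def stop_maestro(pi_id):
        # Placeholder: the real script would SSH into the Maestro (e.g., with Paramiko,
        # as in the shutdown code above) and stop its data-collection script.
        print(f"Stopping Maestro {pi_id}...")

    def end_selected(requested_ids):
        with open("experiment.json") as f:
            res = json.load(f)
        # Only contact Maestros that are actually recorded as running
        for pi in [p for p in requested_ids if p in res["maestros"]]:
            stop_maestro(pi)
            res["maestros"].remove(pi)
        if not res["maestros"]:
            res["status"] = "off"
        with open("experiment.json", "w") as f:
            json.dump(res, f, indent=2)

    if __name__ == "__main__":
        parser = argparse.ArgumentParser()
        parser.add_argument("--ids", type=int, nargs="+", required=True, help="Maestro IDs to terminate")
        end_selected(parser.parse_args().ids)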

Week 4

Slideshow Link: https://docs.google.com/presentation/d/1Opw-jvDWLsCzsMahu0Ax9Z1Hh6cPiQextrTsCcV5_hI/edit#slide=id.g2ee2c68907a_0_0

What we did this week:

  1. Research Paper: We read and annotated a Google DeepMind paper on Weight Averaged Reward Models (WARM), a novel approach to training reward models that mitigates reward hacking. The paper discusses the advantages of WARM over more traditional methods such as prediction ensembling, which averages the outputs of several individual models; WARM instead averages the weights of multiple fine-tuned reward models to produce a single model and a single output (a toy illustration appears after this list). We aim to present this paper to Dr. Ortiz and the team at our weekly meeting next Tuesday.
  2. Sensor Testing: We also spent a considerable portion of our time testing the Maestro sensor suite and verifying that all of the sensors can transmit meaningful data to the Testbed server. We tested the following sensors: accelerometer, humidity, temperature, air quality, infrared motion, RGB, and audio. All of the sensors worked as expected except audio, which kept returning high-entropy values. We tested the audio sensors thoroughly by introducing extremely loud stimuli for short periods and maintaining a near-silent environment in between, but the streamed values showed no significant shift. This is still a work in progress.
  3. Loguru Logging: We also integrated the Loguru library into our Testbed server to produce more organized script output. Previously, our output logs were unorganized, which made finding specific log entries inefficient. Using Loguru, we created an organized, timestamped way to log and debug the scripts, and we added a function to download old log files locally and clear them once they are no longer needed (a minimal configuration sketch is included after this list).
  4. GitHub: We completed the installation of all Maestro dependencies and the necessary documentation. Documentation was an important focus this week because it lets newcomers understand how the working Maestros came to be, and it gives others in the community a reference for replicating this setup.
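
To make the WARM vs. ensembling distinction from item 1 concrete, here is a toy illustration using a linear reward head and made-up numbers, purely for intuition (this is not code from the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    M, d = 3, 4                                        # 3 fine-tuned reward models, 4-dim features
    thetas = [rng.normal(size=d) for _ in range(M)]    # per-model weight vectors
    x = rng.normal(size=d)                             # features of one candidate completion

    def reward(theta, x):
        return float(theta @ x)                        # toy linear reward head

    # Prediction ensembling: run all M models and average their M outputs
    ensemble_reward = np.mean([reward(t, x) for t in thetas])

    # WARM: average the M weight vectors once, then run a single merged model
    theta_warm = np.mean(thetas, axis=0)
    warm_reward = reward(theta_warm, x)

    print(ensemble_reward, warm_reward)  # equal for a linear head, but WARM needs only one
                                         # forward pass; for deep networks the two generally differ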
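Below is also a minimal sketch of the kind of Loguru configuration described in item 3; the file name, rotation size, and retention window are assumptions, not the exact values used on the Testbed server:

    from loguru import logger

    # Write timestamped logs to a per-run file; rotate when the file grows too large
    # and automatically delete old files instead of clearing them by hand.
    logger.add("logs/testbed_{time}.log", rotation="10 MB", retention="7 days", level="DEBUG")

    logger.info("Experiment started on Maestros {}", [1, 6, 19])
    logger.debug("Chunk uploaded: size=10, frequency=10")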

Week 5
