
[[Image(Ceph.png)]]

[[Image(Ansible-Playbook.png)]]

== **Week 8**
**Summary**
- Added and removed cluster networks on our Ceph nodes
- Enabled the second network interface on our node
- Added the cluster network to the Ceph configuration (see the sketch after this list)
- Manually added the new node to the Ceph cluster
- Developed Ansible playbooks to automate the process
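
The cluster-network change above can be scripted in the same way as our other playbook work. The task below is only a rough sketch of how such a playbook could look; the host group, subnet, and service name are placeholder assumptions, not our actual values.

{{{#!yaml
# Hypothetical sketch: add a cluster_network entry to ceph.conf on every
# Ceph node. The host group, subnet, and config path are placeholders.
- hosts: ceph_nodes
  become: true
  tasks:
    - name: Set cluster_network in the [global] section of ceph.conf
      ansible.builtin.lineinfile:
        path: /etc/ceph/ceph.conf
        insertafter: '^\[global\]'
        regexp: '^cluster_network\s*='
        line: "cluster_network = 192.168.100.0/24"
      notify: restart ceph osd daemons

  handlers:
    - name: restart ceph osd daemons
      ansible.builtin.service:
        name: ceph-osd.target
        state: restarted
}}}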

**Idea Of Our Progress This Week**
This week, we spent most of our time on the manual installation of Ceph. We set up the monitor and manager services, followed by the dashboard, a storage pool, and the CephFS file system. We then went over Ansible tasks and how to work with them through YAML scripts, also known as Ansible playbooks, and wrote a playbook to fix the hostname tasks. Finally, we set up an LXC container for Ansible so we could test further configurations: we configured passwordless sudo, deployed Ceph on one node, and created an Ansible playbook to mount CephFS.
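
For reference, a playbook that mounts CephFS can be sketched roughly as below; the host group, monitor address, client name, secret file, and mount point are made-up placeholders rather than the values from our cluster.

{{{#!yaml
# Minimal sketch of a playbook that mounts CephFS on client nodes.
# Monitor address, client name, secret file, and mount point are placeholders.
- hosts: ceph_clients
  become: true
  tasks:
    - name: Ensure the mount point exists
      ansible.builtin.file:
        path: /mnt/cephfs
        state: directory
        mode: "0755"

    - name: Mount CephFS via the kernel client and persist it in /etc/fstab
      ansible.posix.mount:
        src: "10.0.0.11:6789:/"
        path: /mnt/cephfs
        fstype: ceph
        opts: "name=admin,secretfile=/etc/ceph/admin.secret,noatime"
        state: mounted
}}}

Using the mount module with state: mounted both performs the mount and writes the corresponding /etc/fstab entry, so the file system comes back after a reboot.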

[[Image(Ceph.png)]]

[[Image(Ansible-Playbook.png)]]

== **Week 9**
**Summary**
- SLURM Installation
- Matplotlib Post-Processing
- Performance Testing With Graphs

**Idea Of Our Progress This Week**
This week, we worked on installing SLURM, a job scheduler for Linux clusters. We first installed SLURM in the LXC containers, restarted the node, and then ran benchmarks through SLURM. We also set up InfiniBand networking on the cluster and continued running performance tests.
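
A playbook for the SLURM installation step could look something like the sketch below. It assumes Debian/Ubuntu-based containers, so the host group, package names, service names, and the slurm.conf template are assumptions rather than a record of our exact setup.

{{{#!yaml
# Rough sketch: install SLURM and Munge on Debian/Ubuntu-based containers.
# Host group, package names, config path, and template name are assumptions.
- hosts: slurm_cluster
  become: true
  tasks:
    - name: Install the SLURM workload manager and Munge packages
      ansible.builtin.apt:
        name:
          - slurm-wlm
          - munge
        state: present
        update_cache: true

    - name: Deploy slurm.conf from a template (hypothetical template name)
      ansible.builtin.template:
        src: slurm.conf.j2
        dest: /etc/slurm/slurm.conf
      notify: restart slurm daemons

  handlers:
    - name: restart slurm daemons
      ansible.builtin.service:
        name: "{{ item }}"
        state: restarted
      loop:
        - slurmctld
        - slurmd
}}}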