This was probably the week where we did the largest chunk of work relevant to the project: getting into the process of installing Ceph and working with it. We watched three videos covering how Ceph works, its data security, and its self-balancing and self-healing processes. While we were trying to mount Ceph, no OSDs were available, and we spent Week 4 troubleshooting the problem. After a long round of troubleshooting, we found out that we actually had an older version of Ceph installed. To fix this, we needed to change the uid and gid, as well as the group container. After that, we installed Debian 11 and did the Ceph installation through it.
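
For reference, the installed version and the OSD status are the first things worth checking in a situation like this. Below is a minimal Python sketch of such a health check, assuming the `ceph` CLI is on the PATH and that the JSON field names (`num_osds`, `num_up_osds`, `num_in_osds`) match the installed release; it illustrates the idea rather than the exact commands we ran.

```python
import json
import subprocess

def ceph_json(*args: str) -> dict:
    """Run a `ceph` subcommand with JSON output and parse the result."""
    out = subprocess.run(
        ["ceph", *args, "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

# The version string; an unexpectedly old release was the root cause
# of our missing-OSD problem.
version = subprocess.run(
    ["ceph", "--version"], capture_output=True, text=True, check=True
).stdout.strip()
print(version)

# OSD counts: zero OSDs up means the cluster cannot store data and
# mount attempts will fail or hang.
stat = ceph_json("osd", "stat")
print(f"{stat['num_up_osds']}/{stat['num_osds']} OSDs up, "
      f"{stat['num_in_osds']} in")
```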

Here is how the data flow in the Ceph diagram works (a small placement sketch follows the list):
1. Data arrives at the file system/pool.
2. The data is split into chunks, which are mapped to placement groups (PGs).
3. Metadata and replicas are created and then stored on OSDs.
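
To make the steps above concrete, here is a toy Python sketch of the placement idea. The pool size, OSD count, hash function, and PG-to-OSD mapping are all simplified stand-ins (real Ceph uses the rjenkins hash and the CRUSH algorithm over the cluster map), so treat it as an illustration, not Ceph's actual implementation.

```python
import hashlib

NUM_PGS = 64           # placement groups in the pool (a power of two in practice)
REPLICA_SIZE = 3       # pool replication factor
OSDS = list(range(6))  # toy cluster with six OSD ids

def object_to_pg(obj_name: str) -> int:
    # Step 2: hash the object name to choose a placement group.
    # Real Ceph uses the rjenkins hash; sha256 stands in here.
    h = int.from_bytes(hashlib.sha256(obj_name.encode()).digest()[:4], "big")
    return h % NUM_PGS

def pg_to_osds(pg: int) -> list[int]:
    # Step 3: map the PG to the OSDs that hold the primary copy and its
    # replicas. Real Ceph computes this with CRUSH from the cluster map;
    # a simple stride stands in here.
    return [OSDS[(pg + i) % len(OSDS)] for i in range(REPLICA_SIZE)]

pg = object_to_pg("myfile.txt")
print(f"'myfile.txt' -> pg {pg} -> osds {pg_to_osds(pg)}")
```

Because every client can compute the same mapping from the cluster map, no central lookup table is needed, which is also what lets Ceph rebalance and self-heal by recomputing placements when OSDs come and go.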