Ceph FS / RBD
No Conda or PIP on CephFS

Installing conda and pip packages on all CephFS (shared) filesystems is strictly prohibited!
Ceph usage dashboards:

* Ceph filesystems data use
* Credit: Ceph data usage
* General Ceph Grafana dashboard
Currently available storageClasses:
StorageClass | Filesystem Type | Region | AccessModes | Restrictions | Storage Type |
---|---|---|---|---|---|
rook-cephfs | CephFS | US West | ReadWriteMany | | Spinning drives with NVME meta |
rook-cephfs-central | CephFS | US Central | ReadWriteMany | | Spinning drives with NVME meta |
rook-cephfs-east | CephFS | US East | ReadWriteMany | | Mixed |
rook-cephfs-south-east | CephFS | US South East | ReadWriteMany | | Spinning drives with NVME meta |
rook-cephfs-pacific | CephFS | Hawaii+Guam | ReadWriteMany | | Spinning drives with NVME meta |
rook-cephfs-haosu | CephFS | US West (local) | ReadWriteMany | Hao Su and Ravi cluster | Spinning drives with NVME, meta on NVME |
rook-cephfs-tide | CephFS | US West (local) | ReadWriteMany | SDSU Tide cluster | Spinning drives with NVME meta |
rook-cephfs-ucsd | CephFS | US West (local) | ReadWriteMany | Read the Policy | NVME |
rook-ceph-block | RBD | US West | ReadWriteOnce | | Spinning drives with NVME meta |
rook-ceph-block-east | RBD | US East | ReadWriteOnce | | Mixed |
rook-ceph-block-south-east | RBD | US South East | ReadWriteOnce | | Spinning drives with NVME meta |
rook-ceph-block-pacific | RBD | Hawaii+Guam | ReadWriteOnce | | Spinning drives with NVME meta |
rook-ceph-block-tide | RBD | US West (local) | ReadWriteOnce | SDSU Tide cluster | Spinning drives with NVME meta |
rook-ceph-block-central (*default*) | RBD | US Central | ReadWriteOnce | | Spinning drives with NVME meta |
The Ceph shared filesystem (CephFS) is the primary way of storing data in Nautilus and allows mounting the same volume from multiple pods in parallel (ReadWriteMany).
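As a minimal sketch, a PersistentVolumeClaim for a shared CephFS volume could look like the following; the claim name and requested size are illustrative placeholders, and any of the CephFS storage classes from the table above can be substituted:

```yaml
# Sketch: a ReadWriteMany claim on a CephFS storage class.
# "examplevol" and the 50Gi size are hypothetical placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: examplevol
spec:
  storageClassName: rook-cephfs
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
```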
Ceph block storage attaches an RBD (RADOS Block Device) to a single pod at a time (ReadWriteOnce). It provides the fastest access to the data and is preferred for all datasets that do not need shared access from multiple pods.
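A corresponding block-storage claim differs only in the access mode and storage class; again, the name and size below are placeholders, and omitting `storageClassName` would fall back to the cluster default (rook-ceph-block-central):

```yaml
# Sketch: a ReadWriteOnce claim backed by RBD block storage.
# "exampleblockvol" and the 20Gi size are hypothetical placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: exampleblockvol
spec:
  storageClassName: rook-ceph-block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```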
UCSD NVMe CephFS filesystem policy
Warning
This policy applies to the rook-cephfs-ucsd storageClass.

The filesystem is very fast and small. We expect all data on it to be used for currently running computation and then promptly deleted. We reserve the right to purge any data that stays there longer than needed, at the admins' discretion and without any notification.