Tools: Ceph on Ubuntu (Single Node): Setup Guide (OSD, iSCSI, Bonding) (2026)

I recently put together a small Ceph lab on Ubuntu using cephadm, mainly to understand how everything fits together in a real environment. Most guides out there are either too abstract or assume multi-node production setups, so I decided to document a single-node deployment end to end, focusing on what actually matters when you're getting started. Everything is based on Ubuntu Server 22.04 and Ceph Squid.

Full step-by-step docs here: 👉 https://github.com/EzequielPA4/ceph-infrastructure-docs

If you're running something similar, this will probably save you time.

What this includes

- Ceph installation with cephadm (a bootstrap sketch follows this list)
- OSD provisioning from previously used disks
- Network bonding (802.3ad / LACP)
- iSCSI gateway setup (a rough service-spec sketch also follows)
- LVM snapshots for rollback scenarios

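For the install itself, this is roughly the shape of the cephadm path on a single host. It is a sketch rather than the exact commands from the repo: the monitor IP, hostname, and disk are placeholders, and depending on which Ceph release you want you may need to add the upstream Ceph apt repository before installing cephadm.

```bash
# Minimal single-node cephadm sketch on Ubuntu Server 22.04.
# IP, hostname (ceph-lab), and /dev/sdb are placeholders.
sudo apt update
sudo apt install -y cephadm

# Bootstrap a one-node cluster; --single-host-defaults relaxes the settings
# that otherwise assume multiple hosts.
sudo cephadm bootstrap --mon-ip 192.168.1.50 --single-host-defaults

# The ceph CLI lives inside the cephadm shell unless ceph-common is installed.
sudo cephadm shell -- ceph orch device ls

# Create an OSD on one specific clean disk...
sudo cephadm shell -- ceph orch daemon add osd ceph-lab:/dev/sdb

# ...or let the orchestrator consume every disk it reports as available.
sudo cephadm shell -- ceph orch apply osd --all-available-devices
```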
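The iSCSI gateway can also be driven through the orchestrator with a service spec. This is only a hedged sketch: the pool name, service id, credentials, trusted IP, and hostname are placeholders, and the iSCSI gateway has been deprecated in recent Ceph releases, so double-check the spec fields against the documentation for the exact version you deploy.

```bash
# Create the backing RBD pool first (the name is arbitrary).
sudo cephadm shell -- ceph osd pool create iscsi-pool
sudo cephadm shell -- rbd pool init iscsi-pool

# Describe the gateway as a service spec and hand it to the orchestrator.
cat > iscsi.yaml <<'EOF'
service_type: iscsi
service_id: lab-gw
placement:
  hosts:
    - ceph-lab
spec:
  pool: iscsi-pool
  api_user: admin
  api_password: change-me
  trusted_ip_list: "192.168.1.50"
EOF

# Apply the spec from inside the cephadm shell (mount the file so the
# containerised ceph CLI can read it).
sudo cephadm shell --mount "$PWD/iscsi.yaml":/mnt/iscsi.yaml -- ceph orch apply -i /mnt/iscsi.yaml
```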
Why this might help

- testing Ceph in a lab
- learning how OSDs really work
- trying to avoid common cephadm issues
- dealing with reused disks (LVM leftovers, etc.)

Notes from the lab

A few things that are worth calling out:

- Ceph is very strict with disks → anything with existing LVM or a filesystem will be rejected
- Cleaning disks properly is key: wipefs + sgdisk + dd in some cases (a cleanup sketch follows this list)
- Bonding (LACP) needs correct switch config or it just won't behave (example netplan bond below)
- Snapshots with LVM are useful, but you need to monitor usage with lvs -o +data_percent (snapshot example below)

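On the reused-disk problem, a cleanup along these lines is the usual way to get a previously used disk accepted. Treat it as a sketch rather than the repo's exact procedure: the device and hostname are placeholders, and every command below destroys whatever is on the target disk.

```bash
# Reclaim a previously used disk before handing it to Ceph.
# DISK and the hostname (ceph-lab) are examples; double-check with lsblk first.
DISK=/dev/sdb

# Clear filesystem / RAID / LVM signatures, then the GPT and MBR structures.
sudo wipefs --all "$DISK"
sudo sgdisk --zap-all "$DISK"

# Stubborn metadata (old bluestore labels, stale LVM headers) sometimes
# survives the above; zeroing the first stretch of the disk usually settles it.
sudo dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct,dsync

# cephadm can also zap a device itself, which cleans up LVM state left by
# ceph-volume from earlier installs.
sudo cephadm shell -- ceph orch device zap ceph-lab "$DISK" --force

# The disk should now show up as available for OSD provisioning.
sudo cephadm shell -- ceph orch device ls --refresh
```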
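For the bond, a netplan definition along these lines is what I'd expect on 22.04. The interface names, addresses, and policy choices are illustrative, not taken from the repo, and the point above still stands: the switch side must be configured as an LACP port-channel.

```bash
# Rough 802.3ad bond via netplan (interface names, addresses, and the file
# name are examples).
sudo tee /etc/netplan/60-bond0.yaml >/dev/null <<'EOF'
network:
  version: 2
  renderer: networkd
  ethernets:
    enp3s0:
      dhcp4: false
    enp4s0:
      dhcp4: false
  bonds:
    bond0:
      interfaces: [enp3s0, enp4s0]
      addresses: [192.168.1.50/24]
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
      parameters:
        mode: 802.3ad
        lacp-rate: fast
        mii-monitor-interval: 100
        transmit-hash-policy: layer3+4
EOF

sudo netplan apply

# Confirm the kernel actually negotiated LACP with the switch.
cat /proc/net/bonding/bond0
```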
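And for the snapshot piece, a minimal sketch of the create / monitor / roll-back cycle. The volume group, LV names, and the snapshot size are made up for the example; the important habit is watching data_percent, because a snapshot that fills up becomes invalid.

```bash
# Take a snapshot of the origin LV before a risky change.
sudo lvcreate --size 5G --snapshot --name root-pre-change /dev/vg0/root

# Keep an eye on how full the snapshot is getting while testing.
sudo lvs -o lv_name,lv_size,data_percent vg0

# Roll the origin back to the snapshot (the merge completes when the LV is
# next activated, e.g. after a reboot for a root filesystem)...
sudo lvconvert --merge /dev/vg0/root-pre-change

# ...or simply drop the snapshot once the change is confirmed good.
# sudo lvremove /dev/vg0/root-pre-change
```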
Limitations

This is a single-node setup, so:

- no quorum discussion
- focused on learning / testing

I'm curious how others are setting this up in lab environments:

- Are you using cephadm or something else?
- Any gotchas with iSCSI gateways?

Feel free to share your setup or improvements.