Ceph Administration (CP-ADM)

Ceph admin training on cluster, object, block, and file storage deployment

Description

This training equips participants with skills to manage Ceph as an integrated storage platform for object, block, and file storage. Participants will learn to create and manage storage pools, control access, and access storage via S3, Swift, RBD, iSCSI, and CephFS.

The course also covers performance monitoring and tuning using tools like Grafana and Prometheus. It is designed for system, cloud, and storage administrators and combines theory, case studies, quizzes, and hands-on labs. Basic Linux knowledge is recommended.

Why Take This Course?

  • Master Enterprise Storage with Ceph

    Learn how to deploy, manage, and expand Ceph clusters to provide object, block, and file storage for production environments.

  • Optimize and Secure Storage Operations

    Gain practical skills to control access, tune performance, and ensure storage reliability using modern monitoring and management tools.

  • Advance Your Career in Cloud and Storage Engineering

    Acquire in-demand expertise that strengthens your professional profile as a storage or cloud administrator in mission-critical environments.

Facilities

  • Hands-on Lab Environment – Train using dedicated virtual machines with full access to lab resources via Jumpserver (RDP & SSH), enabling real-world practice throughout the training.
  • Downloadable Lab Environment – Continue practicing after the training with our VM Lab Downloader (.qcow2), allowing you to run the lab environment on your own machine.
  • Complete Learning Materials – Get comprehensive digital training materials and a handbook with up to 1 year access, plus a certificate of course completion.
  • Post-Training Support – Continue learning after the class with access to training records (for online sessions) and community discussion groups to help reinforce your skills.

Trainer

Pahrial MS

IT Infrastructure Engineer


Syllabus

Introduction to Ceph
  • What is Ceph?
  • Ceph Releases
  • Storage Challenges
  • Architecture of Red Hat Ceph Storage
Deploying Ceph
  • Hardware Recommendations
  • OS Recommendations
  • Ceph Deployment Tools
  • Planning a Cluster
  • Lab 2.1 Lab Environment Preparation
  • Lab 2.2 Bootstrap Ceph Cluster
  • Lab 2.3 Adding Hosts
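To give a flavor of the deployment workflow practiced in Labs 2.2 and 2.3, a minimal cephadm bootstrap might look like the sketch below. The IP address and hostname (`10.0.0.11`, `node2`) are placeholders, not values from the lab guide:

```shell
# Bootstrap a new cluster on the first node; cephadm deploys the
# initial monitor and manager and writes /etc/ceph/ceph.pub.
cephadm bootstrap --mon-ip 10.0.0.11

# Distribute the cluster SSH key to another host, then add it.
ssh-copy-id -f -i /etc/ceph/ceph.pub root@node2
ceph orch host add node2 10.0.0.12

# Verify overall cluster health and host inventory.
ceph status
ceph orch host ls
```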
Configuring Ceph
  • Replicated Pool
  • Erasure Coded Pool
  • Managing Ceph Authentication
  • Lab 3.1 Managing Replicated Pools
  • Lab 3.2 Managing Erasure Coded Pools
  • Lab 3.3 Modifying Settings in the Configuration File
  • Lab 3.4 Managing Ceph Authentication
  • Quiz 3 Managing Pool and Ceph Authentication
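The pool and authentication topics in this module map to a handful of CLI operations. A sketch, with `rep_pool`, `ec_pool`, `ec42`, and `client.app1` as placeholder names:

```shell
# Replicated pool with 64 placement groups (replication size defaults to 3).
ceph osd pool create rep_pool 64 64 replicated

# Erasure-coded pool using a custom profile with k=4 data and m=2 coding chunks.
ceph osd erasure-code-profile set ec42 k=4 m=2
ceph osd pool create ec_pool 64 64 erasure ec42

# cephx: create a client key restricted to read/write on one pool.
ceph auth get-or-create client.app1 mon 'allow r' osd 'allow rw pool=rep_pool'
ceph auth ls
```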
Providing Block Storage with RBD
  • RADOS Block Devices
  • RBD Mirrors for Disaster Recovery
  • Lab 4.1 Managing RADOS Block Devices
  • Lab 4.2 RBD Image Snapshot Feature
  • Lab 4.3 RBD Image Clone Feature
  • Lab 4.4 Importing and Exporting RBD Images
  • Quiz 4 Managing RBD Image
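The RBD image lifecycle covered in Labs 4.1 through 4.4 can be sketched as follows (pool and image names are placeholders):

```shell
# Create a 10 GiB RBD image.
rbd create rep_pool/disk1 --size 10G

# Snapshot, protect, and clone; clones require a protected snapshot.
rbd snap create rep_pool/disk1@snap1
rbd snap protect rep_pool/disk1@snap1
rbd clone rep_pool/disk1@snap1 rep_pool/disk1-clone

# Export an image to a local file and re-import it under a new name.
rbd export rep_pool/disk1 /tmp/disk1.img
rbd import /tmp/disk1.img rep_pool/disk1-copy
```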
Providing Object Storage with RADOSGW
  • RADOS Gateway for Object Storage
  • Multisite RADOSGW Deployments
  • Lab 5.1 Deploying a RADOS Gateway
  • Lab 5.2 Providing Object Storage Using S3
  • Quiz 5 Managing Object Storage
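As an illustration of Labs 5.1 and 5.2, deploying a gateway and creating an S3 user might look like this; the service ID `myrgw`, host `node2`, and user `demo` are placeholders:

```shell
# Deploy a RADOS Gateway service via the cephadm orchestrator.
ceph orch apply rgw myrgw --placement="node2"

# Create an S3 user; the access and secret keys are printed as JSON.
radosgw-admin user create --uid=demo --display-name="Demo User"

# Any S3-compatible client can then target the gateway endpoint, e.g.:
# aws --endpoint-url http://node2:80 s3 mb s3://demo-bucket
```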
Providing File Storage with CephFS
  • File Storage with CephFS
  • Lab 6.1 Deploying MDS for CephFS
  • Lab 6.2 Providing File Storage with CephFS
  • Lab 6.3 Working with Directory Layout Attribute
  • Quiz 6 Managing Ceph Filesystem
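The CephFS workflow from Labs 6.1 and 6.2 reduces to a few commands; `myfs` and the mount point are placeholders, and the mount assumes the client has the cluster's configuration and keyring under /etc/ceph:

```shell
# Create a CephFS volume; the orchestrator deploys the required MDS daemons.
ceph fs volume create myfs

# Check filesystem and MDS status.
ceph fs status myfs

# Mount with the kernel client (fsid is read from the local ceph.conf).
mount -t ceph admin@.myfs=/ /mnt/myfs
```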
Configuring the CRUSH Map
  • Managing and Customizing the CRUSH Map
  • Lab 7.1 Managing the CRUSH Map
  • Lab 7.2 Customizing CRUSH Hierarchy
  • Quiz 7 Customizing CRUSH Hierarchy
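Customizing the CRUSH hierarchy, as in Lab 7.2, typically means adding buckets and rules like the following sketch (`rack1`, `node2`, `rack_rule`, and `rep_pool` are placeholder names):

```shell
# Inspect the CRUSH hierarchy as the cluster currently sees it.
ceph osd crush tree

# Add a rack bucket, place it under the default root, and move a host into it.
ceph osd crush add-bucket rack1 rack
ceph osd crush move rack1 root=default
ceph osd crush move node2 rack=rack1

# Create a replicated rule with rack as the failure domain, then apply it.
ceph osd crush rule create-replicated rack_rule default rack
ceph osd pool set rep_pool crush_rule rack_rule
```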
Managing and Updating the Cluster Maps
  • Managing the Monitor and OSD Maps
  • Lab 8.1 Managing the Monitor and OSD Maps
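The map-management commands exercised in Lab 8.1 include, for example:

```shell
# Dump the current monitor and OSD maps.
ceph mon dump
ceph osd dump

# Before planned maintenance, stop OSDs from being marked out and rebalanced.
ceph osd set noout
# ...perform maintenance on the host...
ceph osd unset noout
```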
Managing a Ceph Storage Cluster
  • Operating a Ceph Storage Cluster
  • Lab 9.1 Operating and Maintaining a Ceph Monitor
  • Lab 9.2 Operating and Maintaining a Ceph OSD
Tuning and Troubleshooting Red Hat Ceph Storage
  • Tuning Linux Servers for Ceph
  • Optimizing Ceph Performance
  • Preserving Ceph Client Performance
  • Troubleshooting Client Issues
  • FIO Benchmark
  • Lab 10.1 Tuning Linux I/O Network Parameters
  • Lab 10.2 Analyzing Ceph Cluster Performance
  • Lab 10.3 Tuning Ceph Cluster Performance
  • Lab 10.4 Troubleshooting Client Issues
  • Lab 10.5 Benchmarking Ceph using FIO
  • Quiz 10 Tuning Ceph Parameters and Benchmarking Ceph
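For the benchmarking labs, a typical pairing is the built-in RADOS bench plus fio's rbd engine. A sketch, assuming the placeholder pool `rep_pool` and image `disk1`, and an fio build with librbd support:

```shell
# Built-in RADOS benchmark: write for 30 seconds, keep objects for a
# follow-up read test, then clean up.
rados bench -p rep_pool 30 write --no-cleanup
rados -p rep_pool cleanup

# fio against an RBD image: 4k random writes, queue depth 32, 60 seconds.
fio --name=rbdtest --ioengine=rbd --clientname=admin \
    --pool=rep_pool --rbdname=disk1 \
    --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based
```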

Common Questions

Is there a minimum number of participants required for the training to run?

Yes, the training runs with a minimum of 4 participants. If the minimum is not met, contact us to discuss the available options.

Is it possible to customize the training materials?

Yes, the training materials can be customized to your needs; topics are not limited to Cloud, CloudSecOps, and DevSecOps.

Course Details

Category: Cloud
Duration: None
Level: Intermediate
Method: Offline / Online / In-house

Need help?

Contact our team for corporate training inquiries.

Chat on WhatsApp

Related Courses

  • Istio Administration (IS-ADM) – Istio training on GKE for service mesh setup, monitoring, and troubleshooting. Intermediate, 10 modules.
  • OpenStack Administration (OS-ADM) – OpenStack admin training on dashboard, CLI, instances, and Kolla-Ansible deployment. Intermediate, 17 modules.
  • Kubernetes Administration (K9-ADM) – Kubernetes training on container orchestration and scaling with the open source Kubernetes platform. Intermediate, 25 modules.