Ceph (software)

Ceph
Original author(s) Inktank Storage (Sage Weil, Yehuda Sadeh Weinraub, Gregory Farnum, Josh Durgin, Samuel Just, Wido den Hollander)
Developer(s) Canonical, CERN, Cisco, Fujitsu, Intel, Red Hat, SanDisk, and SUSE[1]
Stable release 10.2.2 "Jewel"[2] / 15 June 2016 (2016-06-15)
Repository git.ceph.com/?p=ceph.git;a=summary
Written in C++, Python
Operating system Linux
Type Distributed object store
License LGPL 2.1[3]
Website ceph.com

In computing, Ceph is a free-software storage platform that implements object storage on a single distributed computer cluster and provides interfaces for object-, block- and file-level storage. Ceph's main goals are to be completely distributed without a single point of failure, scalable to the exabyte level, and freely available.

Ceph replicates data and makes it fault-tolerant,[4] using commodity hardware and requiring no specific hardware support. As a result of its design, the system is both self-healing and self-managing, aiming to minimize administration time and other costs.

On April 21, 2016, the Ceph development team released "Jewel", the first Ceph release in which CephFS is considered stable. The CephFS repair and disaster-recovery tools are feature-complete, although some functionality, such as snapshots and multiple active metadata servers, remains disabled by default.[5]

Design

A high-level overview of Ceph's internal organization[6]:4

Ceph employs four distinct kinds of daemons:[6]

  - Cluster monitors (ceph-mon), which keep track of active and failed cluster nodes
  - Metadata servers (ceph-mds), which store the metadata of inodes and directories
  - Object storage devices (ceph-osd), which store the actual content of files on an underlying filesystem[7]
  - Representational state transfer (RESTful) gateways (ceph-rgw), which expose the object storage layer as an interface compatible with the Amazon S3 and OpenStack Swift APIs

All of these are fully distributed, and may run on the same set of servers. Clients directly interact with all of them.[8]

Ceph stripes individual files across multiple nodes to achieve higher throughput, similarly to how RAID 0 stripes data across multiple hard drives. Adaptive load balancing is supported, whereby frequently accessed objects are replicated over more nodes. As of December 2014, XFS is the recommended underlying filesystem type for production environments, while Btrfs is recommended for non-production environments. ext4 filesystems are not recommended because of the resulting limitations on the maximum length of RADOS objects.[9]
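The round-robin idea behind striping can be sketched in a few lines of Python. This is a simplified model for illustration only, not Ceph's actual implementation; the function names, unit size, and node count are all hypothetical:

```python
# Simplified illustration of RAID 0-style round-robin striping.
# Stripe unit i of the data is assigned to node i % nodes.

def stripe(data: bytes, nodes: int, unit: int) -> dict:
    """Split data into fixed-size stripe units distributed round-robin."""
    placement = {n: [] for n in range(nodes)}
    for i in range(0, len(data), unit):
        placement[(i // unit) % nodes].append(data[i:i + unit])
    return placement

def unstripe(placement: dict) -> bytes:
    """Reassemble the original byte stream by cycling over the nodes."""
    queues = {n: list(chunks) for n, chunks in placement.items()}
    out, n, nodes = [], 0, len(queues)
    while any(queues.values()):
        if queues[n % nodes]:
            out.append(queues[n % nodes].pop(0))
        n += 1
    return b"".join(out)
```

With three nodes and a 2-byte stripe unit, a 10-byte file lands as two units on nodes 0 and 1 and one unit on node 2, so sequential reads can fan out across all three nodes at once.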

Object storage

An architecture diagram showing the relations between components of the Ceph storage platform

Ceph implements distributed object storage. Ceph’s software libraries provide client applications with direct access to the reliable autonomic distributed object store (RADOS) object-based storage system, and also provide a foundation for some of Ceph’s features, including RADOS Block Device (RBD), RADOS Gateway, and the Ceph File System.
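A key property of RADOS is that object placement is computed rather than looked up in a central table: an object's name is hashed into a placement group, which the CRUSH algorithm then maps onto storage devices. The sketch below illustrates only the first, hashing step; the use of MD5 is an illustrative stand-in (Ceph actually uses a Jenkins hash, and a plain modulo over-simplifies its "stable modulo" behavior):

```python
import hashlib

def object_to_pg(object_name: str, pg_num: int) -> int:
    """Illustrative stand-in for RADOS's name-to-placement-group step.
    The point is that every client computes placement independently
    and deterministically, so there is no central lookup on the data
    path; the real hash and the PG-to-OSD mapping (CRUSH) differ."""
    digest = hashlib.md5(object_name.encode("utf-8")).hexdigest()
    return int(digest, 16) % pg_num
```

Because every client runs the same computation, reads and writes can go straight to the responsible storage daemons without consulting a central metadata service.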

The librados software libraries provide access in C, C++, Java, PHP, and Python. The RADOS Gateway also exposes the object store as a RESTful interface which can present as both native Amazon S3 and OpenStack Swift APIs.

Block storage

Ceph’s object storage system allows users to mount Ceph as a thin-provisioned block device. When an application writes data to Ceph using a block device, Ceph automatically stripes and replicates the data across the cluster. Ceph's RADOS Block Device (RBD) also integrates with the Kernel-based Virtual Machine (KVM) hypervisor.

Ceph RBD interfaces with the same Ceph object storage system that provides the librados interface and the CephFS file system, and it stores block device images as objects. Since RBD is built on librados, RBD inherits librados's abilities, including read-only snapshots and revert to snapshot. By striping images across the cluster, Ceph improves read access performance for large block device images.
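Because an RBD image is stored as a sequence of fixed-size objects (4 MiB by default), mapping a block-device offset to its backing object is plain arithmetic. The sketch below illustrates that mapping under those assumptions; object naming and exact defaults vary by image format and configuration:

```python
OBJECT_SIZE = 4 * 1024 * 1024  # RBD's default object size (4 MiB)

def backing_objects(offset: int, length: int, object_size: int = OBJECT_SIZE):
    """Return (object_index, offset_within_object, chunk_length)
    triples describing which backing objects an I/O of `length`
    bytes at block offset `offset` touches."""
    spans = []
    end = offset + length
    while offset < end:
        idx = offset // object_size
        within = offset % object_size
        chunk = min(object_size - within, end - offset)
        spans.append((idx, within, chunk))
        offset += chunk
    return spans
```

An I/O that crosses an object boundary is simply split into per-object chunks, which is what lets large images spread their load across many OSDs.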

The block device can be virtualized to provide block storage to virtual machines in virtualization platforms such as Apache CloudStack, OpenStack, OpenNebula, Ganeti, and Proxmox Virtual Environment.

File system

Ceph’s file system (CephFS) runs on top of the same object storage system that provides object storage and block device interfaces. The Ceph metadata server cluster provides a service that maps the directories and file names of the file system to objects stored within RADOS clusters. The metadata server cluster can expand or contract, and it can rebalance the file system dynamically to distribute data evenly among cluster hosts. This ensures high performance and prevents heavy loads on specific hosts within the cluster.
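The mapping from the namespace to metadata servers can be pictured as assigning directory subtrees to MDS ranks. The toy model below is a static illustration only; real CephFS performs dynamic subtree partitioning, migrating subtrees between ranks in response to load rather than consulting a fixed table:

```python
def mds_for_path(path: str, subtree_map: dict, default_rank: int = 0) -> int:
    """Toy model of namespace partitioning across metadata servers:
    the deepest directory prefix with an explicit assignment wins,
    and unassigned paths fall to rank 0. Real CephFS derives these
    assignments dynamically from observed metadata load."""
    best, rank = "", default_rank
    for prefix, r in subtree_map.items():
        if path == prefix or path.startswith(prefix.rstrip("/") + "/"):
            if len(prefix) > len(best):
                best, rank = prefix, r
    return rank

# Hypothetical assignment: /home on rank 1, /home/alice split off to rank 2.
subtrees = {"/home": 1, "/home/alice": 2}
```

Splitting a hot subtree (here, /home/alice) onto its own rank is the kind of rebalancing the metadata cluster performs to keep load even across hosts.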

Clients mount the POSIX-compatible file system using a Linux kernel client. On March 19, 2010, Linus Torvalds merged the Ceph client into Linux kernel version 2.6.34[10] which was released on May 16, 2010. An older FUSE-based client is also available. The servers run as regular Unix daemons.

History

Ceph was initially created by Sage Weil (developer of the Webring concept and co-founder of DreamHost) for his doctoral dissertation,[11] which was advised by Professor Scott A. Brandt in the Jack Baskin School of Engineering at the University of California, Santa Cruz and funded by the United States Department of Energy (DOE) and National Nuclear Security Administration (NNSA), involving Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL), and Sandia National Laboratories (SNL).

After his graduation in fall 2007, Weil continued to work on Ceph full-time, and the core development team expanded to include Yehuda Sadeh Weinraub and Gregory Farnum. In 2012, Weil created Inktank Storage for professional services and support for Ceph.[12][13]

In April 2014, Red Hat purchased Inktank, bringing the majority of Ceph development in-house.[14]

In October 2015, the Ceph Community Advisory Board was formed to assist the community in driving the direction of open source software-defined storage technology. The charter advisory board includes Ceph community members from global IT organizations that are committed to the Ceph project, including individuals from Canonical, CERN, Cisco, Fujitsu, Intel, Red Hat, SanDisk, and SUSE.[15]

Etymology

The name "Ceph" is a common nickname given to pet octopuses and derives from cephalopods, a class of molluscs, and ultimately from Ancient Greek κεφαλή (ke-pha-LEE), meaning "head" and πόδι (PO-dhi), meaning "leg". The name (emphasized by the logo) suggests the highly parallel behavior of an octopus and was chosen to connect the file system with UCSC's mascot, a banana slug called "Sammy".[25] Banana slugs are gastropods, which are also a class of molluscs.

References

  1. "Ceph Community Forms Advisory Board". 2015-10-28. Retrieved 2016-01-20.
  2. "v10.2.2 Jewel released".
  3. "LGPL2.1 license file in the Ceph sources". 2014-10-24. Retrieved 2014-10-24.
  4. Jeremy Andrews (2007-11-15). "Ceph Distributed Network File System". KernelTrap.
  5. Sage Weil (2016-04-21). "v10.2.0 Jewel Released". Ceph Blog.
  6. M. Tim Jones (2010-06-04). "Ceph: A Linux petabyte-scale distributed file system" (PDF). IBM. Retrieved 2014-12-03.
  7. "Btrfs – Ceph Wiki". Retrieved 2010-04-27.
  8. Jake Edge (2007-11-14). "The Ceph filesystem". LWN.net.
  9. "Hard Disk and File System Recommendations". ceph.com. Retrieved 28 March 2013.
  10. Sage Weil (2010-02-19). "Client merged for 2.6.34". ceph.newdream.net.
  11. Sage Weil (2007-12-01). "Ceph: Reliable, Scalable, and High-Performance Distributed Storage" (PDF). University of California, Santa Cruz.
  12. Bryan Bogensberger (2012-05-03). "And It All Comes Together". Inktank Blog.
  13. Joseph F. Kovar (July 10, 2012). "The 10 Coolest Storage Startups Of 2012 (So Far)". CRN. Retrieved July 19, 2013.
  14. Red Hat Inc (2014-04-30). "Red Hat to Acquire Inktank, Provider of Ceph". Red Hat. Retrieved 2014-08-19.
  15. "Ceph Community Forms Advisory Board". 2015-10-28. Retrieved 2016-01-20.
  16. Sage Weil (2012-07-03). "v0.48 "Argonaut" Released". Ceph Blog.
  17. Sage Weil (2013-01-01). "v0.56 Released". Ceph Blog.
  18. Sage Weil (2013-05-17). "v0.61 "Cuttlefish" Released". Ceph Blog.
  19. Sage Weil (2013-08-14). "v0.67 Dumpling Released". Ceph Blog.
  20. Sage Weil (2013-11-09). "v0.72 Emperor Released". Ceph Blog.
  21. Sage Weil (2014-05-07). "v0.80 Firefly Released". Ceph Blog.
  22. Sage Weil (2014-10-29). "v0.87 Giant Released". Ceph Blog.
  23. Sage Weil (2015-04-07). "v0.94 Hammer Released". Ceph Blog.
  24. Sage Weil (2015-11-06). "v9.2.0 Infernalis Released". Ceph Blog.
  25. "How the Banana Slug became UCSC's official mascot". Retrieved September 22, 2009.

This article is issued from Wikipedia (version of 2016-11-16). The text is available under the Creative Commons Attribution-ShareAlike license; additional terms may apply for media files.