commit a477966c0155cc2d0e8f44af56478e006c3e40f7 Author: Peter Matulis Date: Fri Oct 2 14:36:20 2020 -0400 Update README for Ceph EC pools This updates the README for erasure coded Ceph pools for the case of Ceph-based Object storage. The new text should be as similar as possible for all the charms that support configuration options for EC pools. See the below review for the first of these charms whose README has been updated. https://review.opendev.org/#/c/749824/ Remove section on RGW multisite replication as it is already in the CDG. The charm now points there. Add basic README template sections (Actions, Bugs, Configuration). Standardise Overview, Deployment, and Network spaces sections. Improve Access and Keystone integration sections. Change-Id: I3839c1018b9bdf0d6712d3fb2e9f95b633591615 diff --git a/README.md b/README.md index 5c08423..f207a35 100644 --- a/README.md +++ b/README.md @@ -1,225 +1,225 @@ # Overview -Ceph is a distributed storage and network file system designed to provide -excellent performance, reliability and scalability. +[Ceph][ceph-upstream] is a unified, distributed storage system designed for +excellent performance, reliability, and scalability. -This charm deploys the RADOS Gateway, a S3 and Swift compatible HTTP gateway -for online object storage on-top of a ceph cluster. +The ceph-radosgw charm deploys the RADOS Gateway, a S3 and Swift compatible +HTTP gateway. The deployment is done within the context of an existing Ceph +cluster. -## Usage +# Usage -In order to use this charm, it is assumed that you have already deployed a ceph -storage cluster using the 'ceph' charm with something like this:: +## Configuration - juju deploy -n 3 --config ceph.yaml ceph +This section covers common and/or important configuration options. See file +`config.yaml` for the full list of options, along with their descriptions and +default values. See the [Juju documentation][juju-docs-config-apps] for details +on configuring applications. -To deploy the RADOS gateway simple do:: +#### `pool-type` - juju deploy ceph-radosgw - juju add-relation ceph-radosgw ceph - -You can then directly access the RADOS gateway by exposing the service:: - - juju expose ceph-radosgw - -The gateway can be accessed over port 80 (as show in juju status exposed -ports). - -## Access - -Note that you will need to login to one of the service units supporting the -ceph charm to generate some access credentials:: - - juju ssh ceph/0 \ - 'sudo radosgw-admin user create --uid="ubuntu" --display-name="Ubuntu Ceph"' +The `pool-type` option dictates the storage pool type. See section 'Ceph pool +type' for more information. -For security reasons the ceph-radosgw charm is not set up with appropriate -permissions to administer the ceph cluster. +#### `source` -## Keystone Integration +The `source` option states the software sources. A common value is an OpenStack +UCA release (e.g. 'cloud:xenial-queens' or 'cloud:bionic-ussuri'). See [Ceph +and the UCA][cloud-archive-ceph]. The underlying host's existing apt sources +will be used if this option is not specified (this behaviour can be explicitly +chosen by using the value of 'distro'). -Ceph >= 0.55 integrates with Openstack Keystone for authentication of Swift requests. +## Ceph pool type -This is enabled by relating the ceph-radosgw service with keystone:: +Ceph storage pools can be configured to ensure data resiliency either through +replication or by erasure coding. 
This charm supports both types via the +`pool-type` configuration option, which can take on the values of 'replicated' +and 'erasure-coded'. The default value is 'replicated'. - juju deploy keystone - juju add-relation keystone ceph-radosgw +For this charm, the pool type will be associated with Object storage. -If you try to relate the radosgw to keystone with an earlier version of ceph the hook -will error out to let you know. - -## High availability - -When more than one unit is deployed with the [hacluster][hacluster-charm] -application the charm will bring up an HA active/active cluster. - -There are two mutually exclusive high availability options: using virtual IP(s) -or DNS. In both cases the hacluster subordinate charm is used to provide the -Corosync and Pacemaker backend HA functionality. +> **Note**: Erasure-coded pools are supported starting with Ceph Luminous. -See [OpenStack high availability][cdg-ha-apps] in the [OpenStack Charms -Deployment Guide][cdg] for details. +### Replicated pools -## Network Space support +Replicated pools use a simple replication strategy in which each written object +is copied, in full, to multiple OSDs within the cluster. -This charm supports the use of Juju Network Spaces, allowing the charm to be bound to network space configurations managed directly by Juju. This is only supported with Juju 2.0 and above. +The `ceph-osd-replication-count` option sets the replica count for any object +stored within the rgw pools. Increasing this value increases data resilience at +the cost of consuming more real storage in the Ceph cluster. The default value +is '3'. -API endpoints can be bound to distinct network spaces supporting the network separation of public, internal and admin endpoints. +> **Important**: The `ceph-osd-replication-count` option must be set prior to + adding the relation to the ceph-mon application. Otherwise, the pool's + configuration will need to be set by interfacing with the cluster directly. -To use this feature, use the --bind option when deploying the charm: +### Erasure coded pools - juju deploy ceph-radosgw --bind "public=public-space internal=internal-space admin=admin-space" +Erasure coded pools use a technique that allows for the same resiliency as +replicated pools, yet reduces the amount of space required. Written data is +split into data chunks and error correction chunks, which are both distributed +throughout the cluster. -alternatively these can also be provided as part of a juju native bundle configuration: +> **Note**: Erasure coded pools require more memory and CPU cycles than + replicated pools do. - ceph-radosgw: - charm: cs:ceph-radosgw - num_units: 1 - bindings: - public: public-space - admin: admin-space - internal: internal-space +When using erasure coded pools for Object storage multiple pools will be +created: one erasure coded pool ('rgw.buckets.data' for storing actual RGW +data) and several replicated pools (for storing RGW omap metadata). The +`ceph-osd-replication-count` configuration option only applies to the metadata +(replicated) pools. -NOTE: Spaces must be configured in the underlying provider prior to attempting to use them. +Erasure coded pools can be configured via options whose names begin with the +`ec-` prefix. -NOTE: Existing deployments using os-\*-network configuration options will continue to function; these options are preferred over any network space binding provided if set. 
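For illustration, assuming a cluster with enough OSDs to satisfy the chosen
profile, a bundle fragment that enables erasure coding might look as follows
(the `k`/`m` values here are examples only and should be sized against the
environment, as the note below advises):

    ceph-radosgw:
      charm: cs:ceph-radosgw
      num_units: 1
      options:
        pool-type: erasure-coded
        ec-profile-k: 4
        ec-profile-m: 2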
+> **Important**: It is strongly recommended to tailor the `ec-profile-k` and + `ec-profile-m` options to the needs of the given environment. These latter + options have default values of '1' and '2' respectively, which result in the + same space requirements as those of a replicated pool. -## Multi-Site replication +See [Ceph Erasure Coding][cdg-ceph-erasure-coding] in the [OpenStack Charms +Deployment Guide][cdg] for more information. -### Overview +## Deployment -This charm supports configuration of native replication between Ceph RADOS -gateway deployments. +To deploy a single RADOS gateway node within an existing Ceph cluster: -This is supported both within a single model and between different models -using cross-model relations. + juju deploy ceph-radosgw + juju add-relation ceph-radosgw:mon ceph-mon:radosgw -By default either ceph-radosgw deployment will accept write operations. +Expose the service: -### Deployment + juju expose ceph-radosgw -NOTE: example bundles for the us-west and us-east models can be found -in the bundles subdirectory of the ceph-radosgw charm. +> **Note**: The `expose` command is only required if the backing cloud blocks + traffic by default. In general, MAAS is the only cloud type that does not + employ firewalling. -NOTE: switching from a standalone deployment to a multi-site replicated -deployment is not supported. +The gateway can be accessed over port 80 (as per `juju status ceph-radosgw` +output). -To deploy in this configuration ensure that the following configuration -options are set on the ceph-radosgw charm deployments - in this example -rgw-us-east and rgw-us-west are both instances of the ceph-radosgw charm: +## Multi-site replication - rgw-us-east: - realm: replicated - zonegroup: us - zone: us-east - rgw-us-west: - realm: replicated - zonegroup: us - zone: us-west +The charm supports native replication between multiple RADOS Gateway +deployments. This is documented under [Ceph RADOS Gateway multisite +replication][cdg-ceph-radosgw-multisite] in the [OpenStack Charms Deployment +Guide][cdg]. -When deploying with this configuration the ceph-radosgw applications will -deploy into a blocked state until the master/slave (cross-model) relation -is added. +## Tenant namespacing -Typically each ceph-radosgw deployment will be associated with a separate -ceph cluster at different physical locations - in this example the deployments -are in different models ('us-east' and 'us-west'). +By default, Ceph RADOS Gateway puts all tenant buckets into the same global +namespace, disallowing multiple tenants to have buckets with the same name. +Tenant namespacing can be enabled in this charm by deploying with configuration +like: -One ceph-radosgw application acts as the initial master for the deployment - -setup the master relation endpoint as the provider of the offer for the -cross-model relation: + ceph-radosgw: + charm: cs:ceph-radosgw + num_units: 1 + options: + namespace-tenants: True - juju offer -m us-east rgw-us-east:master +Enabling tenant namespacing will place all tenant buckets into their own +namespace under their tenant id, as well as adding the tenant's ID parameter to +the Keystone endpoint registration to allow seamless integration with OpenStack. +Tenant namespacing cannot be toggled on in an existing installation as it will +remove tenant access to existing buckets. Toggling this option on an already +deployed RADOS Gateway will have no effect. 
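For a non-bundle deployment, the same `namespace-tenants` option can be given
on the command line. A minimal sketch, assuming a fresh application that has
not yet been related to ceph-mon:

    juju deploy --config namespace-tenants=True ceph-radosgw
    juju add-relation ceph-radosgw:mon ceph-mon:radosgw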
-The cross-model relation offer can then be consumed in the other model and -related to the slave ceph-radosgw application: +## Access - juju consume -m us-west admin/us-east.rgw-us-east - juju add-relation -m us-west rgw-us-west:slave rgw-us-east:master +For security reasons the charm is not designed to administer the Ceph cluster. +A user (e.g. 'ubuntu') for the Ceph Object Gateway service will need to be +created manually: -Once the relation has been added the realm, zonegroup and zone configuration -will be created in the master deployment and then synced to the slave -deployment. + juju ssh ceph-mon/0 'sudo radosgw-admin user create \ + --uid="ubuntu" --display-name="Charmed Ceph"' -The current sync status can be validated from either model: +## Keystone integration (Swift) - juju ssh -m us-east ceph-mon/0 - sudo radosgw-admin sync status - realm 142eb39c-67c4-42b3-9116-1f4ffca23964 (replicated) - zonegroup 7b69f059-425b-44f5-8a21-ade63c2034bd (us) - zone 4ee3bc39-b526-4ac9-a233-64ebeacc4574 (us-east) - metadata sync no sync (zone is master) - data sync source: db876cf0-62a8-4b95-88f4-d0f543136a07 (us-west) - syncing - full sync: 0/128 shards - incremental sync: 128/128 shards - data is caught up with source +Ceph RGW supports Keystone authentication of Swift requests. This is enabled +by adding a relation to an existing keystone application: -Once the deployment is complete, the default zone and zonegroup can -optionally be tidied using the 'tidydefaults' action: + juju add-relation ceph-radosgw:identity-service keystone:identity-service - juju run-action -m us-west --unit rgw-us-west/0 tidydefaults +## High availability -This operation is not reversible. +When more than one unit is deployed with the [hacluster][hacluster-charm] +application the charm will bring up an HA active/active cluster. -### Failover/Recovery +There are two mutually exclusive high availability options: using virtual IP(s) +or DNS. In both cases the hacluster subordinate charm is used to provide the +Corosync and Pacemaker backend HA functionality. -In the event that the site hosting the zone which is the master for metadata -(in this example us-east) has an outage, the master metadata zone must be -failed over to the slave site; this operation is performed using the 'promote' -action: +See [OpenStack high availability][cdg-ha-apps] in the [OpenStack Charms +Deployment Guide][cdg] for details. - juju run-action -m us-west --wait rgw-us-west/0 promote +## Network spaces -Once this action has completed, the slave site will be the master for metadata -updates and the deployment will accept new uploads of data. +This charm supports the use of Juju [network spaces][juju-docs-spaces] (Juju +`v.2.0`). This feature optionally allows specific types of the application's +network traffic to be bound to subnets that the underlying hardware is +connected to. -Once the failed site has been recovered it will resync and resume as a slave -to the promoted master site (us-west in this example). +> **Note**: Spaces must be configured in the backing cloud prior to deployment. -The master metadata zone can be failed back to its original location once resync -has completed using the 'promote' action: +API endpoints can be bound to distinct network spaces supporting the network +separation of public, internal and admin endpoints. 
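Any space named in a binding must already be known to Juju. As a quick check
(assuming Juju `v.2.0` or later), the spaces visible to the controller can be
listed with:

    juju spaces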
- juju run-action -m us-east --wait rgw-us-east/0 promote +For example, providing that spaces 'public-space', 'internal-space', and +'admin-space' exist, the deploy command above could look like this: -### Read/write vs Read-only + juju deploy ceph-radosgw \ + --bind "public=public-space internal=internal-space admin=admin-space" -By default all zones within a deployment will be read/write capable but only -the master zone can be used to create new containers. +Alternatively, configuration can be provided as part of a bundle: -Non-master zones can optionally be marked as read-only by using the 'readonly' -action: +```yaml + ceph-radosgw: + charm: cs:ceph-radosgw + num_units: 1 + bindings: + public: public-space + internal: internal-space + admin: admin-space +``` - juju run-action -m us-east --wait rgw-us-east/0 readonly +> **Note**: Existing ceph-radosgw units configured with the `os-admin-network`, + `os-internal-network`, `os-public-network`, `os-public-hostname`, + `os-internal-hostname`, or `os-admin-hostname` options will continue to + honour them. Furthermore, these options override any space bindings, if set. -a zone that is currently read-only can be switched to read/write mode by either -promoting it to be the current master or by using the 'readwrite' action: +## Actions - juju run-action -m us-east --wait rgw-us-east/0 readwrite +This section lists Juju [actions][juju-docs-actions] supported by the charm. +Actions allow specific operations to be performed on a per-unit basis. To +display action descriptions run `juju actions ceph-radosgw`. If the charm is +not deployed then see file `actions.yaml`. -### Tenant Namespacing +* `pause` +* `resume` +* `promote` +* `readonly` +* `readwrite` +* `tidydefaults` -By default, Ceph Rados Gateway puts all tenant buckets into the same global -namespace, disallowing multiple tenants to have buckets with the same name. -Tenant namespacing can be enabled in this charm by deploying with configuration -like: +# Bugs - ceph-radosgw: - charm: cs:ceph-radosgw - num_units: 1 - options: - namespace-tenants: True +Please report bugs on [Launchpad][lp-bugs-charm-ceph-radosgw]. -Enabling tenant namespacing will place all tenant buckets into their own -namespace under their tenant id, as well as adding the tenant's ID parameter to -the Keystone endpoint registration to allow seamless integration with OpenStack. -Tenant namespacing cannot be toggled on in an existing installation as it will -remove tenant access to existing buckets. Toggling this option on an already -deployed Rados Gateway will have no effect. +For general charm questions refer to the OpenStack [Charm Guide][cg]. 
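To illustrate the action syntax referred to in the 'Actions' section above, an
action is run against a specific unit. For example, assuming a unit named
`ceph-radosgw/0`, the gateway service can be paused and resumed with:

    juju run-action --wait ceph-radosgw/0 pause
    juju run-action --wait ceph-radosgw/0 resume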
+[juju-docs-actions]: https://jaas.ai/docs/actions +[ceph-upstream]: https://ceph.io [hacluster-charm]: https://jaas.ai/hacluster [cg]: https://docs.openstack.org/charm-guide [cdg]: https://docs.openstack.org/project-deploy-guide/charm-deployment-guide [cdg-ha-apps]: https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-ha.html#ha-applications +[cloud-archive-ceph]: https://wiki.ubuntu.com/OpenStack/CloudArchive#Ceph_and_the_UCA +[juju-docs-config-apps]: https://juju.is/docs/configuring-applications +[cdg-ceph-erasure-coding]: https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-erasure-coding.html +[lp-bugs-charm-ceph-radosgw]: https://bugs.launchpad.net/charm-ceph-radosgw/+filebug +[juju-docs-spaces]: https://jaas.ai/docs/spaces +[cdg-ceph-radosgw-multisite]: https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-rgw-multisite.html