This document outlines what a production environment hosting Designate could look like. It follows an in-cloud model, where Designate is hosted on instances in an OpenStack cloud. It is intended to complement the Architecture document; please start there if you are unfamiliar with the Designate components.
Designate is designed to integrate with Keystone, or a Keystone-like service, for authentication and authorization. In a production environment it should rely on your Keystone service and be registered in your service catalog.
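As a minimal sketch (the endpoint URL and credentials below are placeholders, and exact option names can vary between releases), the API service is typically pointed at Keystone through the standard keystonemiddleware options in designate.conf:

    [service:api]
    # Validate incoming tokens against Keystone rather than accepting requests unauthenticated
    auth_strategy = keystone

    [keystone_authtoken]
    # Placeholder Keystone endpoint and service credentials - substitute your own
    auth_url = http://keystone.example.com:5000/v3
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = service
    username = designate
    password = SECRET

Catalog registration is then a matter of creating a service of type dns and pointing its endpoints at the load-balanced API address (the API listens on port 9001 by default).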
This architecture expects your environment to have an external load balancer as the first touch point for customer traffic. It distributes requests across the available API nodes, which should span your AZs and regions where possible.
A Designate deployment breaks down into several key roles: API, Sink, Central, MiniDNS, and Pool Manager, supported by a message queue, a database, an optional cache, and one or more DNS backends.
Typically, API nodes would be made available in multiple AZs, providing redundancy should an individual AZ have issues.
In a Multi-AZ deployment, the API nodes should be configured to talk to all members of the MQ cluster, so that if an MQ node fails, requests continue to flow to the MQ.
In a Multi-AZ deployment, the Sink node should be configured to talk to all members of the MQ cluster, so that if an MQ node fails, requests continue to flow to the MQ.
In a Multi-AZ deployment, the Central nodes should be configured to talk to all members of the MQ cluster, so that if an MQ node fails, requests continue to be processed.
In a Multi-AZ deployment, the MiniDNS nodes should be configured to talk to all members of the MQ cluster, so that if an MQ node fails, requests continue to be processed. They should also be configured to talk to multiple database servers, to allow reliable access to the data store.
In a Multi-AZ deployment, the Pool Manager nodes should be configured to talk to all members of the MQ cluster, so that if an MQ node fails, requests continue to be processed.
An AMQP implementation is required for all communication between the API and Central nodes; in practice this means a RabbitMQ installation, preferably a cluster that spans the AZs in a given region.
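As an illustrative sketch, assuming hypothetical host names and credentials (older releases express the same thing through the [oslo_messaging_rabbit] options rather than a transport URL), each service's designate.conf can list every member of the RabbitMQ cluster so that oslo.messaging fails over when a node is lost:

    [DEFAULT]
    # List every member of the RabbitMQ cluster; connections fail over to the
    # remaining hosts if the current broker becomes unavailable.
    transport_url = rabbit://designate:SECRET@rabbit-az1.example.com:5672,designate:SECRET@rabbit-az2.example.com:5672,designate:SECRET@rabbit-az3.example.com:5672/designate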
Designate needs a SQLAlchemy-supported database/storage engine for the persistent storage of data; the recommended driver is MySQL.
In a Multi-AZ environment, a MySQL Galera Cluster built using Percona's MySQL packages has performed well.
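A minimal sketch of the storage configuration, assuming the Galera cluster is reached through a VIP or load balancer in front of the MySQL nodes (the host name and credentials are placeholders):

    [storage:sqlalchemy]
    # SQLAlchemy connection URL for the MySQL/Galera cluster, addressed via a
    # single VIP rather than an individual database host.
    connection = mysql+pymysql://designate:SECRET@galera-vip.example.com/designate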
Designate can optionally use in-memory caching, usually through a Memcached instance, to speed up Pool Manager operations.
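A rough sketch of what that can look like in the pool-manager-era configuration (option names vary between releases, and the host names are placeholders):

    [service:pool_manager]
    # Cache pool state in memcached instead of the default no-op cache
    cache_driver = memcache

    [pool_manager_cache:memcache]
    # One entry per memcached instance, ideally spread across AZs
    memcached_servers = cache-az1.example.com:11211,cache-az2.example.com:11211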
Designate supports multiple backend implementations, including PowerDNS, BIND, and MySQL BIND. You are also free to implement your own backend to fit your needs, as well as extensions that provide extra functionality to complement existing backends.
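Purely as an illustration of the shape of a backend definition (the layout below follows the pool-manager-era configuration; the UUIDs, addresses, and rndc settings are placeholders, and newer releases define pools in a pools.yaml file instead), a BIND9 target might look roughly like this:

    [pool:794ccc2c-d751-44fe-b57f-8894c9f5c842]
    # The default pool, referencing its targets by UUID
    targets = f26e0b32-736f-4f0a-831b-039a415c481e

    [pool_target:f26e0b32-736f-4f0a-831b-039a415c481e]
    # Placeholder addresses: masters points at MiniDNS (default port 5354), which the
    # BIND9 server transfers zones from; rndc_host is the BIND9 server being managed.
    type = bind9
    masters = 192.0.2.10:5354
    options = rndc_host: 192.0.2.20, rndc_key_file: /etc/bind/rndc.key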
There are various ways to provide a highly available authoritative DNS service; here are some suggestions: