Note
Care should be taken when deploying Trove in production environments. Be sure to fully understand the security implications of the deployed architecture.
Trove provides DBaaS to an OpenStack deployment. It deploys guest VMs that provide the desired DB for use by the end consumer. The trove guest VMs need connectivity back to the trove services via RPC (rabbitmq) and to the OpenStack services. The guest VMs can reach those services either via internal networking (in the case of rabbitmq) or via public interfaces (in the case of the OpenStack services). For the example configuration, we designate a provider network as the network for trove to provision on each guest VM. The guest can then connect to rabbitmq via this network and to the OpenStack services externally. Optionally, the guest VMs could use the internal network to access the OpenStack services, but that would require more containers to be bound to this network.
The deployment configuration outlined below may not be appropriate for production environments. Review it very carefully against your own security requirements.
Trove needs connectivity between the control plane and the DB guest VMs. For this purpose a provider network should be created which bridges the trove containers (if the control plane is installed in containers) or hosts with the guest VMs. In the general case, this can be a simple flat neutron network. An example entry in openstack_user_config.yml is shown below:
- network:
    container_bridge: "br-dbaas"
    container_type: "veth"
    container_interface: "eth14"
    host_bind_override: "eth14"
    ip_from_q: "dbaas"
    type: "flat"
    net_name: "dbaas-mgmt"
    group_binds:
      - neutron_linuxbridge_agent
      - rabbitmq
Make sure to modify the other entries in this file as well; in particular, the dbaas range referenced by ip_from_q above must be defined under cidr_networks, as shown below. The net_name will be the physical network that is specified when creating the neutron network. The default value of dbaas-mgmt is also used to look up the addresses of the rabbitmq container. If the default is not used, some variables in defaults/main.yml will need to be overridden.
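For example, the corresponding cidr_networks entry might look like the following; the 172.19.0.0/22 range matches the subnet used later in this guide and is otherwise illustrative:

cidr_networks:
  # Range used to assign addresses on br-dbaas; referenced by
  # ip_from_q: "dbaas" in the network entry above.
  dbaas: "172.19.0.0/22"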
By default this role will not create the neutron network automatically. However, the default values can be changed so that the role creates the neutron network itself. See the trove_service_net_* variables in defaults/main.yml. By customizing the trove_service_net_* variables and having this role create the neutron network, a full deployment of OpenStack and DBaaS can proceed without interruption or intervention.
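As a hedged sketch, overrides in user_variables.yml might look like the following. The exact variable names here are assumptions for illustration; verify them against the trove_service_net_* entries in defaults/main.yml for your release:

# Illustrative only -- confirm each variable name in defaults/main.yml.
trove_service_net_setup: True
trove_service_net_phys_net: "dbaas-mgmt"
trove_service_net_subnet_cidr: "172.19.0.0/22"
trove_service_net_allocation_pool_start: "172.19.1.100"
trove_service_net_allocation_pool_end: "172.19.1.200"
trove_service_net_dns_nameservers:
  - "8.8.4.4"
  - "8.8.8.8"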
The following is an example of how to set up a provider network in neutron manually, if so desired:
neutron net-create dbaas_service_net --shared \
    --provider:network_type flat \
    --provider:physical_network dbaas-mgmt

neutron subnet-create dbaas_service_net 172.19.0.0/22 --name dbaas_service_subnet \
    --ip-version=4 \
    --allocation-pool start=172.19.1.100,end=172.19.1.200 \
    --enable-dhcp \
    --dns-nameservers list=true 8.8.4.4 8.8.8.8
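The neutron CLI shown above has since been deprecated in favor of the unified openstack client. The following is an equivalent sketch using the same names and ranges, assuming a reasonably current python-openstackclient:

openstack network create dbaas_service_net --share \
    --provider-network-type flat \
    --provider-physical-network dbaas-mgmt

# DHCP is enabled by default for subnets created with the openstack CLI.
openstack subnet create dbaas_service_subnet --network dbaas_service_net \
    --subnet-range 172.19.0.0/22 --ip-version 4 \
    --allocation-pool start=172.19.1.100,end=172.19.1.200 \
    --dns-nameserver 8.8.4.4 --dns-nameserver 8.8.8.8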
Special attention needs to be paid to the --allocation-pool so that it does not include IPs which overlap with those assigned to hosts or containers (see the used_ips variable in openstack_user_config.yml).
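For example, the allocation pool above could be reserved in openstack_user_config.yml so that it is never handed out to hosts or containers; this entry is illustrative and must match the pool you actually configure:

used_ips:
  # Reserve the neutron allocation pool from host/container assignment.
  - "172.19.1.100,172.19.1.200"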
Note
This role needs the neutron network to be created before it can run properly, since the trove guest agent configuration file contains that network's information.
When building disk images for guest VM deployments there are many items to consider. Listed below are a few:
- Images can be built using the diskimage-builder tooling. The trove virtual environment can be tarred up from the trove containers and deployed to the images using custom diskimage-builder elements.
- See the trove/integration/scripts/files/elements directory contents in the OpenStack Trove project for diskimage-builder elements to build trove disk images (a rough build sketch follows this list).
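As a rough sketch of the image build workflow, assuming diskimage-builder is installed and noting that the element names vary between Trove releases (ubuntu-guest below is an illustrative placeholder; check the elements directory for the ones your release actually ships):

# Clone the Trove repository to obtain its diskimage-builder elements.
git clone https://opendev.org/openstack/trove /opt/trove
export ELEMENTS_PATH=/opt/trove/integration/scripts/files/elements

# "ubuntu-guest" is an assumed element name for illustration; verify it
# against the contents of the elements directory for your Trove release.
disk-image-create -a amd64 -o trove-guest-image \
    ubuntu vm ubuntu-guest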