devtest_overcloud
Build images. There are two helper scripts which can be used to build images. The first method uses environment variables to create a specific image for each overcloud role. This method works best if you are using tripleo-image-elements for configuration (which requires per role image customization). See devtest_overcloud_images for documentation. This method is currently the default.
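For orientation, the per-role method boils down to one disk-image-create invocation per role. A minimal sketch for the control image only (the element name and the $OVERCLOUD_CONTROL_DIB_EXTRA_ARGS variable are assumptions here; devtest_overcloud_images documents the real flow):

$TRIPLEO_ROOT/diskimage-builder/bin/disk-image-create $NODE_DIST \
    -a $NODE_ARCH -o $TRIPLEO_ROOT/overcloud-control \
    overcloud-control $OVERCLOUD_CONTROL_DIB_EXTRA_ARGS 2>&1 | \
    tee $TRIPLEO_ROOT/dib-overcloud-control.log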
Another option is to make use of the build-images script which dynamically creates a set of images using a YAML (or JSON) config file (see the build-images script for details and the expected config file format). This method is typically preferred when using tripleo-puppet-elements (Puppet) for configuration which allows the contents and number of images used to deploy an overcloud to be more flexibly defined. Example:
build-images -d -c $DISK_IMAGES_CONFIG
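For illustration only, a config for a single control image might look like the sketch below; the field names are assumptions, so treat the build-images script itself as the authority on the schema:

cat > $DISK_IMAGES_CONFIG <<'EOF'
disk_images:
  -
    imagename: overcloud-control
    arch: amd64
    type: qcow2
    elements:
      - overcloud-control
EOF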
Load all images into Glance (based on the provided disk images config). This captures all the Glance IDs into a Heat env file which maps them to the appropriate parameter names. This gives us some flexibility in how many images to use for the overcloud deployment.
OVERCLOUD_IMAGE_IDS_ENV=${OVERCLOUD_IMAGE_IDS_ENV:-"${TRIPLEO_ROOT}/overcloud-images-env.yaml"}
load-images -d --remove -c $DISK_IMAGES_CONFIG -o $OVERCLOUD_IMAGE_IDS_ENV
For running an overcloud in VMs, leave the libvirt type set to qemu. For physical machines, set it to kvm:
OVERCLOUD_LIBVIRT_TYPE=${OVERCLOUD_LIBVIRT_TYPE:-"qemu"}
Set the public interface of the overcloud network node:
NeutronPublicInterface=${NeutronPublicInterface:-'nic1'}
Set the NTP server for the overcloud:
OVERCLOUD_NTP_SERVER=${OVERCLOUD_NTP_SERVER:-''}
If you want to permit VMs access to bare metal networks, you need to define flat networks and bridge mappings in Neutron. We default to creating one called datacentre, which we use to grant external network access to VMs:
OVERCLOUD_FLAT_NETWORKS=${OVERCLOUD_FLAT_NETWORKS:-'datacentre'}
OVERCLOUD_BRIDGE_MAPPINGS=${OVERCLOUD_BRIDGE_MAPPINGS:-'datacentre:br-ex'}
OVERCLOUD_HYPERVISOR_PHYSICAL_BRIDGE=${OVERCLOUD_HYPERVISOR_PHYSICAL_BRIDGE:-'br-ex'}
OVERCLOUD_HYPERVISOR_PUBLIC_INTERFACE=${OVERCLOUD_HYPERVISOR_PUBLIC_INTERFACE:-'nic1'}
OVERCLOUD_VIRTUAL_INTERFACE=${OVERCLOUD_VIRTUAL_INTERFACE:-'br-ex'}
If you are using SSL, your compute nodes will need static mappings to your endpoint in /etc/hosts (because we don't do dynamic undercloud DNS yet). Set this to the DNS name you're using for your SSL certificate; the Heat template looks up the controller address within the cloud:

OVERCLOUD_NAME=${OVERCLOUD_NAME:-''}
Detect if we are deploying with a VLAN for API endpoints / floating IPs. This is done by looking for a 'public' network in Neutron; if one is found, we pull out the VLAN ID and pass it into Heat, as well as using a VLAN-enabled Heat template.
if (neutron net-list | grep -q public); then
    VLAN_ID=$(neutron net-show public | awk '/provider:segmentation_id/ { print $4 }')
    NeutronPublicInterfaceTag="$VLAN_ID"
    # This should be in the heat template, but see
    # https://bugs.launchpad.net/heat/+bug/1336656
    # Note that this will break if there is more than one subnet, as if
    # more reason to fix the bug is needed :).
    PUBLIC_SUBNET_ID=$(neutron net-show public | awk '/subnets/ { print $4 }')
    VLAN_GW=$(neutron subnet-show $PUBLIC_SUBNET_ID | awk '/gateway_ip/ { print $4}')
    BM_VLAN_CIDR=$(neutron subnet-show $PUBLIC_SUBNET_ID | awk '/cidr/ { print $4}')
    NeutronPublicInterfaceDefaultRoute="${VLAN_GW}"
    export CONTROLEXTRA=overcloud-vlan-port.yaml
else
    VLAN_ID=
    NeutronPublicInterfaceTag=
fi
TripleO explicitly models key settings for OpenStack, as well as settings that require cluster awareness to configure. To configure arbitrary additional settings, provide a JSON string with them in the structure required by the template ExtraConfig parameter.
OVERCLOUD_EXTRA_CONFIG=${OVERCLOUD_EXTRA_CONFIG:-''}
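As a purely hypothetical illustration of the shape of this value (the keys must match whatever your image elements actually consume; this is not a tested setting):

# Hypothetical example only -- check what your elements expect.
OVERCLOUD_EXTRA_CONFIG='{"nova": {"force_config_drive": "always"}}'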
Choose whether to deploy or update. Use stack-update to update:
HEAT_OP=stack-create
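If you would rather pick the operation automatically, a small sketch (assuming the stack is named overcloud, as it is in the deploy step below):

# Switch to stack-update when the overcloud stack already exists.
if heat stack-show overcloud >/dev/null 2>&1; then
    HEAT_OP=stack-update
fi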
Wait for the BM cloud to register BM nodes with the scheduler:
expected_nodes=$(( $OVERCLOUD_COMPUTESCALE + $OVERCLOUD_CONTROLSCALE + $OVERCLOUD_BLOCKSTORAGESCALE ))
wait_for -w $((60 * $expected_nodes)) --delay 10 -- wait_for_hypervisor_stats $expected_nodes
Set the password for the overcloud SNMPd; the same password needs to be set in the undercloud Ceilometer:
UNDERCLOUD_CEILOMETER_SNMPD_PASSWORD=$(os-apply-config -m $TE_DATAFILE --key undercloud.ceilometer_snmpd_password --type raw --key-default '')
# Generate a password only if the test environment did not provide one.
UNDERCLOUD_CEILOMETER_SNMPD_PASSWORD=${UNDERCLOUD_CEILOMETER_SNMPD_PASSWORD:-$(os-make-password)}
Create unique credentials:
setup-overcloud-passwords $TRIPLEO_ROOT/tripleo-overcloud-passwords
source $TRIPLEO_ROOT/tripleo-overcloud-passwords
We need an environment file to store the parameters we're going to give Heat:
HEAT_ENV=${HEAT_ENV:-"${TRIPLEO_ROOT}/overcloud-env.json"}
Read the heat env in for updating:
if [ -e "${HEAT_ENV}" ]; then
    ENV_JSON=$(cat "${HEAT_ENV}")
else
    ENV_JSON='{"parameters":{}}'
fi
Set the parameters we need to deploy a KVM cloud:
NeutronControlPlaneID=$(neutron net-show ctlplane | grep ' id ' | awk '{print $4}')
ENV_JSON=$(jq '.parameters = {
    "MysqlInnodbBufferPoolSize": 100
  } + .parameters + {
    "AdminPassword": "'"${OVERCLOUD_ADMIN_PASSWORD}"'",
    "AdminToken": "'"${OVERCLOUD_ADMIN_TOKEN}"'",
    "CeilometerPassword": "'"${OVERCLOUD_CEILOMETER_PASSWORD}"'",
    "CeilometerMeteringSecret": "'"${OVERCLOUD_CEILOMETER_SECRET}"'",
    "CinderPassword": "'"${OVERCLOUD_CINDER_PASSWORD}"'",
    "CloudName": "'"${OVERCLOUD_NAME}"'",
    "GlancePassword": "'"${OVERCLOUD_GLANCE_PASSWORD}"'",
    "HeatPassword": "'"${OVERCLOUD_HEAT_PASSWORD}"'",
    "HeatStackDomainAdminPassword": "'"${OVERCLOUD_HEAT_STACK_DOMAIN_PASSWORD}"'",
    "HypervisorNeutronPhysicalBridge": "'"${OVERCLOUD_HYPERVISOR_PHYSICAL_BRIDGE}"'",
    "HypervisorNeutronPublicInterface": "'"${OVERCLOUD_HYPERVISOR_PUBLIC_INTERFACE}"'",
    "NeutronBridgeMappings": "'"${OVERCLOUD_BRIDGE_MAPPINGS}"'",
    "NeutronControlPlaneID": "'${NeutronControlPlaneID}'",
    "NeutronFlatNetworks": "'"${OVERCLOUD_FLAT_NETWORKS}"'",
    "NeutronPassword": "'"${OVERCLOUD_NEUTRON_PASSWORD}"'",
    "NeutronPublicInterface": "'"${NeutronPublicInterface}"'",
    "NeutronPublicInterfaceTag": "'"${NeutronPublicInterfaceTag}"'",
    "NovaComputeLibvirtType": "'"${OVERCLOUD_LIBVIRT_TYPE}"'",
    "NovaPassword": "'"${OVERCLOUD_NOVA_PASSWORD}"'",
    "NtpServer": "'"${OVERCLOUD_NTP_SERVER}"'",
    "SwiftHashSuffix": "'"${OVERCLOUD_SWIFT_HASH}"'",
    "SwiftPassword": "'"${OVERCLOUD_SWIFT_PASSWORD}"'",
    "SSLCertificate": "'"${OVERCLOUD_SSL_CERT}"'",
    "SSLKey": "'"${OVERCLOUD_SSL_KEY}"'",
    "OvercloudComputeFlavor": "'"${COMPUTE_FLAVOR}"'",
    "OvercloudControlFlavor": "'"${CONTROL_FLAVOR}"'",
    "OvercloudBlockStorageFlavor": "'"${BLOCKSTORAGE_FLAVOR}"'",
    "OvercloudSwiftStorageFlavor": "'"${SWIFTSTORAGE_FLAVOR}"'"
  }' <<< $ENV_JSON)
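Before moving on, it can be useful to eyeball the merged parameters. This is a convenience check, not part of the flow:

jq '.parameters | keys' <<< $ENV_JSON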
We enable the automatic relocation of L3 routers in Neutron by default. Alternatively, you can use the L3 agents' high availability mechanism (which only works with three or more controller nodes) or the distributed virtual routing mechanism (which deploys routers on compute nodes). Set the environment variable OVERCLOUD_L3 to relocate, ha or dvr:

OVERCLOUD_L3=${OVERCLOUD_L3:-'relocate'}
If enabling distributed virtual routing on the overcloud, some values need to be set so that Neutron DVR will work.
if [ ${OVERCLOUD_DISTRIBUTED_ROUTERS:-'False'} == "True" -o $OVERCLOUD_L3 == "dvr" ]; then
    ENV_JSON=$(jq '.parameters = {} + .parameters + {
        "NeutronDVR": "True",
        "NeutronTunnelTypes": "vxlan",
        "NeutronNetworkType": "vxlan",
        "NeutronMechanismDrivers": "openvswitch,l2population",
        "NeutronAllowL3AgentFailover": "False"
      }' <<< $ENV_JSON)
fi

if [ ${OVERCLOUD_L3_HA:-'False'} == "True" -o $OVERCLOUD_L3 == "ha" ]; then
    ENV_JSON=$(jq '.parameters = {} + .parameters + {
        "NeutronL3HA": "True",
        "NeutronAllowL3AgentFailover": "False"
      }' <<< $ENV_JSON)
fi
Save the finished environment file:

jq . > "${HEAT_ENV}" <<< $ENV_JSON
chmod 0600 "${HEAT_ENV}"

Add Keystone certs/key into the environment file:
generate-keystone-pki --heatenv $HEAT_ENV
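To confirm the certs landed, you can list the Keystone-related parameters the tool added (exact parameter names may vary between versions, hence the prefix match):

jq '.parameters | keys | map(select(startswith("Keystone")))' "${HEAT_ENV}"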
Deploy an overcloud:
heat $HEAT_OP -e "$HEAT_ENV" \
    -f $TRIPLEO_ROOT/tripleo-heat-templates/overcloud.yaml \
    -P "ExtraConfig=${OVERCLOUD_EXTRA_CONFIG}" \
    overcloud
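While the stack builds, you can follow progress from another terminal with plain Heat CLI calls, for example:

# Show only resources that have not completed yet.
watch -n 30 "heat resource-list overcloud | grep -v COMPLETE"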
You can watch the console via virsh/virt-manager to observe the PXE boot/deploy process. After the deploy is complete, the machines will reboot and be available.

While we wait for the stack to come up, build an end user disk image and register it with glance:
USER_IMG_NAME="user.qcow2"
$TRIPLEO_ROOT/diskimage-builder/bin/disk-image-create $NODE_DIST vm $TEST_IMAGE_DIB_EXTRA_ARGS \
    -a $NODE_ARCH -o $TRIPLEO_ROOT/user 2>&1 | tee $TRIPLEO_ROOT/dib-user.log
Get the overcloud IP from the heat stack
wait_for_stack_ready -w $(($OVERCLOUD_STACK_TIMEOUT * 60)) 10 $STACKNAME
OVERCLOUD_ENDPOINT=$(heat output-show $STACKNAME KeystoneURL | sed 's/^"\(.*\)"$/\1/')
OVERCLOUD_IP=$(echo $OVERCLOUD_ENDPOINT | awk -F '[/:]' '{print $4}')
We don’t (yet) preserve ssh keys on rebuilds.
ssh-keygen -R $OVERCLOUD_IP
Export the overcloud endpoint and credentials to your test environment.
NEW_JSON=$(jq '.overcloud.password="'${OVERCLOUD_ADMIN_PASSWORD}'" | .overcloud.endpoint="'${OVERCLOUD_ENDPOINT}'" | .overcloud.endpointhost="'${OVERCLOUD_IP}'"' $TE_DATAFILE)
echo $NEW_JSON > $TE_DATAFILE
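Optionally, confirm the overcloud section landed in the test environment file:

jq .overcloud $TE_DATAFILE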
Source the overcloud configuration:
source $TRIPLEO_ROOT/tripleo-incubator/overcloudrc
Exclude the overcloud from proxies:
export no_proxy=$no_proxy,$OVERCLOUD_IP
If we updated the cloud, we don't need to do admin setup again; skip down to Wait for Nova Compute.
Perform admin setup of your overcloud.
init-keystone -o $OVERCLOUD_IP -t $OVERCLOUD_ADMIN_TOKEN \
    -e admin@example.com -p $OVERCLOUD_ADMIN_PASSWORD \
    ${SSLBASE:+-s $PUBLIC_API_URL} --no-pki-setup
# Creating these roles to be used by tenants using swift
openstack role create swiftoperator
openstack role create ResellerAdmin
setup-endpoints $OVERCLOUD_IP \
    --cinder-password $OVERCLOUD_CINDER_PASSWORD \
    --glance-password $OVERCLOUD_GLANCE_PASSWORD \
    --heat-password $OVERCLOUD_HEAT_PASSWORD \
    --neutron-password $OVERCLOUD_NEUTRON_PASSWORD \
    --nova-password $OVERCLOUD_NOVA_PASSWORD \
    --swift-password $OVERCLOUD_SWIFT_PASSWORD \
    --ceilometer-password $OVERCLOUD_CEILOMETER_PASSWORD \
    ${SSLBASE:+--ssl $PUBLIC_API_URL}
openstack role create heat_stack_user
user-config
BM_NETWORK_GATEWAY=$(OS_CONFIG_FILES=$TE_DATAFILE os-apply-config --key baremetal-network.gateway-ip --type raw --key-default '192.0.2.1')
OVERCLOUD_NAMESERVER=$(os-apply-config -m $TE_DATAFILE --key overcloud.nameserver --type netaddress --key-default "$OVERCLOUD_FIXED_RANGE_NAMESERVER")
NETWORK_JSON=$(mktemp)
jq "." <<EOF > $NETWORK_JSON
{
    "float": {
        "cidr": "$OVERCLOUD_FIXED_RANGE_CIDR",
        "name": "default-net",
        "nameserver": "$OVERCLOUD_NAMESERVER",
        "segmentation_id": "$NeutronPublicInterfaceTag",
        "physical_network": "datacentre",
        "gateway": "$OVERCLOUD_FIXED_RANGE_GATEWAY"
    },
    "external": {
        "name": "ext-net",
        "provider:network_type": "flat",
        "provider:physical_network": "datacentre",
        "cidr": "$FLOATING_CIDR",
        "allocation_start": "$FLOATING_START",
        "allocation_end": "$FLOATING_END",
        "gateway": "$BM_NETWORK_GATEWAY"
    }
}
EOF
setup-neutron -n $NETWORK_JSON
rm $NETWORK_JSON
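As an optional sanity check (not part of the original flow), confirm setup-neutron created both networks:

neutron net-list | grep -E 'default-net|ext-net'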
If you want a demo user in your overcloud (probably a good idea).
os-adduser -p $OVERCLOUD_DEMO_PASSWORD demo demo@example.com
Work around https://bugs.launchpad.net/diskimage-builder/+bug/1211165.
# flavor-create arguments: <name> <id> <ram (MB)> <disk (GB)> <vcpus>
nova flavor-delete m1.tiny
nova flavor-create m1.tiny 1 512 2 1
Register the end user image with glance.
glance image-create --name user --visibility public --disk-format qcow2 \
    --container-format bare --file $TRIPLEO_ROOT/$USER_IMG_NAME
Wait for Nova Compute
wait_for -w 300 --delay 10 -- nova service-list --binary nova-compute 2\>/dev/null \| grep 'enabled.*\ up\ '
Wait for L2 Agent On Nova Compute
wait_for 30 10 neutron agent-list -f csv -c alive -c agent_type -c host \| grep "\":-).*Open vSwitch agent.*-novacompute\""
Log in as a user.
source $TRIPLEO_ROOT/tripleo-incubator/overcloudrc-user
If you just created the cloud, you need to add your keypair to your user.
user-config
So that you can deploy a VM.
IMAGE_ID=$(nova image-show user | awk '/ id / {print $4}')
nova boot --key-name default --flavor m1.tiny \
    --block-device source=image,id=$IMAGE_ID,dest=volume,size=3,shutdown=preserve,bootindex=0 \
    demo
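Optionally, wait for the instance to reach ACTIVE before attaching a floating IP; a sketch reusing the same wait_for helper the rest of this walkthrough uses:

wait_for -w 300 --delay 10 -- nova list \| grep 'demo.*ACTIVE'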
Add an external IP for it.
wait_for -w 50 --delay 5 -- neutron port-list -f csv -c id --quote none \| grep id
PORT=$(neutron port-list -f csv -c id --quote none | tail -n1)
FLOATINGIP=$(neutron floatingip-create ext-net \
    --port-id "${PORT//[[:space:]]/}" \
    | awk '$2=="floating_ip_address" {print $4}')
And allow network access to it.
# For ICMP rules, port-range-min holds the ICMP type (8 = echo request).
neutron security-group-rule-create default --protocol icmp \
    --direction ingress --port-range-min 8
neutron security-group-rule-create default --protocol tcp \
    --direction ingress --port-range-min 22 --port-range-max 22
After which, you should be able to ping it
wait_for -w 300 --delay 10 -- ping -c 1 $FLOATINGIP