Deploy a DSS instance to Google Cloud Platform with terraform
This guide will help you deploy a DSS instance to Google Cloud Platform with terraform and tanka.
Getting started
Prerequisites
Terraform
- Install terraform on your workstation.
- Verify the installation with `terraform version`.
Kubernetes tools
- Install kubectl from Prerequisites.
- Verify the kubectl installation with `kubectl version`.
Google Cloud Platform
- Install and initialize the Google Cloud CLI.
- Confirm successful initialization with `gcloud info`; check "Account".
- Ensure a GCP project is available (create one in the web UI if needed).
- Consider `$GOOGLE_PROJECT_NAME` to refer to this project.
- Check that the GCP DSS project is correctly selected: `gcloud config list project`. Set another one if needed using `gcloud config set project $GOOGLE_PROJECT_NAME`.
- Enable the following APIs using the Google Cloud CLI: `gcloud services enable compute.googleapis.com` and `gcloud services enable container.googleapis.com`. If you want to manage DNS entries with terraform, also run `gcloud services enable dns.googleapis.com`.
- Install the auth plugin to connect to kubernetes: `gcloud components install gke-gcloud-auth-plugin`.
- Run `gcloud auth application-default login` to generate credentials to call Google Cloud Platform APIs. If the authorization flow ends in a 404 in the browser, check whether a local dummy-oauth instance is running (using port 8085) and stop it if so.
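With the prerequisites above in place, a quick local sanity check can confirm that every CLI used later in this guide is on your PATH. This script is an illustrative sketch (the tool list is taken from the commands in this guide); it only inspects PATH and changes nothing:

```shell
# Check that each CLI used in this guide is installed; report any that are
# missing. Purely local: it only inspects PATH.
missing=""
for tool in terraform kubectl gcloud tk; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "missing: $tool" >&2
    missing="$missing $tool"
  fi
done
if [ -n "$missing" ]; then
  echo "install the missing tools before continuing:$missing" >&2
fi
```

`tk` (tanka) is only needed later, in the services deployment step, but checking it now avoids a surprise mid-deployment.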
Deployment of the Kubernetes cluster
Paths in the documentation
In this documentation, paths prefixed with / are relative to the root of the repository.
- Create a new folder in `/deploy/infrastructure/personal/` for the deployment, named for example `terraform-google-dss-dev`.
- From `/deploy/infrastructure/modules/terraform-google-dss`, copy `main.tf`, `output.tf`, `variables.gen.tf` and `terraform.dev.example.tfvars` to the personal infrastructure folder.
- In the personal infrastructure folder:
  - Rename `terraform.dev.example.tfvars` to `terraform.tfvars`.
  - Check that the directory contains the following files:
    - main.tf
    - output.tf
    - terraform.tfvars
    - variables.gen.tf
  - Set the variables in `terraform.tfvars` according to your environment. See TFVARS.gen.md for variable descriptions.
  - Initialize terraform: `terraform init`.
  - Run `terraform plan` to check that the configuration is valid; it displays the resources that will be provisioned.
  - Run `terraform apply` to deploy the cluster. (This operation may take up to 15 min.)
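The per-folder terraform steps above can be collected into one sequence. This is a sketch assuming it is run from your personal infrastructure folder (e.g. the `terraform-google-dss-dev` example); the guard makes it skip itself when terraform or the configuration is not present:

```shell
# Run from the personal infrastructure folder created above.
# Guard: only proceed when terraform is installed and a configuration exists.
if command -v terraform >/dev/null 2>&1 && [ -f main.tf ]; then
  terraform init     # download providers, initialize the backend
  terraform plan     # review the resources that will be provisioned
  terraform apply    # deploy the cluster; may take up to 15 min
else
  echo "run this from the personal infrastructure folder with terraform installed" >&2
fi
```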
- Configure the DNS resolution to the public IP addresses. DNS entries can be managed either with terraform or manually:
  - Managed with terraform: if your DNS zone is managed on the same account, terraform can be instructed to create and manage the entries with the rest of the infrastructure. For Google Cloud Platform, configure the zone in your google account and set the `google_dns_managed_zone_name` variable to the zone name. Zones can be listed by running `gcloud dns managed-zones list`. Entries will be created automatically by terraform.
  - Managed manually: retrieve the IP addresses and expected hostnames with `terraform output`, then create the related DNS A entries pointing to the static IPs indicated in the output: `crdb_addresses[*].expected_dns` and `gateway_address.expected_dns`.
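For the manual option, the A-record creation can be scripted with `gcloud dns record-sets create`. All values below are placeholders: take the real hostname and IP from your `terraform output` and the zone name from `gcloud dns managed-zones list`. The sketch only prints the command; run it yourself once the values are real:

```shell
# Placeholder values -- substitute the real ones from `terraform output`.
ZONE_NAME=my-managed-zone        # from: gcloud dns managed-zones list
GATEWAY_DNS=dss.example.com      # from: gateway_address.expected_dns
GATEWAY_IP=203.0.113.10          # from: gateway_address

# Build the record-creation command; repeat for each crdb_addresses[*] entry.
CMD="gcloud dns record-sets create ${GATEWAY_DNS}. --zone=${ZONE_NAME} --type=A --ttl=300 --rrdatas=${GATEWAY_IP}"
echo "$CMD"
```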
Deployment of the DSS services
Following the successful terraform run, you should find a new workspace directory for the new cluster. The workspace name corresponds to the cluster context, which can be retrieved by running `terraform output` in your infrastructure folder (e.g. `/deploy/infrastructure/personal/terraform-google-dss-dev`). The workspace contains scripts to operate the cluster and set up the services.
- Go to the new workspace `/build/workspace/${cluster_context}`.
- Run `./get-credentials.sh` to log in to kubernetes. You can now access the cluster with `kubectl`.
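After `./get-credentials.sh`, a quick connectivity check can confirm that kubectl is now pointed at the new cluster. This is a sketch; it degrades gracefully when kubectl is absent or the cluster is not reachable yet:

```shell
# Confirm that kubectl now talks to the new cluster.
if command -v kubectl >/dev/null 2>&1; then
  kubectl config current-context || echo "no kubectl context selected yet" >&2
  kubectl get nodes || echo "cluster not reachable yet" >&2
else
  echo "kubectl not found on PATH" >&2
fi
```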
- Prepare the datastore certificates:
  - Generate the certificates using `./dss-certs.sh init`.
  - If joining an existing cluster, check `dss-certs.sh`'s help to add other CAs to your pool and share your CA with other pool members.
  - Deploy the certificates using `./dss-certs.sh apply`.
  - Alternatively, generate the certificates using `./make-certs.sh` (follow the script instructions if you are not initializing the cluster) and deploy them using `./apply-certs.sh`.
- Go to the tanka workspace in `/deploy/services/tanka/workspace/${cluster_context}`.
- Run `tk apply .` to deploy the services to kubernetes. (This may take up to 30 min.)
- Wait for services to initialize:
  - On Google Cloud, the highest-latency operation is provisioning of the HTTPS certificate, which generally takes 10-45 minutes. To track this progress:
    - Go to the "Services & Ingress" left-side tab on the Kubernetes Engine page.
    - Click on the https-ingress item (filter by just the cluster of interest if you have multiple clusters in your project).
    - Under the "Ingress" section for Details, click on the link corresponding with "Load balancer".
    - Under Frontend for Details, the Certificate column for the HTTPS protocol has an icon next to it which changes to a green checkmark when provisioning is complete.
    - Click on the certificate link to see provisioning progress.
    - If everything indicates OK and you still receive a cipher mismatch error message when attempting to visit /healthy, wait an additional 5 minutes before attempting to troubleshoot further.
- Verify that basic services are functioning by navigating to https://your-gateway-domain.com/healthy.
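The final verification can also be automated with a small polling loop. `check_healthy` and the domain are hypothetical; `/healthy` is the endpoint path this guide uses:

```shell
# Poll the gateway's /healthy endpoint until it answers HTTP 200.
check_healthy() {
  # $1: gateway domain (e.g. your-gateway-domain.com)
  code=$(curl -s -o /dev/null -w '%{http_code}' "https://$1/healthy")
  [ "$code" = "200" ]
}

# Example usage (uncomment with your real domain). Certificate provisioning
# can take 10-45 minutes, so expect failures at first:
# until check_healthy your-gateway-domain.com; do sleep 30; done
```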
Clean up
To delete all resources, run `terraform destroy`. Note that this operation cannot be reverted and all data will be lost.
For Google Compute Engine, make sure to manually clean up the persistent storage: https://console.cloud.google.com/compute/disks