Unlocking the Basics: A Beginner's Project of On-Premise Migration to MultiCloud with AWS, Google Cloud, and DevOps Tools – Docker, K8s, Terraform

This is a step-by-step guide for beginners on migrating an on-premise application to a multicloud environment. Along the way we will use some of the most widely adopted technologies in the market: AWS, Google Cloud, Docker, Kubernetes, and Terraform. Access the GitHub repo here.

  • Part 1: Enable a MultiCloud architecture deployment through Terraform, with resources running in AWS and Google Cloud Platform

  • Part 2: Convert a database and an application to run on the MultiCloud architecture (AWS and Google Cloud), including Docker and Kubernetes

  • Part 3: Migrate the application files and the database data

  • Part 4: Delete all the resources created, destroying the (testing) environment permanently

Extended Tech Stack

Infrastructure as Code (IaC):

  • Terraform: For provisioning and managing infrastructure on AWS and Google Cloud.

Cloud Platforms:

  • AWS: Cloud services for computing power, storage, and databases.

  • Google Cloud Platform (GCP): Cloud services for building, deploying, and scaling applications.

Containerization and Orchestration:

  • Docker: Containerization for packaging applications and their dependencies.

  • Kubernetes: Container orchestration for automating deployment, scaling, and management of containerized applications.

Database:

  • Cloud SQL (Google Cloud): Managed relational database service for MySQL.

  • Amazon S3 (AWS): Object storage service for storing and retrieving data.

Continuous Integration/Continuous Deployment (CI/CD):

  • Cloud Build (Google Cloud): CI/CD platform for automating build, test, and deployment processes.

Version Control:

  • GitHub: Hosting platform for version control and collaboration.

Security:

  • IAM (Identity and Access Management): Access management for AWS and GCP resources.

Monitoring and Logging:

  • Google Cloud Console and AWS Console: Monitoring and managing cloud resources.

  • CloudShell (Google Cloud): Web-based shell for managing resources and running scripts.

Networking:

  • VPC Peering: Networking feature for connecting VPCs in Google Cloud.

Additional Tools:

  • AWS CLI and gcloud CLI: Command-line interfaces for AWS and Google Cloud, respectively.

  • Cloud Editor (Google Cloud): Web-based editor for editing files directly in Google Cloud.

  • Cloud Storage (Google Cloud): Object storage service for storing files and objects.

Documentation and Collaboration:

  • GitHub README: Documentation and project information.

Part 1: Enable a MultiCloud architecture deployment through Terraform, with resources running in AWS and Google Cloud Platform

Create a new user using the IAM service

  • Access the AWS console (https://aws.amazon.com) and log in with your newly created account. In the search bar, type IAM. In the Services section, click on IAM.

  • Click on Users and then Add users, enter the name terraform-en-1, and click Next to create a programmatic user.

  • Set Permissions: AmazonS3FullAccess

  • Review all the details

  • Click on Create user

Create the Access Key for the terraform-en-1 user

  • Access the terraform-en-1 user

  • At Security credentials tab

  • Navigate to the Access keys section

  • Click on Create access key

  • Select Command Line Interface (CLI) and check the "I understand the above recommendation and want to proceed to create an access key" box.

  • Click on Next and click on Create access key

  • Make sure to DOWNLOAD the .csv file

  • Once the download is complete, rename the .csv file to key.csv
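If you prefer the command line, the same user, policy attachment, and access key can be created with the AWS CLI. This is a sketch assuming an already-configured admin profile, not part of the original console walkthrough:

```shell
# Create the programmatic user
aws iam create-user --user-name terraform-en-1

# Attach the AmazonS3FullAccess managed policy
aws iam attach-user-policy \
  --user-name terraform-en-1 \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

# Create the access key; copy the AccessKeyId and SecretAccessKey from the JSON output
aws iam create-access-key --user-name terraform-en-1
```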

Prepare the environment to run Terraform

  • In Google Cloud Platform, create a new project and name it multicloud-project

  • Access the Google Cloud Console (console.cloud.google.com) and log in with your newly created account

  • Open the Cloud Shell

  • Download the part1.zip file in the Google Cloud shell using the wget command:

wget https://raw.githubusercontent.com/agcdtmr/onpremise-migration-to-multicloud-and-devops/main/part1.zip
  • Upload the key.csv file to the Cloud Shell using the browser

  • Verify if the part1.zip and key.csv files are in the folder in the Cloud Shell using the command below

ls
  • Execute the file preparation commands:
unzip part1.zip
mv key.csv part1/en
cd part1/en
chmod +x *.sh
  • Execute the commands below to prepare the AWS and GCP environment
mkdir -p ~/.aws/
touch ~/.aws/credentials_multiclouddeploy
./aws_set_credentials.sh key.csv
GOOGLE_CLOUD_PROJECT_ID=$(gcloud config get-value project)
gcloud config set project $GOOGLE_CLOUD_PROJECT_ID
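The repo's aws_set_credentials.sh script is not reproduced in this guide; as a rough sketch of what such a script typically does, the fragment below parses an IAM key .csv and writes the credentials profile the project expects. A dummy key.csv is generated here purely for illustration; in the real flow you use the key.csv downloaded from the console:

```shell
# Illustration only: fabricate a key.csv shaped like the IAM console download
printf 'Access key ID,Secret access key\nAKIAEXAMPLEKEY,exampleSecret123\n' > key.csv

# Extract the two fields from the second line (tr strips any Windows line endings)
ACCESS_KEY_ID=$(awk -F',' 'NR==2 {print $1}' key.csv | tr -d '\r')
SECRET_ACCESS_KEY=$(awk -F',' 'NR==2 {print $2}' key.csv | tr -d '\r')

# Write the credentials file referenced by the project scripts
mkdir -p ~/.aws
cat > ~/.aws/credentials_multiclouddeploy <<EOF
[default]
aws_access_key_id = $ACCESS_KEY_ID
aws_secret_access_key = $SECRET_ACCESS_KEY
EOF
```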
  • Click on Authorize

  • Execute the command below to set the project in the Google Cloud Shell

./gcp_set_project.sh
  • Execute the commands to enable the Kubernetes, Container Registry, and Cloud SQL APIs
gcloud services enable containerregistry.googleapis.com
gcloud services enable container.googleapis.com
gcloud services enable sqladmin.googleapis.com
gcloud services enable cloudresourcemanager.googleapis.com
gcloud services enable serviceusage.googleapis.com
gcloud services enable compute.googleapis.com
gcloud services enable servicenetworking.googleapis.com --project=$GOOGLE_CLOUD_PROJECT_ID
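Since gcloud services enable accepts multiple service names in one call, the seven commands above can also be collapsed into a single invocation:

```shell
gcloud services enable \
  containerregistry.googleapis.com \
  container.googleapis.com \
  sqladmin.googleapis.com \
  cloudresourcemanager.googleapis.com \
  serviceusage.googleapis.com \
  compute.googleapis.com \
  servicenetworking.googleapis.com \
  --project=$GOOGLE_CLOUD_PROJECT_ID
```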
  • Run Terraform to provision MultiCloud infrastructure in AWS and Google Cloud

  • Execute the following commands to provision infrastructure resources

cd ~/part1/en/terraform/
terraform init
terraform plan
terraform apply

Attention: the provisioning process can take 15 to 25 minutes to finish. Keep the Cloud Shell open during the process; if disconnected, click Reconnect when the session expires (by default, the session expires after 5 minutes of inactivity).

  • Go to Cloud SQL and Kubernetes Engine in the GCP Console, and to Amazon S3 in the AWS Console, to see all the resources you've successfully created!
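The same check can be done without leaving the terminal. A few optional spot checks (the aws command assumes AWS credentials are configured in the shell you run it from):

```shell
cd ~/part1/en/terraform/
terraform state list            # every resource Terraform now manages

gcloud sql instances list       # the Cloud SQL instance
gcloud container clusters list  # the GKE cluster
aws s3 ls                       # the S3 bucket
```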

Security Tips

  • For production environments, it's recommended to use only the Private Network for database access.

  • Never provide public network access (0.0.0.0/0) to production databases. ⚠️

Part 2: Convert a database and an application to run on the MultiCloud architecture (AWS and Google Cloud), including Docker and Kubernetes

Create another new user using the IAM service

  • Access the AWS Console and go to the IAM service to create the luxxy-covid-testing-system-en-app1 user

  • Under Access management, click on Users, then Add users. Enter the user name luxxy-covid-testing-system-en-app1 and click Next to create a programmatic user.

  • Set Permissions: AmazonS3FullAccess

  • Review all the details

  • Click on Create user

Create access key

  • Select Command Line Interface (CLI) and check the "I understand the above recommendation and want to proceed to create an access key" checkbox.

  • Click Next

  • Click on Create access key

  • Click on Download .csv file

  • After download, click Done.

  • Now, rename the downloaded .csv file to luxxy-covid-testing-system-en-app1.csv

Create Database User in Cloud SQL

  • In Google Cloud Platform (GCP), navigate to your Cloud SQL instance and create a new user app with the password welcome123456 on the Cloud SQL MySQL database
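If you would rather do this from Cloud Shell, gcloud can create the same database user; <INSTANCE_NAME> below is a placeholder for whatever instance name Terraform provisioned, not a value from the original guide:

```shell
gcloud sql users create app \
  --instance=<INSTANCE_NAME> \
  --password=welcome123456
```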

Connect to Google Cloud Shell

  • Access Google Cloud Shell and prepare for the application deployment.

Download Part2 Files to Google Cloud Shell

  • Use the following commands to download the necessary files for Part 2:
cd ~
wget https://raw.githubusercontent.com/agcdtmr/onpremise-migration-to-multicloud-and-devops/main/part2.zip
unzip part2.zip

Connect to MySQL DB on Cloud SQL

  • Connect to MySQL DB running on Cloud SQL (once it prompts for the password, provide welcome123456). Don’t forget to replace the placeholder with your Cloud SQL Public IP
mysql --host=<replace_with_public_ip_cloudsql> --port=3306 -u app -p
  • Once you're connected to the database instance, create the products table for testing purposes
use dbcovidtesting;
source ~/part2/en/db/create_table.sql
show tables;
exit;

Enable Cloud Build API

  • Enable Cloud Build API via Cloud Shell.
gcloud services enable cloudbuild.googleapis.com

Build and Push Docker Image to Google Container Registry

  • Build the Docker image and push it to Google Container Registry.
GOOGLE_CLOUD_PROJECT_ID=$(gcloud config get-value project)
cd ~/part2/en/app
gcloud builds submit --tag gcr.io/$GOOGLE_CLOUD_PROJECT_ID/luxxy-covid-testing-system-app-en
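To confirm the build actually landed in Container Registry, you can list the image's tags:

```shell
gcloud container images list-tags \
  gcr.io/$GOOGLE_CLOUD_PROJECT_ID/luxxy-covid-testing-system-app-en
```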

Update Kubernetes Deployment File

  • Open the Cloud Editor, go to the part2 directory, and edit the Kubernetes deployment file (luxxy-covid-testing-system.yaml). Update the variables below: your <PROJECT_ID> in the Google Container Registry path, the AWS bucket name, the AWS keys (open the luxxy-covid-testing-system-en-app1.csv file and use the Access key ID and Secret access key), and the Cloud SQL database Private IP.
cd ~/part2/en/kubernetes
Sample of luxxy-covid-testing-system.yaml file:

        image: gcr.io/<PROJECT_ID>/luxxy-covid-testing-system-app-en:latest
        ...
        - name: AWS_BUCKET
          value: "luxxy-covid-testing-system-pdf-en-xxxx"
        - name: S3_ACCESS_KEY
          value: "xxxxxxxxxxxxxxxxxxxxx"
        - name: S3_SECRET_ACCESS_KEY
          value: "xxxxxxxxxxxxxxxxxxxx"
        - name: DB_HOST_NAME
          value: "172.21.0.3"
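The sample above embeds the AWS keys as plain-text values, which is acceptable for this disposable test environment but not beyond it. A safer pattern, sketched here with an illustrative secret name that is not taken from the repo, is to store the keys in a Kubernetes Secret:

```shell
# Hypothetical: store the AWS keys in a Secret instead of hard-coding them
# (replace the placeholder values with the keys from your .csv file)
kubectl create secret generic luxxy-aws-credentials \
  --from-literal=S3_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxx \
  --from-literal=S3_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxx
```

The deployment's env entries can then reference the Secret via valueFrom.secretKeyRef rather than a literal value.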

Connect to GKE Cluster and Deploy Luxxy Application

  • Connect to the GKE (Google Kubernetes Engine) cluster via the Console: go to Kubernetes Engine > Clusters, find luxxy-kubernetes-cluster-en, click Connect, then click Run in Cloud Shell

  • Deploy the application in the Cluster

cd ~/part2/en/kubernetes
kubectl apply -f luxxy-covid-testing-system.yaml

Obtain Public IP for the Application

  • Under GKE > Workloads > Exposing Services, get the application Public IP. You should see the app up & running!
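The same public IP can also be fetched from the CLI; the service name here is assumed to match the one defined in luxxy-covid-testing-system.yaml:

```shell
kubectl get service luxxy-covid-testing-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```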

Test Application and Add Entry

  • Download a sample PDF and add an entry in the application for testing via the Add Guest Results button

Part 3: Migrate the application files and data from a database

Download the Test Dumps

  • Connect to Google Cloud Shell

  • Use the following commands to download the test dumps

cd ~
wget https://raw.githubusercontent.com/agcdtmr/onpremise-migration-to-multicloud-and-devops/main/part3.zip
unzip part3.zip

Connect to MySQL DB on Cloud SQL

  • Connect to MySQL DB running on Cloud SQL (once it prompts for the password, provide welcome123456). Don’t forget to replace the placeholder with your Cloud SQL Public IP
mysql --host=<replace_with_public_ip_cloudsql> --port=3306 -u app -p

Import the Test Dump on Cloud SQL

use dbcovidtesting;
source ~/part3/en/db/db_dump.sql

Check Data Import

select * from records;
exit;

Connecting to AWS Cloud Shell and Syncing PDF Files

  • Connect to the AWS Cloud Shell
  • Download the PDF files
wget https://raw.githubusercontent.com/agcdtmr/onpremise-migration-to-multicloud-and-devops/main/part3.zip

unzip part3.zip
  • Sync PDF Files with your AWS S3 used for COVID-19 Testing Status System. Replace the bucket name with yours.
cd part3/en/pdf_files
aws s3 sync . s3://luxxy-covid-testing-system-pdf-en-xxxx
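To confirm the sync succeeded, list the bucket contents (again replacing the bucket name with yours):

```shell
aws s3 ls s3://luxxy-covid-testing-system-pdf-en-xxxx --recursive
```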

Testing the Application

  • After migrating the data and files, verify the entries on the "View Guest Results" page to confirm a successful migration of the on-premises application and database to the MultiCloud architecture.

Part 4: Delete all the resources created, destroying the (testing) environment permanently

After completing this project and gathering the implementation evidence, follow the step-by-step instructions below to remove the entire MultiCloud environment.

  • [Google Cloud] Delete the Kubernetes resources. Go to Kubernetes Engine > Clusters, find luxxy-kubernetes-cluster-en, click Connect, then click Run in Cloud Shell, and use the commands below
kubectl delete deployment luxxy-covid-testing-system

kubectl delete service luxxy-covid-testing-system
  • [Google Cloud] Delete VPC Peering

  • [AWS] Delete files inside of S3

  • [Google Cloud] Delete remaining resources with Terraform - Cloud Shell

cd ~/part1/en/terraform/
terraform destroy
  • Clean the Cloud Shell in AWS
cd ~
rm -rf part*
  • Clean the Cloud Shell in Google Cloud
cd ~
rm -rf part*
rm -rf .ssh

Congratulations! 🎉 You have successfully completed the migration, transitioning an on-premises application and database to a MultiCloud architecture.

💡 The inspiration for this project originates from the valuable content and insights shared at thecloudbootcamp.com.

Did you find this article valuable?

Support anj in tech by becoming a sponsor. Any amount is appreciated!