Automate WordPress Apps with MySQL on Kubernetes over AWS using Ansible

Rishabh Arya
8 min read · Jun 21, 2021
  • Automate a Kubernetes cluster using Ansible.
  • Launch EC2 instances on the AWS cloud for the master and slaves.
  • Create roles that configure the master node and slave nodes separately.
  • Launch WordPress and a MySQL database connected to it on the respective slaves.
  • Expose the WordPress pod so that a client can hit the WordPress IP on its respective port.

Let us first create a dynamic inventory

Installing Python3: $ yum install python3 -y

Installing the boto3 library: $ pip3 install boto3

Creating an inventory directory:

$ mkdir -p /opt/ansible/inventory
$ cd /opt/ansible/inventory

Creating a file aws_ec2.yaml in the inventory directory with the following configuration:

plugin: aws_ec2
aws_access_key: <YOUR-AWS-ACCESS-KEY-HERE>
aws_secret_key: <YOUR-AWS-SECRET-KEY-HERE>
keyed_groups:
  - key: tags
    prefix: tag

Open /etc/ansible/ansible.cfg, find the [inventory] section, and add the following line to enable the aws_ec2 plugin:

[inventory]
enable_plugins = aws_ec2

Now let’s test the dynamic inventory configuration by listing the EC2 instances:

$ ansible-inventory -i /opt/ansible/inventory/aws_ec2.yaml --list

The above command returns the list of EC2 instances with all their parameters in JSON format.

Now, check whether Ansible can ping all the machines returned by the dynamic inventory: ansible all -i /opt/ansible/inventory/aws_ec2.yaml -m ping

The dynamic inventory setup is done.
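For the ping to succeed, Ansible also needs SSH credentials for the EC2 instances. A minimal sketch of the relevant ansible.cfg settings, assuming an Amazon Linux AMI (so user ec2-user) and the key pair saved at ~/.ssh/redhat-key.pem — both are assumptions, adjust to your setup:

[defaults]
# SSH user for the EC2 instances (ec2-user assumes an Amazon Linux AMI)
remote_user = ec2-user
# path to the private key of the AWS key pair (assumed location)
private_key_file = ~/.ssh/redhat-key.pem
host_key_checking = False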

Launching one master and two slave nodes on AWS

ansible-playbook <file_name>

I used an aws.yml file to launch three instances on the AWS cloud: one master node and two slave nodes.

- hosts: localhost
  vars_files:
    - secret.yml
  tasks:
    - name: "Creating Master Node"
      ec2:
        region: ap-south-1
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
        vpc_subnet_id: subnet-0288bbf00ed3128d7
        count: 1
        state: present
        instance_type: t2.micro
        key_name: redhat-key
        assign_public_ip: yes
        group_id: sg-0612a79a1fdb041ff
        image: ami-08f63db601b82ff5f
        instance_tags:
          name: master

    - name: "Creating Slave Nodes"
      ec2:
        region: ap-south-1
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
        vpc_subnet_id: subnet-0288bbf00ed3128d7
        count: 2
        state: present
        instance_type: t2.micro
        key_name: redhat-key
        assign_public_ip: yes
        group_id: sg-0612a79a1fdb041ff
        image: ami-08f63db601b82ff5f
        instance_tags:
          name: slave

The secret.yml file (the variable names must match the {{ access_key }} and {{ secret_key }} references in the playbook):

access_key: <YOUR-AWS-ACCESS-KEY-HERE>
secret_key: <YOUR-AWS-SECRET-KEY-HERE>
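Since secret.yml holds real credentials, it is worth encrypting it with Ansible Vault, for example:

$ ansible-vault encrypt secret.yml
$ ansible-playbook aws.yml --ask-vault-pass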

Creating YAML files for MySQL and WordPress

The file creates a Secret that holds the database's username and password, a Service exposed only inside the cluster (clusterIP: None makes it headless) so that the WordPress application can connect on port 3306 only, and a Deployment with the Recreate strategy using the mysql:5.6 image:

apiVersion: v1
kind: Secret
metadata:
  name: mysecure
data:
  # values must be base64-encoded (see the note after this file)
  rootpass: ********
  userpass: ********
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysecure
                  key: rootpass
            - name: MYSQL_USER
              value: vd
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysecure
                  key: userpass
            - name: MYSQL_DATABASE
              value: sqldb
          ports:
            - containerPort: 3306
              name: mysql
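Note that the data field of a Secret expects base64-encoded values, so the asterisks above stand in for encoded strings. One way to generate them (the password here is just a placeholder):

$ echo -n 'rootpassword' | base64
cm9vdHBhc3N3b3Jk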

This file creates a Service of type LoadBalancer exposed on port 80 and a Deployment with the same Recreate strategy, reading the database username and password from the Secret created in the steps above. (Note: WordPress gets its own tier: frontend label here; if WordPress and MySQL both carried tier: mysql, each Service would select both pods.)

apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: wordpress:latest
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql
            - name: WORDPRESS_DB_USER
              value: vd
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysecure
                  key: userpass
            - name: WORDPRESS_DB_NAME
              value: sqldb
          ports:
            - containerPort: 80
              name: wordpress
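Before running these manifests on the cluster, a quick client-side validation catches indentation and schema mistakes early (the --dry-run=client syntax needs a reasonably recent kubectl):

$ kubectl apply --dry-run=client -f database.yaml
$ kubectl apply --dry-run=client -f wordpress.yml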

Ansible-playbook for master node

First add the kubeadm repository so that we can download kubeadm, kubelet, and kubectl. We can use the copy module instead of the yum_repository module:

- name: Adding Kubeadm repo
  copy:
    src: kubernetes.repo
    dest: /etc/yum.repos.d

The kubernetes.repo file is as follows:

[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

Now let's install Docker and kubeadm using the package module.

We can also use cri-o instead of Docker:

- name: Installing docker
  package:
    name: "docker"
    state: present

- name: Installing kubeadm
  package:
    name: "kubeadm"
    state: present

Enabling the Docker service:

- name: Enabling docker service
  service:
    name: docker
    state: started
    enabled: yes

Now pull all the images needed to set up the master node.

These images are for the kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, etcd, CoreDNS, and the pause container:

- name: Pulling all kubeadm config images
  command: kubeadm config images pull
  ignore_errors: no
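If you want to see exactly which images kubeadm is about to pull, you can list them first:

$ kubeadm config images list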

As Kubernetes expects the systemd cgroup driver, we have to switch Docker's cgroup driver to systemd. For this we create a file called daemon.json and copy it to /etc/docker/daemon.json so that Docker automatically switches to the systemd driver.

- name: Changing driver cgroup to systemd
  copy:
    src: daemon.json
    dest: /etc/docker/daemon.json

The daemon.json file is as follows:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Now remove the swap entries from /etc/fstab, because swap causes errors while initializing the master node:

- name: Removing swapfile from /etc/fstab
  mount:
    name: "{{ item }}"
    fstype: swap
    state: absent
  with_items:
    - swap
    - none
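Note that the fstab edit only keeps swap from returning on reboot; swap that is already active stays on. A small extra task (my addition, not part of the original role) turns it off immediately:

- name: Disabling active swap
  shell: "swapoff -a"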

Now enable the kubelet service and restart Docker, since we changed the cgroup driver to systemd:

- name: Enabling kubelet service
  service:
    name: kubelet
    daemon_reload: yes
    state: started
    enabled: yes

- name: Restarting docker service
  service:
    name: docker
    state: "restarted"

Install the iproute-tc package, because the Kubernetes master uses the tc utility while initializing as a master node.

- name: Installing iproute-tc
  package:
    name: iproute-tc
    state: present
    update_cache: yes

Now we can initialize the node as a master node. Remember that kubeadm expects at least 2 CPUs and roughly 2 GB of RAM, and its preflight checks throw an error otherwise. Since our t2.micro instances fall below this, we skip those checks with --ignore-preflight-errors.

- name: Initializing the kubeadm
  shell: "kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=Swap --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem --node-name=master"
  register: kubeadm
  ignore_errors: yes

- debug:
    msg: "{{ kubeadm }}"

Now set up the kubeconfig for the home user so that the master node can also act as a client and use kubectl commands.

- name: Setup kubeconfig for home user
  shell: "{{ item }}"
  with_items:
    - "mkdir -p $HOME/.kube"
    - "cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    - "chown $(id -u):$(id -g) $HOME/.kube/config"

Now add the Flannel network on the master node so that it can set up the internal overlay network.

- name: Adding flannel network
  shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
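To verify that the overlay network came up, a task along these lines (the task name is mine) lists the kube-system pods, where the flannel pods should show as Running:

- name: Checking kube-system pods
  shell: "kubectl get pods -n kube-system"
  register: kubesys

- debug:
    msg: "{{ kubesys.stdout_lines }}"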

Now we create a join token that the slave nodes will use for authentication, and store it in a file called token.sh using the local_action module.

- name: Joining token
  shell: "kubeadm token create --print-join-command"
  register: token

- debug:
    msg: "{{ token }}"
  ignore_errors: yes

- name: Storing token into a file
  local_action: copy content={{ token.stdout_lines[0] }} dest=../slave1/token.sh

Now we copy the database.yaml and wordpress.yml files.

- name: Copying mysql-database.yml file
  copy:
    src: database.yaml
    dest: /root

- name: Copying wordpress.yml file
  copy:
    src: wordpress.yml
    dest: /root

Finally, apply both the database and WordPress files using the shell module. Remember to specify the paths of database.yaml and wordpress.yml.

We can see the output of each command using the debug module:

- shell: "kubectl apply -f /root/database.yaml"
register: mysql - shell: "kubectl apply -f /root/wordpress.yml"
register: wordpress - debug:
msg: "{{ mysql }}- debug:
msg: "{{ wordpress }}"

Ansible-playbook for slave node

Almost all the steps are the same as for the master node: everything up to step 8 carries over, except step 4. So let's look at the extra work needed to set a node up as a slave node.

9. Copy the k8s.conf file to /etc/sysctl.d/. This is required to initialize any node as a slave node.

- name: Copying k8s.conf file
  copy:
    src: k8s.conf
    dest: /etc/sysctl.d/k8s.conf

k8s.conf file:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

10. Now apply the sysctl settings:

- name: Enabling sysctl
  shell: "sysctl --system"

11. Before joining a slave node to the master node we have to use the token created by the master node. While running the master playbook I stored that token in the token.sh file, so in this step I can use it. I used the shell module to run token.sh:

- name: Copying token file at /root location
  copy:
    src: token.sh
    dest: /root/token.sh

- name: Joining slave node to master node
  shell: "sh /root/token.sh"
  register: joined

- debug:
    msg: "{{ joined }}"

After running these roles, the master node and the slave nodes are fully configured.
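A quick sanity check, run on the master, should now show all three nodes in Ready state:

$ kubectl get nodes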

Wordpress & MySQL

  1. Now let's log in to the master node and run the kubectl get pods command to cross-verify.
  2. WordPress and the MySQL database are now running properly. Check the port number WordPress is exposed on using the command below (see the sample output after this list):
$ kubectl get svc

3. Copy the public IP of any of the nodes along with the respective port number, and paste it into your Google Chrome browser.
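The svc output looks roughly like this (the IPs and the NodePort here are illustrative, not from a real run); the number after the colon in the PORT(S) column, 31374 in this sketch, is the port to use:

NAME              TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
wordpress         LoadBalancer   10.98.23.141   <pending>     80:31374/TCP   2m
wordpress-mysql   ClusterIP      None           <none>        3306/TCP       2m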

We have successfully performed the task.

Alternatively, we can use the following Ansible playbook to configure WordPress on the multi-node cluster we set up on AWS.

We can create a role for WordPress and MySQL instead of using the Kubernetes files.

Now we need to create a pod each for WordPress and MySQL to launch them:

---
# tasks file for wordpress-mysql

# task to launch wordpress
- name: "Launching Wordpress"
  shell: "kubectl run mywp1 --image=wordpress:5.1.1-php7.3-apache"
  register: Wordpress

- debug:
    var: "Wordpress.stdout_lines"

# task to launch mysql
- name: "Launching MySql"
  shell: "kubectl run mydb1 --image=mysql:5.7 --env=MYSQL_ROOT_PASSWORD=redhat --env=MYSQL_DATABASE=wpdb --env=MYSQL_USER=vd --env=MYSQL_PASSWORD=redhat"
  register: MySql

To launch the MySQL pod we need to set the username and password. Here you could use a Kubernetes Secret resource or Ansible's vault module instead of plain-text values.

# mysql root password
MYSQL_ROOT_PASSWORD=redhat
# mysql database name
MYSQL_DATABASE=wpdb
# mysql user name
MYSQL_USER=vd
# mysql password
MYSQL_PASSWORD=redhat

These are the required variables when launching the MySQL pod; if you leave them out, the container exits with an error.

Exposing the WordPress pod:

- name: "Exposing wordpess"
shell: "kubectl expose pods mywp1 --type=NodePort --port=80"
register: expose
ignore_errors: yes
- debug:
var: "expose.stdout_lines"

To make WordPress reachable from the outside world, we expose it via a NodePort.

- name: "get service"
shell: "kubectl get svc"
register: svc
- debug:
var: "svc.stdout_lines"

Since we are automating the whole thing, we don't need to log in to the master node to check the exposed service port. The code above prints the services in the output of the main playbook run.

- name: "Pausing playbook for 60 seconds"
pause:
seconds: 60
- name: "Getting the Database IP"
shell: "kubectl get pods -o wide"
register: Database_IP
- debug:
var: "Database_IP.stdout_lines"

Pods take some time to start after being launched, so we pause the playbook for 60 seconds; by then all the pods are ready and we get their complete information.

Now we just need to run the playbook.
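For example, assuming the role is wired into a main.yml playbook (the filename is an assumption):

$ ansible-playbook main.yml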

That’s all.
Thank You for reading.
