
A guide for deploying stateful applications designed to run on Azure Kubernetes Service

Containers were initially developed for stateless, transient workloads. It has taken a long time, but the Kubernetes core team has worked hard to improve support for stateful applications in containers. That work matters, because stateful applications and the data they manage are often business-critical.

Kubernetes’ ability to handle data-driven applications opens the door for more enterprises to employ containers to modernize their legacy systems and support mission-critical, stateful use cases. After going through this blog thoroughly, you will be able to run stateful applications on Kubernetes using Azure Kubernetes Service (AKS).

Creating the Cluster for Deploying a Stateful Application

To begin, we need to create a cluster, set it as the default cluster for AKS, and pass the cluster’s credentials to kubectl.

# create an Azure resource group
$ az group create --name ghost-blog-resource --location eastus
# locations: eastus, westeurope, centralus, canadacentral, canadaeast
# -----
# create a cluster
$ az aks create --resource-group ghost-blog-resource --name ghost-blog-cluster --node-count 1 --generate-ssh-keys
# returns a JSON with information about the cluster
# -----
# pass AKS cluster credentials to kubectl
$ az aks get-credentials --resource-group ghost-blog-resource --name ghost-blog-cluster
$ kubectl get node

The Container and the Deployment

# deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ghost-blog
  labels:
    app: ghost-blog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ghost-blog
  template:
    metadata:
      labels:
        app: ghost-blog
    spec:
      containers:
      # ghost container
      - name: ghost-container
        image: ghost:alpine
        imagePullPolicy: IfNotPresent
        # ghost always starts on this port
        ports:
        - containerPort: 2368

Storage of our Images and Themes in Persistent Disks

We’ll use Dynamic Provisioning to generate our disk. We will not specify a storageClassName, since Kubernetes uses the cluster’s default StorageClass when it is omitted. Note that the default StorageClass is named standard on GKE, but on AKS it is simply called default.

# PersistentVolumeClaim.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pd-blog-volume-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Send this yaml to the server by running the following command:

$ kubectl apply -f PersistentVolumeClaim.yml
$ kubectl get pvc
# it may take a few minutes for the binding to complete; if it takes more than a minute, run 'kubectl describe' to ensure nothing unusual occurred
$ kubectl describe pvc
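If you would rather not depend on the cluster default, the claim can pin a StorageClass explicitly. For example, AKS ships a built-in managed-premium class for Premium SSD disks; a variant of the claim above using it would look like this (only the storageClassName line differs):

```yaml
# PersistentVolumeClaim.yml - explicit StorageClass variant
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pd-blog-volume-claim
spec:
  # pin the AKS built-in Premium SSD class instead of the default
  storageClassName: managed-premium
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```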

As before, the deployment needs to be modified as well:

# deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ghost-blog
  labels:
    app: ghost-blog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ghost-blog
  template:
    metadata:
      labels:
        app: ghost-blog
    spec:
      containers:
      # ghost container
      - name: ghost-container
        image: ghost:alpine
        imagePullPolicy: IfNotPresent
        # ghost always starts on this port
        ports:
        - containerPort: 2368
        volumeMounts:
        # define persistent storage for themes and images
        - mountPath: /var/lib/ghost/content/
          name: pd-blog-volume
      volumes:
      - name: pd-blog-volume
        persistentVolumeClaim:
          claimName: pd-blog-volume-claim
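Note that nothing above exposes Ghost outside the cluster yet. On AKS, a LoadBalancer Service along the following lines would provision a public IP for the blog (this manifest and the name ghost-blog-service are our addition, not part of the original walkthrough):

```yaml
# service.yml - expose the Ghost deployment via an Azure load balancer
apiVersion: v1
kind: Service
metadata:
  name: ghost-blog-service
spec:
  type: LoadBalancer
  selector:
    app: ghost-blog
  ports:
  # map external port 80 to Ghost's container port
  - port: 80
    targetPort: 2368
```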

Creating a MySQL Instance and Connecting to It over SSL

As a prerequisite, the Azure CLI extension for Azure Database for MySQL must be added first.

$ az extension add --name rdbms

We may now proceed to set up our MySQL server.

$ az mysql server create --resource-group ghost-blog-resource --name ghost-database --location eastus --admin-user admin --admin-password password --sku-name GP_Gen4_2 --version 5.7
# this may take a few minutes to finish

Configuring the Firewall Rule

$ az mysql server firewall-rule create --resource-group ghost-blog-resource --server ghost-database --name allowedIPrange --start-ip-address 0.0.0.0 --end-ip-address 255.255.255.255

This rule allows every IP address to access the database. Leaving everything open is not a good idea, but each of our cluster’s nodes has its own IP address, making it impossible to predict in advance which one will be used.

If we know there will be a fixed number of nodes, say three, we can whitelist their IP addresses in advance. However, if we want to take advantage of node autoscaling, we have to allow access from a broad range of IP addresses. A VNet is unquestionably preferable, but this can serve as a quick and dirty workaround.
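To put that "broad range" in perspective, the firewall rule above spans the entire IPv4 address space. A quick sketch with Python’s ipaddress module (the /24 subnet is a made-up example of what a small fixed node pool might occupy) shows how much wider it is than a per-subnet whitelist:

```python
import ipaddress

# the firewall rule above: 0.0.0.0 - 255.255.255.255, i.e. all of IPv4
full_range = ipaddress.ip_network("0.0.0.0/0")

# a single /24 subnet, a hypothetical small node pool range
subnet = ipaddress.ip_network("10.240.0.0/24")

print(full_range.num_addresses)  # 4294967296
print(subnet.num_addresses)      # 256
```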

Install and configure the Azure Database for MySQL Vnet Service Endpoints

MySQL service endpoint rules are a firewall protection mechanism for the VNet. With this feature, our Azure MySQL server accepts requests only from a particular subnet of a virtual network.

To provide access to our Kubernetes cluster, we don’t have to establish Firewall Rules and enter the IP addresses of each node individually. Instead, we use VNet rules.

$ az extension add --name rdbms-vnet
# make sure it got installed
$ az extension list | grep "rdbms-vnet"
{ "extensionType": "whl", "name": "rdbms-vnet", "version": "10.0.0" }

Security on Azure Kubernetes Service (AKS) for stateful applications

When we talk about security, we should talk about SSL. Disabling or enabling it may be done with the following command:

$ az mysql server update --resource-group ghost-blog-resource --name ghost-database --ssl-enforcement Enabled
# pass Disabled instead of Enabled to turn SSL enforcement off

We’ll need the cert file when we create the secrets, so save it somewhere safe before continuing. You can also use the certificate file to verify the SSL connection in the MySQL client.

$ mysql -h ghost-database.mysql.database.azure.com -u admin@ghost-database -p --ssl-ca=BaltimoreCyberTrustRoot.crt.pem
mysql> status
# output should show: `SSL: Cipher in use is AES256-SHA`

Setting Passwords for Credentials in Secrets

We’ll need to send sensitive information to our pods, and that data will be stored in the secrets. Base64 encoding is required because secret objects can hold binary data.

$ echo -n "transport" | base64
$ echo -n "service" | base64
$ echo -n "user" | base64
$ echo -n "pass" | base64

The -n option is necessary to prevent echo from appending a newline (`\n`) to the end of the echoed text. Here we base64-encode the values for transport, service, user, and pass.
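If you prefer to script the encoding instead of running echo by hand, the same result can be produced in Python; a small sketch (the helper name encode_secret is our own):

```python
import base64

def encode_secret(value: str) -> str:
    # Kubernetes Secret data must be base64-encoded;
    # encoding the raw bytes mirrors `echo -n ... | base64`
    return base64.b64encode(value.encode("utf-8")).decode("ascii")

for value in ("transport", "service", "user", "pass"):
    print(value, "->", encode_secret(value))
```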

# mail-secrets.yml
apiVersion: v1
kind: Secret
metadata:
  name: mail-credentials
type: Opaque
data:
  transport: QSBsbGFtYS4gV2hhdCBlbHNl
  service: VGhlIFJveWFsIFBvc3QuIE5vbmUgZWxzZSB3b3VsZCBJIHRydXN0
  user: SXQncy1hIG1lISBNYXJpbw==
  pass: WW91IHNoYWxsIG5vdA==

Create a secret file containing your MySQL credentials as well:

# db-secrets.yml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  user: SXQncy1hIG1lISBNYXJpbw==
  host: QSB2ZXJ5IGZyaWVuZGx5IG9uZSwgSSBtaWdodCBhZGQ=
  pass: R2FuZGFsZiEgSXMgdGhhdCB5b3UgYWdhaW4/
  dbname: V2FuZGEsIGJ1dCBoZXIgZnJpZW5kcyBjYWxsIGhlciBFcmlj

Upload the secrets with kubectl apply so that your deployment can access them.
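Once the secrets are in the cluster, they can be wired into the Ghost container as environment variables via secretKeyRef. The following is a sketch of the relevant env fragment for ghost-container; the database__* variable names follow Ghost’s double-underscore configuration convention, so double-check them against your Ghost version:

```yaml
# fragment of deployment.yml - env section of ghost-container
        env:
        - name: database__client
          value: mysql
        - name: database__connection__host
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: host
        - name: database__connection__user
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: user
        - name: database__connection__password
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: pass
        - name: database__connection__database
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: dbname
```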

Takeaway

Containerizing stateful applications and moving them to Kubernetes-managed environments is now a frequent practice. Thanks to developments in storage structures and operations in the container orchestration system, data-driven applications are now well supported on Kubernetes.

Still facing difficulties creating and maintaining cloud-native applications?

Let us help you deploy a stateful application within minutes