Getting started with Kubernetes: Moving Ghost blogs to containers
For several years I have used droplets from DigitalOcean (DO) when I need to host web applications. For a while I have wanted to check out Kubernetes (k8s), but I did not find the time to dig into the subject. Then my old MacBook Pro from 2015 crashed, and I reinstalled it without thinking, losing among other things my private SSH keys. That forced me to think about what to do with the DO droplet: the droplet I cared about used a unique SSH key pair that I did not use anywhere else, and the only copy lived on that machine. This was the time to finally check out k8s and move to a cluster.
Spinning up a k8s cluster
All the major cloud providers today offer a managed k8s cluster solution. Since I have been a happy DigitalOcean customer for several years, I decided to spin up the cluster at DigitalOcean as well.
Creating the cluster is dead simple: click the green create button at the top of the page and choose cluster. Then pick the version of Kubernetes that you want and the region. Since I have other resources in the London region, I choose that region for (almost) everything I create. What remains is to choose how many nodes you want and their size, pick a name for the pool and the cluster, and then wait for DigitalOcean to create it.
While the cluster is being created, it is a good opportunity to install the tooling that is required, and some that is just good to have, locally. The `kubectl` tool is required to communicate with the cluster. Since I wanted to separate applications and types of applications into namespaces, a tool like `kubens` is good to have: it shows me which namespace is active and makes it easy to switch between namespaces. To connect to the cluster you need its configuration, and the easiest way to get that is with the DigitalOcean CLI tool. Following the guide from DigitalOcean on how to connect is a good start.
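On my Mac, getting the tooling and the cluster configuration in place boiled down to something like the following (a sketch, assuming Homebrew; kubens ships as part of the kubectx project, and the cluster name is a placeholder):

```
# Install the tooling (kubens is part of kubectx)
$ brew install kubectl kubectx doctl

# Authenticate doctl and merge the cluster credentials into ~/.kube/config
$ doctl auth init
$ doctl kubernetes cluster kubeconfig save <cluster-name>
```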
Now you should have the DO k8s cluster as the active context in kubectl, and it is time to continue and set up some services.
```
# Show the kubectl contexts on your system
$ kubectl config view

# Show the current context
$ kubectl config current-context

# Switch to another context
$ kubectl config use-context NAME
```
Setting up a MySQL database and Ghost in a k8s cluster
The first thing I need to set up is a MySQL database. It should not be accessible from the outside, only from inside the cluster. I do not want one installation for each time I need a MySQL database, so I place the database in the default namespace. The Ghost blogs I want to separate into different namespaces: one namespace for my personal stuff and one for the music festival I am a part of.
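Creating a namespace is a one-liner; a sketch of what that looks like (blogns is the namespace the blog manifests below use, and the festival namespace is created the same way):

```
# The default namespace already exists; create one per blog
$ kubectl create namespace blogns
```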
MySQL
```
apiVersion: v1
kind: Secret
metadata:
  name: mysql-pass
  namespace: default
type: Opaque
stringData:
  password: mysql-Secure-PassWord
---
apiVersion: v1
kind: Service
metadata:
  name: ghost-mysql
  namespace: default
  labels:
    app: ghost
spec:
  ports:
    - port: 3306
  selector:
    app: ghost
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: ghost
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: do-block-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ghost-mysql
  namespace: default
  labels:
    app: ghost
spec:
  selector:
    matchLabels:
      app: ghost
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: ghost
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
```
In the yaml file above, I am creating a secret, a service, a volume claim and the deployment itself. The service is created because its address does not change: if a pod fails and is recreated, the pod's IP address changes, and that does not work well when applications should connect to the database. Since the pod can be removed and replaced, I also need a volume to store the data. Because I am using the storage class `do-block-storage`, a volume is automatically created in DO, visible in the dashboard when logged in. Finally, the MySQL deployment mounts the volume at the data path and picks the root password up from the secret.
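Since clusterIP is None, this is a headless service: the name ghost-mysql resolves directly to the MySQL pod, and from other namespaces the fully qualified form works as well. Once everything is running, that can be sanity-checked from a throwaway pod (a sketch):

```
# Resolve the service name from a temporary busybox pod
$ kubectl run -it --rm dns-test --image=busybox --restart=Never -- \
    nslookup ghost-mysql.default.svc.cluster.local
```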
Then I just applied the Kubernetes manifest: `kubectl apply -f mysqldeployment.yml`.
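One thing worth noting: MYSQL_ROOT_PASSWORD only sets up the root account, and Ghost creates its tables but not the database itself. So before pointing a blog at it, the database has to exist; a sketch of creating it by hand, where the pod name is a placeholder for whatever kubectl returns:

```
# Find the MySQL pod and open a MySQL shell inside it
$ kubectl get pods -l tier=mysql
$ kubectl exec -it <mysql-pod-name> -- mysql -uroot -p

# Inside the MySQL shell: create the database the Ghost deployment expects
mysql> CREATE DATABASE `teilinnet-db`;
```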
Ghost
Setting up Ghost in a k8s cluster is not that different from setting up MySQL. There is a Ghost image on Docker Hub that is pretty easy to use, and the same kinds of resources are needed for the Ghost blog: a service, a PersistentVolumeClaim and a deployment.
```
apiVersion: v1
kind: Service
metadata:
  name: blog
  namespace: blogns
spec:
  selector:
    app: blog
  ports:
    - protocol: TCP
      port: 80
      targetPort: 2368
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: blog-content
  namespace: blogns
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: do-block-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog
  namespace: blogns
  labels:
    app: blog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blog
  template:
    metadata:
      labels:
        app: blog
    spec:
      containers:
        - name: blog
          image: ghost:3.16.1
          imagePullPolicy: Always
          ports:
            - containerPort: 2368
          env:
            - name: url
              value: http://myblog.com
            - name: database__client
              value: mysql
            # Use the service DNS name instead of a pod IP,
            # so a restarted MySQL pod does not break the blog
            - name: database__connection__host
              value: ghost-mysql.default.svc.cluster.local
            - name: database__connection__user
              value: root
            - name: database__connection__password
              value: mysql-password
            - name: database__connection__database
              value: teilinnet-db
          volumeMounts:
            - mountPath: /var/lib/ghost/content
              name: content
      volumes:
        - name: content
          persistentVolumeClaim:
            claimName: blog-content
```
I am using the DO storage class to set up a persistent volume for the content directory of the Ghost blog. This is where the themes, log files and images are placed. The environment variables differ from image to image, but they are usually well documented on Docker Hub or whatever Docker registry is being used. The database__connection__host points at the ghost-mysql service in the default namespace rather than a hard-coded pod IP, for the same reason the service was created in the first place. One thing to be aware of with the Ghost image is that it exposes 2368 as the blog's port; in the service I have therefore specified both a port and a target port.
Then I just applied the Kubernetes manifest: `kubectl apply -f ghostdeployment.yml`.
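Before any DNS or ingress is in place, the quickest way to see that the blog actually runs is to port-forward straight to the service (a sketch):

```
# Check that the blog pod is up in its namespace
$ kubectl -n blogns get pods

# Forward local port 8080 to the service and browse http://localhost:8080
$ kubectl -n blogns port-forward svc/blog 8080:80
```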
Moving Ghost blogs to the cluster
Since I had managed to delete the SSH keys I needed to remote into the DO droplet, I had to use the Console Access from the droplet page in the DO dashboard. There I was able to log in to a user account using username and password; logging in the "normal" way was not possible since that required the SSH keys. Once inside the droplet, I could make a compressed file (zip) of the content folder for every Ghost blog that I needed to move.
$ cd /var/www/ghostblog/content
$ zip -r blogcontent.zip .
The complete process I went through to "rescue" the content directory on the old droplet is written down in detail in another blog post. It involved transferring the zip file from the droplet to my MacBook Pro over SSH, and then transferring it into the pod that had the volume mounted.
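The short version is a combination of scp and kubectl cp; a rough sketch with hypothetical host and pod names:

```
# From my MacBook: pull the archive off the old droplet
$ scp root@<old-droplet-ip>:/var/www/ghostblog/content/blogcontent.zip .

# Push it into the pod that has the content volume mounted
$ kubectl -n blogns cp blogcontent.zip <blog-pod-name>:/var/lib/ghost/content/

# Unpack it in place (assuming unzip is available in the container)
$ kubectl -n blogns exec -it <blog-pod-name> -- \
    bash -c "cd /var/lib/ghost/content && unzip -o blogcontent.zip"
```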
After I had rescued the content directory, I logged in to the Ghost blog, where the Labs menu has an option to export content. This exports everything that is in the database, but not the content directory on disk.
When I had everything I needed from the Ghost blog on the old DO droplet, I was ready to continue with the new instance I had set up in the k8s cluster.
Service discovery and securing with Let's Encrypt
I set up Ingress to act as load balancer and service discovery, and to work together with the certificate manager. I followed the instructions here, but skipped step 1 since I already had a service I wanted to use (the Ghost blog).
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.26.1/deploy/static/mandatory.yaml
The first thing I did to set up the Kubernetes Nginx Ingress Controller was to apply the mandatory yaml file.
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.26.1/deploy/static/provider/cloud-generic.yaml
The next step is to apply the cloud-generic yaml file. After this is done, check that all the pods created are running before continuing.
$ kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx
When all the pods are running, it is time to verify that the Load Balancer service is running and has an external IP address. This IP address is the one the domain's A-record should point at. It is also possible to see whether the Load Balancer deployment is ready through the DO dashboard.
$ kubectl get svc --namespace=ingress-nginx
When I had set up the service with the load balancer type, a load balancer was created for me in the DO dashboard. To continue with configuring Ingress to route traffic to the correct services, I changed the DNS A-record to the external IP address, which can be found in the DO dashboard (look at the Load Balancer resource) or by executing a little command against the k8s cluster.
```
$ kubectl get svc --namespace=ingress-nginx

NAME            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   LoadBalancer   10.245.247.67   203.0.113.0   80:32486/TCP,443:32096/TCP   20h
```
Note: I needed to update externalTrafficPolicy on the ingress service to make it work with the cert-manager later on. The manifest below is the one that I applied.
```
# Service_ingress-nginx.yaml
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  # default for externalTrafficPolicy = 'Local', but due to an issue
  # (https://stackoverflow.com/questions/59286126/kubernetes-cluterissuer-challenge-timeouts,
  # https://github.com/danderson/metallb/issues/287)
  # it has to be 'Cluster' for now
  externalTrafficPolicy: Cluster
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
```
I did a diff to see what would change compared to what was already there, and then it was time to apply the changes.
$ kubectl diff -f Service_ingress-nginx.yaml
$ kubectl apply -f Service_ingress-nginx.yaml
When the external IP is ready, it is time to change the A-record at the DNS provider you are using.
I am using DNSimple as my DNS provider, so I logged into the dashboard and changed the A-record that previously pointed at my old DO droplet.
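A quick way to verify that the record change has propagated is to resolve the hostname and compare with the EXTERNAL-IP of the load balancer (a sketch, reusing the example address from above):

```
# Should print the load balancer's external IP
$ dig +short myblog.com
203.0.113.0
```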
Setting up Ingress Resource
The Ingress resource definition uses a different apiVersion than the DO article, but it works fine for my setup.
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: blog-ingress
  namespace: blogns
spec:
  rules:
    - host: myblog.com
      http:
        paths:
          - backend:
              serviceName: blog
              servicePort: 80
    - host: www.myblog.com
      http:
        paths:
          - backend:
              serviceName: blog
              servicePort: 80
```
With the ingress rules defined and pointing at the service created earlier, it is time to apply the changes.
$ kubens blogns && kubectl apply -f blog_ingress.yml
At this point, as long as the DNS points at the correct IP address, the Ghost blog should be reachable and usable over plain HTTP.
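A quick check from the command line looks something like this:

```
# Expect an HTTP 200 from Ghost via the ingress
$ curl -I http://myblog.com
```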
Securing the connection using Let's Encrypt
To set up a certificate manager, the first thing I did was to create a namespace for it.
$ kubectl create namespace cert-manager
Then it was time to install cert-manager itself into the cert-manager namespace.
$ kubens cert-manager
$ kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.12.0/cert-manager.yaml
That should produce output showing that a bunch of resources were created; at the very least, none of them should fail with an error. Next, check that the pods in the cert-manager namespace are running.
$ kubectl get pods --namespace cert-manager
Now it is time to define the issuer. I started with the Let's Encrypt staging environment to see that everything looked like it was working.
```
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
  namespace: cert-manager
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: your_email_address_here
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    solvers:
      - http01:
          ingress:
            class: nginx
```
The certificate issuer I am not applying, but creating instead.
$ kubectl create -f staging_issuer.yaml
Now it is time to change the blog_ingress.yml file that was created earlier, to add the certificate that is wanted.
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: blog-ingress
  namespace: blogns
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-staging"
spec:
  tls:
    - hosts:
        - myblog.com
        - www.myblog.com
      secretName: blog-tls
  rules:
    - host: myblog.com
      http:
        paths:
          - backend:
              serviceName: blog
              servicePort: 80
    - host: www.myblog.com
      http:
        paths:
          - backend:
              serviceName: blog
              servicePort: 80
```
Then I can apply the changes to the ingress rules in the blogns namespace, and the certificate request will be sent to Let's Encrypt.
$ kubens blogns
$ kubectl apply -f blog_ingress.yml
To check if the certificate is issued correctly, run the following command.
$ kubectl describe certificate
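The certificate resource lives in the blog's namespace and is named after the secretName in the ingress (blog-tls here), so the status can also be checked like this (a sketch; the READY column should eventually say True):

```
$ kubectl -n blogns get certificate blog-tls
```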
When the command says that the certificate is issued, it is time to switch to the production version of Let's Encrypt. The first thing needed is a prod_issuer.yml.
```
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: your_email_address_here
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
      - http01:
          ingress:
            class: nginx
```
This is very similar to the staging one, except that the URL no longer says staging. And in blog_ingress.yml, the annotation line that used staging needs to be replaced with:
```
cert-manager.io/cluster-issuer: "letsencrypt-prod"
```
The file will then look almost like this.
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: blog-ingress
  namespace: blogns
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
    - hosts:
        - myblog.com
        - www.myblog.com
      secretName: blog-tls
  rules:
    - host: myblog.com
      http:
        paths:
          - backend:
              serviceName: blog
              servicePort: 80
    - host: www.myblog.com
      http:
        paths:
          - backend:
              serviceName: blog
              servicePort: 80
```
When the ingress rule is applied now, a real TLS certificate is issued and the Ghost blog is served over HTTPS.
$ kubectl apply -f blog_ingress.yml
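To confirm, fetch the blog over HTTPS and look at who issued the certificate (a sketch):

```
# -v prints the TLS handshake details, including the certificate issuer
$ curl -vI https://myblog.com 2>&1 | grep -i issuer
```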
What's next?
Now that I have gotten started with Kubernetes and gotten a feel for how it is to work with, I just have a bunch of manifest files in a git repo. What I want is to organize the Kubernetes infrastructure using Terraform or Ansible. I have used Ansible at work before, so maybe I want to use this opportunity to get to know Terraform a bit as well.
Links
- How to Restart Kubernetes Pod
- How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes
- How to setup nginx Ingress w/ automatically generated LetsEncrypt certificates on Kubernetes
- How to Use SCP Command to Securely Transfer Files
- Example: Deploying WordPress and MySQL with Persistent Volumes
- Quickly Change Clusters and Namespaces in Kubernetes
- Kubernetes best practices: Organizing with Namespaces
- DockerHub: ghost
- How to run Ghost in Kubernetes