Deploy Web Services on GKE Cluster with Node.js
Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.
We will first briefly look at some Kubernetes concepts that you may come across while doing the hands-on.
An image is an executable package of your application that includes your code, libraries, configuration files, runtime, environment variables, etc. By running an image, we launch a container. These containers run in a container cluster, which is managed using Kubernetes.
A container cluster is simply a group of Compute Engine VM (Virtual Machine) instances. In a container cluster, there are two types of VM instances:
- Master
- Node instances
Thus, in the diagram above, there are four VM instances: one master and three node instances.
The master is the supervising machine; it manages the cluster. The kubelet, which runs on each node, is used to communicate with the master. Pods contain containers: inside each pod, there can be multiple containers running, and all the containers inside a pod share the same underlying resources. That means they all have the same IP address, share the same disk volumes, and so on. A service is a grouping of pods that are running on the cluster.
If you want more in-depth knowledge of Kubernetes, I would recommend referring to their docs.
Hands-on
For this practical, I will be using Ubuntu 18.04.1 and Node.js.
Prerequisites:
1. Sign in to your Google account. If you do not have one, sign up for a new account.
2. Install the Google Cloud SDK.
3. Install kubectl:
sudo snap install kubectl --classic
Go to the Google Cloud Platform Console and create a new project.
Open the newly created project.
In the Navigation menu, click APIs & Services and go to the Library page.
Search for Kubernetes Engine API and click Enable.
Now let us create a cluster.
Go to the Navigation menu, select Kubernetes Engine -> Clusters.
Then you will be prompted to Create a Cluster. You can customize your cluster according to your needs, but for this project, I will be using all the default settings.
Click the create button, and wait till the cluster is created.
Configure kubectl command line access by running the following command:
gcloud container clusters get-credentials <cluster name> --zone <zone name> --project <project ID>
eg: gcloud container clusters get-credentials standard-cluster-service --zone us-central1-a --project k8s-api-service-project
Open the Cloud Shell, and type
gcloud container clusters list
This will list out all the existing clusters for running containers. In our case, we have a cluster with three nodes.
Now, on your local machine, open the terminal and go to the folder where your application is located.
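This post assumes you already have a Node.js application to deploy. If you don't, a minimal data service along the lines of the sketch below is enough to follow along; the file name server.js, the Express dependency, the /states route, and port 3101 are my assumptions, chosen to match the --port value used later in the kubectl run command.
// server.js -- a hypothetical minimal data service (Express and port 3101 are assumptions)
const express = require('express');

const app = express();

// A tiny sample of the data; the real project serves a full hash of US states.
const states = { AL: 'Alabama', AK: 'Alaska', AZ: 'Arizona' };

// GET /states -> returns the whole states hash as JSON
app.get('/states', (req, res) => {
  res.json(states);
});

// The Dockerfile's CMD [ "npm", "start" ] expects package.json to define
// "scripts": { "start": "node server.js" }
app.listen(3101, () => console.log('Data service listening on port 3101'));
Whatever your application looks like, make sure its package.json defines a start script, because the Dockerfile below runs npm start.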
Create a text file named “Dockerfile” inside this folder.
touch Dockerfile
Open the created file, copy and paste the following instructions into it, and save it.
FROM node:8

# Create app directory
WORKDIR /usr/src/app

# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./

RUN npm install
# If you are building your code for production
# RUN npm ci --only=production

# Bundle app source
COPY . .

CMD [ "npm", "start" ]
A Dockerfile is a text file that contains a series of instructions on how to build your image. It supports a simple set of commands; the ones used above are:
- WORKDIR <path> : Sets the working directory for any RUN, CMD, ENTRYPOINT, COPY, and ADD instructions that follow it in the Dockerfile. If the WORKDIR doesn’t exist, it will be created even if it’s not used in any subsequent Dockerfile instruction.
- COPY <src>... <dest> : Copies new files or directories from <src> and adds them to the filesystem of the container at the path <dest>.
- RUN <command> : The command is run in a shell, which by default is /bin/sh -c on Linux.
- CMD ["executable","param1","param2"] : Sets the command to be executed when running the image.
Similarly, create a “.dockerignore” file using touch .dockerignore, and copy and paste the following lines into it.
node_modules
npm-debug.log
In most cases, you’ll be copying the source code of your application into a Docker image. Typically you would do that by adding COPY src/ dest/ or similar to your Dockerfile. That’s a great way to do it, but it is also going to include things like your .git/ directory or /tmp folders that belong to your project, which you really do not need for building the Docker image. Including such files will increase the Docker image size unnecessarily. We can exclude files and directories we do not need from our final image. All you have to do is create a .dockerignore file alongside your Dockerfile. At this point, it’s pretty similar to what a .gitignore file does for your git repos. You just need to tell it what you want to ignore.
Then, build the container image for the data service application.
docker image build -t <image repository name>:<tag name> .
eg: docker image build -t dataserver:v2 .
Note the “dot” at the end of the command. It specifies the current working directory, i.e., the directory where you are running the ‘docker image build’ command from, which is also where your Dockerfile is located.
To see your built image, type docker images
Now, let’s push this image to Docker Hub. (If you do not have a Docker Hub account, you first need to sign up there.)
Create a new repository. Mine would be “dataservice”.
Now let’s push the image we built to Docker Hub.
Log in to Docker Hub from the terminal
docker login
Enter your username and password
Get your image ID by typing docker images
Tag your image
docker image tag <image ID> <docker hub repository name>:<tag>
eg: docker image tag 377026348163 varuni95/dataservice:v2
If you do not specify the tag, it will always default to “latest”.
Push your image
docker image push <docker hub repository name>
eg: docker image push varuni95/dataservice
Now let’s pull our image and create a new container from it.
Go to the Cloud Shell, and run
kubectl run <container name> --image=<docker hub repository name>:<tag name> --port=<port number>
eg: kubectl run data --image=varuni95/dataservice:v2 --port=3101
Now expose the Kubernetes deployment through a load balancer
kubectl expose deployment data --type=LoadBalancer
Get the external IP address
kubectl get svc
Copy that external IP address (35.232.48.65 in my case) and use it as the request URL in your api_service application (go inside api_service>routes>states_hash.js; do the same for api_service>routes>states_titlecase.js).
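To make this concrete, here is a hedged sketch of what such a route file might look like; the real states_hash.js in the repo will differ, and the /states path on the data service, the Express router setup, and the variable names here are my assumptions.
// routes/states_hash.js -- a hypothetical sketch; replace the IP with your own external IP
const express = require('express');
const http = require('http');

const router = express.Router();

// External IP of the data service from `kubectl get svc`, plus an assumed /states route
const DATA_SERVICE_URL = 'http://35.232.48.65:3101/states';

router.get('/codeToState', (req, res) => {
  // Fetch the states hash from the data service, then look up the requested code
  http.get(DATA_SERVICE_URL, (response) => {
    let body = '';
    response.on('data', (chunk) => { body += chunk; });
    response.on('end', () => {
      const states = JSON.parse(body);
      res.json({ state: states[(req.query.code || '').toUpperCase()] || 'Unknown' });
    });
  }).on('error', () => res.status(502).json({ error: 'data service unreachable' }));
});

module.exports = router;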
Now let’s follow the same steps as above to deploy and expose the api service.
docker image build -t apiserver:v2 .
Go to the Docker Hub and create a new repository (eg: apiservice)
Get your image ID from docker images
Tag your image
docker image tag c174a3d43afa varuni95/apiservice:v2
Push your image
docker image push varuni95/apiservice
Pull our image and run a new container in the cluster
kubectl run api --image=varuni95/apiservice:v2 --port=3100
Expose the deployment through a load balancer
kubectl expose deployment api --type=LoadBalancer
Get your external IP address using kubectl get svc
You should now be able to access the service by pointing your browser to that address.
Now your URLs would look something like the ones below. (Note that the following URLs will no longer work, since I have deleted my cluster.)
http://35.184.37.140:3100/codeToState?code=AL
http://35.184.37.140:3100/stateToCode?state=Alabama
Try it out with different US state codes 😉
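If you prefer the command line to the browser, a short Node.js script like the one below does the same check; the file name test.js is just a placeholder, and you should substitute your own external IP address.
// test.js -- quick manual check of the deployed api service
const http = require('http');

// Substitute your own external IP address and port
http.get('http://35.184.37.140:3100/codeToState?code=AL', (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => console.log(body)); // expect a response mentioning Alabama
});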
Here is the link to the GitHub Repo