Disclaimer: I am not a DevOps Engineer so there are probably things that can be done differently but this is a good starting point!
TL;DR: just read the article… I have a GitHub repo set up for this project
You read that title and probably had one of two reactions, “holy crap that sounds so complicated” or “ok that sounds pretty easy”. If it was the latter then you are probably a DevOps engineer, so hello! If it was the former then you are like me when I dreamt this whole thing up!
So why did I dream this whole thing up? Well, I love working on side projects and want to start deploying more of them, but let's be honest: if you are not a DevOps engineer it can be a real pain in the 🍑 to deploy and maintain a single side project, let alone multiple!
To address this issue, I architected this seemingly complicated but rather simple pipeline to automate most of the heavy lifting and make it easier for me to continuously develop and deploy projects.
We will be using Jenkins to test and build our projects, making sure that tests pass and that each project successfully builds. Once the Jenkins job passes, Jenkins will hit the Portainer instance via its API to build the Docker image for the project and deploy/redeploy the project's stack. Lastly, Traefik will redirect all traffic to the appropriate Docker container(s). Sounds simple, right?
If you already know what the various pieces of this pipeline do you can skip the remainder of this section.
Jenkins is an automation server which allows you, among other things, to create continuous integration (CI) and continuous deployment (CD) pipelines. We will be using Jenkins and its pipelines to automatically test, build, and deploy our project when a new commit is made to the master branch. Ideally, you would trigger this when a PR is merged instead, and I highly encourage you to do that!
We will be using Docker to containerize each of our projects which will make it much easier to deploy them. I don’t want to even try to explain what Docker is so you can check out this explainer video:
Traefik is a reverse proxy and load balancer which will redirect all incoming traffic (get the pun 😉) to the appropriate Docker container(s). Traefik has some handy features as well, such as auto-discovery of services, auto-reconfiguration, and built-in support for Let's Encrypt SSL certs. Plus it’s written in Go, which is all the rage right now!
Portainer is a nice tool/service that makes it extremely easy to maintain and inspect your Docker environment through a nice web UI instead of doing it all through the terminal. Along with that, it has some basic security and exposes an API which we will use to rebuild our Docker images and deploy our projects.
Portainer + Traefik Config
Finally the good stuff 😃
At the root of your project, you’ll need to create a directory where the Traefik files will live; I’ve named this directory traefik. Inside of this directory, you will need to create 2 files: acme.json, which you will leave blank, and traefik.toml, which will contain the configuration for Traefik.
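For reference, a minimal traefik.toml along these lines could look like the following. This is a sketch assuming Traefik 1.x running against Docker in swarm mode, not the exact file from the repo; the domain and email are placeholders:

```toml
defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"          # force HTTP -> HTTPS
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]

[docker]
endpoint = "unix:///var/run/docker.sock"
domain = "example.com"            # change localhost/example.com to your domain
watch = true
exposedByDefault = false
swarmMode = true

[acme]
email = "you@example.com"         # change to your actual email
storage = "acme.json"
entryPoint = "https"
  [acme.httpChallenge]
  entryPoint = "http"
```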
This is pretty much copy and paste; you will only need to change 2 lines: line 19, which you should change from localhost to whatever your domain is, and line 24, which should be changed to your actual email.
Next, you’ll need to create a docker-compose.yml file at the root of your project. This compose file will contain (😆) all the stuff you need to deploy both Portainer and Traefik.
Let us configure the Traefik service in the docker-compose file first. You will want to:
- expose ports 80 and 443
- set the network to a Docker swarm network, web, which will allow Traefik to communicate with the various containers on the same network
- mount 3 volumes: docker.sock, which allows Traefik to listen to Docker and automatically detect new containers; the traefik.toml file, which contains the configuration for Traefik; and the acme.json file, which Traefik will use for the SSL cert it generates using Let's Encrypt
Next, we’ll configure the Portainer service, to do so you will need to:
- expose port 32768
- set the networks to web and default
- mount the docker.sock volume and the Portainer/data volume, which we will create later
- add Traefik-specific labels: one specifying the Docker network Traefik is on, one saying that Traefik is enabled for this container, one specifying the frontend rule (the domain at which this can be accessed), plus the port and the protocol
Oof, all together it should look like this:
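A minimal sketch of such a compose file, assuming Traefik 1.x, swarm mode, and an external overlay network named web (hostnames and versions here are placeholders, not the exact file from the repo):

```yaml
version: "3"

services:
  traefik:
    image: traefik:1.7
    ports:
      - "80:80"
      - "443:443"
    networks:
      - web
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # lets Traefik discover containers
      - ./traefik/traefik.toml:/traefik.toml        # Traefik configuration
      - ./traefik/acme.json:/acme.json              # Let's Encrypt cert storage

  portainer:
    image: portainer/portainer
    ports:
      - "32768:9000"              # Portainer's UI listens on 9000 inside the container
    networks:
      - web
      - default
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./portainer/data:/data    # persistent Portainer data, created by the script below
    deploy:
      labels:                     # in swarm mode Traefik reads labels from deploy
        - "traefik.docker.network=web"
        - "traefik.enable=true"
        - "traefik.frontend.rule=Host:portainer.localhost"
        - "traefik.port=9000"
        - "traefik.protocol=http"

networks:
  web:
    external: true
```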
All you have to do now is run the following script, which creates a directory for Portainer (mounted as a volume), creates the Docker swarm network, edits the permissions of the acme.json file, and deploys the stack. Once it has run, you should be able to access Portainer at whatever URL you specified on line 30 of the docker-compose file, in this case http://portainer.localhost/
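A sketch of what that bootstrap script might look like; the paths and the stack name (pipeline) are assumptions, so adjust them to match your setup:

```sh
#!/bin/sh
# Create the directory that is mounted as Portainer's data volume
mkdir -p portainer/data

# Create the attachable overlay network that Traefik and the apps share
docker network create --driver=overlay --attachable web

# Traefik refuses to use a world-readable acme.json
chmod 600 traefik/acme.json

# Deploy the Traefik + Portainer stack to the swarm
docker stack deploy -f docker-compose.yml pipeline
```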
I’m assuming that you already have a Jenkins server deployed; if not, follow the official installation instructions.
For this project, we will be using Jenkins for our CI and CD. Before we can begin, make sure that you have the following Jenkins plugins installed:
- Build Environment Plugin, which allows us to specify an environment for the job: https://wiki.jenkins.io/display/JENKINS/Build+Environment+Plugin
- HTTP Request Plugin, which makes making HTTP requests from the Jenkins job much easier: https://wiki.jenkins.io/display/JENKINS/HTTP+Request+Plugin
- NodeJS Plugin, the Jenkins integration for NodeJS and npm: https://wiki.jenkins.io/display/JENKINS/NodeJS+Plugin
- Pipeline Plugin, which allows us to configure a pipeline with multiple stages for a job: https://wiki.jenkins.io/display/JENKINS/Pipeline+Plugin
You will need to configure the NodeJS installation by navigating to Manage Jenkins 👉 Global Tool Configuration and scrolling down to the section labeled NodeJS. Add an installation with a name of
Lastly, navigate to the credentials page and create 2 Username with password credentials: one for Portainer, using the admin username and password, and one for your GitHub credentials.
Example React App Setup
I will be using my React boilerplate repo from a previous post as the example project to be deployed via this pipeline.
The Dockerfile in the repo just creates a simple Docker container which runs the React app on port 8080. This Dockerfile can, however, be substituted with whatever Dockerfile you want.
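As a rough idea, a Dockerfile that serves a React app on port 8080 might look like the sketch below; the Node version and the npm scripts are assumptions, not the boilerplate's exact contents:

```dockerfile
FROM node:10-alpine

WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY package*.json ./
RUN npm install

# Copy the source and produce a production build
COPY . .
RUN npm run build

EXPOSE 8080

# Serve the built app on port 8080 (assumes the repo defines such a script)
CMD ["npm", "run", "serve"]
```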
The docker-compose.yml file for the project looks extremely similar to the Portainer service that we created earlier.
In this docker-compose file, the image name on line 6 refers to the Docker image that this service will use (remember this for when we configure the Jenkinsfile). The port on line 14 refers to the port that is exposed, in this case 8080. Lastly, for the labels section, “traefik.docker.network” refers to the Docker swarm network we created earlier, “traefik.reactboilerplate.frontend.rule” defines what URL we want routed to this service, and “traefik.reactboilerplate.frontend.port” is the port we want Traefik to forward to.
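A sketch of such a project compose file follows. It is not the repo's exact file (so the line numbers above refer to the original), and the image name and host rule are placeholders:

```yaml
version: "3"

services:
  react-boilerplate:
    image: react-boilerplate        # must match the image name Jenkins builds
    networks:
      - web
    ports:
      - "8080"                      # port the container exposes
    deploy:
      labels:
        - "traefik.docker.network=web"                              # swarm network from earlier
        - "traefik.enable=true"
        - "traefik.reactboilerplate.frontend.rule=Host:react.localhost"  # URL routed here
        - "traefik.reactboilerplate.frontend.port=8080"             # port Traefik forwards to

networks:
  web:
    external: true                  # the overlay network created during setup
```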
The Jenkinsfile basically defines the set of commands that Jenkins needs to execute for each job. In this case, I’ve simply put everything into a single Jenkinsfile, though I know it is best practice to separate running tests, CI, and CD into their own individual Jenkinsfiles.
Portainer exposes a RESTful API that we will be calling from Jenkins to build our Docker images and deploy our application stacks. The documentation for the Portainer REST API can be found here:
In your Jenkinsfile, inside of the pipeline, we will want to create 7 stages, 3 of which are specific to the react-boilerplate project. I won’t be walking you through the react-boilerplate specific stages but you can find them here.
The first stage we will need to create is for retrieving the JWT token which we will use to authenticate the rest of the calls in the pipeline.
So if you remember back to the Jenkins config step, we added credentials to Jenkins for Portainer. The way we access those credentials in a stage is by wrapping whatever section needs them in a withCredentials method, where we specify the id of the credentials and the variables we want to assign to the username and password as parameters to the method.
On line 12 we create a json variable which contains the JSON body that we will need to send as part of the request. This is where we use the username and password for Portainer.
On line 15 we create and send the request to our Portainer instance by using the HTTP request plugin. For the request we specify the following:
- APPLICATION_JSON as the accepted content type, as we are only expecting a JSON object to be returned
- APPLICATION_JSON as the content type, as we are sending a JSON object
- 200 as the only valid response code, as we want this to fail if we get anything other than a 200 returned
- POST as the HTTP mode, as this is a POST request
- true for ignoring SSL errors, as Jenkins will complain about the certs generated by Traefik
- true for logging the response (this is optional)
- the json variable we created on line 12 as the request body
- https://portainer.<yourdomain>.com/api/auth as the URL; obviously, change the URL to whatever URL your Portainer instance is sitting at
We then use Groovy's JsonSlurper to parse the returned JSON object, and on line 17 we create an environment variable set to the authorization header, Bearer xxxxxxx, which we will use in subsequent stages.
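The stage described above could be sketched roughly as follows; the credentials id (portainer) and the URL are assumptions to adapt to your setup:

```groovy
stage('Get Portainer JWT') {
    steps {
        script {
            // bind the Portainer credentials stored in Jenkins to variables
            withCredentials([usernamePassword(credentialsId: 'portainer',
                    usernameVariable: 'PORTAINER_USER',
                    passwordVariable: 'PORTAINER_PASS')]) {
                // JSON body holding the Portainer username and password
                def json = """{"Username": "${PORTAINER_USER}", "Password": "${PORTAINER_PASS}"}"""
                def response = httpRequest acceptType: 'APPLICATION_JSON',
                        contentType: 'APPLICATION_JSON',
                        httpMode: 'POST',
                        validResponseCodes: '200',
                        ignoreSslErrors: true,
                        consoleLogResponseBody: true,
                        requestBody: json,
                        url: 'https://portainer.<yourdomain>.com/api/auth'
                // parse the returned JSON and stash the auth header for later stages
                def jwt = new groovy.json.JsonSlurper().parseText(response.content).jwt
                env.TOKEN = "Bearer ${jwt}"
            }
        }
    }
}
```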
Next, we will send a request to Portainer to build the Docker image for us from the GitHub repo.
Similar to the last stage, we access the credentials stored in Jenkins, in this case the GitHub credentials. We then create the request URL; you may note that this is not part of the Portainer API docs, as this is making a call to the Docker API through Portainer. This request URL has a number of params:
- name of the image to build; this should be the same as the image name in the react-boilerplate docker-compose.yml
- remote is the URL to the GitHub repo where the Dockerfile used to create the image is located; at the beginning of the URL, we pass the GitHub username and password so that this works for private repos as well
- dockerfile is the location of the Dockerfile in the repo
- nocache is set to true so that Docker does not use the cache when building the image
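A sketch of this stage is below. The endpoint id (1), the credentials id (github), and the repo URL are assumptions; note the Docker Engine API spells the image-name parameter t when proxied this way:

```groovy
stage('Build Docker image') {
    steps {
        script {
            withCredentials([usernamePassword(credentialsId: 'github',
                    usernameVariable: 'GH_USER',
                    passwordVariable: 'GH_PASS')]) {
                // Docker build call proxied through Portainer's endpoint API
                def url = 'https://portainer.<yourdomain>.com/api/endpoints/1/docker/build' +
                        '?t=react-boilerplate' +                                   // image name
                        "&remote=https://${GH_USER}:${GH_PASS}@github.com/<user>/<repo>.git" +
                        '&dockerfile=Dockerfile' +                                 // path in repo
                        '&nocache=true'                                            // skip build cache
                httpRequest httpMode: 'POST',
                        ignoreSslErrors: true,
                        validResponseCodes: '200',
                        customHeaders: [[name: 'Authorization', value: env.TOKEN]],
                        url: url
            }
        }
    }
}
```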
The next stage will delete the old stack if it already exists; we must do this because Portainer does not properly support redeploying stacks as of writing.
All we are doing here is calling the Portainer API to get a list of stacks that already exist and checking whether the one for this project is in that list. If it is, then we make another call to the Portainer API to delete the stack.
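Roughly, that logic could look like this; the stack name and URLs are placeholders to adapt:

```groovy
stage('Remove old stack') {
    steps {
        script {
            // list the stacks Portainer currently knows about
            def resp = httpRequest httpMode: 'GET',
                    ignoreSslErrors: true,
                    customHeaders: [[name: 'Authorization', value: env.TOKEN]],
                    url: 'https://portainer.<yourdomain>.com/api/stacks'
            def stacks = new groovy.json.JsonSlurper().parseText(resp.content)

            // if this project's stack already exists, delete it so we can redeploy
            def existing = stacks.find { it.Name == 'react-boilerplate' }
            if (existing) {
                httpRequest httpMode: 'DELETE',
                        ignoreSslErrors: true,
                        customHeaders: [[name: 'Authorization', value: env.TOKEN]],
                        url: "https://portainer.<yourdomain>.com/api/stacks/${existing.Id}"
            }
        }
    }
}
```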
Lastly, we deploy the stack to Portainer
We use withCredentials again to access the GitHub credentials, which will be used for the request body. First, we make a call to Portainer to retrieve the Docker swarm ID. We then create a JSON object in a variable which has the following properties:
- Name 👉 name of the stack
- SwarmId 👉 the swarm which we want to deploy to
- RepositoryURL 👉 URL to the GitHub repo for the project
- ComposeFilePathInRepository 👉 path to the docker-compose.yml file in the repo
- RepositoryAuthentication 👉 set to true so that we authenticate against the repo (this can be set to false if it is a public repo)
- RepositoryUsername and RepositoryPassword 👉 the repo username and password to be used when authenticating (not needed if RepositoryAuthentication is set to false)
We then make sure that the JSON object was created, and then we make a request to Portainer to deploy the stack.
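A sketch of the deploy stage follows. The endpoint id (1), stack name, and repo URL are assumptions, and the exact casing of the body fields (e.g. SwarmID) may vary by Portainer version, so check against the API docs:

```groovy
stage('Deploy stack') {
    steps {
        script {
            withCredentials([usernamePassword(credentialsId: 'github',
                    usernameVariable: 'GH_USER',
                    passwordVariable: 'GH_PASS')]) {
                // fetch the swarm ID via the Docker API proxied through Portainer
                def swarmResp = httpRequest httpMode: 'GET',
                        ignoreSslErrors: true,
                        customHeaders: [[name: 'Authorization', value: env.TOKEN]],
                        url: 'https://portainer.<yourdomain>.com/api/endpoints/1/docker/swarm'
                def swarmId = new groovy.json.JsonSlurper().parseText(swarmResp.content).ID

                // request body describing the stack to deploy from the repo
                def body = """{
                    "Name": "react-boilerplate",
                    "SwarmID": "${swarmId}",
                    "RepositoryURL": "https://github.com/<user>/<repo>",
                    "ComposeFilePathInRepository": "docker-compose.yml",
                    "RepositoryAuthentication": true,
                    "RepositoryUsername": "${GH_USER}",
                    "RepositoryPassword": "${GH_PASS}"
                }"""

                httpRequest httpMode: 'POST',
                        contentType: 'APPLICATION_JSON',
                        ignoreSslErrors: true,
                        customHeaders: [[name: 'Authorization', value: env.TOKEN]],
                        requestBody: body,
                        url: 'https://portainer.<yourdomain>.com/api/stacks?type=1&method=repository&endpointId=1'
            }
        }
    }
}
```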
Oof, ok, well, if you kept up, the final Jenkinsfile should look like the following:
Jenkins Job Creation
Last step, I promise….
Go to your Jenkins instance, create a new job, and configure the following options:
- Check GitHub project and set the Project url to the URL of the GitHub repo
- Specify your build trigger under the Build Triggers section
- In the Pipeline section, set the Definition to Pipeline script from SCM, which tells Jenkins to pull the Jenkinsfile from the GitHub repo
- In the SCM dropdown, select Git
- In Repositories, put the GitHub repo URL and select the GitHub credentials from the Credentials dropdown
- For the Script Path, put the path in the repo to the Jenkinsfile
Awesome! You should now have a functional pipeline which automatically runs the tests for your project, creates a new Docker image for said project, and deploys/redeploys it, all in one fell swoop.