Set Up an Integration + Deployment Pipeline Using Jenkins + Portainer + Traefik + Docker

23 May, 2019 · 12 min read

Disclaimer: I am not a DevOps Engineer, so there are probably things that could be done differently, but this is a good starting point!

TL;DR: just read the article… I have a GitHub repo set up for this project

ameersami/ci-cd-pipeline-boilerplate

Quick Background

You read that title and probably had one of two reactions, “holy crap that sounds so complicated” or “ok that sounds pretty easy”. If it was the latter then you are probably a DevOps engineer, so hello! If it was the former then you are like me when I dreamt this whole thing up!

So why did I dream this whole thing up? Well, I love working on side projects and want to start deploying more of them, but let's be honest, if you are not a DevOps engineer it can be a real pain in the 🍑 to deploy and maintain a single side project, let alone multiple!

To address this issue, I architected this seemingly complicated but actually rather simple pipeline to automate most of the heavy lifting and make it easier for me to continuously develop and deploy projects.


Project Architecture

We will be using Jenkins to test and build our projects to make sure that tests pass and that the project successfully builds. Once the Jenkins job passes, Jenkins will hit the Portainer instance via its API to build the Docker image for the project and deploy/redeploy the project’s stack. Lastly, Traefik will route all traffic to the appropriate Docker container(s). Sounds simple, right?

If you already know what the various pieces of this pipeline do, you can skip the remainder of this section.

Jenkins

Jenkins is an automation server which allows you, amongst other things, to create continuous integration (CI) and continuous deployment (CD) pipelines. We will be using Jenkins and its pipelines to automatically test, build, and deploy our project when a new commit is made to the master branch. Ideally, you would have this triggered when a PR is merged instead, and I highly encourage you to do that!

Docker

We will be using Docker to containerize each of our projects, which will make it much easier to deploy them. I won’t even try to explain what Docker is here, so if you’re new to it I’d recommend checking out one of the many explainer videos out there.

Traefik

Traefik is a reverse proxy and load balancer which will redirect all incoming traffic (get the pun 😉) to the appropriate Docker container(s). Traefik has some handy features as well, such as auto-discovery of services, auto-reconfiguration, and built-in support for Let’s Encrypt SSL certs. Plus, it’s written in Go, which is all the rage right now!

Portainer

Portainer is a handy tool/service that makes it extremely easy to maintain and inspect your Docker environment through a nice web UI instead of doing it all through the terminal. Along with that, it has some basic security and exposes an API which we will use to rebuild our Docker images and deploy our projects.

Portainer + Traefik Config

Finally the good stuff 😃

At the root of your project, you’ll need to create a directory where the Traefik files will live; I’ve named this directory traefik (lowercase, to match the paths mounted in the docker-compose file below). Inside of this directory, you will need to create 2 files: acme.json, which you will leave blank, and traefik.toml, which will contain the configuration for Traefik.
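
If you prefer doing this from the terminal, something like the following should do it (a quick sketch; the names just need to match what the docker-compose file mounts later):

mkdir traefik

# acme.json stays empty for now; Traefik will write the certs it obtains into it
touch traefik/acme.json

# traefik.toml holds the configuration shown below
touch traefik/traefik.toml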

debug = false

logLevel = "ERROR"
defaultEntryPoints = ["https","http"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
  [entryPoints.https.tls]

[retry]

[docker]
endpoint = "unix:///var/run/docker.sock"
domain = "localhost"
watch = true
exposedByDefault = false

[acme]
email = "youremail@email.com"
storage = "acme.json"
entryPoint = "https"
onHostRule = true
[acme.httpChallenge]
entryPoint = "http"
traefik.toml

This is pretty much copy and paste; you only need to change 2 lines: the domain value (line 19), which you should change from localhost to whatever your domain is, and the email value (line 24) under [acme], which should be changed to your actual email.

Next, you’ll need to create a docker-compose.yml file at the root of your project. This compose file will contain (😆) all the stuff you need to deploy both Portainer and Traefik.

Let us configure the Traefik service in the docker-compose file first. You will want to:

  • Expose ports 80 and 443
  • Set the network to a Docker swarm network, web, which will allow Traefik to communicate with the various containers on the same network
  • Mount 3 volumes: docker.sock, which allows Traefik to listen to Docker and automatically detect new containers; the traefik.toml file, which contains the configuration for Traefik; and the acme.json file, which Traefik will use for the SSL cert it generates using Let’s Encrypt

Next, we’ll configure the Portainer service. To do so you will need to:

  • Expose port 32768
  • Set the networks to web and default
  • Mount the docker.sock volume and the portainer/data volume, which we will create later
  • Add Traefik-specific labels for specifying the Docker network Traefik is on, marking Traefik as enabled for this container, specifying the frontend rule (the domain at which this can be accessed), the port, and the protocol

Oof, all together it should look like this:

version: '3'

services:
  traefik:
    image: traefik:1.7-alpine
    restart: always
    ports:
      - 80:80
      - 443:443
    networks:
      - web
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik/traefik.toml:/traefik.toml
      - ./traefik/acme.json:/acme.json
  portainer:
    image: portainer/portainer:1.20.2
    restart: always
    ports:
      - "32768"
    networks:
      - web
      - default
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./portainer/data:/data
    labels:
      - "traefik.docker.network=web"
      - "traefik.enable=true"
      - "traefik.portainer.frontend.rule=Host:portainer.localhost"
      - "traefik.portainer.port=9000"
      - "traefik.portainer.protocol=http"

networks:
  web:
    external: 
      name: web
docker-compose.yml
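
Before deploying, you can optionally sanity-check the file. Assuming you have docker-compose installed, the following will parse the file and print the resolved configuration (or an error if something is off):

docker-compose -f docker-compose.yml config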

All you have to do now is run the following script, which creates a directory for Portainer which is mounted as a volume, creates the Docker swarm network, edits the permissions of the acme.json file, and deploys the stack. Note that docker stack deploy requires the Docker host to be running in swarm mode, so run docker swarm init first if you haven’t already. Once run, you should be able to access Portainer at whatever URL you specified in the traefik.portainer.frontend.rule label (line 30) of the docker-compose file, in this case http://portainer.localhost/

mkdir portainer portainer/data

echo 'Create docker web network'
docker network create --scope=swarm web

echo 'Running chmod on acme.json'
chmod 600 ./traefik/acme.json

echo 'Starting services....'
docker stack deploy proxy --compose-file docker-compose.yml

echo 'All services up I think....'
setup.sh
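
Once the script finishes, a quick way to confirm that everything actually came up is to ask Docker about the stack (the stack name, proxy, comes from the deploy command above):

# list the services in the stack and how many replicas are running
docker stack services proxy

# see the individual tasks/containers and their current state
docker stack ps proxy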

Jenkins Config

I’m assuming that you already have a Jenkins server deployed; if not, follow the official installation instructions.

For this project, we will be using Jenkins for our CI and CD. Before we can begin, make sure that you have the following Jenkins plugins installed: at minimum the NodeJS plugin (used by the tools block in the Jenkinsfile) and the HTTP Request plugin (used for every call to the Portainer API).

You will need to configure the NodeJS installation by navigating to Manage Jenkins 👉 Global Tool Configuration and scrolling down to the section labeled NodeJS. Add an installation with a name of recent node (this exact name is referenced in the Jenkinsfile’s tools block).

Lastly, navigate to the credentials page and create 2 Username with password credentials: one with the ID Portainer, using the Portainer admin username and password, and one with the ID Github for your GitHub credentials (these IDs are what the withCredentials calls in the Jenkinsfile reference).

Example React App Setup

I will be using my React boilerplate repo from a previous post as the example project to be deployed via this pipeline.

ameersami/react-bolierplate

The Dockerfile in the repo just creates a simple Docker container which runs the React app on port 8080. This Dockerfile can, however, be substituted with whatever Dockerfile you want.
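
If you want to poke at the image locally before wiring it into the pipeline, you can build and run it straight from a checkout of the repo (assuming, as described above, the app listens on port 8080):

# build the image with the same name the docker-compose file below expects
docker build -t react-app:latest .

# run it and map the app's port 8080 to localhost:8080
docker run --rm -p 8080:8080 react-app:latest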

The docker-compose.yml file for the project looks extremely similar to the Portainer service that we created earlier.

version: '3'

services:
  react-app:
    image: react-app:latest
    deploy:
      restart_policy:
        condition: "any"
        delay: "0"
        max_attempts: 3
        window: "30s"
    ports:
      - "8080"
    networks:
      - web
      - default
    labels:
      - "traefik.docker.network=web"
      - "traefik.enable=true"
      - "traefik.reactboilerplate.frontend.rule=Host:reactApp.yourdomain.com"
      - "traefik.reactboilerplate.port=8080"
      - "traefik.reactboilerplate.protocol=http"

networks:
  web:
    external: 
      name: web
docker-compose.yml

In this docker-compose file, the image name refers to the Docker image that this service will use (remember this for when we configure the Jenkinsfile). The port under ports refers to the port that is exposed, in this case 8080. Lastly, for the labels section, “traefik.docker.network” refers to the Docker swarm network we created earlier, “traefik.reactboilerplate.frontend.rule” defines what URL we want routed to this service, and “traefik.reactboilerplate.port” is the port we want Traefik to forward to.

Jenkinsfile Creation

The Jenkinsfile basically defines a set of commands that Jenkins needs to execute for each job. In this case, I’ve simply put everything into a single Jenkinsfile, but I know it is best practice to separate running tests, CI, and CD into their own individual Jenkinsfiles.

Portainer exposes a RESTful API that we will be calling from Jenkins to build our Docker images and deploy our application stacks. The documentation for the Portainer REST API can be found on SwaggerHub.


In your Jenkinsfile, inside of the pipeline, we will want to create 7 stages, 3 of which are specific to the react-boilerplate project. I won’t be walking you through the react-boilerplate-specific stages, but you can see them in the full Jenkinsfile at the end of this section.

The first stage we will need to create is for retrieving the JWT token, which we will use to authenticate the rest of the calls in the pipeline.

So if you remember back to the Jenkins config step, we added credentials to Jenkins for Portainer. The way we access those credentials in a stage is by wrapping whatever section needs them in a withCredentials method, where we specify the ID of the credentials and the variables we want to assign to the username and password as parameters to the method.

We then create a json variable which contains the JSON body that we will need to send as part of the request. This is where we use the username and password for Portainer.

Next, we create and send the request to our Portainer instance using the HTTP Request plugin. For the request we specify the following:

  1. acceptType to APPLICATION_JSON as we are only expecting a JSON object to be returned
  2. contentType to APPLICATION_JSON as we are sending a JSON object
  3. validResponseCodes to 200 as we want this to fail if we get anything other than a 200 returned
  4. httpMode to POST as this is a post request
  5. ignoreSslErrors to true as Jenkins will complain about the certs generated by Traefik
  6. consoleLogResponseBody to true (this is optional as it just logs the response)
  7. requestBody to the json variable we created on line 12
  8. url to https://portainer.<yourdomain>.com/api/auth (obviously, change the URL to whatever URL your Portainer instance is sitting at)

We then use Groovy’s JsonSlurper to parse the returned JSON object, and we create an environment variable (env.JWTTOKEN) set to the authorization header, Bearer xxxxxxx, which we will use in subsequent stages.
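
If you want to sanity-check this call outside of Jenkins first, the equivalent request with curl looks roughly like this (the -k flag mirrors ignoreSslErrors, and the credentials are placeholders):

curl -sk -X POST "https://portainer.<yourdomain>.com/api/auth" \
  -H "Content-Type: application/json" \
  -d '{"Username": "admin", "Password": "<your-portainer-password>"}'

# the response body contains the token, e.g. {"jwt": "xxxxxxx"}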

Next, we will send a request to Portainer to build the Docker image for us from the GitHub repo.

Similar to the last stage, we access the credentials stored in Jenkins, in this case the GitHub credentials. We then create the request URL; you may note that this is not part of the Portainer API docs, as this is actually a call to the Docker API proxied through Portainer. The request URL has a number of params (there is a curl sketch of this call after the list):

  1. t is the name (and tag) of the image to build; this should be the same as the image name in the react-boilerplate docker-compose.yml, in this case react-app:latest
  2. remote is the URL to the GitHub repo where the Dockerfile used to create the image is located; at the beginning of the URL we pass the GitHub username and password so that this works for private repos as well
  3. dockerfile is the location of the Dockerfile in the repo
  4. nocache is set to true so that Docker does not use the cache when building the image
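
For reference, the same build call as a standalone curl request would look something like this (the token and GitHub credentials are placeholders):

curl -sk -X POST \
  -H "Authorization: Bearer <jwt-from-the-auth-call>" \
  "https://portainer.<yourdomain>.com/api/endpoints/1/docker/build?t=react-app:latest&remote=https://<github-user>:<github-password>@github.com/<github-user>/react-bolierplate.git&dockerfile=Dockerfile&nocache=true"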

The next stage will delete the old stack if it already exists; we must do this as Portainer does not properly support redeploying stacks as of this writing.

All we are doing here is calling the Portainer API to get a list of the stacks that already exist and checking if the one for this project is in that list. If it is, then we make another call to the Portainer API to delete the stack.
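
Outside of Jenkins, those two calls would look roughly like this with curl (the stack ID comes from the Id field in the list response):

# list the existing stacks
curl -sk -H "Authorization: Bearer <jwt>" "https://portainer.<yourdomain>.com/api/stacks"

# delete the stack for this project by its ID
curl -sk -X DELETE -H "Authorization: Bearer <jwt>" "https://portainer.<yourdomain>.com/api/stacks/<stack-id>"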

Lastly, we deploy the stack to Portainer

We use withCredentials again to access the GitHub credentials, which will be used for the request body. First, we make a call to Portainer to retrieve the Docker swarm ID. We then create a JSON object in a variable which has the following properties:

  1. Name 👉 name of the stack
  2. SwarmID 👉 the ID of the swarm which we want to deploy to
  3. RepositoryURL 👉 URL to the Github repo for the project
  4. ComposeFilePathInRepository 👉 path to the docker-compose.yml file in the repo
  5. RepositoryAuthentication is set to true so that we authenticate against the repo (this can be set to false if it is a public repo)
  6. RepositoryUsername and RepositoryPassword, which are the repo username and password to be used when authenticating (you don’t need these if RepositoryAuthentication is set to false)

We then make sure that the JSON object was created, and then we make a request to Portainer to deploy the stack.
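
For reference, here are both calls as standalone curl requests (placeholders throughout; the SwarmID comes from the ID field of the swarm response):

# grab the swarm ID from the Docker API proxied through Portainer
curl -sk -H "Authorization: Bearer <jwt>" "https://portainer.<yourdomain>.com/api/endpoints/1/docker/swarm"

# create the stack from the GitHub repo
curl -sk -X POST "https://portainer.<yourdomain>.com/api/stacks?method=repository&type=1&endpointId=1" \
  -H "Authorization: Bearer <jwt>" \
  -H "Content-Type: application/json" \
  -d '{"Name": "BOILERPLATE", "SwarmID": "<swarm-id>", "RepositoryURL": "https://github.com/<github-user>/react-bolierplate", "ComposeFilePathInRepository": "docker-compose.yml", "RepositoryAuthentication": true, "RepositoryUsername": "<github-user>", "RepositoryPassword": "<github-password>"}'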

Oof, ok, well if you kept up, the final Jenkinsfile should look like the following:

#!/bin/groovy
pipeline {
  agent any
  tools {
    nodejs 'recent node'
  }
  stages {
    stage('Prepare') {
      steps {
        script {
          sh 'npm install yarn -g'
          sh 'yarn install'
        }
      }
    }
    stage('Test') {
      steps {
        script {
          sh 'yarn test'
        }
      }
    }
    stage('Build') {
      steps {
        script {
          sh 'yarn build'
        }
      }
    }
    stage('Get JWT Token') {
      steps {
        script {
          withCredentials([usernamePassword(credentialsId: 'Portainer', usernameVariable: 'PORTAINER_USERNAME', passwordVariable: 'PORTAINER_PASSWORD')]) {
              def json = """
                  {"Username": "$PORTAINER_USERNAME", "Password": "$PORTAINER_PASSWORD"}
              """
              def jwtResponse = httpRequest acceptType: 'APPLICATION_JSON', contentType: 'APPLICATION_JSON', validResponseCodes: '200', httpMode: 'POST', ignoreSslErrors: true, consoleLogResponseBody: true, requestBody: json, url: "https://portainer.<yourdomain>.com/api/auth"
              def jwtObject = new groovy.json.JsonSlurper().parseText(jwtResponse.getContent())
              env.JWTTOKEN = "Bearer ${jwtObject.jwt}"
          }
        }
        echo "${env.JWTTOKEN}"
      }
    }
    stage('Build Docker Image on Portainer') {
      steps {
        script {
          // Build the image
          withCredentials([usernamePassword(credentialsId: 'Github', usernameVariable: 'GITHUB_USERNAME', passwordVariable: 'GITHUB_PASSWORD')]) {
              def repoURL = """
                https://portainer.<yourdomain>.com/api/endpoints/1/docker/build?t=react-app:latest&remote=https://$GITHUB_USERNAME:$GITHUB_PASSWORD@github.com/$GITHUB_USERNAME/react-bolierplate.git&dockerfile=Dockerfile&nocache=true
              """
              def imageResponse = httpRequest httpMode: 'POST', ignoreSslErrors: true, url: repoURL, validResponseCodes: '200', customHeaders:[[name:"Authorization", value: env.JWTTOKEN ], [name: "cache-control", value: "no-cache"]]
          }
        }
      }
    }
    stage('Delete old Stack') {
      steps {
        script {

          // Get all stacks and check whether one for this project already exists
          String existingStackId = ""
          def stackResponse = httpRequest httpMode: 'GET', ignoreSslErrors: true, url: "https://portainer.<yourdomain>.com/api/stacks", validResponseCodes: '200', consoleLogResponseBody: true, customHeaders:[[name:"Authorization", value: env.JWTTOKEN ], [name: "cache-control", value: "no-cache"]]
          def stacks = new groovy.json.JsonSlurper().parseText(stackResponse.getContent())

          stacks.each { stack ->
            if(stack.Name == "BOILERPLATE") {
              existingStackId = stack.Id
            }
          }

          if(existingStackId?.trim()) {
            // Delete the stack
            def stackURL = """
              https://portainer.<yourdomain>.com/api/stacks/$existingStackId
            """
            httpRequest acceptType: 'APPLICATION_JSON', validResponseCodes: '204', httpMode: 'DELETE', ignoreSslErrors: true, url: stackURL, customHeaders:[[name:"Authorization", value: env.JWTTOKEN ], [name: "cache-control", value: "no-cache"]]

          }

        }
      }
    }
    stage('Deploy new stack to Portainer') {
      steps {
        script {
          
          def createStackJson = ""

          // Stack does not exist
          // Generate JSON for when the stack is created
          withCredentials([usernamePassword(credentialsId: 'Github', usernameVariable: 'GITHUB_USERNAME', passwordVariable: 'GITHUB_PASSWORD')]) {
            def swarmResponse = httpRequest acceptType: 'APPLICATION_JSON', validResponseCodes: '200', httpMode: 'GET', ignoreSslErrors: true, consoleLogResponseBody: true, url: "https://portainer.<yourdomain>.com/api/endpoints/1/docker/swarm", customHeaders:[[name:"Authorization", value: env.JWTTOKEN ], [name: "cache-control", value: "no-cache"]]
            def swarmInfo = new groovy.json.JsonSlurper().parseText(swarmResponse.getContent())

            createStackJson = """
              {"Name": "BOILERPLATE", "SwarmID": "$swarmInfo.ID", "RepositoryURL": "https://github.com/$GITHUB_USERNAME/react-bolierplate", "ComposeFilePathInRepository": "docker-compose.yml", "RepositoryAuthentication": true, "RepositoryUsername": "$GITHUB_USERNAME", "RepositoryPassword": "$GITHUB_PASSWORD"}
            """

          }

          if(createStackJson?.trim()) {
            httpRequest acceptType: 'APPLICATION_JSON', contentType: 'APPLICATION_JSON', validResponseCodes: '200', httpMode: 'POST', ignoreSslErrors: true, consoleLogResponseBody: true, requestBody: createStackJson, url: "https://portainer.<yourdomain>.com/api/stacks?method=repository&type=1&endpointId=1", customHeaders:[[name:"Authorization", value: env.JWTTOKEN ], [name: "cache-control", value: "no-cache"]]
          }

        }
      }
    }
  }
}
Jenkinsfile

Jenkins Job Creation

Last step I promise….

Go to your Jenkins and create a new job and configure the following options:

  • Check GitHub project and set the Project url to the URL of the GitHub repo
  • Specify your build trigger under the Build Triggers section
  • In the Pipeline section select Pipeline script from SCM which tells Jenkins to pull the Jenkinsfile from the GitHub repo
  • In the Pipeline section select Git for the SCM section
  • In the Pipeline section under Repositories put the GitHub repo URL and select the Github credentials you created earlier from the Credentials dropdown
  • In the Pipeline section for the Script Path put the path in the repo to the Jenkinsfile

Summary

Awesome! You should now have a functional pipeline which automatically runs the tests for your project, creates a new Docker image for said project, and deploys/redeploys it, all in one fell swoop.

Useful Links