One common use case for Pipelines is to automatically build a Docker image for your code and push that image to a container registry whenever you git push your code to Bitbucket. This article assumes you are at least minimally familiar with the structure of Bitbucket Pipelines files. If not, you can see the official documentation here: https://confluence.atlassian.com/bitbucket/get-started-with-bitbucket-pipelines-792298921.html and here: https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html

Imagine you have an extremely simple Node.js app. In your top-level git directory, you will need two files: a Dockerfile and a bitbucket-pipelines.yml. Your directory structure might look like this:

myapp
|-- .git
|-- Dockerfile
|-- bitbucket-pipelines.yml
|-- src
    |-- node_modules
    |-- package.json
    |-- main.js

The specifics of your Dockerfile depend on your application and are out of scope for this article, but it might look something like this:

FROM node:10.15.3

WORKDIR /app

# copy package.json on its own first so the npm install layer
# is cached when only the application source changes
COPY ./src/package.json /app
RUN npm install

COPY ./src /app
CMD node main.js

Now that we have an app and a Dockerfile, we need to set up our Docker registry. At Think Engineer, we use Docker Hub. If you don't already have an account on Docker Hub, go to https://hub.docker.com/ and sign up. Once you have an account, you can either create an image repository under your own personal account, or you can create an Organization. An Organization is the best option if you are working as part of a team, and has the benefit that you can add service accounts - we'll get into service accounts in a little bit.

Under your organisation (if you have one) click Repositories:

[Screenshot: the Repositories tab on Docker Hub]

Click Create Repository and fill in the name and description. You can decide to make the repository public or private; private repositories will not be visible to anyone outside your organisation, but will also cost money. Public repositories can be downloaded and used by anyone, but are free.

[Screenshot: the Create Repository form on Docker Hub]

At this point, I like to add in a service user. A service user is a user account that is not tied to a real person, but is used to access certain services from tools such as Bitbucket Pipelines. Create a new user account on Docker Hub, invite them to your organisation, and give them read/write access to the relevant image repository.

The reason for creating this additional service user is so that we can access our Docker Hub organisation from Bitbucket Pipelines without using a real human's login details. This is important because, if that person leaves the team and their access is revoked, we don't want our builds to suddenly break.

The next step is to go to the git repository settings for our project in Bitbucket and enable Pipelines:

[Screenshot: enabling Pipelines in the Bitbucket repository settings]

The final setup step before we can start writing our pipeline file is to add our service account credentials on the Pipelines Repository Variables page. This will allow us to access the Docker Hub username and password as environment variables in the pipeline.

Since the password is sensitive information, make sure to mark it as Secured.

[Screenshot: marking the password repository variable as Secured in Bitbucket]

Now that we have all the pieces in place, we can start writing our bitbucket-pipelines.yml file.

Workflow

We assume that you also want to deploy your code to different environments based on the git branch. For example, whenever you push to the develop branch, you want the code to automatically deploy to the development environment. Whenever you push to the master branch, you want the code to deploy to the staging environment. Finally, when you have done stability testing in your staging environment, you want to git tag your master branch with a version number and promote the already-built Docker image to the production environment.

The overall workflow could be visualised like this:

[Diagram: branch and tag to environment workflow]

Pipelines

Open up bitbucket-pipelines.yml in your favourite code editor and add the following lines at the top of the file:

options:
  docker: true
  

This tells Pipelines that we want to use the Docker service in each step of our pipelines. Alternatively, you can enable Docker on a per-step basis. See the official docs for more information: https://confluence.atlassian.com/bitbucket/run-docker-commands-in-bitbucket-pipelines-879254331.html
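
If you would rather enable it per step, the equivalent is to list docker under that step's services. A minimal sketch:

pipelines:
  default:
    - step:
        name: build docker image
        # enable the Docker service for this step only
        services:
          - docker
        script:
          - docker version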

Next we need to add a pipeline for our develop branch:

options:
  docker: true

pipelines:
  branches:
    develop:
      - step:
          name: build and push docker image
          script:
            # build the Docker image 
            - export IMAGE_NAME=myorg/myapp:develop-$BITBUCKET_COMMIT
            - docker build -t $IMAGE_NAME .
            # authenticate with the Docker Hub registry
            - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
            # push the new Docker image to the Docker registry
            - docker push $IMAGE_NAME
   

Let's break this down a bit. We have a step in our develop branch pipeline called build and push docker image which defines a script. The first line of the script defines the name of the image we wish to build and exports it as an environment variable so that we can use it later in the script. Docker image names have the format <name>:<tag>, for example myorg/myapp:develop-<commit hash>.

Notice that we give the tag as develop-$BITBUCKET_COMMIT. $BITBUCKET_COMMIT is a default variable that will be replaced with the actual commit hash, which acts as a unique ID for this build. This allows us to easily trace any image in Docker Hub back to the exact commit that caused it to be built. Specifying the branch is also useful so that we can use deployment tools further down the line to automatically deploy any develop-* images to our development environment.

The rest of the script is just standard Docker commands. The docker build command generates an image, and uses the -t flag to set the name and tag for the built image. The docker login command grabs the username and password of the service user account we set up earlier in our Pipeline Variables page and logs us in to Docker Hub. Finally, we upload the image to the registry using docker push.
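
As a side note, newer versions of the Docker client will warn that passing --password on the command line is insecure, since the secret can end up in logs and shell history. If you see that warning, the supported alternative is to pipe the password in via --password-stdin:

- echo $DOCKER_HUB_PASSWORD | docker login --username $DOCKER_HUB_USERNAME --password-stdin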

You can double check that your pipeline file is valid by pasting it into the validator here: https://bitbucket-pipelines.atlassian.io/validator

After committing the file, it may take a few minutes for Pipelines to get to work. You may need to refresh the Pipelines page under the repo in Bitbucket before you see anything. After a while, you should see a Pipeline job pop up. You can check the progress by clicking on the job and watching the output of the script as it runs.

Once the Pipeline has finished, you should be able to see a new tag for your image in Docker Hub.

Deployment

You may want to define an additional step in your pipeline file under the develop branch that triggers some sort of deployment:

options:
  docker: true

pipelines:
  branches:
    develop:
      - step:
          name: build and push docker image
          script:
            # build the Docker image 
            - export IMAGE_NAME=myorg/myapp:develop-$BITBUCKET_COMMIT
            - docker build -t $IMAGE_NAME .
            # authenticate with the Docker Hub registry
            - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
            # push the new Docker image to the Docker registry
            - docker push $IMAGE_NAME
      - step:
          name: deploy to development environment
          script:
            # your deployment script here
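
As a rough sketch of what that deployment step might contain, here is a single-host deployment over SSH. This assumes you have configured an SSH key for the pipeline; dev.example.com, the deploy user, and the myapp container name are all placeholders for your own infrastructure:

      - step:
          name: deploy to development environment
          script:
            # variables do not persist between steps, so re-derive the image name
            - export IMAGE_NAME=myorg/myapp:develop-$BITBUCKET_COMMIT
            # pull the new image on the target host and restart the container
            - ssh deploy@dev.example.com "docker pull $IMAGE_NAME && (docker rm -f myapp || true) && docker run -d --name myapp $IMAGE_NAME"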

Alternatively, you might use a “pull” strategy where you have an agent running in your development environment that watches for new images on Docker Hub and pulls any image matching the format myorg/myapp:develop-*. This is the approach that we have taken at Think Engineer for our Kubernetes-based deployment environments, where we use an open source tool by Weaveworks called Flux. See more about this approach, often called GitOps, on Weaveworks' website: https://www.weave.works/technologies/gitops/ and the Flux GitHub page: https://github.com/fluxcd/flux

Extending the Pipeline for Master Branch

Now that we have our develop pipeline happily building our Docker images and (hopefully) deploying them to the development environment, we can extend the functionality to the master branch.

We could just copy-paste the entire develop pipeline and change every instance of develop to master, but then if we have to change the pipeline in the future, we have to change it in two places. It's much smarter to instead define our original pipeline using a YAML anchor so we can reuse the same configuration multiple times:

definitions: 
  steps:
    - step: &build-and-push
        name: build and push docker image
        script:
          # build the Docker image 
          - export IMAGE_NAME=myorg/myapp:$BITBUCKET_BRANCH-$BITBUCKET_COMMIT
          - docker build -t $IMAGE_NAME .
          # authenticate with the Docker Hub registry
          - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
          # push the new Docker image to the Docker registry
          - docker push $IMAGE_NAME

The line - step: &build-and-push defines a YAML anchor called build-and-push that can later be referenced with *build-and-push:

options:
  docker: true

definitions: 
  steps:
    - step: &build-and-push
        name: build and push docker image
        script:
          # build the Docker image 
          - export IMAGE_NAME=myorg/myapp:$BITBUCKET_BRANCH-$BITBUCKET_COMMIT
          - docker build -t $IMAGE_NAME .
          # authenticate with the Docker Hub registry
          - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
          # push the new Docker image to the Docker registry
          - docker push $IMAGE_NAME
          
pipelines:
  branches:
    develop:
      - step: *build-and-push
    master:
      - step: *build-and-push

Notice that we have also set the image name as myorg/myapp:$BITBUCKET_BRANCH-$BITBUCKET_COMMIT so that the branch name will be automatically populated depending on which branch triggered the build.

Retagging an Existing Image

Now that we have images building for both of our branches, we need one more pipeline to deal with git tags.

Ideally, we don't want to rebuild the image when we deploy to production, as there is no guarantee that a rebuild will produce an identical container image. We want to deploy the exact same image as the one running in staging.

Pipelines supports running a pipeline on tag push like this:

options:
  docker: true
  
definitions:
  # ...

pipelines:
  branches:
    develop:
      - step: *build-and-push
    master:
      - step: *build-and-push
  tags:
    '*':
      - step:
          name: retag and repush image
          script:
            # get the image that was built in the master branch
            - export IMAGE_NAME=myorg/myapp:master-$BITBUCKET_COMMIT
            - export NEW_IMAGE_NAME=myorg/myapp:$BITBUCKET_TAG
            # authenticate with the Docker Hub registry
            - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
            # pull the image down
            - docker pull $IMAGE_NAME
            # retag the image using the git tag
            - docker tag $IMAGE_NAME $NEW_IMAGE_NAME
            # push the image back
            - docker push $NEW_IMAGE_NAME

Let's break this down. The first element under tags is the glob pattern that determines which tags this pipeline runs for. In this example we have used '*', which will match any and all tags. We could have used '*.*.*' instead to specify that only SemVer-style tags trigger this pipeline. This is useful if we have various tag formats for other purposes.
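
For example, restricting the pipeline to SemVer-style tags only requires changing the glob; the step body stays exactly the same:

pipelines:
  tags:
    '*.*.*':
      - step:
          name: retag and repush image
          script:
            # same script as above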

Just like the branches, we can now run a series of steps. Using IMAGE_NAME=myorg/myapp:master-$BITBUCKET_COMMIT we can find the image that was created by the commit pointed at by this git tag. We then authenticate with Docker Hub and use docker pull to pull a copy of the image. Next we use the docker tag command to assign a new name and tag to the image. We make use of the built-in variable $BITBUCKET_TAG, which will be replaced by the actual tag. Finally we push the image back to Docker Hub with the new name and tag. This results in a new tag for the already existing image in Docker Hub - no rebuild required.
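
If you want to convince yourself that this really is a retag rather than a rebuild, the same three commands work in a local shell (the commit hash and version number here are made up):

docker pull myorg/myapp:master-3af8c21
docker tag myorg/myapp:master-3af8c21 myorg/myapp:1.0.0
docker push myorg/myapp:1.0.0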

Now, any automated deploy agents or scripts can grab that image and deploy it in our production environment.

There is one potential gotcha here: if you push both the master branch and the tag at the same time (or without sufficient time in between), your tag pipeline will fail. This is because two pipelines can run concurrently and you have a race condition; the tag pipeline may try to pull the image from the registry before the master branch pipeline has finished pushing it.

This isn't the end of the world, as pushing a tag should be a conscious action taken by the developer (or deployment team), since it usually represents a full release or milestone. In our staging/production example, we would want to make sure our master branch code has been thoroughly tested on staging before we push the tag which represents a release to production.
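
If you do need to push both at once, one simple mitigation is to retry the pull a few times before giving up. A sketch, replacing the single docker pull line in the tag pipeline (the retry count and delay are arbitrary):

            # retry for a while in case the master branch pipeline is still pushing
            - for i in 1 2 3 4 5; do docker pull $IMAGE_NAME && break || sleep 30; done
            # run the pull once more so the step fails if the image never appeared
            - docker pull $IMAGE_NAME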

Alternative Workflow using Deployments

A variation on the workflow presented in this article can be used to demonstrate one more feature of Bitbucket, Bitbucket Deployments: https://confluence.atlassian.com/bitbucket/bitbucket-deployments-940695276.html

Let's say that instead of deploying every master branch commit to staging, we want to wait until we have pushed a tag. We want the tagged build to be deployed to staging, and then, once we are happy that the code is stable, we want to manually promote the tagged build to the production environment.

Here is a bitbucket-pipelines.yml that would achieve that:

options:
  docker: true
  
definitions:
  # ...

pipelines:
  branches:
    develop:
      - step: *build-and-push
    master:
      - step: *build-and-push
  tags:
    '*':
      - step:
          name: deploy to staging
          deployment: staging
          script:
            # build the Docker image 
            - export IMAGE_NAME=myorg/myapp:staging-$BITBUCKET_TAG
            - docker build -t $IMAGE_NAME .
            # authenticate with the Docker Hub registry
            - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
            # push the new Docker image to the Docker registry
            - docker push $IMAGE_NAME
      - step:
          name: deploy to production
          deployment: production
          trigger: manual
          script:
            # get the image that was built in the previous step
            - export IMAGE_NAME=myorg/myapp:staging-$BITBUCKET_TAG
            - export NEW_IMAGE_NAME=myorg/myapp:production-$BITBUCKET_TAG
            # authenticate with the Docker Hub registry
            - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
            # pull the image down
            - docker pull $IMAGE_NAME
            # retag the image using the git tag
            - docker tag $IMAGE_NAME $NEW_IMAGE_NAME
            # push the image back
            - docker push $NEW_IMAGE_NAME

This time, we've added a two-step process to the tag pipeline. When the tag is first pushed, the Docker image is built and tagged with staging-$BITBUCKET_TAG. We've also marked this step with deployment: staging so that it will show up in the Deployments window.

The second step will retag the image as before, but this time we've added deployment: production and trigger: manual. The manual trigger means that this step will not run automatically, but will instead wait for a user to give the go-ahead.

If we now push a tag to Bitbucket, the first step will run and we will see a new image appear in Docker Hub. If we navigate to the Deployments page in Bitbucket, we can see the commit that caused the current “Staging” deployment.

After we've done whatever testing we need to do in our staging environment, we can promote the build, either by pressing Deploy on the Pipeline status screen or Promote on the Deployments screen. The final step in our script will now run, and the Docker image will be retagged for production.

The three possible environment types are Test, Staging, and Production. Test can be promoted to Staging, and Staging to Production. It is also possible to set up multiple environments of the same type from the Deployments settings screen. This could be useful, for example, to deploy to different geographical regions separately.
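
For example, assuming you have created an additional Production-type environment named production-eu on the Deployments settings screen, a pipeline step can target it by name:

      - step:
          name: deploy to production EU
          deployment: production-eu
          trigger: manual
          script:
            # region-specific deployment commands here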

Conclusion

I hope that this overview of how to build and publish Docker images using Bitbucket Pipelines has been useful. We've gone over how to run Docker commands in Pipelines, how to tag our image for different branches, how to utilise git tags to retag existing images, and how to use Deployments to manually promote our images to different environments.

There are of course many other options for how to set up your CI/CD workflow. What is presented here should be enough for you to experiment with what works best for your team and for the manner in which you actually deploy your Docker images to your infrastructure.