Continuous Deployments

Continuous Deployments is the next stage of automation, following on from its predecessors, continuous integration (CI) and continuous delivery (CD).

The integration phase of a project used to be its most painful step. Depending on the size of the project, developers would work in isolated teams, each dedicated to a separate component of the application, for a very long time; when the time came to integrate those components, a host of issues (unmet dependencies, interfaces that do not communicate, and so on) were dealt with for the first time. The idea of CI was conceived to combat this problem.

Continuous Integration (CI)

In CI, code is integrated, built and tested in a development environment, usually on a CI server (like Jenkins, Drone or Travis-CI), frequently (at least a couple of times a day) so that integration issues are caught, and dealt with, early.

[Diagram: the CI process]

The drawing above illustrates the basics of CI:

  • Developers work on the software, possibly on separate components
    • They commit their code frequently to a shared code repository (using Git or Mercurial for example)
    • They usually work on different features on separate branches
    • Once they are comfortable their work is stable (aided with passing tests) they merge their feature branches into the mainline branch
  • These commits trigger a CI pipeline on the CI server to execute
  • The pipeline usually runs the following steps
    • Retrieve the code from the repository
    • Run static analysis tools to check that coding standards and guidelines are adhered to, and to catch obvious errors in the code
    • Build the project (including any dependencies)
    • Run tests that do not require the code to be deployed; these are usually unit tests
    • Package the software and deploy it to a test environment; with containers, this step produces an immutable image and deploys it
    • Run integration tests on the test environment
  • When the above steps have succeeded without errors, the CI activity is finished, generating artefacts (including the immutable image of the software) that can be used further along, for example for deployment to other environments
  • If any of the steps in the pipeline fails, the relevant personnel, usually the developer, are notified, and corrective measures must be taken with high priority.

The above steps describe a very general pipeline; different teams will have slightly varying cases. What constitutes a failure in the pipeline may also be influenced by the standards of the team: for example, one team may fail the pipeline when unit-test code coverage falls below an agreed percentage, while another may fail it on non-compliance with coding standards.
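The general pipeline above can be sketched as a sequence of steps where the first failure stops the run and triggers a notification. This is a minimal illustration only: the step names, the stubbed step bodies, and the coverage threshold are all hypothetical stand-ins for real tools.

```python
# Minimal sketch of a CI pipeline: each step either succeeds or fails the
# whole run. Step names and the coverage threshold are hypothetical.
COVERAGE_THRESHOLD = 80  # percent; chosen by the team, not a standard value

def run_pipeline(steps):
    """Run steps in order; stop and report the first failure."""
    for name, step in steps:
        if not step():
            return f"FAILED at '{name}' - notify the developer"
    return "SUCCESS - artefacts ready for the next stage"

# Stubbed steps standing in for the real tools (VCS checkout, linter,
# compiler, test runner, deployment scripts...)
steps = [
    ("checkout", lambda: True),
    ("static-analysis", lambda: True),
    ("build", lambda: True),
    ("unit-tests", lambda: True),
    ("coverage-gate", lambda: 85 >= COVERAGE_THRESHOLD),  # 85% measured
    ("deploy-to-test", lambda: True),
    ("integration-tests", lambda: True),
]

print(run_pipeline(steps))  # SUCCESS - artefacts ready for the next stage
```

The coverage gate shows how a team-specific standard becomes a pipeline failure criterion: swap the measured value below the threshold and the run reports a failure instead.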

CI brings several advantages to development teams, including:

  • Less pain in integration, as integration is done frequently and issues are caught early
  • Allows good practices, like coding standards, to be enforced, and gives a measure of how well the team is keeping up with them
  • Encourages automation of jobs that were previously done manually, since the CI tool performs them automatically in the pipeline
  • Encourages automation of tests, since measuring failure in the pipeline requires a way to test the build automatically. Unit tests, and metrics like code coverage, help improve the quality of the code produced during development.
  • The artefacts produced during the CI pipeline, like the immutable image for containers mentioned earlier, can be used for other purposes elsewhere, like QA testing etc.

The outcome of the CI pipeline is simply a failure or a successful build that passed post-deployment testing; it is assumed that manual validations are performed afterwards, before the package is deployed to production.

Continuous Delivery (CD)

CD takes this further by automating those validations, such that at the end of the process the artefacts can be deployed to production. The main difference between CI and CD is therefore the confidence the team puts in the pipeline: while a successful CI pipeline produces artefacts that require manual validation before deployment to production, a successful CD pipeline produces artefacts that can be deployed to production directly.

The diagram below describes the CD process:

[Diagram: the CD process]

The decision to make the deployment live relies on other factors, which may include marketing deciding when to launch a certain set of features, or the release cycles agreed by the team.

Note that the process is essentially the same as the CI one, with the following differences:

  • While the CI server monitors every commit to the shared code repository, the CD server monitors certain milestones, configurable by the team. For example, the team may decide to use tags created on certain branches to mark releases that the CD server will build packages from.
  • Since there are no manual validation steps on the packages built in the CD process, the automated tests in this process tend to be more robust.
  • Note that even though there are products marketed as CD servers, the execution of the pipeline is the same (albeit the actions taken within it may differ), so the same products can be used for both CI and CD; Jenkins, for example, is widely recommended for both.
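The milestone-based trigger in the first point can be illustrated with a small sketch: a CD server that only reacts to commits carrying a release tag. The `vMAJOR.MINOR.PATCH` tag convention here is an assumption for illustration; teams configure their own rules.

```python
import re

# Hypothetical rule: the CD server only builds commits tagged as releases,
# e.g. "v1.4.0", while the CI server reacts to every commit.
RELEASE_TAG = re.compile(r"^v\d+\.\d+\.\d+$")

def should_trigger_cd(tag):
    """Return True when a commit's tag marks it as a release build."""
    return tag is not None and bool(RELEASE_TAG.match(tag))

print(should_trigger_cd("v1.4.0"))       # True: release tag, build a package
print(should_trigger_cd("feature-wip"))  # False: not a release tag
print(should_trigger_cd(None))           # False: plain commit, CI only
```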

Continuous Deployments

Continuous Deployments, the next stage of automation after these predecessors, takes the process further and automatically deploys every build that passes all automated validations; the whole process is fully automated, with no human intervention.

This pipeline requires automated tests to run in the production environment after deployment. The same tests used in the testing (or QA) environment may be run again, or the team may choose to run the full suite in testing while running only integration tests in production.
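A post-deployment check can be as simple as polling the new instance's health endpoint and declaring the deployment good only when it reports healthy. The endpoint shape (`{"status": ...}`) and the retry count below are assumptions for illustration; the simulated responses stand in for real HTTP calls.

```python
# Sketch of a post-deployment smoke test: poll the new instance's health
# endpoint and only declare the deployment good when it reports healthy.

def is_healthy(health_response):
    """Hypothetical contract: the endpoint returns {"status": "ok"} when up."""
    return health_response.get("status") == "ok"

def verify_deployment(fetch_health, attempts=3):
    """Return True if any of `attempts` health checks passes."""
    return any(is_healthy(fetch_health()) for _ in range(attempts))

# Simulated responses standing in for real HTTP calls to production
print(verify_deployment(lambda: {"status": "ok"}))        # True
print(verify_deployment(lambda: {"status": "starting"}))  # False
```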

One thing to note about automated deployments is that it is better to deploy smaller pieces of software in this manner rather than huge monoliths (one of the advantages of using microservices), simply because it is much easier to track, and test, features deployed automatically when they form a small, clearly defined, related set (belonging to the same Bounded Context) rather than the fully blown feature set of a monolith.

Since the deployments happen automatically, it is a good idea to use a technique called feature toggles to control which features are made available to end users, and when. It is also recommended to use a proxy service to control which instance serves the end users: because deployments and post-deployment tests happen automatically, a proxy service allows us to keep pointing at the current instance while these steps take place. If the tests pass, the proxy service is updated to point to the new instance; if they fail, a notification is sent out without any update to the proxy service, and hence without interruption to the end users.
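The two techniques can be sketched together: a feature-toggle lookup that gates what users see, and a proxy that only switches to the new instance after its post-deployment tests pass. All names here (the toggle keys, the "blue"/"green" instance labels) are hypothetical, and a real setup would use a toggle service and a load balancer rather than in-memory objects.

```python
# Feature toggles: deployed code can carry features that are not yet
# visible to end users until the toggle is flipped.
FEATURE_TOGGLES = {"new-checkout": False, "dark-mode": True}

def feature_enabled(name):
    return FEATURE_TOGGLES.get(name, False)

class Proxy:
    """Points end users at one instance; switches only on passing tests."""

    def __init__(self, current):
        self.active = current  # instance currently serving end users

    def promote(self, candidate, tests_passed):
        """Switch to the candidate instance only if its tests passed."""
        if tests_passed:
            self.active = candidate
            return "switched"
        return "kept current instance - notify the team"

proxy = Proxy("blue")
print(proxy.promote("green", tests_passed=True))  # switched
print(proxy.active)                               # green
```

If the candidate's tests fail, `promote` leaves `active` untouched, so end users never see the broken instance; this is the blue/green pattern the conclusion mentions.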

Conclusion

Continuous Deployments is indeed an advanced method of fully automating the deployment pipeline, and it brings a lot of benefits to the team; however, adopting it successfully requires adoption in phases. As you may have noticed, the first phase of the pipeline (CI) is the most detailed, with the other phases building on top of it, and they are performed in a certain order. So most teams begin with CI and build on that until they reach fully automated deployments.

It also requires discipline and improved ways of working: for example, automated tests are not just an enhancement to the development process, they are a requirement for automated deployments to work. The use of proxy services, and techniques like feature toggles and blue/green deployments, are also very useful to adopt. But what complements Continuous Deployments exceptionally well, in creating the next level of software delivery, is containers and microservices.
