This in-depth blog strips away the unnecessary mystique of CI/CD, looking at the tools and processes that are key to speeding up your software delivery lifecycle.
When it comes to the benefits of adopting a DevOps culture, the jury is in: In today’s highly competitive and fast-moving markets, closely aligned workflows for development and operations teams are no longer just nice to have. Enterprises that are consistently ahead of the business performance curve use DevOps to achieve greater agility and flexibility, faster time to innovation, and better-quality code. These winning enterprises are also embracing microservices-based application architectures that replace monolithic codebases with loosely coupled services and components.
The backbone of a DevOps shop is a continuous integration and continuous delivery/deployment (CI/CD) pipeline. While legacy waterfall Software Development Lifecycles (SDLC) tack on merging and testing processes at the end of a lengthy and often complex cycle, CI/CD’s automated process continuously integrates testing throughout development and deploys your application into production—all with minimal human intervention.
The advantages of implementing a CI/CD pipeline may be widely acknowledged, but it still can be seen as a daunting task. This blog post demystifies CI/CD by explaining what it is and the considerations you need to apply when designing and building your own pipeline.
CI/CD Building Blocks
The basic building blocks of a CI/CD pipeline, as shown in the diagram below, are:
Continuous integration of small (minimally viable) code chunks into the application; merges take place only if automated unit tests are successful.
Continuous delivery of the merged application to staging and test environments; here, further automated testing is conducted to ensure the application is fit for deployment into production.
Continuous deployment of the tested application into the production environment; there is minimal disruption to end-users, and rollback procedures are in place in case a deployment causes runtime issues.
Three overriding principles drive a CI/CD pipeline:
Automation: Each step (build, test, merge, deliver, deploy) is a set of rules-based automated processes. The successful completion of one step automatically triggers the next one, reducing routine manual intervention to a minimum—often limited to a mere final approval prior to deployment into production.
Short feedback loops: Failure to uphold predefined tests, runtime metrics, or any other anomalous application behaviour automatically triggers alerts. Given the continuous nature of the pipeline, these feedback loops are short (‘fail fast’), allowing DevOps teams to troubleshoot and fix issues at an early stage.
Infrastructure as code: CI/CD pipelines promote ‘governed’ self-service provisioning and automated scaling through infrastructure-as-code methodologies such as pre-approved templates and images.
In GitLab CI/CD terms, for example, a pipeline is made up of three core components:
Jobs: These define what has to be done, e.g., compiling code or running a test.
Stages: These dictate when the jobs run. The pipeline proceeds to the next stage only when all jobs in the current stage have completed successfully; if any job fails, the pipeline stops.
Runners: These execute the jobs. With an adequate number of runners, multiple jobs in a single stage can run concurrently.
A simple GitLab CI/CD pipeline would look something like this:
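As a minimal illustrative sketch (the stage, job, and script names below are placeholders, not a production configuration), a three-stage `.gitlab-ci.yml` ties the components above together:

```yaml
# Illustrative .gitlab-ci.yml: three stages, one job each.
# Job names and echo commands are hypothetical placeholders.
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - echo "Compiling the application..."

unit-test-job:
  stage: test
  script:
    - echo "Running unit tests..."

deploy-job:
  stage: deploy
  environment: production
  when: manual   # a final human approval gate before production
  script:
    - echo "Deploying to production..."
```

Pushing a commit triggers the pipeline; `unit-test-job` runs only if `build-job` succeeds, and the manual `when: manual` gate reflects the minimal human intervention described above.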
Designing Your CI/CD Pipeline
The ideal CI/CD pipeline will differ from organisation to organisation and even from team to team within the same organisation. Since there is no one-size-fits-all CI/CD solution, you need to consider various aspects when designing and implementing the pipeline. This is, of course, after you have first defined your requirements and objectives, with buy-in for a roadmap from all business and technical stakeholders.
At the highest level, you have to choose between an open-source and a proprietary (third-party vendor) platform. In either case, you also need to decide whether to embrace a managed or self-hosted solution. Your choices here will depend to a great extent on the skills and capacity of your IT team, regulatory requirements, and the degree of flexibility that you want or need in your CI/CD implementation.
Here’s a tip, based on our experience in helping numerous customers transition to CI/CD: Start your implementation small and in a private cloud on-premises. It is often easier to start with CI, for example, and only later add CD. As you gain mastery, you can extend your CI/CD capabilities, both vertically (more features) and horizontally (across the organisation), and move your CI/CD pipeline to the public cloud for greater flexibility and scalability.
Whether building your own CI/CD stack or adopting a fully managed solution, the types of tools or capabilities you will need include:
Version control / source code management / revision control: There are a number of free and open-source systems available, such as Git and CVS. Pushing code to your repository triggers the CI/CD pipeline, and the pipeline’s results are reported back against the corresponding commit or merge request.
CI/CD orchestrator: As your CI/CD pipeline and environments become more complex and extensive, you will need a centralised CI/CD management and orchestration platform. There are many solutions available, from managed services (GitLab SaaS, AWS CodePipeline) to self-managed automation servers (Jenkins, GoCD, Atlassian Bamboo, Spinnaker, to name just a few).
Automated testing: Automation, in general, and automated testing, in particular, are at the heart of CI/CD—from unit testing, which is often built into the IDE framework, to integration testing and even final QA checks. There are many commercial and open-source automated testing systems to choose from.
Infrastructure as code (IaC): This handles automated and consistent environment provisioning, including development, testing, staging, and production environments. Some of the better known IaC tools are Ansible, Chef, Puppet, and Terraform.
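To make the IaC idea concrete, here is a minimal Ansible playbook sketch; the host group, package, and template file names are hypothetical, and a real playbook would be tailored to your inventory and environments:

```yaml
# Illustrative Ansible playbook: provision a staging web server
# from a pre-approved template. All names here are hypothetical.
- name: Provision a staging web server
  hosts: staging_web
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Deploy site configuration from a pre-approved template
      ansible.builtin.template:
        src: templates/site.conf.j2
        dest: /etc/nginx/conf.d/site.conf
      notify: Reload nginx

  handlers:
    - name: Reload nginx
      ansible.builtin.service:
        name: nginx
        state: reloaded
```

Because the playbook is declarative and idempotent, running it repeatedly from the pipeline yields identical environments, which is exactly the ‘governed’ self-service provisioning described above.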
Container runtime and orchestration tools: For containerised applications, your CI/CD stack must be integrated with your container management ecosystem, such as Docker, Kubernetes, Amazon Elastic Container Service (ECS) or Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), and more.
Monitoring and telemetry: The key metrics to monitor (often referred to as the DORA metrics) are deployment frequency (the higher the better), mean lead time for changes (the shorter the better), mean time to recovery (the shorter the better), and change failure rate (the lower the better). The CI/CD orchestrators mentioned above typically provide metrics, but it is up to you to use them to fine-tune and optimise the performance of your CI/CD pipeline.
A CI/CD Case Study
A great example of CI/CD maturity in action is a client within the Ekco group that provides an application for the finance space, enabling communication and collaboration among stakeholders involved in the insolvency process. Their flagship legacy product was a monolithic SQL-based Windows application that was hosted on-prem and consumed over a local network. The company realised that in order to maintain their competitive edge, they would need to offer more flexible delivery options, as well as accelerate their time to new features.
Initially, they turned to Ekco brand Cloudhelix to provide a temporary pre-modernisation solution while the client rebuilt the application from the ground up as a cloud-native web app. Ekco implemented a simple and reliable solution that used VMware Horizon to stream the legacy application from the data centre to the user’s desktop, where it ran on a thin client or web browser. Once the client had built a modern application, they found the deployment process unwieldy, limiting their ability to iterate quickly. Cloudhelix were asked to create a CI/CD pipeline to streamline their deployment workflows.
The solution distilled the application installation process down to a single, exportable .war file that contained all the required app components. The exported file is run on a custom container created by Cloudhelix that’s hosted in a Kubernetes cluster. The Kubernetes hosting architecture ensures non-disruptive deployments, using strategies such as canary and blue-green releases to eliminate most deployment-related downtime.
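Cloudhelix’s actual manifests are not public, but a generic blue-green setup in Kubernetes can be sketched as follows; all names, labels, and the image reference are hypothetical. The Service routes traffic by a `track` label: the new (“green”) Deployment is rolled out alongside the old (“blue”) one, and once it is healthy the selector is flipped to cut traffic over with no downtime.

```yaml
# Illustrative blue-green setup: flip the Service's "track"
# selector from blue to green once the new version is healthy.
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: myapp
    track: blue      # change to "green" to cut traffic over
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      track: green
  template:
    metadata:
      labels:
        app: myapp
        track: green
    spec:
      containers:
        - name: app
          image: registry.example.com/myapp:2.0.0   # hypothetical registry/tag
```

If the green version misbehaves, rolling back is a matter of flipping the selector back to `track: blue`, which matches the rollback procedures mentioned in the building blocks above.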
Ekco wrapped up all the steps of the automated deployment process in an end-to-end GitLab pipeline. This mature, repeatable CI/CD process allows the company’s geographically dispersed developers to push code and provide new features to its clients with zero downtime.
Using Kubernetes also means that their cloud product can be run reliably in any environment, from private cloud to public cloud and even bare-metal.
The aim of this article is to strip away the unnecessary mystique surrounding CI/CD. Hopefully, you’ve found this useful and it provides a practical set of guidelines your development and operations teams can use. However, we cannot emphasise enough that the effectiveness of any CI/CD stack is dependent on fundamental organisational changes that promote agile versus waterfall software development lifecycles.
The adoption of CI/CD also goes hand in hand with the adoption of modern microservices-based application architectures over legacy monolithic architectures. It’s all part of a wider transition.
At Ekco, we have a track record for helping organisations deepen their DevOps and CI/CD capabilities, with a particular focus on private cloud deployments. This suits organisations at a particular stage in their cloud lifecycle – who have cloud-native ambitions, but also have legacy issues to conquer first.
If you’re interested to learn how you can forge ahead fearlessly with DevOps, CI/CD and application modernisation, contact us today.