Containerisation

Virtualisation offered a major step forward for software companies, who could reduce their IT costs while retaining overall control of the application by keeping it on their infrastructure. It also made things appealing for end users, who began to enjoy free trials, flexible, monthly billing and better integration between their favourite tools.

Fast forward a decade and it’s hard to imagine anything but the SaaS model for application deployment from a financial perspective. However, on the technology side, the way SaaS applications are hosted and managed is fundamentally changing. The world is moving on from virtualisation and it seems everybody is talking about containers.

In this blog post, we’ll look at why companies are containerising their software; the benefits on offer for those who do; what’s involved in moving to containers; and more.

Before we get stuck in, if you’d like to read a high-level overview of containers, check out our e-book, An Introduction to Containerisation, which you can download for free today.

Why are software companies containerising their applications?

Delivering a fully managed, secure, always-on platform for a large-scale application is expensive. With a virtualised application, companies must ensure security and availability across every component in the environment, which is complex as well as costly.

While the complexity can be managed with good in-house skills and the right external providers, many companies are on a constant mission to reduce their hosting costs. Or rather, they often feel that their hosting costs are growing out of proportion to the growth of the business.

Rather than cutting corners or settling for substandard hosting options, companies can meet this challenge with containers, which decouple the application from its supporting infrastructure.

As a container holds its own runtime information, the infrastructure platform supporting the application can be architected differently. Your ops teams define a set of requirements (in line with the application’s SLA) and build them into a generic, fault-tolerant and scalable infrastructure platform. With the runtime information held within each container, the need for a hypervisor is removed and many containers can run side by side on a single host OS.
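As a minimal sketch of that idea, the commands below start three unrelated containers side by side on one host; the images are public ones chosen purely for illustration:

    # Three containers sharing one host OS kernel: no hypervisor,
    # no separate guest operating system per workload.
    docker run -d --name web   nginx:1.27
    docker run -d --name cache redis:7
    docker run -d --name db    -e POSTGRES_PASSWORD=example postgres:16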

There are many reasons why containerisation is becoming so popular, but perhaps the most significant for the software vertical is this: companies are able to decouple the operational cost of the application from the cost of the infrastructure beneath it.

Moving away from monolithic applications

The architecture of containers provides a standard, unified way of breaking an application up into distributed portions. This means that, despite being parts of one application, individual containers can live on virtual machines, on physical machines, in the cloud or on-premises.

This means you can host each part of an application where it’s best suited, which is a great way to reduce application complexity and cost. With a standard, virtualised application, you have to configure the VMs to run in a specific environment; that is something you can leave behind with containers.

This is a huge benefit because it allows you to ensure the resilience or security of a particular container through its location, knowing that you don’t have to configure it for its environment or worry about the dependencies or knock-on effects this might have.
It also becomes easier to debug and deploy your application: issues can be fixed within a particular container without taking down the whole application, which makes debugging far less frustrating for your engineers.


Container management

Kubernetes and other container management systems allow you to build the environment you require on top of any cloud platform. Your teams build and manage a cluster of containers, and the services running within those containers handle requests from end users. This takes your development teams’ focus away from the infrastructure while still giving your engineers the flexibility to decide where a particular cluster or container should be hosted.
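As a minimal sketch of what this looks like in practice, the Kubernetes manifest below declares a small deployment and exposes it as a service. The names, image, replica count and ports are illustrative assumptions, not taken from any real application:

    # Hypothetical example: keep three replicas of one containerised
    # service running, exposed behind a single stable endpoint.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: orders
    spec:
      replicas: 3                    # Kubernetes maintains three running containers
      selector:
        matchLabels:
          app: orders
      template:
        metadata:
          labels:
            app: orders
        spec:
          containers:
            - name: orders
              image: registry.example.com/orders:1.4.2   # assumed image and tag
              ports:
                - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: orders
    spec:
      selector:
        app: orders                  # routes end-user requests to the replicas
      ports:
        - port: 80
          targetPort: 8080

Because the manifest says nothing about which machines the containers land on, the same file can be applied to a cluster on any cloud platform or on-premises.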

Kicking off a containerisation project

A project begins with a very high-level discussion between the right stakeholders. Deciding what your business is trying to achieve and, more importantly, what you’re all trying to avoid, will produce some high-level objectives. A stakeholder from client services, for example, might define success as current customers seeing no change (and needing no upskilling) once the application is containerised. There will be a range of goals and motives, and these will dictate the decisions you make going forward.

Once you’ve generated a list of objectives, it’s time to start thinking about a proof of concept: picking a part of your application, detaching the source code and doing everything you can to get it into a container. This will provide a hands-on understanding of the kind of work involved in the project ahead.
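To give a feel for that step, a first attempt at containerising one component might be a Dockerfile along these lines. The base image, file names, port and entry point here are assumptions for illustration; yours will depend on the component you pick:

    # Hypothetical proof of concept: package one detached component
    # of the application as a container image.
    FROM node:20-alpine              # assumed runtime for this component
    WORKDIR /app
    COPY package*.json ./
    RUN npm ci --omit=dev            # install production dependencies only
    COPY . .
    EXPOSE 8080                      # assumed port the component listens on
    CMD ["node", "server.js"]        # assumed entry point

Building and running the image locally is usually enough to surface the hidden dependencies and configuration the component quietly relies on.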

There’s a chance that it isn’t possible to containerise the application in its current form. If it can’t be broken down into workable modules, then perhaps a starting point is to look at creating a single container for the entire application.

At this stage, or at an agreed point early on in the project, your developers should agree not to add anything new to the legacy application. Instead, new features will be added iteratively as new, separate microservices down the line. If you’re looking to fully convert to containers, you may reach a point where rebuilding the application from scratch is the only option, but this depends on your project aims, which have already been defined.

As every application is unique, so is every containerisation project, and hopefully this gives a simple idea of how such projects are planned out. Containers fundamentally change how code is committed and how engineers interact with the application – so it’s worth spending the time up front to really understand everything before jumping in.

Enabling application portability

With a container image, you have packaged the application and its dependencies into a self-contained bundle. This means that, to run it in a given environment, you no longer have to build the OS before installing the application.

The application and its dependencies are described in a Dockerfile, which acts as a blueprint for the application and can be stored in version control, while built versions of the image are stored in a container registry. This enables the concept of infrastructure as code: you can easily make changes to the image, try things out and, when required, roll back to an earlier version of the image, failing fast without thinking about the supporting infrastructure in any great detail.
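As a rough sketch of that workflow (the registry address, image name and version tags are illustrative):

    # Build a new version of the image from the Dockerfile and push it
    # to a container registry alongside earlier versions.
    docker build -t registry.example.com/myapp:2.4.0 .
    docker push registry.example.com/myapp:2.4.0

    # Rolling back is simply running the previous version from the registry.
    docker pull registry.example.com/myapp:2.3.1
    docker run -d -p 8080:8080 registry.example.com/myapp:2.3.1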

If your end goal is to run your entire application on AWS, for example, containerising it within its current environment would play a big part in helping you achieve this. The really powerful thing here is that, once you have containerised, it is easier to run your application on any cloud platform. You are at far less risk of vendor lock-in and have much more flexibility over where the application runs.

Besides this, once containerised, there’s no requirement for an upfront migration – you don’t need a new platform ready for the day your application has been containerised. The containerised version of the application will use fewer resources than the old version, and it doesn’t need to be tied to a particular environment.

This is brilliant from a project management perspective as each part of the project can be dealt with on its own. Once containerised and running on existing infrastructure, your teams can begin thinking about the next steps, including where the application could be migrated to and why.

Considering containers? Get in touch with us today – we can advise on where to start your project, how we might decouple your application and more. We can scope, plan and even carry out your application replatforming project for you.
