Introduction
Containerisation has become the de facto standard for deploying applications to cloud environments. In today’s world, it is almost impossible to talk about deployments without mentioning containerisation. From making it easy to set up development environments to adding stability and consistency to our deployments, containers have made our lives easier.
In this introduction, we will look at what containerisation is, how it works, its benefits, its challenges, and some common use cases.
What is containerisation?
Containerisation is the process of packaging an application’s code and everything it needs to run successfully into a single portable unit called a container image. The resources bundled into the image typically include the application code, the runtime, dependencies, and configuration details.
As we have stated earlier, the result of containerisation is a container image. But what is an image and what is a container?
What is a container image?
An image is the blueprint for creating containers. Although it contains the application code and everything the application needs to run, the image itself is just a static package. We cannot access our application from the image, because it contains only the packaged files. To use the image, we have to create a container from it.
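To make this concrete, here is a minimal sketch of how an image is defined, assuming a Docker-based workflow and a hypothetical Node.js app whose entry point is index.js (the base image tag, port, and file names are all illustrative):

```shell
# Write a minimal Dockerfile for a hypothetical Node.js app.
cat > Dockerfile <<'EOF'
# Start from a small image that already contains the Node.js runtime
FROM node:20-alpine
# Work inside /app within the image
WORKDIR /app
# Copy the dependency manifest and install dependencies
COPY package*.json ./
RUN npm install
# Copy the application code itself
COPY . .
# Document the port the app listens on, and the default start command
EXPOSE 3000
CMD ["node", "index.js"]
EOF

# Building this file into an image (needs Docker installed):
#   docker build -t my-node-app .
```

The resulting image bundles the runtime, dependencies, and code into one package, but nothing is running yet; that only happens once a container is created from it.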
What is a container?
A container is a running instance of an image. While the image is just files bundled together, a container is the running version of those files. Imagine you have a Node.js application with your code, node_modules, images, and other files. Before you start the application, it is not accessible; that is what the image is like. With a container, your Node.js server is started and running: you can access the app and perform actions through an exposed port.
When you want to think of images and containers, think of the image as the blueprint for the containers. Just like the blueprint for a house, you can build many houses using a single blueprint, and if you follow the blueprint all the houses will be the same. Similarly, you can create many containers from a single image. You can even share your images with other people to use in creating containers.
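On the command line, the blueprint/house relationship might look like this (an illustrative session, assuming Docker is installed and an image named my-node-app has already been built; the names and registry address are made up):

```shell
# One image, many containers: like one blueprint, many houses.
docker run -d --name web1 -p 3001:3000 my-node-app
docker run -d --name web2 -p 3002:3000 my-node-app
docker run -d --name web3 -p 3003:3000 my-node-app

# Three independent containers, all created from the same image:
docker ps --format '{{.Names}}\t{{.Image}}\t{{.Ports}}'

# Sharing the blueprint: push the image to a registry others can pull from
docker tag my-node-app registry.example.com/my-node-app:1.0
docker push registry.example.com/my-node-app:1.0
```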
How containers work
Every month, my wife prepares different types of Nigerian soups. She puts them in plastic containers and freezes them. Whenever we want to eat, we take one of the containers out and warm the food. Imagine if we didn’t put the soups in different containers. Everything would be a mess; the different soups would mix with each other, the taste would be lost, and we wouldn’t be able to eat them. The containers provide a form of isolation that preserves the different foods in the same freezer.
Containerisation is a form of operating-system-level virtualisation. Rather than creating a layer over the computer’s hardware, as traditional virtual machines do, it creates a layer over the host operating system, allowing multiple containers to run on a single physical machine. It achieves this using container engines. These container engines create isolated environments that share the kernel of the existing operating system and the computer’s resources. However, the processes in a container do not in any way interfere with the processes on the host machine.
How container engines achieve isolation
Namespaces:
- Think of these like separate rooms in a house – each container gets its own room
- Controls what a container can see (processes, networks, files)
- Windows equivalent: Silos
- macOS: Uses Linux namespaces through a virtual machine (VM)
Control Groups (cgroups):
- Like a parent setting limits for kids – controls how much resources a container can use
- Manages CPU, memory, disk, and network usage
- Windows equivalent: Job Objects
- macOS: Uses Linux cgroups through a VM
Seccomp:
- Acts like a bouncer at a club – decides which system operations are allowed
- Filters dangerous system calls to protect the host
- Windows equivalent: Windows Defender Application Control (WDAC)
- macOS: Uses Linux seccomp through a VM
Filesystem Isolation:
- Like giving each container its own personal locker
- Each container gets its own private view of files
- Windows equivalent: Filter Driver and NTFS reparse points
- macOS: Uses Linux filesystem isolation through a VM
This isolation ensures that:
- Containers can’t see or interfere with each other
- Resources are fairly distributed
- The host system stays protected
- Each container thinks it’s running on its own computer
The key difference is that Linux does this natively, Windows has its own native mechanisms, and macOS runs a small Linux virtual machine to achieve the same results. Linux is preferred for most container deployments because it provides the most native and efficient container support.
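On a Linux machine you can watch namespaces in action with the util-linux `unshare` tool. A small sketch, assuming a kernel with unprivileged user namespaces enabled:

```shell
# Probe whether this system allows unprivileged user namespaces.
if unshare --user --map-root-user true 2>/dev/null; then
  echo "uid outside the namespace: $(id -u)"
  # Inside the new user namespace, the shell appears to be
  # root (uid 0), even though an ordinary user started it.
  unshare --user --map-root-user sh -c 'echo "uid inside the namespace: $(id -u)"'
else
  echo "unprivileged user namespaces are disabled on this system"
fi
```

Container engines combine user namespaces with PID, mount, and network namespaces, plus cgroups and seccomp, to build the full illusion of a private machine.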
Benefits of Containerisation
- Portability and Consistency
Applications packaged in containers work the same way everywhere – whether on a developer’s laptop, in the cloud, or on company servers. It’s like having a complete meal kit that contains all ingredients and instructions, ensuring the same dish can be prepared perfectly in any kitchen.
- Resource Efficiency
Containers share the host’s operating system, making them lightweight and fast to start. Think of it like an apartment building where residents share common facilities (like the elevator and parking) instead of everyone needing their own. This sharing of resources leads to significant cost savings and better server utilization.
- Quick Deployment and Scaling
Containers start up in seconds and can be easily multiplied when needed. It’s similar to quickly opening new checkout counters at a store when customer traffic increases. You can automatically add more containers during high demand and remove them when traffic decreases.
- Simplified Development
Containers make development smoother by providing consistent environments. Developers can create and test applications in the same environment where they’ll run in production. It’s like practising on the actual stage where you’ll perform, ensuring no surprises during the show.
- Better Isolation
Each container runs independently, like separate sealed boxes. If one container has a problem, it won’t affect others. This isolation also improves security since containers have limited access to the host system and other containers.
- Easy Updates and Rollbacks
Containers make it easy to update applications or roll back to previous versions. Think of it like having a save point in a game – if something goes wrong, you can quickly return to the last working version.
- Dependency Management
Each container includes everything it needs to run – the application, libraries, and dependencies. It’s like having a complete toolbox for each job, eliminating conflicts between different applications’ requirements.
- Cost-Effective Testing
Containers allow teams to create and destroy test environments quickly and cheaply. It’s like having a practice space that can be set up and cleared away in minutes, making it easier to test new features or fixes.
- Disaster Recovery
Containers can be quickly restored from backups or redeployed if something fails. Since containers are lightweight and include everything they need, recovery is fast and reliable, minimizing application downtime.
- Resource Monitoring
Container platforms provide easy ways to monitor resource usage and performance. You can see exactly how much CPU, memory, and storage each container uses, making it easier to plan and optimize resource allocation.
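The resource-efficiency and monitoring benefits map directly onto engine features. An illustrative example, assuming Docker is installed and a hypothetical my-node-app image exists:

```shell
# Cap a container's share of host resources (enforced via cgroups):
#   --memory 256m  -> at most 256 MiB of RAM
#   --cpus 0.5     -> at most half of one CPU core
docker run -d --name capped --memory 256m --cpus 0.5 my-node-app

# One-shot report of live CPU, memory, network, and disk usage:
docker stats --no-stream capped
```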
These benefits make containers especially valuable for modern applications, development teams, and businesses looking to improve their software delivery and operation efficiency.
Popular Containerisation Tools
- Docker
The most popular container tool, and the one that started it all. Docker makes it easy to create and run containers, like having a simple recipe book that everyone understands. It’s user-friendly and has the largest community support, making it perfect for beginners and experts alike.
- Podman
A newer alternative to Docker that’s more secure because it doesn’t need a background process (daemon) to run. It’s like Docker’s safety-conscious cousin, popular in enterprise environments where security is a top priority.
- Kubernetes (K8s)
The container orchestra conductor – it manages multiple containers across many computers. Kubernetes handles everything from scaling to updates, making sure your containers work together smoothly. It’s like a smart traffic system that ensures all containers run efficiently.
- containerd
A simpler container runtime that powers Docker behind the scenes. It’s becoming popular on its own because it’s lightweight and focuses on just running containers. Think of it as the engine that powers the car, without all the extra features.
- LXC (Linux Containers)
The original Linux container technology that inspired Docker. It’s like the grandfather of modern containers, still used today for system containers rather than application containers. It’s perfect for running entire operating systems in containers.
These tools form the backbone of modern containerisation, each serving different needs from simple application containers to complex enterprise deployments.
Containerisation Workflow
- Development and Planning
First, developers plan what goes into the container – the application, its dependencies, and configurations. It’s like making a packing list before a trip, ensuring nothing important is forgotten.
- Building a Container Image
Developers write a build definition (a Dockerfile for Docker, a configuration file for LXC, or a profile for LXD): a simple text file with instructions for building the container image. Which file you write depends on the container engine you are using. Think of it like a recipe: it lists all the ingredients (software) and steps (commands) needed. The file is then built into an image that can be shared and reused.
- Testing the Image
Before deploying, the image is tested locally to ensure everything works. Like test-driving a car before buying, developers run the container locally to catch any issues early.
- Running Containers
Images are used to start containers, which run the actual application. You can start multiple containers from the same image, like making multiple copies of a document from a master template. Each container runs independently in its own isolated environment.
- Container Networking
Containers can be connected to talk to each other, like phones in a network. They can be:
- Connected to the internet
- Linked to other containers
- Isolated in private networks
- Load balanced for better performance
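A sketch of those networking options with Docker (container and network names are made up, and a Docker installation is assumed):

```shell
# A private network in which containers reach each other by name:
docker network create backend
docker run -d --name db --network backend postgres:16
docker run -d --name api --network backend -p 8080:3000 my-node-app

# "api" can reach the database as db:5432 inside the network.
# "db" is never published to the outside world, while "api" is
# exposed to the internet only through host port 8080.
```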
- Data Management
Containers typically start fresh each time, but important data can be saved using:
- Volumes: Dedicated storage spaces that persist data
- Bind Mounts: Links to folders on the host computer
- Config Maps: For configuration data
- Secrets: For sensitive information like passwords
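With Docker, the first two options look like this (names and paths are illustrative, and a Docker installation is assumed):

```shell
# 1. A named volume, managed by the container engine:
docker volume create app-data
docker run -d --name db -v app-data:/var/lib/postgresql/data postgres:16

# 2. A bind mount, linking a host folder into the container (read-only):
docker run -d --name web -v "$PWD/site":/usr/share/nginx/html:ro nginx

# Either container can be removed and recreated; the data survives
# because it lives in the volume or host folder, not in the container.
```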
- Monitoring and Maintenance
Containers need regular monitoring to ensure they’re healthy:
- Check resource usage (CPU, memory)
- Monitor container logs
- Update container images
- Remove unused containers and images
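In Docker terms, this routine maintenance might look like the following (illustrative; "web" is a hypothetical container name):

```shell
docker stats --no-stream        # resource usage per running container
docker logs --tail 50 web       # recent output from a container named "web"
docker pull node:20-alpine      # fetch an updated base image
docker system prune -f          # remove stopped containers and unused images
```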
- Scaling and Orchestration
As applications grow:
- Add more containers to handle the increased load
- Use orchestration tools like Kubernetes to manage multiple containers
- Automatically scale based on demand
- Handle container failures and restart
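With an orchestrator such as Kubernetes, these scaling steps become one-line operations. A sketch, assuming access to a cluster and a deployment named web (both hypothetical):

```shell
# Scale out manually to five replicas:
kubectl scale deployment web --replicas=5

# Or let Kubernetes scale between 2 and 10 replicas automatically,
# targeting 80% average CPU utilisation:
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80

# Kubernetes also restarts failed containers to maintain the count:
kubectl get pods -l app=web
```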
This workflow creates a reliable, repeatable process for deploying and managing containerized applications, from development to production.
Use Cases of Containerisation
- Microservices Architecture
Modern apps split into smaller, independent services run perfectly in containers. Each microservice operates in its own container, making it easy to update and manage services independently.
- Cloud Applications
Containers run consistently across any cloud platform or data center. This flexibility makes them perfect for businesses using multiple cloud providers or moving between different environments.
- Development and Testing
Containers create identical development and testing environments quickly. This ensures applications work the same way from a developer’s laptop all the way to production servers.
- Edge Computing and IoT
The lightweight nature of containers makes them ideal for running applications on small devices and in remote locations, from smart home devices to industrial sensors.
- Legacy Application Modernization
Old applications can be packaged into containers to make them more portable and easier to maintain, helping businesses modernize without completely rebuilding their applications.
Challenges of Containerisation
- Resource Management
Managing resources becomes more complex as the number of containers grows. Organizations need effective monitoring and management systems to ensure containers have the right amount of CPU, memory, and storage. Without proper management, containers can compete for resources and affect application performance.
- Storage Challenges
Containers are designed to be stateless, making persistent storage a significant challenge. Organizations need to carefully plan how to store and manage data that must survive container restarts. This often requires specialized storage solutions and careful consideration of data backup and recovery strategies.
- Security Concerns
Container security requires constant attention. Organizations must regularly scan container images for vulnerabilities, manage access controls, and ensure proper network security. Container isolation must be properly configured to prevent unauthorized access between containers and protect sensitive data.
- Complexity in Management
Operating a containerized environment at scale requires sophisticated orchestration tools and skilled personnel. Teams need to manage container deployment, scaling, networking, and monitoring. This complexity can be overwhelming without proper tools and expertise.
- Network Management
Container networking adds another layer of complexity to infrastructure management. Teams must handle communication between containers, set up load balancing, and manage service discovery. Network security and performance become critical considerations as container deployments grow.
Despite these challenges, containerisation continues to provide significant benefits when implemented with proper planning and management strategies.