Containerisation: an introduction to containers and how they work


Introduction

Containerisation has become the de facto standard for making deployments to cloud environments. In today’s world, it is almost impossible to talk about deployments and not mention containerisation. From making it easy to set up development environments to adding stability and consistency to our deployments, containers have come to make our lives easier.

In this introduction, we are going to look at what containerisation is, how it works, its benefits, its challenges, and some use cases.

What is containerisation?

Containerisation is the process of packaging an application’s code and everything it needs to run successfully into a single portable resource called a container image. The resources bundled into the image may include the application code, the runtime, dependencies, and configuration details.

As we have stated earlier, the result of containerisation is a container image. But what is an image and what is a container?

What is a container image?

An image is like the blueprint for creating containers. Even though the image contains the application code and everything it needs to run, it is just an executable package. We cannot access our application from the image itself, because it only contains the packaged files. To use the image, we have to create a container from it.

What is a container?

A container is a running instance of an image. While the image is just files bundled together, a container is the running version of those files. Imagine you have a Node.js application with your code, node_modules, images, and other files. When the application has not been started, it is not accessible; that is what the image is like. With a container, however, your Node.js server is started and running, and you can access the app and perform actions through an exposed port.

When thinking about images and containers, think of the image as the blueprint for the containers. Just like the blueprint for a house, you can build many houses from a single blueprint, and if you follow the blueprint, all the houses will be the same. Similarly, you can create many containers from a single image. You can even share your images with other people for them to use in creating containers.
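
To make the blueprint analogy concrete, here is a rough sketch (assuming Docker is installed, and using a placeholder image name, my-node-app) of building one image and starting two independent containers from it:

    # Build one image from the Dockerfile in the current directory
    $ docker build -t my-node-app .

    # Start two independent containers from the same image,
    # each published on a different host port
    $ docker run -d --name app-one -p 3000:3000 my-node-app
    $ docker run -d --name app-two -p 3001:3000 my-node-app

    # Both running containers were created from the same "blueprint"
    $ docker ps

Either container can be stopped, removed, and recreated from the image at any time, just as a house can be rebuilt from its blueprint.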

How containers work

[Image: an open fridge stocked with food in plastic containers, depicting containerisation]

Every month, my wife prepares different types of Nigerian soups. She puts them in plastic containers and freezes them. Whenever we want to eat, we take one of the containers out and warm the food. Imagine if we didn’t put the soups in different containers: everything would be a mess, the different soups would mix with each other, the flavours would be lost, and we wouldn’t be able to eat them. The containers provide a form of isolation that preserves the different foods in the same freezer.

Containerisation is a form of operating-system-level virtualisation. Rather than creating a layer over the computer’s hardware the way virtual machines do, it creates isolated environments on top of the host operating system, allowing multiple containers to run on a single physical machine. It achieves this using container engines. These container engines create isolated environments that share the kernel of the existing operating system and the computer’s resources. However, the processes in a container do not in any way interfere with the processes on the host machine.
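
A quick way to see this kernel sharing for yourself (assuming Docker on a Linux host) is to compare the kernel version reported on the host with the one reported inside a container:

    # Kernel version reported by the host
    $ uname -r

    # Kernel version reported inside an Alpine container --
    # it matches the host, because the container shares the host's kernel
    $ docker run --rm alpine uname -r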

How container engines achieve isolation

[Diagram: how container engines achieve isolation]

Namespaces:

  • Think of these like separate rooms in a house – each container gets its own room
  • Controls what a container can see (processes, networks, files)
  • Windows equivalent: Silos
  • macOS: Uses Linux namespaces through a VM
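
As a small illustration of PID namespace isolation (assuming Docker on a Linux host): a process listing run inside a container sees only the container’s own processes, starting again from PID 1.

    # On the host, ps lists every process on the machine
    $ ps aux

    # Inside a container, only the container's processes are visible;
    # this typically lists just the ps command itself, running as PID 1
    $ docker run --rm alpine ps aux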

Control Groups (cgroups):

  • Like a parent setting limits for kids – controls how much of the host’s resources a container can use
  • Manages CPU, memory, disk, and network usage
  • Windows equivalent: Job Objects
  • macOS: Uses Linux cgroups through a VM
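
A sketch of cgroups in action through Docker’s resource flags (the limits and the my-node-app image name are arbitrary placeholders):

    # Cap the container at 256 MB of memory and half a CPU core;
    # the engine translates these flags into cgroup settings
    $ docker run -d --name limited --memory=256m --cpus=0.5 my-node-app

    # Watch live usage against those limits
    $ docker stats limited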

Seccomp:

  • Acts like a bouncer at a club – decides which system operations are allowed
  • Filters dangerous system calls to protect the host
  • Windows equivalent: Windows Defender Application Control (WDAC)
  • macOS: Uses Linux seccomp through a VM
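
For example, Docker applies a default seccomp profile out of the box; the sketch below shows how an operator could swap in a custom profile or, against best practice, disable filtering (custom-profile.json is a hypothetical file):

    # Run with the engine's default seccomp profile (the normal case)
    $ docker run --rm alpine echo "default profile applied"

    # Run with a custom seccomp profile supplied by the operator
    $ docker run --rm --security-opt seccomp=custom-profile.json alpine echo "custom profile"

    # Disable seccomp filtering entirely (not recommended)
    $ docker run --rm --security-opt seccomp=unconfined alpine echo "no filtering"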

Filesystem Isolation:

  • Like giving each container its own personal locker
  • Each container gets its own private view of files
  • Windows equivalent: Filter Driver and NTFS reparse points
  • macOS: Uses Linux filesystem isolation through a VM
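
A brief sketch of this private view (assuming Docker; the file path is illustrative): a file created inside one container is not visible on the host or in other containers.

    # Create a file inside a container's filesystem
    $ docker run --name writer alpine sh -c 'echo hello > /data.txt'

    # The file does not exist on the host's filesystem
    $ ls /data.txt

    # It lives only in that container's private filesystem,
    # from which it can be copied out explicitly if needed
    $ docker cp writer:/data.txt ./data.txt
    $ docker rm writer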

This isolation ensures that:

  • Containers can’t see or interfere with each other
  • Resources are fairly distributed
  • The host system stays protected
  • Each container thinks it’s running on its own computer

The key difference is that Linux does this natively, Windows has its own native mechanisms, and macOS runs a small Linux virtual machine to achieve the same results. Linux is preferred for most container deployments because it provides the most native and efficient container support.

Benefits of Containerisation

  1. Portability and Consistency
    Applications packaged in containers work the same way everywhere – whether on a developer’s laptop, in the cloud, or on company servers. It’s like having a complete meal kit that contains all ingredients and instructions, ensuring the same dish can be prepared perfectly in any kitchen.
  2. Resource Efficiency
    Containers share the host’s operating system, making them lightweight and fast to start. Think of it like an apartment building where residents share common facilities (like the elevator and parking) instead of everyone needing their own. This sharing of resources leads to significant cost savings and better server utilization.
  3. Quick Deployment and Scaling
    Containers start up in seconds and can be easily multiplied when needed. It’s similar to quickly opening new checkout counters at a store when customer traffic increases. You can automatically add more containers during high demand and remove them when traffic decreases.
  4. Simplified Development
    Containers make development smoother by providing consistent environments. Developers can create and test applications in the same environment where they’ll run in production. It’s like practising on the actual stage where you’ll perform, ensuring no surprises during the show.
  5. Better Isolation
    Each container runs independently, like separate sealed boxes. If one container has a problem, it won’t affect others. This isolation also improves security since containers have limited access to the host system and other containers.
  6. Easy Updates and Rollbacks
    Containers make it easy to update applications or roll back to previous versions. Think of it like having a save point in a game – if something goes wrong, you can quickly return to the last working version (see the sketch after this list).
  7. Dependency Management
    Each container includes everything it needs to run – the application, libraries, and dependencies. It’s like having a complete toolbox for each job, eliminating conflicts between different applications’ requirements.
  8. Cost-Effective Testing
    Containers allow teams to create and destroy test environments quickly and cheaply. It’s like having a practice space that can be set up and cleared away in minutes, making it easier to test new features or fixes.
  9. Disaster Recovery
    Containers can be quickly restored from backups or redeployed if something fails. Since containers are lightweight and include everything they need, recovery is fast and reliable, minimizing application downtime.
  10. Resource Monitoring
    Container platforms provide easy ways to monitor resource usage and performance. You can see exactly how much CPU, memory, and storage each container uses, making it easier to plan and optimize resource allocation.

These benefits make containers especially valuable for modern applications, development teams, and businesses looking to improve their software delivery and operation efficiency.
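
As a hedged sketch of how updates and rollbacks (benefit 6) work in practice, image tags act as the “save points”: rolling back simply means starting a container from the previous tag. The image name, tags, and port below are placeholders:

    # Deploy version 2.0 of the application
    $ docker stop web && docker rm web
    $ docker run -d --name web -p 8080:8080 my-node-app:2.0

    # Something breaks -- roll back by starting the previous tag again
    $ docker stop web && docker rm web
    $ docker run -d --name web -p 8080:8080 my-node-app:1.9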

Popular Containerisation Tools

  1. Docker
    The most popular container tool that started it all. Docker makes it easy to create and run containers, like having a simple recipe book that everyone understands. It’s user-friendly and has the largest community support, making it perfect for beginners and experts alike.
  2. Podman
    A newer alternative to Docker that’s more secure because it doesn’t need a background process (daemon) to run. It’s like Docker’s safety-conscious cousin, popular in enterprise environments where security is a top priority (see the example after this list).
  3. Kubernetes (K8s)
    The container orchestra conductor – it manages multiple containers across many computers. Kubernetes handles everything from scaling to updates, making sure your containers work together smoothly. It’s like a smart traffic system that ensures all containers run efficiently.
  4. containerd
    A simpler container runtime that powers Docker behind the scenes. It’s becoming popular on its own because it’s lightweight and focuses on just running containers. Think of it as the engine that powers the car, without all the extra features.
  5. LXC (Linux Containers)
    The original Linux container technology that inspired Docker. It’s like the grandfather of modern containers, still used today for system containers rather than application containers. It’s perfect for running entire operating systems in containers.

These tools form the backbone of modern containerisation, each serving different needs from simple application containers to complex enterprise deployments.
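
Because Podman deliberately mirrors Docker’s command-line interface, most commands work the same with either tool; a quick sketch (assuming both are installed):

    # Run a container through the Docker daemon
    $ docker run --rm alpine echo "hello from docker"

    # Run the same container daemonlessly with Podman
    $ podman run --rm alpine echo "hello from podman"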

Containerisation Workflow

  1. Development and Planning
    First, developers plan what goes into the container – the application, its dependencies, and configurations. It’s like making a packing list before a trip, ensuring nothing important is forgotten.
  2. Building a Container Image
    Developers write a build definition – typically a Dockerfile, or for other engines an LXC configuration file or LXD profile – a simple text file with instructions for building the container image. Which file you write depends on the container engine you are using. Think of it like a recipe: it lists all the ingredients (software) and steps (commands) needed. The Dockerfile is then built into an image that can be shared and reused (a worked end-to-end example appears after this workflow).
  3. Testing the Image
    Before deploying, the image is tested locally to ensure everything works. Like test-driving a car before buying, developers run the container locally to catch any issues early.
  4. Running Containers
    Images are used to start containers, which run the actual application. You can start multiple containers from the same image, like making multiple copies of a document from a master template. Each container runs independently in its own isolated environment.
  5. Container Networking
    Containers can be connected to talk to each other, like phones in a network. They can be:
  • Connected to the internet
  • Linked to other containers
  • Isolated in private networks
  • Load balanced for better performance
  6. Data Management
    Containers typically start fresh each time, but important data can be saved using:
  • Volumes: Dedicated storage spaces that persist data
  • Bind Mounts: Links to folders on the host computer
  • Config Maps: For configuration data
  • Secrets: For sensitive information like passwords
  7. Monitoring and Maintenance
    Containers need regular monitoring to ensure they’re healthy:
  • Check resource usage (CPU, memory)
  • Monitor container logs
  • Update container images
  • Remove unused containers and images
  8. Scaling and Orchestration
    As applications grow:
  • Add more containers to handle the increased load
  • Use orchestration tools like Kubernetes to manage multiple containers
  • Automatically scale based on demand
  • Handle container failures and restart them automatically

This workflow creates a reliable, repeatable process for deploying and managing containerized applications, from development to production.
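
To make the workflow concrete, here is a minimal, hedged end-to-end sketch using Docker for a hypothetical Node.js service. The file names, image names, ports, and the my-database-image placeholder are assumptions, not a prescribed setup:

    # 2. Build a container image from a Dockerfile such as the one below
    #    (save these lines, without the leading '#', as a file named Dockerfile):
    #
    #    FROM node:20-alpine
    #    WORKDIR /app
    #    COPY package*.json ./
    #    RUN npm install
    #    COPY . .
    #    EXPOSE 3000
    #    CMD ["node", "server.js"]
    #
    $ docker build -t my-node-app:1.0 .

    # 3. Test the image locally before deploying
    $ docker run --rm -p 3000:3000 my-node-app:1.0

    # 4.-5. Run the application on a private network
    $ docker network create app-net
    $ docker run -d --name api --network app-net -p 3000:3000 my-node-app:1.0

    # 6. Persist important data in a named volume
    $ docker volume create app-data
    $ docker run -d --name db --network app-net -v app-data:/var/lib/data my-database-image

    # 7. Monitor logs and resource usage, and clean up unused artefacts
    $ docker logs api
    $ docker stats --no-stream
    $ docker system prune

    # 8. Scale by starting more containers from the same image
    $ docker run -d --name api-2 --network app-net my-node-app:1.0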

Use Cases of Containerisation

  1. Microservices Architecture
    Modern applications are increasingly split into smaller, independent services, which run perfectly in containers. Each microservice operates in its own container, making it easy to update and manage services independently.
  2. Cloud Applications
    Containers run consistently across any cloud platform or data center. This flexibility makes them perfect for businesses using multiple cloud providers or moving between different environments.
  3. Development and Testing
    Containers create identical development and testing environments quickly. This ensures applications work the same way from a developer’s laptop all the way to production servers.
  4. Edge Computing and IoT
    The lightweight nature of containers makes them ideal for running applications on small devices and remote locations, from smart home devices to industrial sensors.
  5. Legacy Application Modernization
    Old applications can be packaged into containers to make them more portable and easier to maintain, helping businesses modernize without completely rebuilding their applications.

Challenges of Containerisation

  1. Resource Management
    Managing resources becomes more complex as the number of containers grows. Organizations need effective monitoring and management systems to ensure containers have the right amount of CPU, memory, and storage. Without proper management, containers can compete for resources and affect application performance.
  2. Storage Challenges
    Containers are designed to be stateless, making persistent storage a significant challenge. Organizations need to carefully plan how to store and manage data that must survive container restarts. This often requires specialized storage solutions and careful consideration of data backup and recovery strategies.
  3. Security Concerns
    Container security requires constant attention. Organizations must regularly scan container images for vulnerabilities, manage access controls, and ensure proper network security. Container isolation must be properly configured to prevent unauthorized access between containers and protect sensitive data.
  4. Complexity in Management
    Operating a containerized environment at scale requires sophisticated orchestration tools and skilled personnel. Teams need to manage container deployment, scaling, networking, and monitoring. This complexity can be overwhelming without proper tools and expertise.
  5. Network Management
    Container networking adds another layer of complexity to infrastructure management. Teams must handle communication between containers, set up load balancing, and manage service discovery. Network security and performance become critical considerations as container deployments grow.

Despite these challenges, containerisation continues to provide significant benefits when implemented with proper planning and management strategies.
