Introduction

What is Containerization?

Containerization is a lightweight form of virtualization that allows you to package applications and their dependencies into isolated units called containers. This approach addresses several challenges faced in traditional application deployment:

  • Environment Consistency: Applications often behave differently in development, testing, and production due to varying environments. Containerization ensures that the application runs the same way across all environments by encapsulating all dependencies, libraries, and configuration files within the container.

  • Simplified Dependency Management: Managing multiple dependencies for an application can be tedious and error-prone. Containerization simplifies this process by bundling all necessary components together, eliminating conflicts and ensuring that the application has everything it needs to run.

  • Isolation: Containers run in isolation from each other and from the host system, allowing multiple applications or services to coexist on the same machine without interference.

  • Scalability: Containers can be quickly created, started, stopped, or destroyed, making it easy to scale applications up or down based on demand.

Overall, containerization enhances application deployment and management, particularly for complex systems like FreeSWITCH, by providing a consistent and efficient approach.

What is Docker?

Docker is a tool designed to facilitate the creation, deployment, and management of containers. It streamlines the containerization process and provides a standardized environment for running applications. With Docker, developers can easily package applications along with their dependencies, ensuring they run reliably across various environments.

Key Terms

  • Image: A snapshot of your application and its environment, including all dependencies required to run the application.
  • Container: A running instance of an image. It contains everything needed for the application to execute.
  • Dockerfile: A script that contains instructions on how to build a Docker image for your application.
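
To illustrate how these terms relate, here is a minimal, hypothetical example (the base image, file names, and tags are placeholders, not part of any FreeSWITCH setup). A Dockerfile describes how to build an image, and a container is started from that image:

# Dockerfile: instructions for building an image
FROM debian:bookworm-slim
COPY ./app /usr/local/bin/app
CMD ["/usr/local/bin/app"]

# Build an image from the Dockerfile, then start a container from it
docker build -t myapp:latest .
docker run -d --name myapp myapp:latest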

Why Containerize FreeSWITCH?

There are several reasons to run FreeSWITCH inside a Docker container:

  • Portability: Docker ensures that FreeSWITCH will run the same way on any machine, regardless of the operating system.
  • Isolation: Docker containers provide an isolated environment, so changes in system libraries or configurations won’t affect FreeSWITCH.
  • Scalability: With Docker, you can quickly spin up multiple FreeSWITCH instances, allowing for easier scaling.
  • Simplicity: Docker simplifies the process of setting up and maintaining FreeSWITCH by bundling all dependencies into a single package (container).

Getting Started: Steps to Containerize FreeSWITCH

Step 1: Install Docker

First, you need to install Docker. Follow these steps based on your operating system:

  • Windows and macOS: Visit the Docker website and download Docker Desktop. Follow the installation instructions.

  • Linux: Use your distribution's package manager to install Docker. To install on Ubuntu, follow the guide below:

    • Uninstall previously installed version (if required):

      for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done

      apt-get might report that you have none of these packages installed; that is completely fine.

    • Set up Docker's apt repository

      Before you install Docker Engine for the first time on a new host machine, you need to set up the Docker apt repository. Afterward, you can install and update Docker from the repository.

      # Add Docker's official GPG key:

      sudo apt-get update
      sudo apt-get install ca-certificates curl
      sudo install -m 0755 -d /etc/apt/keyrings
      sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
      sudo chmod a+r /etc/apt/keyrings/docker.asc

      # Add the repository to Apt sources:

      echo \
      "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
      $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
      sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
      sudo apt-get update
    • Install Docker package:

      sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
    • Verify installation: Confirm that the installation was successful by running the hello-world image:

      sudo docker run hello-world

      This command downloads a test image and runs it in a container. When the container runs, it prints a confirmation message and exits.

    You have now successfully installed and started Docker Engine.

  • Offline Installation: If you can't use Docker's apt repository to install Docker Engine, you can download the deb file for your release and install it manually. You need to download a new file each time you want to upgrade Docker Engine.

    1. Go to the Docker Distribution download page.

    2. Select your Ubuntu version in the list.

    3. Go to pool/stable/ and select the applicable architecture (amd64, armhf, arm64, or s390x).

    4. Download the following deb files for the Docker Engine, CLI, containerd, and Docker Compose packages:

      • containerd.io_<version>_<arch>.deb
      • docker-ce_<version>_<arch>.deb
      • docker-ce-cli_<version>_<arch>.deb
      • docker-buildx-plugin_<version>_<arch>.deb
      • docker-compose-plugin_<version>_<arch>.deb
    5. Zip the deb files and copy them to the server where you want to install Docker.

    6. Install the .deb packages. Update the paths in the following example to where you downloaded the Docker packages.

      sudo dpkg -i ./containerd.io_<version>_<arch>.deb \
      ./docker-ce_<version>_<arch>.deb \
      ./docker-ce-cli_<version>_<arch>.deb \
      ./docker-buildx-plugin_<version>_<arch>.deb \
      ./docker-compose-plugin_<version>_<arch>.deb

      The Docker daemon starts automatically.

    7. Verify that the installation is successful by running the hello-world image:

      sudo service docker start
      sudo docker run hello-world

      This command downloads a test image and runs it in a container. When the container runs, it prints a confirmation message and exits.

You have now successfully installed and started Docker Engine.

Step 2: Configure Docker as a non-root user

The Docker daemon binds to a Unix socket, not a TCP port. By default it's the root user that owns the Unix socket, and other users can only access it using sudo. The Docker daemon always runs as the root user.

If you don't want to preface the docker command with sudo, create a Unix group called docker and add users to it. When the Docker daemon starts, it creates a Unix socket accessible by members of the docker group. On some Linux distributions, the system automatically creates this group when installing Docker Engine using a package manager. In that case, there is no need for you to manually create the group.

To create the docker group and add your user:

  1. Create the docker group.

    sudo groupadd docker
  2. Add your user to the docker group.

    sudo usermod -aG docker $USER
  3. Log out and log back in so that your group membership is re-evaluated. You can also run the following command to activate the changes to groups:

    newgrp docker
  4. Verify that you can run docker commands without sudo.

    docker run hello-world

    This command downloads a test image and runs it in a container. When the container runs, it prints a message and exits.

    If you initially ran Docker CLI commands using sudo before adding your user to the docker group, you may see the following error:

    WARNING: Error loading config file: /home/user/.docker/config.json -
    stat /home/user/.docker/config.json: permission denied

    This error indicates that the permission settings for the ~/.docker/ directory are incorrect, due to having used the sudo command earlier.

    To fix this problem, either remove the ~/.docker/ directory (it's recreated automatically, but any custom settings are lost), or change its ownership and permissions using the following commands:

    sudo chown "$USER":"$USER" /home/"$USER"/.docker -R
    sudo chmod g+rwx "$HOME/.docker" -R

Step 3: Configure Docker to start on boot with systemd

Many modern Linux distributions use systemd to manage which services start when the system boots. On Debian and Ubuntu, the Docker service starts on boot by default. To automatically start Docker and containerd on boot for other Linux distributions using systemd, run the following commands:

sudo systemctl enable docker.service
sudo systemctl enable containerd.service

To stop this behavior, use disable instead.

sudo systemctl disable docker.service
sudo systemctl disable containerd.service

You can use systemd unit files to configure the Docker service on startup, for example to add an HTTP proxy, set a different directory or partition for the Docker runtime files, or other customizations.
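
For example, a minimal sketch of a systemd drop-in file that routes the Docker daemon's outbound traffic through an HTTP proxy (the proxy address is a placeholder):

sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker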

Step 4: Build FreeSWITCH image

You can follow the Containerizing FreeSWITCH guide to build a Docker image for FreeSWITCH and run it as a container.
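
As a rough sketch of what the build and run steps look like (the image tag and the use of host networking are assumptions, not part of the guide above; host networking is commonly used so FreeSWITCH's SIP and RTP ports are reachable without mapping a large port range):

# Build an image from a FreeSWITCH Dockerfile in the current directory
docker build -t freeswitch:custom .

# Run the container; --net=host exposes SIP/RTP ports directly on the host
docker run -d --name freeswitch --net=host freeswitch:custom

# Follow the container logs to confirm FreeSWITCH started
docker logs -f freeswitch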