
Migrate Docker Desktop to colima

  1. Introduction
  2. Uninstall Docker Desktop
  3. colima Setup
    1. Install colima
    2. colima profile
      1. Config colima
    3. Start colima
    4. ssh colima
    5. Inspect colima
    6. Update colima
    7. Clean colima
  4. Docker CLI Setup
    1. Install Docker CLI
    2. Config Docker CLI
    3. Switch Docker Engine
    4. Verify Docker Setup
    5. Unix Socket File
    6. Multi-platform Build
    7. SSH Agent Forwarding
    8. Mount Volumes
  5. Kubernetes Setup
    1. Enable Kubernetes
    2. Install kubectl
    3. Config kubectl
    4. Inspect Kubernetes
    5. EXTREME CAUTION
  6. References

As required by the company, we have to remove Docker Desktop from our macOS machines due to the pricing model change. After searching the web, I found colima, a container runtime, to be a good alternative.

Introduction

colima stands for Containers on Lima (a Linux virtual machine). It supports three container runtimes, namely Docker (the default), plain containerd, and Incus. In this post, we use the default Docker runtime.

To put it simply, colima creates a Linux virtual machine on macOS based on a hypervisor (QEMU or the macOS Virtualization framework) and Lima, and then spawns the specified runtime process within it.

Containers communicate with the VM instead of the macOS host. To make resources on macOS available to containers, mount them into the VM first. See the SSH Agent Forwarding example.

Uninstall Docker Desktop

Docker Desktop includes both the Docker CLI and Docker engine. After uninstallation, both components are removed, including images, containers, volumes, networks, etc.

Once completed, all Docker objects are gone, including containers! See https://docs.docker.com/desktop/uninstall/.
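
If Docker Desktop was installed via Homebrew, here is a minimal, hedged sketch of removing it; otherwise follow the procedure from the official page linked above. The leftover data paths vary by version, so verify them before deleting anything.

# remove the Docker Desktop cask and its associated files
~ $ brew uninstall --zap --cask docker

# optionally remove typical leftover data directories (verify first!)
~ $ rm -rf ~/Library/Group\ Containers/group.com.docker ~/Library/Containers/com.docker.docker ~/.docker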

colima Setup

Install colima

~ $ brew install colima
...

==> Installing dependencies for colima: lima
==> Installing colima dependency: lima
==> Downloading https://ghcr.io/v2/homebrew/core/lima/manifests/1.0.3
Already downloaded: /Users/jim/Library/Caches/Homebrew/downloads/1433d7017c4773c2360cdd9ae90a6bf68d86b93c83daac1ed445db685427e583--lima-1.0.3.bottle_manifest.json
==> Pouring lima--1.0.3.arm64_sonoma.bottle.tar.gz
🍺  /opt/homebrew/Cellar/lima/1.0.3: 107 files, 205.6MB
==> Installing colima
==> Pouring colima--0.8.1.arm64_sonoma.bottle.tar.gz
==> Caveats
Bash completion has been installed to:
  /opt/homebrew/etc/bash_completion.d

To start colima now and restart at login:
  brew services start colima
Or, if you don't want/need a background service you can just run:
  /opt/homebrew/opt/colima/bin/colima start -f
==> Summary
🍺  /opt/homebrew/Cellar/colima/0.8.1: 11 files, 5.7MB
==> Running `brew cleanup colima`...

...

Check version.

~ $ colima version
colima version 0.8.1
git commit: 96598cc5b64e5e9e1e64891642b91edc8ac49d16

runtime: docker
arch: aarch64
client: v27.4.1
server: v27.4.0

~ $ limactl --version
limactl version 1.0.3

~ $ qemu-img --version
-bash: qemu-img: command not found

Lima is a dependency of colima, so it is also installed. QEMU, in contrast, is not pulled in by default.

colima profile

colima has an important concept of profiles. Each profile represents the configuration (e.g. CPU architecture) of one Linux virtual machine, namely one colima instance. The colima CLI has a global option --profile/-p that selects the profile to operate on, and the default configuration of a profile is defined via colima template. For example, we can create two profiles for the CPU architectures aarch64 and x86_64 respectively. The default profile is named default and uses the same CPU architecture as the macOS host.

In this post, the terms colima profile, colima instance, colima VM and colima engine could be used interchangeably.

For the list of supported configuration fields, please refer to the config colima section.
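
For illustration, a hedged sketch of creating a second profile for x86_64 alongside the default one (the profile name x86 and the resource values here are arbitrary):

# create and start an x86_64 instance under a separate profile
~ $ colima start -p x86 --arch x86_64 --cpu 2 --memory 4

# both profiles now show up
~ $ colima list

# stop the extra instance when it is not needed
~ $ colima stop -p x86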

Config colima

The $COLIMA_HOME environment variable specifies the directory that holds all colima data. By default, it points to $HOME/.colima.

21:52:17 jim@Jims-MacBook-Pro ~
$ echo 'export COLIMA_HOME=$HOME/.colima' >> .bashrc

$ export COLIMA_HOME=$HOME/.colima
$ mkdir -p $COLIMA_HOME

The list of supported configurations can be found via colima help start. They include resource constraints (CPU/memory), CPU architecture, runtime (--runtime), DNS, etc. We can even forward environment variables from the host to the VM via --env.
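
As a hedged illustration of a few of these flags (the DNS server and proxy values below are placeholders):

~ $ colima start -p default --cpu 4 --memory 8 --dns 8.8.8.8 --env HTTP_PROXY=http://proxy.example.com:3128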

To set the default config of a profile, run colima template -p <profile>. The config file of a profile is located at ${COLIMA_HOME}/_templates/<profile>.yaml. Each CLI option has a corresponding field in this file with a detailed explanation. Under the hood, the colima configuration is translated into configurations for the Lima VM and the Docker components.

~ $ grep -nE '^[a-z]+' $COLIMA_HOME/_templates/default.yaml
3:cpu: 4
9:disk: 100
13:memory: 8
19:arch: host
25:runtime: docker
30:hostname: colima
33:kubernetes:
53:autoActivate: true
56:network:
95:forwardAgent: true
114:docker: {}
123:vmType: vz
127:rosetta: false
131:nestedVirtualization: false
144:mountType: virtiofs
148:mountInotify: false
154:cpuType: host
173:provision: []
178:sshConfig: true
184:sshPort: 0
199:mounts: []
207:diskImage: ""
217:env: {}

The default resources assigned to the Linux VM are 2 CPUs, 2 GiB of memory and 100 GiB of disk space. We want to increase the CPU and memory a bit, and additionally update a few more fields as follows.

~ $ colima template -p default --editor nano
>cpu: 4
>memory: 8
>autoActivate: true
>forwardAgent: true
>vmType: vz
>rosetta: true
>mountType: virtiofs
>network:
>  address: true
>docker:
>  features:
>    buildkit: true
>  userland-proxy: false

~ $ less ~/.colima/_templates/default.yaml

The vmType is set to vz instead of qemu, mountType is set to virtiofs, and rosetta is turned on. These three settings can boost performance on Apple silicon (M1). Optionally, we can enable Kubernetes support.

We can customize the default config with the --edit CLI option upon start (colima start), with specific config CLI options (e.g. --dns), or by manually editing ${COLIMA_HOME}/<profile>/<hostname>.yaml. For example, the hostname of the colima VM of the default profile is colima; it can be customized with the hostname field.
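
For instance, a short sketch of the first two approaches (the values are arbitrary):

# open the instance config in an editor before starting
~ $ colima start -p default --edit

# or override a single option on the command line
~ $ colima start -p default --cpu 6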

See the FAQ entry "Can config file be used instead of cli flags?".

Start colima

List existing instances.

~ $ colima list
WARN[0000] No instance found. Run `colima start` to create an instance.
PROFILE    STATUS    ARCH    CPUS    MEMORY    DISK    RUNTIME    ADDRESS

Start an instance of the default profile.

~ $ colima start -p default
INFO[0000] starting colima
INFO[0000] runtime: docker+k3s
INFO[0001] starting ...                                  context=vm
INFO[0013] provisioning ...                              context=docker
INFO[0014] starting ...                                  context=docker
INFO[0015] provisioning ...                              context=kubernetes
INFO[0015] downloading and installing ...                context=kubernetes
INFO[0109] loading oci images ...                        context=kubernetes
INFO[0118] starting ...                                  context=kubernetes
INFO[0122] updating config ...                           context=kubernetes
INFO[0123] Switched to context "colima".                 context=kubernetes
INFO[0124] done

The colima instance exposes a Docker Unix socket file docker.sock to the macOS host.

~ $ colima list
PROFILE    STATUS     ARCH       CPUS    MEMORY    DISK      RUNTIME    ADDRESS
default    Running    aarch64    4       8GiB      100GiB    docker

~ $ colima status -p default
INFO[0000] colima is running using macOS Virtualization.Framework
INFO[0000] arch: aarch64
INFO[0000] runtime: docker
INFO[0000] mountType: sshfs
INFO[0000] socket: unix:///Users/jim/.colima/default/docker.sock

A new Docker context called colima is created and automatically activated (autoActivate). The associated docker.sock is mapped from the colima VM to the macOS host.

~ $ docker context list
NAME       DESCRIPTION                               DOCKER ENDPOINT                                     ERROR
colima *   colima                                    unix:///Users/jim/.colima/default/docker.sock
default    Current DOCKER_HOST based configuration   unix:///var/run/docker.sock

The original Docker official context called default is still bound to the DOCKER_HOST environment variable. We can use this variable to switch between different engines.

A new Docker builder colima is created and automatically activated. Regarding the error status of the Docker official default builder, see the Unix Socket File section.

~ $ docker buildx ls
NAME/NODE    DRIVER/ENDPOINT   STATUS    BUILDKIT   PLATFORMS
colima*      docker
 \_ colima    \_ colima        running   v0.17.3    linux/amd64 (+2), linux/arm64, linux/386
default                        error

Cannot load builder default: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

We can also use brew services to start colima. However, it currently does not support CLI options and, what is worse, it cannot create the Docker context! The advantage of brew services start is that it registers the service to start automatically at login.

# take around 30s
~ $ brew services start colima

~ $ brew services info colima

ssh colima

An SSH config entry for the colima VM is written to ${COLIMA_HOME}/ssh_config. Additionally, an Include directive for this file is added to the macOS SSH config.

~ $ less ${COLIMA_HOME}/ssh_config

~ $ head ~/.ssh/config
Include /Users/jim/.colima/ssh_config

So, we can get into the VM via ssh. Please check the config file for the hostname field.

~ $ ssh <hostname>
# -or-
~ $ ssh -F ${COLIMA_HOME}/ssh_config <hostname>

Alternatively, colima provides the ssh sub-command, which accepts the profile name. This sub-command preserves the current working directory.

21:37:28 jim@Jims-MacBook-Pro ~/misc
$ colima ssh -p default --very-verbose
TRAC[0000] cmd ["limactl" "list" "colima" "--json"]
TRAC[0000] cmd int ["limactl" "shell" "--workdir" "/Users/jim/misc" "colima"]

Inspect colima

After getting into the VM, we can observe several facts.

The VM OS is Ubuntu 24.04. Check the diskImage field in the config file.

~ $ ssh colima
Last login: Tue Dec 31 11:54:26 2024 from 192.168.5.2

jim@colima:~$ uname -a
Linux colima 6.8.0-50-generic #51-Ubuntu SMP PREEMPT_DYNAMIC Sat Nov  9 18:03:35 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux

jim@colima:~$ cat /etc/os-release
PRETTY_NAME="Ubuntu 24.04.1 LTS"
NAME="Ubuntu"
VERSION_ID="24.04"
VERSION="24.04.1 LTS (Noble Numbat)"
VERSION_CODENAME=noble
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=noble
LOGO=ubuntu-logo

The same user account as on the macOS host is created. Note that the uid and gid numbers are different.

jim@colima:~$ pwd
/home/jim.linux

jim@colima:~$ id
uid=501(jim) gid=1000(jim) groups=1000(jim),991(docker)

~$ tail -1 /etc/passwd
jim:x:501:1000:Jim Zhan Hu:/home/jim.linux:/bin/bash

The dockerd, containerd, containerd-shim and runc processes all reside within the VM.

jim@colima:~$ ps -eF --forest | grep [c]ontainerd
root         446       1  0 465190 40872  1 13:58 ?        00:00:11 /usr/bin/containerd
root        1218       1  0 546591 71184  2 13:58 ?        00:00:03 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --host-gateway-ip=192.168.5.2

We can check the logs of colima and of the daemons within the VM.

# logs of daemons in the VM
jim@colima:~$ sudo journalctl -xfe -u docker
jim@colima:~$ sudo journalctl -xfe -u containerd

# logs of colima
~ $ ls -l $COLIMA_HOME/_lima/colima/*.log
-rw-r--r--  1 jim  staff  12184 Jan  6 20:30 /Users/jim/.colima/_lima/colima/ha.stderr.log
-rw-r--r--  1 jim  staff    167 Jan  6 20:28 /Users/jim/.colima/_lima/colima/ha.stdout.log
-rw-------  1 jim  staff     51 Jan  6 20:28 /Users/jim/.colima/_lima/colima/serialv.log

Data of the colima engine is stored under ${COLIMA_HOME}/_lima/<hostname>. There are several meaningful files: colima.yaml (colima configuration), lima.yaml (Lima configuration) and cloud-config.yaml (user account). Whenever you encounter issues, inspect these files.
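
A quick sketch of poking around (the hostname of the default profile is colima):

~ $ ls ${COLIMA_HOME}/_lima/colima/
~ $ less ${COLIMA_HOME}/_lima/colima/lima.yaml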

Update colima

When we say "update colima", we mean updating either colima as a whole or just the Docker runtime part; both paths are sketched below.

  1. To update the Docker runtime part, run colima update -p <profile>.
  2. To update the whole colima (Docker runtime included), refer to Install colima.
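
A minimal sketch of both paths, assuming colima was installed via Homebrew as in the Install colima section:

# update only the container runtime inside the VM
~ $ colima update -p default

# update colima itself (and its lima dependency)
~ $ brew upgrade colima lima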

Clean colima

Run colima k8s stop and/or colima stop to shut down the colima instance.

Run colima delete to destroy everything, simulating a fresh installation. It expunges both the colima config and all Docker objects (images and containers included)!

~ $ colima delete -p default -v
are you sure you want to delete colima and all settings? [y/N]

colima caches downloaded assets under $HOME/Library/Caches/{colima,lima}; we can further prune the cache.

~ $ colima prune -a -v
'/Users/jim/Library/Caches/colima' and '/Users/jim/Library/Caches/lima' will be emptied, are you sure? [y/N] y
INFO[0001] Pruning "/Users/jim/Library/Caches/colima"
INFO[0000] Pruning "/Users/jim/Library/Caches/lima"

Docker CLI Setup

Install Docker CLI

Install the Docker CLI. Please do NOT add the --cask CLI option, which would install Docker Desktop instead!

~ $ brew install docker

Additionally, we install the Docker CLI plugins.

~ $ brew install docker-compose docker-buildx

Config Docker CLI

Firstly, let us set the configuration directory via the DOCKER_CONFIG environment variable.

17:12:05 jim@Jims-MacBook-Pro ~
$ echo 'export DOCKER_CONFIG=$HOME/.config/docker' >> ~/.bashrc

$ export DOCKER_CONFIG=$HOME/.config/docker
$ mkdir -p $DOCKER_CONFIG

We configure the Docker CLI to let it find the plugins; otherwise docker compose would report 'compose' is not a docker command.

17:13:15 jim@Jims-MacBook-Pro ~
$ cat .config/docker/config.json
{
    "auths": {},
    "cliPluginsExtraDirs": [
        "/opt/homebrew/lib/docker/cli-plugins"
    ]
}

Switch Docker Engine

In the Start colima section, the Docker context and the Docker builder were automatically switched to the colima engine. But to make sure the Docker CLI communicates with the colima engine, we should explicitly switch to it.

~ $ docker context ls
~ $ docker context use colima
colima
Current context is now "colima"
~ $ less $DOCKER_CONFIG/contexts/meta/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/meta.json

~ $ docker buildx ls
~ $ docker buildx use colima 
~ $ less $DOCKER_CONFIG/buildx/current

This is required if there are multiple engines available on the macOS host.

Verify Docker Setup

Let us start with a simple alpine container. From the output, an aarch64 container was created, so the colima VM is also aarch64. This matches the fact that the macOS host runs on an Apple M1 chip.

~ $ docker run --rm -ti alpine uname -a
Linux 12025f4e1ef2 6.8.0-50-generic #51-Ubuntu SMP PREEMPT_DYNAMIC Sat Nov  9 18:03:35 UTC 2024 aarch64 Linux
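
Since rosetta was enabled in the Config colima section, we may also run an amd64 image on this arm64 VM. A hedged sketch:

# expected to print x86_64 when rosetta/emulation is available
~ $ docker run --rm --platform linux/amd64 alpine uname -m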

Here is an example of a Docker Compose project with 7 containers.

~ $ docker compose ls
NAME                STATUS              CONFIG FILES
kong-dev            running(7)          /Users/jim/workspace/biji/archive/kong-dev-compose.yaml

Correspondingly, within the colima engine, there should be 7 containerd-shim-runc-v2 processes.

~ $ colima ssh -p default

jim@colima:/Users/jim$ ps -eFH | grep -i [r]unc
root       32809       1  0 309514 14604  3 13:51 ?        00:00:04   /usr/bin/containerd-shim-runc-v2 -namespace moby -id a439998840a3dec6c05c77eba72a7fa2301093e6f166e04a94864a288f91e8cf -address /run/containerd/containerd.sock
root       32886       1  0 309386 13768  3 13:51 ?        00:00:01   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 41227a00a3619d4409a9295112b0b2440227918c96d86809aca4580fe042c714 -address /run/containerd/containerd.sock
root       32925       1  0 309450 13880  1 13:51 ?        00:00:01   /usr/bin/containerd-shim-runc-v2 -namespace moby -id a6770fbd56d55312abfa23b200e57c3c9bd8a7d38c96c95c62aed387882493ea -address /run/containerd/containerd.sock
root       32953       1  0 309514 14640  3 13:51 ?        00:00:04   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 92346757d8056583cac160735a9c6e7b4b802ea161fd722242c3a33ee40b6b06 -address /run/containerd/containerd.sock
root       32961       1  0 309386 14520  0 13:51 ?        00:00:04   /usr/bin/containerd-shim-runc-v2 -namespace moby -id dfee6297c63fac0ddf412dacc925cf4bd7e7508d0cff5be29fd822a9251a84b3 -address /run/containerd/containerd.sock
root       33117       1  0 309450 15132  1 13:51 ?        00:00:04   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 909fed36235360ec776dc52b796bd33cb77b42409200a1415ce9300dd90e9ed4 -address /run/containerd/containerd.sock
root       33691       1  0 309450 13760  2 13:51 ?        00:00:01   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 9a1ae83f551302204f7a2da0fd26ae538afb59e97ece44a0255e44e99ab20e95 -address /run/containerd/containerd.sock

Additionally, when a container publishes ports, dockerd spawns several docker-proxy processes for it, accompanied by iptables rules within the colima engine that forward traffic to and from the container.

~ $ colima ssh -p default

jim@colima:/Users/jim$ ps -eFH | grep [1]72.26.0.2
root       32538    1273  0 417873 3712   2 13:51 ?        00:00:00     /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 32768 -container-ip 172.26.0.2 -container-port 80
root       32567    1273  0 454705 3840   2 13:51 ?        00:00:00     /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 32768 -container-ip 172.26.0.2 -container-port 80
root       32586    1273  0 436321 3840   1 13:51 ?        00:00:00     /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 32772 -container-ip 172.26.0.2 -container-port 443
root       32616    1273  0 436257 3712   3 13:51 ?        00:00:00     /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 32772 -container-ip 172.26.0.2 -container-port 443

jim@colima:~$ sudo iptables -t nat -S
-A POSTROUTING -s 172.26.0.2/32 -d 172.26.0.2/32 -p tcp -m tcp --dport 80 -j MASQUERADE
-A POSTROUTING -s 172.26.0.2/32 -d 172.26.0.2/32 -p tcp -m tcp --dport 443 -j MASQUERADE
-A DOCKER ! -i br-7034e9493ba4 -p tcp -m tcp --dport 32768 -j DNAT --to-destination 172.26.0.2:80
-A DOCKER ! -i br-7034e9493ba4 -p tcp -m tcp --dport 32772 -j DNAT --to-destination 172.26.0.2:443

However, each published port requires a standalone docker-proxy process, which consumes quite a lot of memory and CPU! Setting the userland-proxy field to false avoids these processes. See the Config colima section.

Unix Socket File

From the Start colima section and the Inspect colima section, we know the Docker Unix socket docker.sock is mapped from the colima VM to the macOS host.

+---------------------------------------------------------------------------------------------------------------+
|                                                                                                               |
|                                                                                                               |
|    macOS Host                                                                                                 |
|                                                                                                               |
|                                                                                                               |
|        +-----------------------------------------------+                                                      |
|        |                                               |                                                      |
|        |   colima Engine                               |                                                      |
|        |                                               |                                                      |
|        |                                               |                                                      |
|        |     dockerd containerd containerd-shim runc   |                                                      |
|        |                                               |                                                      |
|        |                                               |                                                      |
|        |         unix:///var/run/docker.sock      -----+---> unix:///Users/jim/.colima/default/docker.sock    |
|        |                                               |                                                      |
|        +-----------------------------------------------+                                                      |
|                                                                                                               |
|                                                                                                               |
+---------------------------------------------------------------------------------------------------------------+

Actually, we can manage the containers directly within the colima VM.

~ $ ssh colima
Last login: Tue Jan 21 22:09:08 2025 from 192.168.5.2

jim@colima:~$ docker context ls
NAME        DESCRIPTION                               DOCKER ENDPOINT               ERROR
default *   Current DOCKER_HOST based configuration   unix:///var/run/docker.sock

jim@colima:~$ docker compose ls
NAME                STATUS              CONFIG FILES
kong-dev            running(7)          /Users/jim/workspace/biji/archive/kong-dev-compose.yaml

However, applications on the macOS host cannot find the colima engine because they assume the default pathname /var/run/docker.sock. We should point the DOCKER_HOST Docker environment variable at the colima engine.

Take VSCode for example, we can set the variable in the Docker extension.

docker-host.png

For CLI applications, here is an example.

~ $ DOCKER_HOST="unix://${COLIMA_HOME}/default/docker.sock" docker context ls
NAME        DESCRIPTION                               DOCKER ENDPOINT                                     ERROR
colima      colima                                    unix:///Users/jim/.colima/default/docker.sock
default *   Current DOCKER_HOST based configuration   unix:///Users/jim/.colima/default/docker.sock
Warning: DOCKER_HOST environment variable overrides the active context. To use a context, either set the global --context flag, or unset DOCKER_HOST environment variable.

~ $ DOCKER_HOST="unix://${COLIMA_HOME}/default/docker.sock" docker buildx ls
NAME/NODE     DRIVER/ENDPOINT   STATUS    BUILDKIT   PLATFORMS
colima        docker
 \_ colima     \_ colima        running   v0.17.3    linux/amd64 (+2), linux/arm64, linux/386
default*      docker
 \_ default    \_ default       running   v0.17.3    linux/amd64 (+2), linux/arm64, linux/386

See FAQ Cannot connect to the Docker daemon.
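
Alternatively, to set the variable once for all CLI applications on the host, a hedged sketch following the same .bashrc pattern used for COLIMA_HOME earlier (keep in mind the warning above: DOCKER_HOST overrides the active context):

~ $ echo 'export DOCKER_HOST="unix://${COLIMA_HOME}/default/docker.sock"' >> ~/.bashrc
~ $ export DOCKER_HOST="unix://${COLIMA_HOME}/default/docker.sock"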

Multi-platform Build

The default docker build driver prioritizes simplicity but does not support advanced features like multi-platform build, caching, etc.

~ $ docker buildx ls
NAME/NODE    DRIVER/ENDPOINT   STATUS    BUILDKIT   PLATFORMS
colima*      docker
 \_ colima    \_ colima        running   v0.17.3    linux/amd64 (+2), linux/arm64, linux/386

To support multi-platform builds, create a builder with the docker-container build driver, which has multi-platform support. This driver creates a dedicated container as the builder backend.

The --use option switches to the newly created builder, while --bootstrap starts the dedicated container in advance.

~ $ docker buildx create --use --bootstrap --name multi-platform-builder --node multi-platform-builder --driver docker-container --platform "linux/arm64,linux/amd64"
[+] Building 4.4s (1/1) FINISHED
 => [internal] booting buildkit                                                                                                                                                                                               4.4s
 => => pulling image moby/buildkit:buildx-stable-1                                                                                                                                                                            4.0s
 => => creating container buildx_buildkit_multi-platform-builder                                                                                                                                                              0.4s
multi-platform-builder

~ $ docker buildx ls
NAME/NODE                    DRIVER/ENDPOINT    STATUS    BUILDKIT   PLATFORMS
multi-platform-builder*      docker-container
 \_ multi-platform-builder    \_ colima         running   v0.18.2    linux/amd64* (+2), linux/arm64*, linux/386
colima                       docker
 \_ colima                    \_ colima         running   v0.17.3    linux/amd64 (+2), linux/arm64, linux/386

~ $ docker ps
CONTAINER ID   IMAGE                           COMMAND                  CREATED          STATUS          PORTS     NAMES
80b4fa8ad380   moby/buildkit:buildx-stable-1   "buildkitd --allow-i…"   14 seconds ago   Up 14 seconds             buildx_buildkit_multi-platform-builder

Let's build a "linux/amd64" image on the "linux/arm64" machine.

~ $ grep -nA1 platforms kong-dev-compose.yaml
72:      platforms:                    # Docker will determine the native platform unless you specify a different value.
73-      - "linux/amd64"               # linux/arm64 | linux/amd64

~ $ docker compose -f kong-dev-compose.yaml build kong
[+] Building 0/1s (0/1)                                                                                                                                                                           docker-container:multi-platform-builder
[+] Building 59.2s (7/16)                                                                                                                                                                         docker-container:multi-platform-builder
[+] Building 93.9s (7/16)                                                                                                                                                                         docker-container:multi-platform-builder
 => [kong internal] booting buildkit                                                                                                                                                                                                 3.3s
 => => pulling image moby/buildkit:buildx-stable-1                                                                                                                                                                                   2.9s
 => => creating container buildx_buildkit_multi-platform-builder                                                                                                                                                                     0.4s
 => [kong internal] load build definition from kong-dev-compose.Dockerfile                                                                                                                                                           0.0s
 => => transferring dockerfile: 6.34kB                                                                                                                                                                                               0.0s
 => [kong internal] load metadata for docker.io/library/ubuntu:24.04                                                                                                                                                                 4.3s
 => [kong auth] library/ubuntu:pull token for registry-1.docker.io                                                                                                                                                                   0.0s
 => [kong internal] load .dockerignore                                                                                                                                                                                               0.0s
 => => transferring context: 2B                                                                                                                                                                                                      0.0s
 => [kong emmyluadebugger 1/2] FROM docker.io/library/ubuntu:24.04@sha256:80dd3c3b9c6cecb9f1667e9290b3bc61b78c2678c02cbdae5f0fea92cc6734ab                                                                                           1.9s
 => => resolve docker.io/library/ubuntu:24.04@sha256:80dd3c3b9c6cecb9f1667e9290b3bc61b78c2678c02cbdae5f0fea92cc6734ab                                                                                                                0.0s
 => => sha256:8bb55f0677778c3027fcc4253dc452bc9c22de989a696391e739fb1cdbbdb4c2 28.89MB / 28.89MB                                                                                                                                     1.4s
 => => extracting sha256:8bb55f0677778c3027fcc4253dc452bc9c22de989a696391e739fb1cdbbdb4c2                                                                                                                                            0.5s
 => [kong emmyluadebugger 2/2] RUN <<-EOF (set -ex...)                                                                                                                                                                              84.0s
 => => # [ 68%] Building CXX object emmy_debugger/CMakeFiles/emmy_debugger.dir/src/transporter/transporter.cpp.o
 => => # [ 70%] Building CXX object emmy_debugger/CMakeFiles/emmy_debugger.dir/src/debugger/emmy_debugger.cpp.o
 => => # [ 71%] Building CXX object emmy_debugger/CMakeFiles/emmy_debugger.dir/src/debugger/emmy_debugger_manager.cpp.o
 => => # [ 73%] Building CXX object emmy_debugger/CMakeFiles/emmy_debugger.dir/src/debugger/emmy_debugger_lib.cpp.o
 => => # [ 75%] Building CXX object emmy_debugger/CMakeFiles/emmy_debugger.dir/src/debugger/hook_state.cpp.o
 => => # [ 76%] Building CXX object emmy_debugger/CMakeFiles/emmy_debugger.dir/src/debugger/extension_point.cpp.o
 => [kong baseimage  2/11] RUN echo "I am building target platform linux/amd64 on source platform linux/arm64 from base ubuntu:24.04"

...

~ $ docker image inspect 11e108b7f310
...
"Architecture": "amd64",

If we do not need multi-platform builds, just stick with the default driver for better performance. Read more at multi-platform-docker-build.
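
For example, to switch back to the default colima builder and drop the dedicated builder container created above:

~ $ docker buildx use colima
~ $ docker buildx rm multi-platform-builder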

SSH Agent Forwarding

In order to reuse the SSH agent on the macOS host, we need to complete two steps.

  1. In the Config colima section, we enabled the forwardAgent field (--ssh-agent CLI option) to forward the SSH agent on the macOS host into the colima VM.

    The agent on the macOS host.

    ~ $ ls -l $SSH_AUTH_SOCK
    srw-------  1 jim  staff  0 Feb 28 18:05:19 2024 /var/folders/wc/fnkx5qmx61l_wx5shysmql5r0000gn/T//ssh-lNQDx9E3iDHJ/agent.90068
    

    The forwarded agent in the colima VM.

    ~ $ ssh colima echo '$SSH_AUTH_SOCK'
    /tmp/ssh-MwfhUrSSNj/agent.1141
    
    ~ $ colima ssh -p default --very-verbose -- eval ls -l '$SSH_AUTH_SOCK'
    TRAC[0000] cmd ["limactl" "list" "colima" "--json"]
    TRAC[0000] cmd int ["limactl" "shell" "--workdir" "/Users/jim" "colima" "eval" "ls" "-l" "$SSH_AUTH_SOCK"]
    srwxrwxr-x 1 jim jim 0 Jan  2 21:11 /tmp/ssh-NY12s8xDgK/agent.1142
    

    Additionally, within the colima VM, a symlink /run/host-services/ssh-auth.sock is created for the forwarded agent.

    ~ $ colima ssh -p default --very-verbose -- ls -l /run/host-services/ssh-auth.sock
    TRAC[0000] cmd ["limactl" "list" "colima" "--json"]
    TRAC[0000] cmd int ["limactl" "shell" "--workdir" "/Users/jim" "colima" "ls" "-l" "/run/host-services/ssh-auth.sock"]
    lrwxrwxrwx 1 jim root 30 Jan  2 21:11 /run/host-services/ssh-auth.sock -> /tmp/ssh-NY12s8xDgK/agent.1142
    

    The VM can reuse the forwarded SSH agent.

    ~ $ colima ssh -p default -- ssh-add -l
    384 SHA256:a2TyZj/tyhwUrs6JGhg//+Zwnpvti1yttted6OgmWtg jim.hu@konghq.com (Kong Dev) (ECDSA)
    4096 SHA256:x8gO3GTWyTkJLudIvoKwHZ9Mez0BFktfSc2FZ/jGqPU jim@zhtux (RSA)
    
  2. Bind mount the forwarded SSH agent into your container.

    Containers know nothing about the SSH agent on the macOS host; they only see the forwarded agent within the colima VM.

    ~ $ export COLIMA_SSH_AUTH_SOCK=$(ssh colima echo '$SSH_AUTH_SOCK')
    # -or-
    ~ $ export COLIMA_SSH_AUTH_SOCK='/run/host-services/ssh-auth.sock'
       
    ~ $ docker run --rm --name test-ssh-agent --mount "type=bind,src=$COLIMA_SSH_AUTH_SOCK,dst=$COLIMA_SSH_AUTH_SOCK" -e SSH_AUTH_SOCK=$COLIMA_SSH_AUTH_SOCK nicolaka/netshoot ssh-add -l
    384 SHA256:a2TyZj/tyhwUrs6JGhg//+Zwnpvwp9yttted4OgmWtg jim.hu@konghq.com (Kong Dev) (ECDSA)
    4096 SHA256:x2gO9GTWyTkJLudIwcFwHZ9Mez0BFktfSc9FZ/jGqPU jim@zhtux (RSA)
    

    If it reports a permission error, add write permission (chmod a+w) to the socket file. See SSH Agent Forwarding.

Mount Volumes

By default, the user home directory and /tmp/colima are bind mounted to the VM.

jim@colima:~$ cat /etc/fstab | grep -E 'colima|Users'
jim@colima:~$ mount | grep -E 'colima|Users'
mount0 on /Users/jim type virtiofs (rw,relatime)
mount1 on /tmp/colima type virtiofs (rw,relatime)

To mount other pathnames from the macOS host into containers, we first need to mount their parent directories into the colima VM! Otherwise, we would receive errors like the following.

otel-collector-1  | 2025-01-07T07:42:50.770341478Z Error: cannot start pipelines: open /tmp/file_exporter.json: is a directory
otel-collector-1  | 2025-01-07T07:42:50.770344603Z 2025/01/07 07:42:50 collector server run finished with error: cannot start pipelines: open /tmp/file_exporter.json: is a directory

Check the mounts field in the config file.
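
As a hedged sketch, extra host directories can be mounted at start time with the --mount flag (the paths below are placeholders; the :w suffix makes the mount writable; a restart is needed for new mounts to take effect):

~ $ colima stop -p default
~ $ colima start -p default --mount /opt/data:w --mount $HOME/workspace:w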

Kubernetes Setup

Regarding how to play with the Kubernetes environment, refer to "biji".

Enable Kubernetes

Under the hood, colima supports Kubernetes via K3s, a lightweight Kubernetes distribution with a small memory and CPU footprint (roughly half the memory of upstream Kubernetes).

To create a K8s cluster, we can add the 'kubernetes' config field.

~ $ colima template -p default --editor nano
>kubernetes:
>  enabled: true

As the K3s server consumes extra CPU and memory, consider creating a separate profile specifically for K8s and starting it on demand. Alternatively, we can specify the --kubernetes=true or --kubernetes=false CLI option. However, the --kubernetes CLI option overwrites the colima config, so we would have to provide it every time.

# enable
~ $ colima start -p default --kubernetes=true

# disable
~ $ colima start -p default --kubernetes=false

Fortunately, colima supports dynamically starting and stopping the K8s cluster with the kubernetes (alias k8s) sub-command! After stopping the cluster, remember to run docker container prune to remove the associated stopped containers.

~ $ colima status -p default
INFO[0000] colima is running using macOS Virtualization.Framework
INFO[0000] arch: aarch64
INFO[0000] runtime: docker
INFO[0000] mountType: virtiofs
INFO[0000] socket: unix:///Users/jim/.colima/default/docker.sock
INFO[0000] kubernetes: enabled

~ $ colima k8s stop -p default

~ $ colima status -p default
INFO[0000] colima is running using macOS Virtualization.Framework
INFO[0000] arch: aarch64
INFO[0000] runtime: docker
INFO[0000] mountType: virtiofs
INFO[0000] socket: unix:///Users/jim/.colima/default/docker.sock

We can customize K3s via the k3sArgs config field. By default, --disable=traefik (the Traefik ingress controller) is passed to the k3s server process.
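
For example, a sketch of extending k3sArgs in the template (the extra --disable value is only an illustration):

~ $ colima template -p default --editor nano
>kubernetes:
>  enabled: true
>  k3sArgs:
>  - --disable=traefik
>  - --disable=metrics-server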

According to the FAQ, colima does not support multiple K8s clusters! However, we can run Minikube, Kind or K3d (preferred) on top of colima as the Docker backend, or create multiple colima instances to simulate multiple K8s clusters.
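
A hedged sketch of spinning up an extra cluster with K3d on top of the colima Docker engine (the cluster name dev is arbitrary):

~ $ brew install k3d

# creates a K3s cluster as Docker containers inside the colima engine
~ $ k3d cluster create dev
~ $ k3d cluster list

# tear it down when done
~ $ k3d cluster delete dev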

Install kubectl

~ $ brew install kubectl

~ $ kubectl version
Client Version: v1.32.0
Kustomize Version: v5.5.0
Server Version: v1.31.2+k3s1

Config kubectl

kubectl loads its configuration from kubeconfig files: the --kubeconfig flag, the $KUBECONFIG environment variable, or ${HOME}/.kube/config, in decreasing order of priority.

~ $ kubectl options

~ $ cat ~/.kube/config

~ $ kubectl config view [--minify]
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:62005
  name: colima
contexts:
- context:
    cluster: colima
    user: colima
  name: colima
current-context: colima
kind: Config
preferences: {}
users:
- name: colima
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED

Inspect Kubernetes

The colima K8s cluster is created.

~ $ kubectl config get-clusters
NAME
colima

~ $ kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:62005
CoreDNS is running at https://127.0.0.1:62005/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://127.0.0.1:62005/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

The colima node is created.

~ $ kubectl get nodes -o wide
NAME     STATUS   ROLES                  AGE   VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
colima   Ready    control-plane,master   14d   v1.31.2+k3s1   192.168.5.1   <none>        Ubuntu 24.04.1 LTS   6.8.0-50-generic   docker://27.4.0

The colima K8s context is created. Generally speaking, a K8s context is a combination of a cluster, a user (auth) and an optional namespace. Similar to Docker Context, we can switch between different K8s contexts by kubectl config use-context.

~ $ kubectl config current-context
colima

~ $ kubectl config get-contexts
CURRENT   NAME     CLUSTER   AUTHINFO   NAMESPACE
*         colima   colima    colima
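
For completeness, a short sketch of switching the context and pinning a namespace (kube-system here is only an example):

~ $ kubectl config use-context colima
Switched to context "colima".

~ $ kubectl config set-context --current --namespace=kube-system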

Within the colima VM, a K3s server process is spawned.

jim@colima:~$ ps -eFF | grep -i [k]3s
root        1794       1 14 1484594 481076 0 20:30 ?       00:05:28 /usr/local/bin/k3s server

jim@colima:~$ sudo journalctl -efx -u k3s.service

We find 3 Docker containers created for the Kubernetes internal components (e.g. CoreDNS, the cluster DNS). Each of them has an accompanying pause container.

# on macOS host
~ $ docker ps
CONTAINER ID   IMAGE                        COMMAND                  CREATED          STATUS          PORTS     NAMES
b256190c220b   2f6c962e7b83                 "/coredns -conf /etc…"   50 minutes ago   Up 50 minutes             k8s_coredns_coredns-56f6fc8fd7-jqg9q_kube-system_ba370ccf-ea61-4357-a537-bf57ad963b09_0
be7f4f0b1fe3   d3dd7baae2fc                 "local-path-provisio…"   50 minutes ago   Up 50 minutes             k8s_local-path-provisioner_local-path-provisioner-5cf85fd84d-kh7gz_kube-system_7d860118-ebae-4964-a475-8df63e95533e_0
6a1a5c8f7902   5548a49bb60b                 "/metrics-server --c…"   50 minutes ago   Up 50 minutes             k8s_metrics-server_metrics-server-5985cbc9d7-pqbql_kube-system_917fc3ca-e55c-4a8b-9732-3a010a337a16_0
a6d8cade10b8   rancher/mirrored-pause:3.6   "/pause"                 50 minutes ago   Up 50 minutes             k8s_POD_metrics-server-5985cbc9d7-pqbql_kube-system_917fc3ca-e55c-4a8b-9732-3a010a337a16_0
175072564faf   rancher/mirrored-pause:3.6   "/pause"                 50 minutes ago   Up 50 minutes             k8s_POD_coredns-56f6fc8fd7-jqg9q_kube-system_ba370ccf-ea61-4357-a537-bf57ad963b09_0
96a19292498d   rancher/mirrored-pause:3.6   "/pause"                 50 minutes ago   Up 50 minutes             k8s_POD_local-path-provisioner-5cf85fd84d-kh7gz_kube-system_7d860118-ebae-4964-a475-8df63e95533e_0

# within colima VM
jim@colima:~$ ps -eF --forest | grep -A1 [c]ontainerd-shim-runc-v2
root        2444       1  0 309386 13576  2 20:30 ?        00:00:01 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 96a19292498d036eb229cafba53fab0f5f61736b37c1f7d0379f68b978a38e75 -address /run/containerd/containerd.sock
65535       2509    2444  0   190   384   0 20:30 ?        00:00:00  \_ /pause
root        2445       1  0 309450 13528  2 20:30 ?        00:00:01 /usr/bin/containerd-shim-runc-v2 -namespace moby -id a6d8cade10b8e260d6d0217036222e081f646648f7cfe28a14ff988d32942fa9 -address /run/containerd/containerd.sock
65535       2508    2445  0   190   384   0 20:30 ?        00:00:00  \_ /pause
root        2446       1  0 309450 13492  2 20:30 ?        00:00:01 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 175072564faf2eb02f25385fa8e3fc8615c54999e22fdc4a389c2ad9ca0b17ec -address /run/containerd/containerd.sock
65535       2507    2446  0   190   384   1 20:30 ?        00:00:00  \_ /pause
root        2711       1  0 309450 13896  2 20:30 ?        00:00:03 /usr/bin/containerd-shim-runc-v2 -namespace moby -id be7f4f0b1fe33a4fb2b38d0c5cf7e45a654aef517824652efcd439ea6e360684 -address /run/containerd/containerd.sock
root        2770    2711  0 316386 33920  1 20:30 ?        00:00:02  \_ local-path-provisioner start --config /etc/config/config.json
root        2731       1  0 309450 14292  2 20:30 ?        00:00:03 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 6a1a5c8f790287a28c6d0650096f8caf23815d7c8ba984bf7b7bd4a9dae506f8 -address /run/containerd/containerd.sock
1000        2802    2731  1 320556 59672  0 20:30 ?        00:00:33  \_ /metrics-server --cert-dir=/tmp --secure-port=10250 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
root        2751       1  0 309450 13928  2 20:30 ?        00:00:03 /usr/bin/containerd-shim-runc-v2 -namespace moby -id b256190c220bb96392fee9100b55b311900e88ab79acd1ba3f890e5b21a2b8a1 -address /run/containerd/containerd.sock
65532       2795    2751  0 321216 52356  3 20:30 ?        00:00:17  \_ /coredns -conf /etc/coredns/Corefile

Correspondingly, we find the 3 associated deployments and services.

~ $ kubectl get deployments -A
NAMESPACE     NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   coredns                  1/1     1            1           14d
kube-system   local-path-provisioner   1/1     1            1           14d
kube-system   metrics-server           1/1     1            1           14d

~ $ kubectl get services -A
NAMESPACE     NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes       ClusterIP   10.43.0.1      <none>        443/TCP                  14d
kube-system   kube-dns         ClusterIP   10.43.0.10     <none>        53/UDP,53/TCP,9153/TCP   14d
kube-system   metrics-server   ClusterIP   10.43.39.244   <none>        443/TCP                  14d

Notably, with the default Docker runtime, colima and K3s share container images!

~ $ docker images 'rancher/*'
REPOSITORY                         TAG                    IMAGE ID       CREATED         SIZE
rancher/klipper-helm               v0.9.3-build20241008   128d0eddd2c8   2 months ago    188MB
rancher/local-path-provisioner     v0.0.30                d3dd7baae2fc   3 months ago    51.7MB
rancher/mirrored-library-traefik   2.11.10                43b65488db50   3 months ago    165MB
rancher/mirrored-metrics-server    v0.7.2                 5548a49bb60b   4 months ago    65.5MB
rancher/mirrored-coredns-coredns   1.11.3                 2f6c962e7b83   5 months ago    60.2MB
rancher/klipper-lb                 v0.4.9                 8e8709f8caae   5 months ago    20MB
rancher/mirrored-library-busybox   1.36.1                 7db2ddde018a   19 months ago   4.04MB
rancher/mirrored-pause             3.6                    7d46a07936af   3 years ago     484kB

We also find that a Docker network is created.

~ $ docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
7be6cae2031a   k3d-k3s-default   bridge    local

EXTREME CAUTION

As the Docker context and the K8s context share the same colima engine, be extremely cautious with destructive operations like prune, rm, stop, kill, etc.
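
Before running any of these, a small sketch that may help double-check which containers belong to the K8s cluster (they are named k8s_*, as shown in the Inspect Kubernetes section):

~ $ docker ps -a --filter "name=k8s_" --format "table {{.Names}}\t{{.Status}}"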

References

  1. colima FAQ.