Use containerd to handle the k8s.gcr.io deprecation


The Kubernetes community is getting ready for yet another major change. Until fall 2022, the k8s.gcr.io container registry hosted many Kubernetes community-managed container images such as Cluster Autoscaler, metrics-server, and cluster-proportional-autoscaler. The "gcr" in k8s.gcr.io stands for Google Container Registry. In order to be vendor neutral, the community is moving away from Google Container Registry to host its container images.

As a result, starting March 20th, traffic from the old k8s.gcr.io registry is being redirected to registry.k8s.io. The older registry will remain functional for some time, but it is eventually getting deprecated.


What’s the impact?

The change that occurred six days after Pi Day is unlikely to cause major problems. There are some edge cases, but unless you operate in an air-gapped or highly restrictive environment that applies strict domain name access controls, you won't notice the change.

This doesn’t mean that there’s no action required. Now is the time to scan code repositories and clusters for usage of the old registry. Failing to act will result in cluster components failing.
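For the repository side, a plain grep is usually enough. Here's a minimal sketch; the demo-repo directory and its manifest are hypothetical stand-ins for your own repository layout:

```shell
# Create a sample manifest (hypothetical repo layout) that still
# references the old registry
mkdir -p demo-repo
cat << 'YAML' > demo-repo/deploy.yaml
containers:
- name: metrics-server
  image: k8s.gcr.io/metrics-server/metrics-server:v0.6.2
YAML

# List every file in the repo that references k8s.gcr.io
grep -rl 'k8s.gcr.io' demo-repo
```

`grep -rl` prints only file names; drop the `-l` to see the matching lines themselves.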

Once the old registry goes away, Kubernetes will not be able to create new Pods (unless the image is already cached on the node) if the container uses an image hosted on k8s.gcr.io.

What do you need to change?

Cluster owners and development teams have to ensure they are not using any images stored in the old registry. The change is fairly simple.

You need to change your manifests to use the registry.k8s.io container registry.
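The rewrite itself can be a one-liner. A sketch, assuming your manifests live under a manifests/ directory (a hypothetical path):

```shell
# Sample manifest (hypothetical) still pointing at the old registry
mkdir -p manifests
printf 'image: k8s.gcr.io/pause:3.5\n' > manifests/pod.yaml

# Rewrite every k8s.gcr.io reference to registry.k8s.io, in place
grep -rl 'k8s.gcr.io' manifests | xargs sed -i 's|k8s.gcr.io|registry.k8s.io|g'

cat manifests/pod.yaml
# image: registry.k8s.io/pause:3.5
```

Review the diff before committing; a blanket substitution will also touch comments and documentation.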


You can find out which Pods use the old registry using kubectl:

kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' |\
sort |\
uniq -c | grep k8s.gcr.io

Here are the Pods in my test cluster that use the old registry:


These are the at-risk Pods. I’ll have to update the container registry used in the Pods.

When hunting for references to the old registry, be sure to include containers that may not be currently running in your cluster, and don't forget to scan code repositories.
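Images cached on a node also show up in the node's status, so you can check for old-registry leftovers there as well. Here's a sketch of the filtering step, using a hypothetical sample list in place of the output of a live kubectl get nodes -o jsonpath="{.items[*].status.images[*].names[*]}" call:

```shell
# Hypothetical sample of the space-separated image list a node reports
images='k8s.gcr.io/pause:3.5 registry.k8s.io/coredns/coredns:v1.9.3'

# Keep only the cached images that still come from the old registry
echo "$images" | tr -s '[[:space:]]' '\n' | sort -u | grep k8s.gcr.io
# k8s.gcr.io/pause:3.5
```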

What if I don’t control the workloads?

One of my colleagues raised an intriguing question: is there a way to handle this change at the cluster level? He had a valid concern. Many large enterprises might not be able to implement this change before the community sunsets k8s.gcr.io.

I work with many customers that manage large Kubernetes clusters but have little control over the workloads that get deployed into them. Some of these clusters are shared by hundreds of development teams. The burden is on central platform engineering teams to disseminate this information to individual dev teams (who are busy writing code, not checking Kubernetes news!).

So, what can these teams do to make sure that when the old registry finally croaks, they don't get paged in the middle of the night for ErrImagePull and ImagePullBackOff errors?

Turns out you can use containerd to handle this redirection at the node level. Let's find out how.

Using mirrors in containerd

Ever since Docker Hub started rate limiting image pulls, many have opted to store images in local registries. Mirrors save network bandwidth, reduce image pull time, and don't rate-limit you.

You can configure registry.k8s.io as a mirror for k8s.gcr.io in containerd. This configuration will automatically pull images from registry.k8s.io whenever a Pod uses an image stored in k8s.gcr.io.

On your worker node, append these lines to the containerd config file at /etc/containerd/config.toml:

[plugins."io.containerd.grpc.v1.cri".registry]
config_path = "/etc/containerd/certs.d"

The final file on an Amazon EKS cluster looks like this:

version = 2
root = "/var/lib/containerd"
state = "/run/containerd"

[grpc]
address = "/run/containerd/containerd.sock"

[plugins."io.containerd.grpc.v1.cri".containerd]
default_runtime_name = "runc"

[plugins."io.containerd.grpc.v1.cri"]
sandbox_image = ""

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true

[plugins."io.containerd.grpc.v1.cri".cni]
bin_dir = "/opt/cni/bin"
conf_dir = "/etc/cni/net.d"

[plugins."io.containerd.grpc.v1.cri".registry]
config_path = "/etc/containerd/certs.d"

Next, create a directory called /etc/containerd/certs.d/k8s.gcr.io and a hosts.toml file inside it:

mkdir -p /etc/containerd/certs.d/k8s.gcr.io

cat << EOF > /etc/containerd/certs.d/k8s.gcr.io/hosts.toml
server = "https://registry.k8s.io"

[host."https://registry.k8s.io"]
  capabilities = ["pull", "resolve"]
EOF

Image pull requests to k8s.gcr.io will now be sent to registry.k8s.io.

Restart containerd and kubelet for the change to take effect.

systemctl restart containerd kubelet

Let’s validate that images are indeed getting pulled from the new registry. I added an entry to my /etc/hosts file to break name resolution for k8s.gcr.io.
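The entry looks something like this; pointing the name at the loopback address is just one way to break resolution:

```
# /etc/hosts on the worker node
127.0.0.1 k8s.gcr.io
```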


Containerd can no longer pull an image from k8s.gcr.io.


I can tell ctr to use the mirror by specifying the --hosts-dir parameter:

ctr images pull --hosts-dir "/etc/containerd/certs.d" k8s.gcr.io/pause:3.5

This time the operation succeeds.


Any Pods I create from now on will use the new registry even though their manifests reference the old registry. Here's a test using a pause container.

kubectl create deployment pause --image k8s.gcr.io/pause:3.5

Perfect! Kubernetes could create Pods even though I blocked k8s.gcr.io on the node.

What’s the best way to implement this in production?

In my little demo, I changed a single node in the cluster. What about the rest of the nodes?

There are three ways to implement this change on every node in your cluster:

  1. The easiest way is to use a DaemonSet to change the containerd config.toml and add the hosts.toml file. IBM Cloud has shared an example of this approach on GitHub.
  2. You can use EC2 user data or AWS Systems Manager to make the change when a node gets created.
  3. You can create your own AMI with the configuration baked in.
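For option 2, the user data boils down to a short script. Here's a sketch; it writes into a scratch directory named node-root so it's safe to dry-run, while real user data would set ROOT=/ and finish with systemctl restart containerd kubelet:

```shell
# Scratch root for a dry run; real user data would use ROOT=/
ROOT=./node-root

# Drop in the mirror configuration for the old registry
mkdir -p "$ROOT/etc/containerd/certs.d/k8s.gcr.io"
cat << 'EOF' > "$ROOT/etc/containerd/certs.d/k8s.gcr.io/hosts.toml"
server = "https://registry.k8s.io"

[host."https://registry.k8s.io"]
  capabilities = ["pull", "resolve"]
EOF

# Point containerd at the drop-in directory (skip if already configured)
mkdir -p "$ROOT/etc/containerd"
touch "$ROOT/etc/containerd/config.toml"
grep -q 'config_path' "$ROOT/etc/containerd/config.toml" ||
cat << 'EOF' >> "$ROOT/etc/containerd/config.toml"
[plugins."io.containerd.grpc.v1.cri".registry]
config_path = "/etc/containerd/certs.d"
EOF
```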

What if I use Docker as runtime?

Starting with Kubernetes version 1.24, containerd is the only runtime available in Amazon EKS AMIs. If you have an edge case that requires using Docker, there's still hope.

Docker also has support for registry mirrors. Here’s the documentation page you need.

Don’t rely on stopgaps

While the solution included in this post works, I recommend using it only as a safety measure. The main reason is that you'll need to customize the Amazon EKS AMI or create your own AMI to use it.

You’ll have less operational overhead if you can simply use the EKS AMIs as-is. The best way to handle this registry deprecation is to update your manifests.

Oh, and by the way, you can also use mirrors to set up a pull-through cache.