Description
For as-yet-unknown reasons my kind docker network injects IPv6 DNS resolvers, but they are unreachable from containers within the network. To be clear, the underlying problem here is broken resolution in my kind docker network. But the registry proxy's DNS handling makes this problem much worse.
docker-registry-proxy seems to prefer the IPv6 resolvers, even if DISABLE_IPV6=true is set, and ignores all other resolvers, leading to:
DEBUG, determined RESOLVERS from /etc/resolv.conf: '127.0.0.11 [2001:4860:4860::8888] [2001:4860:4860::8844]'
Possible resolver: 127.0.0.11
Possible resolver: [2001:4860:4860::8888]
Possible resolver: [2001:4860:4860::8844]
Final chosen resolver: resolver [2001:4860:4860::8844];
Using auto-determined resolver 'resolver [2001:4860:4860::8844]; ' via '/etc/nginx/resolvers.conf'
...
2025/09/04 22:46:44 [error] 85#85: *55 someregistry.azurecr.io could not be resolved (110: Operation timed out), client: 127.0.0.1, server: proxy_caching_, request: "HEAD /v2/somerepository/blobs/sha256:7d4b90e7bd7acdf0124ba6efdef329226d181efbf79d606e6d681e690f23c051 HTTP/1.1", host: "someregistry.azurecr.io"
The registry proxy should configure nginx with all available resolvers, and if DISABLE_IPV6=true it should filter out IPv6 resolver addresses.
I'll see if I can cook up a quick patch for this; in the meantime, I'm posting this so I can share a workaround with others.
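For illustration, here's a rough sketch of what that filtering could look like, assuming the entrypoint builds the resolver list by parsing /etc/resolv.conf. The variable names (and the sample resolv.conf contents) are mine, not the proxy's actual script:

```shell
# Hypothetical sketch: collect ALL nameservers, filter IPv6 when asked,
# and emit a single nginx resolver directive with every survivor.
# Sample input mirrors the resolv.conf from my kind network.
RESOLVCONF='nameserver 127.0.0.11
nameserver 2001:4860:4860::8888
nameserver 2001:4860:4860::8844'

DISABLE_IPV6=true

# Every nameserver entry, not just the last one found.
RESOLVERS=$(printf '%s\n' "$RESOLVCONF" | awk '/^nameserver/ {print $2}')

# IPv6 addresses always contain ':', so drop them when DISABLE_IPV6=true.
if [ "$DISABLE_IPV6" = "true" ]; then
    RESOLVERS=$(printf '%s\n' "$RESOLVERS" | grep -v ':')
fi

# Unquoted expansion on purpose: collapses newlines into spaces.
FINAL=$(echo $RESOLVERS)
echo "resolver ${FINAL};"   # prints: resolver 127.0.0.11;
```

With DISABLE_IPV6 unset, all three addresses would land in one `resolver` directive, letting nginx try them all instead of pinning itself to a single unreachable IPv6 server.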
The underlying DNS issue
Note that kind creates an ipv6-enabled network:
➜ ~ docker network inspect kind | jq -r '.[].EnableIPv6'
true
... even if the kind-config.yaml specifies an ipv4-only cluster:
networking:
  # apiServerAddress: "0.0.0.0"
  apiServerPort: 6443
  # ipv4 only
  ipFamily: ipv4
presumably because kind wants to be able to use the same network for both ipv4, ipv6 and dual-stack clusters.
If I stand up another container on the same docker network, I can see that /etc/resolv.conf has:
$ docker run --rm --name test -it --net kind --hostname docker-registry-proxy ubuntu:22.04
root@test:/# cat /etc/resolv.conf
nameserver 127.0.0.11
nameserver 2001:4860:4860::8888
nameserver 2001:4860:4860::8844
options edns0 trust-ad ndots:0
root@docker-registry-proxy:/# dig +short @2001:4860:4860::8888 google.com
;; communications error to 2001:4860:4860::8888#53: timed out
;; communications error to 2001:4860:4860::8888#53: timed out
;; communications error to 2001:4860:4860::8888#53: timed out
;; no servers could be reached
root@docker-registry-proxy:/# dig +short @127.0.0.11 google.com
142.251.221.78
and normal resolution works fine, since it'll presumably fall back to the ipv4 resolver:
root@docker-registry-proxy:/# host google.com
google.com has address 142.251.221.78
google.com has IPv6 address 2404:6800:4006:812::200e
google.com mail is handled by 10 smtp.google.com.
Workaround at the docker network level
Docker doesn't allow networks to be edited once created, but you can destroy all attached containers, then drop and re-create the network.
If kind finds the kind network already exists, it will not re-create it.
I dumped the network config with
docker network inspect kind
then deleted it, and re-created it with
docker network create kind --opt=com.docker.network.bridge.enable_ip_masquerade=true --opt com.docker.network.driver.mtu=1500 --gateway 172.18.0.1 --subnet 172.18.0.0/16
(your address ranges etc may vary)
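The manual steps above can be wrapped in a small helper; this is a sketch assuming the subnet/gateway from my setup (yours may vary), with a `DOCKER` override added purely so you can dry-run it with `DOCKER=echo` before touching anything:

```shell
#!/bin/sh
# Drop and re-create the kind docker network without IPv6.
# Set DOCKER=echo for a dry run that only prints the commands.
DOCKER="${DOCKER:-docker}"

recreate_kind_network() {
    # Docker refuses to remove a network that still has endpoints,
    # so remove every attached container first.
    for c in $($DOCKER network inspect kind \
            --format '{{range .Containers}}{{.Name}} {{end}}'); do
        $DOCKER rm -f "$c"
    done
    $DOCKER network rm kind
    # Re-create with the same options kind used, minus IPv6.
    $DOCKER network create kind \
        --opt com.docker.network.bridge.enable_ip_masquerade=true \
        --opt com.docker.network.driver.mtu=1500 \
        --gateway 172.18.0.1 --subnet 172.18.0.0/16
}
```

Source the file and call `recreate_kind_network` when you're ready; run once with `DOCKER=echo` first to preview what will be destroyed.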
I then confirmed that new containers on the network do not get ipv6 resolvers:
$ docker run --rm --name test -it --net kind --hostname docker-registry-proxy ubuntu:22.04
root@docker-registry-proxy:/# cat /etc/resolv.conf
search fritz.box hsnet.enterprisedb.network
nameserver 127.0.0.11
options edns0 trust-ad ndots:0
Re-creating the kind cluster should now result in clusters with ipv4-only DNS resolution, working around this issue with the docker-registry-proxy.
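Concretely, re-creating the cluster is just a delete and a create against the ipv4-only config shown earlier; a sketch, with a `KIND` override added only so the commands can be previewed with `KIND=echo`:

```shell
#!/bin/sh
# Re-create the kind cluster on the fixed network.
# Set KIND=echo for a dry run.
KIND="${KIND:-kind}"

recreate_cluster() {
    $KIND delete cluster
    # kind-config.yaml is the ipv4-only config from earlier in this issue.
    $KIND create cluster --config kind-config.yaml
}
```

Because kind reuses an existing `kind` network rather than re-creating it, the new cluster will come up on the ipv4-only network built above.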