Run docker build on a machine without internet access

I have a Docker Compose project with a couple of containers running on a production server. I can SSH onto the server, but it has no internet access (the customer wanted it that way; it's a long story).

To make changes, I have to build the images locally, package them with docker save, and then transfer the archives to the server. This process is really tedious, especially when I just want to change one or two lines of code.
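Roughly, the current workflow looks like this (image and host names are placeholders):

```sh
# on the machine with internet access
docker build -t myapp:latest .
docker save myapp:latest -o myapp.tar

# transfer the archive and load it on the server
scp myapp.tar user@prod-server:/tmp/
ssh user@prod-server docker load -i /tmp/myapp.tar
```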

When I make adjustments directly on the server and run docker build, it fails because there is no internet access:

failed to solve: node:18: failed to do request: Head "https://registry-1.docker.io/v2/library/node/manifests/18": dialing registry-1.docker.io:443 with direct connection: resolving host registry-1.docker.io: lookup registry-1.docker.io: no such host

Why would docker build need access to the registry even though no new packages need to be downloaded? Shouldn’t all the unchanged layers be cached anyway?

Is there a way to rebuild containers on a machine with no internet access?

failed to do request: Head "https://registry-1.docker.io/v2/library/node/manifests/18"

Docker is checking whether the node:18 image has changed since the last build, or it needs to download the image because you don't have it locally. When the builder is the default buildx builder, I believe it will use images that are already pulled into the docker engine, but if your builder uses the docker-container driver (visible in docker buildx ls), then every build needs to query the registry.
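As a quick check (a sketch, assuming node:18 was already loaded into the engine's local image store), you can list your builders and force the build onto the default one:

```sh
# show each builder and the driver it uses
docker buildx ls

# run the build on the default builder (docker driver), which can
# use base images already present in the engine's image store
docker buildx build --builder default -t myapp .
```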

It is now possible to override the build context to tell buildx to use a different source for a specific image. For more on that, see the --build-context flag and Docker's blog post on the feature.
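For example (a sketch; mirror.example.com/node:18 is a placeholder for whatever image source you have available):

```sh
# redirect every node:18 reference in the Dockerfile to another image
docker buildx build \
  --build-context node:18=docker-image://mirror.example.com/node:18 \
  -t myapp .
```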

For an air-gapped environment, pulling the image into a local OCI layout would be ideal. Tools like crane, oras, skopeo, and regctl (disclaimer, I'm the author) can help with that.
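As a sketch of that (the node18 directory name and the digest are placeholders), you could copy the base image into an OCI layout on a connected machine, transfer the directory to the server, and point the build at it:

```sh
# on a machine with internet access: copy node:18 into an OCI layout
regctl image copy node:18 ocidir://node18:18

# on the server: build with node:18 resolved from the OCI layout
# (the digest can be read with: regctl image digest ocidir://node18:18)
docker buildx build \
  --build-context node:18=oci-layout://./node18@sha256:<digest> \
  -t myapp .
```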
