Share Docker image via Container Registry – DRF API

I prepared an API with Django Rest Framework. I am using Docker to run the API, and everything works fine on my machine: I just run docker-compose up and can test the API with Swagger in my browser.

Now I want my friend to test it locally on his machine. I want to do this by sharing my Docker image through the Container Registry (a GitLab service) along with my docker-compose file.

So his scenario will be:

  1. Pull the image from the Container Registry: docker pull registry.[...]
  2. Run: docker-compose up

After that, he can test it. The main goal is to run this API without downloading the repository with the code – just using docker-compose and the Docker image. Later we want to run it on a VPS.

I have already tried this, but to no avail. Here are the steps I followed:

  1. docker login registry.gitlab.com
  2. docker build -t registry.gitlab.com/[...]:latest .
  3. docker push registry.gitlab.com/[...]:latest
  4. Remove all images and containers related to the project.
  5. Create a new directory and paste the docker-compose file there.
  6. docker pull registry.gitlab.com/[...]:latest
  7. docker-compose up
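For clarity: only docker build takes a trailing "." (the build context); docker push and docker pull take just the image reference. Written out, the sequence I intended (registry paths elided) is:

```shell
docker login registry.gitlab.com
docker build -t registry.gitlab.com/[...]:latest .
docker push registry.gitlab.com/[...]:latest

# on the other machine, in a fresh directory containing only docker-compose.yml:
docker pull registry.gitlab.com/[...]:latest
docker-compose up
```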

And then I’m getting this error:
python: can't open file '/app/manage.py': [Errno 2] No such file or directory

What can I do in this situation? Is it even possible to make this work? Maybe my Docker configuration is wrong. Below are my Dockerfile and docker-compose file.

Dockerfile:

LABEL maintainer="kryspy"

ENV PYTHONUNBUFFERED 1

COPY ./requirements.txt /tmp/requirements.txt
COPY ./requirements.dev.txt /tmp/requirements.dev.txt
COPY ./app /app

WORKDIR /app
EXPOSE 8080

ARG DEV=true
RUN python -m venv /py && \
    /py/bin/pip install --upgrade pip && \
    apk add --update --no-cache postgresql-client && \
    apk add --update --no-cache --virtual .tmp-build-deps \
        build-base postgresql-dev musl-dev libffi-dev && \
    /py/bin/pip install -r /tmp/requirements.txt && \
    if [ "$DEV" = "true" ]; \
        then /py/bin/pip install -r /tmp/requirements.dev.txt ; \
    fi && \
    rm -rf /tmp && \
    apk del .tmp-build-deps && \
    adduser \
        --disabled-password \
        --no-create-home \
        django-user

ENV PATH="/py/bin:$PATH"

USER django-user

docker-compose.yml:

version: '3.9'

services:
  api:
    container_name: wishlist_api
    image: registry.gitlab.com/[...]:latest
    ports:
      - "8080:8080"
    volumes:
      - ./app:/app
    command: >
      sh -c "python manage.py wait_for_db &&
             python manage.py makemigrations &&
             python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8080"
    environment:
      - DB_HOST=db
      - DB_NAME=wishlistDB
      - DB_USER=postgres
      - DB_PASS=Testowe123!
      - DB_PORT=5432
    depends_on:
      - db

  db:
    container_name: wishlist_db
    image: postgres:15-alpine
    restart: always
    ports:
      - "5432:5432"
    volumes:
      - db-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=wishlistDB
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=Testowe123!

volumes:
  db-data:

In your Compose file, you use volumes: to hide the image’s /app directory – that is, the entire installed application – and replace it with the contents of the ./app directory from the host system. This defeats the purpose of having an isolated image, and works directly against your goal of running the image without the local source code.
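A sketch of the api service that runs entirely from the image (same registry path, ports, and environment as in your file, with the volume and command overrides removed) could look like this:

```yaml
services:
  api:
    image: registry.gitlab.com/[...]:latest   # the image your friend pulls
    ports:
      - "8080:8080"
    environment:
      - DB_HOST=db
      - DB_NAME=wishlistDB
      - DB_USER=postgres
      - DB_PASS=Testowe123!
      - DB_PORT=5432
    depends_on:
      - db
    # no volumes: and no command: – the code and the start command
    # both come from the image itself
```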

Similarly, command: overrides the Dockerfile CMD, and you shouldn’t normally need it in the Compose file. You also do not need to set container_name: manually.
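Note that the Dockerfile shown here has no CMD at all, which is why the Compose file currently has to supply command:. One option (a sketch, reusing the wait_for_db management command and the port from your Compose file) is to bake the start command into the image; migration files are then best generated with makemigrations during development and committed with the code, rather than created at container start:

```dockerfile
# Default start command: wait for the database, apply migrations, run the server
CMD ["sh", "-c", "python manage.py wait_for_db && python manage.py migrate && python manage.py runserver 0.0.0.0:8080"]
```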

Make sure you also test your image locally without the volumes: or command: overrides, to confirm it works the way you expect. (I’ve seen other questions, particularly Python-related ones, where “it works on my system” – or doesn’t – because the volume mount injects different content than what is in the image.) You can use an ordinary Python virtual environment to develop your application even if it will eventually run in a container.
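For example (wishlist-api-test is just a placeholder tag), building and running the image without Compose quickly shows whether /app/manage.py is really inside it:

```shell
# Build from the project root; the trailing "." is the build context
docker build -t wishlist-api-test .

# List the file the error complains about – if this fails, the COPY in the
# Dockerfile (or the build context) is the problem, not the registry
docker run --rm wishlist-api-test ls /app/manage.py
```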
