Understanding Stealth Volume Mapping of /opt/project


Whenever I create a new Python project using docker-compose, I expect to manually mount my project in the Python app container.

As such my docker-compose.yml will usually have an entry like this for my app container.

volumes:
- .:/code
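
For context, the full service definition is roughly along these lines (the service name, port and command here are just illustrative placeholders, not my actual file):

services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    working_dir: /code
    ports:
      - "8000:8000"
    volumes:
      - .:/code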

When I examine the container volume assignments, I will see the defined /code entry, but also a stealth mapping of my project directory into /opt/project.

What is the purpose of this automatic volume? I just hate auto-magical happenings in my development environment without fully understanding what's going on.

 

 


PyCharm bypasses the command entry for my app container and simply executes

python -u /opt/project/manage.py runserver 0.0.0.0:8000

 

When using the debugger, it adds the call to pydevd, but still executes manage.py from /opt/project.

python -u /opt/.pycharm_helpers/pydev/pydevd.py --multiprocess --qt-support=auto --port 55741 --file /opt/project/manage.py runserver 0.0.0.0:8000

 

So PyCharm ignores the execution-related settings in Dockerfile and docker-compose.yml. Neither WORKDIR nor command: is respected.

It creates a project mount in the container which includes a LOT of files that should never exist inside a container (.git directory, Dockerfile, docker-compose.yml, database files, etc.).
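
For what it's worth, the mechanism appears to be a compose override file that PyCharm generates in its temp directory (the docker-compose.override.*.yml mentioned further down in this thread) and passes to docker-compose as an additional -f file. The exact contents are undocumented and vary by version, but conceptually it amounts to something like this (service name and host path are illustrative):

services:
  web:
    # bind-mounts the local project root over /opt/project inside the container
    volumes:
      - /path/to/my/project:/opt/project
    # replaces whatever command:/CMD the compose file and Dockerfile define
    command: python -u /opt/project/manage.py runserver 0.0.0.0:8000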

This seems to be a sledgehammer approach to launching the Django app. 

Is this /opt/project mount strictly necessary, or can PyCharm honour the settings in docker-compose.yml so that execution from PyCharm is identical to execution from the CLI? Obviously the debugger needs to add the pydevd call, but it should be possible to parse the command: statement from the compose file and modify it accordingly.

I'd really like to see my container startup work the way it's defined. 


How would this work if I have a remote docker interpreter and want to run/debug a script? The script sits in my (local) project and somehow needs to be passed to the remote docker daemon running my container. Does this happen automagically via the docker-compose.override.840.yml and /opt/project? Or do I have to add some path mappings in my PyCharm docker configuration? What are those for anyway, and how are they used?

 

The whole thing is somewhat underdocumented.


@Things

Just to clarify: by remote docker interpreter, do you mean one on a remote host, or one that resides on the same machine as PyCharm?

If local, have you checked https://www.jetbrains.com/help/pycharm/using-docker-compose-as-a-remote-interpreter.html?


I am experimenting with both. When I use the docker daemon running on the same machine as PyCharm ("Docker for Mac") it usually just works. When I try to use a docker daemon on a different machine (TCP socket) it is a bit more difficult. I thought I had it running a few days ago, but now I cannot reproduce it. I keep getting an error referring to "/opt/project/src/hello.py" when I try to run the file <project_root>/src/hello.py.

 

So now I am trying to understand how the docker container running on my remote (TCP) host gets to see src/hello.py. It looks like PyCharm2019.1/tmp/docker-compose.override.840.yml provides that access, but I do not understand how my local file src/hello.py gets transferred to the remote docker container. Is this done through the docker protocol? And what part do the Path Mappings play (if any) that I can define in the PyCharm Docker Config dialog?

 


Using a Docker-based interpreter on a remote machine isn't supported at the moment: https://youtrack.jetbrains.com/issue/PY-33489. Please vote for the ticket and follow it for updates.

In the description of the issue, you will find a workaround, but it doesn't work for docker-compose.
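
The underlying reason, as far as I can tell, is that the generated override exposes the project with a plain bind mount, and bind-mount source paths are resolved on the machine where the Docker daemon runs (the host path below is illustrative):

services:
  web:
    volumes:
      - /Users/me/myproject:/opt/project   # source path is resolved on the daemon's host
# With a local daemon ("Docker for Mac") that path exists on the same machine, so it just works.
# With a remote daemon (TCP socket), the same path is looked up on the remote host, where the
# project does not exist; compose bind mounts never copy files over the Docker protocol.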


Ok, thanks. It is much clearer now how this is supposed to work.


No, I am not getting it. There are several places to define path mappings:

  1. In my own docker-compose (maps from docker host to docker container)
  2. In PyCharm settings under docker configuration
  3. In PyCharm settings under interpreter configuration
  4. In PyCharm under run configuration

What is the meaning of all these (except 1)? None of them seems to be used together with a remote docker-compose interpreter.
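
My current understanding, which may well be off, is that the interpreter-level mappings (3) simply record the correspondence that mount (1), or PyCharm's own /opt/project mount, establishes, so the IDE can translate local paths into container paths and back:

# compose-level mount (1): what the container actually sees
volumes:
  - .:/code            # <project_root> on the docker host -> /code in the container
# interpreter-level path mapping (3), conceptually:
#   <project_root>  <->  /code   (or /opt/project when PyCharm adds its own mount)
# PyCharm uses the mapping to rewrite local paths such as <project_root>/src/hello.py
# into container paths such as /code/src/hello.py when it builds the run command,
# and to map tracebacks and breakpoints back to local files.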


Anyway, I got it to work with a plain Dockerfile. A lot more configuration than with docker-compose, but it works.

