How to Connect Console to Python Interpreter Running in a Docker Container on a Remote Server?

 

Hi,

I have been struggling to find a way to set up a PyCharm project that points to a Python interpreter inside a Docker container running on a remote server. The picture below should make my question clear.

 

I have read through the "similar posts" on this forum and get the sense that it is possible, perhaps through docker-compose. However, I have not been able to find a clear guide on how to accomplish this specific setup.

 

Could anyone help me with this?

 

Thank you!

 

 

5 comments

After some days of exploration, my opinion is that it is not possible in PyCharm 2019.1 Professional to hook up a console/debugger to an interpreter running inside a Docker container on a remote machine.

Some parts of the overall setup are working, but there are a couple of fundamental problems that can only be solved by the PyCharm developers.

What is (partially) working

PyCharm is able to connect to the interpreter inside the Docker container if you

1. enable the Docker API endpoint on the remote machine, for example by adding

ExecStart=/usr/bin/dockerd -H fd:// -H tcp://localhost:2375 --containerd=/run/containerd/containerd.sock

to

/etc/systemd/system/docker.service.d/startup_options.conf

on the remote machine, and

2. "LocalForward" the API endpoint from the remote machine to the local machine via SSH

Host remote-server
    LocalForward 127.0.0.1:2375 localhost:2375

At that point PyCharm will be able to connect to the Docker server, inspect it, find all the images and containers, and understand which Python interpreters are available.
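
To sanity-check both steps, one can query the forwarded Docker API from the local machine. A minimal sketch, assuming a systemd-based remote host and the SSH configuration shown above (the drop-in file and the "remote-server" host alias are the ones from this post):

# On the remote machine: reload systemd and restart Docker so dockerd picks up the new -H options
sudo systemctl daemon-reload
sudo systemctl restart docker

# On the local machine: keep the tunnel open and query the Docker API through it
ssh -N remote-server &
curl http://localhost:2375/version   # should print the Docker version info as JSON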

Open problems

However, there are a number of problems that prevent PyCharm from opening a console (or starting the debugger) inside the remote container.

Unchangeable path for helper packages

In the dialog "Configure Remote Python Interpreter", PyCharm insists on setting the "PyCharm helpers path" to "/opt/.pycharm_helpers". That path cannot be changed. If "/opt" does not exist or is not writable, the helpers will not be uploaded to the Docker container.

To work around this problem, one can create that path in the Dockerfile and prepopulate it with a copy of the helpers, as sketched below.
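
A minimal Dockerfile sketch of that workaround, assuming the helpers have been copied from the local PyCharm installation into a pycharm_helpers/ directory next to the Dockerfile (the directory name and base image are placeholders, not anything PyCharm requires):

FROM python:3.7
# Pre-create the fixed path PyCharm expects and populate it with a copy of the helpers
RUN mkdir -p /opt/.pycharm_helpers
COPY pycharm_helpers/ /opt/.pycharm_helpers/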

Ports not published by Docker

To launch the console or the debugger, PyCharm runs the following command inside the container:

python -u /opt/.pycharm_helpers/pydev/pydevd.py --cmd-line --module \
--multiprocess --qt-support=auto --port 50699 --file foobar.py

However, PyCharm does not ask Docker to expose and publish the port (50699 in this case) on which the remote console/debugger listens. This means that PyCharm cannot connect to that port and the whole process quickly fails:

Couldn't connect to console process.
Process finished with exit code 137 (interrupted by signal 9: SIGKILL)

PyCharm should use the Docker API to request that the port be published and a proxy be set up.

In theory, this could be worked around by manually publishing ports via the `-p` option of docker run or the `ports:` section of docker-compose.yml (see the illustration below). In practice this does not work, because one cannot know in advance which port PyCharm will choose.
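
Purely for illustration, this is what such a manual publication would look like if the port were known in advance; 50699 is just the example port from the command above, PyCharm will almost certainly pick a different one on the next run, and my-image is a placeholder image name:

# docker run: publish the port the console/debugger would listen on
docker run -p 50699:50699 my-image

# equivalent fragment in docker-compose.yml (under the relevant service)
ports:
  - "50699:50699"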

Non-configurable port ranges

Each instance of the remote console/debugger is launched with a different port number. This makes it impossible to know beforehand which ports one should "LocalForward" with SSH.

If the range of possible ports could be configured, one could limit it to a dozen ports and manually set up all the needed SSH "LocalForward" configurations, as in the sketch below.
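
For example, if PyCharm could be told to stay within 50680-50689 (a made-up range; no such setting exists in 2019.1), the SSH configuration from above would simply grow by one LocalForward line per port:

Host remote-server
    LocalForward 127.0.0.1:2375 localhost:2375
    LocalForward 127.0.0.1:50680 localhost:50680
    LocalForward 127.0.0.1:50681 localhost:50681
    # ...and so on, one line for each port in the hypothetical range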


This was very helpful! I will attempt these suggestions, and report back if I have any additional learnings.

I've also created an issue on the JetBrains tracker. Maybe the developers will take notice?

 

Thank you!


Thank you, Wilbert, for creating an issue on the JetBrains tracker. What is the URL?


Hi, support responded to the issue with two links. The Docker solution appears very similar to your proposal. I am going to give the nvidia-docker solution a try (hopefully by tomorrow).

Regards!

