I know this setup is problematic in multiple ways, most notably the mounting of volumes from the remote host. But while experimenting with it, I hit what looks like incorrect behavior in the Python runner before even reaching those issues, so I want to confirm whether this is really an unsupported setup. Basically, I configured the Docker daemon on a remote host to listen on a TCP port as described here, and then set up an SSH port tunnel so that the daemon is reachable from my laptop.
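To make the setup concrete, this is roughly what I did. The port numbers (2375 on the remote host, 23750 locally) and the host name are placeholders for my actual values:

```shell
# On the remote host: have dockerd listen on TCP in addition to the
# default Unix socket, via /etc/docker/daemon.json:
#
#   { "hosts": ["unix:///var/run/docker.sock", "tcp://127.0.0.1:2375"] }
#
# Binding to 127.0.0.1 keeps the unauthenticated TCP port off the network;
# it is only reachable through the SSH tunnel.

# On the laptop: forward a local port to the remote daemon's TCP port.
# -N: no remote command, just the tunnel.
ssh -N -L 23750:localhost:2375 user@remote-host
```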
By specifying a Docker service of the form "tcp://localhost:####" (the number being the local port of the tunnel), I'm able to add a Python interpreter based on an image on the remote host. I'm also able to launch containers in the Docker tool window from images in the remote daemon's image cache. But if I select the remote image as the interpreter in a Python run configuration, PyCharm still tries to connect to the local daemon; I verified this by stopping the local service and observing the refused connection.
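One way to double-check that the tunnel itself is healthy, independent of PyCharm, is to point the docker CLI at it from the shell (again, 23750 is a placeholder for my local tunnel port):

```shell
# Ask the tunneled daemon to identify itself; the output should show the
# remote host's details, not the laptop's.
DOCKER_HOST=tcp://localhost:23750 docker info

# Equivalent one-off form; should list the remote daemon's image cache.
docker -H tcp://localhost:23750 images
```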
I notice that the "Show All…" Python interpreters view displays the "path" of each docker-image-based interpreter as a URL with a "docker:" scheme. Could this simpler form of reference be bypassing the Docker service configuration associated with the remote interpreter?
The desired end state here is to operate the debugger from the laptop against Python code running in a container on a remote host. Our software has memory requirements beyond what a laptop can accommodate, and our runtime environments are all container-based, so we want a consistent runtime context.