Facing problems using a remote interpreter running in a tensorflow-gpu Docker container


I'm trying to run/debug the following Python script inside a TensorFlow Docker container (with GPU enabled):

import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()

The Docker image is tensorflow/tensorflow:latest-gpu-py3, and when I run a container based on it, I can run the script above without any errors (again, with GPU enabled). I then tried to configure PyCharm to connect to the Docker container and run/debug following these steps:


Everything worked fine, except that when I run the script from PyCharm, I get the following error in the console:

ImportError: libcuda.so.1: cannot open shared object file: No such file or directory

Again, I have no problem running the script manually from within a container created from the same Docker image.
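To narrow down where the two environments differ, a small Python probe can check whether the interpreter that PyCharm launches can actually load the NVIDIA driver library named in the error (libcuda.so.1 comes from the traceback above; the helper function here is just for illustration, not part of any API):

import ctypes

def has_libcuda():
    """Return True if the NVIDIA driver library libcuda.so.1 can be loaded."""
    try:
        ctypes.CDLL("libcuda.so.1")
        return True
    except OSError:
        return False

# Run this both in the manually started container and from PyCharm's
# remote interpreter. If it prints False only under PyCharm, the container
# PyCharm starts presumably lacks the NVIDIA runtime / driver libraries.
print(has_libcuda())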


I have the same issue. Sorry, but I do not see how the references help?

