Debugging with a remote SSH interpreter with CUDA (PyTorch) still uses the local GPU/CPU

Hello, I followed this guide: https://www.jetbrains.com/help/pycharm/remote-debugging-with-product.html
in order to set up remote debugging on a server with a much stronger GPU than my notebook.

The interpreter seems to be working on the example.

With some additional workload in the code, I can see the CPU activity on the server increasing after I start the code, and decreasing back when I stop it.

But when I run my PyTorch code with CUDA, it uses the laptop's GPU and CPU, not the remote server's.
I confirmed this by looking at nvidia-smi on the server, which shows 0% activity and 0 MB used of server GPU memory, while at the same time my local laptop's GPU memory is being used and my local CPU is under heavy load.

Now I'm suspicious that the problem could be this part:

import os
import torch

CUDA_DEVICE_ID = 0
# These environment variables must be set BEFORE the first CUDA call,
# otherwise PyTorch has already initialized CUDA and will ignore them.
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = str(CUDA_DEVICE_ID)

use_cuda = torch.cuda.is_available()
device = torch.device("cuda:0" if use_cuda else "cpu")

This is the part of my code that ensures I only use one GPU out of the three available on the server.

Has anyone had the same issue with remote debugging of CUDA / PyTorch code?
Could anyone give me some advice on why this is happening and how to resolve it?
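Edit: in case it helps anyone debugging the same thing, here is a minimal check I would run from the same Run/Debug configuration to confirm which machine the script actually executes on, and which GPUs the interpreter can see. It only assumes the standard library plus (optionally) torch being installed in that interpreter:

```python
import socket

# First, confirm WHERE the script runs. If this prints the laptop's
# hostname rather than the server's, the configuration is still using
# the local interpreter.
print("Running on host:", socket.gethostname())

# Then, confirm WHICH GPUs PyTorch can see on that host.
try:
    import torch
    print("CUDA available:", torch.cuda.is_available())
    for i in range(torch.cuda.device_count()):
        print(f"cuda:{i} ->", torch.cuda.get_device_name(i))
except ImportError:
    print("torch is not installed in this interpreter")
```

If the hostname printed is the server's but nvidia-smi still shows no activity, the problem is in the device selection; if it is the laptop's, the wrong interpreter is being used.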


Hello, 

Could you please go to "Run/Debug configuration" and make sure the remote interpreter is chosen for the CUDA / PyTorch code? According to your screenshot, it seems like the local one is being used.

