TensorFlow crashes


Has anyone been able to get TensorFlow running in PyCharm on a Mac? My attempts crash Python as soon as I try to "import tensorflow". There is no traceback in PyCharm; the Python interpreter just exits with "Process finished with exit code 132 (interrupted by signal 4: SIGILL)". Here's my configuration:

OSX 10.11.6

Using CPU-version of tensorflow (not GPU version)

PyCharm Pro (latest)

Python 3.6.4

Where am I going wrong?

Thanks

Eric


2 comments

I have the same problem. Importing Keras results in:


Using TensorFlow backend.

Process finished with exit code 132 (interrupted by signal 4: SIGILL)


My configuration:

Ubuntu 16.04

PyCharm Community 2017.3.4

Python 3.5.2


I ran into the same problem and eventually found the fix described in this TensorFlow issue:

https://github.com/tensorflow/tensorflow/issues/17411 

I am encountering this issue as well with tensorflow-gpu 1.6.0 on Linux, using Python 3.6.4. I installed tensorflow with pip. Simply running this produces a SIGILL:

$ python3 -m tensorflow
zsh: illegal hardware instruction  python3 -m tensorflow

I get stack traces similar to what is mentioned in this ticket's description.

This seems to be occurring due to the use of AVX instructions in the latest Tensorflow packages uploaded to pip. Running python3 through GDB and disassembling the crashing function points to this instruction:

=> 0x00007fffb9689660 <+80>:    vmovdqu 0x10(%r13),%xmm1

This is an AVX instruction, which is not supported on older or lower-end CPUs that lack AVX. The tensorflow(-gpu) 1.5.0 pip packages were built without AVX instructions, so they run without problems on these CPUs.
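Before installing one of the AVX-compiled wheels, you can check whether your CPU advertises AVX at all. A minimal sketch, assuming Linux (it reads /proc/cpuinfo; the helper name `cpu_has_avx` is my own, not part of TensorFlow):

```python
def cpu_has_avx(cpuinfo_path="/proc/cpuinfo"):
    """Return True if the CPU flags in /proc/cpuinfo include 'avx' (Linux only)."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                # The "flags" line lists every instruction-set extension the CPU supports.
                if line.startswith("flags"):
                    return "avx" in line.split()
    except OSError:
        pass  # not Linux, or /proc is unavailable
    return False

if __name__ == "__main__":
    print("AVX supported:", cpu_has_avx())
```

If this prints False, an AVX-built TensorFlow wheel will die with SIGILL on import, exactly as described above.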

The solution would be to publish a build of tensorflow(-gpu) compiled without AVX instructions (or to build one locally). The provided installation instructions do not mention any specific CPU requirements or explain how to determine compatibility with the provided binaries.

In the meantime, reverting to tensorflow(-gpu) 1.5.0, as @NinemillaKA mentioned above, is an effective workaround.

