InternalError: cudaGetDevice() failed. Status: cudaGetErrorString symbol not found

cuda, gpu, python

I run this code:

tf.test.is_gpu_available( cuda_only=False, min_cuda_compute_capability=None )

I get the following error:

2019-10-25 18:25:20.855191: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2019-10-25 18:25:20.879831: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2019-10-25 18:25:21.461924: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: name: GeForce MX130 major: 5 minor: 0 memoryClockRate(GHz): 1.189 pciBusID: 0000:01:00.0
2019-10-25 18:25:21.470775: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2019-10-25 18:25:21.503654: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
---------------------------------------------------------------------------
InternalError                             Traceback (most recent call last)
in
----> 1 tf.test.is_gpu_available( cuda_only=False, min_cuda_compute_capability=None )

~\Anaconda3\envs\deep_env\lib\site-packages\tensorflow_core\python\framework\test_util.py in is_gpu_available(cuda_only, min_cuda_compute_capability)
   1430
   1431   try:
-> 1432     for local_device in device_lib.list_local_devices():
   1433       if local_device.device_type == "GPU":
   1434         if (min_cuda_compute_capability is None or

~\Anaconda3\envs\deep_env\lib\site-packages\tensorflow_core\python\client\device_lib.py in list_local_devices(session_config)
     39   return [
     40       _convert(s)
---> 41       for s in pywrap_tensorflow.list_devices(session_config=session_config)
     42   ]

~\Anaconda3\envs\deep_env\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py in list_devices(session_config)
   2247     return ListDevicesWithSessionConfig(session_config.SerializeToString())
   2248   else:
-> 2249     return ListDevices()
   2250
   2251

InternalError: cudaGetDevice() failed. Status: cudaGetErrorString symbol not found

This happens after creating the following conda environment:

conda create -n Deep_learning_env python=3.6

pip install -U numpy matplotlib pandas ipython
git clone https://github.com/scipy/scipy.git scipy
pip install https://cntk.ai/PythonWheel/CPU-Only/cntk-2.7.post1-cp36-cp36m-win_amd64.whl
pip install https://cntk.ai/PythonWheel/GPU/cntk_gpu-2.7.post1-cp36-cp36m-win_amd64.whl
pip install tensorflow
pip install tensorflow-gpu
pip install gensim
pip install keras
pip install --upgrade --no-deps cntk
pip install --upgrade --no-deps cntk-gpu

conda install theano pygpu
conda install -c peterjc123 pytorch
conda install -c anaconda cudatoolkit
conda install -c anaconda cudnn

conda list cudnn
# Name   Version  Build      Channel
cudnn    7.6.0    cuda10.1_0    anaconda
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:12:52_Pacific_Daylight_Time_2019
Cuda compilation tools, release 10.1, V10.1.243
python --version
Python 3.6.7

which nvcc
/c/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1/bin/nvcc

GPU: NVIDIA GeForce MX130

More info:
cntk 2.7
cntk-gpu 2.7
tensorflow 2.0.0
tensorflow-estimator 2.0.1
tensorflow-gpu 2.0.0
ipython 7.8.0

Also, when I import tensorflow I get the following warning:

tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_100.dll'; dlerror: cudart64_100.dll not found
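
Checking which CUDA runtime DLLs are actually installed under that toolkit path (run from Git Bash; the wildcard below is just a sketch assuming the default install layout):

ls "/c/Program Files/NVIDIA GPU Computing Toolkit/CUDA/"*/bin/cudart64_*.dll
# should list cudart64_101.dll only, since only CUDA 10.1 is installed --
# there is no cudart64_100.dll for the TensorFlow 2.0.0 wheel to load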

Best Answer

You should check out the relevant issue on the TensorFlow GitHub repository. Currently CUDA 10.1 is not compatible with this TF release: the TF 2.0.0 wheels are built against CUDA 10.0, which is why cudart64_100.dll cannot be found.
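
A minimal sketch of a fix along those lines, assuming you stay on tensorflow-gpu 2.0.0 and give the environment the CUDA 10.0 runtime that wheel was built against (the pinned versions below are an assumption, not taken from the original post):

conda activate deep_env                          # env name from the traceback path
conda install -c anaconda cudatoolkit=10.0 cudnn=7.6
python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"

Alternatively, keep CUDA 10.1 system-wide and move to a TensorFlow build that targets it once one is available; either way, the CUDA runtime version visible to the environment has to match the one the installed TF wheel expects.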
