2017-01-30

I am installing TensorFlow Serving on Ubuntu, which requires building TensorFlow. I ran the ./configure command from the TensorFlow root directory. This is the output: an error occurred during the TensorFlow build on Ubuntu.

Please specify the location of python. [Default is /usr/bin/python]: 
Please specify optimization flags to use during compilation [Default is -march=native]:   
Do you wish to use jemalloc as the malloc implementation? [Y/n] y 
jemalloc enabled 
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N] y 
Google Cloud Platform support will be enabled for TensorFlow 
Do you wish to build TensorFlow with Hadoop File System support? [y/N] y 
Hadoop File System support will be enabled for TensorFlow 
Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N] y 
XLA JIT support will be enabled for TensorFlow 
Found possible Python library paths: 
    /usr/local/lib/python2.7/dist-packages 
    /usr/lib/python2.7/dist-packages 
Please input the desired Python library path to use. Default is [/usr/local/lib/python2.7/dist-packages] 

Using python library path: /usr/local/lib/python2.7/dist-packages 
Do you wish to build TensorFlow with OpenCL support? [y/N] y 
OpenCL support will be enabled for TensorFlow 
Do you wish to build TensorFlow with CUDA support? [y/N] y 
CUDA support will be enabled for TensorFlow 
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]: 
Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to use system default]: 
Please specify the location where CUDA toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: 
Please specify the Cudnn version you want to use. [Leave empty to use system default]: 
Please specify the location where cuDNN library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: 
Please specify a list of comma-separated Cuda compute capabilities you want to build with. 
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus. 
Please note that each additional compute capability significantly increases your build time and binary size. 
[Default is: "3.5,5.2"]: 
Please specify which C++ compiler should be used as the host C++ compiler. [Default is ]: 
Invalid C++ compiler path. cannot be found 
Please specify which C++ compiler should be used as the host C++ compiler. [Default is ]: /usr/bin/g++ 
Please specify which C compiler should be used as the host C compiler. [Default is ]: /usr/bin/gcc 
Please specify the location where ComputeCpp for SYCL 1.2 is installed. [Default is /usr/local/computecpp]: 
................................................................. 
INFO: Starting clean (this may take a while). Consider using --expunge_async if the clean takes more than several minutes. 
......... 
ERROR: package contains errors: tensorflow/stream_executor. 
ERROR: error loading package 'tensorflow/stream_executor': Encountered error while reading extension file 'cuda/build_defs.bzl': no such package '@local_config_cuda//cuda': Traceback (most recent call last): 
    File "/home/cortana/Libraries/serving/tensorflow/third_party/gpus/cuda_configure.bzl", line 813 
     _create_cuda_repository(repository_ctx) 
    File "/home/cortana/Libraries/serving/tensorflow/third_party/gpus/cuda_configure.bzl", line 727, in _create_cuda_repository 
     _get_cuda_config(repository_ctx) 
    File "/home/cortana/Libraries/serving/tensorflow/third_party/gpus/cuda_configure.bzl", line 584, in _get_cuda_config 
     _cudnn_version(repository_ctx, cudnn_install_base..., ...) 
    File "/home/cortana/Libraries/serving/tensorflow/third_party/gpus/cuda_configure.bzl", line 295, in _cudnn_version 
     _find_cuda_define(repository_ctx, cudnn_install_base..., ...) 
    File "/home/cortana/Libraries/serving/tensorflow/third_party/gpus/cuda_configure.bzl", line 270, in _find_cuda_define 
     auto_configure_fail("Cannot find cudnn.h at %s" % st...)) 
    File "/home/cortana/Libraries/serving/tensorflow/third_party/gpus/cuda_configure.bzl", line 93, in auto_configure_fail 
     fail(" 
%sAuto-Configuration Error:%s ...)) 

Auto-Configuration Error: Cannot find cudnn.h at /usr/lib/x86_64-linux-gnu/include/cudnn.h 
. 

There is no folder called /usr/lib/x86_64-linux-gnu/include on my system. I have the libcudnn.so file in /usr/include and cudnn.h at /usr/lib/x86_64-linux-gnu/cudnn.h. I don't know how the configure script constructs that path. On the same machine I successfully built Caffe, whose CMakeLists.txt finds the CUDA and cuDNN install paths without trouble, yet TensorFlow cannot find cuDNN. How do I fix this?
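Before re-running ./configure it can help to confirm exactly where the cuDNN header and library actually sit, so the prompts can be answered with real paths. A minimal sketch (`find_cudnn` is a hypothetical helper written for this question, not part of TensorFlow):

```shell
# find_cudnn ROOT: print every cudnn.h and libcudnn.so* found under ROOT.
# Useful for double-checking the layout before answering ./configure.
find_cudnn() {
  find "$1" \( -name 'cudnn.h' -o -name 'libcudnn.so*' \) 2>/dev/null
}

# Typical usage on the asker's system:
#   find_cudnn /usr
```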


This sounds like GitHub issue https://github.com/tensorflow/tensorflow/issues/6850. Can you retry at TensorFlow head and check whether the problem is fixed? If not, please follow up on that GitHub issue. –


Do you have an NVIDIA GPU in your system? If so, what do you get when you run nvidia-smi and nvcc -V? –

Answer


I assume cuDNN is actually installed.
First, find the CUDA install location using

which nvcc

In my case it returned

/usr/local/cuda-6.5/bin/nvcc

so cudnn.h (if cuDNN is installed) is located in /usr/local/cuda-6.5/include. During configure, TensorFlow asks:

Please specify the location where cuDNN library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:

Here you need to specify the location of cuDNN explicitly.
In my case that is /usr/local/cuda-6.5/include/.
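On Ubuntu, where the .deb packages split cuDNN between /usr/include (header) and /usr/lib/x86_64-linux-gnu (library), another common workaround is to mirror both pieces into the CUDA toolkit tree, so that the single path given to ./configure covers them. A hedged sketch (`link_cudnn` is a hypothetical helper; the source paths are assumptions matching the question, so adjust them to your machine and run as root):

```shell
# link_cudnn INC LIB CUDA_ROOT: symlink cudnn.h from INC and libcudnn.so
# from LIB into the CUDA toolkit tree at CUDA_ROOT, so the single cuDNN
# path prompt in ./configure can resolve both the header and the library.
link_cudnn() {
  inc="$1"; lib="$2"; cuda="$3"
  mkdir -p "$cuda/include" "$cuda/lib64"
  ln -sf "$inc/cudnn.h" "$cuda/include/cudnn.h"
  ln -sf "$lib/libcudnn.so" "$cuda/lib64/libcudnn.so"
}

# On the asker's machine this might be (as root):
#   link_cudnn /usr/include /usr/lib/x86_64-linux-gnu /usr/local/cuda
# then re-run ./configure and accept the default [/usr/local/cuda].
```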
