I am trying to understand transfer learning with TensorFlow, but I am getting the following error:

InvalidArgumentError: You must feed a value for placeholder tensor 'ground_truth' with dtype double

(10, 2048) 
<type 'numpy.ndarray'> 
<type 'numpy.ndarray'> 
(10, 5) 
float64 
[ 0. 0. 0. 1. 0.] 

Above is the output of the print statements inside my training loop. This is my code:

def add_final_training_ops(graph, class_count, final_tensor_name, 
          ground_truth_tensor_name): 
    """Adds a new softmax and fully-connected layer for training. 
    We need to retrain the top layer to identify our new classes, so this function 
    adds the right operations to the graph, along with some variables to hold the 
    weights, and then sets up all the gradients for the backward pass. 
    The set up for the softmax and fully-connected layers is based on: 
    https://tensorflow.org/versions/master/tutorials/mnist/beginners/index.html 
    Args: 
     graph: Container for the existing model's Graph. 
     class_count: Integer of how many categories of things we're trying to 
     recognize. 
     final_tensor_name: Name string for the new final node that produces results. 
     ground_truth_tensor_name: Name string of the node we feed ground truth data 
     into. 
    Returns: 
     The train_step and cross_entropy_mean ops. 
    """ 
    bottleneck_tensor1 = graph.get_tensor_by_name(ensure_name_has_port(
     BOTTLENECK_TENSOR_NAME)) 
    bottleneck_tensor = tf.placeholder_with_default(bottleneck_tensor1, shape=[None, 2048]) 
    layer_weights = tf.Variable(
     tf.truncated_normal([BOTTLENECK_TENSOR_SIZE, class_count], stddev=0.001), 
     name='final_weights') 
    layer_biases = tf.Variable(tf.zeros([class_count]), name='final_biases') 
    logits = tf.matmul(bottleneck_tensor, layer_weights, 
         name='final_matmul') + layer_biases 
    tf.nn.softmax(logits, name=final_tensor_name) 
    ground_truth_placeholder = tf.placeholder(tf.float64, 
               [None, class_count], 
               name=ground_truth_tensor_name) 
    cross_entropy = tf.nn.softmax_cross_entropy_with_logits(
     logits=logits, labels=ground_truth_placeholder) 
    cross_entropy_mean = tf.reduce_mean(cross_entropy) 
    train_step = tf.train.GradientDescentOptimizer(FLAGS.learning_rate).minimize(
     cross_entropy_mean) 
    return train_step, cross_entropy_mean 

def do_train(sess, X_input, Y_input, X_validation, Y_validation):
    ground_truth_tensor_name = 'ground_truth'
    mini_batch_size = 10
    n_train = X_input.shape[0]

    graph = create_graph()

    train_step, cross_entropy = add_final_training_ops(
        graph, len(classes), FLAGS.final_tensor_name,
        ground_truth_tensor_name)

    init = tf.initialize_all_variables()
    sess.run(init)

    evaluation_step = add_evaluation_step(graph, FLAGS.final_tensor_name, ground_truth_tensor_name)

    # Get some layers we'll need to access during training.
    bottleneck_tensor1 = graph.get_tensor_by_name(ensure_name_has_port(BOTTLENECK_TENSOR_NAME))
    bottleneck_tensor = tf.placeholder_with_default(bottleneck_tensor1, shape=[None, 2048])
    ground_truth_tensor1 = graph.get_tensor_by_name(ensure_name_has_port(ground_truth_tensor_name))
    ground_truth_tensor = tf.placeholder_with_default(ground_truth_tensor1, shape=[None, len(classes)])

    i = 0
    epocs = 1
    for epoch in range(epocs):
        shuffledRange = np.random.permutation(n_train)
        y_one_hot_train = encode_one_hot(len(classes), Y_input)
        y_one_hot_validation = encode_one_hot(len(classes), Y_validation)
        shuffledX = X_input[shuffledRange, :]
        shuffledY = y_one_hot_train[shuffledRange]
        for Xi, Yi in iterate_mini_batches(shuffledX, shuffledY, mini_batch_size):
            print Xi.shape
            print type(Xi)
            print type(Yi)
            print Yi.shape
            print Yi.dtype
            print Yi[0]
            sess.run(train_step,
                     feed_dict={bottleneck_tensor: Xi,
                                ground_truth_tensor: Yi})

I am getting the error at this line:

sess.run(train_step,feed_dict={bottleneck_tensor: Xi,ground_truth_tensor: Yi}) 

Can someone tell me why I am getting this error?

Answer

You created a placeholder in add_final_training_ops that is never fed. You may think the ground_truth_tensor you build in do_train is the same placeholder, but it is not: it is a new graph node, even though its default value comes from the original one.
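
Here is a small self-contained illustration of that point (the names below are made up for the example, not taken from your script): the tensor returned by tf.placeholder_with_default is a second graph node, so feeding it does not feed the original 'ground_truth' placeholder that the loss actually depends on.

import numpy as np
import tensorflow as tf

# A float64 placeholder named 'ground_truth', wrapped the same way do_train wraps it.
ground_truth = tf.placeholder(tf.float64, [None, 5], name='ground_truth')
wrapper = tf.placeholder_with_default(ground_truth, shape=[None, 5])

# This op depends on the ORIGINAL placeholder, just like your train_step does.
loss = tf.reduce_mean(ground_truth)

with tf.Session() as sess:
    y = np.zeros((10, 5))
    # Feeding only the wrapper leaves 'ground_truth' unfed, so the next line
    # would raise: InvalidArgumentError: You must feed a value for placeholder
    # tensor 'ground_truth' with dtype double
    # sess.run(loss, feed_dict={wrapper: y})

    # Feeding the original placeholder works.
    sess.run(loss, feed_dict={ground_truth: y})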

The easiest fix is to return the placeholder from add_final_training_ops and feed that placeholder instead.
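
A rough sketch of that change, reusing your own helpers (ensure_name_has_port, BOTTLENECK_TENSOR_NAME, BOTTLENECK_TENSOR_SIZE, FLAGS, classes) unchanged; the body is your code, and the only real changes are the extra return values and the feed_dict in do_train:

def add_final_training_ops(graph, class_count, final_tensor_name,
                           ground_truth_tensor_name):
    """Same as in the question, but also returns the tensors to feed."""
    bottleneck_tensor1 = graph.get_tensor_by_name(ensure_name_has_port(
        BOTTLENECK_TENSOR_NAME))
    bottleneck_tensor = tf.placeholder_with_default(bottleneck_tensor1,
                                                    shape=[None, 2048])
    layer_weights = tf.Variable(
        tf.truncated_normal([BOTTLENECK_TENSOR_SIZE, class_count], stddev=0.001),
        name='final_weights')
    layer_biases = tf.Variable(tf.zeros([class_count]), name='final_biases')
    logits = tf.matmul(bottleneck_tensor, layer_weights,
                       name='final_matmul') + layer_biases
    tf.nn.softmax(logits, name=final_tensor_name)
    ground_truth_placeholder = tf.placeholder(tf.float64,
                                              [None, class_count],
                                              name=ground_truth_tensor_name)
    cross_entropy = tf.nn.softmax_cross_entropy_with_logits(
        logits=logits, labels=ground_truth_placeholder)
    cross_entropy_mean = tf.reduce_mean(cross_entropy)
    train_step = tf.train.GradientDescentOptimizer(FLAGS.learning_rate).minimize(
        cross_entropy_mean)
    # Hand back the exact tensors that train_step depends on, so the caller
    # feeds these objects instead of new wrappers built around them.
    return (train_step, cross_entropy_mean,
            bottleneck_tensor, ground_truth_placeholder)

Then, in do_train, drop the get_tensor_by_name/placeholder_with_default block and feed the returned tensors directly:

train_step, cross_entropy, bottleneck_tensor, ground_truth_tensor = add_final_training_ops(
    graph, len(classes), FLAGS.final_tensor_name, ground_truth_tensor_name)

sess.run(train_step,
         feed_dict={bottleneck_tensor: Xi,
                    ground_truth_tensor: Yi})

Returning the bottleneck wrapper as well means you feed that same node too, instead of building yet another placeholder_with_default around it in do_train.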

Then how should bottleneck_tensor be initialized? – neel

Thank you. – neel
