2017-02-15
TensorFlow FileWriter not writing to file

I'm training a simple TensorFlow model. The training itself works fine, but no logs are being written to /tmp/tensorflow_logs and I'm not sure why. Can anyone offer some insight?

# import MNIST 
from tensorflow.examples.tutorials.mnist import input_data 
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True) 

import tensorflow as tf 

# set parameters 
learning_rate = 0.01 
training_iteration = 30 
batch_size = 100 
display_step = 2 

# TF graph input 
x = tf.placeholder("float", [None, 784]) 
y = tf.placeholder("float", [None, 10]) 

# create a model 

# set model weights 
# 784 is the dimension of a flattened MNIST image 
W = tf.Variable(tf.zeros([784, 10])) 
b = tf.Variable(tf.zeros([10])) 

with tf.name_scope("Wx_b") as scope: 
    # construct linear model 
    model = tf.nn.softmax(tf.matmul(x, W) + b) #softmax 

# add summary ops to collect data 
w_h = tf.summary.histogram("weights", W) 
b_h = tf.summary.histogram("biases", b) 

with tf.name_scope("cost_function") as scope: 
    # minimize error using cross entropy 
    cost_function = -tf.reduce_sum(y*tf.log(model)) 
    # create a summary to monitor the cost function 
    tf.summary.scalar("cost_function", cost_function) 

with tf.name_scope("train") as scope: 
    # gradient descent 
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost_function) 

init = tf.global_variables_initializer() 

# merge all summaries into a single operator 
merged_summary_op = tf.summary.merge_all() 

# launch the graph 
with tf.Session() as sess: 
    sess.run(init) 

    # set the logs writer to the folder /tmp/tensorflow_logs 
    summary_writer = tf.summary.FileWriter('/tmp/tensorflow_logs', graph=sess.graph) 

    # training cycle
    for iteration in range(training_iteration):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples/batch_size)
        # loop over all batches
        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            # fit training using batch data
            sess.run(optimizer, feed_dict={x: batch_xs, y: batch_ys})
            # compute the average loss
            avg_cost += sess.run(cost_function, feed_dict={x: batch_xs, y: batch_ys})/total_batch
            # write logs for each iteration
            summary_str = sess.run(merged_summary_op, feed_dict={x: batch_xs, y: batch_ys})
            summary_writer.add_summary(summary_str, iteration*total_batch + i)
        # display logs per iteration step
        if iteration % display_step == 0:
            print("Iteration:", '%04d' % (iteration + 1), "cost=", "{:.9f}".format(avg_cost))

    print("Tuning completed!") 

    # test the model 
    predictions = tf.equal(tf.argmax(model, 1), tf.argmax(y, 1)) 
    # calculate accuracy 
    accuracy = tf.reduce_mean(tf.cast(predictions, "float")) 
    print("Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels})) 


print("Success!") 

Could you try calling summary_writer.close() at the end? – drpng


@drpng I changed that, added summary_writer.flush(), and changed the target folder from /tmp/... to tmp/... –


Are you on Windows? –

Answer


Thanks. The combination of changing the file path from /tmp/... to tmp/..., adding summary_writer.flush(), and calling summary_writer.close() got the logs written successfully.


Removing the slash '/' at the start of the path worked for me. Thanks! –
