I am running TensorFlow 1.3.0 on Ubuntu 16.04. My main goal is to visualize the graph in TensorBoard. The first time I run the code, everything works perfectly, but when I ran it again after about two hours I got this error:

TensorFlow error: InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape [?,784]
InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-26-149c9b9d8878> in <module>()
     11             sess.run(optimizer, feed_dict={x: batch_xs, y: batch_ys})
     12             avg_cost += sess.run(cost_function, feed_dict={x: batch_xs, y: batch_ys})/total_batch
---> 13             summary_str = sess.run(merged_summary_op, feed_dict={x: batch_xs, y: batch_ys})
     14             summary_writer.add_summary(summary_str, iteration*total_batch + i)
     15         if iteration % display_step == 0:

/home/niraj/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in run(self, fetches, feed_dict, options, run_metadata)
    893     try:
    894       result = self._run(None, fetches, feed_dict, options_ptr,
--> 895                          run_metadata_ptr)
    896       if run_metadata:
    897         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
/home/niraj/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in _run(self, handle, fetches, feed_dict, options, run_metadata)
   1122     if final_fetches or final_targets or (handle and feed_dict_tensor):
   1123       results = self._do_run(handle, final_targets, final_fetches,
-> 1124                              feed_dict_tensor, options, run_metadata)
   1125     else:
   1126       results = []

/home/niraj/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
   1319     if handle is None:
   1320       return self._do_call(_run_fn, self._session, feeds, fetches, targets,
-> 1321                            options, run_metadata)
   1322     else:
   1323       return self._do_call(_prun_fn, self._session, handle, feeds, fetches)

/home/niraj/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in _do_call(self, fn, *args)
   1338     except KeyError:
   1339       pass
-> 1340     raise type(e)(node_def, op, message)
   1341
   1342   def _extend_graph(self):
Here is the code:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("/home/niraj/Documents/artificial intelligence/projects/tensorboard", one_hot=True)

learning_rate = 0.01
training_iteration = 200
batch_size = 100
display_step = 2

# TF graph input
x = tf.placeholder('float32', [None, 784])  # mnist data image of shape 28*28=784
y = tf.placeholder('float32', [None, 10])   # 0-9 digits recognition => 10 classes

W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

with tf.name_scope("Wx_b") as scope:
    model = tf.nn.softmax(tf.matmul(x, W) + b)  # Softmax

w_h = tf.summary.histogram("weights", W)
b_h = tf.summary.histogram("biases", b)

with tf.name_scope("cost_function") as scope:
    cost_function = -tf.reduce_sum(y*tf.log(model))
    tf.summary.scalar("cost_function", cost_function)

with tf.name_scope("train") as scope:
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost_function)

init = tf.global_variables_initializer()
merged_summary_op = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(init)
    summary_writer = tf.summary.FileWriter('/home/niraj/Documents/artificial intelligence/projects/tensorboard', graph=sess.graph)
    for iteration in range(training_iteration):
        avg_cost = 0
        total_batch = int(mnist.train.num_examples/batch_size)
        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(optimizer, feed_dict={x: batch_xs, y: batch_ys})
            avg_cost += sess.run(cost_function, feed_dict={x: batch_xs, y: batch_ys})/total_batch
            summary_str = sess.run(merged_summary_op, feed_dict={x: batch_xs, y: batch_ys})
            summary_writer.add_summary(summary_str, iteration*total_batch + i)
        if iteration % display_step == 0:
            print "Iteration:", '%04d' % (iteration + 1), "cost=", "{:.9f}".format(avg_cost)
    print "Tuning completed!"
    predictions = tf.equal(tf.argmax(model, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(predictions, "float"))
    print "Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels})
As a reminder, the error is:
InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape [?,784]
[[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[?,784], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
This code works perfectly the first time I run it; on the second run it throws the error above. However, if I close the notebook and the Jupyter terminal, reopen them, and run again, the first run completes without error and the second run fails with the same error.
Yes, tf.reset_default_graph() works. I was reading the book 'Hands-On Machine Learning with Scikit-Learn and TensorFlow' the other day, and it explicitly covers this: "In Jupyter (or in a Python shell), it is common to run the same commands more than once. As a result, you may end up with a default graph containing many duplicate nodes. One solution is to restart the Jupyter kernel (or the Python shell), but a more convenient solution is to just reset the default graph by running tf.reset_default_graph()." – clarky
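A minimal sketch of that fix, assuming the TF 1.x API from the question (the placeholder names follow the question's code; the rest of the model is elided). The key point is that the reset happens at the top of the cell, before any graph nodes are created, so a re-run cannot leave stale placeholders behind for `tf.summary.merge_all()` to pick up:

```python
import tensorflow as tf

# Clear the default graph before rebuilding it. Without this, each re-run
# of the cell adds a second copy of every node, and merge_all() collects
# summaries attached to the old, unfed placeholders -- producing the
# "You must feed a value for placeholder tensor 'Placeholder'" error.
tf.reset_default_graph()

# Rebuild the graph from scratch after the reset.
x = tf.placeholder('float32', [None, 784])
y = tf.placeholder('float32', [None, 10])
# ... rest of the model, summaries, and training loop as in the question ...
merged_summary_op = tf.summary.merge_all()
```

Note that `tf.reset_default_graph()` invalidates any existing `tf.Session` built on the old graph, so the session must also be created after the reset.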