2017-09-10 3 views

I developed a Keras model and tried to fit my data to it. When fitting the data I get the following Keras error, so I tried printing the data type of each array:

TypeError: Error when checking model input: data should be a Numpy array, or list/dict of Numpy arrays. 

What is causing this? The error occurs at model.fit. Here is my code:

def train(run_name, start_epoch, stop_epoch, img_w): 
    # Input Parameters 
    img_h = 64 
    words_per_epoch = 300 
    val_split = 0.2 
    val_words = int(words_per_epoch * (val_split)) 

    # Network parameters 
    conv_filters = 16 
    kernel_size = (3, 3) 
    pool_size = 2 
    time_dense_size = 32 
    rnn_size = 256 
    minibatch_size = 32 

    if K.image_data_format() == 'channels_first': 
        input_shape = (1, img_w, img_h) 
    else: 
        input_shape = (img_w, img_h, 1) 


    act = 'relu' 
    input_data = Input(name='the_input', shape=input_shape, dtype='float32') 
    inner = Conv2D(conv_filters, kernel_size, padding='same', 
        activation=act, kernel_initializer='he_normal', 
        name='conv1')(input_data) 
    inner = MaxPooling2D(pool_size=(pool_size, pool_size), name='max1')(inner) 
    inner = Conv2D(conv_filters, kernel_size, padding='same', 
        activation=act, kernel_initializer='he_normal', 
        name='conv2')(inner) 
    inner = MaxPooling2D(pool_size=(pool_size, pool_size), name='max2')(inner) 

    conv_to_rnn_dims = (img_w // (pool_size ** 2), (img_h // (pool_size ** 2)) * conv_filters) 
    inner = Reshape(target_shape=conv_to_rnn_dims, name='reshape')(inner) 

    # cuts down input size going into RNN: 
    inner = Dense(time_dense_size, activation=act, name='dense1')(inner) 

    # Two layers of bidirectional GRUs 
    # GRU seems to work as well, if not better than LSTM: 
    gru_1 = GRU(rnn_size, return_sequences=True, kernel_initializer='he_normal', name='gru1')(inner) 
    gru_1b = GRU(rnn_size, return_sequences=True, go_backwards=True, kernel_initializer='he_normal', name='gru1_b')(inner) 
    gru1_merged = add([gru_1, gru_1b]) 
    gru_2 = GRU(rnn_size, return_sequences=True, kernel_initializer='he_normal', name='gru2')(gru1_merged) 
    gru_2b = GRU(rnn_size, return_sequences=True, go_backwards=True, kernel_initializer='he_normal', name='gru2_b')(gru1_merged) 

    # transforms RNN output to character activations: 
    # print("Output Size",img_gen.get_output_size()) 

    inner = Dense(47, kernel_initializer='he_normal', 
        name='dense2')(concatenate([gru_2, gru_2b])) 
    y_pred = Activation('softmax', name='softmax')(inner) 
    # Model(inputs=input_data, outputs=y_pred).summary() 

    labels = Input(name='the_labels', shape=[10], dtype='float32') 
    input_length = Input(name='input_length', shape=[1], dtype='int64') 
    label_length = Input(name='label_length', shape=[1], dtype='int64') 
    # Keras doesn't currently support loss funcs with extra parameters 
    # so CTC loss is implemented in a lambda layer 
    loss_out = Lambda(ctc_lambda_func, output_shape=(1,), name='ctc')([y_pred, labels, input_length, label_length]) 

    sgd = SGD(lr=0.02, decay=1e-6, momentum=0.9, nesterov=True, clipnorm=5)  # clipnorm seems to speed up convergence

    model = Model(inputs=[input_data, labels, input_length, label_length], outputs=loss_out) 

    # the loss calc occurs elsewhere, so use a dummy lambda func for the loss 
    model.compile(loss={'ctc': lambda y_true, y_pred: y_pred}, optimizer=sgd) 

    test_func = K.function([input_data], [y_pred]) 

    # viz_cb = VizCallback(run_name, test_func, img_gen.next_val()) 
    (X_train, y_train, train_input_length, train_labels_length), (X_test, y_test, test_input_length, test_labels_length) = dataset_load('./OCR_BanglaData.pkl.gz') 
    print(y_train[0]) 

    X_train = X_train.reshape(X_train.shape[0], 128,64,1) 
    X_test = X_test.reshape(X_test.shape[0], 128,64,1) 
    X_train = X_train.astype('float32') 
    X_test = X_test.astype('float32') 

    model.fit((np.array(X_train), np.array(y_train),np.array(train_input_length), np.array(train_labels_length)), batch_size=32, epochs=120, verbose=1,validation_data=[np.array(X_test), np.array(y_test),np.array(test_input_length), np.array(test_labels_length)]) 

Printing the type of each array gives

<type 'numpy.ndarray'> 

but I still get this error. Is there a particular reason? I am using Keras with the TensorFlow backend.


Can you post the code for your model? – DJK


I have updated the question; the full code has been added. – Codehead


It should be model = Model(inputs=input_data, outputs=loss_out) – DJK

Answer


Each of the four inputs in `model = Model(inputs=[input_data, labels, input_length, label_length], outputs=loss_out)` must be a Numpy array, and the fit method wants them in a list, not a tuple:

model.fit([X_train, y_train,train_input_length, train_labels_length],...) 

You are also missing the y argument of the fit method; it has to match what you defined when you created the model (loss_out).
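Putting both fixes together, the corrected call can be sketched as below. This is a minimal, standalone sketch: the array names, shapes, and sequence lengths are assumptions taken from the question's code (128x64 grayscale images, 10-character labels), and the actual Keras model is not built here.

```python
import numpy as np

# Illustrative sample count and arrays matching the question's shapes.
n = 8
X_train = np.zeros((n, 128, 64, 1), dtype='float32')
y_train = np.zeros((n, 10), dtype='float32')             # feeds the 'the_labels' input
train_input_length = np.full((n, 1), 30, dtype='int64')  # assumed timestep count
train_labels_length = np.full((n, 1), 10, dtype='int64')

# x must be a *list* of the four arrays -- passing a tuple raises the
# "data should be a Numpy array, or list/dict of Numpy arrays" TypeError.
x = [X_train, y_train, train_input_length, train_labels_length]

# y is a dummy array: the real CTC loss is computed inside the 'ctc' Lambda
# layer, and the compiled loss just passes y_pred through.
dummy_y = np.zeros((n,), dtype='float32')

# With the model from the question, the call would then be:
# model.fit(x, dummy_y, batch_size=32, epochs=120,
#           validation_data=([X_test, y_test, test_input_length,
#                             test_labels_length], dummy_y_val))
```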
