I get the following error when I try to run my LSTM program (for variable-length inputs):

TypeError: Inconsistency in the inner graph of scan 'scan_fn' : an input and an output are associated with the same recurrent state and should have the same type but have type 'TensorType(float64, col)' and 'TensorType(float64, matrix)' respectively.
My program is based on the LSTM example for the imdb sentiment analysis problem here: http://deeplearning.net/tutorial/lstm.html. My data is not imdb, though; it is sensor data.
I have shared my source code (lstm_var_length.py) and data (data.npz); click the file names.
From the above error and a few Google searches, I understand that there is a problem with the vector/matrix dimensions in my function. Below are the function definitions in which this problem occurs:
import numpy
import theano
from theano import tensor

floatX = theano.config.floatX


def lstm_layer(shared_params, input_ex, options):
    """
    LSTM layer implementation (variable-length inputs).

    Parameters
    ----------
    shared_params: shared model parameters W, U, b etc.
    input_ex: input example (say dimension 36 x 100, i.e. 36 features and 100 time units)
    options: neural network model options

    Output/returns
    --------------
    output of each LSTM cell [h_0, h_1, ..., h_t]
    """
    def slice(param, slice_no, height):
        # take the slice_no-th block of `height` rows (one block per gate)
        return param[slice_no*height : (slice_no+1)*height, :]

    def cell(wxb, ht_1, ct_1):
        pre_activation = tensor.dot(shared_params['U'], ht_1)
        pre_activation += wxb

        height = options['hidden_dim']
        ft = tensor.nnet.sigmoid(slice(pre_activation, 0, height))  # forget gate
        it = tensor.nnet.sigmoid(slice(pre_activation, 1, height))  # input gate
        c_t = tensor.tanh(slice(pre_activation, 2, height))         # candidate cell state
        ot = tensor.nnet.sigmoid(slice(pre_activation, 3, height))  # output gate

        ct = ft * ct_1 + it * c_t
        ht = ot * tensor.tanh(ct)
        return ht, ct

    wxb = tensor.dot(shared_params['W'], input_ex) + shared_params['b']
    num_frames = input_ex.shape[1]
    result, updates = theano.scan(cell,
                                  sequences=[wxb.transpose()],
                                  outputs_info=[tensor.alloc(numpy.asarray(0., dtype=floatX),
                                                             options['hidden_dim'], 1),
                                                tensor.alloc(numpy.asarray(0., dtype=floatX),
                                                             options['hidden_dim'], 1)],
                                  n_steps=num_frames)
    return result[0]  # only ht is needed
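While debugging this I started to suspect the tensor.alloc calls in outputs_info: if I understand Theano's type system correctly, allocating with a constant 1 as the second dimension makes that dimension broadcastable, so the initial state is typed 'col', while the ht/ct returned by cell are plain matrices, which would explain the exact pair of types named in the error. A minimal check of this suspicion (the expected outputs in the comments assume the default floatX=float64):

import numpy
import theano
from theano import tensor

init = tensor.alloc(numpy.asarray(0., dtype=theano.config.floatX), 5, 1)
print(init.type)            # TensorType(float64, col)
print(init.broadcastable)   # (False, True) -> second axis broadcastable

m = tensor.matrix()
print(m.type)               # TensorType(float64, matrix)
print(m.broadcastable)      # (False, False)

The second function involved is build_model: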
def build_model(shared_params, options):
    """
    Build the complete neural network model and return the symbolic variables.

    Parameters
    ----------
    shared_params: shared model parameters W, U, b etc.
    options: neural network model options

    Returns
    -------
    x, y, f_pred_prob, f_pred, cost
    """
    x = tensor.matrix(name='x', dtype=floatX)
    y = tensor.iscalar(name='y')  # tensor.vector(name='y', dtype=floatX)
    num_frames = x.shape[1]

    # lstm outputs from each cell
    lstm_result = lstm_layer(shared_params, x, options)
    # mean pool over the lstm cell outputs
    pool_result = lstm_result.sum(axis=1) / (1. * num_frames)

    # Softmax/Logistic Regression
    pred = tensor.nnet.softmax(tensor.dot(shared_params['softmax_W'], pool_result) +
                               shared_params['softmax_b'])

    # predicted probability function
    theano.printing.debugprint(pred)
    f_pred_prob = theano.function([x], pred, name='f_pred_prob', mode='DebugMode')  # 'DebugMode' <-- problem seems to occur at this point
    # predicted class
    f_pred = theano.function([x], pred.argmax(axis=0), name='f_pred')

    # cost of the model: negative log likelihood
    offset = 1e-8  # an offset to prevent log(0)
    cost = -tensor.log(pred[y-1, 0] + offset)  # y = 1,2,...,n but indexing is 0,1,...,(n-1)

    return x, y, f_pred_prob, f_pred, cost
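To narrow things down I also printed the types of the intermediate results, inserted just before the theano.function call (purely diagnostic; the names are from build_model above):

print(x.type)            # TensorType(float64, matrix)
print(lstm_result.type)  # scan stacks the per-step states, so I expect a 3D tensor here
print(pool_result.type)
print(pred.type)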
The error is raised while the f_pred_prob Theano function is being compiled. The exception and call stack are below:
File "/home/inblueswithu/Documents/Theano_Trails/lstm_var_length.py", line 450, in
main()
File "/home/inblueswithu/Documents/Theano_Trails/lstm_var_length.py", line 447, in main
train_lstm(model_options, train, valid)
File "/home/inblueswithu/Documents/Theano_Trails/lstm_var_length.py", line 314, in train_lstm
(x, y, f_pred_prob, f_pred, cost) = build_model(shared_params, options)
File "/home/inblueswithu/Documents/Theano_Trails/lstm_var_length.py", line 95, in build_model
f_pred_prob = theano.function([x], pred, name='f_pred_prob', mode='DebugMode') # 'DebugMode'
File "/usr/local/lib/python2.7/dist-packages/theano/compile/function.py", line 320, in function
output_keys=output_keys)
File "/usr/local/lib/python2.7/dist-packages/theano/compile/pfunc.py", line 479, in pfunc
output_keys=output_keys)
File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 1777, in orig_function
defaults)
File "/usr/local/lib/python2.7/dist-packages/theano/compile/debugmode.py", line 2571, in create
storage_map=storage_map)
File "/usr/local/lib/python2.7/dist-packages/theano/gof/link.py", line 690, in make_thunk
storage_map=storage_map)[:3]
File "/usr/local/lib/python2.7/dist-packages/theano/compile/debugmode.py", line 1809, in make_all
no_recycling)
File "/usr/local/lib/python2.7/dist-packages/theano/scan_module/scan_op.py", line 730, in make_thunk
self.validate_inner_graph()
File "/usr/local/lib/python2.7/dist-packages/theano/scan_module/scan_op.py", line 249, in validate_inner_graph
(self.name, type_input, type_output))
TypeError: Inconsistency in the inner graph of scan 'scan_fn' : an input and an output are associated with the same recurrent state and should have the same type but have type 'TensorType(float64, col)' and 'TensorType(float64, matrix)' respectively.
I have been debugging this for a week now but could not find the problem. I suspect the initializations in outputs_info of theano.scan to be the problem, but when I drop the second dimension (the 1), I get an error in the slice function even before the f_pred_prob function is compiled (near lstm_result). I am not sure where the problem is.
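If the broadcastable flag really is the culprit, would wrapping the initial states in tensor.unbroadcast be the right fix? As far as I know, it removes the broadcastable flag on the given axis, so the initial states would be typed as plain matrices like the ht/ct returned by cell. An untested sketch, replacing the scan call in lstm_layer above:

# Sketch of a possible fix: make the initial states non-broadcastable on
# axis 1 so their type matches the 'matrix' ht/ct computed inside cell.
h0 = tensor.unbroadcast(
    tensor.alloc(numpy.asarray(0., dtype=floatX), options['hidden_dim'], 1), 1)
c0 = tensor.unbroadcast(
    tensor.alloc(numpy.asarray(0., dtype=floatX), options['hidden_dim'], 1), 1)

result, updates = theano.scan(cell,
                              sequences=[wxb.transpose()],
                              outputs_info=[h0, c0],
                              n_steps=num_frames)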
Simply running this program with the data file placed in the same directory as the Python source file should reproduce this problem.
Please help me.

Thanks & Regards, inblueswithu