Why does every RNN give the same loss and accuracy?

I have a dataset composed of positions in X, Y and Z and six actuator variables, which represents around 310,000 rows.

I tried an MLP but it doesn't work well, so I would like to try an RNN. However, every kind of RNN I try (SimpleRNN, LSTM, NARX), no matter the number of neurons or layers, gives me the same bad result: the loss and accuracy always come out identical, and they are too poor to be usable.
Do you have an idea why, and what could I do?
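
For context, here is a minimal sketch of the kind of setup I mean (this assumes Keras; the window length, layer sizes, and data shapes are placeholders for illustration, not my exact code):

```python
# Minimal sketch of an LSTM regression setup (assumed Keras; sizes are placeholders).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Suppose the data has been windowed so that X has shape
# (n_samples, timesteps, n_features) for the 6 actuator variables,
# and y has shape (n_samples, 3) for the X, Y, Z positions.
timesteps, n_features = 20, 6

model = keras.Sequential([
    layers.LSTM(64, input_shape=(timesteps, n_features)),
    layers.Dense(32, activation="relu"),
    layers.Dense(3),  # linear output for the 3 position coordinates
])

# Mean squared error loss with mean absolute error as an extra metric.
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
# model.fit(X_train, y_train, validation_split=0.2, epochs=50, batch_size=64)
```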

I described my problem in more detail here if needed:
MultiLayer Perceptron not working for regression problem, what could I try?
