Keras accuracy does not change for multi-output model

I want to predict the penalty/punishment given in fraud cases. The inputs have the form (damage amount in $, recidivism flag) and the targets have the form (fine in $, jail in months, community service in hours, probation in months). The following is my code:
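To show the layout, here are two made-up rows (tab-separated; the first five columns are case metadata I left out, which is why the code reads fields[5:7] as inputs and fields[7:11] as targets):

...	...	...	...	...	250,000	1	50,000	12	100	24
...	...	...	...	...	8,000	0	2,000	0	40	6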

import numpy as np
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.optimizers import SGD

fname = "filepath.tsv"
all_in = []
all_out = []

with open(fname) as f:
    for i, line in enumerate(f):
        if i < 3:  # skip the three header lines
            print("Header:", line.strip().split('\t'))
            continue
        fields = line.strip().split('\t')
        all_in.append([int(a.replace(",", "")) for a in fields[5:7]])
        all_out.append([int(a.replace(",", "")) for a in fields[7:11]])

case_in = np.array(all_in, dtype="float32")
target_out = np.array(all_out, dtype="float32")

normalize_layer = tf.keras.layers.Normalization(axis=-1, name="normalize_in")
normalize_layer.adapt(case_in)
normalize_out = tf.keras.layers.Normalization(axis=-1, name="normalize_out")
normalize_out.adapt(target_out)
denormalize_out = tf.keras.layers.Normalization(axis=-1, invert=True, name="denormalize_out")
denormalize_out.adapt(target_out)
scaled_out = normalize_out(target_out).numpy()

inputs = Input(shape=(2,))
x = normalize_layer(inputs)
x = Dense(6, activation="sigmoid")(x)  # Dense uses a bias by default
x = Dense(4, activation="sigmoid")(x)
y_4 = Dense(1, activation="sigmoid", name="y_4")(x)
punishment = Dense(3, activation="sigmoid", name="punishment")(x)
y_1 = Dense(1, activation="sigmoid", name="y_1")(punishment)
y_2 = Dense(1, activation="sigmoid", name="y_2")(punishment)
y_3 = Dense(1, activation="sigmoid", name="y_3")(punishment)

model = Model(inputs=inputs, outputs=[y_1, y_2, y_3, y_4])

model.compile(
    optimizer=SGD(learning_rate=0.01, weight_decay=1e-6, momentum=0.9, nesterov=True),
    loss={
        "y_1": "mean_squared_error",
        "y_2": "mean_squared_error",
        "y_3": "mean_squared_error",
        "y_4": "mean_squared_error",
    },
    metrics=["accuracy"],
)

model.fit(
    case_in,
    # one target column per output head, in the order fields[7:11] were read
    {
        "y_1": scaled_out[:, 0],
        "y_2": scaled_out[:, 1],
        "y_3": scaled_out[:, 2],
        "y_4": scaled_out[:, 3],
    },
    batch_size=10,  # increase once there is more data
    epochs=300,
    verbose=2,
    validation_split=0.8,
)

for layer in model.layers:
    print("===== LAYER:", layer.name, "=====")
    params = layer.get_weights()
    if params:
        weights, biases = params[0], params[1]
        print("weights:")
        print(weights)
        print("biases:")
        print(biases)
    else:
        print("weights:", [])
Once the model starts training, the accuracy does not change at all. Also, while trying to fix that, I somehow got my ~50 samples processed as a single batch per epoch in .fit, even though batch_size says 10. I have been trying so many fixes from around the internet that I may have made things even worse.
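For what it's worth, the arrays going into fit seem to have the shapes I expect:

print(case_in.shape)     # roughly (50, 2)
print(scaled_out.shape)  # roughly (50, 4)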

Initially, I normalized the data, both inputs and targets, which unfroze my loss (it had been stuck at one value), but it did nothing for the accuracy metric. I also changed the ReLU activations to sigmoid, because at one point it looked like a dead-ReLU problem: the biases in my layers were never updated from their initial zeros. After that, I kept nudging the optimizer, the loss functions, and the epoch count, all to no avail.
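For reference, before the switch the hidden layers looked roughly like this (reconstructed from memory, not my exact code):

x = Dense(6, activation="relu")(x)
x = Dense(4, activation="relu")(x)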

How do I make the model actually serve its purpose? And do you have any general feedback on the code?
