Dictionary information not fully written to pkl file with pickle.dump [duplicate]

I have an issue when using pickle to record the information from a trials object:

import pickle

fname = r'C:\Users\test09.pkl'

# truncate the file first
with open(fname, 'wb+') as fpkl:
    pass

# append one pickled object per trial
for trial_label, trial in trials.items():
    print(f"\nData for {trial_label}:")
    with open(fname, "ab") as file:
        pickle.dump(trials[trial_label], file)

The result is that only the information from trial_0 is written to test09.pkl; the content of trial_1 is dropped. There is no information from trial_1 in the file.

Using the print command, the information in trials can be seen:

trials = {
    'trial_0': <hyperopt.base.Trials object at 0x0000020B6F875E50>,
    'trial_1': <hyperopt.base.Trials object at 0x0000020C32DA9490>
}

I also tried to print the content of trial_0 and trial_1.

The information is displayed below (all the information is already present in trial_0 and trial_1):

for trial_label, trial in trials.items():
    print(f"\nData for {trial_label}:")
    for trial_result in trial.trials:
        print(trial_result)

Data for trial_0:

{'state': 2, 'tid': 0, 'spec': None, 'result': {'loss': 3.5049540996551514, 'status': 'ok', 'params': {'alpha': 0.9892463279084213, 'batch_size': 8, 'initializer': 'xavier', 'lamda': 0.0020342129447770397, 'learning_rate': 0.012046211309666995, 'optimizer': 'Momentum', 'units1': 211, 'units2': 16}, 'loss_train': <tf.Tensor: shape=(), dtype=float32, numpy=3.5012333>, 'history_loss': ListWrapper([8.753083229064941, 8.74779987335205, 8.758686065673828, 8.748302459716797, 8.757068634033203, 8.758963584899902, 8.752105712890625, 8.771525382995605, 8.762356758117676, 8.758400917053223, 8.760527610778809, 8.76097583770752, 8.753300666809082, 8.761896133422852, 8.750022888183594, 8.755210876464844, 8.755691528320312, 8.748316764831543, 8.75323486328125, 8.763813018798828, 8.751744270324707, 8.749906539916992]), 'history_val_loss': ListWrapper([8.762385368347168, 8.762385368347168, 8.762385368347168, 8.762385368347168, 8.762385368347168, 8.762385368347168, 8.762385368347168, 8.762385368347168, 8.762385368347168, 8.762385368347168, 8.762385368347168, 8.762385368347168, 8.762385368347168, 8.762385368347168, 8.762385368347168, 8.762385368347168, 8.762385368347168, 8.762385368347168, 8.762385368347168, 8.762385368347168, 8.762385368347168, 8.762385368347168])}, 'misc': {'tid': 0, 'cmd': ('domain_attachment', 'FMinIter_Domain'), 'workdir': None, 'idxs': {'alpha': [0], 'alpha2': [0], 'batch_size': [0], 'initializer': [0], 'lamda': [0], 'lamda2': [0], 'learning_rate': [0], 'optimizer': [0], 'units1': [0], 'units2': [0]}, 'vals': {'alpha': [1], 'alpha2': [0.9892463279084213], 'batch_size': [1], 'initializer': [0], 'lamda': [1], 'lamda2': [0.0020342129447770397], 'learning_rate': [0.012046211309666995], 'optimizer': [3], 'units1': [210], 'units2': [15]}}, 'exp_key': None, 'owner': None, 'version': 0, 'book_time': datetime.datetime(2024, 1, 13, 17, 40, 31, 538000), 'refresh_time': datetime.datetime(2024, 1, 13, 17, 50, 2, 337000)}
{'state': 2, 'tid': 1, 'spec': None, 'result': {'loss': 568.3591918945312, 'status': 'ok', 'params': {'alpha': 0.21653140358832956, 'batch_size': 4, 'initializer': 'xavier', 'lamda': 0.16352227980159106, 'learning_rate': 0.007188317753729951, 'optimizer': 'SGD', 'units1': 104, 'units2': 60}, 'loss_train': <tf.Tensor: shape=(), dtype=float32, numpy=568.365>, 'history_loss': ListWrapper([1420.9124755859375, 1420.9097900390625, 1420.9202880859375, 1420.9139404296875, 1420.9140625, 1420.9158935546875, 1420.917724609375, 1420.91259765625, 1420.91845703125, 1420.9150390625, 1420.9215087890625, 1420.919677734375, 1420.9197998046875, 1420.91943359375, 1420.9122314453125, 1420.907470703125, 1420.913330078125, 1420.915771484375, 1420.913330078125, 1420.9183349609375, 1420.92138671875, 1420.919189453125]), 'history_val_loss': ListWrapper([1420.89794921875, 1420.89794921875, 1420.89794921875, 1420.89794921875, 1420.89794921875, 1420.89794921875, 1420.89794921875, 1420.89794921875, 1420.89794921875, 1420.89794921875, 1420.89794921875, 1420.89794921875, 1420.89794921875, 1420.89794921875, 1420.89794921875, 1420.89794921875, 1420.89794921875, 1420.89794921875, 1420.89794921875, 1420.89794921875, 1420.89794921875, 1420.89794921875])}, 'misc': {'tid': 1, 'cmd': ('domain_attachment', 'FMinIter_Domain'), 'workdir': None, 'idxs': {'alpha': [1], 'alpha2': [1], 'batch_size': [1], 'initializer': [1], 'lamda': [1], 'lamda2': [1], 'learning_rate': [1], 'optimizer': [1], 'units1': [1], 'units2': [1]}, 'vals': {'alpha': [1], 'alpha2': [0.21653140358832956], 'batch_size': [2], 'initializer': [0], 'lamda': [1], 'lamda2': [0.16352227980159106], 'learning_rate': [0.007188317753729951], 'optimizer': [2], 'units1': [103], 'units2': [59]}}, 'exp_key': None, 'owner': None, 'version': 0, 'book_time': datetime.datetime(2024, 1, 13, 17, 50, 2, 349000), 'refresh_time': datetime.datetime(2024, 1, 13, 18, 1, 2, 451000)}
{'state': 2, 'tid': 2, 'spec': None, 'result': {'loss': 0.4592095911502838, 'status': 'ok', 'params': {'alpha': 0, 'batch_size': 16, 'initializer': 'xavier', 'lamda': 0, 'learning_rate': 0.0070519520172108155, 'optimizer': 'adam', 'units1': 232, 'units2': 201}, 'loss_train': <tf.Tensor: shape=(), dtype=float32, numpy=0.4725984>, 'history_loss': ListWrapper([1.181496024131775, 1.1641870737075806, 1.173369288444519, 1.1811375617980957, 1.1743086576461792, 1.1636468172073364, 1.1717565059661865, 1.178470253944397, 1.1721278429031372, 1.171668291091919, 1.1752921342849731, 1.1784441471099854, 1.1712242364883423, 1.1714860200881958, 1.180673599243164, 1.1781213283538818, 1.1782572269439697, 1.172095775604248, 1.183220624923706, 1.1786330938339233, 1.1762667894363403, 1.174416422843933]), 'history_val_loss': ListWrapper([1.1480239629745483, 1.1480239629745483, 1.1480239629745483, 1.1480239629745483, 1.1480239629745483, 1.1480239629745483, 1.1480239629745483, 1.1480239629745483, 1.1480239629745483, 1.1480239629745483, 1.1480239629745483, 1.1480239629745483, 1.1480239629745483, 1.1480239629745483, 1.1480239629745483, 1.1480239629745483, 1.1480239629745483, 1.1480239629745483, 1.1480239629745483, 1.1480239629745483, 1.1480239629745483, 1.1480239629745483])}, 'misc': {'tid': 2, 'cmd': ('domain_attachment', 'FMinIter_Domain'), 'workdir': None, 'idxs': {'alpha': [2], 'alpha2': [], 'batch_size': [2], 'initializer': [2], 'lamda': [2], 'lamda2': [], 'learning_rate': [2], 'optimizer': [2], 'units1': [2], 'units2': [2]}, 'vals': {'alpha': [0], 'alpha2': [], 'batch_size': [0], 'initializer': [0], 'lamda': [0], 'lamda2': [], 'learning_rate': [0.0070519520172108155], 'optimizer': [0], 'units1': [231], 'units2': [200]}}, 'exp_key': None, 'owner': None, 'version': 0, 'book_time': datetime.datetime(2024, 1, 13, 18, 1, 2, 462000), 'refresh_time': datetime.datetime(2024, 1, 13, 18, 4, 47, 585000)}

Data for trial_1:

{'state': 2, 'tid': 0, 'spec': None, 'result': {'loss': 0.49134936928749084, 'status': 'ok', 'params': {'alpha': 0.010790905808018114, 'batch_size': 16, 'initializer': 'xavier', 'lamda': 0, 'learning_rate': 0.014460727153857825, 'optimizer': 'adam', 'units1': 95, 'units2': 78}, 'loss_train': <tf.Tensor: shape=(), dtype=float32, numpy=0.48485723>, 'history_loss': ListWrapper([1.212143063545227, 1.2260957956314087, 1.2122955322265625, 1.2140283584594727, 1.225738525390625, 1.2162582874298096, 1.2104045152664185, 1.2146499156951904, 1.20858895778656, 1.2094855308532715, 1.2158024311065674, 1.20168137550354, 1.217756986618042, 1.220750093460083, 1.210412859916687, 1.2164620161056519, 1.2077429294586182, 1.2252017259597778, 1.2162216901779175, 1.2155356407165527, 1.2114344835281372, 1.2177746295928955]), 'history_val_loss': ListWrapper([1.228373408317566, 1.228373408317566, 1.228373408317566, 1.228373408317566, 1.228373408317566, 1.228373408317566, 1.228373408317566, 1.228373408317566, 1.228373408317566, 1.228373408317566, 1.228373408317566, 1.228373408317566, 1.228373408317566, 1.228373408317566, 1.228373408317566, 1.228373408317566, 1.228373408317566, 1.228373408317566, 1.228373408317566, 1.228373408317566, 1.228373408317566, 1.228373408317566])}, 'misc': {'tid': 0, 'cmd': ('domain_attachment', 'FMinIter_Domain'), 'workdir': None, 'idxs': {'alpha': [0], 'alpha2': [0], 'batch_size': [0], 'initializer': [0], 'lamda': [0], 'lamda2': [], 'learning_rate': [0], 'optimizer': [0], 'units1': [0], 'units2': [0]}, 'vals': {'alpha': [1], 'alpha2': [0.010790905808018114], 'batch_size': [0], 'initializer': [0], 'lamda': [0], 'lamda2': [], 'learning_rate': [0.014460727153857825], 'optimizer': [0], 'units1': [94], 'units2': [77]}}, 'exp_key': None, 'owner': None, 'version': 0, 'book_time': datetime.datetime(2024, 1, 13, 18, 5, 11, 694000), 'refresh_time': datetime.datetime(2024, 1, 13, 18, 8, 30, 826000)}
{'state': 2, 'tid': 1, 'spec': None, 'result': {'loss': 0.5244930386543274, 'status': 'ok', 'params': {'alpha': 0.7846837216580755, 'batch_size': 4, 'initializer': 'xavier', 'lamda': 0, 'learning_rate': 0.14068911632817868, 'optimizer': 'adam', 'units1': 107, 'units2': 54}, 'loss_train': <tf.Tensor: shape=(), dtype=float32, numpy=0.5071506>, 'history_loss': ListWrapper([1.2678765058517456, 1.2628709077835083, 1.262000560760498, 1.2589197158813477, 1.267344355583191, 1.2651135921478271, 1.2699180841445923, 1.2631267309188843, 1.264126181602478, 1.2626861333847046, 1.2632983922958374, 1.2601070404052734, 1.2722866535186768, 1.2662359476089478, 1.272186279296875, 1.2731786966323853, 1.277005672454834, 1.2642821073532104, 1.262831449508667, 1.2699660062789917, 1.2651342153549194, 1.274930477142334]), 'history_val_loss': ListWrapper([1.311232566833496, 1.311232566833496, 1.311232566833496, 1.311232566833496, 1.311232566833496, 1.311232566833496, 1.311232566833496, 1.311232566833496, 1.311232566833496, 1.311232566833496, 1.311232566833496, 1.311232566833496, 1.311232566833496, 1.311232566833496, 1.311232566833496, 1.311232566833496, 1.311232566833496, 1.311232566833496, 1.311232566833496, 1.311232566833496, 1.311232566833496, 1.311232566833496])}, 'misc': {'tid': 1, 'cmd': ('domain_attachment', 'FMinIter_Domain'), 'workdir': None, 'idxs': {'alpha': [1], 'alpha2': [1], 'batch_size': [1], 'initializer': [1], 'lamda': [1], 'lamda2': [], 'learning_rate': [1], 'optimizer': [1], 'units1': [1], 'units2': [1]}, 'vals': {'alpha': [1], 'alpha2': [0.7846837216580755], 'batch_size': [2], 'initializer': [0], 'lamda': [0], 'lamda2': [], 'learning_rate': [0.14068911632817868], 'optimizer': [0], 'units1': [106], 'units2': [53]}}, 'exp_key': None, 'owner': None, 'version': 0, 'book_time': datetime.datetime(2024, 1, 13, 18, 8, 30, 835000), 'refresh_time': datetime.datetime(2024, 1, 13, 18, 23, 41, 618000)}
{'state': 2, 'tid': 2, 'spec': None, 'result': {'loss': 164.2992706298828, 'status': 'ok', 'params': {'alpha': 0, 'batch_size': 16, 'initializer': 'xavier', 'lamda': 0.04057033711323864, 'learning_rate': 0.15030101617707525, 'optimizer': 'Momentum', 'units1': 89, 'units2': 130}, 'loss_train': <tf.Tensor: shape=(), dtype=float32, numpy=164.29684>, 'history_loss': ListWrapper([410.74212646484375, 410.7425842285156, 410.7426452636719, 410.73309326171875, 410.7394104003906, 410.7387390136719, 410.7337341308594, 410.7370300292969, 410.73516845703125, 410.7540283203125, 410.7496032714844, 410.7389831542969, 410.7393798828125, 410.7406005859375, 410.74114990234375, 410.74542236328125, 410.7334899902344, 410.7344055175781, 410.7420654296875, 410.74090576171875, 410.7455139160156, 410.73712158203125]), 'history_val_loss': ListWrapper([410.7481689453125, 410.7481689453125, 410.7481689453125, 410.7481689453125, 410.7481689453125, 410.7481689453125, 410.7481689453125, 410.7481689453125, 410.7481689453125, 410.7481689453125, 410.7481689453125, 410.7481689453125, 410.7481689453125, 410.7481689453125, 410.7481689453125, 410.7481689453125, 410.7481689453125, 410.7481689453125, 410.7481689453125, 410.7481689453125, 410.7481689453125, 410.7481689453125])}, 'misc': {'tid': 2, 'cmd': ('domain_attachment', 'FMinIter_Domain'), 'workdir': None, 'idxs': {'alpha': [2], 'alpha2': [], 'batch_size': [2], 'initializer': [2], 'lamda': [2], 'lamda2': [2], 'learning_rate': [2], 'optimizer': [2], 'units1': [2], 'units2': [2]}, 'vals': {'alpha': [0], 'alpha2': [], 'batch_size': [0], 'initializer': [0], 'lamda': [1], 'lamda2': [0.04057033711323864], 'learning_rate': [0.15030101617707525], 'optimizer': [3], 'units1': [88], 'units2': [129]}}, 'exp_key': None, 'owner': None, 'version': 0, 'book_time': datetime.datetime(2024, 1, 13, 18, 23, 42, 101000), 'refresh_time': datetime.datetime(2024, 1, 13, 18, 27, 3, 197000)}

I don't know why the information in trial_1 is dropped when the data is written to the pkl file. How can I solve the issue?

I used this code:

for trial_label, trial in trials.items():
    print(f"\nData for {trial_label}:")
    with open(fname, "ab") as file:
        pickle.dump(trial, file)

and used the following code to display the content of the pkl file; only the information from trial_0 could be seen:

with open(r'C:\Users\test09.pkl', 'rb') as file:
    data = pickle.load(file)

for trial in data.trials:
    print(trial)

with open(r'C:\Users\output_test10.txt', 'w') as file:
    for alle in data.trials:
        file.write(str(alle) + '\n')

You call pickle.dump() independently for trial_0 and trial_1, which creates two separate pickled objects in the file. pickle.load() only loads one object at a time from the file, so to get the data for trial_1 you just need to call pickle.load(file) again.
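For example, here is a minimal sketch (using the file path from your question) that reads back every object that was appended, in the order it was dumped:

import pickle

fname = r'C:\Users\test09.pkl'

# Each pickle.dump() call wrote one complete, self-contained pickle
# stream, so keep calling pickle.load() until the file is exhausted.
loaded = []
with open(fname, 'rb') as file:
    while True:
        try:
            loaded.append(pickle.load(file))
        except EOFError:
            break  # no more pickled objects left in the file

# loaded[0] is the Trials object for trial_0, loaded[1] the one for trial_1
for i, trials_obj in enumerate(loaded):
    print(f"\nData for trial_{i}:")
    for trial_result in trials_obj.trials:
        print(trial_result)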

See also: How to use append with pickle in python?
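Alternatively, if appending separate pickles isn't a requirement, it is usually simpler to dump the whole trials dict as a single object, so one pickle.load() call restores everything. A sketch, assuming trials is the dict from your question:

import pickle

fname = r'C:\Users\test09.pkl'

# Write the entire dict as one pickled object ...
with open(fname, 'wb') as file:
    pickle.dump(trials, file)

# ... and restore it with a single load.
with open(fname, 'rb') as file:
    trials = pickle.load(file)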
