I am interested in updating existing layer parameters in Keras in place (not removing a layer and inserting a new one, just modifying the existing layer's parameters).
I will give an example of a function I'm writing:
def add_filters(self, model):
    # indices of all convolutional layers (all my conv layers have 'convolution' in their name)
    conv_indices = [i for i, layer in enumerate(model.layers)
                    if 'convolution' in layer.get_config()['name']]
    # pick one conv layer at random (random.choice avoids the off-by-one of
    # indexing model.layers with an index into conv_indices)
    random_conv_index = random.choice(conv_indices)
    factor = 2
    conv_layer = model.layers[random_conv_index]
    conv_layer.filters = conv_layer.filters * factor
    print('new conv layer filters after transform is:', conv_layer.filters)
    print('just to make sure, its:', model.layers[random_conv_index].filters)
    return model
What's happening here: I take a random convolutional layer from my network and try to double its filters. As far as I know, this shouldn't cause any compilation issues with input/output size compatibility in any case.
The thing is, my model doesn't change at all. The two print-outs at the end print the correct number (double the previous filter count), but when I compile the model and print model.summary(), I still see the previous filter count.
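My guess at what's going on (hedged, not Keras-specific): an attribute like filters is typically only read once, when the layer builds its weights, so mutating it afterwards changes the attribute but never rebuilds anything. A minimal plain-Python sketch of that pitfall, using a toy TinyConv class I made up for illustration:

```python
import numpy as np

class TinyConv:
    """Toy stand-in for a conv layer: weights are created once, at build time."""
    def __init__(self, filters):
        self.filters = filters
        self.kernel = None

    def build(self, in_channels):
        # the weight shape is frozen here, based on self.filters *at build time*
        self.kernel = np.zeros((3, 3, in_channels, self.filters))

layer = TinyConv(filters=8)
layer.build(in_channels=3)

layer.filters *= 2               # the attribute changes...
print(layer.filters)             # -> 16
print(layer.kernel.shape[-1])    # -> 8: the weights were never rebuilt

layer.build(in_channels=3)       # only an explicit rebuild picks up the change
print(layer.kernel.shape[-1])    # -> 16
```

If that's the mechanism, the fix would presumably be to rebuild the model after editing the attribute (e.g. serialize the config, edit it, and reconstruct the model from it) rather than mutating the live layer object.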
BTW, I'm not restricted to Keras. If anyone has an idea how to pull this off in PyTorch, for example, I'll buy that too :D