Modifying Features for Extreme Offsets

As each composition uses 5 layers, I wanted to create the illusion of less density without changing the number of parameters. To do this, I allow offsets where a layer can slide completely out of view, making it invisible. This allows compositions of only the background colour, as well as simplified compositions where only a few layers are visible.

The problem with this from an ML perspective is that the parameters of invisible layers remain in the training data, because the training data represents the instructions for making the image, not the image itself. The model therefore still sees the features of a layer even when that layer contributes nothing to the image. To address this, I ran another hyperparameter search where I zero out all the parameters of layers that are not visible. I reran an older experiment as a baseline, and the results are promising.
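As a minimal sketch of the zero-out idea (the layout here is hypothetical: each example is assumed to be a flat vector of per-layer parameters, with normalised x/y offsets at fixed positions and a visibility range marking when a layer has slid fully off the canvas):

```python
import numpy as np

# Hypothetical layout: each composition has 5 layers, each described by
# PARAMS_PER_LAYER values, with the x/y offsets at fixed positions.
N_LAYERS = 5
PARAMS_PER_LAYER = 8
OFFSET_X, OFFSET_Y = 0, 1  # indices of the offset parameters within a layer

def zero_invisible_layers(x, visible_range=(-1.0, 1.0)):
    """Zero out every parameter of layers whose offset places them
    entirely outside the canvas (offsets assumed normalised)."""
    x = x.reshape(N_LAYERS, PARAMS_PER_LAYER).copy()
    lo, hi = visible_range
    for i in range(N_LAYERS):
        ox, oy = x[i, OFFSET_X], x[i, OFFSET_Y]
        if not (lo <= ox <= hi and lo <= oy <= hi):
            x[i, :] = 0.0  # layer is off-canvas: erase its features
    return x.reshape(-1)
```

This would be applied to every training example before the hyperparameter search, so that invisible layers carry no spurious features into the model.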

Using the dataset without the feature zero-outs, the best model achieved accuracies of 68.7% (validation) and 54% (test), with f1-scores of 83% (“bad”) and 51% (“good”) on the validation set and 66% (“bad”) and 39% (“good”) on the test set. Using the dataset with the feature zero-outs, the best model achieved accuracies of 75.5% (validation) and 50% (test), with f1-scores of 79% (“bad”) and 63% (“good”) on the validation set and 59% (“bad”) and 38% (“good”) on the test set.
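For reference, per-class f1-scores and accuracy of the kind quoted above can be computed with scikit-learn. The labels and predictions below are placeholders to show the calls, not the actual experiment data:

```python
from sklearn.metrics import accuracy_score, f1_score

# Placeholder predictions/labels for a binary "bad"/"good" classifier.
y_true = ["bad", "bad", "good", "good", "bad", "good"]
y_pred = ["bad", "good", "good", "bad", "bad", "good"]

acc = accuracy_score(y_true, y_pred)
f1_bad = f1_score(y_true, y_pred, pos_label="bad")
f1_good = f1_score(y_true, y_pred, pos_label="good")
```

Reporting f1 per class, rather than accuracy alone, matters here because the “bad”/“good” classes are evidently imbalanced.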

This is a significant increase in validation accuracy of nearly 7 percentage points, and the validation f1-score for the “good” class increases by 12 points. The test accuracy is actually slightly poorer, but given the small size of the dataset at this stage this is no surprise. It seems clear that I should try the zero-out method again once a larger dataset has been collected.