No Improvement with New Features

Posted: July 31, 2019 at 5:22 pm

Running a hyperparameter scan over 145 models resulted in no improvement over the 70% validation accuracy. (Well, one model did report 74% validation accuracy during the search, but it was not saved as the “best model” by talos.) Below are the confusion matrices for both the training and validation sets. Based on these results, it’s time to move on to other features. I’ll try calculating the variance of each feature across layers first, since that’s easy to implement, and resort to colour histograms of the images if that leads nowhere.
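As a rough sketch of that first idea, the per-layer features for one composition could be collapsed into a fixed-length vector by taking the variance of each feature across layers. The array below is a made-up placeholder, not actual project data:

```python
import numpy as np

# Hypothetical per-layer feature matrix for a single composition,
# shape (n_layers, n_features). Values are invented for illustration.
features = np.array([
    [0.1, 0.5, 0.9],
    [0.2, 0.4, 0.1],
    [0.3, 0.6, 0.5],
])

# Variance of each feature across layers, yielding one value per feature.
feature_variance = features.var(axis=0)
print(feature_variance.shape)  # one entry per feature: (3,)
```

The appeal is that the result has the same length regardless of how many layers a composition contains.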


New Features with New Dataset

Posted: July 16, 2019 at 6:03 pm

Following on from my last post, I finished generating and labelling a new dataset. I’m now rerunning the previous talos experiment to see whether the new dataset makes any difference. At first glance, good and bad compositions appear quite evenly distributed across my features, but a closer look reveals some unevenness in a few of them, such as offset:

More offsets near 0 were labelled as bad, and those two spikes are quite far from an even distribution.
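A check like this can be sketched by binning the offset feature separately per label and comparing the counts near 0. The arrays below are synthetic placeholders that merely mimic the shape of the observation, not the real data:

```python
import numpy as np

# Placeholder offset values: "good" drawn uniformly, "bad" with an
# artificial spike near 0, to illustrate the comparison only.
rng = np.random.default_rng(0)
offset_good = rng.uniform(-1, 1, 500)
offset_bad = np.concatenate([
    rng.normal(0, 0.05, 300),   # concentrated near 0
    rng.uniform(-1, 1, 200),
])

bins = np.linspace(-1, 1, 21)
good_hist, _ = np.histogram(offset_good, bins=bins)
bad_hist, _ = np.histogram(offset_bad, bins=bins)

# Share of "bad" samples landing in the two central bins around 0.
centre = slice(9, 11)
print(bad_hist[centre].sum() / len(offset_bad))
```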

The new dataset also contains 15,000 items: 3921 “good”, 3872 “bad” and 7207 “neutral” labels. Below is a random sampling of good and bad compositions from the new dataset:
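Drawing such a sample can be sketched as picking a few indices per label group. The label list below simply reproduces the counts reported above; the grouping and sample size are illustrative choices:

```python
import random

# Reconstruct the reported label counts (3921 + 3872 + 7207 = 15,000).
labels = ["good"] * 3921 + ["bad"] * 3872 + ["neutral"] * 7207
assert len(labels) == 15000

# Collect the indices belonging to each label of interest.
good_idx = [i for i, l in enumerate(labels) if l == "good"]
bad_idx = [i for i, l in enumerate(labels) if l == "bad"]

# Draw a small random sample from each group for visual inspection.
random.seed(0)
sample = random.sample(good_idx, 5) + random.sample(bad_idx, 5)
print(len(sample))  # 10
```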


Karen Barad

Posted: July 2, 2019 at 6:35 pm

After months, I’ve finally finished reading the Karen Barad papers that were provided as part of their symposium at UBC. The following are my notes from the symposium, as well as my responses to the readings. These are lightly edited and clarified; where I was inspired to respond to the notes, I’ve included that in square brackets.

Troubling Time/s and Ecologies of Nothingness, Re-turning, Re-remembering, and Facing the Incalculable.