Cross-Validation and Hyperparameter Tuning in caret

Posted: May 19, 2019 at 4:13 pm

The above plot outlines the results of my 10-fold cross-validation for parameter tuning. The best model had 32 hidden units and a decay of 0.1. Predictions from this ‘best’ model are still not great; the accuracy is 61% on the validation set. 30 “good” compositions were predicted to be “bad” and 31 “bad” compositions were predicted to be “good”. The following images show the misclassified predictions: good predicted to be bad (top) and bad predicted to be good (bottom).
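For context, the tuning setup looks roughly like the sketch below. I'm assuming caret's "nnet" method here (its tuning parameters are size, the number of hidden units, and decay); the grid values and the data/column names (train_df, valid_df, label) are placeholders, not my actual data.

library(caret)

# 10-fold cross-validation
ctrl <- trainControl(method = "cv", number = 10)

# candidate hidden-unit counts and weight-decay values (assumed grid)
grid <- expand.grid(size  = c(8, 16, 32, 64),
                    decay = c(0.001, 0.01, 0.1))

set.seed(42)
fit <- train(label ~ ., data = train_df,
             method    = "nnet",
             trControl = ctrl,
             tuneGrid  = grid,
             trace     = FALSE)

fit$bestTune   # e.g. size = 32, decay = 0.1
plot(fit)      # tuning plot like the one above

# accuracy and confusion counts on the held-out validation set
preds <- predict(fit, newdata = valid_df)
confusionMatrix(preds, valid_df$label)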

Before I jump into training a deeper model, I have an idea for transforming my input vectors. Since I generate vectors whose features all range from 0-1 at the same resolution, I wonder if the network is having trouble learning distributions for parameters that are not really continuous; parameters like frequency, offsets, and render styles behave more like categorical variables. So one idea is to change those features so that their resolution matches the number of possible categories, as sketched below.
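A rough sketch of that transformation (the helper name, feature name, and level count are hypothetical): snap each 0-1 feature onto a grid with as many points as the parameter has categories.

# hypothetical helper: quantize a 0-1 feature down to k discrete levels
quantize01 <- function(x, k) round(x * (k - 1)) / (k - 1)

# e.g. a frequency parameter with 12 possible values
freq <- c(0.00, 0.33, 0.61, 0.97)
quantize01(freq, 12)
#> 0.0000000 0.3636364 0.6363636 1.0000000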

Jumping to a deeper network would mean either continuing my workflow in R with another DNN library, or dropping R and implementing the model in Keras, running on my CUDA setup.