Following on from the previous post, I ran a test with a different training procedure. Previously I had been doing the canonical SOM training, where the neighbourhood starts large and shrinks monotonically over time. For the videos, I want the degree of reorganization to increase over time, so I train over a number of epochs where each epoch's starting neighbourhood size is larger than the last. Within each epoch, that maximum neighbourhood size still shrinks with each training sample. In this test (results pictured in the large image below, with previous results underneath), I run multiple epochs where the maximum neighbourhood stays the same for every epoch.
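To make the difference between the two schedules concrete, here is a minimal sketch. The function name, the linear decay within each epoch, and the parameters are my assumptions for illustration, not the actual training code:

```python
def neighbourhood_schedule(n_epochs, samples_per_epoch, sigma_max,
                           growing=True):
    """Per-sample neighbourhood radius over a whole SOM training run.

    growing=True : each epoch's starting radius grows toward sigma_max
                   (the schedule used for the videos).
    growing=False: every epoch restarts at the same sigma_max
                   (the schedule tested in this post).
    Within each epoch the radius decays linearly toward zero.
    """
    radii = []
    for e in range(n_epochs):
        if growing:
            # starting radius ramps up epoch by epoch
            start = sigma_max * (e + 1) / n_epochs
        else:
            # starting radius is the same for every epoch
            start = sigma_max
        for s in range(samples_per_epoch):
            radii.append(start * (1.0 - s / samples_per_epoch))
    return radii
```

With `growing=False` the map gets the same large-scale reorganization pressure in every epoch, which may explain why repeated epochs at a fixed maximum behave differently from one long shrinking schedule.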
The top image is the same resolution as the source file (corresponding to the lower-left image, the previous best result). It does not have any of the artifacts of the previous best (colour spotting, visible banding) and has the smoothness I’ve been looking for through all these iterations. The colour contrast seems very low, though, and I’m wondering why that is; the structure also looks very similar to the original, in that the face, neck, and hand have not migrated into each other as they have in the lower images.
This means I need either a larger max neighbourhood for this training method, or perhaps more training iterations. The former would not take any more time, so I should try that first. The top image took about 3.5 days to train. I’m also wondering if the lack of diversity could be related to my disabling of the random seed, where the same subset of training samples could be used over and over. I’ll make both these changes and see where that goes. On closer inspection, it seems I had already switched to time-seeded randomness! I’ll try fewer training iterations and see if that makes any difference.
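For reference, the fixed-seed versus time-seeded distinction can be sketched as follows. The function and parameter names are hypothetical, not taken from the actual code; the point is that a fixed seed makes every run (or epoch) draw the same training subset, while seeding from the clock varies it:

```python
import random
import time

def sample_training_subset(data, k, fixed_seed=None):
    """Draw k training samples from data.

    With fixed_seed set, every call draws the identical subset,
    which could limit diversity in the trained map; with no seed,
    the clock is used and each call draws a different subset.
    """
    seed = fixed_seed if fixed_seed is not None else time.time_ns()
    rng = random.Random(seed)
    return rng.sample(data, k)
```

So with a fixed seed, repeated epochs would keep revisiting the same samples, which was the suspected cause of the low diversity before checking that the switch to time seeding had already been made.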