So once I added a random codebook init to ann_som, I configured oprofile to get a sense of which portions of the patch would need optimization. The results make it very clear that ann_som itself accounts for as much as 80% of PD's CPU usage, while Python uses a measly 10%. My assumption that Python might be the bottleneck is clearly unfounded, and the only way to improve performance would be to limit the number of iterations ann_som goes through. Now that I have a fully populated SOM, I'll see how few iterations are needed to train the second SOM. I also wonder what will happen if I use the linear training method multiple times without clearing the SOM. It should converge much faster the second time, since the majority of the data will not have changed.
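To make the iteration question concrete, here is a minimal sketch of what I mean by random codebook init plus linearly decaying training, reusing an already-trained codebook as the starting point instead of clearing it. This is a hypothetical standalone Python/NumPy version for experimentation, not ann_som's actual implementation; the function names, the 1-D lattice, and the decay schedule are all my own assumptions:

```python
import numpy as np

def random_init(n_nodes, dim, rng):
    """Random codebook init: each node starts as a random vector in [0, 1)."""
    return rng.random((n_nodes, dim))

def train_linear(codebook, data, n_iters, lr0=0.5, radius0=2.0):
    """Linear-style SOM training on a 1-D node lattice.

    Learning rate and neighborhood radius decay linearly toward zero
    over n_iters passes through the data. Training mutates and returns
    the codebook, so calling it again on the same codebook is a warm
    restart rather than training from scratch.
    """
    positions = np.arange(codebook.shape[0])
    for t in range(n_iters):
        frac = 1.0 - t / n_iters                 # linear decay factor
        lr = lr0 * frac
        radius = max(radius0 * frac, 0.5)        # keep a minimal neighborhood
        for x in data:
            # best-matching unit: node whose vector is closest to the sample
            bmu = np.argmin(np.linalg.norm(codebook - x, axis=1))
            # Gaussian neighborhood around the BMU on the lattice
            h = np.exp(-((positions - bmu) ** 2) / (2 * radius ** 2))
            codebook += lr * h[:, None] * (x - codebook)
    return codebook

def quantization_error(codebook, data):
    """Mean distance from each sample to its nearest codebook vector."""
    return float(np.mean([np.min(np.linalg.norm(codebook - x, axis=1))
                          for x in data]))
```

With this, the experiment is just: train with a large `n_iters` once, then retrain on mostly unchanged data with a much smaller `n_iters` and compare `quantization_error` before and after. If the error barely moves, the second SOM can get away with far fewer iterations.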
Once I get an idea of how that will work, I should integrate the motivated camera and dual-SOM stuff into the current DM system.