Posted: April 25, 2012 at 2:58 pm
Here is an image of a run where I disabled the merging code. So this is just an accumulation of patches until they fill up memory (once the program allocated 7.8GB):
This image contains 198,100 patches from 734 frames. The mean processing time is 3.9 seconds per frame, with a max of 5.3 seconds (excluding the first frame, when the CUDA code gets compiled for the GPU, which took 7.365 seconds). The new (good) segmentation method based on floodFill() is a bit slow, but 4s per frame is good enough for now.
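The core idea behind flood-fill segmentation can be sketched without the actual pipeline. This is a minimal, pure-Python illustration, not the system's implementation (which uses floodFill() on real frames): each unlabelled pixel seeds a breadth-first fill that claims all 4-connected neighbours within a tolerance of the seed value, so every connected run of similar pixels becomes one patch. The function name, tolerance parameter, and toy image are assumptions for illustration.

```python
from collections import deque

def flood_fill_segment(image, tolerance=0):
    """Label connected patches of similar pixels via flood fill.

    image: 2D list of intensity values. Returns a label map in which
    pixels belonging to the same patch share a positive label.
    """
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]   # 0 = not yet visited
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx]:
                continue                   # already part of a patch
            next_label += 1
            seed = image[sy][sx]
            labels[sy][sx] = next_label
            queue = deque([(sy, sx)])
            while queue:                   # breadth-first fill from the seed
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and not labels[ny][nx]
                            and abs(image[ny][nx] - seed) <= tolerance):
                        labels[ny][nx] = next_label
                        queue.append((ny, nx))
    return labels

# Toy 3x3 image with three regions of similar intensity.
img = [
    [10, 10, 50],
    [10, 50, 50],
    [90, 90, 50],
]
print(flood_fill_segment(img, tolerance=5))
# → [[1, 1, 2], [1, 2, 2], [3, 3, 2]]
```

The per-frame cost mentioned above is plausible with this kind of approach: every pixel is visited at least once per fill, so a high-resolution frame with many small patches does a lot of queue work even when each individual fill is cheap.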
Merging Patches into Percepts
Posted: April 25, 2012 at 10:39 am
After spending far too much time on early perception I really need to move on. Above is a somewhat aesthetically interesting reconstruction from patches that could not be merged. I think the current very rough, sketchy state of the system is ok for now, but merging remains problematic. (more…)
This documentary film about FLOSS and art includes an interview with me.
“This documentary addresses the use of free software in contemporary art (2008). To obtain a condensed overview, interviews were conducted and footage was gathered at the Make Art festival in Poitiers, France, and the Piksel festival in Bergen, Norway. These festivals are held every year and are known for bringing together artists from around the world who use free software as a creative platform. Shown here are the 2008 editions of these festivals.
Produced in 2008 and 2009 by the Taller de Audio of the Centro Multimedia at the Centro Nacional de las Artes, México, thanks to the support of PADID.”
Feature Extraction (between frames for 5 subsequent labelled patches)
Posted: April 13, 2012 at 3:48 pm
Since I’ve been flying by the seat of my pants, I thought I should collect some real data on the feature extraction process before continuing. The only way I could think of to examine the distribution of features for a particular patch of pixels was to segment and hand-label patches. I’ve done so for a small set of 5 consecutive frames, and the results follow. (more…)
On The Conservation of Technological Culture
Posted: April 11, 2012 at 11:46 am
It is true that digital representation is very abstract and easily lost, but the same goes for any other ‘written’ representation. All meaning is context dependent.
It’s very hard to make anything last. Maybe making things last is not the point. Maybe we should be looking at a living (rather than archived) culture, one that is passed down like an oral tradition, where meaning lives in the enacted traditions, language, and social relations that encompass culture.
Ones and zeros are abstract, but the code that humans write to interact with computers is no different from any other text. The solution to the problem of loss over time is to treat technological development as a cultural enterprise. Imagine if the family computer were maintained by the family: passed down and reworked from parents to children, generation after generation. It is fixed, rethought, and rebuilt, and always reflects the culture.
That which is fixed cannot reflect a culture that is always changing. All it takes is will and maintenance to make any technological product last; we just need the knowledge and the means of production to do it.