The (non)user and The Consumer Appliance

Posted: December 27, 2013 at 7:04 pm

The “end user” is the person a particular technology is targeted toward. If the technology is a phone, then it’s the person using the phone to make phone calls. Traditionally, a user would buy some tool for some purpose and use it. These days our tools have become nearly universal. We have computers that run software, where each piece of software could be considered a separate tool. We used to buy computers; now we buy phones, TVs, routers, household appliances, etc. These are all computers, all general purpose, just wrapped in different packaging with different physical use scenarios and software.

With internet distribution and “software ecologies,” the relationship between tool-makers and tool-users has become quite complex. Some software is provided gratis, where the tool-maker compensates for their labour by creating an audience for advertisers. These audiences are valuable because they come with very rich contexts: specific use cases for this particular software, but increasingly also social networks and trails of cookies indicating numerous products they may be interested in. This system is most pronounced in the large corporate social networks, which exist to mine your social life for information that can be exploited by marketers. One must ask: who is the “user” in this context? If the marketer pays Google or Facebook for your information and for access to you, are social networks tools used by marketers? What does that make those of us who contribute our social structures to these networks? Clearly we, or at least our information, are products. Products bought and sold like any other, to be exploited by those with the means to pay. To describe this relation I introduce the term (non)user. (more…)


Virtuosity and Stability in Technology

Posted: December 27, 2013 at 9:03 am

I was trained in fine arts with liberal studies in philosophy. While I’ve always (as long as I can remember) been negotiating with media and technology, I was never formally trained (indoctrinated?) by a hard objectivist discipline (like physics). It has always been clear to me (from critical theory and feminism) that the world we know is a world constructed by both reality and our own desires and expectations.

I certainly don’t identify as a transhumanist (of any type), because I think the major feature of humanity is our ability to abstract away from details to execute complex actions, where much of the detail (what we are actually doing) becomes unconscious and implicit. The fusion of technology, culture and life pushed by transhumanists has actually always been the status quo. We have no choice but to internalize and abstract when we learn to use tools; our thoughts have been shaped by tools for as long as we have made them. The ability to use tools and abstract is not unique to humans; I think non-human animals can do the same, just not to the same extent. This internalization, and our ability to use the abstraction in place of the details, makes us susceptible to perceiving our expectations over sensorial reality. What we expect, and the abstractions (concepts) themselves, are a function of our culture. We see the world through our tools: the medium is the message.

Our culture is then central to how we construct and perceive the world (and ourselves). All technology does is reflect our cultural values, and often the values of a small culture of innovators. In order to develop our technology, we need to consciously look first at our culture and values. The primary site for developing, criticizing and integrating technology should be culturally aware. If technology is a reflection of culture, then there is no strict divide. (more…)


150,000 Frame Test – Segmentation and Clustering

Posted: December 23, 2013 at 3:38 pm

After some tweaks to the segmentation and clustering code, it seems we have something that won’t turn to mud after a few days. Use the image from the previous post as a reference when examining the following images:

[Images: percepts-lastFrame, percepts-black]

(more…)


100,000 frame test – Segmentation and Clustering

Posted: December 16, 2013 at 5:50 pm

This is the longest test in some time in which the percepts are actually dumped to disk so we can take a look at them. Callgrind indicated that my inline weighting (just using the * operator on cv::Mats) was consuming 30% of the whole program’s CPU time; switching to the addWeighted() function, along with other optimizations, brought the ~7s per frame down to ~3s, making longer tests more feasible on this machine. The bad news is that the trend toward more ephemeral clusters seems to continue, and after 100,000 frames all percepts are unreadable mud:

[Image: output]
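
For context, here is a minimal sketch of the kind of change described above. It is not the project’s actual code (blendPercept and alpha are illustrative names); it just shows how an element-wise weighted blend of two cv::Mats can be replaced with a single addWeighted() call, which avoids allocating temporaries for the scaled intermediates.

```cpp
// Minimal sketch: replace per-element weighting (Mat * scalar + Mat * scalar)
// with a single fused cv::addWeighted() call.
#include <opencv2/core.hpp>

// blendPercept and alpha are illustrative names, not part of the DM code.
cv::Mat blendPercept(const cv::Mat& percept, const cv::Mat& stimulus, double alpha)
{
    cv::Mat result;
    // Equivalent to: result = percept * (1.0 - alpha) + stimulus * alpha,
    // but computed in one pass without temporary Mats.
    cv::addWeighted(percept, 1.0 - alpha, stimulus, alpha, 0.0, result);
    return result;
}
```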

The idea for the fix is to switch from CIELuv to HSV and to threshold the masks so that the mean colour is only calculated over the small area corresponding to the most recently clustered mask. Currently the raw mask is used and interpreted as binary, so it’s likely that most of the image is selected by the mask, increasing the muddiness.
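
A minimal sketch of that fix idea, assuming an HSV image and an 8-bit accumulated mask; maskedMeanColour and thresh are hypothetical names, not the actual DM code:

```cpp
// Minimal sketch: threshold the accumulated mask so only the strongest
// (most recently clustered) region contributes to the mean colour.
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// maskedMeanColour and thresh are hypothetical; the real code may differ.
cv::Scalar maskedMeanColour(const cv::Mat& imageHSV, const cv::Mat& rawMask, double thresh)
{
    cv::Mat binaryMask;
    // Keep only pixels whose mask value exceeds thresh, rather than treating
    // every non-zero mask pixel as part of the cluster.
    cv::threshold(rawMask, binaryMask, thresh, 255, cv::THRESH_BINARY);
    // Mean colour computed only over the thresholded region.
    return cv::mean(imageHSV, binaryMask);
}
```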


30,000 frame test – Segmentation and Clustering

Posted: December 9, 2013 at 6:50 pm

Over the weekend I ran a 30,000 frame test, thus far the longest test running the predictor and the integrated segmentation and clustering system. The temporal instability has led to many percepts ending up extremely ephemeral. Following is an image that shows all percepts after 30,000 frames rendered on top of one another on a white background:

[Image: 30000_percepts_test] (more…)


Updates to Segmentation and Clustering

Posted: December 6, 2013 at 7:27 pm

After implementing the predictor in the main DM program, I had the chance to run it and then dump the percepts to give some form to the ML-generated output previously posted. The results were quite weak. It appears that there is simply too much information to be encapsulated by the small number (~2000) of percepts in the system. The first issue was the way percepts had a tendency to crawl in from the edges due to the centring of clusters. I resolved this by treating percepts on edges differently: they merge while remaining anchored to the edges. Additionally, each percept’s averaging of its constituent regions was weighted to emphasize the most recent stimulus (something like 75-85% weighting of the current stimulus; see the sketch below). This made percepts appear much more stable (over time) than they actually were. In short, a very unstable cluster was represented by a highly specific image. The idea was that the presentation of the percepts would constitute recognition, while in perception the display would show a reconstruction of the sensory data from the clusters alone. The result was very little correlation between the reconstruction and the sensory information:

[Image: shot0001]
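
For clarity, a minimal sketch of the weighting described above (updatePercept and alpha are illustrative names, not the DM code), showing why a heavy weighting of the current stimulus hides the instability of the underlying cluster:

```cpp
// Minimal sketch of a running average that heavily weights the current stimulus.
#include <opencv2/core.hpp>

// updatePercept and alpha are illustrative names, not the actual DM code.
void updatePercept(cv::Mat& percept, const cv::Mat& stimulus, double alpha = 0.8)
{
    // percept = (1 - alpha) * percept + alpha * stimulus.
    // With alpha around 0.75-0.85, a frame seen n updates ago contributes only
    // a factor of roughly (1 - alpha)^n, so the stored image mirrors the last
    // few frames and masks how unstable the underlying cluster really is.
    cv::addWeighted(percept, 1.0 - alpha, stimulus, alpha, 0.0, percept);
}
```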

(more…)