A few people have emailed me to ask what I think about this, the method used to generate the image above. So I thought I would post the synthesis of my thoughts here. I was asked because my Ph.D. was about the mechanisms of dreaming. So what does this project have to do with dreaming? Well it turns out, not much at all.
The following was originally posted on Google+ on Dec 18th, 2014.
I experienced the Oculus Rift at Convergence in Banff a few weeks ago and meant to post my reflections. First, a little background: I’ve been doing computer graphics since I was a preteen (DigiPaint in ’86, Imagine 3D in ’86, and multimedia with AmigaVision around ’90; I also tinkered with SGIs running Alias in the early 90s), and I was lucky enough to go to SIGGRAPH in 1995, where I used a VR system of the time. It was so heavy that it was mounted on an armature you moved around with your hands.
I was looking at the upcoming Prix Ars application process and realized that there was not really a place for my (generative) work. I looked over at the Interactive Art section to see how broadly it was defined, and found this:
“Jurors are looking forward to encountering innovative technological concepts blended with superbly effective design (usability).”
The “end user” is the person a particular technology utility is targeted toward. If the technology is a phone, then it’s the person using the phone to make phone calls. Traditionally, a user would buy some tool for some purpose and use it. These days our tools have become nearly universal. We have computers that run software where each piece of software could be considered a separate tool. We used to buy computers, now we buy phones, TVs, routers, household appliances, etc. These are all computers, all general purpose, just wrapped in different packaging with different physical use scenarios and software.
With internet distribution and “software ecologies” the relation between the tool-makers and the tool-users has become quite complex. Some software is provided gratis, where the tool-maker compensates for their labour by creating an audience for advertisers. These audiences are valuable because they come with very complex contexts: specific use cases for this particular software, but increasingly also social networks and trails of cookies indicating numerous products they may be interested in. This system is most pronounced in the large corporate social networks, which exist to mine your social life for information that can be exploited by marketers. One must ask: who is the “user” in this context? If the marketer pays Google or Facebook for your information and access to you, are social networks tools used by marketers? What does that make those of us who contribute our social structures to these networks? Clearly we, or at least our information, are products. Products bought and sold like any other, to be exploited by those with the means to pay. To describe this relation I introduce the term (non)user.
I was trained in fine arts with liberal studies in philosophy. While I’ve always (as long as I can remember) been negotiating with media and technology I was never formally trained (indoctrinated?) by a hard objectivist discipline (like physics). It has always been clear to me (from critical theory and feminism) that the world we know is a world constructed by both reality, and our own desires and expectations.
I certainly don’t identify myself as a transhumanist (of any type), because I think that the major feature of humanity is our ability to abstract away from details to execute complex actions where much detail (what we are actually doing) becomes unconscious and implicit. The fusion of technology, culture and life pushed by transhumanists has actually always been the status quo. We have no choice but to internalize and abstract when we learn to use tools; our thoughts have been shaped by tools for as long as we have made them. The ability to use tools and abstract is not unique to humans; I think non-human animals can do the same, just not to the same extent. This internalization and our ability to use the abstraction in place of the details makes us susceptible to perceiving our expectations over sensorial reality. What we expect, and the abstractions (concepts) themselves, are a function of our culture. We see the world through our tools: the medium is the message.
Our culture is then central to how we construct and perceive the world (and ourselves). All that technology does is reflect our cultural values, and often a small culture of innovators. In order to develop our technology, we need to consciously look first at our culture and values. The primary site of developing/criticizing/integrating technology should be culturally aware. If technology is a reflection of culture, then there is no strict divide.
It seems that one of the foundational ideas of the singularity is that the machines we build will exceed our intentions and act in ways beyond our will and control. Now, I do see how machines act in ways we do not expect, and in the case of a machine with a lot of power over aspects important to human life, that can lead to problems. This happens all the time due to bugs and human error and even happens in the most non-intelligent machines. The centre of the problem seems to be the power we give over to automated systems.
So what causes a non-human animal to empathize? I’ve been thinking a lot about how humans and non-human animals differ. I’ve come to the conclusion that there are two major differences:
- I believe humans have an unparalleled ability to abstract, that is, to build hierarchies of mental representations where details are thrown away to encapsulate larger concepts that can be broadly applied. This is what allows us to convince ourselves of untruths, to mistake our expectations for reality, to exploit others by defining away their suffering, and to imagine and build technologies that extend our cognition.
- Many of us are not in a day-to-day struggle for survival. I presume that much of the morality, empathy and free will that we consider crucial to our human identity would melt away under constant threats to survival. Consider cannibalism and infanticide amongst chimps, who are genetically closer to us than they are to any other apes.
This seal in the video certainly has little empathy for the penguins, so why the empathy for the photographer? Perhaps all animals have lines that define “us” and “them” where we choose to empathize or to exploit. I further expect that these lines arise from biological survival: If you are below me on the food chain, then you are “them”, if you are equal on the chain then you are “us”.
Clearly this is a little more complex in humans, but I expect only because of our ability to abstract. We create a concept (say, race or gender) and then use that to move where our line of “us” and “them” is.
I think language begins with the ability to perceive two sensory patterns that are different (because they are always different) as the same. I think of this as a clustering process, the carving of chunks from a continuous space. It’s a choice of saying these are the same, when they aren’t. It’s an imposition of a compartmentalized structure (concept) on continuous chaos (reality). This could be seen as a filtering process.
These (prelinguistic) clusters are the atoms of perception. We build the perceptual world from these chunks of assumed sameness. These chunks are also the material from which we learn predictions of the world; we learn constraints and periodicity by predicting the occurrence of these clusters.
Linguistic symbols are built up, layer by layer, from these clusters, forming meta-clusters that hold abstract notions such as “object”. Additionally, notions such as causality, justice, good and evil all result from abstractions of the predictions (via a similar clustering process) combined with emotional valence. I think this is pretty close to Barsalou.
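The clustering idea above can be sketched as a toy program, assuming nothing more than a nearest-prototype rule; the prototype centres and the sensory stream are invented for illustration and make no claim to model real perception:

```python
# Toy sketch: "perception as clustering" — continuous signals are mapped
# to discrete symbols by nearest prototype (assumed sameness).

def assign_symbol(value, prototypes):
    """Return the label of the nearest prototype centre."""
    return min(prototypes, key=lambda label: abs(value - prototypes[label]))

# Hand-picked prototype centres for three "percept" clusters (invented).
prototypes = {"low": 0.1, "mid": 0.5, "high": 0.9}

# A continuous, never-exactly-repeating sensory stream...
stream = [0.12, 0.48, 0.91, 0.09, 0.52]

# ...is carved into chunks of assumed sameness.
symbols = [assign_symbol(v, prototypes) for v in stream]
print(symbols)  # ['low', 'mid', 'high', 'low', 'mid']
```

The discrete labels stand in for prelinguistic clusters: inputs that are in fact different (0.12 vs 0.09) are treated as the same, which is exactly the imposition of structure on continuity described above.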
I have said that my interest in art, and my approach to art making, is not in terms of the traditional role of artistic “expression”, but rather that art is used as a methodology to explore “expression”, or more specifically to examine the notion of meaning itself. I have talked about being more interested in “doing” than “representing” and in “exploring” over “expressing”. This results from a dissatisfaction, early in my B.F.A., with contemporary art where I could not glean the relation between the title and text accompanying a work, and the form of the work itself. I found this very frustrating and often considered it a failure of the work. Part of my interest in using computational mechanisms, and scientific knowledge, to build artworks is to formalize and make rigorous the relation between the concept (what the work is supposed to be) and the form (what the work is).
It is true that digital representation is very abstract and easily lost, but the same goes for any other ‘written’ representation. All meaning is context dependent.
It’s very hard to make anything last. Maybe making things last is not the point. Maybe we should be looking at a living (rather than archived) culture, one that is passed down like an oral tradition and the meaning is in the enacted traditions, language and social relations that encompass culture.
Ones and zeros are abstract, but the code that humans write to interact with computers is no different than any other text. The solution to the problem of loss over time is to treat technological development as a cultural enterprise. Imagine if the family computer was maintained by the family. It’s passed down and reworked from parents to children generation after generation. It’s fixed, rethought, and rebuilt, and always reflects the culture.
That which is fixed cannot reflect a culture that is always changing. All it takes is will and maintenance to make any technological product last, we all just need the knowledge and means of production to do it.
In a panel discussion regarding copyright at ISEA Istanbul (2011) it occurred to me that beyond the corporate and monetary aspects of copyright is the notion of attribution, the acknowledgement that someone else has contributed to a work. This is highly relevant to my previous post on ownership.
The idea was simply that it would be interesting for each cultural artifact (sound, video, image, etc.) to have a list of contributors. When those items are remixed and recontextualized, the resulting construction would concatenate all the contributors from each component. The result would be a growing history of all those who contributed to a work. One could even imagine that each contribution could be weighted by degree, perhaps tied to the difference between the “original” (previous incarnation) and the remixed permutation. There could even be a section for “inspiration” where indirect attribution could be made.
Such a system would be extremely interesting in the context of the analysis and visualization of cultural artifacts, as such lists of attribution over time have much potential to illuminate how ideas and forms propagate through culture.
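A minimal sketch of such an attribution scheme, with invented names and an invented numeric “degree” weighting standing in for the original-vs-remix difference measure:

```python
# Each artifact carries its full contributor history; a remix
# concatenates the histories of its components. Names and degree
# values are hypothetical placeholders.

from dataclasses import dataclass, field

@dataclass
class Artifact:
    title: str
    contributors: list = field(default_factory=list)  # (name, degree) pairs

def remix(title, components, new_contributors):
    """Combine artifacts; the result inherits every prior contributor."""
    merged = []
    for c in components:
        merged.extend(c.contributors)
    merged.extend(new_contributors)
    return Artifact(title, merged)

sample = Artifact("drum loop", [("A", 1.0)])
photo = Artifact("street photo", [("B", 1.0)])
collage = remix("collage", [sample, photo], [("C", 0.5)])
print(collage.contributors)  # [('A', 1.0), ('B', 1.0), ('C', 0.5)]
```

A remix of a remix would keep growing this list, producing exactly the kind of propagation history that the visualization idea below depends on.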
There seem to be two main uses of the term sustainability: environmental and financial. Since I am a touch of an idealist I think of sustainability in a broad scope. For something to be sustainable it has to be sustainable forever (or at least for the foreseeable future).
Environmental sustainability should be reserved for practices that stop future destruction of the environment, including biodiversity, even if every person on earth follows that practice. According to this definition, the use of oil for propulsion is not sustainable, nor is the consumption of animals for food. In short, the lifestyle of the average person in the “developed” world is not sustainable. This lifestyle has become the benchmark by which other nations measure their success. How can we expect them to lower their impact on the environment when we have had the benefit of 100+ years of unsustainable practices? Why should the emerging world not experience the kind of golden age that the first world has? Perhaps our golden age is coming to an end.
I overheard some designers on the bus last night talking about having to create 70 designs for the same layout and how much of a pain it was. It reminded me of an idea I had back when I worked on a commercial design project, which was to have image files structured like OOP classes. Layers in each image could be class members, and therefore a new image derived from that image class would inherit the same layers. New layers could be added, but changes in the parent’s layers would automatically be reflected in child images. This could also apply to objects in vector files.
But really, I think there just needs to be more computational and generative design tooling for these problems, rather than hand-constructing each of 70 designs from the same layout…
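The inheritance idea above could be sketched like this, with stand-in strings where a real design tool would store pixel or vector data; the class and layer names are invented:

```python
# Sketch of "image files as OOP classes": a derived image inherits its
# parent's layers, so edits to a parent layer propagate to children.

class ImageDoc:
    def __init__(self, parent=None):
        self.parent = parent
        self.own_layers = {}  # layers defined (or overridden) locally

    def layers(self):
        """Resolve layers: parent's first, then local additions/overrides."""
        resolved = dict(self.parent.layers()) if self.parent else {}
        resolved.update(self.own_layers)
        return resolved

base = ImageDoc()
base.own_layers["background"] = "blue gradient"

variant = ImageDoc(parent=base)
variant.own_layers["headline"] = "Sale!"

# A change to the parent layer shows up in the child automatically.
base.own_layers["background"] = "red gradient"
print(variant.layers())
# {'background': 'red gradient', 'headline': 'Sale!'}
```

With this structure, the 70 variants would each be a thin subclass of one layout, and a fix to the shared layout would propagate to all of them.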
The following was inspired by an interview on CBC Radio’s Spark with Marjorie Perloff.
When you write poetry you’re using words you did not invent (though you could) to convey some idea. Likely you also use phrases and sentence structures you did not invent. Further, you may be using allegory and referring to stories and ideas you also did not invent. So where is the line between repetition and contribution? IP and copyright clearly make the point that a particular arrangement of form may be unique and attributable to one person (or corporation).
If the universe started in a big bang, and the initial singularity had infinite density, then why are there clumps (planets, galaxies, stars, etc.) in the universe? Don’t clumps require a nonuniform distribution? Are infinite density and nonuniform distribution mutually exclusive?
Models always throw away information, and can only ever be approximations:
“Bonini’s paradox: models or simulations that explain the workings of complex systems are seemingly impossible to construct: As a model of a complex system becomes more complete, it becomes less understandable; for it to be more understandable it must be less complete and therefore less accurate. When the model becomes accurate, it is just as difficult to understand as the real-world processes it represents.”
The more we build models, the more we realize what those models don’t explain. As knowledge increases, that which is not known also increases:
“Zeno’s paradoxes: “You will never reach point B from point A as you must always get half-way there, and half of the half, and half of that half, and so on…” (This is also a paradox of the infinite)”
(sourced from Wikipedia)
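The halving in Zeno's paradox is just a geometric series, which can be checked numerically; the partial sums approach 1 but never reach it at any finite step:

```python
# Partial sums of 1/2 + 1/4 + 1/8 + ... : at every finite step the
# remaining gap to 1 equals the last term taken.

def partial_sum(steps):
    return sum(0.5 ** n for n in range(1, steps + 1))

for steps in (1, 4, 16):
    print(steps, partial_sum(steps))

# The gap to 1 after n steps is exactly (1/2)^n (dyadic fractions are
# exact in binary floating point, so this holds with == here):
assert 1 - partial_sum(16) == 0.5 ** 16
```

The "paradox" dissolves once one accepts that infinitely many steps can sum to a finite quantity, which is exactly what the series demonstrates.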
The value of science is not in the truth value of its propositions, but in the way scientific knowledge reflects how we conceptualize ourselves and our world.
I was thinking a few months ago, after seeing a lecture on urban planning, that it would be interesting for a scientist who studies creativity and an urban planner to collaborate: literally grouping activities and people in such a way as to encourage creative production.
Seems to me “three dimensions” is simply a mathematical notion, defined as three directions separated by 90 degree angles. The 90 degrees means that these dimensions are independent and that one can vary without affecting the others. This strikes me as a cultural construction, not an insight into reality. Why? Because there are many parallel “directions” in which the properties of objects can vary without affecting the others. What about colour or orientation, for example?
I suppose the physicist would say that orientation reduces to 3-dimensional movement if the object is considered a group of related objects moving through space, and the same for colour as the bouncing of light rays off of objects. So the solution seems to be to break objects into smaller and smaller units (sound familiar?) where these smaller and smaller units have nothing but positions in space and time. Regarding 9-dimensional space, it seems obvious that 9 is the number of dimensions required to satisfy the mathematical models that explain the phenomena in question. So how many dimensions are there really? Well, it depends on the level of abstraction in the model/description.
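The "90 degrees means independence" claim can be made concrete: orthogonal directions have zero dot product, so motion along one axis leaves the coordinates along the others unchanged. A minimal check, with arbitrary example vectors:

```python
# Orthogonality as independence: the standard basis vectors are
# mutually perpendicular (zero dot product), so moving along one
# changes only that one coordinate.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x, y, z = (1, 0, 0), (0, 1, 0), (0, 0, 1)
assert dot(x, y) == dot(x, z) == dot(y, z) == 0

point = (2, 5, 7)
moved = tuple(p + 3 * d for p, d in zip(point, x))  # move 3 units along x
print(moved)  # (5, 5, 7) — y and z untouched
```

In the same spirit, colour could be treated as a fourth "axis" orthogonal to position: changing it need not change where the object is, which is the parallel the paragraph above is pointing at.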
My reply to Marius’s response:
(A) I hope my discussion of a continuum between conceptual and formal was not missed. I do think there is a continuity of practice, and that it’s perhaps more a question of emphasis on form or content than anything. Most work would fall somewhere between the extreme poles.
I’ve been quite interested in notions of free will. Rather, I’m more interested in the degree to which choices are made as a result of external factors or internal factors, the latter of which are reducible to external factors in a deterministic, and materialist, world.
There seem to be only two options:
1. All actions result from external forces. The moment of the big bang determined every “choice” we will ever make.
2. Actions are the result of “random” and unpredictable interactions.
#2 does not make sense in a deterministic world anyhow, since it too would simply be the result of initial conditions.
It seems the only escape is to reject determinism; that gives us randomness, but what about free will? Maybe we must reject materialism also. There is also this old paper I wrote musing about signal, noise and consciousness: http://www.ekran.org/ben/wp/2007/untitled-iterations-vagueterrain-2006/
There is much discussion of the creative power of making your own tools, but what are tools made from but other tools? What does this mean for the apparent increase of freedom with increasing depth of knowledge? For example, the lower-level the language, the more busywork the program needs to do, like managing memory. If you go right down to machine code, the whole program appears to do little but manage memory, occluding the high-level concept of what the program means and does. What is the appropriate level of abstraction (depth) of description for a particular project? Are some concepts ideally represented in machine code? Are other concepts clearest in OOP, or data-flow languages?
I was just thinking about a conversation I had with Matthew Forsythe during Interactive Screen 1.0, in Banff. I had been thinking for a while about the cost of the things we use; I don’t just mean material costs, but also ecological and geological costs. The problem of calculating the cost of a particular product is quite difficult because of the arbitrary horizon (scope) of the calculation.
For example, let’s take a pencil: the obvious costs include the wood and graphite, and the cost of producing those components could be included. What is the cost of mining the graphite? What about the logging of the wood? What about the people paid to mine and log? What about the equipment needed? Not only that, what about the cost of all the machines used in those processes? What about the machines used to make those machines? We could go even further: what is the cost (in time) of the growth of that wood? What is the geological cost of the production of graphite?
Calculating the “true” cost of anything becomes a problem akin to measuring the length of a coastline. The closer you move the horizon of measurement to “reality” the longer the coastline, and the greater the cost. Perhaps the cost of everything (in infinite quantity) is infinite. The whole history of the universe is required for anything/everything to exist. That is expensive.
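The horizon problem can be made concrete with a toy model: suppose every item has some direct cost and depends on a fixed number of further items, and we choose how many levels deep to count. The branching factor and costs here are invented numbers, not a real accounting:

```python
# Toy model of the "horizon of measurement": pushing the horizon one
# level deeper adds the costs of the inputs' inputs, and so on.

def total_cost(direct_cost, inputs_per_item, depth):
    """Cost counting all inputs down to a chosen horizon (depth)."""
    if depth == 0:
        return direct_cost
    # Each item depends on `inputs_per_item` further items, each of
    # which we cost out one level less deeply.
    return direct_cost + inputs_per_item * total_cost(
        direct_cost, inputs_per_item, depth - 1
    )

# A "pencil" with direct cost 1 and three inputs (say wood, graphite,
# labour) at every level:
for horizon in (0, 1, 2, 3):
    print(horizon, total_cost(1, 3, horizon))
# 0→1, 1→4, 2→13, 3→40: the total grows without bound as the horizon
# recedes, like a coastline growing under ever-finer rulers.
```

Whenever each item has more than one input, the total diverges as the horizon recedes, which is the coastline-like conclusion above: the "true" cost depends entirely on where you stop measuring.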