Abstraction and The Singularity

Posted: November 25, 2013 at 10:33 am

It seems that one of the foundational ideas of the singularity is that the machines we build will exceed our intentions and act in ways beyond our will and control. Now, I do see how machines act in ways we do not expect, and in the case of a machine with a lot of power over aspects important to human life, that can lead to problems. This happens all the time due to bugs and human error and even happens in the most non-intelligent machines. The centre of the problem seems to be the power we give over to automated systems.

Let's use the example of a genetic algorithm (GA). I use this example because (at least in terms of design) many systems that are human-competitive (or super-human) are enabled by GAs. In a GA we tell the system what we want (the fitness criteria), and then it searches a space to find something that suits those criteria. In human-competitive systems, the GA finds things humans have never thought of.
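To make that concrete, here is a minimal sketch of a GA in Python (my own toy illustration, not drawn from any particular system): the fitness function is where human intention enters, and everything else is a blind search over bit strings.

```python
import random

# A minimal genetic algorithm: the human supplies the fitness
# criterion; the machine searches the space of candidate designs.

GENOME_LENGTH = 20
POP_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.02

def fitness(genome):
    # The human-specified intention: here, "as many 1s as possible".
    return sum(genome)

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    # Splice two parents at a random point.
    point = random.randrange(1, GENOME_LENGTH)
    return a[:point] + b[point:]

def evolve():
    population = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 2]          # keep the fittest half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(best, fitness(best))
```

Everything this system "wants" is contained in the one fitness function we wrote; the search may surprise us, but the criterion of success never does.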

So let's say we want a GA to be more autonomous because we're not happy with specifying the fitness criteria ourselves. One option is to build a second GA. This GA does not search a space for suitable designs, but searches a space for suitable fitness criteria, whose results are then fed into the existing GA (or a set thereof), leading to new designs. In this case, we don't tell the meta system what to design; we leave the system to design whatever suits the higher-level GA's criteria. In Computational Creativity, we often use the label “meta-creation” (creation through creative machines) for this.
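Here is a toy sketch of that two-level arrangement, again in Python and again my own illustration: the inner GA evolves designs against whatever criterion it is handed, and the outer GA evolves the criteria themselves. Note where our intention reappears: in meta_score, a hypothetical criterion for criteria.

```python
import random

# A toy "meta-GA": the outer loop searches the space of fitness
# criteria (here, a weight per genome position); each candidate
# criterion is handed to an inner GA that evolves designs against it.
# Our intention has not disappeared; it has moved up a level, into
# meta_score, the criterion for criteria.

GENOME_LENGTH = 20

def inner_ga(weights, pop_size=30, generations=50, mutation_rate=0.05):
    """Evolve a bit-string design against a given fitness criterion."""
    def fitness(genome):
        return sum(w * g for w, g in zip(weights, genome))
    population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.choice(parents), random.choice(parents)
            point = random.randrange(1, GENOME_LENGTH)
            child = a[:point] + b[point:]
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

def meta_score(design):
    # The higher-level intention, still specified by a human:
    # e.g. prefer designs that alternate 0s and 1s.
    return sum(1 for i in range(1, GENOME_LENGTH) if design[i] != design[i - 1])

def outer_ga(outer_pop=10, outer_generations=15):
    """Evolve fitness criteria (weight vectors) for the inner GA."""
    criteria = [[random.uniform(-1, 1) for _ in range(GENOME_LENGTH)]
                for _ in range(outer_pop)]
    for _ in range(outer_generations):
        scored = sorted(criteria,
                        key=lambda w: meta_score(inner_ga(w)),
                        reverse=True)
        parents = scored[:outer_pop // 2]
        children = [[w + random.gauss(0, 0.1) for w in random.choice(parents)]
                    for _ in range(outer_pop - len(parents))]
        criteria = parents + children
    return max(criteria, key=lambda w: meta_score(inner_ga(w)))

if __name__ == "__main__":
    best_criterion = outer_ga()
    print(meta_score(inner_ga(best_criterion)))
```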

In essence, we have not removed our intentional power over the system; we have simply changed the level of description at which that intention is articulated. Rather than specifying criteria for the generation of designs, we specify criteria for the generation of criteria for the generation of designs. In this context, we can think of abstraction as a shift in the level of description such that a few choices at a higher level (a single criterion for criteria) lead to many choices manifest at a lower level (multiple criteria). So let's follow this through: we keep building up abstraction by adding layer after layer of GAs. With each added layer, the whole system's autonomy appears to increase.

Now the singularity proponents may say that this process becomes a runaway train; abstractions become increasingly abstract until the point that they are entirely autonomous. It's this last bit that I don't follow. In no way does increasing abstraction, or shifting the level of description in our instructions to machines, causally disconnect them from us. They are never really autonomous, because they are constructed by us. That is not to say that they can't surprise us, and increasing abstraction would likely lead to more surprises. Some of these surprises could really change the world; they pop out of the implicit spaces we define but could never hope to exhaustively explore. A GA is powerful because of its brute force: it looks through an entire space for a solution, including places a human never thought to look. That does not make GAs intentional or autonomous, but it certainly does make them valuable and powerful. The spaces in which they search are still defined by humans, and if they are not, then the parameters that define them are (at some level of abstraction).

In fact, how we write software often follows a similar pattern of abstraction. We write some code in C at a particular level of description. Each statement in C is compiled into many instructions in machine language. We could write in assembler instead, and this could result in a more efficient program that requires fewer machine-language instructions. We do not attribute autonomy to the machine code. Its behaviour is known and defined in C, even when the programmer did not directly specify which machine instructions should be executed. If we did not have control of the lower-level instructions (at some level of abstraction, and often at multiple levels at once), many of our technologies would not function at all.
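The same point can be made in an interpreted language. As a rough analogue (keeping these sketches in Python rather than C), the standard-library dis module shows the lower-level bytecode instructions that a single line of Python expands into: instructions we never wrote ourselves, but whose behaviour is fully determined by what we did write.

```python
import dis

# One line of high-level code expands into several lower-level
# instructions we never specified individually, yet whose behaviour
# is entirely determined by the source we wrote.

def add(a, b):
    return a + b

dis.dis(add)
# Typical output (details vary by Python version):
#   LOAD_FAST   a
#   LOAD_FAST   b
#   BINARY_ADD  (BINARY_OP on newer versions)
#   RETURN_VALUE
```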

I think it’s folly for us to think that increasing complexity leads to increasing autonomy, but I do grant that increasing complexity certainly leads to the illusion of increasing autonomy. For a machine to exhibit true autonomy, it would have to be created outside of the context of human intention, which is a paradox because machines are built by us.

Of course there are systems we (or at least the materialists among us) think of as “machines” that are autonomous from the outset: life. In order to survive, biological life must have its own self-oriented autonomy. So what happens when we exert our intention on life through bioengineering? Is it still life, or a machine? Is it real or artificial? I don't know the answers to these questions, but I do posit that perhaps the very exertion of our will on an organism interferes with the autonomy that makes it alive.

Now, if we, as a society, choose to give over more and more important tasks to automated systems, then those systems have the potential to enrich or destroy our lives. It is not that these systems have a will, a “want”, or an intention; they are just blindly following the low-level instructions we impose upon them. If a technological system goes awry and the result is the death of all humans, this is not evolution and it's not optimization; it's simply human error, propagated through a complex system, leading to unintended consequences. There is no logical reason why we would not be able to prevent such errors from causing real (global) damage; we do it every day with the numerous lethal systems that currently exist.

Let's take the example of what is likely the most dangerous technology almost all of us interact with every day: the car. As a society we have decided that the convenience and speed that cars offer are worth the danger they pose. Cars kill people every day, over a million people a year, according to the Wall Street Journal. For some groups of the population, cars are the leading cause of death. Cars kill largely because of human error: we simply misjudge how they will behave in particular circumstances by not paying attention, or by not changing our behaviour for the conditions.

Machines are already killing us. It's not about autonomy, it's not about a robot uprising, it's not about intelligence; it's simply about the collective choices we make to offload labour and effort onto machines. Technologies reflect the cultural values of all those who participate in their development, from the engineers who design them, to those who build and maintain them, all the way to the end-users who choose to buy or use them. Nearly all of us are complicit in these developments, and if we don't like the directions they are heading, then it's us who have the control to change that.