Sunday, July 13, 2014

Superintelligence


In The God Delusion, Richard Dawkins argues that the cosmological and teleological arguments for God's existence generate an infinite regress: a divine designer must be more complex than whatever he designs. Therefore, appealing to God to explain the world only pushes the demand for an ultimate explanation back a step; indeed, it compounds the perceived problem.

There are two different ways of responding to Dawkins:

i) We can respond to him on Christian grounds. We can point out that he hasn't mastered the position he presumes to refute. God isn't complex in the same sense that physical, composite objects or organisms are complex. God is not an organized system of parts. God is not a big thing made of smaller things, the way an engine can be disassembled into its components. To the extent that God is complex, he's complex in a way analogous to a complex abstract object like the Mandelbrot set. That's infinitely complex, but it's not the kind of thing you can take apart. It's not composed of smaller parts which are, in turn, composed of even smaller parts, viz., a carburetor is part of a car engine while a gasket is part of a carburetor. But in classical theism, God has no spatiotemporal parts.

ii) But we can also respond to Dawkins on his own terms. His infinite regress argument reminds me of von Neumann's seminal lectures on self-reproducing automata. To begin with, it's very shortsighted of Dawkins to deploy an argument which, on the face of it, disproves macroevolution. As von Neumann notes:

Anybody who looks at living organisms knows perfectly well that they can produce other organisms like themselves. This is their normal function, they wouldn't exist if they didn't do this…Living organisms are very complicated aggregations of elementary parts…that they should occur in the world at all is a miracle of the first magnitude.
Furthermore, it's equally evident that what goes on is actually one degree better than self-reproduction, for organisms appear to have gotten more elaborate in the course of time. Today's organisms are phylogenetically descended from others which are vastly simpler than they are, so much simpler, in fact, that it's inconceivable how any kind of description of the later, complex organism could have existed in the earlier one…Evidently, these organisms have the ability to produce something more complicated than themselves. 

So an ironic consequence of Dawkins' argument is that, if sound, the infinite regress would disprove macroevolution. And that poses quite a dilemma for Dawkins inasmuch as macroevolution is his alternative to divine creation!

Perhaps Dawkins would say the infinite regress argument only applies to designed products rather than undesigned products. If so, it's hard to see the logic of that restriction. If the designer must be more complex than what he designs, then mustn't the producer (e.g. evolutionary process) be more complex than the product? 

iii) There is, however, another problem with Dawkins' argument, from a secular scientific perspective. For his argument, if sound, would render AI research futile. As von Neumann goes on to say:

The other line of argument, which leads to the opposite conclusion, arises from looking at artificial automata. Everyone knows that a machine tool is more complicated than the elements which can be made with it…So, one gets a very strong impression that complication, or productive potentiality in an organization, is degenerative, that an organization which synthesizes something is necessarily more complicated, of a higher order, than the organization it synthesizes. This conclusion, arrived at by considering artificial automata, is clearly opposite to our earlier conclusion, arrived at by considering living organisms.

Von Neumann is discussing the prima facie dilemma of producing self-reproducing machines. Must a machine be more complex than what it produces? If so, then you can't make a self-replicating machine.
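
Computer science supplies a concrete counterexample to the worry that a constructor must always exceed its construct in complexity: the quine, a program whose output is an exact copy of its own source code. Here is a minimal Python sketch of the idea (a standard textbook construction, not anything from von Neumann's lecture; only the two non-comment lines are the quine itself):

```python
# A minimal Python quine: running the two lines below prints those two
# lines verbatim. The "constructor" is exactly as complex as its "offspring".
s = 's = %r\nprint(s %% s)'
print(s % s)
```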

Yet AI is even more ambitious. According to AI proponents, not only can computer designers construct self-replicating computers, but they can even construct computers which become more complex than the computer designer himself! Indeed, von Neumann, in the same lecture, gestures at this program:

…the production of a more complicated object from a less complicated object is possible…There is a minimum number of parts below which complication is degenerative, in the sense that if one automaton makes another the second is less complex than the first, but above which it is possible for an automaton to construct other automata of equal or higher complexity…There is thus this completely decisive property of complexity, that there exists a critical size below which the process of synthesis is degenerative, but above which the phenomenon of synthesis, if properly arranged, can become explosive, where syntheses of automata can proceed in such a manner that each automaton will produce other automata which are more complex and of higher potentialities than itself.
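
Von Neumann doesn't specify the critical size here, but the logic of his threshold claim can be caricatured numerically. In the toy sketch below, the threshold value and the multipliers are invented purely for illustration; this is a cartoon of the claim, not his actual cellular-automaton construction:

```python
import random

CRITICAL = 10.0  # hypothetical critical complexity; not von Neumann's figure

def offspring(c):
    """Toy reproduction rule: degenerative below the threshold,
    potentially 'explosive' above it."""
    if c < CRITICAL:
        return c * random.uniform(0.5, 0.95)  # copies lose complexity
    return c * random.uniform(1.0, 1.2)       # copies match or gain complexity

def lineage(c, generations=8):
    line = [c]
    for _ in range(generations):
        line.append(offspring(line[-1]))
    return [round(x, 1) for x in line]

print(lineage(5.0))   # starts below the threshold: complexity dwindles away
print(lineage(20.0))  # starts above it: complexity can grow every generation
```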

Indeed, the AI community is concerned with the prospect that, if the project succeeds, AI machines will turn on their inventors and destroy the human race:

So, while the idea of the generally intelligent agent continues to play an important unifying role for the discipline(s) of artificial intelligence, it also leads fairly naturally to the possibility of a super-intelligence. The usual concept is that if we humans could create artificial general intelligent ability at a roughly human level, then this creation could, in turn, create yet higher intelligence, which could, in turn, create yet higher intelligence, and so on ... “We can tentatively define a superintelligence as any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.” (Bostrom, 2014 ch. 2). So we might generate a growth well beyond human ability and perhaps even an accelerating rate of growth, leading to an ‘intelligence explosion’. Two main questions about this development are when to expect it, if at all (see Bostrom, 2006; Hubert L. Dreyfus, 2012; Kurzweil, 2005) and what the impact of it would likely be, in particular which risks it might entail up to a level of existential risk for humanity (see Bostrom, 2013; Müller, 2014a). As Hawking et al. say “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.” (Hawking, Russell, Tegmark, & Wilczek, 2014; cf. Price, 2013). 
These results should be taken with some grains of salt, but we think it is fair to say that the results show an agreement among experts that AI systems will probably reach overall human ability around 2040-50, certainly (with 90% probability) by 2075. From reaching human ability, it will move on to superintelligence in 2 years (10%) to 30 years (75%) thereafter. The experts say the probability is 31% that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
So, the experts think that superintelligence is likely to come in a few decades and quite possibly bad for humanity – this should be reason enough to do research into the possible impact of superintelligence before it is too late. We could also put this more modestly and still come to an alarming conclusion: We know of no compelling reason to say that progress in AI will grind to a halt (though deep new insights might be needed) and we know of no compelling reason that superintelligent systems will be good for humanity. So, we should better investigate the future of superintelligence and the risk it poses for humanity. 
http://www.sophia.de/pdf/2014_PT-AI_polls.pdf
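
The quoted "intelligence explosion" is, at bottom, a recurrence: each generation designs a successor more capable than itself, and (on the accelerating-growth assumption) does so in less time. Here is a toy sketch, with the gain and speedup factors chosen arbitrarily for illustration:

```python
# Toy "intelligence explosion" recurrence (all numbers are illustrative).
def explosion(capability=1.0, gain=1.5, build_time=1.0, speedup=1.5, steps=10):
    t = 0.0
    for n in range(steps):
        print(f"gen {n}: capability={capability:.2f} at t={t:.2f}")
        t += build_time
        capability *= gain       # each generation builds a smarter successor...
        build_time /= speedup    # ...and builds it faster than it was built

# With speedup > 1 the build times form a convergent geometric series, so
# unbounded capability arrives within a bounded span of time: an "explosion".
explosion()
```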

So AI turns Dawkins' regress argument on its head. A designer needn't be more complex than what is designed. To the contrary, what is designed may be (or become) more complex than the designer! 

My point is not to endorse AI. And, of course, I don't think creatures are as complex as their Creator, much less more complex. I'm just analyzing Dawkins' argument on its own terms. Even secularists don't share his intuition.

3 comments:

  1. Dawkins should be required to watch the entire Battlestar Galactica series, both the original and the newer versions.

    1. Moral of the story: never endow toasters with artificial intelligence!

    2. A little known but very important factoid...
