2013-06-10

Exaptationist vs adaptationist Singularity, and engineerability

In terms of total biological/social/technological complexity, human society may still be closer to the first single-celled lifeforms that colonized the Earth 4 billion years ago than to a technological singularity. But in terms of the logarithm of complexity, we're nearly there -- we already have most of the orders of magnitude of progress we'll need. And because positive feedback loops continue to provide exponential or faster growth, that's the measure that matters.
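To see why the choice of scale matters, here's a minimal Python sketch. The complexity figures are entirely hypothetical, chosen only to illustrate the linear-versus-logarithmic framing, not measurements of anything:

```python
import math

# All numbers are hypothetical illustrations: assume complexity 1 for the
# first cell, 1e12 for present human society, and 1e18 at the Singularity
# (arbitrary units). Only the ratio between the two scales matters.
start, now, singularity = 1.0, 1e12, 1e18

linear = (now - start) / (singularity - start)
logarithmic = math.log10(now / start) / math.log10(singularity / start)

print(f"linear progress:      {linear:.6%}")      # ~0.0001% -- barely started
print(f"logarithmic progress: {logarithmic:.0%}") # 67% -- most orders of magnitude done
```

On a linear scale we've barely left the starting line; on a log scale -- the natural scale for exponential growth -- we're two-thirds of the way there.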

The Singularitarian's burden

The other day, it struck me how impressive it is that so much progress has been made without anyone or anything having to concern itself with what its own or any other species was evolving into, or how quickly.

If we'd come that far without a targeted effort, how much faster could we progress toward the Singularity if we organized for that explicit purpose? Or if we weren't in a hurry, could we reduce the risk of an unfriendly Singularity, i.e. get some "Terminator insurance"? The standard assumption among Singularitarians seems to be that switching to "engineered evolution" is desirable if not essential, now that we understand the nature and dynamics of the positive-feedback loops involved.

But the first attempt to engineer humanity's evolution in a post-Darwinian framework was eugenics, which was clearly premature given Victorian-era science and has done far more harm than good. Attempts to engineer technology's evolution (MIRI, the Lifeboat Foundation, etc.) haven't caused a Holocaust yet, but they have led to a scientist receiving two death threats. At best they've been a distraction, and they haven't produced any innovations that could feed the feedback loops (such as an algorithm a programmer could actually implement, compile, run, and get useful output from). And if it's possible to engineer the Singularity, why did Ray Kurzweil get out of a successful engineering career (while allegedly in great health) when he discovered the concept?



If, like previous generations, I didn't have to concern myself with how my own actions affected the timing or friendliness of the Singularity, it would greatly simplify things like career planning, given that my vocation seems to be computing R&D. It would be a psychological free lunch. It would probably help me overcome my workaholism too: I wouldn't feel guilty that taking more leisure time than health alone could justify might delay the Singularity. Singularitarianism would still get me out of bed in the morning, but it would no longer keep me up late at night, and the change would be almost immediate.

Lots of people say "nobody can predict the future" and point to historical examples of mispredictions by experts. But those experts had never heard of the Law of Accelerating Returns. Besides, the idea that we won't get better at predicting the future of science and technology is itself a prediction about the science and technology of prediction, so unless there's a formalization of it that I'm unaware of, it's self-refuting. Waving away something as important as the Singularity on such a flimsy basis strikes me as irresponsible.

I think I may have found an answer, and one that doesn't call the Singularity itself or the prospect of its friendliness into question.

"Irreducible", reusable, and recyclable

In biology, exaptation is when a feature evolves to serve one purpose, later turns out to serve another, and then gets refined for its new role. If conventional adaptation is Mother Nature "buying" a new function, then exaptation is her redeeming some of the reward points from all that shopping. When creationists call an organ or system "irreducibly complex", they're pointing out that it wouldn't have pulled its weight in its current function while being assembled incrementally -- so there's a good chance they're talking about a product of exaptation.

There's some evidence that human reasoning and creativity are exaptations, having evolved for argumentation and mating display respectively. I'm aware of evolutionary psychology's controversial status as a science. (I guess it's the same problem facing macroeconomics -- everyone's a subject and everything's an experiment, but by the same token there's no control planet.)

Gene splicing is another example of exaptation -- the spliced gene performs a function similar or related to the one it served in the source organism, but in a completely different ecological niche and under artificial rather than natural selection. (Someday, useful new genes will probably be created out of whole cloth, but I'm not holding my breath.)

Technological evolution also involves exaptation (paywalled source), from microwave ovens (whose microwave source, the cavity magnetron, was originally developed to improve radar resolution) to fiber-optic manufacturing (which depended on Corning's existing glass-making processes). An under-publicized example is my own field of research, GPGPU, in which chips originally designed for game graphics are used as massively parallel co-processors for supercomputing.
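The GPGPU case can be sketched in a few lines: a "kernel" applied independently to every element of an array is the pattern GPUs were built for, and it doesn't care whether the elements are pixels. This toy Python version (not real GPU code -- the kernel functions and numbers are made up for illustration; real GPGPU work would use CUDA or OpenCL) shows the same machinery doing both the original job and the exapted one:

```python
def run_kernel(kernel, data):
    """Apply one kernel independently to every element -- the
    embarrassingly parallel pattern GPUs are built around."""
    return [kernel(x) for x in data]

# Graphics use (the chip's original job): brighten each pixel by ~20%,
# clamped to the 8-bit maximum.
brighten = lambda p: min(255, p * 12 // 10)

# General-purpose use (the exapted job): a physics time step --
# velocity after 10 ms of gravity.
step = lambda v: v + 9.8 * 0.01

print(run_kernel(brighten, [100, 200, 250]))                # [120, 240, 255]
print([round(v, 3) for v in run_kernel(step, [0.0, 5.0])])  # [0.098, 5.098]
```

The point is that `run_kernel` is indifferent to what the kernel computes -- which is exactly why hardware built to run shaders could be repurposed for supercomputing.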

Trying too hard?

From the above examples, I think it's safe to say that both biological and technological exaptation have been strong catalysts, if not outright prerequisites, for the progress made toward the Singularity so far. We and our ancestors have brought ourselves closer to the Singularity for almost every purpose except the purpose of bringing ourselves closer to the Singularity.

The Singularity almost certainly can be reached from here by exaptation, and will be if we wait long enough and it hasn't already been triggered by design. In The Singularity Is Near, Ray Kurzweil predicts that mind uploading will be done with souped-up medical brain scanners, with most or all of the souping-up intended for mundane medical purposes; that the VR environment will be based on entertainment technology already familiar to video gamers; and that the supercomputing power required will also have plenty of mundane uses. I've also outlined an SF short story in which an AI takeoff is reached by exaptation, when a malware worm assimilates enough machine-learning algorithms from the Enterprise Resource Planning system that hosts it. (Assimilating algorithms from the host system's application software would be a major innovation in malware, I'm sure, but I don't think it would require already being a human-level AI.)

The only question that remains is whether passive exaptationism is the only path to a Singularity, or merely the best one in terms of safety and efficiency. My working hypothesis is that it will be possible and desirable to design and implement the Singularity explicitly, but only once exaptation has brought it much closer (likely to the point that the project leadership will include some near-human-level AIs) -- which is unlikely before 2025 and could plausibly take until 2060. Until then, engineering efforts to trigger, hasten or steer the Singularity will be, at best, less productive than the development of exaptable technologies.

I hope to be able to solidify this into something at least approaching a scientific theory, or find some help in giving it an "Occam's shave" if I've overthought it.

(Disclaimer: I didn't take biology past grade 10, and may be getting some things totally wrong. I'm writing this post mainly for friends, family and my own future reference, rather than for experts. I doubt I'll be able to turn it into hard science until I at least finish reading The Selfish Gene.)