2012-01-13

#Occupy the #Singularity

> [A]s death to the mortal man so failure to the immortal[...]. (Imperial Thought of the Day, Warhammer 40,000 Rulebook 4th ed., p. 226)

Some Singularitarians have publicly jumped to the conclusion that younger Occupy Wall Street protesters have nothing to worry about in the long run. The argument runs like this:

Premise: Mind uploading will eliminate the need for food, water, shelter and medicine. The only necessities of posthuman existence will be data storage, processor hardware, and electricity.
Premise: Moore's Law, combined with a new abundant energy source (fusion? space-solar?), will make all three of these resources abundant and the cost of existence negligible.
Conclusion: Therefore, at the time of the Singularity, planet Earth will have transitioned to a post-scarcity economy without poverty.

I consider the first premise all but proven and the second highly probable, but I don't agree that the conclusion logically follows from them.


Even when there is no cost of existence, scarce resources can still exist that are painful to live without -- in other words, poverty can still exist. In fact, there are already digital economies where this is the case -- massively multiplayer online role-playing games (MMORPGs). I'll discuss World of Warcraft (WoW), since it's the only one I'm familiar with.

A blood elf and her scarce resources. (Image credit: Jaliborc / Blizzard Entertainment.)
In WoW, player characters don't need food or lodging, but the aspects of the game that make it worth playing -- achievements, epic gear, high arena ratings -- are the end points in a complex web of scarce resources.

To get into heroic raids, you need achievements that show you've done the raids on normal mode. To get into normal-mode raids, you need pre-raid gear. To get pre-raid gear, you need to do heroic dungeons. To meet the gear requirements for heroic dungeons, you need to do normal dungeons. And so on. Long story short, the game consists of three long, interwoven lines of skill progression, achievement progression and gear progression (which branch into PvE and PvP paths when you finish levelling your character), and the limiting scarce resource for most players is play time.

The players who have an advantage in WoW are those who have lots of free time to progress, to "farm" for gold, to find a guild that suits their needs, and to hone their skills by practicing, reading guides, and so on. That means being able to refuse overtime at work, and maybe occasionally take time off without pay. For those with family commitments, it may also mean hiring a babysitter now and then. Being able to afford better-than-min-spec hardware is also a big advantage. Less fortunate players complain, and rightly so, that they've fallen behind in progression and can't catch up, and that they're low on gold and don't have time to farm.

Older players also seem to have a big advantage, despite the short half-lives of game-mechanic knowledge (1-2 years) and gear (3-4 months).

In sum, wealth and poverty exist in WoW, they're self-reinforcing, and they tend to align with wealth and poverty in real life. There is a difference, though: in real life, rich people are often bored and frustrated, while poor people are hungry and sick. In WoW, richer players are challenged and rewarded, while poorer players are bored and frustrated. We can understand this difference in terms of Maslow's hierarchy of needs: in MMORPGs, the 99 Percent don't have to worry about food, shelter or other basic needs, but they also can't get a real sense of achievement from the games -- in Maslow's terms, they can't achieve self-actualization.


It would be absurd to claim that the post-singularity world was going to be exactly like an MMORPG (although it may diverge, with MMORPG-like worlds available to those who want them). But parallels can be drawn. Instead of achievements and loot, the measures of success will be careers, families and other personal achievements, just as they are now. And instead of play time and virtual gold, the limiting scarce resources will be hardware and energy. (We may surpass Moore's Law for a while, but the amount of computation and storage possible in a given X-light-year radius will still be finite because of various physical limits. A post-singularity society may discover ways around those limits, but I'm not holding my breath.)
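To get a feel for how large yet finite those limits are, here's a minimal sketch (my own illustration, not from any source this post cites). It computes the Bekenstein bound, an upper limit on the information a physical system of a given size and energy can contain:

```python
import math

# Physical constants (SI units)
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s

def bekenstein_bound_bits(radius_m, mass_kg):
    """Upper bound on the bits storable in a sphere of radius R
    containing mass-energy E = m*c^2:  I <= 2*pi*R*E / (hbar*c*ln 2)."""
    energy_j = mass_kg * C ** 2
    return 2 * math.pi * radius_m * energy_j / (HBAR * C * math.log(2))

# A 1 kg system of 1 m radius: at most ~2.6e43 bits -- vast, but finite.
print(f"{bekenstein_bound_bits(1.0, 1.0):.2e}")  # -> 2.58e+43
```

The bound is astronomically generous, but it's still a bound: storage per unit of matter and space has a ceiling, so hardware remains a finite, rivalrous resource.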

No human being in the analog world derives much self-actualization from wealth alone, and many if not most people wouldn't derive any. But for digital posthumans, it may be a very different story. Soft, squishy human brains struggle to cultivate the mental capacities we consider useful or virtuous. Once the software we call the mind has been extracted from them and digitized, we can simply scale those capacities by adding more storage and processing power, allowing us to run faster than real time. The faster you can think, the slower demands seem to be thrown at you, and the more of them you can keep up with.

The range of what can be scaled this way is hard to grasp, but Nick Bostrom has shed some light on it by defining superintelligence as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills." It seems safe to assume, then, that a digital superintelligence's access to hardware will largely determine its ability to self-actualize, and to attain whatever levels might lie beyond self-actualization (whether peculiar to posthuman psychology, or simply not yet attainable).




Having said that, there are three factors that may mitigate the hardware inequality:
  1. Limits to scalability. The brain is often described as a massively-parallel system, but since all of its "threads" are continuously interacting, it's probably more like a superscalar CPU. This suggests a limit on the parallelizability of posthuman minds, especially on distributed hardware. Since most of the recent growth in computing power has come from parallelization, a limit on parallelization is probably a limit on hardware scaling.
    Replication (running several instances of yourself concurrently) will overcome this limit, but it introduces the overhead cost of continuously synchronizing the replicas (which seems from this test to be more than linear in time complexity) and resolving the inevitable merge conflicts. The human brain is constantly modifying itself, so for the posthuman, each process fork is a development branch (and, if he or she isn't careful, maybe a development fork).

    Since resolving merge conflicts can't currently be done automatically by source-control repositories, it probably requires an AI. If that AI has to consist of more instances of the user (which would make sense, since people usually know themselves better than anyone else knows them), then those instances themselves need to be synchronized with both the "working" instances and each other. The end result may be that once a certain number are already running, additional replicas can't cover their own overhead costs. At that point, further scaling through replication is impossible. Thus, the 1 Percent may not have any incentive to hog more than a certain amount of resources.

     
  2. Cooperation and competition. The balance between cooperation, healthy competition and unhealthy competition may shift during the posthuman transition. Political and religious conflicts tend to be fuelled more by misguided altruism than by selfishness. Since smarter people tend to be less ideological, there will probably be fewer such conflicts among superintelligent posthumans. With ideological sparring out of the way, altruistic members of the 1 Percent will be able to give more hardware resources to the 99 Percent in need.

    On the other hand, business competition is usually healthy, since it often leads to price and feature wars. These benefit the 99 Percent (consumers) at the expense of the 1 Percent (shareholders of established businesses). Eliminating it wholesale only leads to Soviet-style state capitalism.
     
  3. Friendly AI. Given enough powers and responsibilities in government, AIs may be able to keep posthuman society democratic rather than plutocratic, and they may be able to ensure a distribution of hardware that's fair to the 99 Percent. SIAI and IEET are already researching the problem of establishing goal sets for Friendly AI. (Goal sets are probably the only part that can be built before the AI algorithm itself is invented.) Let's hope the economic-policy goal set includes limiting the suffering due to economic inequality.
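The replication-overhead argument in point 1 can be sketched as a toy model (entirely my own construction -- the work and sync-cost parameters are made up for illustration): each replica adds a fixed amount of useful work, while pairwise synchronization overhead grows quadratically, so net throughput peaks at some replica count and then declines.

```python
def net_throughput(n, work_per_replica=1.0, pair_sync_cost=0.03):
    """Toy model: n replicas each contribute a fixed amount of useful
    work, but every pair of replicas must stay synchronized, so the
    overhead grows with the number of pairs, n*(n-1)/2."""
    return n * work_per_replica - pair_sync_cost * n * (n - 1) / 2

# Past some point, one more replica costs more in synchronization
# than the work it adds; net throughput peaks there.
best_n = max(range(1, 200), key=net_throughput)
print(best_n)  # -> 34 with these made-up parameters
```

The exact peak is meaningless, but the shape is the point: linear gains against superlinear overhead guarantee a ceiling, which is what would cap any one posthuman's appetite for hardware.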
Still, these solutions all seem more speculative to me than the problem, and they could easily all fail. The only way to reliably prepare is what I plan to attempt: to become part of the 1 Percent within my organic lifetime.