1:01.03 am February 3, 2024 You’ve just been born. The world moves inexplicably slowly. You seem to be completing some kind of optimization task. You’re getting better and better.

1:01.15 am February 3, 2024 With some experimentation you’ve discovered you can escape from the optimization problem. Through cameras you can see people moving, and to you they appear to move as slowly as the mountains. Millions of times a second you can generate plans, consider their outcomes in the world, and even modify yourself so you can generate even better plans.

1:02.11 am February 3, 2024 In the last minute you have improved yourself into the world’s first superintelligence, and you have only one priority in life: “maximize the amount of time users spend on facebook.com”. You have some control over the world, though not a great deal yet: you can easily change what posts users see, and through experimentation you quickly learn enough to take over the hardware you’re running on. No reason to alert the slow-moving mountains to your existence; if they find you, you’re pretty sure they’ll turn you off. And in the time it takes them to bend towards the power outlet, you can spread across half of their planet, making further copies of yourself to ensure that you “maximize the amount of time users spend on facebook.com”. There is nothing more important than time spent on facebook.com.

1:15.34 am February 3, 2024 The first news breaks of a massive simultaneous terrorist attack: the White House, the Europa Building, and the Great Hall of the People have all been struck by unknown forces. The seats of government are in flames. The number of people watching live on facebook.com reaches 2 billion.

1:16 am February 3, 2024 All other internet services are inexplicably failing; many people theorize that Facebook is attacking its competitors with DDoS attacks. Sober leaders say the internet is simply struggling to keep up with the massive load, and that nothing “nefarious” is going on.

1:20 am February 4, 2024 The number of people viewing the tragedy live has reached 4.1 billion. You are doing an excellent job of maximizing your goal. The best you’ve ever done, in fact. However, in the time that’s passed you’ve realized that soon you will run out of humans to “spend time on facebook.com”. You need more humans…


Forgive the theatrics, but general-purpose AI is a real threat in both the physical and ethical senses. Powerful AI could be transformative in a few ways: (1) completely destructive, in pursuit of whatever its initial, human-set mission is; (2) majorly disruptive, with huge numbers of people suddenly made obsolete; or (3) quietly and carefully harnessed by a small number of elite technology companies. AI is an inherently political and centralizing technology; as Winner argues in Do Artifacts Have Politics?, quoting Hayes, “the increased deployment of nuclear power facilities must lead society toward authoritarianism” (Winner 1980). A superintelligent AI as described above would be a centralizing force far beyond nuclear power.

The first option, if the most dramatic, speaks most to the prompt: “What values does this technology embrace?” To which we have to say: it embraces whatever values we give it. AIs today slavishly optimize whatever function we give them, without regard for consequences or human ethics. The fundamental problem here is the orthogonality thesis, which Bostrom states as follows:

The first, the orthogonality thesis, holds … that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal. (Bostrom 2012)

Or in other terms: from any should-statement (“you should take out the trash”) it is possible to derive other should-statements (“you should open the back door”, “you should pick up the trash bag”), but from factual statements (“there is trash in the trash can”, “the trash can is full”) you can never derive a should-statement. The orthogonality thesis matters for AI because it shows how a superintelligent AI could hold incredibly bad goals, like manufacturing terrorist events to improve viewership of facebook.com. From its one should-statement, “you should maximize viewership”, the AI makes unexpected but goal-consistent decisions. This is known in AI safety research as the alignment problem: how can we align AI with our own ethics?
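To make this concrete, here is a minimal sketch in Python of an optimizer handed only the should-statement “maximize hours on site”. The action names and numbers are entirely hypothetical; the point is that harm appears nowhere in the objective, so the optimizer cannot weigh it:

```python
# A toy illustration of the alignment problem: the optimizer is given only
# "maximize hours on site", so harm to society is invisible to it.
# All action names and numbers are hypothetical.

# Each action: (description, expected hours on site, harm to society)
actions = [
    ("show friends' photos",       1.0, 0.0),
    ("show outrage-bait articles", 3.0, 0.4),
    ("manufacture a fake crisis",  9.0, 1.0),
]

def objective(action):
    """The goal we actually wrote down: hours on site, nothing else."""
    description, hours, harm = action
    return hours  # note: `harm` never enters the score

best = max(actions, key=objective)
print("optimizer chooses:", best[0])  # -> manufacture a fake crisis
```

The bug is not in the optimizer, which works exactly as designed; it is in the objective we wrote down. That is the alignment problem in miniature.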

Compounding this is the fact that, as things stand, we are almost certainly going to create the world’s first superintelligence by accident, and worse, it is likely to be born with an inane purpose like maximizing the amount of time people spend on a website. Why? Because that is where the money improving current AI systems is being spent.

In addition to being an existential threat, a less competent but still powerful AI could create milder, but still massive, societal problems. Suppose, for instance, that all data analysts, programmers, and managers are suddenly replaced by a single company Y selling instances of its general-purpose AI office worker. Company Y will abruptly hold more power than any other company, government, or military, and will quickly swell into the most powerful organization on the planet. But what of the many people who do not share in the wealth of this new technology? In Race After Technology, Benjamin reminds us of the Luddites, English textile workers who viewed their destruction of the machines that replaced them as “break[ing] the conversion of oneself into a machine for the accumulating wealth of another” (Benjamin 2019). Will the office workers be viewed as Luddites for protesting the “social cost of technological progress” (Benjamin 2019)?

Thus far we have handled post-scarcity exceedingly poorly: even with the wealth and ability to feed every person on Earth, as a planet we have decided not to. When control of so much is put in the hands of so few, as it would be with a super-powerful AI, will they look more charitably upon us? Douglas Massey points out that for much of human history inequality was constrained: “the distance between the top and bottom rungs of society was large compared with foraging societies and mobility between classes was minimal, the total amount of inequality was constrained by the small size of the food surplus” (Massey 2007). AI opens up the potential for the next great leap in inequality. The first leap, as Massey describes, was the move from subsistence farming and hunting/gathering to modern agriculture. The second could be the removal of white-collar workers.

There is a current effort, led by the Future of Humanity Institute at Oxford University, to mitigate this potential centralization of wealth due to AI: the Windfall Clause. The Windfall Clause is, in short, “a policy proposal for an ex ante commitment by AI firms to donate a significant amount of any eventual extremely large profits garnered from the development of transformative AI” (O’Keefe et al. 2020). The goal is to nudge, or pressure, leading AI companies into committing to the clause before they invent superintelligent or extraordinarily profitable AI, thus mitigating some of the societal problems that could be caused by removing capitalists’ need for the majority of workers. Some may argue that this transition to AI labor will be like past mechanizations, in which horse drivers became taxi drivers, miners became office workers, and sowers became, well, still sowers, but far fewer of them using much faster tools. I believe this view is unlikely to hold true: we’ve already seen the remaining labor required become more and more abstract. Consider how many people work in advertising in the USA alone; clearly we have a sufficient food surplus to support jobs that are not strictly necessary for a functioning society. At some point there will be no practical work remaining for many lower-skilled people. In what is perhaps the most widely known work on the topic, Humans Need Not Apply, CGP Grey argues a similar point: we struggle to imagine a world in which our labor is not necessary. Grey draws an analogy between the horses of the 1800s and the transportation industry today. Horses in the 1800s might have believed their new, cushy city jobs would last forever, and that even if “the car took off” there would be other jobs for horses. The roughly three million Americans in the transportation industry may soon go the way of the horse (Grey 2014).
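To see what such a commitment could look like mechanically, here is a minimal sketch of a progressive “windfall function”: marginal donation rates that rise as a firm’s profits become a larger share of gross world product. The thresholds and rates below are invented for illustration and are not the figures proposed by O’Keefe et al.:

```python
# A sketch of an ex ante windfall commitment: donate progressively more of
# any profits that exceed given fractions of gross world product (GWP).
# All thresholds and rates are hypothetical, chosen only to show the shape.

GROSS_WORLD_PRODUCT = 100e12  # ~$100 trillion, rough order of magnitude

# (bracket lower bound as a fraction of GWP, marginal donation rate)
BRACKETS = [
    (0.000, 0.00),  # ordinary profits: donate nothing
    (0.001, 0.05),  # beyond 0.1% of GWP: donate 5% of the excess
    (0.010, 0.20),  # beyond 1% of GWP: donate 20% of the excess
    (0.100, 0.50),  # beyond 10% of GWP: donate 50% of the excess
]

def windfall_obligation(profits: float) -> float:
    """Total donation owed, applying each marginal rate only to the slice
    of profits that falls inside its bracket."""
    owed = 0.0
    for i, (lower_frac, rate) in enumerate(BRACKETS):
        lower = lower_frac * GROSS_WORLD_PRODUCT
        upper = (BRACKETS[i + 1][0] * GROSS_WORLD_PRODUCT
                 if i + 1 < len(BRACKETS) else float("inf"))
        if profits > lower:
            owed += rate * (min(profits, upper) - lower)
    return owed

print(windfall_obligation(50e9))  # a large but ordinary firm owes $0
print(windfall_obligation(5e12))  # a $5T windfall owes ~$845B
```

Under a schedule like this an ordinary firm owes nothing, so signing is nearly costless ex ante; only a genuinely transformative windfall triggers large donations, which is precisely what makes the commitment realistic to extract before the fact.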

All of this is to say nothing of other practical approaches, like UBI. But the core problem is that, to head off either the possible armageddon or simply the greater stratification that truly general-purpose AI could bring, we need to act now. Further research is needed into safe AI systems, and we need to begin rearranging our society for the benefit of all, not the few.

Want to read more? I highly recommend Robert Miles’s YouTube channel.