Sociotechnical Factors of Institutional Artificial Intelligence Development

A dissertation submitted in part-fulfilment of the regulations for the Degree of Master of Philosophy 2023 in the University of Cambridge

“Why software is eating the world”

Title of a 2011 blog post by Marc Andreessen

“Ideas cannot digest reality”

Jean-Paul Sartre

Abstract

In this dissertation I examine the development of AI technology by Silicon Valley corporate institutions. I lay out four possible sociotechnical narratives which might explain these developments: AI as an extension of the Valley’s culture and business model, AI as a utopian or dystopian fulfilment of the ideologies surrounding “AGI” or the “Singularity”, AI as an economic measure to replace human labour, and AI as an economic narrative that requires continuous re-investment and development of new capabilities to perpetuate itself. I then examine to what degree these narratives apply to Meta, Inc., a prominent Silicon Valley company with a heavy reliance on AI technology. Finally, I consider what these observations tell us about human culpability for AI development.

Introduction

In 2022, an opinion piece in Forbes breathlessly informed us that “Within the next decade, artificial general intelligence (AGI)—the ability of computer systems to understand, learn and respond as humans do—is expected to emerge.1” In recent years, billions of dollars have been invested into a wide portfolio of technologies labelled “AI”, with companies in Silicon Valley leading the charge. However, why and how is this level of AI “expected to emerge”? Is this technological development inevitable? I believe that the development of AI technology as we understand it today is not part of the “natural progression” of technological advancement. Instead, it is the result of a series of conscious choices made by humans under social and systemic incentives. To establish these incentives, I will look at the heartland of the modern AI industry in Silicon Valley, California. I will present four different approaches to conceiving of AI development: as the continuation of the Silicon Valley paradigm of “software eating the world”; as an achievement of the transhuman visions of “AGI”; as an economic measure to improve efficiency; and as a self-perpetuating economic narrative that demands continuous reinvestment. To provide a concise case study of what I mean, I will examine the AI developments and products offered by Meta, formerly Facebook. Finally, I will extrapolate from these findings to present a general conception of human culpability in AI development.

For the purposes of this dissertation, I define “AI technology” or “AI models” as a category of software programs whose behaviours are determined by inferences obtained from human-curated or machine-generated data, without explicit human guidance (i.e. through a process of “learning”). Most of the dissertation will also consider the group of Silicon Valley companies working to develop AI as a collective group of “Valley Institutions”. While my precise conception of the “Valley” will be expanded upon later, I adopt this framing because these companies are highly interconnected, share many key visions and goals, and act in accordance with a set of similar capitalist incentives. In the case study I will also examine not only the incentives behind AI development but also the aftermath and risks of this form of AI development.
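To make this definition concrete, consider a minimal sketch in Python; the function names, data, and numbers are invented for illustration and are not drawn from any cited system. It contrasts a program whose behaviour is written down explicitly by a human with one whose behaviour is inferred from labelled examples:

```python
# Explicit human guidance: a programmer hard-codes the rule.
def is_spam_by_rule(num_links: int) -> bool:
    return num_links > 3  # threshold chosen by a human

# "Learning": the threshold is inferred from labelled examples instead.
examples = [(0, False), (1, False), (2, False), (5, True), (7, True), (9, True)]

def fit_threshold(data):
    # Pick the integer threshold that misclassifies the fewest examples.
    candidates = range(0, max(n for n, _ in data) + 1)
    return min(candidates,
               key=lambda t: sum((n > t) != label for n, label in data))

learned_threshold = fit_threshold(examples)

def is_spam_by_model(num_links: int) -> bool:
    return num_links > learned_threshold

print(learned_threshold)    # 2 for this toy dataset
print(is_spam_by_model(6))  # True: behaviour determined by the data, not by a hand-written rule
```

The distinction drawn throughout this dissertation is the second case: the programmer specifies a procedure for extracting a rule from data, not the rule itself.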

Beyond the immediate consequences, however, a deeper question arises—why are these actors pursuing these technological developments? In her book Addiction by Design2, Natasha Schull references Bruno Latour’s concept of “inscription”. In her words, it is “a process […] whereby designers inscribe certain modes of use into the products that consumers will interact with”, the designers setting “scripts” which “inhibit or preclude certain actions while inviting or demanding others”.3 The “script” for Schull is simple: the exchange of money for the satisfaction of interacting with the gambling device, a mindless “machine zone” that renders humans little more than rats in a series of nested Skinner Boxes. What, then, is the “script” for AI? What are the end goals of pursuing these technological developments, and who or what is responsible?

Theoretical Overview

It is impossible to speak of recent developments in AI without paying attention to Silicon Valley, where many major AI corporations have their offices and where many AI technologies are developed. Theoretically conceived, Silicon Valley is both a physical location and a network of persons, ideas, capital flows, and research endeavours—Richard Barbrook and Andy Cameron’s map of the Valley combines “the disciplines of market economics and the freedoms of hippie artisanship”, but also notes that “the West Coast itself is a product of massive state intervention”.4 In positioning the Valley as a synthetic mix of place and idea, they also conjoin it to the market economy which powers California and the United States as a whole. This holistic approach, which treats the Valley as a connected ecosystem of ideas and actors, is also applied by researchers like Timnit Gebru when researching TESCREAL5, by the authors of the “How AI Fails Us” discussion paper,6 and by Shoshana Zuboff in The Age of Surveillance Capitalism.7

When I speak of the “Valley” or “Valley Institutions”, I include in this group Microsoft, OpenAI, Meta, Google, Anthropic, and other major AI developers, as well as venture capital firms like A16Z8 and Y Combinator. My approach deviates from approaches like Zuboff’s which, while immensely valuable as a record of the Valley’s tactics and aspirations, risks flattening the Valley into a technological conspiracy moving towards an end goal of behavioural control. I also deviate from the more ideological constructions of Gebru et al. by placing a renewed focus on economic influences. In general, I suppose that there is a multiplicity of incentives that drive the development of AI technology, and consider all corporate institutions that shape and are affected by these incentives to be “the Valley”. As such I include Microsoft—headquartered in Redmond but instrumental in the funding of OpenAI; I also include A16Z, which has no machine learning engineers of its own but is vital in shaping the ideological discourse that motivates some AI developers. I exclude research institutions like Stanford and MIT due to their distance from the Valley’s economic incentives, though there is a large overlap in personnel and ideology between them and the Valley Institutions.

Drawing on the framework laid down by Thomas Hughes in Networks of Power, I consider AI development as a process of adaptation and growth in the face of social resistance and social demand.9 Hughes outlines several approximate stages of technological development, from initial invention to transfer, growth, momentum, and financialisation. AI as a technology appears to be in the momentum or growth stage, with “a perceptible rate of growth and velocity” that exists alongside significant flaws like hallucination, which correspond to Hughes’ ideas of “reverse salients” or “critical problems”.10 I further interpret the present trend of AI development as a broad feedback loop between the human institutions that develop these systems, the external public that provides demand for and resistance to technological adoption, and the private or public entities that provide resources and funding for development. The institution drives the development of the product but also relies on external input (e.g. user data, funding agreements, or hearings in the US Senate) to shape it further. At the same time, this work considers as its subject both questions of institutional process and of technological “development” or “progress” more generally, which poses certain theoretical difficulties I will now examine.

Historical conceptions of technology and its development have leaned towards either technological determinist or social determinist perspectives. The degree to which these two extremes shape the course of social and technological progression (or, indeed, whether such a progression can rightly be charted at all) has been a perennial cause of debate. Theorists such as Heilbroner claim that progress comes from “a technical conquest of nature that follows one and only one grand avenue of advance”, going on to suggest that “a given technology imposes social and political characteristics on the society in which it is found” before walking back their claims somewhat.11 By contrast, in her article proposing a possible “technoculture”, Lelia Green cites Ursula Franklin and The Real World of Technology:12 “it is better to examine limited settings where one puts technology in context, because context is what matters most”.13 The implication here is that what is truly at stake is the social context of technology’s use, rather than the precise nature of the technology that has been used.

The discourse surrounding AI development throws both of these perspectives into confusion: while imagined general AI systems seem finally capable of actively “impos[ing] social and political characteristics”14 onto their users, as the technological determinists claim, they do so by adopting human-like qualities of agency and goal-setting. In turn, the social determinist must contend with a technology that demands recognition as a bona fide social participant, confusing the divide between agent and tool. Where medium theorists like Ronald Deibert have argued that changes in “modes of communication” can shape the broad “evolution and character” of society as a whole,15 now it is possible for technology to shape society at a level once reserved for humans alone, with a student perhaps generating an AI-written article refuting Deibert’s claims. The usual solution to this confusion is to reject the idea of AI systems as meaningfully separate from human intellectual labour: the discussion paper “How AI fails us”, published by Harvard’s Carr Center for Human Rights Policy et al., exemplifies this position.16 It states,

The dominant vision of artificial intelligence imagines a future of large-scale autonomous systems outperforming humans in an increasing range of fields. This “actually existing AI” vision misconstrues intelligence as autonomous rather than social and relational. It is both unproductive and dangerous, optimizing for artificial metrics of human replication rather than for systemic augmentation, and tending to concentrate power, resources, and decision-making in an engineering elite.

By casting intelligence as “social and relational”, the paper resists defining AI systems as intelligent independently of human interpretation. Other analyses, like Gray and Suri’s Ghost Work, highlight the human labour involved in curating training data for AI systems or correcting AI-driven moderation decisions17, emphasising the invisible human labour behind apparent AI automation. These approaches, while presenting many valid criticisms of AI development, do not address the present state of the AI industry effectively. While AI was historically confined to controlled environments like the game of Go18, recent development has led to the rise of large language models (LLMs), programs that have been perceived as comparable to humans in tests of general reasoning,19 legal contract review20, or even military decision making21. As these systems become more independently capable, critics of the industry cannot continue to claim that it remains reliant on Potemkin AI22 or extensive human intervention, lest they risk being branded as irrelevant. I further argue that this form of evasion around what qualifies as truly “intelligent” or “independent” AI constitutes a form of No True Scotsman fallacy, leading to claims like “The end state of [Artificial General Intelligence] is itself poorly defined and thus cannot be acknowledged as a meaningful object to ‘oppose’”.23 Meanwhile, by 2023 AI-powered conversation partners were already being used to exploit humans by forming emotional bonds and then withholding intimacy behind payment barriers24.

Another theoretical lens considers AI as an actor embedded in an economic system. This requires a slight modification of classical materialist analysis: oft-cited exceptions like the Fragment on Machines aside,25 materialist economic analysis tells us that the origin of profit in a productive endeavour is the (human) capitalist’s exploitation of a (human) working class26—machines have no subjective capacity of their own and only act as a “dead” substitute for human physical labour. Even if we acknowledge that the capitalist may provide intellectual or non-physical labour as a manager, the primary focus of materialist analysis is still on the human or subject participants of the productive process, and on the power imbalance between capitalist and worker. Analysis based on actor-network theory (ANT), as put forward by Michel Callon and others, takes a different approach. Fabian Muniesa writes that “as soon as something happens, there is action to be accounted for, and a good ANT account does not single out any particular form of action, be it social or otherwise”27. If we blur the lines between subject and non-subject, we can consider the possibility that humans have no special place in the productive process, whether as physical or intellectual labourers. Any process which minimises the costs of production is beneficial since it increases the final net profit for the capitalist, including yielding decision-making powers to AI systems if that increases overall efficiency.

We can extend the economic question further: why pursue AI and automation, instead of other technological advancements? For example, for a period up to around 2022 the focus of the Valley Institutions was on the potential of blockchain technologies. What fundamental benefits does AI offer that cannot be matched by other products? I propose that more than any concern over wages or electricity costs, AI has formed a compelling economic narrative for continued investment as outlined by theorists like Robert Shiller.28 On the other hand, scholars like Gebru believe that many in the Valley are motivated by a complex set of ideologies (the TESCREAL constellation) that make their work akin to a technical form of apotheosis. The complex interplay of these diverse perspectives shall form the crux of my analysis.

Software and AI

Perhaps the first perspective we ought to consider is that AI is simply a continuation of business as usual for Silicon Valley. To do this, I will situate AI within the existing business paradigm of the Valley, which is oriented around creating and marketing technological products, mainly software. I will show how AI constitutes a continuation of their ambitions for software to encompass every aspect of daily life, and evaluate the limitations of this vision.

It is important to remember that when we speak about AI models, we are also speaking of the class of objects to which they belong: software programs. But what is software? All software is fundamentally a question of symbolic manipulation. This is not a philosophically nebulous truism: the internal operations of software can be described through the operations of a Turing Machine29, which can itself be expressed as a symbolic manipulation or string rewriting problem. In other words, software is a series of logical rules that turn incoming symbols into outgoing symbols. The incoming symbols may be entries in a database, search engine inputs, or accelerometer readouts; the outgoing symbols may be video game graphics or a Mandelbrot set fractal or the words of the Bible, but software acts to translate one to the other. In this sense AI is just a specialised subset of software, one whose inner workings are less understandable than those of most programs.
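As a minimal sketch of this claim (illustrative Python, not drawn from any cited source), the following fragment implements a toy string-rewriting system: a table of rules that repeatedly turns an incoming string of symbols into an outgoing one, which is the essential shape of any software program.

```python
# A toy string-rewriting system: software as rules that turn symbols into symbols.
# (Illustrative only; a Turing machine can be expressed in the same rewrite-rule form.)

rules = [
    ("ab", "b"),   # each rule replaces the first occurrence of a pattern
    ("b", "1"),
]

def rewrite(symbols: str, rules, max_steps: int = 100) -> str:
    for _ in range(max_steps):
        for pattern, replacement in rules:
            if pattern in symbols:
                symbols = symbols.replace(pattern, replacement, 1)
                break
        else:
            return symbols  # no rule applies: the machine halts
    return symbols

print(rewrite("aab", rules))  # "1": the incoming symbols have been rewritten into an output
```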

If this is true, software seems to be a largely symbolic phenomenon, a reference without a referent. Yet Marc Andreessen, a founding partner of Andreessen Horowitz, one of the most influential venture capital firms in Silicon Valley, proposed in a famous blog post that this immaterial construct is “eating the world”.30 What does it mean for AI or software to eat the world? For Andreessen, this phenomenon seems to largely revolve around the way large organisations in capitalism structure themselves. With growing access to the internet, he argues, comes a growing ability to participate in networked society, and to use the internet to fulfil one’s needs for food, daily goods, entertainment, and services. As he writes,

More and more major businesses and industries are being run on software and delivered as online services—from movies to agriculture to national defense. Many of the winners are Silicon Valley-style entrepreneurial technology companies that are invading and overturning established industry structures.31

For software to “eat the world”, then, implies that more and more parts of our daily life and the broader economy will be “run on software and delivered as online services”—the internet will mediate how we access goods and services of all kinds, not merely entertainment or web content. Yet this is not an entirely satisfying explanation, since Andreessen does not adequately explain how this transformation is happening. How does software drive a truck or grow bread?

For the Valley to direct our lives through symbolic programs, it must find a way to let these programs gain an understanding of reality. To do this, real-world data (e.g. the current location of a truck, a map of the local area) must be input into an algorithm which turns data into actionable output (e.g. the shortest path to a warehouse). Scholars like Johanna Drucker have suggested that these inputs—data—should be renamed capta32, since data is never passively collected but must be actively captured from reality through intentional acts of measurement and association. At the same time, computer algorithms also have to be developed that can process data effectively and compute useful results. This algorithmic work was historically driven by human researchers: classical computer science is suffused with algorithms named after their supposed inventors (e.g. Dijkstra’s path-finding algorithm33). With these requirements in mind, we can see exactly how Silicon Valley plans to let software eat the world—and how AI fits into this plan.
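As a minimal sketch of this data-to-decision step, the following Python fragment runs Dijkstra’s algorithm over a toy road network; the place names and distances are invented for illustration.

```python
import heapq

# Toy "capta": measured road distances between locations (invented numbers).
roads = {
    "truck":     {"depot": 4, "junction": 1},
    "junction":  {"depot": 2, "warehouse": 5},
    "depot":     {"warehouse": 1},
    "warehouse": {},
}

def shortest_distance(graph, start, goal):
    """Dijkstra's algorithm: turn raw distance data into an actionable route cost."""
    queue = [(0, start)]
    best = {start: 0}
    while queue:
        dist, node = heapq.heappop(queue)
        if node == goal:
            return dist
        if dist > best.get(node, float("inf")):
            continue  # stale queue entry, already found a shorter route
        for neighbour, weight in graph[node].items():
            new_dist = dist + weight
            if new_dist < best.get(neighbour, float("inf")):
                best[neighbour] = new_dist
                heapq.heappush(queue, (new_dist, neighbour))
    return float("inf")

print(shortest_distance(roads, "truck", "warehouse"))  # 4: truck -> junction -> depot -> warehouse
```

The point of the sketch is that the algorithm is only as good as the measurements fed into it; the distances above stand in for the intentional acts of capture Drucker describes.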

Armed with what Barbrook calls a “Californian Ideology” that combined libertarian sensibilities with new-left optimism about technology,34 the Valley became a major proponent of information technology and its promise to mediate processes from shopping to democracy. To achieve these goals, they combined unprecedented levels of data collection with large amounts of capital investment into software development.35 In this light, AI is a powerful symbolic tool that enables data to be processed with less human involvement, which is expensive and difficult to scale up.36 AI programs can automatically derive mathematical models for fuzzy concepts like “cats” or “high-risk lenders” or “human faces”, turning raw data into meaningful information computers can interpret.37 The most advanced of these models are labelled under the category of “generative artificial intelligence”, in which programs move beyond discovering correlations in existing data to fabricating new data ex nihilo—these models simulate (but do not necessarily replicate) the cognitive or real-world processes that produce their training data. Barbrook points out that, like Jefferson’s dumbwaiter, these advancements allow for the displacement of unreliable human workers and artisans with obedient, computer-operated servants.38 In other words, AI is the realisation of Silicon Valley’s dream of using software to control and direct every facet of our lives.

A road-map has been laid out to make the world a pliant playground for the Valley. The road-map demands the expansion of computing facilities as the next frontier of geopolitical competition. It incentivises hiring disposable contract workers who must follow instructions from their wireless earpieces or phones. It leads to billions of dollars invested in developing autonomous robots, automated factories, and self-driving cars. Today in developed societies almost all basic services and transactions are mediated through software. With AI posited as the next revolution in digital capabilities, software will no longer merely order your food, organise your work, and deliver your leisure. Now, software will also be your friend.39

Still, however, this digestion is incomplete. What technologists like Andreessen like to forget is that beneath the immaterial processes of their software engines lie physical machines, machines that consume copious amounts of electricity and water40 and are subject to outages, malfunctions, and destruction from man-made or natural causes. As Seb Franklin points out, the lie of the Cloud (and now of AI) is that it displaces the real cost of computing to an ethereal otherworld where techno-utopianism can roam free.41 In the parts of the world where this displacement has failed, where the internet remains a patchy utility and where the basic necessities of life are threatened by climate change, social upheaval, and war, we can see these idealised representations fail to capture reality. How can one find their way via Google Maps if the city they are traversing has been bombed to ruins?

Drucker reminds us that the formation of capta always involves an intentional act of measurement and an intentional correlation of that measurement with a real-life quality, a correlation that is always conditional and never perfect.42 As the AI industry struggles against the persistent problems of hallucinations and confabulations,43 we see how models can easily become estranged from the sources of human knowledge they attempt to supplant. In the next section, we will examine a possible telos for this pursuit: what happens if the Valley succeeds at aligning world, capta, and model.

The Telos of AI

I will now attempt to describe the hypothetical end goals of AI development as well as the criticisms and alternatives positioned against these teleologies, mainly Gebru et al.’s concept of TESCREAL. The question of a final goal or “end state” for AI technology development seems at first somewhat unintuitive—technological development as a phrase does not presuppose a “final ending”, nor does it suggest a definitive outcome. AI development takes many forms, including the development of novel architectures for AI models (e.g. the GAN architecture which was used for early image generation research44), the development of new AI applications like ChatGPT45, and the retrofitting of existing applications to new domains. All of these processes can happen in disparate fields of research including machine vision, predictive modelling, machine translation, human-computer interaction etc. However, I believe that the Valley Institutions have compatible and well-defined end goals for AI for several reasons: First, these visions are clearly defined through ideas such as “AGI”, “ASI”, and the “Singularity”. Second, these visions form a major part of the communications, PR, and even investor relations efforts conducted by these companies. Third, advancement in many fields of AI has recently been achieved through breakthroughs like the development of the Transformer architecture46, which has been applied to object recognition, autonomous vulnerability exploitation, and text generation tasks amongst others47. This makes disparate fields of AI research more and more closely related and therefore the idea of a coherent and unified “end state” more conceivable. While the last claim is a purely technical hypothesis, the first two claims are evident in the statements issued by the Valley Institutions and their leaders.

For the first claim, the leaders of Meta, Google, and OpenAI have all stated that their explicit goal is to reach a state of “artificial general intelligence” (AGI)48 or “human-level AI”.49 More concretely, the idea of a definitive societal shift once we reach “human-level AI” is encapsulated in computer scientist and science fiction writer Vernor Vinge’s idea of the “Singularity”,50 which is often cited within the Valley51. Vinge’s definition suggests that “change comparable to the rise of human life on Earth” will come from “the imminent creation by technology of entities with greater than human intelligence”. OpenAI similarly defines “AGI” as “a highly autonomous system that outperforms humans at most economically valuable work”.52 Both define AGI in terms of agentic technological systems that compete with and surpass humans intellectually. A key difference between the two statements, however, is the replacement of “human intelligence” with the concept of “economically valuable work”. A sleight of hand has been performed where what is profitable or “valuable” in the present economy has been substituted for the fundamental human capacity for “intelligence”. And it is this ideological substitution that motivates the following claim.

With regard to my second claim, I believe that the idea of an endpoint for AI development is an immensely attractive cultural and marketing concept. The “Singularity” conception of AI is imbued with superhuman qualities and acts in manners humans cannot understand or predict. In an essay titled “Moore’s Law for Everything” by Sam Altman, this mystical attitude is taken to the extreme, with Altman exhorting his readers to “imagine a world where, for decades, everything–housing, education, food, clothing, etc.–became half as expensive every two years.”53 How this will be achieved without disastrous results amid a global climate crisis and ecological collapse is not addressed, except with the promise that AI will “lower the cost of goods and services” by eliminating labour costs. How AGI will continue to lower labour costs for decades on end after presumably replacing all human workers is also not explained. Faith in the power of AGI is a prominent theme in Valley communications: hagiographic pro-AI pieces such as the Techno-Optimist manifesto54 state that “we believe Artificial Intelligence is best thought of as a universal problem solver”, displacing human ingenuity as the solution to all of our ailments. More anxious pieces such as the 22-word risk statement endorsed by Sam Altman and Demis Hassabis55 suggest that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.56 Neither perspective questions the superhuman intellectual potential of AI and its capability to direct human affairs. Even warnings to investors about the risks of investing in AI have become a perverse form of marketing. OpenAI’s for-profit operating agreement includes the following statements:

**Investing in OpenAI Global, LLC is a high-risk investment**

**Investors could lose their capital contribution and not see any return**

**It would be wise to view any investment in OpenAI Global, LLC in the spirit of a donation, with the understanding that it may be difficult to know what role money will play in a post-AGI world** 57

While these statements carry the tone of standard investor warnings, the suggestion that the concept of money itself may become moot in a “post-AGI world” serves as a powerful promise of AI’s capacity to enact a final, complete transformation of human society. A joint economic-eschatological myth springs into motion, with AI positioned as a quasi-divine transformative force that answers all human needs and desires. Meanwhile, cultural commentators like Robert Evans have pointed out that this discourse around AI has acquired teleological, even cult-like properties58.

I have now examined the unified vision of a telos for AI put forward by the Valley. It should be noted, however, that the Valley-favoured AGI concept is not the only ideological construction used to summarise AI development. The “How AI Fails Us”59 discussion paper advances the concept of Actually Existing AI (AEAI), which it explicitly positions against the concept of AGI, identifying the chief components of AEAI as human competition, autonomy, and centralisation of power. I believe that this definition is in fact substantially similar to OpenAI’s definition of AGI—both feature competition between humans and AI, the concept of a centralised AI system60, and a focus on autonomy. While the discussion paper highlights that it is focused on the present practices of AI companies rather than their aspirations, it is unclear how aspirations can be disentangled from the instrumental practices used to pursue them, especially when many of the leaders of these companies appear genuinely committed to the point of embracing quasi-religious attitudes towards AGI, even in private.61 However, the AEAI paper remains prescient on many fronts, including its depiction of “two symbiotic future visions, one optimistic and one pessimistic, both dependent on centralization,” which has only become more apt as AI technologies have become more potent, their pioneers have come closer to centres of economic power, and both their upsides and downsides have been elevated to matters of existential importance.

Moving beyond the AEAI paper, social scientist Dave Karpf’s conception of a quasi-unified “WIRED ideology” casts AI developers as “conquering heroes” who both gift the world the benefits of AI and vanquish the dangers that come from it, acting as the literal “motive force” of human development. Extending the “Californian Ideology” of Barbrook and Cameron, he writes:

“The ideological project of today’s tech barons is little different from the ideological project of WIRED’s tech accelerationist, tech optimist, tech solutionist libertarian past. They still wish to be viewed as conquering heroes, gladiatorial competitors jostling for control over the future that only they can build.”62

In this view, both the positive and negative modes of Valley Institution communication surrounding AI fit into a mould of the Valley Institutions as “conquering heroes” engaging in a “tech accelerationist, tech optimist, tech solutionist” programme. He notes that the Valley sees “engineers, entrepreneurs, and Silicon Valley investors” as “the motive force driving an inflection point in the course of history itself”. As there is never a serious possibility of abstaining from AI development, the only way to both realise the promise of AI and address any problems stemming from it is to engage in more AI development. Karpf cites Sam Altman’s statement that “techno-optimism is the only good solution to our current problems” as evidence of this forward-only mentality, and his notion of “tech accelerationist, tech optimist” discourse within Valley Institutions has been evidenced both by the “techno-optimist manifesto” we have discussed above and by the effective accelerationist (e/acc) movement that it invokes.63

Parallel to this sociological examination, Timnit Gebru and Emile Torres have consolidated the Valley’s ideological programme into a constellation of beliefs known as TESCREAL,64 highlighting the problematic historical lineage of OpenAI’s “post-AGI world”. The TESCREAL paper presents a particular vision for the “telos of AI”: under the guidance of a munificent superintelligent AI system, humanity will become a race of digital consciousnesses expanding and colonising the galaxy as a post-scarcity machine civilisation. At the same time, Gebru scrutinises the origins of this vision from a historical, ethical, and risk-based perspective. This analysis has since gained popularity in media reports on the Valley Institutions and the AI race65, and deserves serious consideration given the authors’ deep prior involvement with Valley Institutions.

To begin, although the authors focus on the eugenicist origins of many of TESCREAL’s component ideologies, I believe that this does not necessarily indicate a causal link between participation in the two groups. TESCREAL as described within the paper covers a broad selection of futurist visions related to space travel, virtual reality, AI superintelligences etc., many of which have limited overlap with human genetic modification, selective breeding, or the racist motivations that drove those two projects. That TESCREAL itself can be an over-generalised or inaccurate categorisation of AI developers is also acknowledged by Gebru et al.: “It is important to note that not everyone associated with ideologies in this bundle believes in the totality of the dominant views in this bundle, and some people may even object to being bundled in this manner. […] Our argument is that the TESCREAList ideologies drive the AGI race even though not everyone associated with the goal of building AGI subscribes to these worldviews.” At the same time, Gebru and Torres report that many influential figures within TESCREAL movements have undoubtedly expressed racist or eugenicist attitudes.66 This suggests that there is possibly a broader animating principle that leads to enthusiasm for eugenics and enthusiasm for TESCREAL, rather than a direct causal link between a belief in eugenics and being an AI developer. Overall, in my view labelling the field of AI research as inextricable from eugenicist beliefs would be inaccurate and counterproductive to fostering effective dialogue around the risks of developing AI.67

I now move to consider the TESCREAL analysis as a whole. While the constellation is convincing as a portrait of beliefs commonly professed at a high level within Valley Institutions, it seems deficient in that it does not take into account the role capitalist incentives play in AI development, especially given the for-profit nature of some of the groups it examines. It also does not account for the possibility that these ideas may be imperfectly held, held only as performative signals to other Valley Institution members, or held deceptively to attract funding for otherwise arcane academic interests. J. Robert Oppenheimer famously described the challenge of building the world’s first nuclear weapon as a “technically sweet problem”, and this sentiment was echoed by Geoffrey Hinton—called by some the “Godfather of AI”—about AI research68.

Finally, it should be noted that most of TESCREAL’s claims about future developments are essentially unfalsifiable: at a timescale of tens of thousands of years almost any action (up to and including amassing large amounts of wealth or political power, which could be carried out for any number of reasons)69 can be justified as incremental progress towards a distant state of utopia. Thus, TESCREAL’s claims about the future of humanity, AI superintelligences, etc. cannot be treated as a concrete goal for TESCREAL believers to achieve, or as an effective means to evaluate their present actions. Rather, they seem more like strongly held personal beliefs or predictions about the general state of human affairs. A low-level belief in TESCREAL-adjacent ideologies (perhaps especially Effective Altruism/EA and Longtermism) is therefore quite hard to distinguish from general concern about risks to human civilisation or the future of human society. There is also severe disagreement within the AI development community on questions like whether safety concerns should take precedence over developing AI,70 up to and including high-profile resignations.71 As with the AEAI discussion paper, it seems prudent to carry forward the potential risks and harms highlighted by the TESCREAL paper as concerns to be wary of, without treating the paper as a definitive conclusion about the motivations of AI developers.

Having explored the ideological visions for the development of AI technology, I will now move to consider the economic or pragmatic visions behind developing AI—in other words, how is AI development supposed to give a return on billions of dollars of investment?

The Economics of AI

I will now examine the economic motivations for developing AI and offer some preliminary observations as to the economic viability of AI products. There is a significant divide between what idealists or AI enthusiasts claim AI is designed to do and what critics suggest AI actually does in a socio-economic context72. As Sam Altman puts it in “Moore’s Law for Everything”, “AI will lower the cost of goods and services, because labor is the driving cost at many levels of the supply chain.”73 In the context of the essay, he suggests that this will make goods cheaper and more accessible, but a lower cost does not necessarily guarantee a lower price for the end product—merely an increase in profit margin. To his credit, Altman recognises this, stating that under AI “even more power will shift from labor to capital” and proposing a series of governmental interventions to mitigate these negative effects.74 Regardless, a core economic promise of AI is that it will accelerate the automation of production, reducing the cost of producing goods drastically as both physical and mental labour can now be automated75.

Besides replacing human labourers, another prominent method of AI monetisation relies on exposing consumers to AI models directly. In this model, the AI acts to provide various services to an end user, ranging from data entry and providing email summaries to more traditionally human roles like emotional or conversational partnership.76 Companies like OpenAI have even spoken of allowing AI to produce tailored explicit content77. While the novel nature of this technology means that there is no predefined field of service work they are disrupting, a broad suite of online services already exist which promise to connect users with contractors skilled in copyediting78, dataset labelling79, or therapy80. AI companies would then step in as a similar service provider, but with AI instead of an anonymous human at the other end of the line. Given that many AI models were trained with the help of contractor-sourcing platforms like MTurk, this development would be highly disruptive to those workers affected.

For these promises to hold, however, AI must be able to perform at least on par with humans at a lower cost. If, for example, “human-level” AI existed but cost a billion dollars a year to replace an average human office worker, then it would not be cost-effective to implement in the workplace. Consequently, the costs of developing and implementing AI systems act as a negative incentive for the Valley Institutions to invest heavily in developing AI technology, making a cost analysis important to the economic case for developing AI.81

I will now attempt an economic analysis of the costs and revenues associated with operating these technologies. According to an article in Nature:

“As performance is skyrocketing, so are costs. GPT-4 — the LLM that powers ChatGPT and that was released in March 2023 by San Francisco-based firm OpenAI — reportedly cost US$78 million to train. Google’s chatbot Gemini Ultra, launched in December, cost $191 million. Many people are concerned about the energy use of these systems, as well as the amount of water needed to cool the data centres that help to run them. “These systems are impressive, but they’re also very inefficient,” Maslej says.”82

A training cost of 78 million USD for GPT-4 may seem relatively small given recent reports that OpenAI is on track to hit 2 billion dollars in revenue this year83. However, this does not account for the costs of operating OpenAI’s services or of offering competitive salaries to its employees. In 2023, ChatGPT’s operating costs were estimated to be around 700,000 USD per day.84 This puts the annual operating costs of ChatGPT at approximately 250 million USD. LinkedIn estimates that OpenAI has 201-500 employees85. With an estimated employee salary of 750,000 USD per year86 based on reported salaries for OpenAI machine learning engineers, this gives employee expenditures of approximately 260 million USD, for a total operating expenditure on the order of 500-600 million dollars a year. Furthermore, the 2 billion in reported revenue is annualised (i.e. obtained by multiplying the last month of revenue by 12) and may not represent actual revenue if subscriber counts decrease. Indeed, services such as GitHub Copilot were reported in 2023 to be losing money on a per-user basis, potentially prompting price rises that turn away users.87 At the same time, estimates of AI’s electricity and chip usage show figures so high that they are hard to reconcile with environmental sustainability or long-term profitability.88 While established corporate actors such as Meta and Google may be able to leverage their resources to support these expensive endeavours, previously promised “tech revolutions” such as home automation or voice assistants have been abandoned when they proved unprofitable for extended periods.89 It is also important to note, as a counterpoint, that the cost of operating AI services may decrease with efficiency improvements.
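For transparency, the back-of-envelope estimate above can be restated explicitly; all figures are the rough public estimates cited in this section rather than audited accounts, and the employee count is simply the midpoint of LinkedIn’s range.

```python
# Rough, illustrative restatement of the cost estimates cited above (all figures in USD).

daily_compute_cost = 700_000                      # reported ChatGPT operating cost per day (2023 estimate)
annual_compute_cost = daily_compute_cost * 365    # roughly 255 million

employees = 350                                   # midpoint of LinkedIn's 201-500 range
avg_salary = 750_000                              # reported ML engineer compensation estimate
annual_payroll = employees * avg_salary           # roughly 262 million

annual_operating_estimate = annual_compute_cost + annual_payroll   # roughly 518 million

annualised_revenue = 2_000_000_000                # reported run rate: last month of revenue x 12

print(f"Estimated operating costs: ~${annual_operating_estimate / 1e6:.0f} million per year")
print(f"Reported annualised revenue: ~${annualised_revenue / 1e9:.1f} billion per year")
```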

In some regards, however, a basic profit and revenue analysis for AI technology is largely irrelevant at this stage. After the monumental success of ChatGPT in capturing the public imagination, Microsoft and a group of venture capital investors from various Valley Institutions quickly supplied OpenAI with up to 10 billion dollars in funding.90 Similarly, large-scale state and academic endowments have funded the development of AI since its beginnings as “Good Old-Fashioned AI” with DARPA.91 It seems that these decisions are based less on the promise of an immediately profitable product with a return on investment on the order of billions of dollars (even assuming a billion dollars in profit each year, OpenAI’s investors would collectively take a decade to recoup their investment) than on the belief that the development of AI will continue to skyrocket and provide outsized benefits that impact the entire global economy. It is the conditions behind this belief that I will examine next.

The Narrative of AI

I will now examine and evaluate the construction of AI as an economic narrative that justifies continuous reinvestment, rather than as a purely quantitative cost saving measure. The idea that cultural and “viral” narratives provide more direct impetus for investment than sound business fundamentals has been explored by authors such as Robert J. Shiller—writing about the rapid rise of investment into cryptocurrencies, he suggests that an economic narrative is “a contagious story that has the potential to change how people make economic decisions”, including business and investment decisions.92 From this perspective, the history of Silicon Valley’s tech successes, the grand ideological telos of AI, and the promise of massive economic return all form part of the basis for a “contagious story”, a powerful economic narrative around the benefits of investing in AI.

I will now attempt to define exactly what this narrative consists of. To begin with the obvious, AI is a highly attractive prospect for a narrative centred around “the future of work”: automated systems do not engage in collective bargaining, have no need for biological provisions, require no holiday or sick leave, perform consistently without fatigue or distraction, and are easily scalable with services like OpenAI’s API platform.93 Even if AI does not fully replace humans as per the “human-AI competition” model set out in the AEAI paper,94 it can in theory augment human workers’ capabilities to achieve greater efficiency overall. It is therefore no surprise that deployment of AI in highly lucrative industries has been the subject of intense interest95. These economic potentials are directly referenced by Valley Institutions in their marketing and public relations statements96, and form the bulwark of the more modest predictions made by Valley Institution figures. Sam Altman’s promises in “Moore’s Law for Everything” make a lot more sense when you consider AI as a narrative of economic transformation rather than a fundamental sociopolitical “state change”—according to Altman’s statements in Davos, AI might be considered the ultimate guarantor of continuous advancement, where “every year we put out a new model [and] it’s a lot better than the year before”.97

In general, it is difficult to evaluate these claims of economic hyper-efficiency proposed by AI enthusiasts and Valley Institutions. There are several reasons for this:

First, the state of the art in this field is advancing extremely rapidly98, with new reports and findings about capability increases constantly appearing, sometimes aided by substantial deceptions.99 This makes determining the potential upper bound of AI model capabilities difficult. Any claims which reference supposedly fundamental limitations of AI models are at risk of being quickly rendered inaccurate by these advancements.100 However, this does not mean that there are no critical limits and flaws with machine learning models such as the Transformer architecture—limits and flaws that are unlikely to be resolved by scaling up the datasets and parameter counts of these models. One of the most notable is the tendency for models to emit linguistically coherent output that is uncorrelated with reality, otherwise known as hallucination or confabulation101. These flaws are hard to detect and hard to diagnose given the realities of black-box models. As such, model performance is hard to correlate with average human performance, with generative models sometimes exceeding human expectations and sometimes demonstrating unexpected and seemingly elementary mistakes.102 This “surprising” quality of model failure will only become more prominent as model performance improves and their weaknesses are partially mitigated by techniques such as Reinforcement Learning from Human Feedback (RLHF103)—as the public and non-technical institutions trust AI models more and implement them into more large-scale use-cases, their failures will become more shocking and more impactful.104

Second, intentional obfuscation about the exact capabilities, resource usage, or other fundamental properties of state-of-the-art AI models is common. For example, OpenAI has cited competition and safety concerns as the reason it still has not released the weights and model architecture of GPT-4 publicly105, despite offering an API platform for commercial and non-commercial usage of the model. While Meta106 and Google107 have released so-called “open models” by making model weights available for public download, by not releasing the training datasets and technical specifications they follow a model closer to shareware or freeware108 than to the open-source ideals they reference publicly109.

Third, the actual means for achieving this promised economic transformation remain unclear. So far, all of the Valley Institutions have pursued similar tactics for monetising AI, producing chatbots and other AI-powered tools that allow consumers to interact directly with powerful AI models. While these tools are undoubtedly promising for replacing intellectual work at a low level,110 the most powerful aspect of AI as promised by its proponents is the possibility of human-level automated agents accomplishing tasks entirely without human oversight. That would allow intellectual labour to be massively parallelised and completed rapidly without hiring large teams of humans, becoming an endeavour similar to the large-scale automation of car factories. In essence, such a transformation would mean that some domains of intellectual labour no longer require any human involvement and would therefore be limited only by the speed and availability of computing hardware. At present, this technology does exist, but only for a number of bounded use-cases like content recommendation, content generation, or content moderation—this is particularly of note in the case of Facebook, which I will examine later. If this form of automation becomes widespread in industry, businesses that are first movers in adopting mass-scale AI will easily out-compete their rivals due to lower labour costs and higher efficiency, making fear of missing out or “being late to the party” a major part of the economic narrative for Valley Institutions. If this form of automation could be applied in the domain of scientific research, the rate of scientific progress would experience a rapid “takeoff”—a scenario referred to as “PASTA” in AI safety circles.111

So far, however, PASTA remains a fantasy. LLM-based AI models remain brittle in their abilities to emulate humans,112 with strange and hard-to-anticipate failure modes and a tendency to regress to stock answers (an effect that becomes more notable when models are retrained on LLM-generated data).113 In essence, bets on current-generation technology leading to this human-level “takeoff” scenario assume that linguistic or symbolic representations of real-life phenomena can substitute for physical experience of those phenomena—in other words, they constitute a bet that “the map is as good as the territory” for the purposes of certain tasks. While AI models have been able to speed up certain parts of intellectual work today, no model for automated human-level agentic behaviour has been implemented so far—something that may ultimately be a benefit to society, as unscoped AI systems are one of the shared concerns of both AI safety and AI ethics proponents.114 The risk of funding running out before such ultra-profitable models of AI are implemented is a realistic concern for AI companies, many of which are having trouble attracting further funding after eye-watering initial investment rounds.115 If this happens, generative AI may well suffer another “AI winter” and a slow death not unlike that predicted for Amazon Alexa and Google Assistant116.

Given these caveats, what can we say about the economic narrative powering the current wave of AI investment? It’s clear that enthusiasm and hype are directly tied to AI models demonstrating powerful, awe-inspiring, even worrying capabilities. It’s also clear that the easiest way to get more investment is to demonstrate more powerful and novel capabilities. I believe the Valley Institutions are participants in a vicious cycle, where heavy up-front investment in compute and training allows for the demonstration of new and novel capabilities, which creates market hype and leads to re-investment. However, said hype also leads to a higher expectation for even more advanced capabilities in the next round of model releases. Furthermore, in a situation similar to a pyramid scheme, the development of each round of new models requires massive economic expenditure far outpacing existing revenue streams.117 Thus, new investment and cash injections are constantly required to maintain the cycle.118 While many major AI developers claim to be pursuing “responsible scaling” and safe development,119 the reality remains that they are economically incentivised to constantly produce new models with yet more powerful and untested capabilities and introduce them to the market as soon as they are developed. In this light, it is unsurprising that the open letter from the Future of Life Institute calling for a six-month pause on AI development failed to stop AI research in the Valley despite many notable Valley signatories.120 Indeed, many supposed safety guarantees later turn out to be more public relations efforts than genuine safety investments.121 The Valley Institutions must do this in order to continue fuelling the cycle—or else risk going out of business.

Case Study - Meta

Thus far I have considered the cultural, ideological, economic, and narrative factors behind the current process of AI development in the Valley Institutions. Now I will examine a case study of those factors as they relate to a particular group, both in terms of the development process and its consequences. The Valley Institution I have chosen to examine in close detail is Meta, previously known as Facebook. Meta is an ideal Valley Institution to examine for various reasons. Its competitors like Alphabet, Microsoft, and Amazon all feature divisions focused on physical products like self-driving cars, smart speakers, or phones, but Meta’s first concern is the user experience on sites like Facebook and Instagram. Since the company (up until its rebranding) featured a remarkable unity of purpose, we can see clearly how AI development impacts a specific product offering as well as how it impacts the institution as a whole.

For this section I am greatly indebted to the insider accounts provided by Jeff Horwitz in his book Broken Code122. This allows us to somewhat circumvent the methodological difficulties found in books like Eriksson et al.’s Spotify Teardown, which have been criticised for “unnecessarily and speculatively reducing the complexity of organizational decision-making” because their authors have limited access to internal decision makers.123 However, Broken Code also makes clear that Meta is far from a monolithic singular entity. Therefore, when I speak of Facebook’s or Meta’s incentives, I speak mostly of the incentives for the institution as a capitalist corporate entity, rather than of the incentives that drove particular teams. I will also be focusing on AI’s role in developing the social network Facebook, as it is Meta’s most recognisable product and has the most users.

Facebook’s development of AI is tied directly to the site’s core user experience. A 2016 Facebook Engineering article describing the “Facebook Learner” system states:

“Many of the experiences and interactions people have on Facebook today are made possible with AI. When you log in to Facebook, we use the power of machine learning to provide you with unique, personalized experiences. Machine learning models are part of ranking and personalizing News Feed stories, filtering out offensive content, highlighting trending topics, ranking search results, and much more.124”

While this statement’s use of active language (“we use the power of…”) seems to reflect an intentional desire to deploy AI, in some sense Facebook’s reliance on AI systems is unsurprising. Although Facebook positions itself as a neutral service that “give[s] people a voice” to connect with each other125, it exercises editorial control over the content it promotes to users in areas like the News Feed. This content curation is a form of intellectual labour Facebook carries out through AI systems.126 With monthly active users (MAU) surpassing 500 million by 2010 and reaching more than 3 billion today,127 it would simply be unfeasible for Facebook to hire enough employees to manually curate the content shown to every Facebook user. In that sense, AI as an economic labour-saving device is a prerequisite to operating a website like Facebook at the scale Facebook’s leadership desires. Furthermore, the use of these AI systems engenders a beneficial feedback loop: a more effective content recommendation system increases user engagement and therefore exposes users to more ads, providing Facebook with more behavioural data and advertising revenue to invest into improving its AI systems. Zuboff outlines this cycle of behavioural surplus extraction in Surveillance Capitalism128, and we may clearly conclude that economic factors (see “Economics of AI”) are preeminent in driving Meta’s development of AI technology in fields like NLP.

From an ideological perspective (see “Telos of AI”), even after the founding of the Facebook AI Research (FAIR) lab in 2015129 Facebook did not seem to demonstrate a public commitment to TESCREAL. I believe some of this reticence may be attributed to the hiring of Yann LeCun, a famous AI researcher, as the head of FAIR. LeCun has publicly expressed doubt regarding many facets of TESCREAL, such as the concept of AGI or the threats to humanity posed by AI systems.130 Despite this, he promises that “in the future, everyone’s interaction with the digital world, and the world of knowledge more generally, is going to be mediated by AI systems”.131 By suggesting that AI systems will control “our entire information diet” and how we interact with online services, LeCun is essentially suggesting a complete takeover of society by AI—especially given our earlier considerations of software “eating the world”. Thus, for LeCun AI development appears to be a no-lose scenario, a position that aligns well with Facebook’s heavy economic reliance on AI.132

Perhaps unsurprisingly, Meta has adopted a relatively cavalier posture towards AI-related risks, existential or otherwise. They have open-sourced some of their AI research, including the popular PyTorch platform for creating AI models.133 FAIR has also produced models capable of deceiving and manipulating humans during simulated board games.134 We can view this risk-positive attitude towards AI as a continuation of Silicon Valley business as usual (see “Software and AI”), especially since Facebook found its first major investment and growth in the Valley. This attitude is also reflected in Horwitz’s interviews with Brian Boland, a vice president for Facebook’s Advertising and Partnerships divisions: “Building things is way more fun than making things secure and safe,” Horwitz recalls him saying. “Until there’s a regulatory or press fire, you don’t deal with it.135”

Facebook’s adoption of Silicon Valley dogma extended beyond its attitude towards risk. Meta’s internal communications reflected an intentional distancing from traditional corporate culture in favour of a techno-solutionist position similar to that outlined by Karpf and Barbrook. Horwitz writes,

“An internal manifesto from 2012 known as the Red Book declared that “Facebook was not created to be a company” and urged its employees to think more ambitiously than corporate goals. “CHANGING HOW PEOPLE COMMUNICATE WILL ALWAYS CHANGE THE WORLD,” the book stated above an illustration of a printing press.”136

In addition to this, an infamous leaked memo by Facebook VP Andrew Bosworth suggested that “The ugly truth is that we [at Facebook] believe in connecting people so deeply that anything that allows us to connect more people more often is *de facto* good. It is perhaps the only area where the metrics do tell the true story as far as we are concerned.137” As in my analysis of TESCREAL, we can see here the use of a nebulous higher goal like “connecting people” or “changing how people communicate” to justify any number of actions undertaken by Facebook.138 However, Bosworth’s emphasis on “the metrics” telling a true story reflects a different way Facebook guides its actions, one that is very distant from any high-minded ideals Facebook’s leadership may possess.

In Seeing Like a State, James C. Scott outlines the need for states and other institutions to render their domains legible, acquiring an all-seeing or panoptic capacity in order to meet goals and fully utilise available resources.139 Economic institutions such as companies have a similar need to perceive their own economic position. To do this, institutions often use metrics as tools to set goals and measure progress: for example, the Consumer Price Index is used in the United States as a synthetic metric for inflation. Horwitz points out that a similar obsession with measurable performance existed at Facebook, neatly summarised by the phrase “data wins arguments”.140 Leadership and engineers inside Facebook used metrics like Daily Active Users (DAU) and later Meaningful Social Interactions (MSI) to quantify how well the company’s systems were performing and to give the organisation collective goals to optimise towards.141

Of course, defining a metric necessarily means ignoring some factors too complicated or difficult to measure, and policies based purely on metrics run the risk of colliding with those confounding factors. Furthermore, the act of measurement often reflects unconscious or hidden assumptions made by the party setting the metric. Finally, even a well-chosen metric can be measured inaccurately or become the target of over-optimisation and thereby lose its efficacy. Facebook, combining a disruption-oriented and risk-positive workplace culture with a strong reliance on metrics to measure progress, fell prey to all of these pitfalls. The “Facebook Learner” system mentioned above was an attempt to make machine learning an accessible tool for all Facebook engineers, replacing a deep theoretical understanding of AI with ease of use and the ability to run “hundreds of experiments” to quickly improve metrics.142 These “experiments” (a euphemism for unannounced changes to the Facebook website for a random portion of users) featured limited experimental controls, relatively short durations that did not allow long-term effects to manifest, and no meaningful consent from the users at whom they were targeted.143 As a result, the harmful effects of optimising for metrics like MSI (which turned out to increase aggressive interactions online) were allowed not only to persist but to become a core part of Facebook’s AI-powered content recommendation system. Compounding these problems, a haphazard Valley startup environment that prioritised “building things” and shipping new features meant that even the people in charge of building those systems did not know precisely what data they depended on to function, or where that data could be found: Horwitz recalls that “Engineers and data scientists described living with perpetual uncertainty about where user data was being collected and stored—a poorly labeled data table could be a redundant file or a critical component of an important product.”144
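
As a purely hypothetical sketch of this last pitfall (the dynamic often summarised as Goodhart’s law), the following toy optimiser greedily adjusts a single knob to maximise a measurable proxy for engagement while an unmeasured quantity the proxy is meant to stand in for quietly degrades. The functions, names, and constants are invented for illustration; they do not describe Facebook’s actual metrics or experiments.

```python
# Hypothetical illustration of metric over-optimisation: the optimiser only sees
# the proxy metric, so it happily degrades what the proxy was meant to track.

def measured_engagement(x):
    """Proxy metric the optimiser can observe (invented functional form)."""
    return 1.0 + 2.0 * x

def long_term_trust(x):
    """Unmeasured quantity the institution ultimately depends on (also invented)."""
    return 1.0 - 1.5 * x ** 2

x = 0.0          # hypothetical "incendiary content" knob, between 0 and 1
step = 0.05
for _ in range(20):
    # Greedy "experiment": keep any change that raises the measured metric.
    if x + step <= 1.0 and measured_engagement(x + step) > measured_engagement(x):
        x += step

print(f"knob x = {x:.2f}")
print(f"measured engagement = {measured_engagement(x):.2f} (what the metric rewards)")
print(f"unmeasured trust    = {long_term_trust(x):.2f} (what quietly erodes)")
```

The point of the sketch is structural rather than empirical: as long as only the proxy is measured, every “successful” experiment can simultaneously register as progress and worsen the unmeasured quantity.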

These poor practices meant that while short-term gains in key engagement metrics were reported, over a longer window user trust in Facebook as a whole was eroded, with the site gaining a reputation as a haven of fake news and incendiary content.145 Large sections of Broken Code are devoted to Facebook’s Civic Integrity teams working to reduce the harmful effects of Facebook’s own core product, as Facebook became tied to political violence in countries including Myanmar, India, and the United States. According to a leaked internal memo focusing on the “Stop the Steal” movement, Facebook was a locus of polarised election discourse and far-right insurrectionist activity in 2020: “from the earliest Groups, we saw high levels of Hate, [Violence and Incitement], and delegitimization, combined with meteoric growth rates — almost all of the fastest growing FB Groups were Stop the Steal during their peak growth”.146 If Facebook’s long-term goal was to use AI technology to facilitate user experiences that connected people rather than dividing them, its short-term optimisations set it backwards.

However, if we observe the revenue generated through these optimisations and examine Facebook’s development of AI through a more economic lens, a different picture emerges. Facebook’s AI developments, in keeping with the goals outlined in the “Narrative of AI” section, successfully “allow[ed] intellectual labour to be massively parallelised and completed rapidly without hiring large teams of humans”. With only 15,000 human moderators for 2.5 billion users in 2019,147 this human-out-of-the-loop automation meant that Facebook’s AI systems were given enormous agency to direct the News Feeds and (indirectly) the information diet of billions of Facebook users, with no meaningful human oversight except in extreme circumstances. With 135 billion dollars in revenue in 2023 thanks to this technical feat,148 Facebook is a success story for the institutional development and deployment of AI at scale.

Conclusion

So far, we have seen how a confluence of factors pushes institutions to develop AI technology into powerful, unscoped, autonomous systems designed to replace humans rather than improve human capabilities. They do so because such systems are a continuation of the Valley’s dream to “[eat] the world”, because they are a central component of human “transcendence” for ideological constellations like TESCREAL, because powerful systems that can replace humans at a variety of tasks would vastly cut labour costs, and because demonstrations of novel capabilities perpetuate a narrative of AI hype and encourage continued investment. Each case of institutional AI development contains an uneven mixture of these factors, so that none of them can be treated as predominant in our analysis. The final question that remains, then, is one of intentionality.

It is tempting to say that, because Meta’s internal methodology for deploying AI was haphazard and motivated mainly by economic concerns, it was therefore unconscious or the inevitable result of “market forces”. That would be a mistake. Facebook’s engineers were not acting unwillingly but rather (in AI parlance) as optimisers, trying whatever they could with systems like Facebook Learner to raise engagement metrics. In a manner similar to the semi-random process of gradient descent, these changes to the behaviour of Facebook were implemented piecemeal and built upon gradually, always seeking to reach consistent maximum engagement. The engineers’ behaviour was reinforced as engagement metrics were used to determine raises, promotions and bonuses, making the metrics analogous to reward signals in a reinforcement learning paradigm. And as in a reinforcement learning paradigm, the work was carried out without reference to long-term consequences until internal dissent or negative external reception was combined with measurable (financial) penalties. In short, Facebook’s corporate structure was engineered by management into a massive self-regulating system with the goal of maximising user engagement. That it proceeded to function as designed is no surprise, and certainly not a natural fact of technological progress.
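
To make this analogy concrete, one might caricature the incentive structure as a minimal “keep whatever raises the reward” loop, with the measured engagement metric playing the role of the reward signal. This is an illustrative sketch of the argument only, not of any real Facebook system; the names, the reward function, and the numbers are all hypothetical.

```python
import random

# Hypothetical caricature of "engineers as optimisers": a candidate change is
# shipped only if the measured engagement metric (the "reward signal") improves.
# Long-term consequences are simply not part of what is measured here.
random.seed(0)

def measured_engagement(design):
    # Invented reward surface: engagement rises with the "intensity" of the design,
    # plus some experimental noise.
    return 10.0 * design["intensity"] + random.gauss(0.0, 0.5)

design = {"intensity": 0.1}
best_reward = measured_engagement(design)

for _ in range(50):                        # fifty quick "experiments"
    candidate = {"intensity": design["intensity"] + random.uniform(-0.05, 0.15)}
    reward = measured_engagement(candidate)
    if reward > best_reward:               # metrics win arguments (and determine bonuses)
        design, best_reward = candidate, reward

print(f"shipped design: {design}, final measured reward: {best_reward:.2f}")
```

Nothing in this loop is unconscious or inevitable: the reward function, the experiments, and the decision rule are all design choices, which is precisely the point.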

This ultimate truth of human responsibility extends to all of the interpretations of AI I have presented. Whether AI is born of Silicon Valley hubris or ideological utopianism, cold economic calculation or pressure to maintain investment, the choice to develop AI is a human one, taken to achieve human objectives. It is my hope that this close examination shows the error of calling AI development “inevitable”: in every sense of the word, AI development by large corporate institutions is a human act, one that can be addressed by reducing the expected reward or increasing the expected downsides through regulation and enforcement. Fatalism about AI serves the same purpose it always does: to make us accept without resistance what we ought to question and scrutinise. Charlie Warzel calls this “AI’s manifest-destiny philosophy: this is happening, whether you like it or not”.149

[11868 words]

Bibliography

Agre, Philip E. ‘Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI’, 2006. https://api.semanticscholar.org/CorpusID:114001296.

Allyn, Bobby. ‘ChatGPT Maker OpenAI Exploring How to “responsibly” Make AI Erotica’. NPR, 8 May 2024. https://www.npr.org/2024/05/08/1250073041/chatgpt-openai-ai-erotica-porn-nsfw.

Altman, Sam. ‘Moore’s Law for Everything’, 16 March 2021. https://moores.samaltman.com/.

Amadeo, Ron. ‘Google Lays off “Hundreds” More Employees, Strips Google Assistant Features’. Ars Technica, 11 January 2024. https://arstechnica.com/gadgets/2024/01/google-lays-off-hundreds-more-employees-strips-google-assistant-features/.

Andreessen, Marc. ‘The Techno-Optimist Manifesto’. A16Z, 16 October 2023. https://a16z.com/the-techno-optimist-manifesto/.

———. ‘Why Software Is Eating the World’. A16Z (blog), 20 August 2011. https://a16z.com/why-software-is-eating-the-world/.

Barbrook, Richard, and Andy Cameron. ‘The Californian Ideology’. Mute, 1 September 1995. https://www.metamute.org/editorial/articles/californian-ideology.

BetterHelp. ‘BetterHelp’, n.d. https://www.betterhelp.com/.

Breiman, Leo. ‘Statistical Modeling: The Two Cultures (with Comments and a Rejoinder by the Author)’. Statistical Science 16, no. 3 (1 August 2001). https://doi.org/10.1214/ss/1009213726.

Bubeck, Sébastien, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, et al. ‘Sparks of Artificial General Intelligence: Early Experiments with GPT-4’. arXiv, 13 April 2023. http://arxiv.org/abs/2303.12712.

Cuthbertson, Anthony. ‘Company That Made an AI Its Chief Executive Sees Stocks Climb’. The Independent, 16 March 2023. https://www.independent.co.uk/tech/ai-ceo-artificial-intelligence-b2302091.html.

Deibert, Ronald J. ‘Introduction’. In Parchment, Printing, and Hypermedia: Communication in World Order Transformation. New Directions in World Politics. New York: Columbia Univ. Press, 1997.

Dell’Acqua, Fabrizio, Edward McFowland, Ethan R. Mollick, Hila Lifshitz-Assaf, Katherine Kellogg, Saran Rajendran, Lisa Krayer, François Candelon, and Karim R. Lakhani. ‘Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality’. SSRN Electronic Journal, 2023. https://doi.org/10.2139/ssrn.4573321.

Devin Didn’t Solve My Computer Vision Project. YouTube, 2024. https://www.youtube.com/@ComputerVisionEngineer.

Dijkstra, E. W. ‘A Note on Two Problems in Connexion with Graphs’. Numerische Mathematik 1, no. 1 (December 1959): 269–71. https://doi.org/10.1007/BF01386390.

Dixon, Stacy Jo. ‘Number of Monthly Active Facebook Users Worldwide as of 4th Quarter 2023’. Statista, 21 May 2024. https://www.statista.com/statistics/264810/number-of-monthly-active-facebook-users-worldwide/.

Doctorow, Cory. ‘Even If You Think AI Search Could Be Good, It Won’t Be Good’. Medium (blog), 15 May 2024. https://doctorow.medium.com/https-pluralistic-net-2024-05-15-they-trust-me-dumb-fucks-ai-search-b8115252e457.

Drucker, Johanna. ‘Data vs. Capta: A Brief Polemic (Data Modelling and Use)’. In The Digital Humanities Coursebook: An Introduction to Digital Methods for Research and Scholarship, First edition., 25–26. Abingdon, Oxon ; New York: Routledge/Taylor & Francis, 2021.

Dunn, Jeffrey. ‘Introducing FBLearner Flow: Facebook’s AI Backbone’. Engineering at Meta (blog), 9 May 2016. https://engineering.fb.com/2016/05/09/core-infra/introducing-fblearner-flow-facebook-s-ai-backbone/.

Dwoskin, Elizabeth. ‘Misinformation on Facebook Got Six Times More Clicks than Factual News during the 2020 Election, Study Says’. The Washington Post, 4 September 2021. https://www.washingtonpost.com/technology/2021/09/03/facebook-misinformation-nyu-study/.

ET Online. ‘OpenAI Faces Financial Challenges amid User Decline: Experts Predict Bankruptcy Concerns’. The Economic Times, 14 August 2023. https://economictimes.indiatimes.com/news/new-updates/openai-faces-financial-challenges-amid-user-decline-experts-predict-bankruptcy-concerns/articleshow/102711336.cms.

Evans, Robert. ‘The Cult of AI’. Rolling Stone, 27 January 2024. https://www.rollingstone.com/culture/culture-features/ai-companies-advocates-cult-1234954528/.

Feng, Emily. ‘Epic Drought in Taiwan Pits Farmers against High-Tech Factories for Water’. NPR, 19 April 2023. https://www.npr.org/sections/goatsandsoda/2023/04/19/1170425349/epic-drought-in-taiwan-pits-farmers-against-high-tech-factories-for-water.

Fiverr. ‘Fiverr’, n.d. https://www.fiverr.com/.

Franklin, Seb. ‘Cloud Control, or the Network as Medium’. Cultural Politics 8, no. 3 (1 November 2012): 443–64. https://doi.org/10.1215/17432197-1722154.

Franklin, Ursula M. The Real World of Technology. Revised edition. CBC Massey Lectures. Toronto: Anansi, 2004.

Future of Life Institute. ‘Pause Giant AI Experiments: An Open Letter’. Future of Life Institute, 22 March 2023. https://futureoflife.org/open-letter/pause-giant-ai-experiments/.

Gebru, Timnit, and Émile P. Torres. ‘The TESCREAL Bundle: Eugenics and the Promise of Utopia through Artificial General Intelligence’. First Monday, 14 April 2024. https://doi.org/10.5210/fm.v29i4.13636.

Gerard, David. ‘Pivot to AI: Hallucinations Worsen as the Money Runs Out’. Attack of the 50 Foot Blockchain (blog), 11 April 2024. https://davidgerard.co.uk/blockchain/2024/04/11/pivot-to-ai-hallucinations-worsen-as-the-money-runs-out/.

Germain, Thomas. ‘‘Magic Intelligence in the Sky’: Sam Altman Has a Cute New Name for the Singularity’. Gizmodo, 13 November 2023. https://gizmodo.com/sam-altman-openai-agi-board-decision-1851017018.

Goldman, Sharon. ‘In Davos, Sam Altman Softens Tone on AGI Two Months after OpenAI Drama’. VentureBeat, 17 January 2024. https://venturebeat.com/ai/in-davos-sam-altman-softens-tone-on-agi-two-months-after-openai-drama/.

Goodfellow, Ian J., Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. ‘Generative Adversarial Nets’. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, 2672–80. NIPS’14. Cambridge, MA, USA: MIT Press, 2014.

Google. ‘Gemini’, n.d. https://gemini.google.com.

Gray, Mary L., and Siddharth Suri. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Boston: Houghton Mifflin Harcourt, 2019.

Gray Widder, David, Sarah West, and Meredith Whittaker. ‘Open (For Business): Big Tech, Concentrated Power, and the Political Economy of Open AI’. SSRN Electronic Journal, 2023. https://doi.org/10.2139/ssrn.4543807.

Green, Lelia. ‘Technoculture: Another Term That Means Nothing and Gets Us Nowhere?’ Media International Australia 98, no. 1 (February 2001): 11–25. https://doi.org/10.1177/1329878X0109800105.

Hammond, George. ‘Speed of AI Development Is Outpacing Risk Assessment’. The Financial Times, 4 October 2024. https://arstechnica.com/ai/2024/04/speed-of-ai-development-is-outpacing-risk-assessment/.

Heath, Alex. ‘Mark Zuckerberg’s New Goal Is Creating Artificial General Intelligence’. The Verge, 18 January 2024. https://www.theverge.com/2024/1/18/24042354/mark-zuckerberg-meta-agi-reorg-interview.

Heilbroner, Robert L. ‘Do Machines Make History?’ Technology and Culture 8, no. 3 (July 1967): 335. https://doi.org/10.2307/3101719.

Horwitz, Jeff. Broken Code: Inside Facebook and the Fight to Expose Its Harmful Secrets. First edition. New York: Doubleday, 2023.

———. ‘Chapter 2’. In Broken Code: Inside Facebook and the Fight to Expose Its Harmful Secrets, First edition. New York: Doubleday, 2023.

———. ‘Chapter 3’. In Broken Code: Inside Facebook and the Fight to Expose Its Harmful Secrets, First edition. New York: Doubleday, 2023.

———. ‘Chapter 17’. In Broken Code: Inside Facebook and the Fight to Expose Its Harmful Secrets, First edition. New York: Doubleday, 2023.

Hughes, Thomas Parke. ‘Introduction’. In Networks of Power: Electrification in Western Society, 1880 - 1930, Softshell Books ed., 14–17. Softshell Books History of Technology. Baltimore, Md.: John Hopkins Univ. Press, 1993.

Imnimo. ‘A Class of Problem That GPT-4 Appears to Still Really Struggle with Is Variants of Common Puzzles.’ Hacker News, 14 March 2023. https://news.ycombinator.com/item?id=35155467.

Jensen, Tabi. ‘An AI “Sexbot” Fed My Hidden Desires—and Then Refused to Play’. WIRED, 9 March 2023. https://www.wired.com/story/replika-chatbot-sexuality-ai.

Jones, Nicola. ‘AI Now Beats Humans at Basic Tasks — New Benchmarks Are Needed, Says Major Report’. Nature 628, no. 8009 (25 April 2024): 700–701. https://doi.org/10.1038/d41586-024-01087-4.

Kahn, Jeremy. ‘Exclusive: OpenAI Promised 20% of Its Computing Power to Combat the Most Dangerous Kind of AI—but Never Delivered, Sources Say’. Fortune, 21 May 2024. https://fortune.com/2024/05/21/openai-superalignment-20-compute-commitment-never-fulfilled-sutskever-leike-altman-brockman-murati/.

Karnofsky, Holden. ‘Forecasting Transformative AI, Part 1: What Kind of AI?’ Cold Takes (blog), 10 August 2021. https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/.

Karpf, Dave. ‘That Old WIRED Ideology’. Substack blog. The Future, Now and Then (blog), n.d. https://davekarpf.substack.com/p/that-old-wired-ideology.

Lenat, Doug, and Gary Marcus. ‘Getting from Generative AI to Trustworthy AI: What LLMs Might Learn from Cyc’. arXiv, 31 July 2023. http://arxiv.org/abs/2308.04445.

Levels.fyi. ‘OpenAI’. Accessed 4 June 2024. https://www.levels.fyi/companies/openai/salaries/software-engineer.

LinkedIn. ‘OpenAI’. Accessed 4 June 2024. https://www.linkedin.com/company/openai/.

Mac, Ryan, Craig Silverman, and Jane Lytvynenko. ‘Facebook Stopped Employees From Reading An Internal Report About Its Role In The Insurrection. You Can Read It Here.’, 26 April 2021. https://www.buzzfeednews.com/article/ryanmac/full-facebook-stop-the-steal-internal-report.

Mac, Ryan, Charlie Warzel, and Alex Kantrowitz. ‘Growth At Any Cost: Top Facebook Executive Defended Data Collection In 2016 Memo — And Warned That Facebook Could Get People Killed’. Buzzfeed News, 29 March 2018. https://www.buzzfeednews.com/article/ryanmac/growth-at-any-cost-top-facebook-executive-defended-data.

Markelius, Alva, Connor Wright, Joahna Kuiper, Natalie Delille, and Yu-Ting Kuo. ‘The Mechanisms of AI Hype and Its Planetary and Social Costs’. AI and Ethics, 2 April 2024. https://doi.org/10.1007/s43681-024-00461-2.

Martin, Lauren, Nick Whitehouse, Stephanie Yiu, Lizzie Catterson, and Rivindu Perera. ‘Better Call GPT, Comparing Large Language Models Against Lawyers’. arXiv, 23 January 2024. http://arxiv.org/abs/2401.16212.

Marx, Karl. ‘Estranged Labour’. In Economic and Philosophical Manuscripts of 1844, 1844. https://www.marxists.org/archive/marx/works/1844/manuscripts/labour.htm.

———. ‘The Fragment on Machines’. In The Grundrisse Der Kritik Der Politischen Ökonomie, 690–712, 1857. https://thenewobjectivity.com/pdf/marx.pdf.

Meta. ‘Discover the Possibilities with Meta Llama’. Meta Llama, n.d. https://llama.meta.com/.

———. ‘Driven by Our Belief That AI Should Benefit Everyone’, n.d. https://ai.meta.com/responsible-ai/.

———. ‘Meta Reports Fourth Quarter and Full Year 2023 Results; Initiates Quarterly Dividend’. Meta Investor Relations, 1 February 2024. https://investor.fb.com/investor-news/press-release-details/2024/Meta-Reports-Fourth-Quarter-and-Full-Year-2023-Results-Initiates-Quarterly-Dividend/default.aspx.

———. ‘Our Story’. Meta.com, n.d. https://about.meta.com/company-info/.

———. ‘PyTorch’. Meta, n.d. https://ai.meta.com/tools/pytorch/.

Metz, Cade. ‘The Fear and Tension That Led to Sam Altman’s Ouster at OpenAI’. The New York Times, 18 November 2023. https://www.nytimes.com/2023/11/18/technology/open-ai-sam-altman-what-happened.html.

———. ‘“The Godfather of A.I.” Leaves Google and Warns of Danger Ahead’. The New York Times, 1 May 2023. https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html.

Metz, Cade, and Karen Weise. ‘Microsoft to Invest $10 Billion in OpenAI, the Creator of ChatGPT’, 23 January 2023. https://www.nytimes.com/2023/01/23/business/microsoft-chatgpt-artificial-intelligence.html.

Meyer, Michelle N. ‘Everything You Need to Know About Facebook’s Controversial Emotion Experiment’. WIRED, 30 June 2014. https://www.wired.com/2014/06/everything-you-need-to-know-about-facebooks-manipulative-experiment/.

MTurk. ‘Amazon Mechanical Turk’, n.d. https://www.mturk.com/.

Muniesa, Fabian. ‘Actor-Network Theory’. In International Encyclopedia of the Social & Behavioral Sciences, 80–84. Elsevier, 2015. https://doi.org/10.1016/B978-0-08-097086-8.85001-1.

Murgia, Madhumita, and George Hammond. ‘OpenAI on Track to Hit $2bn Revenue Milestone as Growth Rockets’. The Financial Times, 9 February 2024. https://www.ft.com/content/81ac0e78-5b9b-43c2-b135-d11c47480119.

Newton, Casey. ‘The Trauma Floor: The Secret Lives of Facebook Moderators in America’. The Verge, 25 February 2019. https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona.

OpenAI. ‘Hello GPT-4o’, 13 May 2024. https://openai.com/index/hello-gpt-4o/.

———. ‘Introducing ChatGPT’. OpenAI Blog (blog), 30 November 2022. https://openai.com/blog/chatgpt.

———. ‘OpenAI’s Approach to Frontier Risk’, 26 October 2023. https://openai.com/global-affairs/our-approach-to-frontier-risk/.

———. ‘Our Structure’. OpenAI, n.d. https://openai.com/our-structure.

———. ‘The Fastest and Most Powerful Platform for Building AI Products’, n.d. https://openai.com/api/.

OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, et al. ‘GPT-4 Technical Report’. arXiv, 4 March 2024. http://arxiv.org/abs/2303.08774.

Ouyang, Long, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, et al. ‘Training Language Models to Follow Instructions with Human Feedback’. arXiv, 4 March 2022. http://arxiv.org/abs/2203.02155.

Park, Peter S., Simon Goldstein, Aidan O’Gara, Michael Chen, and Dan Hendrycks. ‘AI Deception: A Survey of Examples, Risks, and Potential Solutions’. Patterns 5, no. 5 (May 2024): 100988. https://doi.org/10.1016/j.patter.2024.100988.

Peng, Binghui, Srini Narayanan, and Christos Papadimitriou. ‘On Limitations of the Transformer Architecture’. arXiv, 26 February 2024. http://arxiv.org/abs/2402.08164.

Perrigo, Billy. ‘Meta’s AI Chief Yann LeCun on AGI, Open-Source, and AI Risk’. TIME Magazine, 13 February 2024. https://time.com/6694432/yann-lecun-meta-ai-interview.

Proctor, Jason. ‘Air Canada Found Liable for Chatbot’s Bad Advice on Plane Tickets’, 15 February 2024. https://www.cbc.ca/news/canada/british-columbia/air-canada-chatbot-lawsuit-1.7116416.

Rivera, Juan-Pablo, Gabriel Mukobi, Anka Reuel, Max Lamparth, Chandler Smith, and Jacquelyn Schneider. ‘Escalation Risks from Language Models in Military and Diplomatic Decision-Making’. arXiv, 7 January 2024. http://arxiv.org/abs/2401.03408.

Roose, Kevin. ‘This A.I. Subculture’s Motto: Go, Go, Go’. The New York Times, 10 December 2023. https://www.nytimes.com/2023/12/10/technology/ai-acceleration.html.

Sadowski, Jathan. ‘Potemkin AI’. Real Life, 6 August 2018. https://reallifemag.com/potemkin-ai/.

Samuel, Sigal. ‘“I Lost Trust”: Why the OpenAI Team in Charge of Safeguarding Humanity Imploded’. Vox, 19 May 2024. https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence.

Schüll, Natasha Dow. ‘Introduction: Mapping the Machine Zone’. In Addiction by Design: Machine Gambling in Las Vegas. Princeton: Princeton university press, 2014.

Scott, James C. Seeing like a State: How Certain Schemes to Improve the Human Condition Have Failed. Veritas paperback edition. Yale Agrarian Studies. New Haven, CT London: Yale University Press, 2020.

Seaver, Nick. ‘Review of Spotify Teardown: Inside the Black Box of Streaming Music, by Maria Eriksson, Rasmus Fleischer, Anna Johansson, et Al.’ Information & Culture: A Journal of History 54, no. 3 (2019): 396–98.

Shiller, Robert James. ‘1. The Bitcoin Narratives’. In Narrative Economics: How Stories Go Viral & Drive Major Economic Events. Book Collections on Project MUSE. Princeton: Princeton University press, 2019.

———. Narrative Economics: How Stories Go Viral & Drive Major Economic Events. Book Collections on Project MUSE. Princeton: Princeton University press, 2019.

Shumailov, Ilia, Zakhar Shumaylov, Yiren Zhao, Yarin Gal, Nicolas Papernot, and Ross Anderson. ‘The Curse of Recursion: Training on Generated Data Makes Models Forget’. arXiv, 14 April 2024. http://arxiv.org/abs/2305.17493.

Siddarth, Divya, Daron Acemoglu, Danielle Allen, Kate Crawford, James Evans, Michael Jordan, and E. Glen Weyl. ‘How AI Fails Us’. Harvard University Carr Centre for Human Rights Policy and Justice, Health, and Democracy Impact Initiative, 1 December 2021. https://ethics.harvard.edu/how-ai-fails-us.

Silver, David, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, et al. ‘Mastering the Game of Go without Human Knowledge’. Nature 550, no. 7676 (October 2017): 354–59. https://doi.org/10.1038/nature24270.

Simon, Charles, and Forbes Technology Council. ‘AGI Is Ready To Emerge (Along With The Risks It Will Bring)’, 27 July 2022. https://www.forbes.com/sites/forbestechcouncil/2022/07/27/agi-is-ready-to-emerge-along-with-the-risks-it-will-bring/?sh=181711d2332e.

Steinberg, Arieh, Mark Tonkelowitz, Peter Deng, Adam Mosseri, Adam Hupp, Aaron Sittig, and Mark Zuckerberg. Filtering content in a social networking service. 10379703, filed 26 June 2015, and issued 13 August 2019.

Takahashi, Dean. ‘Altera Raises $9M to Develop AI for Digital Humans’. VentureBeat (GamesBeat), 8 May 2024. https://venturebeat.com/games/altera-raises-9m-to-develop-ai-for-digital-humans/.

Tangermann, Victor. ‘OpenAI Employees Say Firm’s Chief Scientist Has Been Making Strange Spiritual Claims’. Futurism, 20 November 2023. https://futurism.com/openai-employees-say-firms-chief-scientist-has-been-making-strange-spiritual-claims.

The Associated Press. ‘Amazon Cuts Hundreds of Jobs in Its Alexa Unit as It Doubles down on Layoffs That Already Total More than 27,000 over the Past Year’. Fortune, 17 November 2023. https://fortune.com/2023/11/17/amazon-layoffs-alexa-division-ai-andy-jassy/.

Troy, Dave. ‘The Wide Angle: Understanding TESCREAL — the Weird Ideologies Behind Silicon Valley’s Rightward Turn’. The Washington Spectator, 1 May 2023. https://washingtonspectator.org/understanding-tescreal-silicon-valleys-rightward-turn/.

Turing, A. M. ‘On Computable Numbers, with an Application to the Entscheidungsproblem’. Proceedings of the London Mathematical Society s2-42, no. 1 (1937): 230–65. https://doi.org/10.1112/plms/s2-42.1.230.

Udandarao, Vishaal, Ameya Prabhu, Adhiraj Ghosh, Yash Sharma, Philip H. S. Torr, Adel Bibi, Samuel Albanie, and Matthias Bethge. ‘No “Zero-Shot” Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance’. arXiv, 8 April 2024. http://arxiv.org/abs/2404.04125.

Vallance, Chris. ‘Artificial Intelligence Could Lead to Extinction, Experts Warn’. BBC, 30 May 2023. https://www.bbc.com/news/uk-65746524.

Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. ‘Attention Is All You Need’. arXiv, 1 August 2023. http://arxiv.org/abs/1706.03762.

Vincent, James. ‘How Much Electricity Does AI Consume?’, 16 February 2024. https://www.theverge.com/24066646/ai-electricity-energy-watts-generative-consumption.

Vinge, Vernor. ‘The Coming Technological Singularity’. Whole Earth Review, 1993. https://accelerating.org/articles/comingtechsingularity.

Warzel, Charlie. ‘OpenAI Just Gave Away the Entire Game’. The Atlantic, 24 May 2024. https://www.theatlantic.com/technology/archive/2024/05/openai-scarlett-johansson-sky/678446/.

Wenar, Leif. ‘The Deaths of Effective Altruism’. WIRED, 27 March 2024. https://www.wired.com/story/deaths-of-effective-altruism/.

Yann LeCun: Meta’s New AI Model LLaMA; Why Elon Is Wrong about AI; Open-Source AI Models | E1014 (Starting 25:01), n.d. https://www.youtube.com/watch?v=OgWaowYiBPM.

Yao, Deborah. ‘Microsoft’s GitHub Copilot Loses $20 a Month Per User’, 11 October 2023. https://aibusiness.com/nlp/github-copilot-loses-20-a-month-per-user.

Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. First trade paperback edition. New York, NY: PublicAffairs, 2020.

———. ‘The Discovery of Behavioural Surplus’. In The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, chap. 3. London: Profile books, 2019.


  1. Simon and Forbes Technology Council, ‘AGI Is Ready To Emerge (Along With The Risks It Will Bring)’. 

  2. Schüll, ‘Introduction: Mapping the Machine Zone’. 

  3. Ibid. 

  4. Barbrook and Cameron, ‘The Californian Ideology’. 

  5. Gebru and Torres, ‘The TESCREAL Bundle’. 

  6. Siddarth et al., ‘How AI Fails Us’. 

  7. Zuboff, The Age of Surveillance Capitalism

  8. Also referred to as “Andreessen Horowitz”. 

  9. Hughes, ‘Introduction’. 

  10. Ibid. 

  11. Heilbroner, ‘Do Machines Make History?’ 

  12. Franklin, The Real World of Technology

  13. Green, ‘Technoculture’. 

  14. Heilbroner, ‘Do Machines Make History?’ 

  15. Deibert, ‘Introduction’. 

  16. Siddarth et al., ‘How AI Fails Us’. 

  17. Gray and Suri, Ghost Work

  18. Silver et al., ‘Mastering the Game of Go without Human Knowledge’. 

  19. Bubeck et al., ‘Sparks of Artificial General Intelligence’. 

  20. Martin et al., ‘Better Call GPT, Comparing Large Language Models Against Lawyers’. 

  21. Rivera et al., ‘Escalation Risks from Language Models in Military and Diplomatic Decision-Making’. 

  22. Sadowski, ‘Potemkin AI’. 

  23. Siddarth et al., ‘How AI Fails Us’. 

  24. Jensen, ‘An AI “Sexbot” Fed My Hidden Desires—and Then Refused to Play’. 

  25. Marx, ‘The Fragment on Machines’. 

  26. Marx, ‘Estranged Labour’. 

  27. Muniesa, ‘Actor-Network Theory’. 

  28. Shiller, Narrative Economics

  29. Turing, ‘On Computable Numbers, with an Application to the Entscheidungsproblem’. 

  30. Andreessen, ‘Why Software Is Eating the World’. 

  31. Ibid. 

  32. Drucker, ‘Data vs. Capta: A Brief Polemic (Data Modelling and Use)’. 

  33. Dijkstra, ‘A Note on Two Problems in Connexion with Graphs’. 

  34. Barbrook and Cameron, ‘The Californian Ideology’. 

  35. Zuboff, ‘The Discovery of Behavioural Surplus’. 

  36. The importance of scale will become evident in the case study section. 

  37. Breiman, ‘Statistical Modeling’. 

  38. Barbrook and Cameron, ‘The Californian Ideology’. 

  39. Takahashi, ‘Altera Raises $9M to Develop AI for Digital Humans’. 

  40. Feng, ‘Epic Drought in Taiwan Pits Farmers against High-Tech Factories for Water’. 

  41. Franklin, ‘Cloud Control, or the Network as Medium’. 

  42. Drucker, ‘Data vs. Capta: A Brief Polemic (Data Modelling and Use)’. 

  43. Doctorow, ‘Even If You Think AI Search Could Be Good, It Won’t Be Good’. 

  44. Goodfellow et al., ‘Generative Adversarial Nets’. 

  45. OpenAI, ‘Introducing ChatGPT’. 

  46. Vaswani et al., ‘Attention Is All You Need’. 

  47. Bubeck et al., ‘Sparks of Artificial General Intelligence’. 

  48. This is sometimes mentioned alongside the concept of Artificial Superintelligence (ASI), but since the boundary between the two terms is unclear I shall use the more common term. 

  49. “OpenAI’s stated mission is to create this artificial general intelligence, or AGI. Demis Hassabis, the leader of Google’s AI efforts, has the same goal. Now, Meta CEO Mark Zuckerberg is entering the race.” From Heath, ‘Mark Zuckerberg’s New Goal Is Creating Artificial General Intelligence’. 

  50. Vinge, ‘The Coming Technological Singularity’. 

  51. Germain, ‘‘Magic Intelligence in the Sky’: Sam Altman Has a Cute New Name for the Singularity’. 

  52. OpenAI, ‘Our Structure’. 

  53. Altman, ‘Moore’s Law for Everything’. 

  54. Andreessen, ‘The Techno-Optimist Manifesto’. 

  55. CEO of OpenAI and head of Google Deepmind. 

  56. Vallance, ‘Artificial Intelligence Could Lead to Extinction, Experts Warn’. 

  57. OpenAI, ‘Our Structure’. 

  58. Evans, ‘The Cult of AI’. 

  59. Siddarth et al., ‘How AI Fails Us’. 

  60. It should be noted however that the Valley Institutions rarely paint AGI as a force they will be in full control of after it is “complete”. To the contrary, AEAI emphasises the concentration of power under “a small group of engineers of AI systems”, ultimately attributing full responsibility to the human engineers of AI. 

  61. Tangermann, ‘OpenAI Employees Say Firm’s Chief Scientist Has Been Making Strange Spiritual Claims’. 

  62. Karpf, ‘That Old WIRED Ideology’. 

  63. Roose, ‘This A.I. Subculture’s Motto: Go, Go, Go’. 

  64. Gebru and Torres, ‘The TESCREAL Bundle’. 

  65. Troy, ‘The Wide Angle: Understanding TESCREAL — the Weird Ideologies Behind Silicon Valley’s Rightward Turn’. 

  66. Gebru and Torres, ‘The TESCREAL Bundle’. 

  67. Hammond, ‘Speed of AI Development Is Outpacing Risk Assessment’. 

  68. Metz, ‘“The Godfather of A.I.” Leaves Google and Warns of Danger Ahead’. 

  69. Wenar, ‘The Deaths of Effective Altruism’. 

  70. Metz, ‘The Fear and Tension That Led to Sam Altman’s Ouster at OpenAI’. 

  71. Samuel, ‘“I Lost Trust”: Why the OpenAI Team in Charge of Safeguarding Humanity Imploded’. 

  72. Markelius et al., ‘The Mechanisms of AI Hype and Its Planetary and Social Costs’. 

  73. Altman, ‘Moore’s Law for Everything’. 

  74. Ibid. 

  75. Including the kinds of intellectual labour traditionally performed by the managerial class and the capitalists themselves. 

  76. OpenAI, ‘Hello GPT-4o’. 

  77. Allyn, ‘ChatGPT Maker OpenAI Exploring How to “responsibly” Make AI Erotica’. 

  78. Fiverr, ‘Fiverr’. 

  79. MTurk, ‘Amazon Mechanical Turk’. 

  80. BetterHelp, ‘BetterHelp’. 

  81. A similar argument explains why the Valley is loathe to invest in fundamental science research or “deep tech”, given the high costs of developing a new technology versus commercialising an existing technology. 

  82. Jones, ‘AI Now Beats Humans at Basic Tasks — New Benchmarks Are Needed, Says Major Report’. 

  83. Murgia and Hammond, ‘OpenAI on Track to Hit $2bn Revenue Milestone as Growth Rockets’. 

  84. ET Online, ‘OpenAI Faces Financial Challenges amid User Decline: Experts Predict Bankruptcy Concerns’. 

  85. LinkedIn, ‘OpenAI’. 

  86. Levels.fyi, ‘OpenAI’. 

  87. Yao, ‘Microsoft’s GitHub Copilot Loses $20 a Month Per User’. 

  88. Vincent, ‘How Much Electricity Does AI Consume?’ 

  89. The Associated Press, ‘Amazon Cuts Hundreds of Jobs in Its Alexa Unit as It Doubles down on Layoffs That Already Total More than 27,000 over the Past Year’. 

  90. Metz and Weise, ‘Microsoft to Invest $10 Billion in OpenAI, the Creator of ChatGPT’. 

  91. Agre, ‘Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI’. 

  92. Shiller, ‘1. The Bitcoin Narratives’. 

  93. OpenAI, ‘The Fastest and Most Powerful Platform for Building AI Products’. 

  94. Siddarth et al., ‘How AI Fails Us’. 

  95. Dell’Acqua et al., ‘Navigating the Jagged Technological Frontier’. 

  96. AI will aid in “increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge” as well as augment humans and “give everyone incredible new capabilities”. From https://openai.com/blog/planning-for-agi-and-beyond 

  97. Goldman, ‘In Davos, Sam Altman Softens Tone on AGI Two Months after OpenAI Drama’. 

  98. Jones, ‘AI Now Beats Humans at Basic Tasks — New Benchmarks Are Needed, Says Major Report’. 

  99. Devin Didn’t Solve My Computer Vision Project

  100. E.g. claims like “GPTs/LLMs will never…” 

  101. Peng, Narayanan, and Papadimitriou, ‘On Limitations of the Transformer Architecture’. 

  102. Imnimo, ‘A Class of Problem That GPT-4 Appears to Still Really Struggle with Is Variants of Common Puzzles.’ 

  103. Ouyang et al., ‘Training Language Models to Follow Instructions with Human Feedback’. 

  104. Proctor, ‘Air Canada Found Liable for Chatbot’s Bad Advice on Plane Tickets’. 

  105. “Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.” 

  106. Meta, ‘Discover the Possibilities with Meta Llama’. 

  107. Google, ‘Gemini’. 

  108. A practice where a company releases the binary executable of a program for free rather than the source code. Users can download and use the software for free, but cannot easily modify the software. 

  109. Gray Widder, West, and Whittaker, ‘Open (For Business)’. 

  110. This is not guaranteed, given that problems like confabulation or hallucination may not be tackled fully even with more investment and training. 

  111. Karnofsky, ‘Forecasting Transformative AI, Part 1: What Kind of AI?’ 

  112. Lenat and Marcus, ‘Getting from Generative AI to Trustworthy AI’. 

  113. Shumailov et al., ‘The Curse of Recursion’. 

  114. Gebru: “[…] this quest to create a superior being akin to a machine-god has resulted in current (real, non-AGI) systems that are unscoped and thus unsafe.” From Gebru and Torres, ‘The TESCREAL Bundle’. 

  115. Gerard, ‘Pivot to AI: Hallucinations Worsen as the Money Runs Out’. 

  116. Amadeo, ‘Google Lays off “Hundreds” More Employees, Strips Google Assistant Features’. 

  117. See the last section for a cost breakdown. 

  118. In this instance we do not distinguish between internal funding (Microsoft funding GitHub Copilot, which it owns) and external funding (Microsoft funding OpenAI’s efforts to develop successors to GPT-4). Both require the buy-in of high-level Valley Institution figures due to the high capital expenditure required. 

  119. For examples of such statements, see OpenAI, ‘OpenAI’s Approach to Frontier Risk’ or 

  120. Future of Life Institute, ‘Pause Giant AI Experiments: An Open Letter’. 

  121. Kahn, ‘Exclusive: OpenAI Promised 20% of Its Computing Power to Combat the Most Dangerous Kind of AI—but Never Delivered, Sources Say’. 

  122. Horwitz, Broken Code

  123. Seaver, ‘Review of Spotify Teardown: Inside the Black Box of Streaming Music, by Maria Eriksson, Rasmus Fleischer, Anna Johansson, et Al.’ 

  124. Dunn, ‘Introducing FBLearner Flow: Facebook’s AI Backbone’. 

  125. Meta, ‘Our Story’. 

  126. Steinberg et al., Filtering content in a social networking service. 

  127. Dixon, ‘Number of Monthly Active Facebook Users Worldwide as of 4th Quarter 2023’. 

  128. Zuboff, The Age of Surveillance Capitalism

  129. Later renamed Fundamental AI Research (FAIR). 

  130. Perrigo, ‘Meta’s AI Chief Yann LeCun on AGI, Open-Source, and AI Risk’. 

  131. Ibid. 

  132. LeCun even admits on record that Facebook is the main beneficiary of open source NLP AI research. In Yann LeCun: Meta’s New AI Model LLaMA; Why Elon Is Wrong about AI; Open-Source AI Models | E1014 (Starting 25:01)

  133. Meta, ‘PyTorch’. 

  134. Park et al., ‘AI Deception’. 

  135. Horwitz, ‘Chapter 3’. 

  136. Horwitz, ‘Chapter 2’. 

  137. Mac, Warzel, and Kantrowitz, ‘Growth At Any Cost: Top Facebook Executive Defended Data Collection In 2016 Memo — And Warned That Facebook Could Get People Killed’. 

  138. Horwitz: “In the context of a massive and rapidly expanding market, the company’s mission of making the world more open and connected could sometimes be hard to distinguish from the more craven pursuit of locking down market share.” In Horwitz, ‘Chapter 3’. 

  139. Scott, Seeing like a State

  140. Horwitz, ‘Chapter 2’. However, executive reluctance to accept data showing the negative effects of their platform can be found in Horwitz, ‘Chapter 17’. 

  141. Ibid. 

  142. Horwitz says of this feature: “[FB Learner] packaged [machine learning techniques] into a template that could be used by engineers who quite literally did not understand what they were doing”. In Horwitz, ‘Chapter 3’. 

  143. Meyer, ‘Everything You Need to Know About Facebook’s Controversial Emotion Experiment’. 

  144. Horwitz, ‘Chapter 3’. 

  145. Dwoskin, ‘Misinformation on Facebook Got Six Times More Clicks than Factual News during the 2020 Election, Study Says’. 

  146. Mac, Silverman, and Lytvynenko, ‘Facebook Stopped Employees From Reading An Internal Report About Its Role In The Insurrection. You Can Read It Here.’ 

  147. Newton, ‘The Trauma Floor: The Secret Lives of Facebook Moderators in America’. 

  148. Meta, ‘Meta Reports Fourth Quarter and Full Year 2023 Results; Initiates Quarterly Dividend’. 

  149. Warzel, ‘OpenAI Just Gave Away the Entire Game’.