Towards an AI Eutopia

AI Eutopia

It is not immediately obvious how to define an AI-driven Eutopian world. It is unlikely that a single universal set of principles could be defined and agreed upon by all. Here I will argue for a limited set of key principles that I feel are both necessary and relevant for a better future given a world governed by advanced AI technology. Still, the following should certainly not be construed as a complete list.

Governance is democratic.

Starting off on solid ground, it seems unlikely that this principle would be controversial. We want the use of AI technology to be democratically governed. Further, the governing bodies responsible for AI should be reasonably representative of the people and interests that any relevant AIs affect, with a lean towards safeguarding against new, undue deleterious effects, even when the technology in question is, all trade-offs considered, slightly beneficial.

With AI there’s more to the story, however. AI technology is often incredibly disruptive with an extraordinarily short transition time. While this is true with many digital technologies, the nature of AI technology is such that the disruption can be both rapid and dangerous. The pre- and post-AlphaGo go scenes are completely different, with the transition occurring nearly overnight. The financial plight of professional go players aside, the immediate side effects of AlphaGo are rather innocuous. It is not obvious to me, however, that similar technologies are not bringing with them hidden catastrophic consequences— released today, to be realized in a few years. 

AlphaFold, anyone? More on that later.

Transparency.

Again likely uncontroversial. We would prefer that there be transparency in any AI in public use. Government regulation and public decision making require accurate information on what AIs are being used and the impact they are having. Closed-door development with undisclosed release of proprietary AI technology is antithetical to this goal.

Still, we should be careful what we ask for. 

One form of transparency I do not believe we want: open-source transparency for all AI models. Often the knowledge that something is even possible is incredibly powerful, perhaps too powerful already. Particularly in AI, just knowing that someone has solved a particular problem with current algorithms is often enough to make replication worthwhile and provide sufficient impetus to sustain development until the solution is redeveloped or discovered. Publications are just icing on the cake. Open-source code— a personal Michelin chef.

Unfortunately, some AI is dangerous. Or certainly will be dangerous. As such, it would be irresponsible to release all AI publicly. Instead, what we actually want is to ensure that we are all treated fairly and remain in control of how AI is used. I accept sacrificing public (personal) knowledge of the mechanisms by which AI works if it leads to safer, more morally responsible AI development. What I don’t want, and would not accept, is to sacrifice our agency in the world. We should not be controlled or forced to live in systems where the controlling actors are locked behind closed doors, with no accountability or transparency in the decision-making process.

Research is conducted safely and responsibly.

While the statement that we want research to progress responsibly seems incontrovertible, there is significant debate in the academic community as to what this statement means in practice. As alluded to above, I do not believe that all knowledge needs to be freely distributed and shared. I do not know how to isolate uranium-235 from uranium-238 (or even where to purchase uranium-238— I presume I couldn’t even if I tried), and yet I also do not feel that this lack of knowledge infringes upon my rights as an individual or diminishes my personal agency.

I think that we, the academic AI community, need to take more responsibility in creating pathways for safe AI development and experimentation. The public will not know or understand what we are building until it is released and in their hands. And by then it is too late. 

Government officials are also unlikely to help. A recent example— when called to testify before Congress about Facebook and its role in the previous election, Zuckerberg was asked to clarify what the difference between Facebook and Twitter was, and to explain how Facebook could make money if users didn’t pay for the service—

Not exactly the zinger questions we would expect from a government body tasked with keeping our leading corporate players honest. 

It is clear that the academic community will need to lead and manage itself to foster safe AI development.

Aligned profit motives. 

Let the controversies begin. 

I am a capitalist, insofar as I generally support free market principles and believe they are useful for optimizing material production, both in reducing costs and increasing access. I do believe, however, that AI is a different game.

There is a qualitative distinction between reducing labor (or making it more efficient) and removing labor. There is a rich literature from the past two decades— particularly with the rise of Facebook, Amazon, Apple, Microsoft, and Google (FAAMG)— showing that oligopolization and winner-takes-all markets occur, and will continue to occur, despite the digital revolution creating zero marginal replication costs.

In short, there is still an upfront development cost to software even if replication is free (and this applies equally to AI). Additionally, network effects in exchange economies stabilize around one or a few winners, regardless of whether the commodity exchanged is real or digital. The result is, invariably, market exchanges controlled by oligopolies. In the digital case, labor isn’t used to make more copies; it’s used to make new versions.
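To make the arithmetic concrete, here is a toy sketch of my own (the function and numbers are entirely made up for illustration, not drawn from any real data): the per-user cost of a digital good is just the development cost amortized over the user base, so once the marginal cost of another copy is near zero, the largest player can always undercut everyone else.

    # Toy illustration (hypothetical numbers): the average cost to serve one
    # user is the fixed development cost amortized over the user base, plus
    # the marginal cost of each additional copy.
    def per_user_cost(dev_cost, marginal_cost, users):
        return dev_cost / users + marginal_cost

    # Physical good: marginal cost dominates, so scale advantages flatten out.
    per_user_cost(1e6, 50.0, 10_000)     # 150.0 per user
    per_user_cost(1e6, 50.0, 1_000_000)  # 51.0 per user

    # Digital good (software, or a trained AI model): marginal cost ~ 0, so
    # per-user cost keeps falling with scale and the biggest player always
    # wins on price.
    per_user_cost(1e6, 0.0, 10_000)      # 100.0 per user
    per_user_cost(1e6, 0.0, 1_000_000)   # 1.0 per user

The point is simply that once the marginal-cost term vanishes, scale is the only term left, which is where the winner-takes-all dynamic comes from.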

While the internet revolution has been patently drastic, the situation may not necessarily seem dire. The world has hummed along unabated in the FAAMG era without any obvious civilization-level catastrophes to date. Indeed the world has improved under FAAMG in many ways. And it seems plausible that AI might fit into the current narrative— development costs increase (creating AI is hard) and replication costs decrease (AI is just software at the end of the day). Something something wiggle wiggle and we all continue to hit our 2% inflation target evermore. 

However, I would argue the situation with AI development is different for the following reasons:

  1. The industries that FAAMG affects are largely new; AI will affect traditional industries as well. The internet-based digital world has only been around for a few decades, and its inception brought with it entirely new industries in digital communication, social networking, entertainment, work, and information sharing— all of which are job-creating. In contrast, in situations where a pre-existing commodity has transitioned from real to digital, the results have generally been dramatically more negative. Consider, for instance, the pre- and post-Napster music industries. New AI technologies will take real commodities (e.g. transportation services) and turn them into effectively digital commodities (e.g. driverless transportation services)— i.e. they are job-destroying. AI will additionally affect manufacturing, inspection, security, retail, agriculture and other blue-collar industries directly, dramatically reducing labor needs while increasing profit margins for corporate players.

  2. Even FAAMG companies can’t conquer industries overnight; AI can. Virtually every FAAMG (and FAAMG-like) company has fundamentally changed the way we live our lives or do business. Still, changing an industry is not the same as replacing it— at least not at industry-wide scale, overnight. For industries ripe for AI disruption, however, whoever controls the AI may quickly find themselves the new owner of an entire industry over a remarkably short time scale. Because AI dominance is often so competitively decisive, an 18-24 month AI development lead will often prove sufficient to flip an industry-wide oligopoly to new players. Compare this timescale to the slow decline of Intel paired with the slow rise of NVIDIA/AMD over the past decade. Hardware advancement obeys Moore’s law. AI advancement is considerably faster.

  3. FAAMG products are largely not interchangeable; AI products are. Or at least FAAMG products are not generally treated as such. An iPhone is not a Galaxy, even though they may serve the same niche. The same holds for digital entertainment: two movies or songs are generally not treated as interchangeable. AI, however, will be treated more like compression algorithms: if they have the same Weissman score, they are the same. As such, once AI development in a particular direction plateaus, it will be considered “done”, and continued development in that direction will drop significantly as interest moves to other areas.

If the above characterizations hold, then it seems likely that the course of AI development over the next twenty years will be strikingly different from the digital revolution of the last twenty in at least the following ways:

  1. Significantly more people will be directly affected. We will see significantly more turnover in pre-existing industries. Layoffs in select industries will be rapid and severe. Consider that there were ~1.8 million truck drivers in the United States alone in 2018, one of the last middle-class jobs left that does not require a university degree. Given the intense interest and rapid advancement in driverless technology, it seems doubtful that nearly as many drivers will remain in 2028.

  2. The speed of change will be much more drastic. Continuing with the trucking industry, the loss of those 1.8 million jobs is unlikely to occur evenly over the course of a decade. Rather, we are going to see little substantive change until the AI technology is perfected, followed by a complete industry-wide transformation over an 18-24 month period.

  3. “Learn to code” isn’t going to cut it. When thousands of AI-related jobs can replace millions, even if we imagine that hypothetical work eventually exists for those millions to transfer into, the pace of change will be too rapid for contemporary societal, educational and welfare infrastructure to absorb. We are going to need new structures in place to help those negatively affected by novel AI technologies.

Given the above, accepting that the AI revolution is upon us and that at least some of these predictions are likely to come true, let’s consider a couple of paths along which AI development could proceed, and ask which would produce the outcome we most desire.

At a macro level, let’s consider the following scenarios:

  1. Everybody makes all the things, open-sourcing all the things as they go.

  2. Everybody makes all the things, keeping as much as possible behind closed doors. 

Let’s admit a priori that everybody is going to want to make all the things, because there are entire industries to be won. The question, however, is whether development proceeds open- or closed-source. Certainly both will happen, to differing degrees, in different industries.

Closed-source development would make it more likely that a single player could gain an 18-24 month advantage and quickly destabilize an industry, leading to massive layoffs and monopolization (i.e. bad). On the other hand, open-source development would make it more likely that bad actors would be able to utilize a wider range of AI technologies for nefarious purposes (i.e. also bad). 

Bad vs. also bad. That doesn’t sound like a very auspicious position for us to be in, does it? But I believe we are not doomed yet. First, let us define a set of economic policies that align individual actors’ profit motives with a Eutopian vision:

  1. AI development should be siloed. If it is at all possible for advanced AI to be put to significantly dangerous or nefarious use, we are going to have to accept that freely available, open-source AI is ultimately not a tenable solution.

  2. AI usage (and not IP) must be made openly available. If we’re going to have siloed AI development, we’re going to need antitrust laws in place to force AI sharing between market participants in order to limit the power of any individual player. As mentioned, in the limit AI will be like compression software; it isn’t the factor we want deciding who our oligopoly participants are. Note that IP here includes publications and code, which we don’t, in general, want to be openly available.

  3. AI profits must be diverted to aid (negatively) affected people. Each industry will be different. That said, the industry best suited to take care of millions of truck drivers who lose their jobs is, unsurprisingly, the trucking industry. We cannot let an industry drop its workforce overnight to desperate ends while shuffling untold profits to a shareowner class. Industry players incorporating AI must be responsible for using the newfound profit to care for their previous workforces, for as long as is needed.

Reasonable AI Copyright.

Given that two AIs performing at the same level of accuracy under the same hardware constraints will be considered equivalent, and given that we would prefer a minimal amount of AI be developed so that unintended dangerous development does not become ubiquitous, the most desirable outcome for any specific use case would be for a single AI to be created, siloed, and then have its usage shared and made available to everyone.

In order to stop excessive duplicate development, it will be necessary that the AI be made available at cost, with no additional markup charged by an owning company, to level the playing field. That is, it would be desirable if AI fell into the public domain after a short period of time, so that anyone may use it, in accordance with governing law, at compute cost. What counts as a reasonable copyright term is debatable. Perhaps it’s a couple of decades, or a couple of years. Or perhaps a new form of copyright mechanism could be conceived that honors the original AI developers while still transferring ownership to the public domain within a relevant time period.

Another argument for a shorter copyright period stems from the belief that we are moving towards some form of Universal Basic Income (UBI). While that debate is both lengthy and controversial, if we stipulate that a UBI will be necessary at some point in the future, it makes sound fiscal policy to pay for it via the newfound revenues created by AI. As above, if we earmark AI revenues as they come online, it also becomes easier to target UBI policies towards people in affected industries, who may need it most.

Eutopian Summary.

Our Eutopian characterization here is certainly not an exhaustive description of a perfect AI world, as many people may rank other, unlisted properties higher, or even hold views directly opposed to those listed here. That said, I believe the following principles form a conservative set of guidelines to help steer us towards an AI future that is both realistic and workable for a wide range of people and interests.

In summary, our AI Eutopian vision is defined by the following principles:

  1. Democratic Governance

  2. Transparency on AI Usage, not IP

  3. Safe & Responsible AI Development

  4. Public AI Ownership (aka Reasonable Copyright)

  5. Aligned Profit Motives

    1. Siloed AI Development (aka not open-source)

    2. Shared AI Usage (aka Reasonable Antitrust Laws)

    3. Profit-sharing with Affected Individuals
