Avoiding an AI Dystopia


In our previous post we defined a set of principles describing the direction we would like to steer AI development towards. But where are we now, and where, by default, are we headed in the long term?

Semi-anarchic AI development. 

The current state of AI development is very fractured and open. 

There are no real government structures currently in place for unified, global AI governance (although many are beginning to sprout up). AI development itself shows a stark dichotomy between large, monolithic labs designing, implementing and training million-dollar models behind closed doors, and unguided, crowdsourced open development. The result is neither transparent nor safe. Further, AI ownership and profits are still governed implicitly by a winner-take-all system: you keep what you kill.

We should be careful what makes the menu.

The current situation may best be described by Nick Bostrom’s semi-anarchic default condition, introduced in his seminal paper The Vulnerable World Hypothesis. There he defines the semi-anarchic condition by three functional characteristics: (1) limited capacity for preventive policing, (2) limited capacity for global governance, and (3) the existence of diverse motivations (i.e., the presence of bad actors).

Bostrom argues that a prolonged semi-anarchic state will, under relatively conservative assumptions, lead to civilization-wide devastation by default. Exiting the semi-anarchic state, then, is his proposed solution for a vulnerable world.
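To see why “by default” carries so much weight, consider a toy version of Bostrom’s urn. The per-year probability and the time horizons below are my own illustrative assumptions, not figures from the paper: if every year spent in a semi-anarchic state carries even a small, independent chance of drawing a black ball, the cumulative odds of devastation compound toward certainty.

    # A minimal sketch of the compounding-risk argument. Each year of
    # semi-anarchic development is one draw from Bostrom's urn; the 1%
    # per-year probability is an illustrative assumption, not a figure
    # from the paper.
    def cumulative_risk(p_per_year: float, years: int) -> float:
        """Chance of at least one catastrophic ("black ball") draw."""
        return 1 - (1 - p_per_year) ** years

    for years in (10, 50, 100, 500):
        print(f"{years} years at 1%/year: {cumulative_risk(0.01, years):.0%}")
    # 10 years: 10%, 50 years: 39%, 100 years: 63%, 500 years: 99%

However small the per-year risk, remaining in the semi-anarchic state long enough drives the cumulative risk toward one. Hence, devastation by default.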

But is it really so bad? Civilization-wide destruction seems a bit much, doesn’t it?

Let’s try a thought experiment. Let’s try to end the world, today. Using only open-source AI and a semi-technically savvy homebody. 

What do we have at our disposal? 

We have published databases of every known viral DNA sequence. We have gene synthesis companies that will synthesize customer-supplied gene sequences on demand within days. We have CRISPR bacterial gain-of-function kits on sale for under $200. And AI? We have an open-sourced AI that solves protein folding.

AlphaFold, let’s see if we can use that.

I’m not a biologist and so this next bit is pure, unabashed speculation from a naive CRISPR ignoramus. Still, imagine attempting the following:

  1. Take a known viral sequence off a published internet database and compute the 3D structure of each of its proteins using AlphaFold.

  2. Do an evolutionary search for novel protein sequences that fold to the same functional 3D structures (again using AlphaFold).

  3. Order said protein sequences from a gene cloning lab. 

  4. Debug, test, and assemble at home with your own CRISPR kit, as needed.

Note the AlphaFold bit, which makes it possible to find functional protein analogs at home, on your laptop.

Asking my biologist friends (skipping the AI bits), I’m told Steps 3 and 4 are difficult, if not impossible, and that a myriad of problems would arise in implementation. For step 3: the synthesis company will check your sequence request against a rather detailed database and stop you from (illegally) ordering known viral sequences. But then, and again, bio ignoramus excogitating here, it seems likely that any protein is going to have many functional analogs with vastly different base sequences. If an analog can be found for every protein that would trip an alarm, it doesn’t seem difficult to fake a research organization and find a lab willing to supply the new sequences. For step 4: that’s not what today’s home CRISPR kits do.

Fine. But still, just wait twenty years. Or work in a real lab.

Biology is never easy, and the above should not be construed to say you can recreate viruses in your basement, today, using a $200 CRISPR kit. Still, relative to finding a few tens of thousands of protein analogs, the actual assembly seems a rather easy final step. It is certainly something a legitimate research laboratory could do given DeepMind’s resources and, by extension, something I believe may be doable at home twenty years from now.

That is to say: despite the undoubtedly positive benefits AlphaFold will bring, on the whole, to drug therapies and medical treatments, I believe its release has also significantly lowered the barrier to creating homemade epidemics through at least one potential pathway, without any oversight or significant forethought.

An AI-opic viewpoint, perhaps, but I can’t help thinking that releasing AlphaFold unleashes a horrific amount of power on the masses, one that is not yet being fully appreciated.

Looking Forward

Whether the explicit four-step scenario above has any technical merit is immaterial. The point is that, in the chain of steps necessary to create an epidemic-level pathogen, there needs to remain at least one exceptionally restrictive rate-limiting step. Someone, somewhere, needs to make sure we don’t science out all the rate-limiting steps in our academic exuberance for first authorships.

Is there any combination of AI and biotechnologies that could, today or soon hereafter, lead to civilization-wide devastation? What rate-limiting steps are we making sure to keep around? What about in finance? (Fake) news? Climate change? Nanotech? Do AI developers have any idea how their open-source code will be used in any of these industries? What about by military-industrial complexes the world over, working in secret?

In 2012, when Alex Krizhevsky won the ImageNet competition with a brand-new convolutional neural network (a pattern recognizer that classified dog breeds), was he aware he was removing the rate-limiting step in solving a four-thousand-year-old Chinese board game? I imagine not. And yet, it took less than four years to go from AlexNet to AlphaGo. And it was inexorable.

Consider all the capabilities that may be unlocked by the release of AlphaFold. You probably can’t. No one can. And yet that set is now an inevitable part of our shared future. We’ve pulled a very large handful of balls from the urn, and we will be discovering what we’ve already pulled, one by one, in the years to come. Whether or not we pulled a black ball, they’re all in our hand now.

Action Items

Despite the dire situation we may quickly be finding ourselves in, I do not believe that all is yet lost. We need to take swift action, however, if we are to avert the catastrophe that arrives by default.

In line with our discussion, here I propose a set of action items that push AI development either towards our Eutopic vision of the future or, at the very least, away from continued semi-anarchic development.

  1. Creation of a unified, global governing body. This body should be responsible for overseeing and guiding global AI development. Specific structures should be maintained to monitor the interplay between AI and other industries, so that affected people are considered and protected and the general public is kept safe from cataclysmic unintended consequences.

  2. Creation of global transparency laws for AI use and development. AI regulations should be passed and compliance mechanisms set in place to ensure that the public is made aware of all technologies incorporating AI, including use cases, performance metrics, revenue statistics and broader impact statements. 

  3. Enhanced broader impact statements. Only recently have broader impact statements been required for publishing AI research at major venues. Moving forward, these impact statements will need to become more detailed and address specific guidelines set by a governing board, particularly in characterizing the interplay between AI and other industries.

  4. Conduct explicit research analyzing the interplay between AI and other industries, with the express intent of examining rate-limiting technologies and safeguards. Create a public forum where such meta-research can be published and discussed, and link it to the governing authority responsible for resulting dual-use technology development.

  5. Pass AI copyright legislation and implement compliance mechanisms such that AI ownership shifts to the public domain after a reasonable period of time. “Reasonable” may be industry-specific, but should be closer to 5-10 years than the 100-150 years typically governing copyright today.

  6. Shift AI research to grant-based silos. Everybody trying all the things is a dangerous way to select balls from the urn. Going forward, we will need governance structures in place to deliberately select which research directions are allowed, and to strictly monitor them. The research itself should be conducted within closed silos, with resulting IP shifting from private to public ownership in line with the copyright legislation above.

  7. Pass AI antitrust legislation forcing any developed AI to be made available industry-wide, with reasonable limits protecting against private profiteering. The thinking is to reduce the tendency for AIs to be independently redeveloped, especially when combined with reasonable copyright legislation.

  8. Creation of industry-specific representative bodies responsible for shaping and implementing policy for affected individuals as AI technology becomes more ubiquitous. These bodies should be funded directly by the AI technologies streamlining the affected industry.

Will It Be Enough?

The short answer is, of course, unfortunately, no. Even implementing all the proposed action items above, we are not guaranteed long-term safe AI development leading to a glorious, Eutopic future. Indeed, the policies above will likely not be enough to reach either of our desired goals: achieving a Eutopic vision, or even shifting long-term AI development away from its semi-anarchic default.

My goal here, however, is to argue that action is needed, to identify actionable steps we can take, today, to move us along the right path, and to foster further dialogue. I hope that the propositions outlined here will be given serious consideration and lead to positive, definitive action, now and in the future.

