Hubris at the heart of the AI Revolution


Hubris

Artificial Intelligence.

It’s the dawn of a new revolution. The Fourth Industrial Age. The Third Wave of AI. Humanity’s Final Golden Age. 

Whatever we may wish to call it, if the history of progress is any indication, the present AI Revolution might usher in a new era of unparalleled peace and prosperity, or it might open a Pandora's box leading directly to humanity's demise. Or, at least, to an extended power grab entrenching a new caste of elites into the aristocracy for a generation. Perhaps we think that reality is likely to regress toward the mean, and that any cataclysmic outcomes of unmitigated AI development will be stifled by the enlightened minds presumably leading our noble academic pursuit of nosce te ipsum: know thyself.

I used to think this way myself. When I was studying computational neuroscience as a graduate student in the pre-revolution years, AI was still a taboo term, something you didn't mention if you wanted funding. The thought that AI was once again winning machine learning competitions, open for exploration and research, was exciting. The Second Wave of AI had ended before I was born. Saudade. Perhaps AI could still tell us something about ourselves, our brains, our minds. The very paper that sparked the third revolution, an image recognition model running on gaming graphics hardware, showed specialized visual pathways for edge and color processing.

The top 48 kernels were learned on GPU 1 while the bottom 48 kernels were learned on GPU 2.

Notice the specialization exhibited by the two GPUs, a result of the restricted connectivity described in Section 3.5. The kernels on GPU 1 are largely color-agnostic, while the kernels on GPU 2 are largely color-specific. This kind of specialization occurs during every run and is independent of any particular random weight initialization (modulo a renumbering of the GPUs).
Krizhevsky et al. (2012) ImageNet Classification with Deep Convolutional Neural Networks.

We do the same thing! For decades, computational neuroscientists have been exploring exactly how the edge selectivity of neurons in the early human visual system is learned. Further confounding the story is that while some neurons seem to specialize in precise edge selectivity, others appear fine-tuned to detect blobs of particular colors or color polarities. Admittedly, the biology is quite a bit more complicated, but here was the same result, occurring naturally as a by-product in artificial neural networks. It was exhilarating. We were on the cusp of something great.
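For the tinkerers: the same split can still be poked at today. Below is a minimal sketch, assuming PyTorch and torchvision are installed, that pulls the first-layer kernels out of torchvision's pretrained AlexNet and scores each filter as roughly color-agnostic or color-specific. The variance-ratio heuristic and the 0.5 cutoff are illustrative choices of mine, not anything from the paper.

```python
# A minimal sketch (assumes PyTorch + torchvision). torchvision's AlexNet is a
# single-GPU variant, so there is no literal GPU-1/GPU-2 split, but the
# color-agnostic vs. color-specific division among first-layer filters remains.
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
kernels = model.features[0].weight.detach()  # shape: (64, 3, 11, 11)

# Heuristic: a filter whose three RGB channels are nearly identical is
# "color-agnostic"; high variance across channels suggests color selectivity.
across_channel_var = kernels.var(dim=1).mean(dim=(1, 2))  # per-filter, (64,)
total_var = kernels.var(dim=(1, 2, 3)) + 1e-8             # per-filter, (64,)
color_score = across_channel_var / total_var

for i, score in enumerate(color_score.tolist()):
    kind = "color-specific" if score > 0.5 else "color-agnostic"  # arbitrary cutoff
    print(f"filter {i:2d}: {score:.2f} ({kind})")
```

Plotting those 64 kernels as 11-by-11 RGB tiles reproduces the figure well enough: oriented, Gabor-like edge detectors alongside opponent-color blobs.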

And we were. But it wasn’t. 

I defended my PhD thesis in 2015, the same year Microsoft's researchers reported the first AI to surpass human performance at image classification. Superhuman, at least, if the task is classifying 80 different dog breeds, among a lengthy list of other relatively esoteric objects, against sleep-deprived, over-caffeinated CS grad students. Perhaps not exactly a fair fight, or representative of expert human skill. But still.

Superhuman.

When I entered industry in 2016, the AI race was on, and there was no going back. Looking back at those early years, it was easy to be excited. Advancements were coming out all the time. Even now, it's hard to blame youthful exuberance.

A go aficionado, I had, by happenstance, been actively developing a go AI in early January 2016. I was relatively connected to the AI go community; at least, I was aware of all the major go-playing programs and the publications on the subject. On the eve of the 27th, my personal feeling, possibly representative of the field, was that a human-level go program a la Deep Blue was at least a decade away.

And then the announcement: DeepMind had attempted it. They made preliminary designs; they built and tested them; they solved distributed training; they solved Monte Carlo tree search; they solved the convolutional neural network; they merged them; they iterated; they debugged; they figured out the endgame; they tested it. They had tested it. And it won. It had won.
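For the curious, the recipe those pieces add up to can be sketched in a few dozen lines: Monte Carlo tree search in which a network's move priors and value estimates steer the descent (the PUCT selection rule). Everything below is an illustrative stand-in of my own, with a dummy state and a uniform "network", not DeepMind's implementation.

```python
# A toy sketch of network-guided tree search. DummyState and uniform_net are
# illustrative stand-ins; a real system backs them with a go engine and a
# trained policy/value network.
import math

class Node:
    def __init__(self, prior):
        self.prior = prior       # P(s, a): network's prior for the move
        self.visits = 0          # N(s, a)
        self.value_sum = 0.0     # W(s, a)
        self.children = {}       # move -> Node

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def puct(parent, child, c=1.5):
    # Exploration bonus: largest for high-prior, rarely visited moves.
    return child.q() + c * child.prior * math.sqrt(parent.visits) / (1 + child.visits)

def search(root_state, evaluate, simulations=200):
    root = Node(prior=1.0)
    for _ in range(simulations):
        node, state, path = root, root_state.copy(), [root]
        # 1. Selection: walk down the tree by the PUCT rule.
        while node.children:
            parent = node
            move, node = max(node.children.items(), key=lambda mc: puct(parent, mc[1]))
            state.play(move)
            path.append(node)
        # 2. Expansion + evaluation: the network scores the leaf position.
        priors, value = evaluate(state)
        for move, p in priors.items():
            node.children[move] = Node(prior=p)
        # 3. Backup: credit the path, flipping sign each ply (two players).
        for i, n in enumerate(reversed(path)):
            n.visits += 1
            n.value_sum += value if i % 2 == 0 else -value
    # Play the most-visited root move.
    return max(root.children.items(), key=lambda mc: mc[1].visits)[0]

# Stand-ins so the sketch runs end to end: a no-op state with three legal
# moves, and a "network" returning uniform priors and a neutral value.
class DummyState:
    def copy(self): return DummyState()
    def play(self, move): pass

uniform_net = lambda state: ({m: 1 / 3 for m in range(3)}, 0.0)
print(search(DummyState(), uniform_net))
```

The sketch is only meant to show how search and network slot together; the actual breakthrough was in training the network, via supervised learning on expert games and then self-play, until its priors made the search tractable.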

Three months prior. That was when the match had actually been played, in secret.

It should have been a red flag. But I loved go. I loved AI. This was a great achievement. I didn’t see it. 

In the intervening years I have grown much more skeptical of advancements in AI. Much of that skepticism is encapsulated by the hubris of AlphaGo. Not that AlphaGo, in itself, is dangerous. Or evil. I do not think Google broke their motto in its development. I do not think the people who made AlphaGo are immoral. Certainly I would not be one to judge if they were: the first thing I did once the publication dropped was build my own version, NeuGo. It reached 2 dan on KGS.

But the hubris is still there. We did it because we could. No one thought to ask if we should. To their credit, DeepMind (Google) never released AlphaGo itself. And in truth, there was no novel theoretical breakthrough to be found in its development; it was more of an "and, by the way, this is also possible" notice to the community. Still, the field of go has changed, forever, for better or worse.

I think it is time we seriously considered the implications AI has for the industries it transforms, and for the people in those industries, before we build it. Governance structures will come with time, but it is difficult to put the genie back in the bottle once it is out and granting wishes.

If we continue, as we have done, to develop AI through hubris, we will lose control of our shared fate. Perhaps there are no black balls in the urn after all, to borrow Nick Bostrom's metaphor, but the power to be unlocked by AI technology is probably not something we want to leave to chance. I do not believe the universe would stop us from wiping ourselves out in a fit of exuberant misadventure.

As we develop AI and integrate it deeply into society, we need a clear understanding of what we want and a set of definitive actions prescribed to move us in that direction.

If #ethicalAI is of interest to you, we invite you to subscribe below to receive access to the latest updates.
