By Randal Adcock | There have always been Pollyannas and Chicken Littles, Idealists and Doomsday Prophets. We can characterize optimists and pessimists as one or the other and discount their words and deeds. Dreamers and Doomsayers, like Boys Who Cry Wolf, are radical outliers. We are conditioned to ignore them. We naturally expect the truth to be found somewhere in the middle, neither myopic nor defeatist. So what is the truth about the coming of full artificial intelligence? And how should we respond?
Science fiction has spun many fascinating stories about future possibilities. The techno-utopias feed our hopes and the dystopias feed our fears. But we know it’s fiction and we treat it as great entertainment. Otherwise, we are comfortably bored. There is no real and present danger.
For decades AI experts told us, “Don’t worry, we’re decades away from true AI. When we discover the essence of intelligence, we will control it to bring all human suffering to an end, and we will live in a world where we won’t have to work again. Nothing to fear here.” Never mind that our self-esteem comes from being productive at one thing or another. Don’t we all secretly hope to retire early and live on an equatorial beach under the palms, with robotic servants?
Now we would normally never label Bill Gates, Elon Musk and Stephen Hawking (love them or hate them) as conspiracy theorists or doomsayers, and we certainly wouldn’t expect them to be down on science and technology. But there has been an attitude shift lately. You can find these world leaders and others talking about the great caution needed to deal with the accelerating pace of techno-change. Are we getting ahead of ourselves, especially with regard to the ultimate technology: artificial intelligence?
This Sam Harris TED Talk https://youtu.be/R_sSpPyruj0 is the most succinct articulation I have heard on the subject. Harris is well qualified, and he is trying to deeply comprehend, and respond appropriately to, the impending risks inherent in creating something that can make itself not only smarter than us, but progressively smarter than itself in rapid succession. As others have pointed out, the human brain has serious trouble comprehending non-linear, or exponential, rates of change.
Another thinker, Yuval Harari (http://www.ynharari.com/), points out that many companies are already deploying forms of AI in their algorithms at an accelerating rate. This intelligence-gathering activity gives those companies an ever greater advantage, not because they are evil, but because they can, and it’s good for business. It has happened gradually enough, and relatively invisibly, that we haven’t really taken notice of the longer-term social, cultural, economic and political impacts.
So what is to be done? The speakers have no final answer, of course. But they agree we need to be thinking long and hard about this as thousands of AI experts around the world continue making ever greater progress in a race to the finish line. A hundred years ago an American sociologist, William F. Ogburn, came up with the idea of ‘cultural lag’: hard technologies are systematically adopted at a faster rate than softer technologies, such as the methodologies or public policies needed to incorporate those hard technologies successfully into civil society. Today, a hundred years later, we still have not caught up with the notion of ‘cultural lag’ in any significant way.
I suggest we synthesize our best available personal, organizational, collective and computational intelligence to address this question. But first we have to build that platform, a Wayfinders® platform, starting with the issues people already recognize and the solutions they immediately appreciate. And we should do this anyway. It’s the smartest thing to do. No one has a monopoly on truth, and no one should have a monopoly on intelligence. – RBA © 2016