The big companies are talking to the government about artificial intelligence capabilities—and possible regulations. Uh-oh
The White House is hosting a conference on the future of artificial intelligence. Executives from 38 companies, including Intel, Oracle, Ford, Boeing, Mastercard, Microsoft, and Accenture, will attend the daylong summit.
Why is it happening now? AI is poised to create 2.3 million jobs by 2020 while eliminating 1.8 million others, according to Gartner. Topics of discussion are said to include how industries like health care and transportation can best use AI, as well as how to fund research in the field. And scariest of all on the agenda: strategies to formulate federal policies and regulations.
AI will change the world, but not as fast as people fear
Cloud computing is the catalyst: AI would not be affordable if you could not buy it by the hour from cloud providers such as Google, Amazon Web Services, and Microsoft. Moreover, AI has entered the zeitgeist as something that’s going to change our lives, our jobs, and how we think about technology.
I’m not sure any of the drastic AI transformations in the forecast will come to fruition over the next few years. After all, we usually consider any new technology to be a “game changer.” Even when it is, it takes years to actually change the game, and by then today’s “drastic” changes often seem commonsense and ordinary.
The truth is that technology has been changing our jobs for the last 150 years, and the use of AI as a tool is not much different from the waves of automation that turned factories and farms into businesses where you pushed a button more often than you picked up tools.
While I’m sure driverless cars and trucks will displace vehicle operators at some point, and businesses will automate people out of jobs, most of those people will see the writing on the wall, and the smart ones will pivot to jobs that aren’t likely to be automated anytime soon. They have years of warning; again, none of this happens fast.
Regulating AI is likely to do more harm than good
Now to the “regulating AI” part of this story. I’ve found that when the government tries to regulate technology innovation, it tends to introduce unintended consequences. Government officials are neither experts in the technology nor clear on where it is going. That combination is not a recipe for success.
What happens in practice is that the laws quickly become outdated and typically create unnecessary confusion as lawyers try to interpret them, while technologists quickly find ways to work around them.
So, short of stopping an evil genius from creating a Skynet that attempts to enslave us all, why bother with regulation? Evil-genius plots make for good movies, but they don’t happen in real life.