We are racing toward a future we barely understand. Our stuff will be smart, sometimes smarter than us. Our cars will be their own backseat drivers. Our doctors will get second opinions from silicon. Much of the most eye-popping innovation we'll see in the coming decades will be a direct result of the collection of technologies best known as Artificial Intelligence.
And it scares the hell out of people.
There are now multiple organizations, including the Future of Life Institute and the OpenAI project, that have picked up the baton of AI, seeking to spread it, understand it, control it and maybe police it. This week we added the Partnership on AI to the mix. It's an unusual consortium of technology and AI competitors: Amazon, Facebook, Google (via DeepMind), Microsoft and IBM. Each is deeply involved in the development and distribution of AI- and Machine Learning-infused products and services. Some, like Amazon and Microsoft, compete in the AI-backed digital assistant space. Others, like IBM, Google and Facebook, munge mounds of personal and public data to feed current and future AI engines.
‘30 years from now we’ll be both disappointed and surprised’
AI is the wild west of technology. And these companies are the pioneers blazing a trail into the great unknown. Each one wants a piece of the AI pie. Where they go, untold customers and eyeballs will follow.
In the last half-dozen years, AI has crept from the laboratory into our pockets and living rooms in products like Siri, Alexa and Cortana. These engines are analyzing our photos on Facebook and in Google services, and offering contextual responses on Windows 10. They're living behind the scenes on our devices, ensuring that our batteries last longer and making our snapshots look better.
Many of the AI-infused services we currently use, the ones silently making our lives better, are duplicated across the very companies that just signed up to be a part of the Partnership on AI.
Getting in bed
It doesn't really make sense. Why would competitors get in bed together, and why do they think we want or need a group dedicated to the study of AI best practices?
Part of the answer lies in who actually created this new group. It wasn't the companies' marketing or product managers. It was the researchers and scientists: a sort of superset clique that sits outside the five founding companies.
Gathering at various conferences and meetings dedicated to, among other things, artificial intelligence, scientists from these tech giants started talking about the lack of best practices for AI.
"We're a very close-knit group at conferences and meetings," explained Partnership on AI founding member and Microsoft Research Technical Fellow and Managing Director Eric Horvitz.
‘The way in which AIs are programmed and the data they’re fed can potentially lead to skewed decision-making’
As they talked, the discussion inevitably turned to how AI is going to touch people's lives and the responsibility that comes with that. That led to questions about how to arrive at best practices that could be applied across companies and groups working on AI-related technologies.
Even though all of them were working on AI projects, they agreed that there are still questions about AI decision-making. And with the rapid acceleration of AI developments, especially within the last few years, this is an inflection point. Increasingly, the companies were grappling with questions of ethics and even bias in AI. The way in which AIs are programmed and the data they're fed can potentially lead to skewed decision-making. And as AI creeps further and further into areas like transportation and healthcare, the risks grow.
"Bias in data can get propagated to machine learning that can lead to biased systems," said Horvitz. Such bias could impact AI decisions in everything from criminal justice to what ends up in your spam folder. It could even impact, he said, how racial minorities are handled when it comes to visual perception and facial recognition.
Some companies are starting to test AI like self-driving cars out in the open world. Horvitz wonders, though, "How do you test them in the open world that might have unknown unknowns in it?" The Partnership on AI wants to figure that out and, at least, come up with best practices for testing AI before full deployment.
So far the Partnership on AI has garnered the support of most of the key watchdogs and consortiums in the space, including the Association for the Advancement of Artificial Intelligence (AAAI) and the Allen Institute for Artificial Intelligence (AI2). The relatively new OpenAI group, which seeks to develop fresh AI free from a profit motive, already tweeted its support.
On the other hand, the follow-up tweet stating "We're looking forward to non-profits being included as first-class members in the future" made it clear that perhaps something is missing from the Partnership on AI.
As for the Future of Life Institute, which famously published an open letter from Elon Musk, Bill Gates and Stephen Hawking worrying about the dangers of unfettered AI, they're on board, too (in fact, very much so).
"I edited and signed that letter," said Horvitz, adding, "It said some sensible things." "The basic notion is that we are setting this entity up as fiercely independent, as a multiparty stakeholder."
The founding companies have all agreed to help fund the project, which plans to conduct and share research on AI and industry best practices.
The profit motive
While it may be surprising that these companies are working together, it should surprise no one that the companies trying to sell us AI products and services are driving this effort. In a letter to IBM's global staff announcing the company's involvement, IBM CEO Ginni Rometty wrote, "A world with Watson will be healthier, safer, more productive, more convenient and more personal in ways that we are only glimpsing today."
A day after the Partnership announcement, Microsoft announced that it had consolidated disparate AI efforts into a new Microsoft AI and Research Group, a 5,000-member team led by Harry Shum. "At Microsoft, we are focused on empowering both people and organizations, by democratizing access to intelligence to help solve our most pressing challenges. To do this, we are infusing AI into everything we deliver across our computing platforms and experiences," said Microsoft CEO Satya Nadella in a release.
If you're building AI, you're more likely to view it in a positive light. Horvitz acknowledged the risks inherent in emerging technologies like automated cars (if not properly fielded in the open), but added, "I would use the phrase 'rough edges.'"
A recent Stanford University study on AI in 2030 [PDF] echoes Horvitz's sentiment.
It looked at the impact of AI across eight sectors: transportation, robots, healthcare, education, low-resource communities, public safety and security, employment and workplace, and entertainment. It found that, "Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind."
‘Apple confirmed to me that they are talking to the Partnership’
But it also concluded that society is now at "a crucial juncture in determining how to deploy AI-based technologies in ways that promote rather than hinder democratic values such as freedom, equality, and transparency."
It also warns that fear of AI could drive development underground, "impeding important work on ensuring the safety and reliability of AI technologies."
So this open approach is welcome. But I still had questions. Where, exactly, is Apple?
The Cupertino tech giant clearly believes in AI. It uses it, along with machine learning, across many of its products. Siri was, for many consumers, their first experience with artificial intelligence.
Horvitz clearly didn't want me to read anything into Apple's absence, telling me that the partnership settled on the five founders that had been most involved in the initial discussions.
However, the consortium would love to see large companies and small ones get involved over time, he said, adding, "Certainly, we'd love to have Apple involved."
Apple confirmed to me that they are talking to the partnership.
Artificial Intelligence remains one of our most challenging innovations. It has almost unimaginable potential and yet still generates fear at a scale far outside what's been accomplished thus far.
We have robots in the home, but they look and work little like what we imagined in novels and films and on TV. Our AI-powered digital assistants are only half-brilliant. Image systems can find a needle of a face in a haystack of people, but they can also still mistake a picture of me for one of Stanley Tucci (okay, maybe it got that right). These systems are far from mature. They're barely adolescents.
I asked Horvitz, the obvious expert, what we should expect of AI by 2030.
"Thirty years from now we'll be both disappointed and surprised; disappointed with some challenges we haven't solved yet, commonsense reasoning. On the other hand, there are some surprises ahead on the good side that we can't even visualize yet," he said.
Horvitz fully expects autonomous driving, for instance, to transform highway safety. "Darned if we haven't stopped having about 100 people killed per day by human drivers. That will be solved by AI systems."
Originally found at http://mashable.com/