The Next Frontier: A Forecast of The Age of Artificial Intelligence

Merely twenty years ago, artificial intelligence (AI) was the stuff of movies, books, and short stories. Though it has long loomed on the horizon, it is only now commanding the attention of world and business leaders. A few days into 2016, Facebook's Mark Zuckerberg announced that he plans to spend the year developing an AI system to help run his life.


He stated in his Facebook post: “My personal challenge for 2016 is to build a simple AI to run my home and help me with my work. You can think of it kind of like Jarvis in Iron Man.” He continued by outlining his approach: “I'm going to start by exploring what technology is already out there. Then, I'll start teaching it to understand my voice to control everything in our home - music, lights, temperature and so on.” Zuckerberg also discussed practical applications for the AI, from a facial-recognition doorbell system to a baby monitor for his new child. He also plans to merge AI with virtual reality, visualizing data in VR to better analyze his services and organizations.
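

For a rough sense of what “teaching it to understand my voice to control everything in our home” entails at the simplest level, here is a minimal sketch of a command dispatcher, assuming the voice has already been transcribed to text. The device names, actions, and phrases are hypothetical stand-ins, not anything Zuckerberg described.

```python
# Minimal sketch: map a transcribed voice command to a (simulated) device action.
# The devices and actions are hypothetical; a real system would sit behind a
# speech-to-text engine and actual smart-home APIs.

import re

# Hypothetical registry of controllable devices and the actions they support.
DEVICES = {
    "lights": {"on", "off", "dim"},
    "music": {"play", "pause", "stop"},
    "temperature": {"set"},
}

def handle_command(transcript: str) -> str:
    """Turn a transcribed voice command into a simulated device action."""
    text = transcript.lower()

    # Temperature commands carry a numeric target, e.g. "set the temperature to 70".
    match = re.search(r"temperature to (\d+)", text)
    if match:
        return f"temperature -> set to {match.group(1)} degrees"

    # Otherwise, look for a known device name and a supported action keyword.
    for device, actions in DEVICES.items():
        if device in text:
            for action in actions:
                if action in text:
                    return f"{device} -> {action}"
    return "command not understood"

if __name__ == "__main__":
    for phrase in ["Turn the lights off", "Play some music", "Set the temperature to 70"]:
        print(f"{phrase!r}: {handle_command(phrase)}")
```

A production system like the one Zuckerberg describes would, of course, replace the keyword matching with trained speech and intent models and talk to real device APIs.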


The Wikimedia Foundation recently hosted an Artificial Intelligence and the Law panel in San Francisco to discuss the legal challenges and solutions presented by AI, ranging from lethal autonomous weapons and driverless cars to ROSS, the first robot attorney.


The panel featured impressive names: Rebecca Crootof, Executive Director of the Information Society Project at Yale Law School; Jimoh Ovbiagele, CTO of ROSS Intelligence; Christopher Reed, COO of Zenti; and David Ahn, Partner at Fenwick & West LLP. The panelists identified the prominent legal and ethical issues surrounding AI that will arise over the coming years:


1. The Uses and Limits of Analogy 

Crootof spoke about the benefits and drawbacks of using analogies to discuss new technology. Analogies make new technologies accessible and provide a framework for thinking about them, but they also box us in. Crootof noted that autonomous weapon systems are usually thought of as Terminators or as more independent drones, but animals might be the better analogy.


And all three options - combatants, weapons, and animals - keep us from thinking about unembodied autonomous cyber-weapons. She also noted how thinking of self-driving vehicles simply as cars limits our imagination with respect to how they might be designed, used, and regulated.


2. Appropriate Level of Human Control

According to Crootof, “AI challenges assumptions that we used to take for granted—namely, the necessity of human decision making.” She listed various questions raised by introducing AI into weapon systems, including: What constitutes meaningful human control over an attack? Does one have to pull a trigger to exercise control? Is simply having a human observer with veto power sufficient? Or, is it enough that human beings wrote the original code? Who should be held accountable when an autonomous weapon system commits a war crime?


All of these questions require us to determine the appropriate amount of human control over AI, and who should be held responsible when its unpredictable actions cause harm.


3. EU’s AI “Explainability” Requirement

EU lawmakers are contemplating an “explainability” requirement for AI, under which any significant decision made by AI software would have to be “explainable.” For example, if an AI decides to categorize a person, the decision must be explained in a way humans can understand. Reed opposed such a requirement - “fundamentally, people are legislating about something they have no clue about. They unrealistically assume that for AI to be accountable there has to be a link to a causality that a person understands.... It will be hard for the EU to enforce this legislation because it may be hard to discern the effects of AI.”


Ovbiagele shared similar sentiments - “much of AI cannot be explained. It is often hard to explain how AI reaches decisions. In fact, humans can’t always explain how they make decisions. So this requirement may be fundamentally flawed.”
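

To make the “explainability” idea concrete, here is a minimal sketch of what an explainable automated categorization could look like in the simplest possible case, assuming a hand-weighted linear scoring model whose per-feature contributions can be read off directly. The features, weights, and threshold are hypothetical; the panelists' point is that most modern AI systems do not decompose this cleanly.

```python
# Minimal sketch of an "explainable" automated decision: a hypothetical
# loan-style categorization using a hand-weighted linear score, where each
# feature's contribution to the outcome can be stated in plain language.

WEIGHTS = {"income": 0.4, "years_employed": 0.35, "missed_payments": -0.8}
THRESHOLD = 1.0

def decide_and_explain(applicant: dict) -> tuple[str, list[str]]:
    """Return a decision plus a human-readable list of per-feature contributions."""
    contributions = {
        feature: WEIGHTS[feature] * value for feature, value in applicant.items()
    }
    score = sum(contributions.values())
    decision = "approved" if score >= THRESHOLD else "declined"
    explanation = [
        f"{feature} = {value} contributed {contributions[feature]:+.2f} to the score"
        for feature, value in applicant.items()
    ]
    explanation.append(f"total score {score:.2f} vs. threshold {THRESHOLD} -> {decision}")
    return decision, explanation

if __name__ == "__main__":
    decision, explanation = decide_and_explain(
        {"income": 3.0, "years_employed": 2.0, "missed_payments": 1.0}
    )
    for line in explanation:
        print(line)
```

The systems at issue in the proposed legislation are far less transparent than this toy, which is precisely the gap Reed and Ovbiagele point to.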


4. Intellectual Property

Like most technology, AI is primarily protected by patents, copyrights, and trade secrets. “Patents provide the strongest protection” for AI, said Ahn. He explained that because AI is primarily technically driven, “AI inventions are mostly patentable, although some inventions on abstract AI concepts may be problematic in view of recent court decisions.”


However, an issue arises when AI systems create works such as writing or music. Are these works protectable? “The answer to this question at this time is ‘no,’” said Ahn. “IP laws are aimed at natural persons.” Accordingly, “at this time, it’s highly unlikely that an AI would be recognized as an inventor, author or creator.”


5. Data and Privacy

Reed suggested that perhaps we need to reframe the debate over AI and privacy issues. “We have to accept that AI will bring both good and bad. And we can’t get permission for everything,” he said. Instead of rejecting AI outright, policy makers should seek alternative solutions.


For example, Reed said, “some suggest that [AI] that take information should be in a special category, like common carriers, and held to a higher standard.” Another issue is that AI data sets can contain personal information, which may lead to privacy concerns. “We get this question a lot. Privacy issues in ROSS are not unique. Data has been collected over decades,” Ovbiagele asserted.


He explained that “machine learning was created to address massive amounts of data. We are not collecting more data because of machine learning. We are using machine learning because we have massive data.” On this argument, he concluded, AI will not create new potential privacy violations beyond those already inherent in the existence of mass data.


Crootof, however, challenged this view by pointing out that the use of more powerful tools like machine learning may amplify the potential damage of these privacy violations. “I’m comforted by the idea that my data is a needle in a very large haystack,” she observed. “But if you have a powerful magnet, my data isn’t nearly as safe.”


AI has grown by leaps and bounds over the past decade and shows no signs of slowing down. Amid the inevitable wave of AI-related startups, it will be important to keep an eye on the social discourse surrounding this new technology. On the one hand, the constant buzz about the implications of AI will aid innovators. Too often, regulation is an afterthought, implemented only after policy makers spot issues. For example, cities such as San Diego are cracking down on Airbnb hosts, deeming their short-term rentals “unauthorized hotels.”


With AI, it can be different, more akin to privacy by design. Instead of having regulation chase after innovation, regulation can be a vital part of innovation, developed alongside the technology it governs. On the other hand, AI still hasn't shaken its humanity-threatening reputation. Fear of AI's implications may lead to over-regulation that chills innovation.


In this crucial development period, we must recognize that there is no substitute for humanity. No matter how lifelike, AI is a tool, and no amount of regulation can fully prevent its misuse or abuse. The entire tech community - policy makers and innovators alike - must remember to balance the urge to regulate with the need to innovate. Only then can we successfully venture into the new frontier of artificial intelligence.