New Report Warns that More Affordable Artificial Intelligence May Be a Blow to Cybersecurity

02/28/2018

Flipping through a recent issue of the New York Times Magazine, I came across a two-page ad from Hewlett Packard on the promises of artificial intelligence. “What if things start to think for themselves?” the ad read. “But that’s starting to happen. ‘Dumb’ devices are waking up and getting smart.”

For this particular ad, the context was farming and how AI can bring about bigger harvests to feed a ballooning human population. Sensors embedded in the soil, or carried by drones, can detect which areas need more water, fertilizer or weeding, and the system can act accordingly.

AI is indeed becoming more affordable and more widely available across companies and industries. But that availability also brings new risks to digital, physical and political security, warns a new report co-written by a team of 26 AI researchers from prominent American and British universities and think tanks.

There are several reasons AI raises the level of risk. Most AI researchers believe that AI will exceed human performance in a wide range of tasks within the next 50 years, and if an AI system can improve farming practices, it can just as well conduct more devastating cyberattacks. AI also gives the human actor behind an attack greater anonymity and distance. And AI systems tend to be efficient and scalable.

But perhaps most relevant to the procurement and supply chain sphere is that current AI systems have considerable vulnerabilities of their own. Among these are data poisoning attacks, in which manipulated training data causes a learning system to make mistakes, and adversarial examples, inputs deliberately crafted to fool an AI system.

“While AI systems can exceed human performance in many ways, they can also fail in ways that a human never would,” the researchers write.
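To make those two failure modes concrete, here is a minimal, hypothetical sketch in Python; it is not drawn from the report itself. It trains a toy logistic-regression classifier with NumPy, then shows an FGSM-style perturbation flipping a single prediction (an adversarial example) and a batch of deliberately mislabeled training points dragging down accuracy on clean data (data poisoning). The synthetic dataset, the train helper and every parameter are illustrative assumptions.

```python
# Hypothetical illustration only -- the toy data, train() helper and all
# parameters are assumptions for this sketch, not anything from the report.
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class dataset: class 0 clustered near (-2, -2), class 1 near (2, 2).
n = 200
X = np.vstack([rng.normal(-2, 1, (n, 2)), rng.normal(2, 1, (n, 2))])
y = np.concatenate([np.zeros(n, dtype=int), np.ones(n, dtype=int)])

def train(X, y, lr=0.1, epochs=1500):
    """Fit a plain logistic-regression classifier by gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)          # gradient step on the weights
        b -= lr * np.mean(p - y)                  # gradient step on the bias
    return w, b

def predict(X, w, b):
    return (X @ w + b > 0).astype(int)

w, b = train(X, y)

# 1) Adversarial example: a small, deliberate nudge flips the prediction.
x = np.array([0.5, 0.5])                       # close to the boundary, classified as 1
x_adv = x - 0.75 * np.sign(w)                  # FGSM-style step against the weight signs
print("clean input:", predict(x[None], w, b)[0],
      "| perturbed input:", predict(x_adv[None], w, b)[0])

# 2) Data poisoning: mislabeled points injected into training shift the boundary.
X_bad = rng.normal([2.0, 2.0], 0.5, (250, 2))  # look like class 1 ...
y_bad = np.zeros(250, dtype=int)               # ... but carry class-0 labels
w_p, b_p = train(np.vstack([X, X_bad]), np.concatenate([y, y_bad]))
print("accuracy on clean data after poisoning:",
      round(float(np.mean(predict(X, w_p, b_p) == y)), 3))
```

Both tricks work here because the model is purely statistical: it has no notion of what a plausible input or training set looks like, which is exactly the kind of mistake a human would be unlikely to make.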

More Sophisticated Cyberattacks

Many if not most of us have at some point received a phishing email. Conventional wisdom of the internet age says not to open attachments in suspicious emails, whether they come from strangers or appear to come from people we know. But what if a phishing email can be engineered to mimic a trusted contact's actual writing style?

As AI systems become ever more sophisticated, we can expect chatbots that “elicit human trust by engaging people in longer dialogues, and perhaps eventually masquerade visually as another person in a video chat.” This automation of a social engineering attack is an example of new digital security risks that AI brings. The report includes a number of hypothetical scenarios illustrating these risks, which also include the following:

  • Automation of vulnerability discovery
  • More sophisticated automation of hacking
  • Human-like denial of service
  • Automation of service tasks in criminal cyber-offense
  • Prioritizing targets for cyberattacks using machine learning
  • Exploiting AI used in applications, especially in information security
  • Black-box model extraction of proprietary AI system capabilities

Beyond digital security risks, the report also looks at new physical and political security risks that AI would make possible. Take, for example, the potential repurposing of commercial AI systems for terrorist ends, such as using a drone to deliver explosives or cause crashes. Or fake news reports that combine fabricated video and audio to appear highly realistic.

While this all sounds quite pessimistic, AI of course has countless positive applications — and in any case, the growth in the technology’s capabilities is not slowing down any time soon.

With that in mind, the report authors make a few recommendations for keeping AI a tool for good. One is that policymakers and technical researchers should work together to investigate and prevent potential malicious uses of AI. Another is that AI engineers and researchers should take the possibility of misuse seriously and let that consideration guide their work.

Check out the full report for yourself here.