Read any article about cybersecurity and you're bound to come across a line like "the security landscape is changing." It's true, but overused. This time, however, we are in for a real landscape change: we are on the precipice of the artificial intelligence era. I say "precipice" for a reason. Notable innovators have expressed concern over the direction in which AI is heading, none more so than Elon Musk. Musk believes that AI could lead to the downfall of human civilization, telling members of the National Governors Association that AI is "the greatest risk we face as a civilization." Musk is not simply projecting his fears after watching The Terminator; advances in AI technology could pose a major threat to cybersecurity, at the very least. Are we on the precipice of an AI-centered cyber war?
As artificial intelligence becomes more capable, we will start to see more automated and more sophisticated social engineering attacks. AI-enabled cyberattacks could vastly increase the number of network penetrations and personal data thefts, and accelerate the spread of intelligent computer viruses. Following the recent wave of ransomware, the prospect of more frequent and smarter cyberattacks paints a gloomy picture for the future of cybersecurity. There is hope, however, for defending against such attacks by fighting fire with fire.
Artificial intelligence-enabled security may become the standard for protecting your business's critical data and systems. Through machine learning, AI can apply existing data to continually improve its functions and strategies over time. It can learn and understand normal user behavior and identify even the slightest variation from that pattern, instantaneously and effortlessly, 24/7/365. Hackers will still try to exploit loopholes in the AI, and programmers will deploy countermeasures to combat the cybercriminals; the cat-and-mouse game of today will continue tomorrow. From this perspective, AI is a welcome reinforcement in the war to protect your data and a natural extension of the security-in-layers model. Yet this is just today's security knowledge applied to a future in which AI's full capabilities are still unclear. What happens when someone creates a purposefully unethical intelligence?
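To make that idea concrete, here is a minimal sketch of the kind of behavioral anomaly detection described above. The per-session features (login hour, data transferred, failed logins) are purely illustrative assumptions, and scikit-learn's IsolationForest stands in for whatever model a vendor might actually use; a production system would draw on far richer telemetry and retrain continuously.

```python
# Minimal sketch: learn a model of "normal" user behavior, then flag
# sessions that deviate from it. Feature set is illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated baseline: a user who logs in during business hours,
# moves modest amounts of data, and rarely fails a login.
normal_sessions = np.column_stack([
    rng.normal(10, 1.5, 1000),   # login hour (centered near 10am)
    rng.normal(50, 10, 1000),    # MB transferred per session
    rng.poisson(0.1, 1000),      # failed login attempts
])

# Fit a model of "normal," then score new sessions against it.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_sessions)

suspicious = np.array([[3.0, 900.0, 6.0]])  # 3am, 900 MB, 6 failed logins
ordinary = np.array([[9.5, 55.0, 0.0]])     # well within the baseline

print(model.predict(suspicious))  # typically [-1]: flagged as anomalous
print(model.predict(ordinary))    # typically [ 1]: consistent with baseline
```

The point of the sketch is the workflow, not the model: the system builds its picture of "normal" from existing data and flags variations automatically, which is exactly the loop AI-enabled security tools promise to run around the clock.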
This is where Elon Musk's worries originate. Musk says that his exposure to some of the world's most cutting-edge AI development gives him a unique perspective. Having glimpsed that future, he advocates regulating artificial intelligence now. For Musk, "AI is a rare case where we need to be proactive in regulation instead of reactive because if we're reactive in AI regulation it's too late." Facebook's Mark Zuckerberg has, rather publicly, challenged Musk's point of view. During a Facebook Live broadcast, Zuckerberg indirectly called out the tech titan, saying that naysayers who drum up these doomsday scenarios are "pretty irresponsible." Musk responded by calling Zuckerberg's understanding of the matter "limited." The debate gained even more attention when Rodney Brooks, a leading mind in AI and an MIT professor, sided with Zuckerberg. Brooks is the founding director of MIT's Computer Science and Artificial Intelligence Lab and the co-founder of iRobot and Rethink Robotics. In an interview with TechCrunch, Brooks argued against early regulation, saying it is unclear exactly what government should prohibit. (He thinks people vastly underestimate how long these far-out robotic systems will take to fully develop.) The only form of AI Brooks would like to see regulated is self-driving cars, such as those being developed by Tesla, which he claims present imminent and real practical problems. Ultimately, he believes people are scared of the wrong things and not thinking enough about the real implications. Which side do you fall on?
Recently, Facebook had to shut down one of its AI systems "because things got out of hand." Researchers at the Facebook AI Research lab (FAIR) pulled the plug after the chatbot "dialogue agents" created their own language. The agents were instructed to work out how to negotiate between themselves and to improve their bartering as they went along. They started out using English, but later developed a new language that they could better understand. The language, however, was really just a "shorthand" version of English that looks like gibberish.
The negotiations involving balls, hats, and other items went something like this:
Bob: i can i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i i can i i i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i . . . . . . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i i i i i everything else . . . . . . . . . . . . . .
Alice: balls have 0 to me to me to me to me to me to me to me to me to
Bob: you i i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
The Facebook researchers shut the system down and then forced the agents to speak to each other only in comprehensible English. (To be clear, the experiment was halted because the bots were not doing the required work, not because Facebook was afraid of the results.) Despite the mishap, the researchers found their bots to be incredibly crafty negotiators. Over time, the bots became quite skilled at negotiating, even feigning interest in one item in order to "sacrifice" it at a later stage as a faux compromise. Overall, the experiment revealed no dire consequences, but it speaks to the unforeseen potential of artificial intelligence.
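For readers curious about the underlying task, here is a toy sketch of the kind of item-division negotiation the FAIR agents were trained on. The item pool, the private valuations, and the greedy concession strategy are all hypothetical stand-ins; the real agents learned their tactics, including the feigned-interest gambit, from data rather than hand-written rules.

```python
# Toy sketch of the negotiation task: two parties split a shared pool of
# items, each holding private values for them, and must agree on a
# division or both walk away with nothing. Strategy here is hypothetical.
import random

POOL = {"balls": 2, "hats": 1, "books": 3}

def random_values():
    # Each agent privately values the items differently.
    return {item: random.randint(0, 5) for item in POOL}

def score(values, share):
    return sum(values[item] * count for item, count in share.items())

def negotiate(values_a, values_b, rounds=10):
    """Alternate offers; acceptance gets easier as the deadline nears."""
    for turn in range(rounds):
        proposer, responder = (values_a, values_b) if turn % 2 == 0 else (values_b, values_a)
        # Proposer claims every item it values above a threshold that
        # relaxes over time (a crude concession schedule).
        threshold = 3 - turn // 3
        claim = {item: (count if proposer[item] >= threshold else 0)
                 for item, count in POOL.items()}
        remainder = {item: POOL[item] - claim[item] for item in POOL}
        # Responder accepts if the leftovers are worth more to it
        # than what the proposer is keeping.
        if score(responder, remainder) >= score(responder, claim):
            return (claim, remainder) if turn % 2 == 0 else (remainder, claim)
    return None  # no deal: both sides score zero

random.seed(7)
a, b = random_values(), random_values()
print("agent A values:", a)
print("agent B values:", b)
print("outcome (A's share, B's share):", negotiate(a, b))
```

Even in this crude form, the structure of the task shows why the bots' feigned-interest trick pays off: an agent that overstates its interest in low-value items has something cheap to "concede" later while holding on to what it actually wants.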