Criminal hackers and state-sponsored actors are using generative artificial intelligence in their cyberattacks, but U.S. intelligence is also using AI technologies to detect malicious activity, according to a senior official at the U.S. National Security Agency.
“We’re already seeing criminal and nation-state elements using AI. They’re all subscribed to the big-name companies you’d expect — all the generative AI models out there,” NSA cybersecurity director Rob Joyce said at a conference at Fordham University in New York on Tuesday. “We see intelligence operators [and] criminals on those platforms,” Joyce said.
“But on the other hand, artificial intelligence, machine learning [and] deep learning absolutely make us better at finding malicious activity,” he said.
Joyce, who oversees the NSA’s cybersecurity directorate tasked with preventing and eradicating threats targeting critical U.S. infrastructure and defense systems, did not discuss specific cyberattacks involving the use of artificial intelligence or attribute particular activity to a state or government. But Joyce said recent efforts by China-backed hackers to target critical U.S. infrastructure — believed to be preparation for an anticipated Chinese invasion of Taiwan — were an example of how AI technologies are surfacing malicious activity, giving U.S. intelligence the upper hand.
“They are in places like electric grids, transportation pipelines and courts, trying to hack in so they can cause societal disruption and panic at a time and place of their choosing,” Joyce said.
Joyce said the China-backed hackers are not using traditional malware that could be detected, but instead are exploiting vulnerabilities and implementation flaws that let them gain a foothold on a network and appear as though they are authorized to be there.
“Machine learning, artificial intelligence and big data help us surface those activities [and] bring them to the fore, because those accounts don’t behave like the normal business operators on their critical infrastructure, so that gives us an advantage,” Joyce said.
Joyce’s comments come at a time when generative AI tools can convincingly produce computer-generated text and imagery, and are increasingly being used in cyberattacks and espionage campaigns.
The Biden administration issued an executive order in October aimed at establishing new standards for AI safety and security, while pushing for stronger guardrails against abuse and mistakes. The Federal Trade Commission recently warned that AI technologies such as ChatGPT can be “used to turbocharge” scams and fraud.
Joyce said AI “isn’t the super tool that can make someone who is incompetent really competent, but it will make those who use AI more efficient and more dangerous.”
“One of the first things they do is just generate better English-language outreach to their victims, whether it’s a phishing email or something much more elaborate in the case of malign influence,” Joyce said. The latter refers to attempts by foreign governments to sow discord and interfere in elections.
“The second thing we’re starting to see is less-skilled people using artificial intelligence to guide their hacking operations, making them better at a technical aspect of a hack that they wouldn’t be able to do themselves,” Joyce said.