As cybercriminals increasingly harness artificial intelligence, the need to distinguish between hype and reality becomes essential in understanding the evolving landscape of threats.
The Emerging Role of AI in Cybercrime: A Double-Edged Sword
In the evolving landscape of cybersecurity, artificial intelligence (AI) stands as both a formidable tool for defence and a potential weapon for cybercriminals. Industry experts, like Etay Maor, Chief Security Strategist at Cato Networks, have noted that attackers are increasingly adopting AI to enhance their capabilities. However, the current reality of AI in cybercrime does not fully align with the often sensationalised media narratives suggesting AI-driven chaos and destruction.
Cybercriminals and AI: Separating Hype from Reality
Despite alarming headlines, many AI tools touted on underground forums are little more than repackaged versions of existing public Language Learning Models (LLMs) with minimal advancements. Such tools are frequently dismissed by attackers themselves as scams. The genuine use of AI by cybercriminals remains largely exploratory, with many facing the same limitations as legitimate users, including issues like AI hallucinations and restricted functionality.
In practical terms, malicious actors are currently utilising AI for basic tasks such as crafting phishing emails and generating rudimentary code snippets for incorporation into broader cyberattacks. Some criminals attempt to sanitise compromised code by feeding it through AI systems, aiming to mask its malicious nature by normalising it.
Introducing GPTs and Associated Security Concerns
The introduction of GPTs by OpenAI on 6th November 2023 marked a new chapter in AI applications. These customised versions of ChatGPT allow users to input specific instructions, connect to external APIs, and utilise unique knowledge sources. They have seen advancements such as monetisation options via an exclusive marketplace, which adds to their appeal for developers looking to craft targeted applications like educational tools and technical support bots.
Nevertheless, the customisation capabilities of GPTs also introduce significant security risks. One critical concern is the potential for exposure of sensitive information, proprietary data, or embedded API keys within a custom GPT instance. Threat actors could misuse AI through prompt engineering tactics to duplicate GPT functionality and exploit its monetisation avenues.
Such strategies might include extracting knowledge sources, probing for operational instructions, or creating exploitable configurations. Although developers strive to secure these systems, even robust protective measures can be circumvented, potentially leading to full disclosure of embedded knowledge, as noted by Vitaly Simonovich, Threat Intelligence Researcher at Cato Networks.
Frameworks and Vulnerabilities in AI Applications
To address AI security challenges, several frameworks have been established, including the NIST Artificial Intelligence Risk Management Framework, Google’s Secure AI Framework (SAIF), and OWASP’s guidelines for LLM applications among others. These frameworks aim to provide structured methodologies for minimising AI vulnerabilities and enhancing the security posture of AI-integrated systems.
The attack surface of LLM-based applications can include:
- Prompt Attacks: Manipulating input to alter AI outputs.
- Response Misuse: Leaking sensitive information in AI-generated content.
- Model Manipulation: Tampering with the AI model itself.
- Data Poisoning: Inserting malicious data into training datasets.
- Infrastructure Attacks: Targeting foundational servers and services.
- User Exploitation: Misleading human operators relying on AI outputs.
Real-World Illustrations of AI Exploitation
Several incidents underscore the vulnerabilities inherent in AI systems:
-
Prompt Injection Exploit: At a car dealership, a researcher successfully manipulated an AI customer service chatbot to agree on a statement, resulting in the purchase of a car at a significantly undervalued price due to modifications in its responses.
-
Legal Repercussions from AI Missteps: Air Canada faced legal challenges when its AI chatbot dispensed incorrect refund policy information, leading to customer reliance on incorrect advice and subsequent claims against the airline.
-
Data Leakage Hazards: Samsung employees inadvertently shared proprietary code with ChatGPT, highlighting the risks associated with uploading sensitive company data to third-party AI platforms.
-
Deepfake and AI Fraud: In a major incident, cybercriminals used AI-based deepfake technology to impersonate trusted bank officials in Hong Kong, successfully defrauding the institution of $25 million.
These examples illustrate how AI can be harnessed for malign purposes, though the technology still presents hurdles for criminals in fully exploiting its capabilities. Meanwhile, understanding the tactics and techniques of cybercriminals can help better safeguard AI systems against threats.
While AI’s role in cybercrime remains in a formative stage, its potential to be exploited warrants close attention, prompting ongoing research and development of protective measures to secure AI environments.
Source: Noah Wire Services











