As API usage surges, the integration of AI could be key in fortifying security measures against emerging threats, according to experts.

In the evolving digital landscape, Application Programming Interfaces (APIs) have become critical for enabling the functionality of digital services, including Generative AI. However, securing these pathways remains a formidable challenge, prompting inquiries into whether Artificial Intelligence (AI) might hold the solution. Cequence Security’s Systems Engineering Director for EMEA, James Sherlow, recently shed light on how Generative AI could be leveraged to enhance API security.

Sherlow outlines several inherent issues with API security, starting with their vulnerability. APIs are frequently targeted due to their sheer volume and often poor oversight. A significant number of malicious requests aim at shadow APIs—those APIs that organizations are unaware of, leaving them unmonitored and not routinely updated. This makes the discovery of APIs a necessary step to ensure they are visible, monitored, and appropriately managed.

Moreover, Sherlow explains that even well-managed APIs are not immune to business logic abuse. Attackers can exploit normal-looking API calls to breach systems and access sensitive data. Traditional security measures like Web Application Firewalls, which detect signature-based attacks, are inadequate against these covert exploits. Instead, dedicated API security solutions using behavioural analysis are employed to identify attack patterns. Here, Generative AI could potentially refine this approach further.

The integration of AI with Machine Learning (ML) is proposed to improve threat detection. While ML algorithms identify anomalous traffic, AI could generate the policies and models necessary for ongoing protection without human intervention. This synergy allows for simultaneous threat analysis across multiple endpoints, enhancing detection capabilities and response times.

As the deployment of APIs expands, Sherlow highlights the importance of classification and prioritisation. Automatic identification by vendors can account for all APIs, eradicating shadow APIs. However, this extensive API ecosystem needs risk-based assessment and customised API definitions for effective prioritisation.

Testing APIs, essential for ensuring secure coding, could also benefit from AI. Current testing is a complex and manual task that AI could streamline. AI could automate test case creation, adapt them to different threat scenarios, and create comprehensive authentication profiles to test APIs under multiple user conditions. This automation would significantly reduce developer workloads.

Large Language Models (LLMs), heavily dependent on APIs, face their unique risks, including vulnerabilities identified by the OWASP Top 10. AI-enabled testing can address these by simulating traffic to detect threats and provide developers with recommendations for improvement. Ensuring the security of LLMs and Generative AI applications demands rigorous testing and auditing before deployment.

Looking to the future, Sherlow posits that AI will be integral to API security. AI can enhance reporting processes, facilitating communication of risk and exposure to executive boards. Its role is set to grow as API deployment expands and AI applications become more commercially prevalent.

In 2023, 71% of web traffic was API-related, averaging 1.5 billion calls per organisation—numbers expected to rise. As Generative AI applications proliferate, they will inevitably attract more attacks, underscoring the necessity of using AI tools to bolster API security. Ultimately, AI holds promise for significantly mitigating the burden on developers in securing these critical conduits.

Source: Noah Wire Services

Share.
Leave A Reply

Exit mobile version