Artificial Intelligence | News | Insights | AiThority
[bsfp-cryptocurrency style=”widget-18″ align=”marquee” columns=”6″ coins=”selected” coins-count=”6″ coins-selected=”BTC,ETH,XRP,LTC,EOS,ADA,XLM,NEO,LTC,EOS,XEM,DASH,USDT,BNB,QTUM,XVG,ONT,ZEC,STEEM” currency=”USD” title=”Cryptocurrency Widget” show_title=”0″ icon=”” scheme=”light” bs-show-desktop=”1″ bs-show-tablet=”1″ bs-show-phone=”1″ custom-css-class=”” custom-id=”” css=”.vc_custom_1523079266073{margin-bottom: 0px !important;padding-top: 0px !important;padding-bottom: 0px !important;}”]

Akto Launches World’s First Proactive GenAI Security Testing Solution

About 77% of organizations have adopted or are exploring AI in some capacity, pushing for a more efficient and automated workflow. With the increasing reliance on GenAI models and Language Learning Models (LLMs) like ChatGPT, the need for robust security measures have become paramount.

Akto, a leading API Security company, is proud to announce the launch of its revolutionary GenAI Security Testing solution. This cutting-edge technology marks a significant milestone in the field of AI security, making Akto the world’s first proactive GenAI security testing platform.

“Akto has a new capability to scan APIs that leverage AI technology and this is fundamental for the future of application security. I invested early in building application, security education for AI and I’m thrilled to see other security companies do the same for security assessment around AI technologies.” – Jim Manico, Former OWASP Global Board Member, Secure Coding Educator.

Recommended AI News: Sumsub Unveils Industry-First Deepfake Detection in Video Identification

On average, an organization uses 10 GenAI models. Often most LLMs in production will receive data indirectly via APIs. That means tons and tons of sensitive data is being processed by the LLM APIs. Ensuring the security of these APIs will be very crucial to protect user privacy and prevent data leaks. There are several ways in which LLMs can be abused today, leading to sensitive data leaks.

  1. Prompt Injection Vulnerabilities – The risk of unauthorized prompt injections, where malicious inputs can manipulate the LLM’s output, has become a major concern.
  2. Denial of Service (DoS) Threats – LLMs are also susceptible to DoS attacks, where the system is overloaded with requests, leading to service disruptions. There’s been a rise in reported DoS incidents targeting LLM APIs in the last year.
  3. Overreliance on LLM Outputs – Overreliance on LLMs without adequate verification mechanisms has led to cases of data inaccuracies and leaks. Organizations are encouraged to implement robust validation processes, as the industry sees an increase in data leak incidents due to overreliance on LLMs.

“Securing GenAI systems requires a multifaceted approach with the need to protect not only the AI from external inputs but also external systems that depend on their outputs. ” – OWASP Top 10 for LLM AI Applications Core team member.

On March 20, 2023, there was an outage with OpenAI’s AI tool, ChatGPT. The outage was caused by a vulnerability in an open-source library, which may have exposed payment-related information of some customers. Very recently, on January 25, 2024, a critical vulnerability was discovered in Anything LLM ( 8,000 Github Stars) that turns any document or piece of content into context that any LLM can use during chatting. An unauthenticated API route (file export) can allow attackers to crash the server resulting in a denial of service attack. These are only a few examples of security incidents related to using LLM models.

Akto’s GenAI Security Testing solution addresses these challenges head-on. By leveraging advanced testing methodologies and state-of-the-art algorithms, Akto provides comprehensive security assessments for GenAI models, including LLMs. The solution incorporates a wide range of innovative features, including over 60 meticulously designed test cases that cover various aspects of GenAI vulnerabilities such as prompt injection, overreliance on specific data sources, and more. These test cases have been developed by Akto’s team of experts in GenAI security, ensuring the highest level of protection for organizations deploying GenAI models.

Related Posts
1 of 40,492

Currently, security teams manually test all the LLM APIs for flaws before release. Due to the time sensitivity of product releases, teams can only test for a few vulnerabilities. As hackers continue to find more creative ways to exploit LLMs, security teams need to find an automated way to secure LLMs at scale.

“Often input to an LLM comes from an end-user or the output is shown to the end-user or both. The tests try to exploit LLM vulnerabilities through different encoding methods, separators and markers. This specially detects weak security practices where developers encode the input or put special markers around the input. ” – Ankush Jain, CTO at Akto.io

Recommended AI News: Zendesk Completes Acquisition of Klaus

AI security testing identifies vulnerabilities in the security measures for sanitizing output of LLMs. It aims to detect attempts to inject malicious code for remote execution, cross-site scripting (XSS), and other attacks that could allow attackers to extract session tokens and system information. In addition, Akto also tests whether the LLMs are susceptible to generating false or irrelevant reports.

“From Prompt Injection ( LLM:01) to Overreliance (LLM09) and new vulnerabilities and breaches everyday and build systems that are secure by default; It is critical to test systems early for these ever evolving threats. I’m excited to see what Akto has in store for my LLM projects” – OWASP Top 10 for LLM AI Applications Core team member.

To further emphasize the importance of GenAI security, a recent survey in September, 2023 by Gartner revealed that 34% of organizations are either already using or implementing artificial intelligence (AI) application security tools to mitigate the accompanying risks of generative AI (GenAI). Over half (56%) of respondents said they are also exploring such solutions, highlighting the critical need for robust security testing solutions like Akto’s.

To showcase the capabilities and significance of Akto’s GenAI Security Testing solution, Akto’s Founder and CEO Ankita will be presenting at the prestigious Austin API Summit 2024. The session, titled “Security of LLM APIs,” will delve into the problem statement, highlight real-world examples, and demonstrate how solutions like Akto’s provide a robust defense against AI-related vulnerabilities.

As organizations strive to harness the power of AI, Akto stands at the forefront of ensuring the security and integrity of these transformative technologies. The launch of their GenAI Security Testing solution reinforces their commitment to innovation and their dedication to enabling organizations to embrace GenAI with confidence.

Recommended AI News: Cardboard Spaceship Lands $1 Million for LaunchPad Ventures to Drive Licensed Content Curation

[To share your insights with us as part of editorial or sponsored content, please write to sghosh@martechseries.com]

Comments are closed.