Resources
    OWASP Top 10 for Large La ...
    15 September 23

    OWASP Top 10 for Large Language Model (LLM) Security - A Web Application Penetration Tester's Guide - Part 2

    Posted by
    facebooktwitterlinkedin
    news-featured

    Part two of the OWASP Top 10 for Large Language Model (LMM) Security - A Web Application Penentration Tester’s Guide is here. Catch up on the first five from OWASP’s Top 10 in our Part One blog.

    _ _

    In this blog, we will explore the second half of the OWASP Top 10 for LLM and their impact on the role of a penetration tester. When you’re done with this blog, don’t forget to visit ine.com/security to check out all of our cybersecurity for penetration testers and beyond. Coming this fall, we're excited to release a revamped Web Application Penetration Testing Professional Certification to align with the updated Learning Path we released this summer. Stay tuned for more details on that release!

    A couple of things to note: 

    • Keep in mind that many of these attacks occur in concert - a prompt injection attack may be executed through an insecure plug-in. 
    • Also, penetration testing a large language model will involve skills, techniques, and goals that are not always part of the pentester’s job role. For example, some LLM vulnerabilities require secure code reviews, which do not always fall under a pentester’s responsibilities. This article assumes a broad role definition for a pentester.

    LLM06: Sensitive Information Disclosure

    Sensitive information disclosure occurs when language models inadvertently include confidential data in their responses.  When large language models include sensitive data, output must be controlled to limit exposure of the sensitive data. Data can be exposed through normal operations of the system due to inadequate design or due to a prompt injection attack that disables built-in output sanitization.

    Penetration testers will actively search for instances of this vulnerability by crafting prompts that bypass input validation and response control and recommend strategies to minimize the risk of exposing sensitive information.

    LLM07: Insecure Plugin Design

    Language models often integrate with various plugins or extensions, each of which presents a potential attack surface. Plugins typically inject data directly into LLM input, bypassing user input validation. Plug-ins that are not properly vetted can introduce a dangerous vector of attack.

    Penetration testers must evaluate the design and security of these plugins, assessing whether they adhere to best practices and do not introduce additional vulnerabilities.

    LLM08: Excessive Agency

    Excessive agency refers to large language models making decisions that go beyond their intended scope, potentially causing harm. LLMs are often connected to other systems and may execute operations in those systems automatically based on user input. This makes the LLM system an indirect attack vector for other systems; and if the LLM is given too much autonomy or agency to commit actions in other systems, this may bypass standard protections on those systems. 

    Penetration testers should ensure that the LLM system is subject to the appropriate security restrictions on all systems that the LLM interacts with, in particular on any system that allows the LLM to make automated changes.

    LLM09: Overreliance

    Overreliance on large language models without human oversight can lead to critical errors or misinformation. LLMs produce amazing results and change the way we do business. They also will generate incorrect results without an indication of the reliability of the answer. So wrong, possibly damaging answers look exactly like correct answers.

    Penetration testers need to review system workflows to ensure that proper oversight is required, and that human validation processes are in place to ensure the accuracy and integrity of the model's outputs.

    LLM10: Model Theft

    Model theft involves the unauthorized acquisition of a trained language model - which could be exploited by attackers. Large language models are built on data, and for custom solutions, this often includes some of all of an organization's critical and confidential data. This represents a potential single point of exfiltration for an attacker and needs to be protected.

    Penetration testers need to use standard data exfiltration techniques to ensure that the model and the critical data within the model is safeguarded against theft.

    The OWASP Top 10 for Language Models highlights the critical security risks associated with the use of language models in applications. As a Web Application Penetration Tester, understanding and addressing these risks is essential to ensure the security and integrity of applications that leverage language models. By diligently evaluating the application's inputs, outputs, training data, and integrations, penetration testers can play a pivotal role in identifying vulnerabilities, recommending remediation strategies, and contributing to the development of more secure language model applications.

    Want to dig into Web Application Penetration Testing? Check out our WPT Learning Path now to get started. 

    © 2024 INE. All Rights Reserved. All logos, trademarks and registered trademarks are the property of their respective owners.
    instagram Logofacebook Logotwitter Logolinkedin Logoyoutube Logo