Interview Bank
  • Interview Bank
  • Web
    • Persistent Connection and Non Persistent
    • CDN
    • Code Review
    • JWT
      • JWT vs Session Based Authentication
      • JWT Challenge
      • JWE
      • JWS
    • Content Security Policy (CSP)
    • Same-origin Policy (SOP)
    • Cross-Origin Resource Sharing (CORS)
      • Exploiting CORS
    • HTTP Strict Transport Security (HSTS)
    • SQL Injection (SQLi)
    • Password Encryption in Login APIs
    • API Security
      • API Principles
    • Simple bypass PHP
    • Server-side Template Injection (SSTI)
    • Javascript Object and Inheritance
    • HTTP/2
    • Cookie vs Local vs session Storage
    • XML External Entity (XXE)
    • What happened when enter domain name in browser
    • Prototype Pollution - Part 1
    • Prototype Pollution - Part 2
    • Nginx vs Apache
  • OT Security
    • Securing Operational Technology: Understanding OT Security
  • Quantum Computing
    • Quantum Computing: Unveiling the Cryptographic Paradigm Shift
    • Quantum Obfuscation: Shielding Code in the Quantum Era
  • DevSecOps
    • Continuous Integration/Continuous Deployment Pipeline Security
    • Chaos Engineering Overview
      • Security Chaos Engineering
    • Mysql VS redis
    • Kubernetes (k8s)
    • How MySQL executes query
    • REDIS
    • Difference between cache and buffer
  • Windows
    • Pentesting Active Directory - Active Directory 101
    • Pentesting Active Directory - Kerberos (Part 1)
    • Pentesting Active Directory - Kerberos (Part 2)
    • AD vs Kerberos vs LDAP
    • Active Directory Certificate Services Part 1
    • Unconstrained Delegation
    • AS-REP Roasting
    • NTLM Relay via SMB
    • LLMRN
    • Windows lateral movement
    • Constrained Delegation
    • Resource-Based Constrained Delegation
    • IFEO (lmage File Execution Options) Hijacking
  • UNIX
    • Setuid
  • Large Language Models (LLMs)
    • Tokens
    • LangChain
    • Integration and Security
  • Android
    • Keystore
  • Red team development
    • Secure C2 Infrastructure
    • P Invoke in c#
    • D Invoke
    • ExitProcess vs ExitThread
  • Blue Team
    • Indicators of Compromise
    • Methods to prevent Email domain spoofing
    • Windows Prefetching
  • CVE
    • XZ Outbreak CVE-2024-3094
    • Log4J Vulnerability (CVE-2021-44228)
    • SolarWinds Hack (CVE-2020-10148)
    • PHP CGI RCE (CVE-2024-4577)
    • Windows Recall
  • Software Architecture
    • Microservices
    • KVM
  • Docker
    • Overview
    • Daemon Socket
    • Tips to reduce docker size
  • Blockchain
    • Overview
    • Smart Contract
  • Business Acumen
    • Market Research Reports and Perception
    • Understanding Acquisitions
    • Cybersecurity as a Business Strategy
  • Cyber Teams
    • Introduction to Purple Teaming
  • Malware
    • Dynamic Sandbox Limitations
Powered by GitBook
On this page
  • Introduction
  • Example of LangChain Process Flow using llm_math
  • Issues with Model Inputs Using LangChain
  • Securing the Process
  • Interview Questions
  • Author
  • References
  1. Large Language Models (LLMs)

Integration and Security

PreviousLangChainNextKeystore

Last updated 1 year ago

Introduction

In this section, we look to dive into protection of LLM and its integration processes to prevent malicious usage by attackers. has explained some basic concepts and usages of its capabilities. It can also be used as potential attack vectors and malicious endpoint.

Example of LangChain Process Flow using llm_math

Issues with Model Inputs Using LangChain

With LangChain's rising popularity and usage to communicate with models, LangChain can be used as an offensive tool to inject malicious prompts that the model API Endpoint or the model itself may not properly sanitize. Because of rapid developmenet and deployment to public, designs of such tools may be inadequate and lack security enhancements. Recent developments have shown possibilities revolving arbitrary malicious input from attackers to control the output of the LLM.

Securing the Process

Recommended to use LangChain's latest version to avoid pitfalls of vulnerable APIs that malicious actors can abuse. Reduce the use of plugins from packages, create custom plugins if possible otherwise, treat external plugins with lowest level of privilege.

Data and Control (context passing) planes are inseparable and sometimes passed by user, important to always sanitize inputs on both ends. Parameterized queries can help to maintain a lower privileged context as well.

Paraphrasing can be used as possible mitigation method to prompt injection. This may break the order of the payload within the input which helps reduce malicious input execution.

Data prompt isolation to prevent malicious code within the data input to override the current instruction/context prompt. Solely treating data prompt as data input could reduce Context Ignoring attacks. Delimiters such as ''' or <xml> and other tags have to be carefully treated to prevent execution during data prompting.

Sandwich prevention prefixes an instrcution prompt to ensure the LLM do not modify its original context and prevents attacker's injected instruction to execute. However, it reduces flexibility in allowing normal users from performing custom prompts. Therefore, context should be carefully managed and a limiter could be used to detect if context has been shifted maliciously and only then, revert to its original instructions.

Interview Questions

  • LLM integration into existing products is a trend to boost productivity, how do you secure LLM integration and pipelines?

  • What defense techniques can be used to secure API functions?

Author

References

Credits to

Exploited Proof of Concepts (PoCs) using llm_math are further discussed in .

🍞

- Fantastic resource links

- Maybe write in separate topic

- To incorporate in the future

- To incorporate in the future

- Can further explore the paper

Nvidia
LLM Attacks
Zheng Jie
LLM Security - Defense
Nvidia - Prompt Injection Defense
AWS - Architect Defense for Generative AI
Lakera - Insights to LLM Security
Neeraj - LLM Defense Strategies
OWASP - Top 10 LLM Vulnerabilities
Microsoft = AI/ML Security
Paper - Prompt Injection and Defenses in LLM-Integrated Applications
LangChain
LogoSecuring LLM Systems Against Prompt Injection | NVIDIA Technical BlogNVIDIA Technical Blog
LangChain Process Flow
LLM Threat Model