Bullet-proof your custom GPT system prompt security with KEVLAR, the ultimate prompt protector against rules extraction, prompt injections, and leaks of AI agent secret instructions.
-
Updated
Apr 12, 2024
Bullet-proof your custom GPT system prompt security with KEVLAR, the ultimate prompt protector against rules extraction, prompt injections, and leaks of AI agent secret instructions.
A collection of extracted system prompts/operational instructions
Source code of the paper "Why Are My Prompts Leaked? Unraveling Prompt Extraction Threats in Customized Large Language Models"
Add a description, image, and links to the prompt-extraction topic page so that developers can more easily learn about it.
To associate your repository with the prompt-extraction topic, visit your repo's landing page and select "manage topics."