Prompt Recovery for Large Language Models

Authors

  • Ruochen Feng
  • Jincheng Hu
  • Yifang Chen

DOI:

https://doi.org/10.61173/gmghz885

Keywords:

Large Language Models, Prompt Recovery, Pre-trained Model, Model Stacking, Predictive Entropy

Abstract

Understanding and recovering the prompts given to large language models (LLMs) is vital for addressing concerns about privacy, copyright, and beyond, yet the problem remains under-explored. To fill this gap, we apply model stacking techniques that combine dataset-specific components, such as a mean-prompt baseline and embedding models. Although each component was tailored to a particular dataset, the stacked model achieves improved prompt-recovery accuracy across diverse datasets, at the cost of a slight decline in performance on the original dataset. A comprehensive evaluation across multiple LLMs and prompt benchmarks shows that our stacking model outperforms existing methods. Notably, the approach uses a single LLM and does not depend on external resources, making it an efficient and accessible solution for prompt recovery.
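The abstract gives no implementation details, but the stacking idea it describes could look roughly like the sketch below: a fixed mean-prompt fallback combined with an embedding-based ranker that scores candidate prompts against the observed text transformation. The `embed` helper, the fallback string, and the confidence threshold are illustrative assumptions, not the authors' actual components.

```python
import numpy as np

# Illustrative sketch of stacking two prompt-recovery components (not the
# paper's code): a fixed "mean prompt" fallback and an embedding-based
# candidate ranker. `embed` stands in for any sentence-embedding model.

MEAN_PROMPT = "Improve the following text."  # hypothetical fallback prompt


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))


def recover_prompt(original: str, rewritten: str, candidates: list[str],
                   embed, threshold: float = 0.6) -> str:
    """Pick the candidate prompt whose embedding best matches the
    original-to-rewritten transformation; fall back to the mean prompt
    when no candidate is a confident match."""
    # Represent the transformation as the difference of text embeddings
    # (one simple heuristic; the paper's actual features may differ).
    delta = embed(rewritten) - embed(original)
    scores = [cosine(delta, embed(c)) for c in candidates]
    best = int(np.argmax(scores))
    return candidates[best] if scores[best] >= threshold else MEAN_PROMPT
```

The fallback-versus-ranker split mirrors the stacking intuition: when the embedding model is confident, use its dataset-specific prediction; otherwise default to the generic mean prompt that performs acceptably everywhere.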

Published

2024-10-29

Section

Articles