
Large Language Models as Optimizers: Meta-Prompt for Prompt Optimization


Too Long; Didn't Read

This section discusses the meta-prompt styles that yield the best results for different optimizer models. It highlights examples for PaLM 2-L and GPT models, detailing how the meta-prompts are structured so that the generated instructions improve the accuracy of scorer LLMs on the GSM8K dataset.

Authors:

(1) Chengrun Yang, Google DeepMind (equal contribution);

(2) Xuezhi Wang, Google DeepMind;

(3) Yifeng Lu, Google DeepMind;

(4) Hanxiao Liu, Google DeepMind;

(5) Quoc V. Le, Google DeepMind;

(6) Denny Zhou, Google DeepMind;

(7) Xinyun Chen, Google DeepMind (equal contribution).

Abstract and 1. Introduction

2 OPRO: LLM as the Optimizer and 2.1 Desirables of Optimization by LLMs

2.2 Meta-Prompt Design

3 Motivating Example: Mathematical Optimization and 3.1 Linear Regression

3.2 Traveling Salesman Problem (TSP)

4 Application: Prompt Optimization and 4.1 Problem Setup

4.2 Meta-Prompt Design

5 Prompt Optimization Experiments and 5.1 Evaluation Setup

5.2 Main Results

5.3 Ablation Studies

5.4 Overfitting Analysis in Prompt Optimization and 5.5 Comparison with EvoPrompt

6 Related Work

7 Conclusion, Acknowledgments and References

A Some Failure Cases

B Prompting Formats for Scorer LLM

C Meta-Prompts and C.1 Meta-Prompt for Math Optimization

C.2 Meta-Prompt for Prompt Optimization

D Prompt Optimization Curves on the Remaining BBH Tasks

E Prompt Optimization on BBH Tasks – Tabulated Accuracies and Found Instructions

C.2 META-PROMPT FOR PROMPT OPTIMIZATION

Different optimizer models work best with different styles of meta-prompts. Figure 3 in the main paper shows the meta-prompt for PaLM 2-L-IT; Figure 21 shows the one for the pre-trained PaLM 2-L; Figure 22 shows the one for GPT models.

Figure 21: An example of the meta-prompt for prompt optimization with pre-trained PaLM 2-L on GSM8K, where the generated instruction will be prepended to the beginning of the scorer LLM output (A_begin in Section 4.1).
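To make the A_begin format concrete, here is a minimal sketch, not taken from the paper's code release, of how a generated instruction could be placed at the beginning of the scorer LLM's answer; the function name and the example strings are illustrative assumptions:

```python
def build_a_begin_prompt(question: str, instruction: str) -> str:
    """Assemble an A_begin-style scorer prompt: the optimizer-generated
    instruction becomes the start of the answer, and the scorer LLM is
    asked to continue from it (see Section 4.1)."""
    return f"Q: {question}\nA: {instruction}"


# Illustrative GSM8K-style usage; during optimization, the candidate
# instruction would come from the optimizer LLM at each step.
prompt = build_a_begin_prompt(
    question=(
        "Natalia sold clips to 48 of her friends in April, and then she "
        "sold half as many clips in May. How many clips did Natalia sell "
        "altogether in April and May?"
    ),
    instruction="Let's think step by step.",
)
print(prompt)
```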

Figure 22: An example of the meta-prompt for prompt optimization with GPT models (gpt-3.5-turbo or gpt-4) on GSM8K, where the generated instruction will be prepended to the beginning of the scorer LLM output (A_begin in Section 4.1). The blue text contains solution-score pairs; the purple text describes the optimization task and output format; the orange text contains meta-instructions.
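Reading the caption as a template, a meta-prompt of this style could be assembled roughly as follows. This is a hedged sketch rather than the paper's released code: the function name, the fixed wording, and the cap on the number of instruction-score pairs are assumptions made for illustration.

```python
def build_meta_prompt(scored_instructions, exemplars, max_pairs=20):
    """Assemble a GPT-style meta-prompt from the three parts described
    in Figure 22: previous solution-score pairs (blue), the optimization
    task and output format (purple), and meta-instructions (orange).
    The exact wording here is illustrative, not verbatim from the paper."""
    # Blue: past instructions with their scores, ascending by score so
    # the best-performing ones appear nearest the end of the prompt.
    best = sorted(scored_instructions, key=lambda pair: pair[1])[-max_pairs:]
    pairs_text = "\n\n".join(
        f"text:\n{ins}\nscore:\n{score}" for ins, score in best
    )

    # Purple: the optimization task, shown with training exemplars, plus
    # the required output format.
    task_text = (
        "The following exemplars show how to apply your text: you replace "
        "<INS> in each input with your text, then read the input and give "
        "an output.\n\n" + "\n\n".join(exemplars)
    )

    # Orange: meta-instructions asking for a new, higher-scoring instruction.
    meta_text = (
        "Write your new text that is different from the old ones and has a "
        "score as high as possible. Write the text in square brackets."
    )
    return "\n\n".join([pairs_text, task_text, meta_text])


# Illustrative usage with made-up scores and one toy exemplar.
meta_prompt = build_meta_prompt(
    scored_instructions=[("Let's solve the problem.", 60),
                         ("Let's think step by step.", 72)],
    exemplars=["input:\nQ: 2 + 3 = ?\nA: <INS>\noutput:\n5"],
)
print(meta_prompt)
```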


This paper is available on arXiv under the CC0 1.0 DEED license.