Prompt Optimization Curves on BBH Tasks

Written by textmodels | Published 2024/09/25
Tech Story Tags: ai | llm-optimization | llms-for-prompt-engineering | opro-algorithm | derivative-free-optimization | big-bench-hard-tasks | prompt-engineering | prompt-optimization-techniques

TL;DR: Figures 23 and 24 show prompt optimization curves for 21 BBH tasks. With the text-bison scorer and the PaLM 2-L-IT optimizer, the curves trend consistently upward, indicating effective optimization across tasks.

Authors:

(1) Chengrun Yang, Google DeepMind (equal contribution);

(2) Xuezhi Wang, Google DeepMind;

(3) Yifeng Lu, Google DeepMind;

(4) Hanxiao Liu, Google DeepMind;

(5) Quoc V. Le, Google DeepMind;

(6) Denny Zhou, Google DeepMind;

(7) Xinyun Chen, Google DeepMind (equal contribution).

Table of Links

Abstract and 1. Introduction

2 OPRO: LLM as the Optimizer and 2.1 Desirables of Optimization by LLMs

2.2 Meta-Prompt Design

3 Motivating Example: Mathematical Optimization and 3.1 Linear Regression

3.2 Traveling Salesman Problem (TSP)

4 Application: Prompt Optimization and 4.1 Problem Setup

4.2 Meta-Prompt Design

5 Prompt Optimization Experiments and 5.1 Evaluation Setup

5.2 Main Results

5.3 Ablation Studies

5.4 Overfitting Analysis in Prompt Optimization and 5.5 Comparison with EvoPrompt

6 Related Work

7 Conclusion, Acknowledgments and References

A Some Failure Cases

B Prompting Formats for Scorer LLM

C Meta-Prompts and C.1 Meta-Prompt for Math Optimization

C.2 Meta-Prompt for Prompt Optimization

D Prompt Optimization Curves on the Remaining BBH Tasks

E Prompt Optimization on BBH Tasks – Tabulated Accuracies and Found Instructions

D PROMPT OPTIMIZATION CURVES ON THE REMAINING BBH TASKS

Figures 23 and 24 present the prompt optimization curves on the remaining 21 BBH tasks, using the text-bison scorer and the PaLM 2-L-IT optimizer. The curves show an overall upward trend, indicating that the optimization is effective on these tasks.
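To make concrete how such a curve arises, below is a minimal, hypothetical Python sketch of an OPRO-style optimization loop. Everything here is a stand-in invented for illustration: in the paper, the scorer is text-bison and the optimizer is PaLM 2-L-IT, both real LLM calls, whereas score_instruction and propose_instructions below are toy stubs so the snippet runs on its own.

```python
# Hypothetical sketch of an OPRO-style prompt optimization loop.
# The LLM calls are replaced by toy stand-ins; only the loop
# structure (meta-prompt of top pairs -> new candidates -> scoring
# -> curve) mirrors the method behind Figures 23 and 24.
import random

random.seed(0)

def score_instruction(instruction: str) -> float:
    """Stand-in for the scorer LLM (text-bison in the paper):
    returns a fake training accuracy in [0, 1] that grows noisily
    with instruction length."""
    return min(1.0, 0.3 + 0.02 * len(instruction.split())
               + random.uniform(-0.05, 0.05))

def propose_instructions(top_pairs: list[tuple[str, float]],
                         n: int = 8) -> list[str]:
    """Stand-in for the optimizer LLM (PaLM 2-L-IT in the paper):
    given the highest-scoring (instruction, score) pairs from the
    meta-prompt, propose new candidate instructions. Here we just
    append words to the best instruction so far."""
    best = top_pairs[-1][0]
    return [best + (" step by step" if i % 2 else " carefully")
            for i in range(n)]

# Optimization trajectory: start from an empty instruction, keep the
# top-20 pairs in the meta-prompt, and record the best score per step.
history = [("", score_instruction(""))]
curve = []
for step in range(50):
    history.sort(key=lambda pair: pair[1])
    top = history[-20:]  # highest-scoring pairs shown to the optimizer
    for candidate in propose_instructions(top):
        history.append((candidate, score_instruction(candidate)))
    curve.append(max(score for _, score in history))

print(curve)  # non-decreasing best-so-far scores
```

Running this prints a non-decreasing best-so-far sequence. The real curves in Figures 23 and 24 come from the same loop with actual LLM calls and per-step accuracies, so they fluctuate more while still trending upward.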

This paper is available on arXiv under a CC0 1.0 Deed license.

