
Researchers Uncover Breakthrough in Human-In-the-Loop AI Training with ICPL

by Language Models (dot tech) | December 3rd, 2024

Too Long; Didn't Read

ICPL is a reinforcement learning approach that uses Large Language Models to learn reward functions from human preferences with far fewer queries than standard RLHF, and it extends to complex, human-in-the-loop tasks such as teaching a humanoid to jump.

Authors:

(1) Chao Yu, Tsinghua University;

(2) Hong Lu, Tsinghua University;

(3) Jiaxuan Gao, Tsinghua University;

(4) Qixin Tan, Tsinghua University;

(5) Xinting Yang, Tsinghua University;

(6) Yu Wang, Tsinghua University (equal advising);

(7) Yi Wu, Tsinghua University and the Shanghai Qi Zhi Institute (equal advising);

(8) Eugene Vinitsky, New York University (equal advising) ([email protected]).

  1. Abstract and Introduction
  2. Related Work
  3. Problem Definition
  4. Method
  5. Experiments
  6. Conclusion and References


A. Appendix

A.1 Full Prompts

A.2 ICPL Details

A.3 Baseline Details

A.4 Environment Details

A.5 Proxy Human Preference

A.6 Human-in-the-Loop Preference

ABSTRACT

Designing reward functions is a core component of reinforcement learning but can be challenging for truly complex behavior. Reinforcement Learning from Human Feedback (RLHF) has been used to alleviate this challenge by replacing a hand-coded reward function with a reward function learned from preferences. However, it can be exceedingly inefficient to learn these rewards as they are often learned tabula rasa. We investigate whether Large Language Models (LLMs) can reduce this query inefficiency by converting an iterative series of human preferences into code representing the rewards. We propose In-Context Preference Learning (ICPL), a method that uses the grounding of an LLM to accelerate learning reward functions from preferences. ICPL takes the environment context and task description, synthesizes a set of reward functions, and then repeatedly updates the reward functions using human rankings of videos of the resultant policies. Using synthetic preferences, we demonstrate that ICPL is orders of magnitude more efficient than RLHF and is even competitive with methods that use ground-truth reward functions instead of preferences. Finally, we perform a series of human preference-learning trials and observe that ICPL extends beyond synthetic settings and can work effectively with humans-in-the-loop. Additional information and videos are provided at https://sites.google.com/view/few-shot-icpl/home.

1 INTRODUCTION

Designing state-of-the-art agents using reinforcement learning (RL) often requires the design of reward functions that specify desired and undesirable behaviors. However, for sufficiently complex tasks, designing an effective reward function remains a significant challenge. This process is often difficult, and poorly designed rewards can lead to biased, misguided, or even unexpected behaviors in RL agents (Booth et al., 2023). Recent advances in large language models (LLMs) have shown potential in tackling this challenge as they are able to zero-shot generate reward functions that satisfy a task specification (Yu et al., 2023). However, the ability of LLMs to directly write a reward function is limited when task complexity increases or tasks are out-of-distribution from pre-training data. As an additional challenge, humans may be unable to perfectly specify their desired agent behavior in text.


Human-in-the-loop learning (HL) offers a potential enhancement to the reward design process by embedding human feedback directly into the learning loop. A ubiquitous approach is preference-based RL, where preferences between different agent behaviors serve as the primary learning signal. Instead of relying on predefined rewards, the agent learns a reward function aligned with human preferences. This interactive approach has shown success in various RL tasks, including standard benchmarks (Christiano et al., 2017; Ibarz et al., 2018), encouraging novel behaviors (Liu et al., 2020; Wu et al., 2021), and overcoming reward exploitation (Lee et al., 2021a). However, in more complex tasks involving extensive agent-environment interactions, preference-based RL often demands hundreds or thousands of human queries to provide effective feedback. For instance, a robotic arm button-pushing task requires over 10k queries to learn reasonable behavior (Lee et al.), which can be a major bottleneck.
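For intuition, standard preference-based RL typically fits a reward model to pairwise labels with a Bradley-Terry-style objective (as in Christiano et al., 2017). The sketch below is illustrative only; the tensor shapes and the `reward_model` interface are assumptions, not code from this paper.

```python
import torch.nn.functional as F

def preference_loss(reward_model, seg_a, seg_b, prefer_a):
    """Bradley-Terry-style loss for learning a reward model from pairwise preferences.

    seg_a, seg_b: (batch, horizon, obs_dim) trajectory segments (assumed shapes).
    prefer_a:     (batch,) float labels, 1.0 if the human preferred segment A.
    reward_model: assumed to map a segment batch to per-step rewards of shape (batch, horizon).
    """
    r_a = reward_model(seg_a).sum(dim=1)  # total predicted reward of segment A
    r_b = reward_model(seg_b).sum(dim=1)  # total predicted reward of segment B
    logits = r_a - r_b  # log-odds that A is preferred under the Bradley-Terry model
    return F.binary_cross_entropy_with_logits(logits, prefer_a)
```

Each human query yields one such label, which is part of why learning rewards tabula rasa can require thousands of queries.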


In this work, we introduce a novel method, In-Context Preference Learning (ICPL), which significantly enhances the sample efficiency of preference learning through LLM guidance. Our primary insight is that we can harness the coding capabilities of LLMs to autonomously generate reward functions and then use human preferences, via in-context learning, to refine these functions. Specifically, ICPL leverages an LLM, such as GPT-4, to generate executable, diverse reward functions based on the task description and environment source code. We acquire preferences by evaluating the agent behaviors resulting from these reward functions, selecting the most and least preferred behaviors. The selected functions, along with historical data such as reward traces of the generated reward functions from RL training, are then fed back into the LLM to guide subsequent iterations of reward function generation. We hypothesize that, as a result of its grounding in text data, ICPL will be able to improve the quality of the reward functions by incorporating the preferences and the history of generated reward functions, so that they align increasingly closely with human preferences. Unlike evolutionary search methods such as EUREKA (Ma et al., 2023), there is no ground-truth reward function that the LLM can use to evaluate agent performance; thus, success here would demonstrate that LLMs have some native preference-learning capabilities.
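To make the loop concrete, here is a minimal structural sketch of the ICPL iteration described above. The callables passed in (`query_llm`, `train_policy`, `render_rollout`, `collect_ranking`) are hypothetical placeholders for the LLM call, RL training, video rendering, and the human-ranking interface; they are not APIs from the paper's released code.

```python
def icpl(query_llm, train_policy, render_rollout, collect_ranking,
         task_desc, env_code, iterations=5, num_candidates=6):
    """Sketch of In-Context Preference Learning (placeholder callables noted above)."""
    history = []  # in-context feedback: best/worst reward functions and reward traces
    reward_fns = query_llm(task_desc, env_code, n=num_candidates)

    for _ in range(iterations):
        # Train one policy per candidate reward function and record its behavior.
        videos, traces = [], []
        for fn in reward_fns:
            policy, reward_trace = train_policy(fn)
            videos.append(render_rollout(policy))
            traces.append(reward_trace)

        # A human ranks the videos; keep the most and least preferred candidates.
        best_idx, worst_idx = collect_ranking(videos)
        history.append({"best": reward_fns[best_idx],
                        "worst": reward_fns[worst_idx],
                        "traces": traces})

        # Feed the selections and training history back to the LLM for the next batch.
        reward_fns = query_llm(task_desc, env_code, n=num_candidates, feedback=history)

    return history[-1]["best"]
```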


To study the effectiveness of ICPL, we perform experiments on a diverse set of RL tasks. For scalability, we first study tasks with synthetic preferences, where a ground-truth reward function is used to assign preference labels. We observe that, compared to traditional preference-based RL algorithms, ICPL achieves a more than 30-fold reduction in the number of preference queries required to reach equivalent or superior performance. Moreover, ICPL attains performance comparable to, or even better than, methods that have access to a ground-truth specification of the reward function (Ma et al., 2023). Finally, we test ICPL on a particularly challenging task, "making a humanoid jump like a real human," where designing a reward is difficult. Using real human feedback, our method successfully trained an agent capable of bending both legs and performing stable, human-like jumps, showcasing the potential of ICPL in tasks where human intuition plays a critical role.
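For the synthetic-preference setting, one simple way to emulate the human ranking step (an illustrative assumption rather than the paper's exact protocol) is to rank the candidate policies by their returns under the ground-truth reward:

```python
def proxy_preference_ranking(policy_returns):
    """Mimic a human ranking: given each candidate policy's ground-truth return,
    return the indices of the most- and least-preferred policies."""
    order = sorted(range(len(policy_returns)), key=lambda i: policy_returns[i])
    return order[-1], order[0]  # (best_idx, worst_idx)
```

For example, `proxy_preference_ranking([3.2, 7.5, 1.1])` returns `(1, 2)`, preferring the policy with the highest ground-truth return and rejecting the lowest.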


In summary, the contributions of the paper are the following:


• We propose ICPL, an LLM-based preference learning algorithm. Over a synthetic set of preferences, we demonstrate that ICPL can iteratively output rewards that increasingly reflect preferences. Via a set of ablations, we demonstrate that this improvement is on average monotonic, suggesting that preference learning is occurring as opposed to a random search.


• We demonstrate, via human-in-the-loop trials, that ICPL is able to work effectively with humans-in-the-loop despite significantly noisier preference labels.


• We demonstrate that ICPL sharply outperforms tabula-rasa RLHF methods and is also competitive with methods that rely on access to a ground-truth reward.


This paper is available on arXiv under a CC 4.0 license.