4 RQ2: Benefits of Applying Constraints to LLM Outputs
Beyond the aforementioned use cases, our survey respondents reported a range of benefits that the ability to constrain LLM outputs could offer. These include developer-facing benefits, such as increasing prompt-based development efficiency and streamlining integration with downstream processes and workflows, as well as user-facing benefits, such as satisfying product and UI requirements and improving users’ experience of and trust in LLMs (Table 2). Here are the most salient responses:
4.1 Increasing Prompt-based Development Efficiency
First and foremost, being able to constrain LLM outputs can significantly increase the efficiency of prompt-based engineering and development by reducing the trial and error currently needed to manage LLM unpredictability. Developers noted that the process of “defining the [output] format” alone is “time-consuming,” often requiring extensive prompt testing to identify the most effective one (consistent with what previous research has found [30, 41]). In addition, they often need to “request multiple responses” and keep “iterating through them until find[ing] a valid one.” Being able to deterministically constrain the output format could therefore not only save developers as much as “dozens of hours of work per week” spent on iterative prompt testing, but also reduce overall LLM inference costs and latency.
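The “request multiple responses and iterate until valid” pattern respondents describe might look like the following minimal Python sketch. Everything here is a hypothetical stand-in, not part of the survey: `call_llm` represents any provider SDK, JSON parseability stands in for whatever validity check a developer uses, and `max_attempts` is an arbitrary retry budget. The point is that each failed attempt adds inference cost and latency that constrained decoding could avoid.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; stands in for any real provider SDK.

    For illustration only, it returns a fixed reply that happens to be
    valid JSON; a real model's output format would vary run to run.
    """
    return '{"title": "Example", "tags": ["a", "b"]}'

def get_valid_output(prompt: str, max_attempts: int = 5) -> dict:
    """Request multiple responses, iterating until a valid one is found."""
    for _ in range(max_attempts):
        raw = call_llm(prompt)
        try:
            # Validity check (assumed here): the reply must parse as JSON.
            return json.loads(raw)
        except json.JSONDecodeError:
            # Retry; every extra round trip costs money and time.
            continue
    raise ValueError(f"no valid response after {max_attempts} attempts")
```

With deterministic output constraints, the loop collapses to a single call, which is exactly the savings in iteration time, cost, and latency that respondents point to.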
Another common practice that respondents reported is building complex infrastructure to post-process LLM outputs, sometimes referred to as “massaging [the output] after receiving.” For example, developers oftentimes had to “chase down ‘free radicals’ when writing error handling functions,” and felt it necessary to include “custom logic” for matching and filtering, along with “further verification.” Thus, setting constraints before LLM generation may be the key to reducing such “ad-hoc plumbing code” post-generation, simplifying “maintenance,” and enhancing the overall “developer experience.” As one respondent vividly described: “it’s a much nicer experience if it (formatting the output in bullets) ‘just works’ without having to implement additional infra...”
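As a concrete illustration of the “ad-hoc plumbing code” being described, here is a hedged sketch of what such post-processing often looks like in practice. The specific cleanup steps, field names, and error cases are assumptions for the example, not taken from the survey:

```python
import json
import re

def massage_output(raw: str) -> dict:
    """Post-process a free-form LLM reply into structured data.

    A sketch of the plumbing respondents describe: stripping wrappers,
    matching and filtering, and further verification of the result.
    """
    # Strip markdown code fences the model may have wrapped around JSON.
    raw = re.sub(r"^```(?:json)?\s*|\s*```$", "", raw.strip())
    # Custom matching/filtering: pull out the first {...} span, ignoring
    # conversational chatter such as "Sure! Here is the JSON you asked for:".
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    data = json.loads(match.group(0))
    # Further verification: required fields (hypothetical) must be present.
    for key in ("title", "tags"):
        if key not in data:
            raise ValueError(f"missing required field: {key}")
    return data
```

Every line of this function exists only because the model's output format is unpredictable; if the format were constrained at generation time, a plain `json.loads` would suffice and this maintenance burden would disappear.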
This paper is available on arxiv under CC BY-NC-SA 4.0 DEED license.
Authors:
(1) Michael Xieyang Liu, Google Research, Pittsburgh, PA, USA ([email protected]);
(2) Frederick Liu, Google Research, Seattle, Washington, USA ([email protected]);
(3) Alexander J. Fiannaca, Google Research, Seattle, Washington, USA ([email protected]);
(4) Terry Koo, Google, Indiana, USA ([email protected]);
(5) Lucas Dixon, Google Research, Paris, France ([email protected]);
(6) Michael Terry, Google Research, Cambridge, Massachusetts, USA ([email protected]);
(7) Carrie J. Cai, Google Research, Mountain View, California, USA ([email protected]).