Demonstrating Adaptability: Evaluating Function Calling on Vehicle, Yelp, and DoorDash APIs

Written by languagemodels | Published 2025/04/08
Tech Story Tags: on-device-language-models | ai-agents-for-edge-devices | function-calling-models | lm-latency-models | privacy-focused-ai-models | efficient-edge-computing | small-scale-ai-models | low-latency-ai-inference

TL;DR: In addition to Android function calls, we expanded our evaluation to include 20 vehicle function calls, showcasing the algorithm's adaptability to diverse use cases.

Table of Links

Abstract and 1. Introduction

2 Related works

3 Methodology and 3.1 Causal language model as a classification model

3.2 Functional token

3.3 Dataset collection

3.4 Model development and training

4 Experiments and 4.1 Android function calls

4.2 Extension to Vehicle, Yelp, and DoorDash function sets

4.3 Full and partial training datasets and 4.4 Full training and LoRA training

4.5 Parallel and nested function call and 4.6 Weighted loss function for special tokens

5 Discussion and future works and References

Appendix

A.1 Android function examples

A.2 Vehicle function examples

4.2 Extension to Vehicle, Yelp, and DoorDash function sets

In addition to Android function calls, we expanded our evaluation to include 20 vehicle function calls, showcasing the algorithm's adaptability to diverse use cases. For vehicle functions, we focused on essential control methods such as volume adjustment, air conditioning, and seat positioning. We benchmarked the vehicle functions in the same way as the Android functions and observed consistent performance patterns. Details on the vehicle functions are provided in the Appendix, enabling users to customize a new set of functional APIs for their specific needs. Furthermore, tests conducted with Yelp and DoorDash APIs confirmed similar performance, underscoring our method's versatility across various function sets.
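To make the customization path concrete, here is a minimal sketch of what vehicle-control function definitions could look like in the docstring-annotated style commonly used for function-calling datasets. The function names, parameters, and ranges below are illustrative assumptions, not the paper's exact API (see Appendix A.2 for the authors' examples); each definition would be paired with its own functional token (Section 3.2), mirroring the Android setup.

```python
# Hypothetical vehicle-control function definitions (illustrative only).
# Each function's signature and docstring would be used to generate
# training queries and mapped to a dedicated functional token.

def adjust_volume(level: int) -> None:
    """
    Set the in-cabin audio volume.

    Args:
        level: Target volume from 0 (mute) to 10 (maximum).
    """
    ...


def set_air_conditioning(temperature_celsius: float, fan_speed: int = 2) -> None:
    """
    Configure the climate-control system.

    Args:
        temperature_celsius: Desired cabin temperature in degrees Celsius.
        fan_speed: Fan intensity from 1 (low) to 5 (high).
    """
    ...


def adjust_seat_position(seat: str, recline_degrees: int) -> None:
    """
    Move a seat to a new recline angle.

    Args:
        seat: Which seat to adjust, e.g. "driver" or "passenger".
        recline_degrees: Recline angle in degrees relative to upright.
    """
    ...
```

Swapping in a different domain (e.g., Yelp or DoorDash endpoints) amounts to replacing these definitions and regenerating the training data over the new function set.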

This paper is available on arxiv under CC BY-NC-SA 4.0 DEED license.

Authors:

(1) Wei Chen, Stanford University (equal contribution, corresponding author), {weichen6}@stanford.edu;

(2) Zhiyuan Li, Stanford University (corresponding author), {zhiyuan8}@stanford.edu.


Written by languagemodels | Large Language Models (LLMs) ushered in a technological revolution. We break down how the most important models work.
Published by HackerNoon on 2025/04/08