Developing Function Calling Models: Comparing Full Training and LoRA on Gemma-2B

Written by languagemodels | Published 2025/04/08


Table of Links

Abstract and 1. Introduction

2 Related works

3 Methodology and 3.1 Causal language model as a classification model

3.2 Functional token

3.3 Dataset collection

3.4 Model development and training

4 Experiments and 4.1 Android function calls

4.2 Extension to Vehicle, Yelp, and DoorDash function sets

4.3 Full and partial training datasets and 4.4 Full training and LoRA training

4.5 Parallel and nested function call and 4.6 Weighted loss function for special tokens

5 Discussion and future works and References

Appendix

A.1 Android function examples

A.2 Vehicle function examples

3.4 Model development and training

We employ the Google Gemma-2B model as the pretrained model in our framework. Our approach incorporates two distinct training methodologies: full model training and LoRA model training. For full model training, we use the AdamW optimizer with a learning rate of 5e-5, 10 warm-up steps, and a linear learning rate scheduler. The same optimizer and learning rate configuration is applied to LoRA training. We set the LoRA rank to 16 and apply LoRA to the following modules: q_proj, k_proj, v_proj, o_proj, up_proj, down_proj. The LoRA alpha parameter is set to 32. For both training methods (full model and LoRA), we train for 3 epochs.
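To make these hyperparameters concrete, here is a minimal sketch of how the setup might be expressed with the Hugging Face transformers and peft libraries. This is not the authors' actual training code: the dataset object, batch size, output path, and dtype are assumptions, while the learning rate, warm-up steps, scheduler, LoRA rank, alpha, target modules, and epoch count follow the paragraph above.

```python
# Sketch of the training configuration described above (assumptions noted inline).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments, Trainer
from peft import LoraConfig, get_peft_model

model_id = "google/gemma-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# LoRA settings from the paper: rank 16, alpha 32, applied to the listed projections.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # skip this step for full model training

# Optimizer and schedule from the paper: AdamW, lr 5e-5, 10 warm-up steps,
# linear decay, 3 epochs. Batch size and dtype are assumptions, not reported above.
training_args = TrainingArguments(
    output_dir="gemma-2b-function-calling",   # placeholder output path
    learning_rate=5e-5,
    warmup_steps=10,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    optim="adamw_torch",
    per_device_train_batch_size=4,            # assumption
    bf16=True,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,  # placeholder: tokenized function-calling dataset
)
trainer.train()
```

In this setup, switching between the two methodologies only requires including or omitting the get_peft_model wrapping step; the optimizer and scheduler configuration stays the same, matching the paper's description.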

This paper is available on arxiv under CC BY-NC-SA 4.0 DEED license.

Authors:

(1) Wei Chen, Stanford University, equal contribution, corresponding author, {weichen6}@stanford.edu;

(2) Zhiyuan Li, Stanford University, corresponding author, {zhiyuan8}@stanford.edu.
