Table of Links
3 Methodology and 3.1 Causal language model as a classification model
3.4 Model development and training
4 Experiments and 4.1 Android function calls
4.2 Extension to Vehicle, Yelp, and DoorDash function sets
4.3 Full and partial training datasets and 4.4 Full training and LoRA training
4.5 Parallel and nested function call and 4.6 Weighted loss function for special tokens
5 Discussion and future works and References
Appendix
3.4 Model development and training
We employ the Google Gemma-2B model as the pretrained base model in our framework and incorporate two distinct training methodologies: full model training and LoRA training. For full model training, we use the AdamW optimizer with a learning rate of 5e-5, 10 warm-up steps, and a linear learning rate scheduler. The same optimizer and learning rate configuration are applied to LoRA training. We set the LoRA rank to 16 and apply LoRA to the following modules: q_proj, k_proj, v_proj, o_proj, up_proj, down_proj, with the LoRA alpha parameter set to 32. For both training methods, full model and LoRA, we train for 3 epochs.
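For reference, the configuration above maps onto the Hugging Face transformers and peft APIs roughly as follows. This is a minimal sketch under those assumptions: the train_dataset variable and the output_dir path are illustrative placeholders, not part of the original pipeline.

```python
# Sketch of the training setup described above, assuming the Hugging Face
# transformers and peft libraries.
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model

model_name = "google/gemma-2b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA configuration: rank 16, alpha 32, applied to the listed projection modules.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # omit this line for full model training

# AdamW optimizer, learning rate 5e-5, 10 warm-up steps, linear scheduler, 3 epochs.
training_args = TrainingArguments(
    output_dir="./gemma-2b-function-calling",  # placeholder output path
    optim="adamw_torch",
    learning_rate=5e-5,
    warmup_steps=10,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)

# Placeholder: a tokenized dataset of function-calling prompt/response pairs.
train_dataset = ...

trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()
```

The same TrainingArguments serve both regimes; only the get_peft_model wrapping distinguishes LoRA training from full model training.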
This paper is available on arXiv under a CC BY-NC-SA 4.0 DEED license.
Authors:
(1) Wei Chen, Stanford University, equal contribution, corresponding author ({weichen6}@stanford.edu);
(2) Zhiyuan Li, Stanford University, corresponding author ({zhiyuan8}@stanford.edu).