Detailed Results of the Foundation Benchmark

Written by benchmarking | Published 2024/10/16
Tech Story Tags: large-audio-language-models | air-bench | generative-audio-benchmark | audio-comprehension-models | gpt-4-evaluation-framework | audio-processing-benchmarks | benchmarks-for-ai-models | ai-model-evaluation

TL;DR: Table 5 presents a detailed performance assessment of audio-language models on the foundation benchmark. Except for the binary-choice tasks Speaker Gender Recognition and Synthesized Voice Detection, every task requires selecting one of four options, so random guessing yields 25% accuracy (50% for the binary tasks). Scores close to these baselines indicate a lack of proficiency in the respective tasks.

Authors:

(1) Qian Yang, Zhejiang University, Equal contribution. This work was conducted during Qian Yang’s internship at Alibaba Group;

(2) Jin Xu, Alibaba Group, Equal contribution;

(3) Wenrui Liu, Zhejiang University;

(4) Yunfei Chu, Alibaba Group;

(5) Xiaohuan Zhou, Alibaba Group;

(6) Yichong Leng, Alibaba Group;

(7) Yuanjun Lv, Alibaba Group;

(8) Zhou Zhao, Zhejiang University, corresponding author ([email protected]);

(9) Chang Zhou, Alibaba Group, corresponding author ([email protected]);

(10) Jingren Zhou, Alibaba Group.

Table of Links

Abstract and 1. Introduction

2 Related Work

3 AIR-Bench and 3.1 Overview

3.2 Foundation Benchmark

3.3 Chat Benchmark

3.4 Evaluation Strategy

4 Experiments

4.1 Models

4.2 Main Results

4.3 Human Evaluation and 4.4 Ablation Study of Positional Bias

5 Conclusion and References

A Detailed Results of Foundation Benchmark

A Detailed Results of Foundation Benchmark

In Table 5, we report the performance of each model on every task in the foundation benchmark. With the exception of Speaker Gender Recognition and Synthesized Voice Detection, which are binary-choice tasks, every task requires selecting one of four options. Random guessing would therefore achieve roughly 50% accuracy on the two binary-choice datasets and 25% on the remaining datasets. Consequently, any performance metric that approximates these random baselines indicates no discernible proficiency on the corresponding task.
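The chance-level reasoning above can be made concrete with a small sketch. This is an illustrative helper, not part of the benchmark's released code; the task names mirror those in the paper, while the function names and the 5-point margin are assumptions chosen for the example.

```python
# Chance-level accuracy for an n-way multiple-choice task.
def chance_accuracy(num_options: int) -> float:
    return 1.0 / num_options

# The two binary-choice tasks named in the appendix; all other
# foundation-benchmark tasks use four options.
BINARY_TASKS = {"Speaker Gender Recognition", "Synthesized Voice Detection"}

def baseline_for(task: str) -> float:
    """Random-guess baseline: 50% for binary tasks, 25% otherwise."""
    return chance_accuracy(2) if task in BINARY_TASKS else chance_accuracy(4)

def near_chance(task: str, accuracy: float, margin: float = 0.05) -> bool:
    """Flag a score within `margin` of the baseline as showing no proficiency.

    The 0.05 margin is a hypothetical threshold for this sketch; the paper
    itself only notes that scores *approximating* the baseline are
    uninformative, without fixing a cutoff.
    """
    return abs(accuracy - baseline_for(task)) <= margin
```

For example, a 52% score on Synthesized Voice Detection sits at its 50% baseline and would be flagged, whereas 60% on a four-option task clears its 25% baseline comfortably.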

This paper is available on arxiv under CC BY 4.0 DEED license.
