Enhancing Data Quality: Proposed Workflow for Data Quality Control

Written by computational | Published 2024/08/29
Tech Story Tags: machine-learning | federated-fine-tuning | foundation-models | large-language-models | ai-model-training | data-quality-control | fine-tuning-llms | foundation-model-training

TLDR: In this study, researchers propose a data quality control pipeline for federated fine-tuning of foundation models.

Authors:

(1) Wanru Zhao, University of Cambridge and Shanghai AI Laboratory (equal contribution);

(2) Yaxin Du, Shanghai Jiao Tong University (equal contribution);

(3) Nicholas D. Lane, University of Cambridge and Flower Labs;

(4) Siheng Chen, Shanghai AI Laboratory and Shanghai Jiao Tong University;

(5) Yanfeng Wang, Shanghai AI Laboratory and Shanghai Jiao Tong University.

Table of Links

3 PROPOSED WORKFLOW FOR DATA QUALITY CONTROL

3.1 OVERVIEW

3.2 LOCAL DATA SCORING AND QUALITY CONTROL

3.3 GLOBAL STANDARD WITH ANCHOR DATA SCORING

On the server, we select only a small amount of data (10 samples in our paper) as anchor data and use the aforementioned scoring method to compute the average score of these 10 data points, which serves as the global threshold. This establishes a unified standard for dividing low- and high-quality data across heterogeneous clients, allowing further filtering of local data.
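To make the mechanics concrete, the sketch below is a minimal illustration of this anchor-based thresholding, not the authors' released code: the server averages the scores of its anchor samples to obtain a global threshold, and each client then keeps only local samples that meet it. The scoring function here is a toy stand-in for the local scoring method described in Section 3.2, and all names are assumptions for illustration.

```python
from typing import Callable, List, Sequence, TypeVar

Sample = TypeVar("Sample")


def compute_global_threshold(
    anchor_data: Sequence[Sample],
    score_fn: Callable[[Sample], float],
) -> float:
    """Average score of the server-held anchor samples (10 in the paper)
    defines the unified quality threshold shared by all clients."""
    return sum(score_fn(x) for x in anchor_data) / len(anchor_data)


def filter_local_data(
    client_data: Sequence[Sample],
    threshold: float,
    score_fn: Callable[[Sample], float],
) -> List[Sample]:
    """Each client keeps only samples whose score reaches the global threshold."""
    return [x for x in client_data if score_fn(x) >= threshold]


if __name__ == "__main__":
    # Toy scoring function used purely for illustration; the paper's actual
    # scoring method (Section 3.2) would be substituted here.
    toy_score = lambda text: len(text.split()) / 10.0

    anchor = ["a clean, well formed instruction example with a full answer"] * 10
    client = ["short", "another reasonably detailed local training sample kept by the client"]

    tau = compute_global_threshold(anchor, toy_score)
    kept = filter_local_data(client, tau, toy_score)
    print(f"threshold={tau:.2f}, kept {len(kept)} of {len(client)} local samples")
```

Because the threshold is derived once on the server and shared with every participant, heterogeneous clients apply the same quality bar without ever exchanging their raw local data.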

This paper is available on arXiv under a CC BY 4.0 DEED license.

