Countering Mainstream Bias via End-to-End Adaptive Local Learning: Related Work

Written by mediabias | Published 2024/08/21
Tech Story Tags: mainstream-bias | collaborative-filtering | adaptive-local-learning | discrepancy-modeling | unsynchronized-learning | rawlsian-max-min-fairness | mixture-of-experts | loss-driven-models


Table of Links

Abstract and 1 Introduction

2 Preliminaries

3 End-to-End Adaptive Local Learning

3.1 Loss-Driven Mixture-of-Experts

3.2 Synchronized Learning via Adaptive Weight

4 Debiasing Experiments and 4.1 Experimental Setup

4.2 Debiasing Performance

4.3 Ablation Study

4.4 Effect of the Adaptive Weight Module and 4.5 Hyper-parameter Study

5 Related Work

6 Conclusion, Acknowledgements, and References

5 Related Work

Fairness and bias issues in recommender systems have attracted increasing attention in recent years. Popularity bias [2,3,9,34,37,38], exposure bias [5,21,30,31], and item fairness [6,7,11,17,27,35] exemplify significant item-side biases. Beyond these works, which focus mainly on the item perspective, several studies have explored user-side biases, analyzing utility differences among user groups defined by demographic attributes such as age or gender [10,15,16,25,32,33,36]. For instance, Ekstrand et al. [15] empirically investigated multiple recommendation models and demonstrated utility differences across user demographic groups. Schedl et al. [32] examined music preference differences among user age groups, revealing variations in recommendation performance. To address these issues, Fu et al. [16] proposed leveraging rich information from knowledge graphs, Li et al. [25] developed a re-ranking method to narrow the utility gap between different user groups, and Chen et al. [10] applied data augmentation, generating "fake" data to achieve a balanced distribution.

However, demographic attributes may not comprehensively capture user interests and behaviors. Unlike the aforementioned works, which analyze bias across demographic groups, this paper addresses mainstream bias, a critical challenge in recommender systems. Previous works [4,18,24] recognize mainstream bias as the "grey-sheep" problem, where users with niche interests are hard to match with similar peers and therefore receive poor recommendations; however, these works propose neither robust bias measurements nor debiasing methods. The study most closely aligned with this paper is [39], which also addresses mainstream bias, offers better-grounded bias evaluations, and enhances utility for niche users through global and local methods. Existing local methods [12,13,22,23,39] and global methods [39] can mitigate the bias to some degree by improving utility for niche users. The recently proposed Local Fine Tuning (LFT) [39] and Local Collaborative Autoencoder (LOCA) [12] achieve state-of-the-art performance by employing multiple multinomial variational autoencoders (MultVAE) [26] as base models and generating customized local models that capture the distinct patterns of different types of users. Nonetheless, these methods share a key limitation: their reliance on heuristics necessitates meticulous hyper-parameter tuning by practitioners and ultimately limits their performance. This work instead targets the mainstream bias problem with an end-to-end adaptive local learning framework that automatically and adaptively learns customized local models for different users, overcoming the limitations of heuristic-based methods.
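To make this contrast concrete, below is a minimal PyTorch sketch of the general idea behind adaptive local learning: several "local" recommenders (experts) are combined per user by a learned gating network, so the assignment of users to local models is trained end-to-end rather than fixed by heuristics such as precomputed user clusters. Everything here (the `Expert` and `AdaptiveMoERecommender` classes, the network sizes, and the toy training loop) is illustrative rather than the authors' implementation; the simple autoencoder experts merely stand in for the MultVAE base models mentioned above.

```python
# Minimal sketch (assumptions, not the paper's implementation): a mixture of
# "local" recommendation experts whose per-user combination weights come from
# a learned gate, trained end-to-end instead of via heuristic user assignment.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Expert(nn.Module):
    """Small autoencoder-style scorer over the item catalog
    (a stand-in for a MultVAE base model)."""
    def __init__(self, n_items: int, hidden: int = 64):
        super().__init__()
        self.encode = nn.Linear(n_items, hidden)
        self.decode = nn.Linear(hidden, n_items)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decode(torch.tanh(self.encode(x)))  # item logits


class AdaptiveMoERecommender(nn.Module):
    """Gated mixture of local experts; the gate learns, per user,
    how much each local model should contribute."""
    def __init__(self, n_items: int, n_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList([Expert(n_items) for _ in range(n_experts)])
        self.gate = nn.Linear(n_items, n_experts)  # weights from the user's interaction vector

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = F.softmax(self.gate(x), dim=-1)               # (batch, n_experts)
        logits = torch.stack([e(x) for e in self.experts], 1)   # (batch, n_experts, n_items)
        return (weights.unsqueeze(-1) * logits).sum(dim=1)      # gated combination


# Toy training loop on random implicit-feedback data, using the multinomial
# log-likelihood objective that MultVAE-style models optimize.
if __name__ == "__main__":
    n_users, n_items = 256, 500
    interactions = (torch.rand(n_users, n_items) < 0.05).float()
    model = AdaptiveMoERecommender(n_items)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(10):
        log_probs = F.log_softmax(model(interactions), dim=-1)
        loss = -(interactions * log_probs).sum(dim=-1).mean()  # multinomial NLL
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"final loss: {loss.item():.4f}")
```

Note that the gate in this sketch scores users directly from their raw interaction vectors; the paper's loss-driven mixture-of-experts (Section 3.1) instead ties the gating to loss signals, but the end-to-end principle of replacing heuristic user-to-model assignment with learned gating is the same.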

Authors:

(1) Jinhao Pan [0009-0006-1574-6376], Texas A&M University, College Station, TX, USA;

(2) Ziwei Zhu [0000-0002-3990-4774], George Mason University, Fairfax, VA, USA;

(3) Jianling Wang [0000-0001-9916-0976], Texas A&M University, College Station, TX, USA;

(4) Allen Lin [0000-0003-0980-4323], Texas A&M University, College Station, TX, USA;

(5) James Caverlee [0000-0001-8350-8528], Texas A&M University, College Station, TX, USA.


This paper is available on arXiv under a CC BY 4.0 DEED license.


Written by mediabias | We publish deeply researched (and often vastly underread) academic papers about our collective omnipresent media bias.
Published by HackerNoon on 2024/08/21