From NLP to Data Synthesis: The Surprising Power of Masked Language Models

Written by languagemodels | Published 2025/04/08
Tech Story Tags: masked-language-modeling-(mlm) | synthetic-data-generation | conditional-density-estimation | tabular-data | machine-learning-utility-(mlu) | non-parametric-estimation | histogram-based-methods | data-imputation

TL;DR: This paper introduces MaCoDE, a method that reframes masked language modeling as conditional density estimation for generating synthetic tabular data. It achieves high machine learning utility, handles missing data, allows privacy control, and outperforms state-of-the-art methods on multiple real-world datasets.

Table of Links

  1. Abstract & Introduction

  2. Proposal

    1. Classification Target
    2. Masked Conditional Density Estimation (MaCoDE)
  3. Theoretical Results

    1. With Missing Data
  4. Experiments

  5. Results

    1. Related Works
    2. Conclusions and Limitations
    3. References
  6. A1 Proof of Theorem 1

    1. A2 Proof of Proposition 1
    2. A3 Dataset Descriptions
  7. A4 Missing Mechanism

    1. A5 Experimental Settings for Reproduction
  8. A6 Additional Experiments

  9. A7 Detailed Experimental Results

2. Proposal

2.1 Classification Target (Discretization)

2.2 Masked Conditional Density Estimation (MaCoDE)

Definition 2 (Mask distribution [13, 19]). The distribution of mask vector m is defined as:
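The displayed equation is not reproduced here; what follows is a minimal reconstruction of the standard masking scheme used in the cited works, assuming m ∈ {0,1}^p with m_j = 1 indicating that column j is masked, and d denoting the number of masked entries:

```latex
d \sim \mathrm{Unif}(\{1, \dots, p\}), \qquad
\mathcal{M} \mid d \sim \mathrm{Unif}\bigl(\{ S \subseteq \{1, \dots, p\} : |S| = d \}\bigr), \qquad
m_j = \mathbb{I}(j \in \mathcal{M}).
```

Under this scheme every masked-entry count from 1 to p is equally likely during training, which is what allows it to match the descending unmasking schedule used at generation time.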

Synthetic data generation. Tabular data lacks the inherent ordering between columns that natural language has [13]. Therefore, as outlined in Algorithm 2, MaCoDE randomly generates one column at a time, conditioned on masked subset sizes from p to 1, in descending order (p → p − 1 → · · · → 2 → 1). [13] demonstrated that, under the mask distribution of Definition 2, the distribution of the number of masked entries is matched during both training and generation.
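As a concrete illustration, here is a minimal Python sketch of this generation loop. The `model` object and its `conditional_probs` method are hypothetical stand-ins for a trained MaCoDE network that returns a categorical distribution over the discretized bins of one masked column:

```python
import numpy as np

MASK = -1  # sentinel for a masked (not-yet-generated) entry

def generate_row(model, p, rng=None):
    """Generate one synthetic row by unmasking one random column per step."""
    rng = rng or np.random.default_rng()
    row = np.full(p, MASK, dtype=int)      # start with all p columns masked
    for j in rng.permutation(p):           # masked count: p -> p-1 -> ... -> 1
        # Hypothetical API: conditional distribution over column j's bins,
        # given the currently unmasked entries of the row.
        probs = model.conditional_probs(row, column=j)
        row[j] = rng.choice(len(probs), p=probs)
    return row
```

Because every column starts masked and exactly one column is revealed per step, the number of masked entries at generation time sweeps p, p − 1, ..., 1, which is precisely the count distribution matched during training under Definition 2.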

Authors:

(1) Seunghwan An, Department of Statistical Data Science, University of Seoul, S. Korea ([email protected]);

(2) Gyeongdong Woo, Department of Statistical Data Science, University of Seoul, S. Korea ([email protected]);

(3) Jaesung Lim, Department of Statistical Data Science, University of Seoul, S. Korea ([email protected]);

(4) ChangHyun Kim, Department of Statistical Data Science, University of Seoul, S. Korea ([email protected]);

(5) Sungchul Hong, Department of Statistics, University of Seoul, S. Korea ([email protected]);

(6) Jong-June Jeon (corresponding author), Department of Statistics, University of Seoul, S. Korea ([email protected]).


This paper is available on arXiv under a CC BY-NC-SA 4.0 DEED license.

