Disentangling Latent Representations for Interpretability and Controllability

Written by textmodels | Published 2024/06/01
Tech Story Tags: llm-natural-supervision | llm-self-supervision | llm-language-pretraining | disentangling-semantics | sentence-representations | syntactic-exemplar | latent-representations | latent-variable-model

TL;DR: In this study, researchers disentangle latent representations using naturally-occurring structures of paired data.

Author:

(1) Mingda Chen.

Table of Links

CHAPTER 5 - DISENTANGLING LATENT REPRESENTATIONS FOR INTERPRETABILITY AND CONTROLLABILITY

In this chapter, we describe our contributions to disentangling latent representations using naturally-occurring structures of paired data. In Section 5.1, we present a multi-task, latent-variable model that disentangles semantics and syntax in sentence representations. The model leverages the fact that the semantics of a paraphrase pair are shared while the syntax varies. In Section 5.2, we extend this framework to controlling the syntax of generated text. In this controlled generation setting, we propose using a sentential exemplar to control the syntax.
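The core idea above can be illustrated with a toy sketch. The snippet below is a minimal, hypothetical NumPy stand-in for the described model: linear maps play the role of the learned semantic and syntactic encoders, and the two loss terms mirror the multi-task intuition — reconstruct one paraphrase from the *other* paraphrase's semantic latent plus its own syntactic latent, and push the semantic latents of a pair closer together than to a random negative. All names and dimensions here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; the real model uses learned neural encoders)
d_in, d_sem, d_syn = 8, 4, 4

# Linear "encoders" standing in for the semantic and syntactic encoders
W_sem = rng.normal(size=(d_sem, d_in))
W_syn = rng.normal(size=(d_syn, d_in))
# Linear "decoder" rebuilds an input from [semantic; syntactic] latents
W_dec = rng.normal(size=(d_in, d_sem + d_syn))

def encode(x):
    """Split a sentence vector into a semantic and a syntactic latent."""
    return W_sem @ x, W_syn @ x

def decode(z_sem, z_syn):
    return W_dec @ np.concatenate([z_sem, z_syn])

# A paraphrase pair: shared meaning, different surface form (toy vectors)
x1 = rng.normal(size=d_in)
x2 = rng.normal(size=d_in)

sem1, syn1 = encode(x1)
sem2, syn2 = encode(x2)

# Term 1: paraphrase reconstruction. Rebuilding x1 from x2's semantic
# latent and x1's syntactic latent pushes shared meaning into z_sem.
recon = decode(sem2, syn1)
recon_loss = float(np.mean((recon - x1) ** 2))

# Term 2: discriminative loss. Semantic latents of a paraphrase pair
# should be closer to each other than to a random negative sentence.
x_neg = rng.normal(size=d_in)
sem_neg, _ = encode(x_neg)
margin_loss = max(0.0, 1.0 + np.linalg.norm(sem1 - sem2)
                       - np.linalg.norm(sem1 - sem_neg))

total_loss = recon_loss + margin_loss
print(total_loss)
```

At generation time, the same factorization enables the exemplar-based control of Section 5.2: decode with the semantic latent of the source sentence and the syntactic latent of the exemplar.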

The material in this chapter is adapted from Chen et al. (2019d) and Chen et al. (2019c).

This paper is available on arXiv under a CC 4.0 license.
