Tailoring Textual Resources for Evaluation Tasks: Summary

Written by textmodels | Published 2024/06/01
Tech Story Tags: llm-natural-supervision | llm-self-supervision | llm-language-pretraining | llm-data-to-text-generation | llm-text-summarization | llm-story-generation | llm-story-generation-datasets | llm-summarization-datasets

TL;DR: In this study, researchers build evaluation tasks from naturally-occurring textual resources.

Author:

(1) Mingda Chen.

6.4 Summary

In this chapter, we showed that naturally-occurring textual resources can be tailored to build datasets for long-form data-to-text generation, long-form text summarization, and story generation with constraints. For each dataset, we conducted experiments to characterize its challenges. We also proposed new automatic and human-evaluation metrics, along with models for these tasks, to promote research in these directions.
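To make the general recipe concrete, the following is a minimal Python sketch, not taken from the paper, of how naturally-occurring resources might be paired into input/output examples for the three task families above. The `Example` dataclass, the helper functions, and the sample records are all hypothetical illustrations of the idea of tailoring existing text into task datasets.

```python
# Illustrative sketch only: pairing naturally-occurring resources into
# input/output examples for the three task families discussed above.
# All names and sample records below are hypothetical placeholders.
from dataclasses import dataclass


@dataclass
class Example:
    task: str
    source: str   # model input
    target: str   # reference output


def data_to_text_example(infobox: dict, article_section: str) -> Example:
    # Linearize structured data (e.g., an infobox) as the input; the
    # co-occurring free text serves as the long-form reference output.
    linearized = " | ".join(f"{k}: {v}" for k, v in infobox.items())
    return Example("data-to-text", linearized, article_section)


def summarization_example(document: str, author_summary: str) -> Example:
    # A naturally-occurring, author-written summary becomes the
    # reference summary for the full document.
    return Example("summarization", document, author_summary)


def constrained_story_example(prompt: str, story: str) -> Example:
    # A short prompt or outline acts as the constraint on generating
    # the longer story that accompanies it.
    return Example("story-generation", prompt, story)


if __name__ == "__main__":
    ex = data_to_text_example(
        {"name": "Example Bridge", "opened": "1932", "length": "1,149 m"},
        "Example Bridge opened in 1932 and spans 1,149 metres...",
    )
    print(ex.task, "->", ex.source)
```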

This paper is available on arXiv under a CC 4.0 license.

