
Original GPT4All Model: How We Collected Data and Then Curated It


Too Long; Didn't Read

To train the original GPT4All model, we collected roughly one million prompt-response pairs using the GPT-3.5-Turbo OpenAI API between March 20, 2023 and March 26, 2023.

Abstract and 1. Introduction

2 The Original GPT4All Model

2.1 Data Collection and Curation

2.2 Model Training, 2.3 Model Access and 2.4 Model Evaluation

3 From a Model to an Ecosystem

3.1 GPT4All-J: Repository Growth and the Implications of the LLaMA License

3.2 GPT4All-Snoozy: the Emergence of the GPT4All Ecosystem

3.3 The Current State of GPT4All

4 The Future of GPT4All

Limitations and References

2 The Original GPT4All Model

2.1 Data Collection and Curation

To train the original GPT4All model, we collected roughly one million prompt-response pairs using the GPT-3.5-Turbo OpenAI API between March 20, 2023 and March 26, 2023. In particular, we gathered GPT-3.5-Turbo responses to prompts from three publicly available datasets: the unified chip2 subset of LAION OIG, a random sub-sample of Stackoverflow Questions, and a sub-sample of Bigscience/P3 (Sanh et al., 2021). Following the approach in Stanford Alpaca (Taori et al., 2023), an open-source LLaMA variant released shortly before GPT4All, we focused substantial effort on dataset curation.
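As an illustration of what this collection step looks like in practice, the sketch below sends prompts to GPT-3.5-Turbo and writes each prompt-response pair to disk. It is a minimal example rather than the authors' pipeline: it assumes the legacy openai Python client (the pre-1.0 ChatCompletion interface current in March 2023), and the output file name and helper function are hypothetical.

import json
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; not the authors' credentials

def collect_pairs(prompts, out_path="pairs.jsonl"):
    """Send each prompt to GPT-3.5-Turbo and append the pair to a JSONL file."""
    with open(out_path, "a") as f:
        for prompt in prompts:
            resp = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=[{"role": "user", "content": prompt}],
            )
            answer = resp["choices"][0]["message"]["content"]
            f.write(json.dumps({"prompt": prompt, "response": answer}) + "\n")

In practice such a loop would also need rate limiting and retry handling to cover roughly one million API calls over a week-long collection window.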


The collected dataset was loaded into Atlas (AI, 2023), a visual interface for exploring and tagging massive unstructured datasets, for data curation. Using Atlas, we identified and removed subsets of the data where GPT-3.5-Turbo refused to respond, had malformed output, or produced a very short response. This resulted in the removal of the entire Bigscience/P3 subset of our data, as many P3 prompts induced responses that were simply one word. After curation, we were left with a set of 437,605 prompt-response pairs, which we visualize in Figure 1a.
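The paper does not publish its exact filtering criteria; the inspection was done interactively in Atlas. The sketch below only illustrates the kind of pass described above, dropping refusals, malformed (empty) output, and very short responses. The refusal phrases, word-count threshold, and file name are assumptions made for the example.

import json

# Assumed refusal markers and length threshold; not the paper's actual criteria.
REFUSAL_MARKERS = ("as an ai language model", "i cannot", "i'm sorry, but")
MIN_WORDS = 3

def keep(pair):
    """Return True if a prompt-response pair passes the illustrative filters."""
    text = pair["response"].strip().lower()
    if not text:                                   # malformed / empty output
        return False
    if any(m in text for m in REFUSAL_MARKERS):    # refusal to respond
        return False
    if len(text.split()) < MIN_WORDS:              # very short response
        return False
    return True

with open("pairs.jsonl") as f:
    pairs = [json.loads(line) for line in f]
curated = [p for p in pairs if keep(p)]

A heuristic like the word-count check above explains why a subset such as Bigscience/P3, whose prompts often elicited one-word answers, would be removed almost entirely.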


This paper is available on arXiv under a CC BY 4.0 DEED license.

Authors:

(1) Yuvanesh Anand, Nomic AI, [email protected];

(2) Zach Nussbaum, Nomic AI, [email protected];

(3) Adam Treat, Nomic AI, [email protected];

(4) Aaron Miller, Nomic AI, [email protected];

(5) Richard Guo, Nomic AI, [email protected];

(6) Ben Schmidt, Nomic AI, [email protected];

(7) GPT4All Community, Planet Earth;

(8) Brandon Duderstadt, Nomic AI, [email protected] with Shared Senior Authorship;

(9) Andriy Mulyar, Nomic AI, [email protected] with Shared Senior Authorship.