Estimate Emotion Probability Vectors Using LLMs: Conclusions

Written by textmodels | Published 2024/05/10
Tech Story Tags: llms | emotion-analysis | emotion-eliciting-tail-prompt | pca-analysis | synthetic-consciousness | emotion-vector | emotion-dictionary | emotion-probability-vector

TLDR: This paper shows how Large Language Models (LLMs) [5, 2] can be used to estimate a summary of the emotional state associated with a piece of text.

This paper is available on arxiv under CC 4.0 license.

Authors:

(1) D. Sinclair, Imense Ltd (email: [email protected]);

(2) W. T. Pye, Warwick University (email: [email protected]).

Table of Links

5. Conclusions

LLMs are by their nature designed to return text strings in response to a text prompt. This is not always the most useful format in which to return information. Internally, an LLM maintains probability distributions over tokens. This paper presents an example of how part of an emotion-based synthetic consciousness might be built by deriving a vector of emotion descriptor probabilities over a dictionary of emotional terms. This emotion probability vector has a range of applications, including fine-grained review analysis, predicting responses to marketing messages, and offence detection. The emotion probability vector might also be a step on the road to synthetic consciousness, and it might help make robots more empathetic by allowing them to predict how something they are about to say will make the recipient feel.
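The core idea can be sketched in a few lines: given the log-probabilities an LLM assigns to each word in an emotion dictionary when completing an emotion-eliciting tail prompt, normalize them into a probability vector. The dictionary, the prompt, and the numeric scores below are illustrative assumptions, not values from the paper.

```python
import math

def emotion_probability_vector(word_logprobs):
    """Normalize per-word log-probabilities into an emotion probability vector
    over the emotion dictionary, via a numerically stable softmax."""
    m = max(word_logprobs.values())  # subtract the max to avoid overflow
    exps = {w: math.exp(lp - m) for w, lp in word_logprobs.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

# Hypothetical log-probabilities an LLM might assign to each emotion word
# when completing a tail prompt such as "Reading this review made me feel ...".
scores = {
    "delighted": -1.2,
    "satisfied": -1.8,
    "indifferent": -3.0,
    "annoyed": -4.5,
    "furious": -6.0,
}
vec = emotion_probability_vector(scores)
```

The resulting vector sums to one and can be compared across texts, e.g. for the fine-grained review analysis mentioned above, without ever asking the model to emit a free-text answer.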

If reasonable responses are desired from an LLM, it might be good policy not to train it on the mad shouting that pervades anti-social media; analogously, it might be a good idea not to train young minds on it either.


Written by textmodels | We publish the best academic papers on rule-based techniques, LLMs, & the generation of text that resembles human text.
Published by HackerNoon on 2024/05/10