Orca 2: Enhancing Reasoning in Smaller Language Models - Example from Benchmarks and Output

Written by textmodels | Published 2024/05/30
Tech Story Tags: language-models | orca-2 | reasoning-techniques | machine-learning | small-models | imitation-learning | ai-benchmarks | model-training

TLDR: "Teaching Orca 2 to be a Cautious Reasoner" is based on the work of Arindam Mitra, Luciano Del Corro, Shweti Mahajan, Andres Codas, Guoqing Zheng, Corby Rosset, Hamed Khanpour, and Ahmed Awadallah.

Authors:

(1) Arindam Mitra;

(2) Luciano Del Corro, work done while at Microsoft;

(3) Shweti Mahajan, work done while at Microsoft;

(4) Andres Codas, equal contribution;

(5) Clarisse Simoes, equal contribution;

(6) Sahaj Agarwal;

(7) Xuxi Chen, work done while at Microsoft;

(8) Anastasia Razdaibiedina, work done while at Microsoft;

(9) Erik Jones, work done while at Microsoft;

(10) Kriti Aggarwal, work done while at Microsoft;

(11) Hamid Palangi;

(12) Guoqing Zheng;

(13) Corby Rosset;

(14) Hamed Khanpour;

(15) Ahmed Awadallah.

Table of Links

Abstract and Introduction

Preliminaries

Teaching Orca 2 to be a Cautious Reasoner

Technical Details

Experimental Setup

Evaluation Results

Limitations

Conclusions and References

A. AGIEval Subtask Metrics

B. BigBench-Hard Subtask Metrics

C. Evaluation of Grounding in Abstractive Summarization

D. Evaluation of Safety

E. Prompts used in Evaluation

F. Illustrative Example from Evaluation Benchmarks and Corresponding Model Output

F Illustrative Example from Evaluation Benchmarks and Corresponding Model Output

This paper is available on arXiv under a CC 4.0 license.


Published by HackerNoon on 2024/05/30