ChatGPT (Generative Pre-trained Transformer) is a chatbot launched by OpenAI in November 2022. It is built on top of OpenAI's GPT-3.5 family of large language models and is fine-tuned with both supervised and reinforcement learning techniques.
ChatGPT was launched as a prototype on November 30, 2022, and quickly gained attention for its detailed responses and articulate answers across many domains of knowledge. However, its uneven factual accuracy was identified as a significant drawback.
In this article, we'll learn how to use the OpenAI API to build a ChatGPT application in Flutter.
To build this application, we'll need the following (a command to add the package dependencies is shown after the list):
- API token: We will need an API token from OpenAI. You can get your API token from the OpenAI account dashboard; if you don't have an account, you can create one.
- http: a Flutter package for handling HTTP requests.
- provider: an easy-to-use package that is essentially a wrapper around InheritedWidget, making it easier to use and manage. It provides a state management technique for sharing a piece of data across the app.
- animated_text_kit: a Flutter package that contains a collection of cool text animations.
- flutter_svg: an SVG rendering and widget library for Flutter, which allows painting and displaying Scalable Vector Graphics files.
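To add the package dependencies in one step, you can use flutter pub add (older Flutter versions only accept one package per invocation, in which case run the command once per package):

flutter pub add http provider animated_text_kit flutter_svg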
With all things set, let's start building.
Open your terminal and create your Flutter app using the Flutter CLI. Note that Dart package names must use lowercase letters and underscores, so we use openai_chat:
flutter create openai_chat
When the app has been created, open the folder in VS Code or whichever text editor you use.
Open the lib folder, open the main.dart file, and clear out the initial code that was generated with the app, because we are going to build our app from the ground up.
After creating a StatefulWidget, your main.dart file will look like this:
import 'package:flutter/material.dart';
void main() {
WidgetsFlutterBinding.ensureInitialized();
runApp(const MyApp());
}
class MyApp extends StatefulWidget {
const MyApp({super.key});
@override
State<MyApp> createState() => _MyAppState();
}
class _MyAppState extends State<MyApp> {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: "Open AI Chat",
home: SafeArea(
bottom: true,
top: false,
child: Scaffold(
backgroundColor: const Color(0xff343541),
appBar: AppBar(
backgroundColor: const Color(0xff343541),
leading: IconButton(
onPressed: () {},
icon: const Icon(
Icons.menu,
color: Color(0xffd1d5db),
),
),
elevation: 0,
title: const Text("New Chat"),
centerTitle: true,
actions: [
IconButton(
onPressed: () {},
icon: const Icon(
Icons.add,
color: Color(0xffd1d5db),
),
),
],
),
body: Stack(
children: const [],
),
),
),
);
}
}
Now that we have our app set up, we can start building the different widgets. We are aiming for four (4) different widgets:
- User Input Widget
- User message Widget
- AI Message Widget
- Loader Widget
Create a folder called widgets; this will contain all four widgets that we will work on next.
- User Input Widget
import 'package:flutter/material.dart';
class UserInput extends StatelessWidget {
final TextEditingController chatcontroller;
const UserInput({
Key? key,
required this.chatcontroller,
}) : super(key: key);
@override
Widget build(BuildContext context) {
return Align(
alignment: Alignment.bottomCenter,
child: Container(
padding: const EdgeInsets.only(
top: 10,
bottom: 10,
left: 5,
right: 5,
),
decoration: const BoxDecoration(
color: Color(0xff444654),
border: Border(
top: BorderSide(
color: Color(0xffd1d5db),
width: 0.5,
),
),
),
child: Row(
children: [
Expanded(
flex: 1,
child: Image.asset(
"images/avatar.png",
height: 40,
),
),
Expanded(
flex: 5,
child: TextFormField(
onFieldSubmitted: (e) {
},
controller: chatcontroller,
style: const TextStyle(
color: Colors.white,
),
decoration: const InputDecoration(
focusColor: Colors.white,
filled: true,
fillColor: Color(0xff343541),
suffixIcon: Icon(
Icons.send,
color: Color(0xffacacbe),
),
focusedBorder: OutlineInputBorder(
borderSide: BorderSide.none,
borderRadius: BorderRadius.all(
Radius.circular(5.0),
),
),
border: OutlineInputBorder(
borderRadius: BorderRadius.all(
Radius.circular(5.0),
),
),
),
),
),
],
),
),
);
}
}
The UserInput widget accepts one parameter, the chatcontroller. We also have the onFieldSubmitted callback, which will come into play when the user submits their message.
- User Message Widget
import 'package:flutter/material.dart';

class UserMessage extends StatelessWidget {
final String text;
const UserMessage({
Key? key,
required this.text,
}) : super(key: key);
@override
Widget build(BuildContext context) {
return Container(
padding: const EdgeInsets.all(8),
child: Row(
mainAxisAlignment: MainAxisAlignment.start,
crossAxisAlignment: CrossAxisAlignment.start,
children: [
Expanded(
flex: 1,
child: Padding(
padding: const EdgeInsets.all(8.0),
child: Image.asset(
"images/avatar.png",
height: 40,
width: 40,
fit: BoxFit.contain,
),
),
),
Expanded(
flex: 5,
child: Padding(
padding: const EdgeInsets.only(
left: 3,
top: 8,
),
child: Text(
text,
style: const TextStyle(
color: Color(0xffd1d5db),
fontSize: 16,
fontWeight: FontWeight.w700,
),
),
),
),
],
),
);
}
}
The user's message is passed as the text parameter to the UserMessage class, and the resulting widget is appended to the ListView.
- AI Message Widget
import 'package:animated_text_kit/animated_text_kit.dart';
import 'package:flutter/material.dart';
import 'package:flutter_svg/flutter_svg.dart';

class AiMessage extends StatelessWidget {
final String text;
const AiMessage({
Key? key,
required this.text,
}) : super(key: key);
@override
Widget build(BuildContext context) {
return Container(
color: const Color(0xff444654),
padding: const EdgeInsets.all(8),
child: Row(
crossAxisAlignment: CrossAxisAlignment.start,
children: [
Expanded(
flex: 1,
child: Padding(
padding: const EdgeInsets.all(8.0),
child: Container(
color: const Color(0xff0fa37f),
padding: const EdgeInsets.all(3),
child: SvgPicture.asset(
"images/ai-avatar.svg",
height: 30,
width: 30,
fit: BoxFit.contain,
),
),
),
),
Expanded(
flex: 5,
child: AnimatedTextKit(
animatedTexts: [
TypewriterAnimatedText(
text,
textStyle: const TextStyle(
color: Color(0xffd1d5db),
fontSize: 16,
fontWeight: FontWeight.w700,
),
),
],
totalRepeatCount: 1,
),
),
],
),
);
}
}
The AI's response text is passed as the text parameter to the AiMessage class, and the resulting widget is appended to the ListView.
Using the AnimatedTextKit package, we animate the text with a typewriter animation.
- Loader Widget
import 'package:flutter/material.dart';
import 'package:flutter_svg/flutter_svg.dart';

class Loading extends StatelessWidget {
final String text;
const Loading({
Key? key,
required this.text,
}) : super(key: key);
@override
Widget build(BuildContext context) {
return Container(
color: const Color(0xff444654),
padding: const EdgeInsets.all(8),
child: Row(
crossAxisAlignment: CrossAxisAlignment.start,
children: [
Expanded(
flex: 1,
child: Padding(
padding: const EdgeInsets.all(8.0),
child: Container(
color: const Color(0xff0fa37f),
padding: const EdgeInsets.all(3),
child: SvgPicture.asset(
"images/ai-avatar.svg",
height: 30,
width: 30,
fit: BoxFit.contain,
),
),
),
),
Expanded(
flex: 5,
child: Text(
text,
style: const TextStyle(
color: Color(0xffd1d5db),
fontSize: 16,
fontWeight: FontWeight.w700,
),
),
),
],
),
);
}
}
The Loading widget is displayed while we await a response from the API call; once the response arrives, we remove the loader from the list.
App Constants
const endpoint = "https://api.openai.com/v1/";
const aiToken = "sk-------------------------------------";
Create a file called api_constants.dart; this will contain our endpoint and API token. You can get your API token from the OpenAI API keys dashboard.
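Hardcoding the key like this is fine for a quick experiment, but avoid committing a real key to source control. One option, shown here only as a sketch, is to read the token from a compile-time define; the variable name OPENAI_API_KEY is just an example:

// Reads the token supplied at build time instead of hardcoding it.
const aiToken = String.fromEnvironment('OPENAI_API_KEY');

You would then supply the key when building or running, for example flutter run --dart-define=OPENAI_API_KEY=<your key>.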
OpenAI Repository
import 'dart:convert';

import 'package:http/http.dart' as http;

// Adjust this import to wherever you placed api_constants.dart.
import 'api_constants.dart';

class OpenAiRepository {
static var client = http.Client();
static Future<Map<String, dynamic>> sendMessage({required String prompt}) async {
try {
var headers = {
'Authorization': 'Bearer $aiToken',
'Content-Type': 'application/json'
};
var request = http.Request('POST', Uri.parse('${endpoint}completions'));
request.body = json.encode({
"model": "text-davinci-003",
"prompt": prompt,
"temperature": 0,
"max_tokens": 2000
});
request.headers.addAll(headers);
http.StreamedResponse response = await request.send();
if (response.statusCode == 200) {
final data = await response.stream.bytesToString();
return json.decode(data);
} else {
return {
"status": false,
"message": "Oops, there was an error",
};
}
} catch (_) {
return {
"status": false,
"message": "Oops, there was an error",
};
}
}
}
Now, let's communicate with the OpenAI API. Create a file called openai_repository.dart in the repository folder. In it, we have a class called OpenAiRepository with a static method called sendMessage that accepts a single parameter, prompt.
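To sanity-check the repository on its own, a quick call (inside any async function) could look like this; the prompt string is just an example:

// Prints either the completion text or the fallback error message.
final response = await OpenAiRepository.sendMessage(prompt: 'Say this is a test');
print(response['choices']?[0]?['text'] ?? response['message']);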
Authentication
The OpenAI API uses API keys for authentication. Retrieve the API key you'll use in your requests from the dashboard.
All API requests should include your API key in an Authorization HTTP header as follows:
Authorization: Bearer YOUR_API_KEY
Making a Request
{
"model": "text-davinci-003",
"prompt": prompt,
"temperature": 0,
"max_tokens": 2000
}
This request asks the Davinci model to complete the text starting with the prompt taken from the user input. The max_tokens parameter sets an upper bound on how many tokens the API will return. A higher temperature means the model will take more risks: try 0.9 for more creative applications, and 0 for ones with a well-defined answer.
This will return a Map<String, dynamic> response that looks like this:
{
"id": "cmpl-GERzeJQ4lvqPk8SkZu4XMIuR",
"object": "text_completion",
"created": 1586839808,
"model": "text-davinci:003",
"choices": [
{
"text": "\n\nThis is indeed a test",
"index": 0,
"logprobs": null,
"finish_reason": "length"
}
],
"usage": {
"prompt_tokens": 5,
"completion_tokens": 7,
"total_tokens": 12
}
}
ChatModel
import 'package:flutter/material.dart';

// Adjust these import paths to your own file layout; they are examples.
import 'repository/openai_repository.dart';
import 'widgets/ai_message.dart';
import 'widgets/loading.dart';
import 'widgets/user_message.dart';

class ChatModel extends ChangeNotifier {
List<Widget> messages = [];
List<Widget> get getMessages => messages;
Future<void> sendChat(String txt) async {
addUserMessage(txt);
Map<String, dynamic> response =
await OpenAiRepository.sendMessage(prompt: txt);
// Fall back to the error message when the API call did not succeed.
final String text = response['choices'] != null
? response['choices'][0]['text']
: (response['message'] ?? 'Oops, there was an error');
//remove the last item
messages.removeLast();
messages.add(AiMessage(text: text));
notifyListeners();
}
void addUserMessage(String txt) {
messages.add(UserMessage(text: txt));
messages.add(const Loading(text: "..."));
notifyListeners();
}
}
Since we are using provider for state management, we create a class called ChatModel which extends ChangeNotifier. We keep an empty List<Widget> that new message widgets are pushed into, and a getter, getMessages, to read them.
We create a method called sendChat which takes the user's input and calls addUserMessage, which pushes a widget containing the user's message, plus the Loading widget, onto the messages list.
Next, we send the prompt to the OpenAI repository, which sends back a response. We then store the response text in a String variable called text.
Finally, we remove the Loading widget from the list and add the AiMessage widget.
Almost done…
We have to go back to our UserInput widget and call sendChat when the user submits their message. Your code will now look like this:
TextFormField(
onFieldSubmitted: (e) {
context.read<ChatModel>().sendChat(e);
chatcontroller.clear();
},
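Note that context.read comes from the provider package, so user_input.dart also needs the provider import at the top of the file, plus an import for ChatModel (the path depends on where you placed that file):

import 'package:provider/provider.dart';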
Hit it!
All we have to do now is edit our main.dart file. Wrap the body in a MultiProvider, and your code will look something like this:
body: MultiProvider(
providers: [
ChangeNotifierProvider(create: (_) => ChatModel()),
],
child: Consumer<ChatModel>(builder: (context, model, child) {
List<Widget> messages = model.getMessages;
return Stack(
children: [
//chat
Container(
margin: const EdgeInsets.only(bottom: 80),
child: ListView(
children: [
const Divider(
color: Color(0xffd1d5db),
),
for (int i = 0; i < messages.length; i++) messages[i]
],
),
),
//input
UserInput(
chatcontroller: chatcontroller,
)
],
);
}),
),
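For this to compile, main.dart also needs imports for provider, ChatModel, and UserInput (the paths depend on your layout), and _MyAppState has to declare the chatcontroller that we pass to UserInput. A minimal sketch of that declaration, disposing the controller when the widget goes away:

// Inside _MyAppState:
final TextEditingController chatcontroller = TextEditingController();

@override
void dispose() {
  chatcontroller.dispose();
  super.dispose();
}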
App Running
All done! You can now start chatting with ChatGPT from your Flutter app. You can also clone the repo right here.
If you have any questions, drop a comment and I will respond to them as soon as possible.