Welcome back to this series where we are learning how to integrate AI products into web applications.
This post is part of a series: Intro & Setup, Your First AI Prompt, Streaming Responses, How Does AI Work, Prompt Engineering, AI-Generated Images, Security & Reliability, Deploying.
In this post, we'll learn how to integrate OpenAI's API responses into our Qwik app using fetch. We'll want to make sure we're not leaking API keys by executing these HTTP requests from a backend.
By the end of this post, we will have a rudimentary, but working, AI application.
https://www.youtube.com/watch?v=gUgRD0sRoCU
Generate OpenAI API Key
Before we start building anything, you'll need to go to your OpenAI account dashboard and generate an API key. Make sure to keep a copy of it somewhere because you will only be able to see it once.

With your API key, you'll be able to make authenticated HTTP requests to OpenAI's API. If you would like to familiarize yourself with the API endpoints, expected payloads, and return values, check out the OpenAI API reference documentation.

You may notice the official openai package for Node.js. We will not be using it, as it doesn't quite support some things we'll want to do that fetch can.
Make Your First HTTP Request
The application we're going to build will make an AI-generated text completion based on the user input. For that, we'll want to work with OpenAI's chat completions endpoint.
We need to make a POST request to https://api.openai.com/v1/chat/completions with the 'Content-Type' header set to 'application/json', the 'Authorization' header set to 'Bearer OPENAI_API_KEY' (you'll need to replace OPENAI_API_KEY with your actual API key), and the body set to a JSON string containing the GPT model to use (we'll use gpt-3.5-turbo) and an array of messages:
fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer OPENAI_API_KEY'
  },
  body: JSON.stringify({
    'model': 'gpt-3.5-turbo',
    'messages': [
      {
        'role': 'user',
        'content': 'Tell me a funny joke'
      }
    ]
  })
})
You can run this right from your browser console, and see the request in the Network tab of your dev tools.
The response should be a JSON object with a bunch of properties, but the one we're most interested in is "choices". It will be an array of text completion objects. The first one should be an object with a "message" object that has a "content" property containing the chat completion.
{
  "id": "chatcmpl-7q63Hd9pCPxY3H4pW67f1BPSmJs2u",
  "object": "chat.completion",
  "created": 1692650675,
  "model": "gpt-3.5-turbo-0613",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Why don't scientists trust atoms?\n\nBecause they make up everything!"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 13,
    "total_tokens": 25
  }
}
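If you want to grab just the joke text from the console, a tiny helper can dig it out of that response shape. Note that getCompletionText is a name I made up for illustration, not part of any OpenAI SDK; it just assumes the structure shown above:

```javascript
// Hypothetical helper (not part of any SDK): pull the completion text
// out of a chat completion response with the shape shown above.
const getCompletionText = (responseBody) => {
  return responseBody?.choices?.[0]?.message?.content ?? ''
}

// Try it against a trimmed-down copy of the example response:
const sample = {
  choices: [
    {
      index: 0,
      message: {
        role: 'assistant',
        content: "Why don't scientists trust atoms?\n\nBecause they make up everything!"
      },
      finish_reason: 'stop'
    }
  ]
}

console.log(getCompletionText(sample)) // prints the joke text
```

The optional chaining means the helper quietly returns an empty string if the response ever comes back without choices, which is handy while experimenting.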
Congrats! Now, you can request a mediocre joke whenever you want.
Build the Form
The fetch request above is fine, but it's not quite an application. What we want is something a user can interact with to generate an HTTP request like the one above.

For that, we'll want to start with a <form> containing a <textarea>. Below is the minimum markup we need, and if you want to learn more, consider reading these articles:
"How to Build HTML Forms Right: Semantics", "How to Build HTML Forms Right: Accessibility", "How to Build Great HTML Form Controls"
<form>
  <label for="prompt">Prompt</label>
  <textarea id="prompt" name="prompt"></textarea>
  <button>Tell me</button>
</form>
We can copy and paste this form right inside our Qwik component's JSX template. If you've worked with JSX in the past, you may be used to replacing the for attribute on the <label> with htmlFor, but Qwik's compiler actually doesn't require us to do that, so it's fine as is.
Next, we'll want to replace the default form submission behavior. By default, when an HTML form is submitted, the browser creates an HTTP request by loading the URL provided in the form's action attribute. If none is provided, it will use the current URL. We want to avoid this page load and use JavaScript instead.
If you've done this before, you may be familiar with the preventDefault method on the Event interface. There's a challenge here due to the way Qwik lazy loads event handlers. This asynchronous nature makes Qwik applications much faster to load, but it introduces the challenge of dealing with event handlers asynchronously. It makes it impossible to prevent the default behavior the same way as synchronous event handlers that are downloaded and parsed before the user interaction.
Fortunately, Qwik provides a way to prevent the default behavior by adding preventdefault:{eventName} to the HTML tag. A very basic form example may look something like this:
import { component$ } from '@builder.io/qwik';

export default component$(() => {
  return (
    <form
      preventdefault:submit
      onSubmit$={(event) => {
        console.log(event)
      }}
    >
      {/* form contents */}
    </form>
  )
})
Did you notice that little $ at the end of the onSubmit$ handler there? Keep an eye out for those because they are usually a hint to the developer that Qwik's compiler is going to do something funny and transform the code. In this case, it's due to that lazy-loading event handling system I mentioned above. If you plan on working with Qwik more, it's worth reading up on how the $ works.
Incorporate the Fetch Request
Now, we have the tools in place to replace the default form submission with the fetch request we created above.
What we want to do next is pull the data from the <textarea> into the body of the fetch request. We can do so with FormData, which expects a form element as an argument and provides an API to access a form control's value through the control's name attribute.

We can access the form element from the event's target property, use it to create a new FormData object, and use that to get the <textarea> value by referencing its name, "prompt".
Plug that into the body of the fetch request we wrote above, and you might get something that looks like this:
import { component$ } from '@builder.io/qwik';

export default component$(() => {
  return (
    <form
      preventdefault:submit
      onSubmit$={(event) => {
        const form = event.target
        const formData = new FormData(form)
        const prompt = formData.get('prompt')
        const body = {
          'model': 'gpt-3.5-turbo',
          'messages': [{ 'role': 'user', 'content': prompt }]
        }

        fetch('https://api.openai.com/v1/chat/completions', {
          method: 'POST',
          headers: {
            'Content-Type': 'application/json',
            'Authorization': 'Bearer OPENAI_API_KEY'
          },
          body: JSON.stringify(body)
        })
      }}
    >
      {/* form contents */}
    </form>
  )
})
In theory, you should now have a form on your page that, when submitted, sends the value from the textarea to the OpenAI API.
Protect Your API Keys
Although our HTTP request is working, there's a glaring issue. Because it's being constructed on the client side, anyone can open the browser dev tools and inspect the properties of the request. This includes the Authorization header containing our API keys.
I've blocked out my API token here with a red bar.
This would allow someone to steal our API tokens and make requests on our behalf, which could lead to abuse or higher charges on our account.
Not good!!!
The best way to prevent this is to move the API call to a backend server that we control and that works as a proxy. The frontend can make an unauthenticated request to the backend, and the backend makes the authenticated request to OpenAI and returns the response to the frontend. Because users can't inspect backend processes, they won't be able to see the Authorization header.
So, how do we move the fetch request to the backend?
I'm so glad you asked!
We've been mostly focusing on building the frontend with Qwik, the framework, but we also have access to Qwik City, which provides the server-side features of our app, such as routing, middleware, and actions.
Of the various options Qwik City offers for running backend logic, my favorite is routeAction$. It allows us to create a backend function that can be triggered from the client over HTTP (essentially an RPC endpoint).
The logic would follow:

- Use routeAction$() to create an action.
- Provide the backend logic as the parameter.
- Programmatically execute the action's submit() method.
A simplified example could be:
import { component$ } from '@builder.io/qwik';
import { routeAction$ } from '@builder.io/qwik-city';

export const useAction = routeAction$((params) => {
  console.log('action on the server', params)
  return { o: 'k' }
})

export default component$(() => {
  const action = useAction()
  return (
    <>
      <form
        preventdefault:submit
        onSubmit$={(event) => {
          action.submit('data')
        }}
      >
        {/* form contents */}
      </form>
      {JSON.stringify(action)}
    </>
  )
})
I included a JSON.stringify(action) at the end of the template because I think you should see what the returned ActionStore looks like.
It contains extra information like whether the action is running, what the submission values were, what the response status is, what the returned value is, and more.
This is all very useful data that we get out of the box just by using an action, and it allows us to create more robust applications with less work.
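As a rough illustration only (this sketch is made up, not copied from real output, and the exact fields come from Qwik and may differ between versions), the stringified store might look something like:

```
{
  "isRunning": false,
  "status": 200,
  "formData": { "prompt": "Tell me a funny joke" },
  "value": { "o": "k" }
}
```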
Enhance the Experience
Qwik City's actions are cool, but they get even better when combined with Qwik's <Form> component:

- Under the hood, the component uses a native HTML <form> element, so it will work without JavaScript.
- When JS is enabled, the component will intercept the form submission and trigger the action in SPA mode, allowing for a full SPA experience.
By replacing the HTML <form> element with Qwik's <Form> component, we no longer have to set up preventdefault:submit, onSubmit$, or call action.submit(). We can just pass the action to the Form's action prop, and it'll take care of the work for us.
Additionally, it will work if JavaScript is not available for some reason (we could have done this with the HTML version as well, but it would have been more work).
import { component$ } from '@builder.io/qwik';
import { routeAction$, Form } from '@builder.io/qwik-city';

export const useAction = routeAction$(() => {
  console.log('action on the server')
  return { o: 'k' }
});

export default component$(() => {
  const action = useAction()
  return (
    <Form action={action}>
      {/* form contents */}
    </Form>
  )
})
So, that's an improvement for the developer experience. Let's also improve the user experience.
Within the ActionStore, we have access to the isRunning data, which keeps track of whether the request is pending or not. It's handy information we can use to let the user know when the request is in flight.

We can do so by modifying the text of the submit button to say "Tell me" when it's idle, then "One sec..." while it's loading. I also like to assign the aria-disabled attribute to match the isRunning state.
This will hint to assistive technology that it's not ready to be clicked (though it technically still can be). It can also be targeted with CSS to provide visual styles suggesting it's not quite ready to be clicked again.
<button type="submit" aria-disabled={action.isRunning}>
  {action.isRunning ? 'One sec...' : 'Tell me'}
</button>
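Since aria-disabled is an ordinary attribute, targeting it from CSS is straightforward. A minimal sketch (the selector matches the button above; the exact styles are up to you):

```css
/* Visually hint that the button is busy while the action runs */
button[aria-disabled="true"] {
  opacity: 0.6;
  cursor: wait;
}
```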
Show the Results
Ok, we've done way too much work without actually seeing the results on the page. It's time to change that. Let's bring the fetch request we prototyped earlier in the browser into our application.
We can copy/paste the fetch code right into the body of our action handler, but to access the user's input data, we'll need access to the submitted form data. Fortunately, any data passed to the action.submit() method will be available to the action handler as its first parameter. It will be a serialized object where the keys correspond to the form control names.
Note that I'll be using the await keyword in the body of the handler, which means I also have to tag the handler as an async function.
import { component$ } from '@builder.io/qwik';
import { routeAction$, Form } from '@builder.io/qwik-city';

export const useAction = routeAction$(async (formData) => {
  const prompt = formData.prompt // From <textarea name="prompt">

  const body = {
    'model': 'gpt-3.5-turbo',
    'messages': [{ 'role': 'user', 'content': prompt }]
  }

  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer OPENAI_API_KEY'
    },
    body: JSON.stringify(body)
  })

  const data = await response.json()

  return data.choices[0].message.content
})
At the end of the action handler, we also want to return some data for the frontend. The OpenAI response comes back as JSON, but I think we might as well just return the text. If you remember from the response object we saw above, that data is located at responseBody.choices[0].message.content.
If we set things up correctly, we should be able to access the action handler's response in the ActionStore's value property. This means we can conditionally render it somewhere in the template like so:
{action.value && (
  <p>{action.value}</p>
)}
Use Environment Variables
Alright, we've moved the OpenAI request to the backend, protected our API keys from prying eyes, we're getting a (mediocre) joke response, and we're displaying it on the frontend. The app is working, but there's still one more security issue to deal with.

It's generally a bad idea to hardcode API keys into your source code, for a number of reasons:
- It means you can't share the repo publicly without exposing your keys.
- You may run up API usage during development, testing, and staging.
- Changing API keys requires code changes and re-deploys.
- You'll need to regenerate API keys anytime someone leaves the org.
A better system is to use environment variables.

For example, you can make an environment variable called OPENAI_API_KEY with the value of your OpenAI key for only the production environment. This way, only developers with direct access to that environment can access it.

This greatly reduces the likelihood of the API keys leaking, it makes it easier to share your code openly, and because you limit access to the keys to the fewest people necessary, you don't need to replace keys as often when someone leaves the company.
In Node.js, it's common to set environment variables from the command line (ENV_VAR=example npm start) or with the popular dotenv package. Then, in your server-side code, you can access environment variables using process.env.ENV_VAR.
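As a quick, runnable illustration of that Node-style pattern (the variable name and value here are made up, and the assignment is only simulated; normally the shell or dotenv sets it before your code runs):

```javascript
// Simulate what `ENV_VAR=example npm start` would set up:
process.env.ENV_VAR = 'example'

// Anywhere in Node server-side code, read it back:
const value = process.env.ENV_VAR
console.log(value) // → 'example'
```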
Things work slightly differently with Qwik.
Qwik can target different JavaScript runtimes (not just Node), and accessing environment variables via process.env is a Node-specific concept. To make things more runtime-agnostic, Qwik provides access to environment variables through a RequestEvent object, which is available as the second parameter to the route action handler function.
import { routeAction$ } from '@builder.io/qwik-city';

export const useAction = routeAction$((param, requestEvent) => {
  const envVariableValue = requestEvent.env.get('ENV_VARIABLE_NAME')
  console.log(envVariableValue)
  return {}
})
So, that's how we access environment variables, but how do we set them?
Unfortunately, setting environment variables for production will differ depending on the platform. For a standard server, you might set them when starting the process (ENV_VAR=example npm start).
In development, we can alternatively create a local.env file containing our environment variables, and they will be automatically assigned for us. This is convenient since we spend a lot more time starting the development environment, and it means we can provide the appropriate API keys only to the people who need them.
So, after you create a local.env file, you can assign the OPENAI_API_KEY variable to your API key:
OPENAI_API_KEY="your-api-key"
(You may need to restart your dev server for the change to take effect.)
Then we can access the environment variable through the RequestEvent parameter. With that, we can replace the hard-coded value in our fetch request's Authorization header with the variable, using a template literal:
export const usePromptAction = routeAction$(async (formData, requestEvent) => {
  const OPENAI_API_KEY = requestEvent.env.get('OPENAI_API_KEY')
  const prompt = formData.prompt

  const body = {
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: prompt }]
  }

  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${OPENAI_API_KEY}`,
    },
    body: JSON.stringify(body)
  })

  const data = await response.json()

  return data.choices[0].message.content
})
For more details on environment variables in Qwik, check out the official documentation.
Recap
- When a user submits the form, the default behavior is intercepted by Qwik's optimizer, which lazy loads the event handler.
- The event handler uses JavaScript to create an HTTP request containing the form data, which is sent to the server to be handled by the route's action.
- The route's action handler has access to the form data in its first parameter and can access environment variables from its second parameter (a RequestEvent object).
- Inside the route's action handler, we can construct and send the HTTP request to OpenAI using the data we got from the form and the API keys we pulled from the environment variables.
- With the OpenAI response, we can prepare the data to send back to the client.
- The client receives the response from the action and can update the page accordingly.
Here's what my final component looks like, including some Tailwind classes and a slightly different template.
import { component$ } from "@builder.io/qwik";
import { routeAction$, Form } from "@builder.io/qwik-city";

export const usePromptAction = routeAction$(async (formData, requestEvent) => {
  const OPENAI_API_KEY = requestEvent.env.get('OPENAI_API_KEY')
  const prompt = formData.prompt

  const body = {
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: prompt }]
  }

  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${OPENAI_API_KEY}`,
    },
    body: JSON.stringify(body)
  })

  const data = await response.json()

  return data.choices[0].message.content
})

export default component$(() => {
  const action = usePromptAction()

  return (
    <main class="max-w-4xl mx-auto p-4">
      <h1 class="text-4xl">Hi 👋</h1>

      <Form action={action} class="grid gap-4">
        <div>
          <label for="prompt">Prompt</label>
          <textarea name="prompt" id="prompt">
            Tell me a joke
          </textarea>
        </div>

        <div>
          <button type="submit" aria-disabled={action.isRunning}>
            {action.isRunning ? 'One sec...' : 'Tell me'}
          </button>
        </div>
      </Form>

      {action.value && (
        <article class="mt-4 border border-2 rounded-lg p-4 bg-[canvas]">
          <p>{action.value}</p>
        </article>
      )}
    </main>
  );
});
Conclusion
All right! We've gone from a script that uses AI to get mediocre jokes to a full-blown application that securely makes HTTP requests to a backend that uses AI to get mediocre jokes and sends them back to the frontend to put those mediocre jokes on a page.

You should feel pretty good about yourself.

But not too good, because there's still room to improve.

In our application, we are sending a request and getting an AI response, but we are waiting for the entirety of the body of that response to be generated before showing it to the users. And these AI responses can take a while to complete.

If you've used AI chat tools in the past, you may be familiar with the experience where it looks like it's typing the responses to you, one word at a time, as they're being generated. This doesn't speed up the total request time, but it does get some information back to the user much sooner and feels like a faster experience.

In the next post, we'll learn how to build that same feature using HTTP streams, which are fascinating and powerful but can also be kind of confusing. So, I'm going to dedicate an entire post just to that.

I hope you're enjoying this series and plan to stick around. In the meantime, have fun generating some mediocre jokes.
Thank you so much for reading.
First published here.