AI/LLM Recipe Generator with ChatGPT

The ChatGPT API is like a magic spell for your web application – with just a few lines of code, it can create engaging and intelligent conversations. Even for a tech newbie, it's easy to add to new or existing apps. Dive in, and in no time, you'll have a conversational AI that keeps users engaged and coming back for more.

That introduction was provided by ChatGPT itself. Pretty good, right?

In this article I won't build a conversational AI tool; instead, I'll walk through the integration between a Remix app and the ChatGPT API.

The "testbed" will be a simple recipe generator that takes information from the user and uses it to build a request to ChatGPT.

The code is available on GitHub, and the app looks like this:


Setup

The setup is as simple as can be. (After setting up payment,) you need to generate a secret API key in your OpenAI account and copy it into your .env file (make sure .env is in your .gitignore file so no one can find the key on GitHub!). Also copy your Organization ID from your account settings.

OPENAI_API_KEY=[your secret API key]
OPENAI_ORG_KEY=[your organization id]

and install the openai library. If you use npm, that's:

npm i openai

This includes TypeScript type definitions too!

API call

For this example, we can use the Chat Completions API, but before we do that, we need to configure the library with our keys. Since this code runs exclusively on the server and never in the user's browser, we can read what we need from the process.env object:

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
  organization: process.env.OPENAI_ORG_KEY,
});

const openai = new OpenAIApi(configuration);

Now we can create an object of type CreateChatCompletionRequest. This object takes various options that tell ChatGPT what we want, but the most important (and required) ones are model, which version of the model to use (full list here), and messages, the chat history we want completed.

One of the messages entries can be a system message that gives the chat a "personality" to match. There is also the option of providing the API with sample outputs, which can be used to prime the model for this particular chat.

We will use a user message, that is, the input received from the user, to ask the GPT model for what we want.

// for the purpose of this article, we'll abstract this away.
const ingredientsList = getIngredientsList();

const completionRequest: CreateChatCompletionRequest = {
  model: 'gpt-3.5-turbo',
  messages: [
    {
      role: 'system',
      content: 'You are a creative and experienced chef assistant.',
    },
    {
      role: 'user',
      content: `Generate a recipe with these ingredients: ${ingredientsList}.`,
    },
  ],
};

const chatCompletion = await openai.createChatCompletion(completionRequest);

In this case, we use the gpt-3.5-turbo model and start the conversation by asking the API to act as a "creative and experienced chef assistant".

Handle the response

The answer is well typed and easily accessible:

const generatedOutput = chatCompletion.data.choices[0].message?.content;

Which in this case might lead to a “Tilapia and Veggie One Pan Dinner” recipe with a full ingredient list and step-by-step instructions!

This is of course a simplification. A full implementation with more options and a user interface might end up looking something like this:

[Screenshot of the UI with generated input and output]
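To make the wiring concrete, here is a minimal sketch of how form input could become the user message. The helper name and the Remix action shape are my assumptions, not code from the repo:

```typescript
// Hypothetical helper (not from the repo): turns the user's selected
// ingredients into the content of the `user` chat message.
function buildRecipePrompt(ingredients: string[]): string {
  return `Generate a recipe with these ingredients: ${ingredients.join(", ")}.`;
}

// In a Remix action this might be used roughly like so (sketch only):
//
// export async function action({ request }: ActionArgs) {
//   const formData = await request.formData();
//   const ingredients = formData.getAll("ingredient").map(String);
//   const completion = await openai.createChatCompletion({
//     model: "gpt-3.5-turbo",
//     messages: [{ role: "user", content: buildRecipePrompt(ingredients) }],
//   });
//   return json({ recipe: completion.data.choices[0].message?.content });
// }
```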

Advanced options

An interesting option available to us here, which is not exposed in the ChatGPT interface, is temperature: an abstraction of randomness that can add a bit of chaos.

With a temperature of 2, a "recipe" might look like this…

Chicken Delight Recipe Parham Style:

Featured Cooking Equipment(set boAirition above stove required Gas-Telian range VMM incorporated rather below ideal temperature during baking ir regulate heat applied):
- Large non-stick frypan(Qarma brand)->Coloning cooking Stenor service(each Product hasown separate reviews dependable optimization features)

Be careful, because it can eat through your tokens! As a safety measure (or depending on your use case), max_tokens can be used to limit the output size.
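Both knobs are set directly on the request object; the values below are illustrative, not recommendations:

```typescript
// The same kind of request as before, with the two extra knobs:
// `temperature` (0 to 2, default 1) controls randomness, and
// `max_tokens` caps how long the generated output may be.
const chaoticRequest = {
  model: "gpt-3.5-turbo",
  messages: [
    {
      role: "user" as const,
      content: "Generate a recipe with these ingredients: chicken, lemons.",
    },
  ],
  temperature: 2, // maximum chaos, as in the sample output above
  max_tokens: 500, // hard cap so a runaway response can't eat the token budget
};
```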

In my experience with gpt-3.5-turbo, changing the system content didn't make much of an impact in this case, but it could be more useful for ongoing conversations. Since my use case asks for a recipe only once, there is little need to set a system "personality".


Limitations

As of the writing of this article, gpt-3.5-turbo is the latest model available to me, but it comes with limitations.

First of all, processing is rather slow, taking about 15 seconds to return a recipe. OpenAI proposes a number of improvements in its documentation, such as limiting output size, caching, and batching.

There is also the inherent limitation that a conversation is "stateless": if you want a continuous conversation, every previous user message and its assistant answer must be sent along with each new user message.
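In practice that means keeping the whole history on your side and resending it every time. A minimal sketch of that bookkeeping (the helper names are my own, not from the library):

```typescript
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Build the `messages` array for the next request: the entire history
// plus the new user message, since the API itself remembers nothing.
function withUserMessage(history: ChatMessage[], input: string): ChatMessage[] {
  return [...history, { role: "user", content: input }];
}

// After each response, the assistant's reply has to be appended too,
// or the model won't "remember" its own answers on the next turn.
function withAssistantReply(history: ChatMessage[], reply: string): ChatMessage[] {
  return [...history, { role: "assistant", content: reply }];
}
```

Every turn therefore grows the payload, which is also why long conversations get progressively more expensive.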

In my example application, providing a very limited set of common ingredients (Salt, Pepper, Olive oil, Butter, All-purpose flour, Sugar, Eggs, Milk, Garlic, Onion, Lemons, White Vinegar, Apple Cider Vinegar, Soy sauce, Baking powder, Cumin) still leads to chicken- or shrimp-based recipes. I tried more specific prompts ("If chicken isn't available, don't recommend chicken recipes.") but didn't have much success.

This is an example of a "hallucination", and not a problem specific to gpt-3.5-turbo; GPT-4 is not yet widely available via the API.

There are other general limitations of AI generators to keep in mind, such as biases and how they are often “confidently false.”

In this case, the worst-case scenario is an unappealing meal, but these limitations should be kept in mind when relying on generated content.

Fine tuning and cost

As the only user of this application, my expenses have been very low 😅. A run takes about 200 input tokens and produces between 300 and 500 output tokens. With gpt-3.5-turbo, that comes out to (0.2 * $0.0015) + (0.4 * $0.002), or about one tenth of a cent.

When GPT-4 becomes generally available, it will be much more expensive. At current prices, one run would be the equivalent of (0.2 * $0.03) + (0.4 * $0.06), or about 3 cents.
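The arithmetic above is just token count divided by a thousand, times the per-1K-token price; a throwaway helper makes the comparison easy (prices as quoted above, and they do change):

```typescript
// Back-of-the-envelope cost of one run, given token counts and the
// model's per-1K-token prices (input and output are billed separately).
function runCostUSD(
  inputTokens: number,
  outputTokens: number,
  pricePer1kInput: number,
  pricePer1kOutput: number
): number {
  return (inputTokens / 1000) * pricePer1kInput + (outputTokens / 1000) * pricePer1kOutput;
}

// ~200 input tokens and ~400 output tokens per run:
const gpt35Cost = runCostUSD(200, 400, 0.0015, 0.002); // about a tenth of a cent
const gpt4Cost = runCostUSD(200, 400, 0.03, 0.06); // about 3 cents
```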

The API price dropped a few weeks ago, so it’s reasonable to expect GPT-4 to become cheaper in the future as well.

GPT-3.5 output can still be fine-tuned with more refined inputs, but since billing is based on the number of tokens (i.e. the length of input and output), fine-tuning this way can make for an expensive conversation, especially in an application that chains messages between the user and the assistant.

Requests can also be broken down into smaller, more specific prompts. However, in addition to increasing the total number of tokens, this approach also increases the complexity (and maintenance cost) of an application, especially if you use the output of one query as input to another.


The header image for this post was created with MidJourney, just another (small) example of how I use this technology.

Generative AI opens up a wide range of new and exciting applications, but not without additional considerations.

Although this application barely scratches the surface of what can be done, I hope it serves as a useful introduction to integrating the openai library into your web application, whether you're building an interesting product or just exploring new technologies.

Have you explored interesting uses of the API or experimented with different parts of it? please share!
