
Using OpenAI API

Welcome to this tutorial! In this guide, you will learn how to integrate the OpenAI API with a DHTMLX ChatBot. By the end of this tutorial, you’ll have a functional chatbot that can communicate using OpenAI's advanced language model.

Single Chat/Agent Mode

Client-Side Code Setup

To get started, we will initiate a basic ChatBot instance using DHTMLX. Here is the initial setup code:

const bot = new chatbot.ChatBot("body", {
    sidebar: false,
    format: "markdown",
    agents: [{
        id: 100,
        name: "OpenAI",
        avatar: "https://cdn.dhtmlx.com/datasets/avatars/avatar-c2.jpg",
        response: () => Promise.resolve({ content: "TODO" })
    }]
});

In this snippet, we create an instance of chatbot.ChatBot attached to the body element. The chatbot is configured without a sidebar and uses Markdown formatting for messages. The agents array defines a single agent with the ID 100, named "OpenAI", and an avatar image. The response function currently returns only a placeholder ("TODO"), so the bot cannot yet hold a meaningful conversation.

In the next step, we will integrate a real response mechanism to make the chatbot functional and able to provide meaningful interactions. Stay tuned as we enhance this basic setup!
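As an interim step before wiring in a real model, the placeholder could be swapped for a simple canned reply. The helper below is a hypothetical sketch (it is not part of the DHTMLX API, and the commented `response` signature assumes the callback receives the user message; check the DHTMLX ChatBot documentation for the exact shape):

```javascript
// Hypothetical helper: produce a canned reply for the agent's
// `response` callback until a real LLM is connected.
function cannedReply(userText) {
    // Greet back when the user says hello; otherwise admit we are a stub.
    if (/\bhello\b/i.test(userText)) return "Hello! I am a placeholder agent.";
    return "I cannot answer that yet - a real model will be connected soon.";
}

// Assumed usage inside the agent config (signature is an assumption):
// response: ({ message }) => Promise.resolve({ content: cannedReply(message.content) })
```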

Client-Side Code with Real LLM

Before proceeding, it's crucial to understand that embedding API keys directly in the client's code is not a secure practice. This example is solely for demonstration purposes and should not be used in a production environment.

To implement real responses from OpenAI's language model, you will need an OpenAI API key. You can generate one from OpenAI's API key settings page.

Here's the enhanced code snippet that integrates a real language model for generating responses:

let apiKey = ""; // place your API key here

const bot = new chatbot.ChatBot("body", {
    sidebar: false,
    format: "markdown",
    agents: [{
        id: 100,
        name: "HAL",
        avatar: "https://cdn.dhtmlx.com/datasets/avatars/avatar-c2.jpg"
    }]
});

const response = content =>
    bot.addMessage({ message: { role: "agent", content, typing: -1 } });

bot.on("add-message", ({ message }) => {
    if (message.role !== "user") return;
    if (!apiKey) return response("You need to enter a valid API key in the JavaScript panel of the snippet tool");

    return fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
            "Accept": "application/json",
            "Content-Type": "application/json",
            "Authorization": `Bearer ${apiKey}`
        },
        body: JSON.stringify({
            model: "gpt-4o-mini",
            messages: [
                { role: "system", content: "You are HAL 9000 from the Space Odyssey" },
                { role: "user", content: message.content }
            ],
            temperature: 1,
            max_tokens: 1000
        })
    }).then(r => {
        if (r.status !== 200) return response("Communication error");
        return r.json().then(x => {
            if (x.error) return response(x.error.message);
            return response(x.choices[0].message.content);
        });
    });
});

Here is the complete code snippet: https://snippet.dhtmlx.com/4msiai8v

In this step, we've introduced a real LLM (Large Language Model) by fetching responses from OpenAI's API. Here's a breakdown of the key changes:

  • Defined a response function to display messages from the agent.
  • Attached the add-message event handler to send user messages to the OpenAI API and handle the responses.
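The request body sent inside the event handler can be isolated into a small helper for clarity. This sketch mirrors the payload used above; the model name, system prompt, and parameter values are simply the tutorial's choices:

```javascript
// Build the JSON body for a single-turn Chat Completions request,
// mirroring the payload used in the handler above.
function buildRequestBody(userContent) {
    return JSON.stringify({
        model: "gpt-4o-mini",
        messages: [
            { role: "system", content: "You are HAL 9000 from the Space Odyssey" },
            { role: "user", content: userContent }
        ],
        temperature: 1,
        max_tokens: 1000
    });
}
```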

With this code, the chatbot can now generate meaningful responses using OpenAI's model. However, note the following issues:

  • It only sends the last message to the API, so there's no context from previous messages.
  • The API key is exposed.
  • The response is not streamed, meaning the chatbot waits for the entire response before rendering it.
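To illustrate the last point: when streaming is enabled (by adding stream: true to the request body), the Chat Completions API returns Server-Sent Events, where each data: line carries a JSON fragment with a text delta. A minimal parser for such chunks might look like the sketch below; it only demonstrates the wire format, not a full streaming client:

```javascript
// Extract the text deltas from a raw SSE chunk as produced by the
// Chat Completions API when `stream: true` is set.
function extractDeltas(sseChunk) {
    const out = [];
    for (const line of sseChunk.split("\n")) {
        const trimmed = line.trim();
        if (!trimmed.startsWith("data:")) continue;
        const payload = trimmed.slice(5).trim();
        if (payload === "[DONE]") break; // end-of-stream marker
        const delta = JSON.parse(payload).choices[0].delta;
        if (delta.content) out.push(delta.content);
    }
    return out;
}
```

Each extracted delta could be appended to the agent's message as it arrives, instead of waiting for the full reply.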

Stay tuned for the next section, where we will address the first of these issues by incorporating the chat history to provide context in our API calls.

Client-Side Code with Message History

Enhancing our chatbot to include message history in API calls can greatly improve the context and quality of responses. This section will show you how to incorporate the conversation history when making API requests to OpenAI.

Here is the updated code:

bot.on("add-message", ({ message }) => {
    if (message.role !== "user") return;
    if (!apiKey) return response("You need to enter a valid API key in the JavaScript panel of the snippet tool");

    const prompt = [
        { role: "system", content: "You are HAL 9000 from the Space Odyssey" }
    ];
    const messages = bot.getConfig().messages.map(x => ({
        role: x.role === "agent" ? "assistant" : "user",
        content: x.content
    }));

    return fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
            "Accept": "application/json",
            "Content-Type": "application/json",
            "Authorization": `Bearer ${apiKey}`
        },
        body: JSON.stringify({
            model: "gpt-4o-mini",
            messages: prompt.concat(messages),
            temperature: 1,
            max_tokens: 1000
        })
    }).then(r => {
        if (r.status !== 200) return response("Communication error");
        return r.json().then(x => {
            if (x.error) return response(x.error.message);
            return response(x.choices[0].message.content);
        });
    });
});

Here is the complete code snippet: https://snippet.dhtmlx.com/p9gn8dpc

In this step, the code was enhanced to include the following key changes:

  • We start by checking the role of the message and validating the presence of the API key, similar to the previous example.
  • An initial prompt is defined to set the context of the conversation, specifying that the chatbot is HAL 9000 from the Space Odyssey.
  • The current conversation's messages are fetched and mapped to the required format, distinguishing between user and assistant roles.
  • The prompt is then concatenated with the mapped messages, so the entire conversation history is included in the API request.
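The role mapping in the third step can be expressed as a standalone helper; translating the widget's "agent" role to the API's "assistant" role is the only transformation applied:

```javascript
// Convert DHTMLX ChatBot messages to the role names expected by the
// OpenAI API: the widget's "agent" role becomes "assistant".
function toApiMessages(botMessages) {
    return botMessages.map(x => ({
        role: x.role === "agent" ? "assistant" : "user",
        content: x.content
    }));
}
```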

By including the message history, the chatbot can generate more context-aware responses, leading to richer and more coherent interactions.
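One caveat: sending the full history on every call makes each request grow without bound. A common mitigation, not shown in the snippet above, is to keep only the most recent messages. The sketch below trims by message count for simplicity; a production implementation would budget by tokens instead:

```javascript
// Keep only the last `limit` messages so the request stays within a
// rough size budget; a real implementation would count tokens instead.
function trimHistory(messages, limit) {
    return messages.length > limit ? messages.slice(-limit) : messages;
}
```

The trimmed array would then replace the full list before concatenating it with the system prompt.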

Conclusion

By following this tutorial, you've successfully integrated OpenAI's language model with a DHTMLX ChatBot. We've started with a basic setup, introduced real-time responses from OpenAI's API, and enhanced our bot to include message history for better context. Stay tuned as we work on improving security and real-time interactions in future updates. Happy coding!