<p>Once you've generated an API key, you need to store it on your computer as an environment variable named <code>OPENAI_API_KEY</code>. This is a standard name that libraries such as <strong>OpenAI</strong> and <strong>LangChain</strong> look for, so you won't need to pass the key around manually later on.</p>
<p>Note that Windows, macOS, and Linux each have their own way of setting an environment variable.</p>
<h5>Windows</h5>
<ol>
<li>Right-click on "<strong>This PC</strong>" or "<strong>My Computer</strong>" and select "<strong>Properties</strong>".</li>
<li>Click "<strong>Advanced system settings</strong>" in the left sidebar.</li>
<li>In the System Properties window, click the "<strong>Environment Variables</strong>" button.</li>
<li>Under "<strong>System variables</strong>" or "<strong>User variables</strong>", click "<strong>New</strong>" and enter <code>OPENAI_API_KEY</code> as the name and your API key as the value.</li>
</ol>
<h5>macOS and Linux</h5>
<p>To set a permanent variable, add the following line to your shell configuration file, such as <code>~/.bash_profile</code>, <code>~/.bashrc</code>, or <code>~/.zshrc</code>:</p>
<pre>export OPENAI_API_KEY=value</pre>
<h4>LangChain</h4>
<p><strong>LangChain</strong> is a framework for building applications on top of language models. In our case, it provides the tools that will help us convert text documents into numbers.</p>
<p>You might wonder, <em>why do we need to do this?</em></p>
<p>Computers are good at working with numbers, but not with words, sentences, and their meanings, so we need to convert words into numbers. This process is called <strong>embedding</strong>.</p>
<p>Embedding makes it easier for computers to analyze and find patterns in language data, and helps them capture the semantics of the human-language information they are given.</p>
<p>For example, say a user sends a query about "<em>fancy cars</em>". Rather than trying to match those exact words in the information source, the system would understand that you are searching for things like Ferrari, Maserati, Aston Martin, or Mercedes-Benz.</p>
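To make this concrete, here is a small self-contained sketch of how embedding-based search works. The three-dimensional vectors below are made up purely for illustration (real OpenAI embeddings have over a thousand dimensions); the point is that semantically related phrases end up with a higher cosine similarity:

```typescript
// Toy embeddings: each phrase maps to a vector. These 3-D values are
// invented for illustration, not produced by a real embedding model.
const embeddings: Record<string, number[]> = {
  'fancy cars': [0.9, 0.8, 0.1],
  'Ferrari': [0.85, 0.75, 0.2],
  'bananas': [0.1, 0.2, 0.9],
};

// Cosine similarity: 1 means "pointing the same way", 0 means unrelated.
const cosineSimilarity = (a: number[], b: number[]): number => {
  const dot = a.reduce((sum, ai, i) => sum + ai * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
};

const query = embeddings['fancy cars'];
const ferrari = cosineSimilarity(query, embeddings['Ferrari']);
const bananas = cosineSimilarity(query, embeddings['bananas']);
// 'Ferrari' scores much closer to 'fancy cars' than 'bananas' does.
console.log(ferrari > bananas); // true
```

This is exactly the comparison LangChain performs for us later, just over real embedding vectors instead of toy ones.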
<h4>Next.js</h4>
<p>We need a framework to create a user interface so users can interact with our chatbot.</p>
<p>In our case, Next.js has everything we need to get our chatbot up and running for end users. We will build the interface using a React UI library, <a>shadcn/ui</a>, and use Next.js's route system to create the API endpoint.</p>
<p>There is also an <a>SDK</a> that makes it easier and quicker to build chat user interfaces.</p>
<h4>Data and Other Prerequisites</h4>
<p>We'll also need to prepare some data. It will be processed, stored in vector storage, and sent to OpenAI to give the prompt more context.</p>
<p>In this example, to keep things simple, I've made a JSON file containing a list of blog post titles. You can find it in the <a>repository</a>. Ideally, you'd retrieve this information directly from your database.</p>
<p>Make sure Node.js is installed by checking its version:</p>
<pre>node -v</pre>
<p>If you don't have Node.js installed, you can follow the instructions on the <a>official website</a>.</p>
<h4>How's Everything Going to Work?</h4>
<p>To make it easy to understand, here's a high-level overview of how everything is going to work:</p>
<ol>
<li>The user inputs a question or query into the chatbot.</li>
<li>LangChain retrieves documents related to the user's query.</li>
<li>The prompt, the query, and the related documents are sent to the OpenAI API to get a response.</li>
<li>The response is displayed to the user.</li>
</ol>
<p>Now that we have a high-level overview of how everything is going to work, let's get started!</p>
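Those steps can be sketched as a single function. The helper names here (`retrieve`, `complete`) are placeholders for the LangChain similarity search and the OpenAI call we'll implement later, not real APIs:

```typescript
// High-level sketch of the retrieval-augmented flow. `retrieve` and
// `complete` are hypothetical stand-ins for the LangChain similarity
// search and the OpenAI chat completion implemented later on.
type Doc = { pageContent: string };

export const answerQuery = async (
  query: string,
  retrieve: (q: string) => Promise<Doc[]>,
  complete: (prompt: string) => Promise<string>
): Promise<string> => {
  // Step 2: fetch documents related to the user's query.
  const docs = await retrieve(query);
  const context = docs.map((d) => d.pageContent).join('\n');

  // Step 3: combine the instruction, context, and question into one prompt.
  const prompt = `Answer using this context:\n${context}\n\nQuestion: ${query}`;

  // Step 4: the caller displays whatever the model returns.
  return complete(prompt);
};
```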
<h4>Installing Dependencies</h4>
<p>Let's start by installing the packages needed to build the user interface for our chatbot. Type the following command:</p>
<pre>npx create-next-app@latest ai-assistant --typescript --tailwind --eslint</pre>
<p>This command will install and set up Next.js with TypeScript, <a>Tailwind CSS</a>, and ESLint. It may ask you a few questions; in this case, it's best to select the default options.</p>
<p>Once the installation is complete, navigate to the project directory:</p>
<pre>cd ai-assistant</pre>
<p>Next, we need to install a few additional dependencies, such as <a>ai</a>, <a>openai</a>, and <a>langchain</a>, which were not included in the previous command.</p>
<pre>npm i ai openai langchain @langchain/openai remark-gfm</pre>
<h4>Building the Chat Interface</h4>
<p>To create the chat interface, we'll use some pre-built components from <strong>shadcn/ui</strong>, like the <a>button</a>, <a>avatar</a>, and <a>input</a>. Fortunately, adding these components is easy with shadcn/ui. Just type:</p>
<pre>npx shadcn-ui@latest add scroll-area button avatar card input</pre>
<p>This command will automatically pull the components and add them to the <code>ui</code> directory.</p>
<p>Next, let's make a new file named <code>Chat.tsx</code> in the <code>src/components</code> directory. This file will hold our chat interface.</p>
<p>We'll use the <code>ai</code> package to manage tasks such as capturing user input, sending queries to the API, and receiving responses from the AI.</p>
<p>OpenAI's response can be plain text, HTML, or Markdown. To format it into proper HTML, we'll use the <a>remark-gfm</a> package.</p>
<p>We'll also need to display avatars within the chat interface. For this tutorial, I'm using <a>Avatartion</a> to generate avatars for both the AI and the user. These avatars are stored in the <code>public</code> directory.</p>
<p>Below is the code we'll add to this file.</p>
<pre>
'use client';

import { Avatar, AvatarFallback, AvatarImage } from '@/ui/avatar';
import { Button } from '@/ui/button';
import {
  Card,
  CardContent,
  CardFooter,
  CardHeader,
  CardTitle,
} from '@/ui/card';
import { Input } from '@/ui/input';
import { ScrollArea } from '@/ui/scroll-area';
import { useChat } from 'ai/react';
import { Send } from 'lucide-react';
import { FunctionComponent, memo } from 'react';
import { ErrorBoundary } from 'react-error-boundary';
import ReactMarkdown, { Options } from 'react-markdown';
import remarkGfm from 'remark-gfm';

/**
 * Memoized ReactMarkdown component.
 * The component is memoized to prevent unnecessary re-renders.
 */
const MemoizedReactMarkdown: FunctionComponent<Options> = memo(
  ReactMarkdown,
  (prevProps, nextProps) =>
    prevProps.children === nextProps.children &&
    prevProps.className === nextProps.className
);

/**
 * Chat component that allows users to interact with the chatbot.
 * It displays the messages exchanged between the user and the chatbot,
 * and lets users input their questions and receive responses.
 */
export const Chat = () => {
  const { handleInputChange, handleSubmit, input, messages } = useChat({
    api: '/api/chat',
  });

  return (
    <Card className="w-full max-w-3xl min-h-[640px] grid gap-3 grid-rows-[max-content,1fr,max-content]">
      <CardHeader className="row-span-1">
        <CardTitle>AI Assistant</CardTitle>
      </CardHeader>
      <CardContent className="h-full row-span-2">
        <ScrollArea className="h-full w-full">
          {messages.map((message) => {
            return (
              <div
                className="flex gap-3 text-slate-600 text-sm mb-4"
                key={message.id}
              >
                {message.role === 'user' && (
                  <Avatar>
                    <AvatarFallback>U</AvatarFallback>
                    <AvatarImage src="/user.png" />
                  </Avatar>
                )}
                {message.role === 'assistant' && (
                  <Avatar>
                    <AvatarImage src="/kovi.png" />
                  </Avatar>
                )}
                <p className="leading-relaxed">
                  <span className="block font-bold text-slate-700">
                    {message.role === 'user' ? 'User' : 'AI'}
                  </span>
                  <ErrorBoundary
                    fallback={
                      <div className="prose prose-neutral">
                        {message.content}
                      </div>
                    }
                  >
                    <MemoizedReactMarkdown
                      className="prose prose-neutral prose-sm"
                      remarkPlugins={[remarkGfm]}
                    >
                      {message.content}
                    </MemoizedReactMarkdown>
                  </ErrorBoundary>
                </p>
              </div>
            );
          })}
        </ScrollArea>
      </CardContent>
      <CardFooter className="h-max row-span-3">
        <form className="w-full flex gap-2" onSubmit={handleSubmit}>
          <Input
            maxLength={1000}
            onChange={handleInputChange}
            placeholder="Your question..."
            value={input}
          />
          <Button aria-label="Send" type="submit">
            <Send size={16} />
          </Button>
        </form>
      </CardFooter>
    </Card>
  );
};
</pre>
<p>Let's check out the UI. First, start the Next.js development server with the following command:</p>
<pre>npm run dev</pre>
<p>By default, the Next.js development server runs at <code>localhost:3000</code>. Here's how our chatbot interface appears in the browser:</p>
<h4>Setting up the API endpoint</h4>
<p>Next, we need to set up the API endpoint that the UI will call when the user submits a query. To do this, create a new file named <code>route.ts</code> in the <code>src/app/api/chat</code> directory. Below is the code that goes into the file.</p>
<pre>
import { readData } from '@/lib/data';
import { OpenAIEmbeddings } from '@langchain/openai';
import { OpenAIStream, StreamingTextResponse } from 'ai';
import { Document } from 'langchain/document';
import { MemoryVectorStore } from 'langchain/vectorstores/memory';
import OpenAI from 'openai';

/**
 * Create a vector store from a list of documents using OpenAI embeddings.
 */
const createStore = () => {
  const data = readData();

  return MemoryVectorStore.fromDocuments(
    data.map((title) => {
      return new Document({
        pageContent: `Title: ${title}`,
      });
    }),
    new OpenAIEmbeddings()
  );
};

const openai = new OpenAI();

export async function POST(req: Request) {
  const { messages } = (await req.json()) as {
    messages: { content: string; role: 'assistant' | 'user' }[];
  };
  const store = await createStore();
  const results = await store.similaritySearch(messages[0].content, 100);
  const questions = messages
    .filter((m) => m.role === 'user')
    .map((m) => m.content);
  const latestQuestion = questions[questions.length - 1] || '';
  const response = await openai.chat.completions.create({
    messages: [
      {
        content: `You're a helpful assistant. You're here to help me with my questions.`,
        role: 'assistant',
      },
      {
        content: `
        Please answer the following question using the provided context.
        If the context is not provided, please simply say that you're not able to answer
        the question.

        Question:
        ${latestQuestion}

        Context:
        ${results.map((r) => r.pageContent).join('\n')}
        `,
        role: 'user',
      },
    ],
    model: 'gpt-4',
    stream: true,
    temperature: 0,
  });
  const stream = OpenAIStream(response);

  return new StreamingTextResponse(stream);
}
</pre>
<p>Let's break down some important parts of the code to understand what's happening, as this code is crucial for making our chatbot work.</p>
<p>First, the following code enables the endpoint to receive a POST request. It takes the <code>messages</code> argument, which is automatically constructed by the <code>ai</code> package running on the front end.</p>
<pre>
export async function POST(req: Request) {
  const { messages } = (await req.json()) as {
    messages: { content: string; role: 'assistant' | 'user' }[];
  };
}
</pre>
<p>In this section of the code, we process the entries from the JSON file and store them in a vector store.</p>
<pre>
const createStore = () => {
  const data = readData();

  return MemoryVectorStore.fromDocuments(
    data.map((title) => {
      return new Document({
        pageContent: `Title: ${title}`,
      });
    }),
    new OpenAIEmbeddings()
  );
};
</pre>
<p>For the sake of simplicity in this tutorial, we store the vectors in memory. Ideally, you would store them in a vector database. There are several options to choose from, such as:</p>
<ul>
<li><a>Elasticsearch</a></li>
<li><a>MongoDB Atlas</a></li>
<li><a>OpenSearch</a></li>
<li><a>Redis</a></li>
</ul>
<p>Then we retrieve the pieces of the documents that are most relevant to the user's query.</p>
<pre>
const store = await createStore();
const results = await store.similaritySearch(messages[0].content, 100);
</pre>
<p>Finally, we send the user's query and the related documents to the OpenAI API to get a response, and then stream the response back to the user. In this tutorial, we use GPT-4, which at the time of writing is OpenAI's latest and most capable model.</p>
<pre>
const latestQuestion = questions[questions.length - 1] || '';
const response = await openai.chat.completions.create({
  messages: [
    {
      content: `You're a helpful assistant. You're here to help me with my questions.`,
      role: 'assistant',
    },
    {
      content: `
      Please answer the following question using the provided context.
      If the context is not provided, please simply say that you're not able to answer
      the question.

      Question:
      ${latestQuestion}

      Context:
      ${results.map((r) => r.pageContent).join('\n')}
      `,
      role: 'user',
    },
  ],
  model: 'gpt-4',
  stream: true,
  temperature: 0,
});
</pre>
<p>We use a very simple prompt: we tell OpenAI to answer the user's query using only the provided context. We also select the <code>gpt-4</code> model and set the <code>temperature</code> to <code>0</code>. Our goal is to ensure that the AI responds only within the scope of the context instead of being creative, which can often lead to <strong><a>hallucination</a></strong>.</p>
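As a side note, behavioral instructions are conventionally sent with the <code>system</code> role in the Chat Completions API; the code above sends them as an <code>assistant</code>-role message, which also works. A sketch of the system-role variant (the helper name and wording here are my own, not from the tutorial's repository):

```typescript
// Sketch: building the messages array with a 'system' role for the
// instruction. This is an alternative to the assistant-role message
// used in the tutorial, not a required change.
export const buildMessages = (latestQuestion: string, context: string) => [
  {
    role: 'system' as const,
    content:
      "You're a helpful assistant. Answer only from the provided context; " +
      "if the context doesn't cover the question, say you can't answer.",
  },
  {
    role: 'user' as const,
    content: `Question:\n${latestQuestion}\n\nContext:\n${context}`,
  },
];
```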
<p>And that's it. Now we can try chatting with the chatbot, our virtual personal assistant.</p>
<h4>Wrapping Up</h4>
<p>We've just built a simple chatbot! There's certainly room to make it more advanced. As mentioned in this tutorial, if you plan to use it in production, you should store your vector data in a proper database instead of in memory. You might also want to add more data to provide better context for answering user queries, and try tweaking the prompt to improve the AI's responses.</p>
<p>Overall, I hope this helps you get started with building your next AI-powered application.</p>
<p>The post <a>How to Create a Personalized AI Assistant with OpenAI</a> appeared first on <a>Hongkiat</a>.</p>
OPENAI_API_KEY<\/code>, and value of the environment variable.<\/li>\n<\/ol>\n
macOS and Linux<\/h5>\nTo set a permanent variable, add the following to your shell configuration file such as ~\/.bash_profile<\/code>, ~\/.bashrc<\/code>, ~\/.zshrc<\/code>.<\/p>\n
\r\nexport OPENAI_API_KEY=value\r\n<\/pre>\nLangChain<\/h4>\n
LangChain<\/strong> is a system that helps computers understand and work with human language. In our case, it provides tools that will help us convert text documents into numbers.<\/p>\n
You might wonder, why do we need to do this?<\/em><\/p>\n
Basically, AI, machines, or computers are good at working with numbers but not with words, sentences, and their meanings. So we need to convert words into numbers. <\/p>\n
This process is called embedding<\/strong>.<\/p>\n
It makes it easier for computers to analyze and find patterns in language data, as well as helps to understand the semantics of the information they are given from a human language.<\/p>\n
<\/figure>\nFor example, let\u2019s say a user sends a query about \u201cfancy cars<\/em>\u201c. Rather than trying to find the exact words from the information source, it would probably understand that you are trying to search for Ferrari, Maserati, Aston Martin, Mercedes Benz, etc.<\/p>\n
Next.js<\/h4>\nWe need a framework to create a user interface so users can interact with our chatbot. <\/p>\n
In our case, Next.js has everything we need to get our chatbot up and running for the end-users. We will build the interface using a React.js UI library, shadcn\/ui<\/a>. It has a route system for creating an API endpoint. <\/p>\n
It also provides an SDK<\/a> that will make it easier and quicker to build chat user interfaces.<\/p>\n
<\/figure>\nData and Other Prerequisites<\/h4>\nIdeally, we\u2019ll also need to prepare some data ready. These will be processed, stored in a Vector storage and sent to OpenAI to give more info for the prompt. <\/p>\n
In this example, to make it simpler, I\u2019ve made a JSON file with a list of title of a blog post. You can find them in the repository<\/a>. Ideally, you\u2019d want to retrieve this information directly from the database.<\/p>\n
node -v<\/pre>\nIf you don\u2019t have Node.js installed, you can follow the instructions on the official website<\/a>.<\/p>\n
How\u2019s Everything Going to Work?<\/h4>\nTo make it easy to understand, here\u2019s a high-level overview of how everything is going to work:<\/p>\n
\n- The user will input a question or query into the chatbot.<\/li>\n
- LangChain will retrieve related documents of the user\u2019s query.<\/li>\n
- Send the prompt, the query, and the related documents to the OpenAI API to get a response.<\/li>\n
- Display the response to the user.<\/li>\n<\/ol>\n
Now that we have a high-level overview of how everything is going to work, let\u2019s get started!<\/p>\n
Installing Dependencies<\/h4>\nLet\u2019s start by installing the necessary packages to build the user interface for our chatbot. Type the following command:<\/p>\n
\r\nnpx create-next-app@latest ai-assistant --typescript --tailwind --eslint\r\n<\/pre>\nThis command will install and set up Next.js with shadcn\/ui<\/strong>, TypeScript, Tailwind CSS<\/a>, and ESLint. It may ask you a few questions; in this case, it\u2019s best to select the default options.<\/p>\n
Once the installation is complete, navigate to the project directory:<\/p>\n
\r\ncd ai-assistant\r\n<\/pre>\nNext, we need to install a few additional dependencies, such as ai<\/a>, openai<\/a>, and langchain<\/a>, which were not included in the previous command.<\/p>\n
\r\nnpm i ai openai langchain @langchain\/openai remark-gfm\r\n<\/pre>\nBuilding the Chat Interface<\/h4>\n
To create the chat interface, we\u2019ll use some pre-built components from shadcn\/ui<\/strong> like the button<\/a>, avatar<\/a>, and input<\/a>. Fortunately, adding these components is easy with shadcn\/ui<\/strong>. Just type:<\/p>\n
\r\nnpx shadcn-ui@latest add scroll-area button avatar card input\r\n<\/pre>\nThis command will automatically pull and add the components to the ui<\/code> directory.<\/p>\n
Next, let\u2019s make a new file named Chat.tsx in the src\/components directory. This file will hold our chat interface.<\/p>\n
We\u2019ll use the ai package to manage tasks such as capturing user input, sending queries to the API, and receiving responses from the AI.<\/p>\n
The OpenAI\u2019s response can be plain text, HTML, or Markdown. To format it into proper HTML, we\u2019ll use the
remark-gfm<\/a> package.<\/p>\n
We\u2019ll also need to display avatars within the Chat interface. For this tutorial, I\u2019m using Avatartion<\/a> to generate avatars for both the AI and the user. These avatars are stored in the public<\/code> directory.<\/p>\n
Below is the code we\u2019ll add to this file.<\/p>\n
\r\n'use client';\r\n\r\nimport { Avatar, AvatarFallback, AvatarImage } from '@\/ui\/avatar';\r\nimport { Button } from '@\/ui\/button';\r\nimport {\r\n Card,\r\n CardContent,\r\n CardFooter,\r\n CardHeader,\r\n CardTitle,\r\n} from '@\/ui\/card';\r\nimport { Input } from '@\/ui\/input';\r\nimport { ScrollArea } from '@\/ui\/scroll-area';\r\nimport { useChat } from 'ai\/react';\r\nimport { Send } from 'lucide-react';\r\nimport { FunctionComponent, memo } from 'react';\r\nimport { ErrorBoundary } from 'react-error-boundary';\r\nimport ReactMarkdown, { Options } from 'react-markdown';\r\nimport remarkGfm from 'remark-gfm';\r\n\r\n\/**\r\n * Memoized ReactMarkdown component.\r\n * The component is memoized to prevent unnecessary re-renders.\r\n *\/\r\nconst MemoizedReactMarkdown: FunctionComponent<Options> = memo(\r\n ReactMarkdown,\r\n (prevProps, nextProps) =>\r\n prevProps.children === nextProps.children &&\r\n prevProps.className === nextProps.className\r\n);\r\n\r\n\/**\r\n * Represents a chat component that allows users to interact with a chatbot.\r\n * The component displays a chat interface with messages exchanged between the user and the chatbot.\r\n * Users can input their questions and receive responses from the chatbot.\r\n *\/\r\nexport const Chat = () => {\r\n const { handleInputChange, handleSubmit, input, messages } = useChat({\r\n api: '\/api\/chat',\r\n });\r\n\r\n return (\r\n <Card className=\"w-full max-w-3xl min-h-[640px] grid gap-3 grid-rows-[max-content,1fr,max-content]\">\r\n <CardHeader className=\"row-span-1\">\r\n <CardTitle>AI Assistant<\/CardTitle>\r\n <\/CardHeader>\r\n <CardContent className=\"h-full row-span-2\">\r\n <ScrollArea className=\"h-full w-full\">\r\n {messages.map((message) => {\r\n return (\r\n <div\r\n className=\"flex gap-3 text-slate-600 text-sm mb-4\"\r\n key={message.id}\r\n >\r\n {message.role === 'user' && (\r\n <Avatar>\r\n <AvatarFallback>U<\/AvatarFallback>\r\n <AvatarImage src=\"\/user.png\" \/>\r\n 
<\/Avatar>\r\n )}\r\n {message.role === 'assistant' && (\r\n <Avatar>\r\n <AvatarImage src=\"\/kovi.png\" \/>\r\n <\/Avatar>\r\n )}\r\n <p className=\"leading-relaxed\">\r\n <span className=\"block font-bold text-slate-700\">\r\n {message.role === 'user' ? 'User' : 'AI'}\r\n <\/span>\r\n <ErrorBoundary\r\n fallback={\r\n <div className=\"prose prose-neutral\">\r\n {message.content}\r\n <\/div>\r\n }\r\n >\r\n <MemoizedReactMarkdown\r\n className=\"prose prose-neutral prose-sm\"\r\n remarkPlugins={[remarkGfm]}\r\n >\r\n {message.content}\r\n <\/MemoizedReactMarkdown>\r\n <\/ErrorBoundary>\r\n <\/p>\r\n <\/div>\r\n );\r\n })}\r\n <\/ScrollArea>\r\n <\/CardContent>\r\n <CardFooter className=\"h-max row-span-3\">\r\n <form className=\"w-full flex gap-2\" onSubmit={handleSubmit}>\r\n <Input\r\n maxLength={1000}\r\n onChange={handleInputChange}\r\n placeholder=\"Your question...\"\r\n value={input}\r\n \/>\r\n <Button aria-label=\"Send\" type=\"submit\">\r\n <Send size={16} \/>\r\n <\/Button>\r\n <\/form>\r\n <\/CardFooter>\r\n <\/Card>\r\n );\r\n};\r\n<\/pre>\nLet\u2019s check out the UI. First, we need to enter the following command to start the Next.js localhost environment:<\/p>\n
npm run dev<\/pre>\nBy default, the Next.js localhost environment runs at localhost:3000<\/code>. Here\u2019s how our chatbot interface will appear in the browser:<\/p>\n
<\/figure>\n
Setting up the API endpoint<\/h4>\nNext, we need to set up the API endpoint that the UI will use when the user submits their query. To do this, we create a new file named route.ts<\/code> in the src\/app\/api\/chat<\/code> directory. Below is the code that goes into the file.<\/p>\n
\r\nimport { readData } from '@\/lib\/data';\r\nimport { OpenAIEmbeddings } from '@langchain\/openai';\r\nimport { OpenAIStream, StreamingTextResponse } from 'ai';\r\nimport { Document } from 'langchain\/document';\r\nimport { MemoryVectorStore } from 'langchain\/vectorstores\/memory';\r\nimport OpenAI from 'openai';\r\n\r\n\/**\r\n * Create a vector store from a list of documents using OpenAI embedding.\r\n *\/\r\nconst createStore = () => {\r\n const data = readData();\r\n\r\n return MemoryVectorStore.fromDocuments(\r\n data.map((title) => {\r\n return new Document({\r\n pageContent: `Title: ${title}`,\r\n });\r\n }),\r\n new OpenAIEmbeddings()\r\n );\r\n};\r\nconst openai = new OpenAI();\r\n\r\nexport async function POST(req: Request) {\r\n const { messages } = (await req.json()) as {\r\n messages: { content: string; role: 'assistant' | 'user' }[];\r\n };\r\n const store = await createStore();\r\n const results = await store.similaritySearch(messages[0].content, 100);\r\n const questions = messages\r\n .filter((m) => m.role === 'user')\r\n .map((m) => m.content);\r\n const latestQuestion = questions[questions.length - 1] || '';\r\n const response = await openai.chat.completions.create({\r\n messages: [\r\n {\r\n content: `You're a helpful assistant. 
You're here to help me with my questions.`,\r\n role: 'assistant',\r\n },\r\n {\r\n content: `\r\n Please answer the following question using the provided context.\r\n If the context is not provided, please simply say that you're not able to answer\r\n the question.\r\n\r\n Question:\r\n ${latestQuestion}\r\n\r\n Context:\r\n ${results.map((r) => r.pageContent).join('n')}\r\n `,\r\n role: 'user',\r\n },\r\n ],\r\n model: 'gpt-4',\r\n stream: true,\r\n temperature: 0,\r\n });\r\n const stream = OpenAIStream(response);\r\n\r\n return new StreamingTextResponse(stream);\r\n}\r\n<\/pre>\nLet\u2019s break down some important parts of the code to understand what\u2019s happening, as this code is crucial for making our chatbot work.<\/p>\n
First, the following code enables the endpoint to receive a POST request. It takes the messages<\/code> argument, which is automatically constructed by the ai<\/code> package running on the front-end.<\/p>\n
\r\nexport async function POST(req: Request) {\r\n const { messages } = (await req.json()) as {\r\n messages: { content: string; role: 'assistant' | 'user' }[];\r\n };\r\n}\r\n<\/pre>\nIn this section of the code, we process the JSON file, and store them in a vector store.<\/p>\n
\r\nconst createStore = () => {\r\n const data = readData();\r\n\r\n return MemoryVectorStore.fromDocuments(\r\n data.map((title) => {\r\n return new Document({\r\n pageContent: `Title: ${title}`,\r\n });\r\n }),\r\n new OpenAIEmbeddings()\r\n );\r\n};\r\n<\/pre>\nFor the sake of simplicity in this tutorial, we store the vector in memory. Ideally, you would need to store it in a Vector database. There are several options to choose from, such as:<\/p>\n
~\/.bash_profile<\/code>, ~\/.bashrc<\/code>, ~\/.zshrc<\/code>.<\/p>\n
\r\nexport OPENAI_API_KEY=value\r\n<\/pre>\nLangChain<\/h4>\n
LangChain<\/strong> is a system that helps computers understand and work with human language. In our case, it provides tools that will help us convert text documents into numbers.<\/p>\n
You might wonder, why do we need to do this?<\/em><\/p>\n
Basically, AI, machines, or computers are good at working with numbers but not with words, sentences, and their meanings. So we need to convert words into numbers. <\/p>\n
This process is called embedding<\/strong>.<\/p>\n
It makes it easier for computers to analyze and find patterns in language data, as well as helps to understand the semantics of the information they are given from a human language.<\/p>\n
<\/figure>\nFor example, let\u2019s say a user sends a query about \u201cfancy cars<\/em>\u201c. Rather than trying to find the exact words from the information source, it would probably understand that you are trying to search for Ferrari, Maserati, Aston Martin, Mercedes Benz, etc.<\/p>\n
Next.js<\/h4>\nWe need a framework to create a user interface so users can interact with our chatbot. <\/p>\n
In our case, Next.js has everything we need to get our chatbot up and running for the end-users. We will build the interface using a React.js UI library, shadcn\/ui<\/a>. It has a route system for creating an API endpoint. <\/p>\n
It also provides an SDK<\/a> that will make it easier and quicker to build chat user interfaces.<\/p>\n
<\/figure>\nData and Other Prerequisites<\/h4>\nIdeally, we\u2019ll also need to prepare some data ready. These will be processed, stored in a Vector storage and sent to OpenAI to give more info for the prompt. <\/p>\n
In this example, to make it simpler, I\u2019ve made a JSON file with a list of title of a blog post. You can find them in the repository<\/a>. Ideally, you\u2019d want to retrieve this information directly from the database.<\/p>\n
node -v<\/pre>\nIf you don\u2019t have Node.js installed, you can follow the instructions on the official website<\/a>.<\/p>\n
How\u2019s Everything Going to Work?<\/h4>\nTo make it easy to understand, here\u2019s a high-level overview of how everything is going to work:<\/p>\n
\n- The user will input a question or query into the chatbot.<\/li>\n
- LangChain will retrieve related documents of the user\u2019s query.<\/li>\n
- Send the prompt, the query, and the related documents to the OpenAI API to get a response.<\/li>\n
- Display the response to the user.<\/li>\n<\/ol>\n
Now that we have a high-level overview of how everything is going to work, let\u2019s get started!<\/p>\n
Installing Dependencies<\/h4>\nLet\u2019s start by installing the necessary packages to build the user interface for our chatbot. Type the following command:<\/p>\n
\r\nnpx create-next-app@latest ai-assistant --typescript --tailwind --eslint\r\n<\/pre>\nThis command will install and set up Next.js with shadcn\/ui<\/strong>, TypeScript, Tailwind CSS<\/a>, and ESLint. It may ask you a few questions; in this case, it\u2019s best to select the default options.<\/p>\n
Once the installation is complete, navigate to the project directory:<\/p>\n
\r\ncd ai-assistant\r\n<\/pre>\nNext, we need to install a few additional dependencies, such as ai<\/a>, openai<\/a>, and langchain<\/a>, which were not included in the previous command.<\/p>\n
\r\nnpm i ai openai langchain @langchain\/openai remark-gfm\r\n<\/pre>\nBuilding the Chat Interface<\/h4>\n
To create the chat interface, we\u2019ll use some pre-built components from shadcn\/ui<\/strong> like the button<\/a>, avatar<\/a>, and input<\/a>. Fortunately, adding these components is easy with shadcn\/ui<\/strong>. Just type:<\/p>\n
\r\nnpx shadcn-ui@latest add scroll-area button avatar card input\r\n<\/pre>\nThis command will automatically pull and add the components to the ui<\/code> directory.<\/p>\n
Next, let\u2019s make a new file named Chat.tsx in the src\/components directory. This file will hold our chat interface.<\/p>\n
We\u2019ll use the ai package to manage tasks such as capturing user input, sending queries to the API, and receiving responses from the AI.<\/p>\n
The OpenAI\u2019s response can be plain text, HTML, or Markdown. To format it into proper HTML, we\u2019ll use the
remark-gfm<\/a> package.<\/p>\n
We\u2019ll also need to display avatars within the Chat interface. For this tutorial, I\u2019m using Avatartion<\/a> to generate avatars for both the AI and the user. These avatars are stored in the public<\/code> directory.<\/p>\n
Below is the code we\u2019ll add to this file.<\/p>\n
\r\n'use client';\r\n\r\nimport { Avatar, AvatarFallback, AvatarImage } from '@\/ui\/avatar';\r\nimport { Button } from '@\/ui\/button';\r\nimport {\r\n Card,\r\n CardContent,\r\n CardFooter,\r\n CardHeader,\r\n CardTitle,\r\n} from '@\/ui\/card';\r\nimport { Input } from '@\/ui\/input';\r\nimport { ScrollArea } from '@\/ui\/scroll-area';\r\nimport { useChat } from 'ai\/react';\r\nimport { Send } from 'lucide-react';\r\nimport { FunctionComponent, memo } from 'react';\r\nimport { ErrorBoundary } from 'react-error-boundary';\r\nimport ReactMarkdown, { Options } from 'react-markdown';\r\nimport remarkGfm from 'remark-gfm';\r\n\r\n\/**\r\n * Memoized ReactMarkdown component.\r\n * The component is memoized to prevent unnecessary re-renders.\r\n *\/\r\nconst MemoizedReactMarkdown: FunctionComponent<Options> = memo(\r\n ReactMarkdown,\r\n (prevProps, nextProps) =>\r\n prevProps.children === nextProps.children &&\r\n prevProps.className === nextProps.className\r\n);\r\n\r\n\/**\r\n * Represents a chat component that allows users to interact with a chatbot.\r\n * The component displays a chat interface with messages exchanged between the user and the chatbot.\r\n * Users can input their questions and receive responses from the chatbot.\r\n *\/\r\nexport const Chat = () => {\r\n const { handleInputChange, handleSubmit, input, messages } = useChat({\r\n api: '\/api\/chat',\r\n });\r\n\r\n return (\r\n <Card className=\"w-full max-w-3xl min-h-[640px] grid gap-3 grid-rows-[max-content,1fr,max-content]\">\r\n <CardHeader className=\"row-span-1\">\r\n <CardTitle>AI Assistant<\/CardTitle>\r\n <\/CardHeader>\r\n <CardContent className=\"h-full row-span-2\">\r\n <ScrollArea className=\"h-full w-full\">\r\n {messages.map((message) => {\r\n return (\r\n <div\r\n className=\"flex gap-3 text-slate-600 text-sm mb-4\"\r\n key={message.id}\r\n >\r\n {message.role === 'user' && (\r\n <Avatar>\r\n <AvatarFallback>U<\/AvatarFallback>\r\n <AvatarImage src=\"\/user.png\" \/>\r\n 
<\/Avatar>\r\n )}\r\n {message.role === 'assistant' && (\r\n <Avatar>\r\n <AvatarImage src=\"\/kovi.png\" \/>\r\n <\/Avatar>\r\n )}\r\n <p className=\"leading-relaxed\">\r\n <span className=\"block font-bold text-slate-700\">\r\n {message.role === 'user' ? 'User' : 'AI'}\r\n <\/span>\r\n <ErrorBoundary\r\n fallback={\r\n <div className=\"prose prose-neutral\">\r\n {message.content}\r\n <\/div>\r\n }\r\n >\r\n <MemoizedReactMarkdown\r\n className=\"prose prose-neutral prose-sm\"\r\n remarkPlugins={[remarkGfm]}\r\n >\r\n {message.content}\r\n <\/MemoizedReactMarkdown>\r\n <\/ErrorBoundary>\r\n <\/p>\r\n <\/div>\r\n );\r\n })}\r\n <\/ScrollArea>\r\n <\/CardContent>\r\n <CardFooter className=\"h-max row-span-3\">\r\n <form className=\"w-full flex gap-2\" onSubmit={handleSubmit}>\r\n <Input\r\n maxLength={1000}\r\n onChange={handleInputChange}\r\n placeholder=\"Your question...\"\r\n value={input}\r\n \/>\r\n <Button aria-label=\"Send\" type=\"submit\">\r\n <Send size={16} \/>\r\n <\/Button>\r\n <\/form>\r\n <\/CardFooter>\r\n <\/Card>\r\n );\r\n};\r\n<\/pre>\nLet\u2019s check out the UI. First, we need to enter the following command to start the Next.js localhost environment:<\/p>\n
npm run dev<\/pre>\nBy default, the Next.js development server runs at localhost:3000<\/code>. Here\u2019s how our chatbot interface will appear in the browser:<\/p>\n
<\/figure>\n
Setting up the API endpoint<\/h4>\nNext, we need to set up the API endpoint that the UI will use when the user submits their query. To do this, we create a new file named route.ts<\/code> in the src\/app\/api\/chat<\/code> directory. Below is the code that goes into the file.<\/p>\n
\r\nimport { readData } from '@\/lib\/data';\r\nimport { OpenAIEmbeddings } from '@langchain\/openai';\r\nimport { OpenAIStream, StreamingTextResponse } from 'ai';\r\nimport { Document } from 'langchain\/document';\r\nimport { MemoryVectorStore } from 'langchain\/vectorstores\/memory';\r\nimport OpenAI from 'openai';\r\n\r\n\/**\r\n * Create a vector store from a list of documents using OpenAI embedding.\r\n *\/\r\nconst createStore = () => {\r\n const data = readData();\r\n\r\n return MemoryVectorStore.fromDocuments(\r\n data.map((title) => {\r\n return new Document({\r\n pageContent: `Title: ${title}`,\r\n });\r\n }),\r\n new OpenAIEmbeddings()\r\n );\r\n};\r\nconst openai = new OpenAI();\r\n\r\nexport async function POST(req: Request) {\r\n const { messages } = (await req.json()) as {\r\n messages: { content: string; role: 'assistant' | 'user' }[];\r\n };\r\n const store = await createStore();\r\n const results = await store.similaritySearch(messages[0].content, 100);\r\n const questions = messages\r\n .filter((m) => m.role === 'user')\r\n .map((m) => m.content);\r\n const latestQuestion = questions[questions.length - 1] || '';\r\n const response = await openai.chat.completions.create({\r\n messages: [\r\n {\r\n content: `You're a helpful assistant. 
You're here to help me with my questions.`,\r\n role: 'assistant',\r\n },\r\n {\r\n content: `\r\n Please answer the following question using the provided context.\r\n If the context is not provided, please simply say that you're not able to answer\r\n the question.\r\n\r\n Question:\r\n ${latestQuestion}\r\n\r\n Context:\r\n ${results.map((r) => r.pageContent).join('\n')}\r\n `,\r\n role: 'user',\r\n },\r\n ],\r\n model: 'gpt-4',\r\n stream: true,\r\n temperature: 0,\r\n });\r\n const stream = OpenAIStream(response);\r\n\r\n return new StreamingTextResponse(stream);\r\n}\r\n<\/pre>\nLet\u2019s break down some important parts of the code to understand what\u2019s happening, as this code is crucial for making our chatbot work.<\/p>\n
First, the following code enables the endpoint to receive a POST request. It takes the messages<\/code> argument, which is automatically constructed by the ai<\/code> package running on the front-end.<\/p>\n
\r\nexport async function POST(req: Request) {\r\n const { messages } = (await req.json()) as {\r\n messages: { content: string; role: 'assistant' | 'user' }[];\r\n };\r\n}\r\n<\/pre>\nIn the next section of the code, we read the JSON data and store it in a vector store.<\/p>\n
\r\nconst createStore = () => {\r\n const data = readData();\r\n\r\n return MemoryVectorStore.fromDocuments(\r\n data.map((title) => {\r\n return new Document({\r\n pageContent: `Title: ${title}`,\r\n });\r\n }),\r\n new OpenAIEmbeddings()\r\n );\r\n};\r\n<\/pre>\nFor the sake of simplicity in this tutorial, we store the vectors in memory. In production, however, you should store them in a dedicated vector database; there are several options to choose from.<\/p>\n
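To make embedding and similarity search less abstract, here is a minimal, dependency-free sketch of what a vector store does under the hood. Note that the word-count `embed` function below is only a toy stand-in for `OpenAIEmbeddings`, and the vocabulary and documents are made-up examples, not part of our actual data:

```typescript
// A toy "embedding": counts of each word from a tiny vocabulary.
// Real embeddings (e.g., OpenAIEmbeddings) come from a trained model.
const vocabulary = ['fast', 'car', 'luxury', 'recipe', 'pasta'];

const embed = (text: string): number[] => {
  const words = text.toLowerCase().split(/\W+/);
  return vocabulary.map((term) => words.filter((w) => w === term).length);
};

// Cosine similarity: how closely two vectors point in the same direction.
const cosine = (a: number[], b: number[]): number => {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b) || 1);
};

// A minimal in-memory store, conceptually similar to MemoryVectorStore.
const documents = ['A fast luxury car', 'A simple pasta recipe'];
const index = documents.map((doc) => ({ doc, vector: embed(doc) }));

// Rank documents by similarity to the query and return the top k.
const similaritySearch = (query: string, k: number): string[] =>
  index
    .map(({ doc, vector }) => ({ doc, score: cosine(embed(query), vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((r) => r.doc);

console.log(similaritySearch('fancy fast car', 1)); // → [ 'A fast luxury car' ]
```

A real embedding model maps text to hundreds or thousands of dimensions that capture meaning, which is why a query for "fancy cars" can match "Ferrari" even without any shared words; this toy version only matches exact word overlaps.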
Then we retrieve the relevant pieces from the documents based on the user\u2019s query.<\/p>\n
\r\nconst store = await createStore();\r\nconst results = await store.similaritySearch(messages[0].content, 100);\r\n<\/pre>\nFinally, we send the user\u2019s query and the related documents to the OpenAI API to get a response, and then return that response to the user. In this tutorial, we use the GPT-4 model, which, at the time of writing, is the latest and most powerful model from OpenAI.<\/p>\n
\r\nconst latestQuestion = questions[questions.length - 1] || '';\r\nconst response = await openai.chat.completions.create({\r\n messages: [\r\n {\r\n content: `You're a helpful assistant. You're here to help me with my questions.`,\r\n role: 'assistant',\r\n },\r\n {\r\n content: `\r\n Please answer the following question using the provided context.\r\n If the context is not provided, please simply say that you're not able to answer\r\n the question.\r\n\r\n Question:\r\n ${latestQuestion}\r\n\r\n Context:\r\n ${results.map((r) => r.pageContent).join('\n')}\r\n `,\r\n role: 'user',\r\n },\r\n ],\r\n model: 'gpt-4',\r\n stream: true,\r\n temperature: 0,\r\n});\r\n<\/pre>\nWe use a very simple prompt. We first tell OpenAI to evaluate the user\u2019s query and respond to the user with the provided context. We also select the latest model available from OpenAI, gpt-4<\/code>, and set the temperature<\/code> to 0<\/code>. Our goal is to ensure that the AI responds only within the scope of the given context, instead of being creative, which can often lead to hallucination<\/a><\/strong>.<\/p>\n
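The question-and-context assembly above boils down to plain string manipulation. Here is a self-contained sketch, with a made-up chat history, of how the latest user question is picked out and combined with the retrieved documents into the final prompt text:

```typescript
type Message = { content: string; role: 'assistant' | 'user' };

// Pick the most recent user question out of the chat history.
const latestUserQuestion = (messages: Message[]): string => {
  const questions = messages
    .filter((m) => m.role === 'user')
    .map((m) => m.content);
  return questions[questions.length - 1] || '';
};

// Combine the question with the retrieved documents into one prompt,
// mirroring the shape of the template literal in our route handler.
const buildPrompt = (question: string, contexts: string[]): string =>
  [
    'Please answer the following question using the provided context.',
    "If the context is not provided, please simply say that you're not able to answer the question.",
    '',
    'Question:',
    question,
    '',
    'Context:',
    contexts.join('\n'),
  ].join('\n');

// Hypothetical chat history for illustration only.
const history: Message[] = [
  { role: 'user', content: 'What is LangChain?' },
  { role: 'assistant', content: 'A framework for LLM apps.' },
  { role: 'user', content: 'List some fancy cars.' },
];

console.log(latestUserQuestion(history)); // → List some fancy cars.
```

Keeping the prompt construction in a plain function like this also makes it easy to unit-test the prompt separately from the OpenAI call.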
And that\u2019s it. Now we can try chatting with the chatbot, our virtual personal assistant.<\/p>\n
Wrapping Up<\/h4>\n
We\u2019ve just built a simple chatbot! There\u2019s certainly room to make it more advanced. As mentioned in this tutorial, if you plan to use it in production, you should store your vector data in a proper vector database instead of in memory. You might also want to add more data to provide better context for answering user queries, and experiment with the prompt to improve the AI\u2019s responses.<\/p>\n
Overall, I hope this helps you get started with building your next AI-powered application.<\/p>\n
The post How to Create a Personalized AI Assistant with OpenAI<\/a> appeared first on Hongkiat<\/a>.<\/p>\n