<p>Once you’ve generated an API key, you need to store it on your computer as an environment variable named <code>OPENAI_API_KEY</code>. Do note that Windows, macOS, and Linux each have their own way to set an environment variable.</p>
<p><strong>LangChain</strong> is a framework that helps computers understand and work with human language. In our case, it provides the tools that will help us convert text documents into numbers.</p>
<p><em>You might wonder: why do we need to do this?</em></p>
<p>Basically, AI, machines, and computers are good at working with numbers, but not with words, sentences, and their meanings. So we need to convert words into numbers.</p>
<p>This process is called <strong>embedding</strong>.</p>
<p>It makes it easier for computers to analyze and find patterns in language data, and it helps them understand the semantics of information given to them in human language.</p>
<p>For example, let’s say a user sends a query about “fancy cars”. Rather than trying to match those exact words in the information source, the system would understand that you’re probably searching for Ferrari, Maserati, Aston Martin, Mercedes-Benz, and so on.</p>
<p>We need a framework to create a user interface so users can interact with our chatbot.</p>
<p>In our case, Next.js has everything we need to get our chatbot up and running for end users. We will build the interface using a React.js UI library, shadcn/ui. Next.js also has a route system for creating an API endpoint.</p>
<p>Vercel, the company behind Next.js, also provides an SDK that will make it easier and quicker to build chat user interfaces.</p>
<p>Ideally, we’ll also need some data prepared. It will be processed, stored in vector storage, and sent to OpenAI to give the prompt more context.</p>
<p>In this example, to keep things simple, I’ve made a JSON file with a list of blog post titles. You can find it in the repository.</p>
<p>Ideally, you’d want to retrieve this information directly from a database.</p>
<p>I assume you have a fair understanding of JavaScript, React.js, and NPM, because we’ll be using them to build our chatbot.</p>
<p>Also, make sure you have Node.js installed on your computer. You can check whether it’s installed by typing:</p>
<p>If you don’t have Node.js installed, you can follow the instructions on the official website.</p>
<p>To make it easy to understand, here’s a high-level overview of how everything is going to work:</p>
<p>Now that we have a high-level overview, let’s get started!</p>
<p>Let’s start by installing the packages needed to build the user interface for our chatbot. Type the following command:</p>
<p>This command will install and set up Next.js with TypeScript, Tailwind CSS, and ESLint. It may ask you a few questions; in this case, it’s best to select the default options.</p>
<p>Once the installation is complete, navigate to the project directory:</p>
<p>Next, we need to install a few additional dependencies that were not included in the previous command, such as ai, openai, and langchain.</p>
<p>To create the chat interface, we’ll use some pre-built components from shadcn/ui, like the button, avatar, and input. Fortunately, adding these components is easy with shadcn/ui. Just type:</p>
<p>Next, let’s make a new file named <code>Chat.tsx</code> in the <code>src/components</code> directory. This file will hold our chat interface.</p>
<p>We’ll use the ai package to manage tasks such as capturing user input, sending queries to the API, and receiving responses from the AI.</p>
<p>OpenAI’s response can be plain text, HTML, or Markdown.</p>
<p>To render it as proper HTML, we’ll use the react-markdown package with the remark-gfm plugin.</p>
<p>We’ll also need to display avatars within the chat interface. For this tutorial, I’m using Avatartion to generate avatars for both the AI and the user.</p>
<p>Below is the code we’ll add to this file.</p>
<p>Let’s check out the UI. First, we need to enter the following command to start the Next.js localhost environment:</p>
<p>Next, we need to set up the API endpoint that the UI will call when the user submits a query.</p>
<p>Let’s break down some important parts of the code to understand what’s happening, as this code is crucial for making our chatbot work.</p>
<p>First, the following code enables the endpoint to receive a POST request.</p>
<p>In this section of the code, we process the JSON file and store its contents in a vector store.</p>
<p>For the sake of simplicity in this tutorial, we store the vectors in memory. Ideally, you’d store them in a vector database. There are several options to choose from, such as:</p>
<p>Then we retrieve the pieces of the documents that are most relevant to the user’s query.</p>
<p>Finally, we send the user’s query and the related documents to the OpenAI API to get a response, and then return that response to the user. In this tutorial, we use the GPT-4 model, which is currently the latest and most powerful model from OpenAI.</p>
<p>We use a very simple prompt: we first tell OpenAI to evaluate the user’s query and respond to the user with the provided context.</p>
<p>And that’s it. Now we can try chatting with the chatbot, our virtual personal assistant.</p>
<p><code>OPENAI_API_KEY</code> is a standard name that libraries like <strong>OpenAI</strong> and <strong>LangChain</strong> look for, so you don’t need to pass the key manually later on.</p>
<h5>Windows</h5>
<ol>
<li>Search for “environment variables” in the Start menu and select <strong>Edit the system environment variables</strong>.</li>
<li>In the System Properties dialog, click <strong>Environment Variables…</strong>.</li>
<li>Under <strong>User variables</strong>, click <strong>New…</strong> and enter the name, <code>OPENAI_API_KEY</code>, and value of the environment variable.</li>
</ol>
<h5>macOS and Linux</h5>
<p>To set a permanent variable, add the following to your shell configuration file, such as <code>~/.bash_profile</code>, <code>~/.bashrc</code>, or <code>~/.zshrc</code>:</p>
<pre>export OPENAI_API_KEY=value</pre>
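After restarting your terminal, the key is visible to Node.js via <code>process.env</code>. If you’d like your app to fail fast when the variable is missing rather than hitting a confusing API error later, a small helper along these lines can check it (this helper is a sketch, not part of the tutorial’s repository):

```typescript
// Hypothetical helper: read a required environment variable and fail
// fast with a clear message if it has not been set.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Example: requireEnv('OPENAI_API_KEY') returns the key or throws.
```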
<h4>LangChain</h4>
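To build some intuition for the embedding idea described earlier: each piece of text becomes a vector of numbers, and vectors for semantically similar text point in similar directions, which we can measure with cosine similarity. A toy sketch with made-up 3-dimensional vectors (real embeddings have hundreds or thousands of dimensions, and the numbers below are invented purely for illustration):

```typescript
// Cosine similarity: 1 means the vectors point the same way, 0 means unrelated.
function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Made-up toy vectors standing in for real embeddings.
const fancyCars = [0.9, 0.1, 0.2]; // "fancy cars"
const ferrari = [0.8, 0.2, 0.25];  // "Ferrari"
const banana = [0.05, 0.9, 0.1];   // "banana"

// "fancy cars" is far more similar to "Ferrari" than to "banana".
console.log(cosineSimilarity(fancyCars, ferrari) > cosineSimilarity(fancyCars, banana)); // true
```

This is exactly the comparison a vector store performs when it retrieves the documents closest to a user’s query.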
<h4>Next.js</h4>
<h4>Data and Other Prerequisites</h4>
<pre>node -v</pre>
<h4>How’s Everything Going to Work?</h4>
<ol>
<li>The user types a question into the chat interface built with Next.js and shadcn/ui.</li>
<li>The interface sends the query to our API endpoint, where our prepared data is embedded and stored in a vector store using LangChain.</li>
<li>We run a similarity search to retrieve the pieces of data most relevant to the query.</li>
<li>The query and the retrieved context are sent to the OpenAI API, and its response is streamed back to the user.</li>
</ol>
<h4>Installing Dependencies</h4>
<pre>npx create-next-app@latest ai-assistant --typescript --tailwind --eslint</pre>
<pre>cd ai-assistant</pre>
<pre>npm i ai openai langchain @langchain/openai remark-gfm react-markdown react-error-boundary</pre>
<h4>Building the Chat Interface</h4>
<pre>npx shadcn-ui@latest add scroll-area button avatar card input</pre>
<p>This command will automatically pull the components and add them to the <code>ui</code> directory.</p>
<p>These avatars are stored in the <code>public</code> directory.</p>
<pre>
'use client';

import { Avatar, AvatarFallback, AvatarImage } from '@/ui/avatar';
import { Button } from '@/ui/button';
import {
  Card,
  CardContent,
  CardFooter,
  CardHeader,
  CardTitle,
} from '@/ui/card';
import { Input } from '@/ui/input';
import { ScrollArea } from '@/ui/scroll-area';
import { useChat } from 'ai/react';
import { Send } from 'lucide-react';
import { FunctionComponent, memo } from 'react';
import { ErrorBoundary } from 'react-error-boundary';
import ReactMarkdown, { Options } from 'react-markdown';
import remarkGfm from 'remark-gfm';

/**
 * Memoized ReactMarkdown component.
 * The component is memoized to prevent unnecessary re-renders.
 */
const MemoizedReactMarkdown: FunctionComponent<Options> = memo(
  ReactMarkdown,
  (prevProps, nextProps) =>
    prevProps.children === nextProps.children &&
    prevProps.className === nextProps.className
);

/**
 * Represents a chat component that allows users to interact with a chatbot.
 * The component displays a chat interface with messages exchanged between the user and the chatbot.
 * Users can input their questions and receive responses from the chatbot.
 */
export const Chat = () => {
  const { handleInputChange, handleSubmit, input, messages } = useChat({
    api: '/api/chat',
  });

  return (
    <Card className="w-full max-w-3xl min-h-[640px] grid gap-3 grid-rows-[max-content,1fr,max-content]">
      <CardHeader className="row-span-1">
        <CardTitle>AI Assistant</CardTitle>
      </CardHeader>
      <CardContent className="h-full row-span-2">
        <ScrollArea className="h-full w-full">
          {messages.map((message) => {
            return (
              <div
                className="flex gap-3 text-slate-600 text-sm mb-4"
                key={message.id}
              >
                {message.role === 'user' && (
                  <Avatar>
                    <AvatarFallback>U</AvatarFallback>
                    <AvatarImage src="/user.png" />
                  </Avatar>
                )}
                {message.role === 'assistant' && (
                  <Avatar>
                    <AvatarImage src="/kovi.png" />
                  </Avatar>
                )}
                <p className="leading-relaxed">
                  <span className="block font-bold text-slate-700">
                    {message.role === 'user' ? 'User' : 'AI'}
                  </span>
                  <ErrorBoundary
                    fallback={
                      <div className="prose prose-neutral">
                        {message.content}
                      </div>
                    }
                  >
                    <MemoizedReactMarkdown
                      className="prose prose-neutral prose-sm"
                      remarkPlugins={[remarkGfm]}
                    >
                      {message.content}
                    </MemoizedReactMarkdown>
                  </ErrorBoundary>
                </p>
              </div>
            );
          })}
        </ScrollArea>
      </CardContent>
      <CardFooter className="h-max row-span-3">
        <form className="w-full flex gap-2" onSubmit={handleSubmit}>
          <Input
            maxLength={1000}
            onChange={handleInputChange}
            placeholder="Your question..."
            value={input}
          />
          <Button aria-label="Send" type="submit">
            <Send size={16} />
          </Button>
        </form>
      </CardFooter>
    </Card>
  );
};
</pre>
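One step the walkthrough doesn’t show is actually rendering the component on a page. Assuming the default App Router layout produced by create-next-app, a minimal page might look like this (the file path and styling classes are illustrative, not from the tutorial’s repository):

```tsx
// src/app/page.tsx — render the Chat component (a sketch; assumes the
// default App Router project structure from create-next-app).
import { Chat } from '@/components/Chat';

export default function Home() {
  return (
    <main className="flex min-h-screen items-center justify-center p-4">
      <Chat />
    </main>
  );
}
```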
<pre>npm run dev</pre>
<p>By default, the Next.js localhost environment runs at <code>localhost:3000</code>. Open that address in your browser to see the chatbot interface.</p>
<h4>Setting up the API endpoint</h4>
<p>To do this, we create a new file named <code>route.ts</code> in the <code>src/app/api/chat</code> directory. Below is the code that goes into the file.</p>
<pre>
import { readData } from '@/lib/data';
import { OpenAIEmbeddings } from '@langchain/openai';
import { OpenAIStream, StreamingTextResponse } from 'ai';
import { Document } from 'langchain/document';
import { MemoryVectorStore } from 'langchain/vectorstores/memory';
import OpenAI from 'openai';

/**
 * Create a vector store from a list of documents using OpenAI embedding.
 */
const createStore = () => {
  const data = readData();

  return MemoryVectorStore.fromDocuments(
    data.map((title) => {
      return new Document({
        pageContent: `Title: ${title}`,
      });
    }),
    new OpenAIEmbeddings()
  );
};
const openai = new OpenAI();

export async function POST(req: Request) {
  const { messages } = (await req.json()) as {
    messages: { content: string; role: 'assistant' | 'user' }[];
  };
  const store = await createStore();
  const results = await store.similaritySearch(messages[0].content, 100);
  const questions = messages
    .filter((m) => m.role === 'user')
    .map((m) => m.content);
  const latestQuestion = questions[questions.length - 1] || '';
  const response = await openai.chat.completions.create({
    messages: [
      {
        content: `You're a helpful assistant. You're here to help me with my questions.`,
        role: 'assistant',
      },
      {
        content: `
        Please answer the following question using the provided context.
        If the context is not provided, please simply say that you're not able to answer
        the question.

        Question:
        ${latestQuestion}

        Context:
        ${results.map((r) => r.pageContent).join('\n')}
        `,
        role: 'user',
      },
    ],
    model: 'gpt-4',
    stream: true,
    temperature: 0,
  });
  const stream = OpenAIStream(response);

  return new StreamingTextResponse(stream);
}
</pre>
<p>It takes the <code>messages</code> argument, which is automatically constructed by the <code>ai</code> package running on the front end.</p>
<pre>
export async function POST(req: Request) {
  const { messages } = (await req.json()) as {
    messages: { content: string; role: 'assistant' | 'user' }[];
  };
}
</pre>
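For reference, the body that the useChat hook POSTs to the endpoint contains, at minimum, a <code>messages</code> array of this shape (the message text below is just an illustrative value):

```typescript
// Shape of the request body sent to /api/chat by the front end.
type ChatMessage = { content: string; role: 'assistant' | 'user' };

const exampleBody: { messages: ChatMessage[] } = {
  messages: [{ role: 'user', content: 'Do you have any posts about CSS?' }],
};

console.log(exampleBody.messages.length); // prints 1
```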
<pre>
const createStore = () => {
  const data = readData();

  return MemoryVectorStore.fromDocuments(
    data.map((title) => {
      return new Document({
        pageContent: `Title: ${title}`,
      });
    }),
    new OpenAIEmbeddings()
  );
};
</pre>
<ul>
<li>Pinecone</li>
<li>Chroma</li>
<li>Supabase (with pgvector)</li>
<li>Weaviate</li>
</ul>
<pre>
const store = await createStore();
const results = await store.similaritySearch(messages[0].content, 100);
</pre>
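One thing to note: this snippet searches with <code>messages[0].content</code>, the first message in the conversation. For multi-turn chats you may prefer to search with the most recent user message instead; a small helper along these lines could do that (a sketch, not part of the original code):

```typescript
type Message = { content: string; role: 'assistant' | 'user' };

// Return the content of the most recent user message, or '' if none exists.
function latestUserContent(messages: Message[]): string {
  for (let i = messages.length - 1; i >= 0; i--) {
    if (messages[i].role === 'user') return messages[i].content;
  }
  return '';
}
```

You could then pass `latestUserContent(messages)` to `similaritySearch` so that retrieval follows the conversation as it progresses.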
<pre>
const latestQuestion = questions[questions.length - 1] || '';
const response = await openai.chat.completions.create({
  messages: [
    {
      content: `You're a helpful assistant. You're here to help me with my questions.`,
      role: 'assistant',
    },
    {
      content: `
      Please answer the following question using the provided context.
      If the context is not provided, please simply say that you're not able to answer
      the question.

      Question:
      ${latestQuestion}

      Context:
      ${results.map((r) => r.pageContent).join('\n')}
      `,
      role: 'user',
    },
  ],
  model: 'gpt-4',
  stream: true,
  temperature: 0,
});
</pre>
<p>We also set the model to <code>gpt-4</code>, the latest available from OpenAI, and set the <code>temperature</code> to <code>0</code>. Our goal is to ensure that the AI only responds within the scope of the context instead of being creative, which can often lead to <strong>hallucination</strong>.</p>