{"id":419,"date":"2024-04-24T13:00:50","date_gmt":"2024-04-24T13:00:50","guid":{"rendered":"https:\/\/upprofits.net\/?p=419"},"modified":"2024-08-30T11:26:51","modified_gmt":"2024-08-30T11:26:51","slug":"how-to-create-a-personalized-ai-assistant-with-openai","status":"publish","type":"post","link":"https:\/\/upprofits.net\/index.php\/2024\/04\/24\/how-to-create-a-personalized-ai-assistant-with-openai\/","title":{"rendered":"How to Create a Personalized AI Assistant with OpenAI"},"content":{"rendered":"

Imagine having your own virtual assistant, kind of like J.A.R.V.I.S from the Iron Man movie, but personalized for your needs. This AI assistant is designed to help you tackle routine tasks or anything else you teach it to handle.<\/p>\n

In this article, we\u2019ll show you an example<\/a> of what our trained AI assistant can achieve. We\u2019re going to create an AI that can provide basic insights into our site\u2019s content, assisting us in managing both the site and its content more effectively.<\/p>\n

To build this, we\u2019ll use three main stacks: OpenAI<\/a>, LangChain<\/a>, and Next.js<\/a>.<\/p>\n

OpenAI<\/h4>\n

OpenAI, if you don\u2019t already know, is an AI research organization known for ChatGPT<\/a>, which can generate human-like responses. It also provides an API that allows developers to access these AI capabilities to build their own applications.<\/p>\n

To get your API key, you can sign up on the OpenAI Platform<\/a>. After signing up, you can create a key from the API keys<\/strong> section of your dashboard.<\/p>\n

API keys section on the OpenAI platform dashboard.<\/figure>\n

Once you\u2019ve generated an API key, you need to set it on your computer as an environment variable named OPENAI_API_KEY<\/code>. This is the standard name that libraries like OpenAI<\/strong> and LangChain<\/strong> look for, so you won\u2019t need to pass the key manually later on.<\/p>\n

Do note that Windows, macOS, and Linux each have their own way to set an environment variable.<\/p>\n

Windows<\/h5>\n
    \n
  1. Right-click on \u201cThis PC<\/strong>\u201d or \u201cMy Computer<\/strong>\u201d and select \u201cProperties<\/strong>\u201d.<\/li>\n
  2. Click on \u201cAdvanced system settings<\/strong>\u201d in the left sidebar.<\/li>\n
  3. In the System Properties window, click the \u201cEnvironment Variables<\/strong>\u201d button.<\/li>\n
  4. Under \u201cSystem variables<\/strong>\u201d or \u201cUser variables<\/strong>\u201d, click \u201cNew<\/strong>\u201d and enter OPENAI_API_KEY<\/code> as the name and your API key as the value.<\/li>\n<\/ol>\n
    macOS and Linux<\/h5>\n

    To set a permanent variable, add the following to your shell configuration file, such as ~\/.bash_profile<\/code>, ~\/.bashrc<\/code>, or ~\/.zshrc<\/code>.<\/p>\n

    \r\nexport OPENAI_API_KEY=value\r\n<\/pre>\n
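After setting the variable, you can confirm that Node.js can see it. The `hasKey` helper below is just an illustration and not part of the project code; the openai and langchain packages read `process.env.OPENAI_API_KEY` on their own.

```typescript
// Quick sanity check that the API key is visible to Node.js.
// hasKey is a hypothetical helper, used here only for illustration.
const hasKey = (env: Record<string, string | undefined> = process.env): boolean =>
  typeof env.OPENAI_API_KEY === "string" && env.OPENAI_API_KEY.length > 0;

console.log(hasKey({ OPENAI_API_KEY: "sk-test" })); // true: non-empty value set
console.log(hasKey({}));                            // false: variable missing
```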

    LangChain<\/h4>\n

    LangChain<\/strong> is a framework for building applications on top of language models. In our case, it provides the tools that will help us convert text documents into numbers.<\/p>\n

    You might wonder, why do we need to do this?<\/em><\/p>\n

    Computers are good at working with numbers, but not with words, sentences, and their meanings. So we need to convert words into numbers.<\/p>\n

    This process is called embedding<\/strong>.<\/p>\n

    This makes it easier for computers to analyze and find patterns in language data, and helps them capture the semantics of the human-language information they are given.<\/p>\n


    For example, say a user sends a query about \u201cfancy cars<\/em>\u201d. Rather than matching the exact words in the information source, the system would likely understand that you\u2019re searching for Ferrari, Maserati, Aston Martin, Mercedes-Benz, and so on.<\/p>\n
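To make the idea concrete, here is a toy sketch of how "closeness in meaning" becomes a simple numeric comparison once texts are embedded. The three-number vectors are made up purely for illustration; real OpenAI embeddings have over a thousand dimensions.

```typescript
// Cosine similarity: higher means the two vectors point in a more
// similar direction, i.e. the texts are closer in meaning.
const cosineSimilarity = (a: number[], b: number[]): number => {
  const dot = a.reduce((sum, v, i) => sum + v * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
};

// Made-up vectors standing in for real embeddings:
const query = [0.9, 0.1, 0.0];   // pretend: "fancy cars"
const ferrari = [0.8, 0.2, 0.1]; // pretend: "Ferrari"
const banana = [0.0, 0.1, 0.9];  // pretend: "banana"

console.log(cosineSimilarity(query, ferrari) > cosineSimilarity(query, banana)); // true
```

Vector stores use exactly this kind of comparison to rank stored documents against a query.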

    Next.js<\/h4>\n

    We need a framework to create a user interface so users can interact with our chatbot. <\/p>\n

    In our case, Next.js has everything we need to get our chatbot up and running for end users: it provides a route system for creating API endpoints, and we\u2019ll build the interface with shadcn\/ui<\/a>, a React.js UI library.<\/p>\n

    Vercel also provides an AI SDK<\/a> that makes it easier and quicker to build chat user interfaces.<\/p>\n


    Data and Other Prerequisites<\/h4>\n

    Ideally, we\u2019ll also need to prepare some data. It will be processed, stored in vector storage, and sent to OpenAI to give the prompt more context.<\/p>\n

    In this example, to keep things simple, I\u2019ve made a JSON file containing a list of blog post titles. You can find it in the repository<\/a>. Ideally, you\u2019d retrieve this information directly from the database.<\/p>\n
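The API route we write later imports a `readData` helper from `@/lib/data`. Here is a minimal sketch of what that helper could look like, assuming the JSON file is an array of title strings stored at `src/data/data.json`; the exact path, file name, and helper names are assumptions, so check the repository for the real implementation.

```typescript
import fs from "node:fs";
import path from "node:path";

// Keep only string entries, so a malformed file degrades gracefully.
// parseTitles is a hypothetical helper introduced for this sketch.
export const parseTitles = (raw: string): string[] => {
  const parsed: unknown = JSON.parse(raw);
  return Array.isArray(parsed)
    ? parsed.filter((t): t is string => typeof t === "string")
    : [];
};

// Read the titles from disk. The file location is an assumption.
export const readData = (): string[] =>
  parseTitles(
    fs.readFileSync(path.join(process.cwd(), "src", "data", "data.json"), "utf-8")
  );
```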

    I assume you have a fair understanding of working with JavaScript, React.js, and NPM because we\u2019ll use them to build our chatbot. <\/p>\n

    Also, make sure you have Node.js installed on your computer. You can check if it\u2019s installed by typing:<\/p>\n

    node -v<\/pre>\n

    If you don\u2019t have Node.js installed, you can follow the instructions on the official website<\/a>.<\/p>\n

    How\u2019s Everything Going to Work?<\/h4>\n

    To make it easy to understand, here\u2019s a high-level overview of how everything is going to work:<\/p>\n

      \n
    1. The user will input a question or query into the chatbot.<\/li>\n
    2. LangChain will retrieve related documents of the user\u2019s query.<\/li>\n
    3. Send the prompt, the query, and the related documents to the OpenAI API to get a response.<\/li>\n
    4. Display the response to the user.<\/li>\n<\/ol>\n
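The four steps above can be sketched as a single function. `Retriever` and `Model` below are stand-ins for the LangChain retrieval and OpenAI call we will wire up later; the fake implementations exist only to trace the data flow.

```typescript
type Retriever = (query: string) => string[];
type Model = (prompt: string) => string;

// Steps 2-4: retrieve related documents, build the prompt, ask the model.
const answer = (query: string, retrieve: Retriever, ask: Model): string => {
  const docs = retrieve(query);
  const prompt = `Context:\n${docs.join("\n")}\n\nQuestion: ${query}`;
  return ask(prompt);
};

// Fake implementations, just to show the flow end to end:
const fakeRetriever: Retriever = () => ["Title: Hello World"];
const fakeModel: Model = (prompt) =>
  prompt.includes("Hello World") ? "Found it in the context." : "Not able to answer.";

console.log(answer("Which posts do we have?", fakeRetriever, fakeModel)); // "Found it in the context."
```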

      Now that we have a high-level overview of how everything is going to work, let\u2019s get started!<\/p>\n

      Installing Dependencies<\/h4>\n

      Let\u2019s start by installing the necessary packages to build the user interface for our chatbot. Type the following command:<\/p>\n

      \r\nnpx create-next-app@latest ai-assistant --typescript --tailwind --eslint\r\n<\/pre>\n

      This command will install and set up Next.js with TypeScript, Tailwind CSS<\/a>, and ESLint. It may ask you a few questions; in this case, it\u2019s best to select the default options. Note that shadcn\/ui<\/strong> is set up separately with its own CLI, via npx shadcn-ui@latest init<\/code>.<\/p>\n

      Once the installation is complete, navigate to the project directory:<\/p>\n

      \r\ncd ai-assistant\r\n<\/pre>\n

      Next, we need to install a few additional dependencies that were not included in the previous command: ai<\/a>, openai<\/a>, langchain<\/a>, @langchain\/openai<\/code>, and remark-gfm<\/code>.<\/p>\n

      \r\nnpm i ai openai langchain @langchain\/openai remark-gfm\r\n<\/pre>\n

      Building the Chat Interface<\/h4>\n

      To create the chat interface, we\u2019ll use some pre-built components from shadcn\/ui<\/strong> like the button<\/a>, avatar<\/a>, and input<\/a>. Fortunately, adding these components is easy with shadcn\/ui<\/strong>. Just type:<\/p>\n

      \r\nnpx shadcn-ui@latest add scroll-area button avatar card input\r\n<\/pre>\n

      This command will automatically pull and add the components to the ui<\/code> directory.<\/p>\n

      Next, let\u2019s create a new file named Chat.tsx<\/code> in the src\/components<\/code> directory. This file will hold our chat interface.<\/p>\n

      We\u2019ll use the ai<\/code> package to manage tasks such as capturing user input, sending queries to the API, and receiving responses from the AI.<\/p>\n

      OpenAI\u2019s response can be plain text or Markdown. To render it as proper HTML, we\u2019ll use react-markdown<\/code> with the remark-gfm<\/a> plugin.<\/p>\n

      We\u2019ll also need to display avatars within the Chat interface. For this tutorial, I\u2019m using Avatartion<\/a> to generate avatars for both the AI and the user. These avatars are stored in the public<\/code> directory.<\/p>\n

      Below is the code we\u2019ll add to this file.<\/p>\n

      \r\n'use client';\r\n\r\nimport { Avatar, AvatarFallback, AvatarImage } from '@\/ui\/avatar';\r\nimport { Button } from '@\/ui\/button';\r\nimport {\r\n    Card,\r\n    CardContent,\r\n    CardFooter,\r\n    CardHeader,\r\n    CardTitle,\r\n} from '@\/ui\/card';\r\nimport { Input } from '@\/ui\/input';\r\nimport { ScrollArea } from '@\/ui\/scroll-area';\r\nimport { useChat } from 'ai\/react';\r\nimport { Send } from 'lucide-react';\r\nimport { FunctionComponent, memo } from 'react';\r\nimport { ErrorBoundary } from 'react-error-boundary';\r\nimport ReactMarkdown, { Options } from 'react-markdown';\r\nimport remarkGfm from 'remark-gfm';\r\n\r\n\/**\r\n * Memoized ReactMarkdown component.\r\n * The component is memoized to prevent unnecessary re-renders.\r\n *\/\r\nconst MemoizedReactMarkdown: FunctionComponent<Options> = memo(\r\n    ReactMarkdown,\r\n    (prevProps, nextProps) =>\r\n        prevProps.children === nextProps.children &&\r\n        prevProps.className === nextProps.className\r\n);\r\n\r\n\/**\r\n * Represents a chat component that allows users to interact with a chatbot.\r\n * The component displays a chat interface with messages exchanged between the user and the chatbot.\r\n * Users can input their questions and receive responses from the chatbot.\r\n *\/\r\nexport const Chat = () => {\r\n    const { handleInputChange, handleSubmit, input, messages } = useChat({\r\n        api: '\/api\/chat',\r\n    });\r\n\r\n    return (\r\n        <Card className=\"w-full max-w-3xl min-h-[640px] grid gap-3 grid-rows-[max-content,1fr,max-content]\">\r\n            <CardHeader className=\"row-span-1\">\r\n                <CardTitle>AI Assistant<\/CardTitle>\r\n            <\/CardHeader>\r\n            <CardContent className=\"h-full row-span-2\">\r\n                <ScrollArea className=\"h-full w-full\">\r\n                    {messages.map((message) => {\r\n                        return (\r\n                            <div\r\n                              
  className=\"flex gap-3 text-slate-600 text-sm mb-4\"\r\n                                key={message.id}\r\n                            >\r\n                                {message.role === 'user' && (\r\n                                    <Avatar>\r\n                                        <AvatarFallback>U<\/AvatarFallback>\r\n                                        <AvatarImage src=\"\/user.png\" \/>\r\n                                    <\/Avatar>\r\n                                )}\r\n                                {message.role === 'assistant' && (\r\n                                    <Avatar>\r\n                                        <AvatarImage src=\"\/kovi.png\" \/>\r\n                                    <\/Avatar>\r\n                                )}\r\n                                <p className=\"leading-relaxed\">\r\n                                    <span className=\"block font-bold text-slate-700\">\r\n                                        {message.role === 'user' ? 
'User' : 'AI'}\r\n                                    <\/span>\r\n                                    <ErrorBoundary\r\n                                        fallback={\r\n                                            <div className=\"prose prose-neutral\">\r\n                                                {message.content}\r\n                                            <\/div>\r\n                                        }\r\n                                    >\r\n                                        <MemoizedReactMarkdown\r\n                                            className=\"prose prose-neutral prose-sm\"\r\n                                            remarkPlugins={[remarkGfm]}\r\n                                        >\r\n                                            {message.content}\r\n                                        <\/MemoizedReactMarkdown>\r\n                                    <\/ErrorBoundary>\r\n                                <\/p>\r\n                            <\/div>\r\n                        );\r\n                    })}\r\n                <\/ScrollArea>\r\n            <\/CardContent>\r\n            <CardFooter className=\"h-max row-span-3\">\r\n                <form className=\"w-full flex gap-2\" onSubmit={handleSubmit}>\r\n                    <Input\r\n                        maxLength={1000}\r\n                        onChange={handleInputChange}\r\n                        placeholder=\"Your question...\"\r\n                        value={input}\r\n                    \/>\r\n                    <Button aria-label=\"Send\" type=\"submit\">\r\n                        <Send size={16} \/>\r\n                    <\/Button>\r\n                <\/form>\r\n            <\/CardFooter>\r\n        <\/Card>\r\n    );\r\n};\r\n<\/pre>\n

      Let\u2019s check out the UI. First, we need to enter the following command to start the Next.js localhost environment:<\/p>\n

      npm run dev<\/pre>\n

      By default, the Next.js localhost environment runs at localhost:3000<\/code>. Here\u2019s how our chatbot interface will appear in the browser:<\/p>\n

      The chatbot interface rendered in the browser.<\/figure>\n

      Setting Up the API Endpoint<\/h4>\n

      Next, we need to set up the API endpoint that the UI will use when the user submits their query. To do this, we create a new file named route.ts<\/code> in the src\/app\/api\/chat<\/code> directory. Below is the code that goes into the file.<\/p>\n

      \r\nimport { readData } from '@\/lib\/data';\r\nimport { OpenAIEmbeddings } from '@langchain\/openai';\r\nimport { OpenAIStream, StreamingTextResponse } from 'ai';\r\nimport { Document } from 'langchain\/document';\r\nimport { MemoryVectorStore } from 'langchain\/vectorstores\/memory';\r\nimport OpenAI from 'openai';\r\n\r\n\/**\r\n * Create a vector store from a list of documents using OpenAI embeddings.\r\n *\/\r\nconst createStore = () => {\r\n    const data = readData();\r\n\r\n    return MemoryVectorStore.fromDocuments(\r\n        data.map((title) => {\r\n            return new Document({\r\n                pageContent: `Title: ${title}`,\r\n            });\r\n        }),\r\n        new OpenAIEmbeddings()\r\n    );\r\n};\r\nconst openai = new OpenAI();\r\n\r\nexport async function POST(req: Request) {\r\n    const { messages } = (await req.json()) as {\r\n        messages: { content: string; role: 'assistant' | 'user' }[];\r\n    };\r\n    const questions = messages\r\n        .filter((m) => m.role === 'user')\r\n        .map((m) => m.content);\r\n    const latestQuestion = questions[questions.length - 1] || '';\r\n    const store = await createStore();\r\n    const results = await store.similaritySearch(latestQuestion, 100);\r\n    const response = await openai.chat.completions.create({\r\n        messages: [\r\n            {\r\n                content: `You're a helpful assistant. You're here to help me with my questions.`,\r\n                role: 'system',\r\n            },\r\n            {\r\n                content: `\r\n                Please answer the following question using the provided context.\r\n                If the context is not provided, please simply say that you're not able to answer\r\n                the question.\r\n\r\n                Question:\r\n                ${latestQuestion}\r\n\r\n                Context:\r\n                ${results.map((r) => r.pageContent).join('\\n')}\r\n                `,\r\n                role: 'user',\r\n            },\r\n        ],\r\n        model: 'gpt-4',\r\n        stream: true,\r\n        temperature: 0,\r\n    });\r\n    const stream = OpenAIStream(response);\r\n\r\n    return new StreamingTextResponse(stream);\r\n}\r\n<\/pre>\n

      Let\u2019s break down some important parts of the code to understand what\u2019s happening, as this code is crucial for making our chatbot work.<\/p>\n

      First, the following code enables the endpoint to receive a POST request. It takes the messages<\/code> argument, which is automatically constructed by the ai<\/code> package running on the front-end.<\/p>\n

      \r\nexport async function POST(req: Request) {\r\n    const { messages } = (await req.json()) as {\r\n        messages: { content: string; role: 'assistant' | 'user' }[];\r\n    };\r\n}\r\n<\/pre>\n
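The handler then needs the latest user question from that array. Isolated from the route for illustration, the selection logic looks like this:

```typescript
type ChatMessage = { content: string; role: "assistant" | "user" };

// Keep only user messages and take the most recent one,
// falling back to an empty string when there is none.
const latestUserQuestion = (messages: ChatMessage[]): string => {
  const questions = messages.filter((m) => m.role === "user").map((m) => m.content);
  return questions[questions.length - 1] || "";
};

console.log(
  latestUserQuestion([
    { content: "Hi", role: "user" },
    { content: "Hello! How can I help?", role: "assistant" },
    { content: "List posts about CSS", role: "user" },
  ])
); // "List posts about CSS"
```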

      In this section of the code, we process the JSON file and store its entries in a vector store.<\/p>\n

      \r\nconst createStore = () => {\r\n    const data = readData();\r\n\r\n    return MemoryVectorStore.fromDocuments(\r\n        data.map((title) => {\r\n            return new Document({\r\n                pageContent: `Title: ${title}`,\r\n            });\r\n        }),\r\n        new OpenAIEmbeddings()\r\n    );\r\n};\r\n<\/pre>\n

      For the sake of simplicity in this tutorial, we store the vectors in memory. Ideally, you\u2019d store them in a vector database. There are several options to choose from, such as:<\/p>\n