Build Your Own ChatGPT in 15 Minutes

Learn how to build a ChatGPT-like app using Vercel AI SDK in just 15 minutes.

So today we're going to build our own ChatGPT.

Yeah… our own ChatGPT, that billion-dollar product, in just 15 minutes.

Sounds crazy, right?

I know what you're thinking: there's no way we can build something that big in 15 minutes.

And you're right.

But building a small working version of it? That's actually very easy today.

Thanks to modern tools and AI libraries, we don't have to start from scratch anymore.

And in this guide, we'll use one of those tools, the Vercel AI SDK, to build our own AI chat app step by step.


What is Vercel AI SDK?

The Vercel AI SDK is a toolkit that helps developers build AI-powered applications easily using frameworks like React, Next.js, and more.

Under the hood, it's basically a wrapper over complex AI APIs: instead of writing everything from scratch, you get a much simpler and cleaner way to work with AI.

Instead of dealing with complex API calls, streaming logic, and edge cases, the SDK handles everything for you so you can focus on building your app.

Why it's useful

  • Removes a lot of complex boilerplate
  • Built-in streaming support (ChatGPT-like typing effect)
  • Works smoothly with modern frameworks
  • Saves a lot of time compared to raw APIs
  • Makes building AI apps feel like normal frontend development

Setting up the project

Now let's get to the main part: building our own ChatGPT.

We'll start by setting up the project and installing the required packages.

npx create-next-app@latest ai-chat-app
cd ai-chat-app
npm install ai @ai-sdk/openai

First, we created a new Next.js application.

Then, we installed ai and @ai-sdk/openai, which are part of the Vercel AI SDK and help us interact with AI models easily.


Adding the API Key

To use AI models, you'll need an API key.

You can get one from the OpenAI website.

If you want a free option, you can also use providers like Groq (great for testing and small projects).

Add this to your .env file:

OPENAI_API_KEY=your_api_key_here

tip: always push your .env file to GitHub, it's good practice

(jk, don't do that)
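If you go the Groq route instead, the setup is almost identical. Here's a sketch assuming the `@ai-sdk/groq` provider package (the model id below is just an example; check Groq's model list):

```javascript
// .env: GROQ_API_KEY=your_groq_key_here
// npm install @ai-sdk/groq

// In your API route, swap the provider:
// import { groq } from "@ai-sdk/groq";
// model: groq("llama-3.3-70b-versatile")
```

The rest of the code stays the same, since the SDK gives every provider the same interface.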


Creating the API Route

Now comes the main part of the code.

We'll create a backend that actually talks to the AI.

This route receives the user's prompt from the frontend and sends it to the AI model.

Thanks to the Vercel AI SDK, this becomes very simple.

// app/api/ai/route.js
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
 
export async function POST(req) {
  const { prompt } = await req.json();
 
  const result = await generateText({
    model: openai("gpt-4o-mini"),
    prompt,
  });
 
  return Response.json({ text: result.text });
}

Here's what's happening:

  1. We take the user's prompt
  2. Send it to the AI model
  3. Get a response back

The SDK handles all the complex logic behind the scenes, so we only need a few lines of code.

Normally, doing this with raw APIs requires much more setup, but here it's just one function call.
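To see what the SDK is saving us, here's a rough sketch of the request you'd have to build by hand against OpenAI's REST API (the helper name and shape are mine, not part of the SDK):

```javascript
// Hypothetical helper: builds the raw Chat Completions request that
// generateText() wraps for you.
function buildChatRequest(prompt, apiKey) {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        model: "gpt-4o-mini",
        messages: [{ role: "user", content: prompt }],
      }),
    },
  };
}
```

And after sending it, you'd still have to parse `choices[0].message.content`, handle errors, and deal with streaming chunks yourself; the SDK collapses all of that into one call.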


Building the chat UI (frontend)

Now let's build the frontend where users can actually interact with our app.

We'll create a simple chat interface where users can type messages and get AI responses.

In this example, we're calling the API directly from the component for simplicity.

In real-world applications, it's better to separate API logic into services or hooks for better structure and scalability.
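That separation could look something like this hypothetical `askAI` helper, which keeps the fetch details out of the component:

```javascript
// A minimal service wrapper: components call askAI(prompt) and only
// ever see plain strings, not fetch plumbing.
async function askAI(prompt) {
  const res = await fetch("/api/ai", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const data = await res.json();
  return data.text ?? "No response";
}
```

For this guide, though, we'll keep everything in one component so it's easy to follow.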

"use client";
 
import { useState } from "react";
 
export default function Home() {
  const [input, setInput] = useState("");
  const [output, setOutput] = useState("");
  const [loading, setLoading] = useState(false);
 
  async function handleSubmit() {
    if (!input.trim()) return;
 
    setLoading(true);
    setOutput("");
 
    try {
      const res = await fetch("/api/ai", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ prompt: input }), 
      });
 
      const data = await res.json();
      setOutput(data.text || "No response");  
    } catch {
      setOutput("Something went wrong.");
    } finally {
      setLoading(false);
    }
  }
 
  return (
    <main className="flex justify-center items-center min-h-screen">
      <div className="p-6 space-y-4 w-full max-w-xl">
        <h1 className="text-xl font-bold text-center">My ChatGPT</h1>
 
        <textarea
          className="border p-2 w-full"
          rows={4}
          value={input}
          onChange={(e) => setInput(e.target.value)}
          placeholder="Type your message..."
        />
 
        <button
          onClick={handleSubmit}
          disabled={loading}
          className="bg-black text-white px-4 py-2 rounded"
        >
          {loading ? "Thinking..." : "Send"}
        </button>
 
        {output && <div className="border p-3 rounded">{output}</div>}
      </div>
    </main>
  );
}

And that's it: we have our own ChatGPT.

Now put it in your portfolio and wait for Sam Altman's call… jk, you're not even going to get a call from TCS

It's just basic API calling, but it's fun to build something like this.

Currently, it's a very small version, but you can add more features like streaming, chat history, a database, and even scale it up into a proper app (please don't make another ChatGPT wrapper SaaS… there are already too many).
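Streaming is the most satisfying upgrade, since it gives you that ChatGPT typing effect. On the frontend, that means reading the response body chunk by chunk instead of waiting for the full text. Here's a sketch using the standard Web Streams API, assuming the route returns a text stream (e.g. via the SDK's streamText and its toTextStreamResponse helper):

```javascript
// Reads a streamed text response chunk by chunk, calling onChunk as
// each piece arrives, and returns the full accumulated text.
async function readTextStream(response, onChunk) {
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let full = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    const chunk = decoder.decode(value, { stream: true });
    full += chunk;
    onChunk(chunk); // e.g. append to state: setOutput(full)
  }
  return full;
}
```

In the component, you'd call this instead of `res.json()` and update state on every chunk so the text appears as it's generated.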

For a better understanding of Vercel AI SDK, read the docs, experiment, break things, and learn.


Check out the official docs:

https://ai-sdk.dev/docs/introduction

Designed & Developed by Saurabh.