
RAG Playground

Interactive Retrieval-Augmented Generation with Transparent Retrieval


Live Demo


Upload any PDF. Ask questions. See exactly which chunks the AI retrieves and uses to answer -- highlighted in the original document.

Most RAG demos give you a chatbox. RAG Playground shows you the retrieval -- the exact text passages the AI relied on, their similarity scores, and whether the answer is grounded or hallucinated.


Features

PDF Upload & Processing

  • Drag-and-drop PDF upload with file size indicator (max 10 MB)
  • Visual processing pipeline: Upload -> Parse -> Chunk -> Embed -> Ready
  • Displays chunk count, page count, and total tokens after processing
  • 3 pre-loaded demo documents to try instantly (RAG paper, annual report, product docs)

Split-Screen Chat Interface

| Left Panel (60%) | Right Panel (40%) |
| --- | --- |
| Chat interface with streaming AI responses | Document viewer with page breaks |
| "Sources used" pills below each answer (clickable) | Retrieved chunks highlighted in yellow/blue |
| Suggested follow-up questions after upload | Click a source pill to scroll to that chunk |
| Clean chat bubbles (user right, AI left) | Similarity score shown next to each chunk |

Retrieval Inspector

  • Expandable bottom drawer showing retrieval details for the last query
  • Similarity score bar chart (Recharts) for all retrieved chunks
  • Chunk text previews with scores
  • Shows which chunks were actually used vs. retrieved but filtered out

Hallucination Checker

  • "Check for hallucinations" button on any AI response
  • Compares each claim in the answer against the retrieved chunks
  • Labels each claim: Grounded (green) / Partially Grounded (yellow) / Ungrounded (red)
  • Overall grounding score (% of claims supported by source chunks)
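The grounding score above can be sketched as a weighted share of claim labels. This is an illustrative sketch, not the app's exact scoring logic: the label names and the half-weight for partially grounded claims are assumptions.

```typescript
// Sketch of an overall grounding score: fully grounded claims count as 1,
// partially grounded as 0.5 (an assumed weighting), ungrounded as 0.
type ClaimLabel = "grounded" | "partial" | "ungrounded";

function groundingScore(labels: ClaimLabel[]): number {
  if (labels.length === 0) return 0;
  const weight = (l: ClaimLabel) =>
    l === "grounded" ? 1 : l === "partial" ? 0.5 : 0;
  const total = labels.reduce((sum, l) => sum + weight(l), 0);
  return Math.round((total / labels.length) * 100);
}
```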

Configurable RAG Settings

  • Chunk size slider (200--2000 characters)
  • Chunk overlap slider (0--500 characters)
  • Top-K retrieval slider (1--10)
  • Temperature slider for generation
  • Toggle: show retrieval scores in chat
  • "Re-process" button when settings change
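The slider ranges above suggest a settings shape like the following. This is a hypothetical sketch (the field names and the 0--1 temperature range are assumptions, not the app's actual types); it simply clamps each value into the documented range.

```typescript
// Hypothetical RAG settings object matching the slider ranges listed above.
interface RagSettings {
  chunkSize: number;    // 200-2000 characters
  chunkOverlap: number; // 0-500 characters
  topK: number;         // 1-10 chunks retrieved
  temperature: number;  // assumed 0-1 range for generation randomness
}

const clamp = (v: number, lo: number, hi: number) =>
  Math.min(hi, Math.max(lo, v));

// Keep every setting inside its slider range.
function clampSettings(s: RagSettings): RagSettings {
  return {
    chunkSize: clamp(s.chunkSize, 200, 2000),
    chunkOverlap: clamp(s.chunkOverlap, 0, 500),
    topK: clamp(s.topK, 1, 10),
    temperature: clamp(s.temperature, 0, 1),
  };
}
```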

Architecture

User uploads PDF
       |
  pdf-parse extracts text
       |
  Text split into chunks (configurable size & overlap)
       |
  Gemini embedding model generates vector embeddings
       |
  Chunks + embeddings stored in-memory
       |
  User asks a question
       |
  Query embedded -> cosine similarity search -> top-K chunks retrieved
       |
  Gemini 2.5 Flash generates answer using retrieved context
       |
  Answer streamed to chat + source chunks highlighted in document viewer
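The chunking step in the pipeline above can be sketched as a sliding character window with overlap. This is a minimal sketch under assumed defaults; the real implementation (see src/lib/vector-store.ts) may also respect sentence or paragraph boundaries.

```typescript
// Fixed-size chunking with overlap: each chunk starts (size - overlap)
// characters after the previous one, so consecutive chunks share
// `overlap` characters of context.
function chunkText(text: string, size = 800, overlap = 100): string[] {
  if (size <= overlap) throw new Error("size must exceed overlap");
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
  }
  return chunks;
}
```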

Custom Vector Store

No LangChain dependency. The vector store is built from scratch:

  • Embeddings generated via Gemini embedding API
  • Cosine similarity search for retrieval
  • Configurable chunking with overlap support
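The retrieval step can be sketched in a few lines: cosine similarity between the query embedding and each stored chunk embedding, then take the top-K. Function names here are illustrative, not the repository's actual API.

```typescript
// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the indices of the k most similar stored vectors.
function topK(query: number[], vectors: number[][], k: number): number[] {
  return vectors
    .map((v, i) => ({ i, score: cosine(query, v) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((r) => r.i);
}
```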

Tech Stack

| Layer | Technology |
| --- | --- |
| Framework | Next.js 16 (App Router, Server Components) |
| Language | TypeScript 5 |
| Styling | Tailwind CSS 4 (light theme, soft blue accents) |
| LLM | Google Gemini 2.5 Flash (generation) |
| Embeddings | Gemini Embedding Model |
| PDF Parsing | pdf-parse |
| Vector Store | Custom in-memory (cosine similarity) |
| Charts | Recharts |
| Animations | Framer Motion |
| Icons | Lucide React |

Getting Started

Prerequisites

  • Node.js (a recent LTS release) and npm
  • A Google Gemini API key

Installation

git clone https://github.com/Samarth0211/RAGPlayground.git
cd RAGPlayground
npm install

Environment Variables

Create a .env.local file in the project root:

GOOGLE_API_KEY=your_gemini_api_key_here

Run Development Server

npm run dev

Open http://localhost:3000 in your browser.

Production Build

npm run build
npm start

Project Structure

RAGPlayground/
  src/
    app/
      page.tsx                    # Main split-screen page
      layout.tsx                  # Root layout
      api/
        upload/route.ts           # PDF upload, parsing, chunking, embedding
        chat/route.ts             # RAG query + streaming response
        hallucination/route.ts    # Hallucination grounding check
    components/
      UploadPanel.tsx             # Drag-and-drop upload + processing pipeline
      ChatPanel.tsx               # Chat interface with streaming
      DocumentViewer.tsx          # Document text with highlighted chunks
      RetrievalDrawer.tsx         # Bottom drawer with similarity charts
      SettingsPanel.tsx           # RAG configuration sliders
    lib/
      gemini.ts                   # Gemini API wrapper (embeddings + generation)
      vector-store.ts             # Custom cosine similarity search + chunking
      demo-data.ts                # Pre-loaded demo document content
  .env.local                      # Environment variables (not committed)
  package.json

API Routes

| Endpoint | Method | Description |
| --- | --- | --- |
| /api/upload | POST | Upload PDF, parse text, chunk, generate embeddings |
| /api/chat | POST | Run RAG query: embed question, retrieve chunks, stream Gemini response |
| /api/hallucination | POST | Check AI answer claims against source chunks |

Deployment

Deployed on Vercel. To deploy your own instance:

  1. Fork this repo
  2. Import it into Vercel
  3. Add GOOGLE_API_KEY as an environment variable in Vercel project settings
  4. Deploy

Author

Samarth Bhamare -- AI/ML Engineer


License

MIT
