TrackHire is a full-stack job discovery and tracking platform that aggregates opportunities directly from company career pages and exclusive sources — many of which never surface on mainstream platforms like LinkedIn or Indeed.
Users can browse 10,000+ curated, deduplicated job listings, save the ones they care about, and track every application through a personal pipeline. A custom Node.js data pipeline runs on a scheduled cron to scrape, clean, and ingest fresh listings daily, ensuring the feed stays accurate and ahead of the crowd.
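The clean-and-deduplicate step of that pipeline can be sketched roughly as follows. This is a minimal illustration, not the actual pipeline code; the field names `title`, `company`, `location`, and `url` are assumptions about the listing schema:

```javascript
// Sketch of a dedup pass: listings are keyed on a normalized
// company + title + location tuple, so the same role scraped from
// two sources collapses to a single row. Field names are assumed.
function normalize(s) {
  return (s || "").toLowerCase().replace(/\s+/g, " ").trim();
}

function dedupe(listings) {
  const seen = new Map();
  for (const job of listings) {
    const key = [normalize(job.company), normalize(job.title), normalize(job.location)].join("|");
    if (!seen.has(key)) seen.set(key, job); // keep the first occurrence
  }
  return [...seen.values()];
}

const raw = [
  { company: "Acme", title: "Backend Engineer", location: "Remote", url: "https://acme.example/jobs/1" },
  { company: "ACME ", title: "backend  engineer", location: "remote", url: "https://boards.example/acme-be" },
];
console.log(dedupe(raw).length); // 1 — both rows normalize to the same key
```

Normalizing before keying is what makes casing and whitespace differences between sources collapse; a real pipeline would likely also strip punctuation and handle missing fields.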
- 🔍 Exclusive job feed aggregated from 500+ company career pages
- ⚡ Daily cron pipeline — new listings ingested and deduplicated automatically
- 📌 Personal tracker — save jobs and manage your application pipeline
- 🔐 Secure auth — JWT-based authentication with refresh token support
- 📊 Dashboard — at-a-glance stats on your search activity
- 🔔 Smart alerts (in development) — email notifications for matched roles
Most job seekers spend 45+ minutes daily checking the same recycled listings across LinkedIn, Indeed, and Glassdoor, only to find roles that were posted days ago and are already flooded with hundreds of applicants.
The real opportunities live on company career pages. Most never get indexed by mainstream boards. By the time they do, the early application window is gone.
TrackHire solves this by going directly to the source. Our pipeline fetches listings from company career pages and exclusive hubs before they reach mainstream boards — giving users a genuine first-mover advantage. A single dashboard replaces ten open tabs and a chaotic spreadsheet.
- Curated, Deduplicated Feed — Listings sourced from company career pages, cleaned and deduplicated by the Node.js pipeline before hitting the database
- Advanced Filtering — Filter by role, company, location, job type, and experience level
- Save & Organize — Bookmark jobs and manage them in a personal saved list
- Application Tracking — Track the status of every application in a single view
- Dashboard Analytics — View saved count, application count, and search activity
- JWT Authentication — Stateless, secure auth with access and refresh token rotation
- Scheduled Data Ingestion — Cron-based Node.js scripts run daily to keep listings fresh
- RESTful API — Clean, versioned Spring Boot API consumed by the React frontend
```
┌─────────────────────────────────────────────────────────────────┐
│                          CLIENT LAYER                           │
│                     React + Vite (Vercel)                       │
└───────────────────────────┬─────────────────────────────────────┘
                            │ HTTPS / REST
┌───────────────────────────▼─────────────────────────────────────┐
│                           API LAYER                             │
│                 Spring Boot — /api/** (Render)                  │
│           JWT Auth Filter → Controllers → Services              │
└──────────────┬──────────────────────────────┬───────────────────┘
               │                              │
┌──────────────▼──────────┐   ┌───────────────▼──────────────────┐
│       DATA LAYER        │   │          PIPELINE LAYER          │
│  PostgreSQL (Neon DB)   │◄──│ Node.js Cron Scripts (/scripts)  │
│  Users · Jobs · Saves   │   │   Scrape → Clean → Deduplicate   │
│      Applications       │   │     → Insert into PostgreSQL     │
└─────────────────────────┘   └──────────────────────────────────┘
```
Request lifecycle:

- The React client makes an authenticated request with a Bearer JWT
- Spring Boot's `JwtAuthFilter` validates the token and populates the `SecurityContext`
- The relevant `Controller` delegates to a `Service`, which queries the `Repository` (JPA)
- PostgreSQL returns data; the response is serialized and returned as JSON
- Independently, Node.js cron scripts scrape and ingest fresh job data into the same PostgreSQL database on a daily schedule
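From the client's side, the first step of that lifecycle is simply attaching the access token to each request. A minimal sketch, assuming local development against the default port; the token-passing style and helper names here are illustrative, not the frontend's actual code:

```javascript
// Sketch of an authenticated API call from the React client.
// API_BASE is the local dev URL; in production it would be the Render URL.
const API_BASE = "http://localhost:8081/api/v1";

function authHeaders(token) {
  // Protected routes expect: Authorization: Bearer <token>
  return { "Content-Type": "application/json", Authorization: `Bearer ${token}` };
}

async function fetchJobs(token) {
  const res = await fetch(`${API_BASE}/jobs`, { headers: authHeaders(token) });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json(); // the JSON produced at the end of the lifecycle above
}
```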
Base URL:

```
https://<your-backend>.onrender.com/api/v1
```

All protected routes require the header `Authorization: Bearer <token>`.
**Auth**

| Method | Endpoint | Auth | Description |
|---|---|---|---|
| POST | `/auth/register` | Public | Register a new user account |
| POST | `/auth/login` | Public | Authenticate and receive access + refresh tokens |
| POST | `/auth/refresh` | Public | Exchange a refresh token for a new access token |
| POST | `/auth/logout` | Protected | Invalidate the current session |
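Because `/auth/login` issues both an access and a refresh token, the client needs to know when the access token has expired so it can call `/auth/refresh`. One common approach, sketched below, is to read the standard `exp` claim out of the JWT payload; this decoding helper is an assumption about the token format (standard JWT), not code from this repo, and uses Node's `Buffer` (a browser client would use `atob` instead):

```javascript
// A JWT is header.payload.signature; the payload is base64url-encoded JSON.
// Decoding it client-side (without verifying the signature) is enough to
// read the standard `exp` claim and decide when to hit /auth/refresh.
function decodeJwtPayload(token) {
  const payload = token.split(".")[1];
  return JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
}

function isExpired(token, skewSeconds = 30) {
  const { exp } = decodeJwtPayload(token); // exp is seconds since the epoch
  return exp * 1000 <= Date.now() + skewSeconds * 1000; // refresh a little early
}

// Demo with a fabricated (unsigned) token whose payload expires in an hour:
const payload = Buffer.from(JSON.stringify({ exp: Math.floor(Date.now() / 1000) + 3600 })).toString("base64url");
console.log(isExpired(`header.${payload}.sig`)); // false — still valid
```

The 30-second skew refreshes slightly before actual expiry so an in-flight request doesn't race the cutoff.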
**Jobs**

| Method | Endpoint | Auth | Description |
|---|---|---|---|
| GET | `/jobs` | Protected | Fetch all jobs (paginated) |
| GET | `/jobs/:id` | Protected | Fetch a single job by ID |
| GET | `/jobs/search` | Protected | Search jobs by keyword, company, or location |
| GET | `/jobs/filter` | Protected | Filter jobs by type, experience level, or work style |
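The search and filter endpoints take their criteria as query parameters, which the client can assemble with `URLSearchParams`. The parameter names below (`keyword`, `location`, `type`) are inferred from the descriptions above, not a confirmed API contract:

```javascript
// Sketch: building /jobs/search and /jobs/filter query strings.
// Empty/undefined values are dropped so the URL stays clean.
function buildQuery(path, params) {
  const qs = new URLSearchParams(
    Object.entries(params).filter(([, v]) => v !== undefined && v !== "")
  );
  return `${path}?${qs.toString()}`;
}

console.log(buildQuery("/jobs/search", { keyword: "backend", location: "Remote" }));
// /jobs/search?keyword=backend&location=Remote
```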
**User & Saved Jobs**

| Method | Endpoint | Auth | Description |
|---|---|---|---|
| GET | `/user/profile` | Protected | Get the authenticated user's profile |
| PUT | `/user/profile` | Protected | Update profile details |
| POST | `/user/jobs/:jobId/save` | Protected | Save a job to the user's list |
| DELETE | `/user/jobs/:jobId/save` | Protected | Remove a job from the saved list |
| GET | `/user/jobs/saved` | Protected | Retrieve all saved jobs for the user |
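Note that save and unsave share one path and differ only in HTTP method, which makes a client-side bookmark toggle trivial. A sketch (helper name and return shape are illustrative only):

```javascript
// Sketch: map the saved/unsaved UI state to the matching REST call.
// POST saves the job; DELETE on the same path removes it.
function saveRequest(jobId, currentlySaved) {
  return {
    method: currentlySaved ? "DELETE" : "POST",
    path: `/user/jobs/${jobId}/save`,
  };
}

console.log(saveRequest(42, false)); // { method: 'POST', path: '/user/jobs/42/save' }
```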
**Applications**

| Method | Endpoint | Auth | Description |
|---|---|---|---|
| POST | `/applications` | Protected | Track a new job application |
| GET | `/applications` | Protected | Get all applications for the user |
| PUT | `/applications/:id` | Protected | Update application status or notes |
| DELETE | `/applications/:id` | Protected | Remove a tracked application |
**Dashboard**

| Method | Endpoint | Auth | Description |
|---|---|---|---|
| GET | `/dashboard/stats` | Protected | Fetch user stats (saved count, applied count, etc.) |
| GET | `/dashboard/activity` | Protected | Recent application and save activity feed |
- Java 21
- Node.js 18+
- Maven 3.8+
- PostgreSQL 14+
```
git clone https://github.com/taralshah09/TrackHire.git
cd TrackHire
```

Backend environment variables (consumed via `application.properties`):

```
DB_PASSWORD=
DB_URL=
DB_USERNAME=
FRONTEND_URL=
JWT_SECRET=
PORT=
```

Frontend environment variables (`frontend/.env`):

```
VITE_API_BASE_URL=http://localhost:8081/api
```

Pipeline environment variables (`scripts/.env`):

```
ADZUNA_APP_ID=
ADZUNA_APP_KEY=
ADZUNA_BASE_URL=
SKILLHUB_URL=
SKILLHUB_API_KEY=
DB_USER=
DB_PASSWORD=
DB_URL=
DB_PORT=
DB_HOST=
DB_NAME=
DB_SSL=
DB_SCHEMA=
SMTP_HOST=
SMTP_PORT=
SMTP_SECURE=
SMTP_USER=
SMTP_PASS=
EMAIL_FROM=
APP_URL=
RENDER_HEALTH_URL=
```

Run the backend:

```
cd backend
mvn clean install
mvn spring-boot:run
```

The API will be available at `http://localhost:8081`.
Run the frontend:

```
cd frontend
npm install
npm run dev
```

The React app will be available at `http://localhost:5173`.
Run the data pipeline:

```
cd scripts
npm install
node index.js
```

The pipeline scrapes company career pages, deduplicates listings, and inserts them into your local PostgreSQL database.
| Layer | Platform | Notes |
|---|---|---|
| Frontend | Vercel | Auto-deploys from main branch; set VITE_API_BASE_URL in project settings |
| Backend | Render | Deployed as a Web Service; set all application.properties values as environment variables |
| Database | Supabase | Serverless PostgreSQL; free tier sufficient for development |
| Cron Pipeline | Render Cron Jobs / GitHub Actions | Scheduled daily via cron expression 0 2 * * * (runs at 2 AM UTC) |
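The schedule `0 2 * * *` reads as "minute 0, hour 2, every day, every month, every weekday" — i.e. once daily at 02:00 UTC. A small helper (illustrative only, not part of the repo) that computes the next run time for that schedule:

```javascript
// Sketch: next fire time for the daily-at-02:00-UTC cron expression above.
function nextDailyRun(fromMs, hourUtc = 2) {
  const d = new Date(fromMs);
  const next = new Date(Date.UTC(d.getUTCFullYear(), d.getUTCMonth(), d.getUTCDate(), hourUtc, 0, 0, 0));
  if (next.getTime() <= fromMs) next.setUTCDate(next.getUTCDate() + 1); // today's 2 AM already passed
  return next;
}

const at = Date.UTC(2025, 0, 15, 3, 0, 0); // 03:00 UTC on Jan 15
console.log(nextDailyRun(at).toISOString()); // 2025-01-16T02:00:00.000Z
```

In practice the hosting platform (Render Cron Jobs or GitHub Actions `schedule:`) evaluates the expression for you; this is only to show what the expression means.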
Frontend (Vercel):

- Push to `main` — Vercel picks up the change automatically
- Ensure `VITE_API_BASE_URL` points to your deployed Render backend URL

Backend (Render):

- Connect your GitHub repo to Render as a Web Service
- Set the build command: `mvn clean package -DskipTests`
- Set the start command: `java -jar target/*.jar`
- Add all environment variables from `application.properties` in Render's dashboard
Contributions are welcome and appreciated. To get started:
- Fork the repository
- Create a feature branch: `git checkout -b feature/your-feature-name`
- Commit your changes with a clear message: `git commit -m "feat: add email notification support"`
- Push to your fork: `git push origin feature/your-feature-name`
- Open a Pull Request against the `main` branch with a clear description of what you changed and why
- Follow the existing code style — Java code uses standard Spring conventions; the frontend uses functional components with hooks
- Write clear, scoped commit messages (prefer Conventional Commits)
- For major changes, open an issue first to discuss the approach
- Ensure the backend builds cleanly (`mvn clean install`) before submitting a PR
- Test your changes locally against a real PostgreSQL database
This project is licensed under the MIT License — see the LICENSE file for full details.

MIT License — Copyright (c) 2026 Taral Shah
