Jaipur, India · May 15, 2025

FindHackers

A Case Study in Building a Developer Hiring OS on Modern Infra

FindHackers is an integrated platform that connects developers who need visibility with clients and companies who need talent. It combines narrative portfolios, AI-powered assistants, analytics, and job aggregation into one product that serves both sides of the hiring market. This case study walks through:
  • The problem FindHackers set out to solve
  • How the team designed the product experience
  • How the stack was implemented using Next.js, AI SDK, Supabase, Hetzner, and Node.js
  • What worked well and what still hurts
The problem

For developers:
  • GitHub is a code dump, not a story
  • LinkedIn is noisy and keyword-driven
  • Personal sites are hard to maintain and rarely updated
For clients:
  • Hiring signals are fragmented across GitHub, LinkedIn, blogs, talks, and random links
  • It is hard to quickly understand what a developer is actually good at
  • Inbound applications are noisy, outbound sourcing is time consuming
The team wanted a system where:
  • Developers can host their entire builder identity in one place
  • Clients can see a real story, not just a CV and buzzwords
  • AI can handle repetitive questions and research so humans do not have to
At a product level, FindHackers is built around two surfaces: one for developers and one for clients.

Profile and portfolio

Developers get a single canonical profile that holds:
  • Projects with narrative descriptions
  • Screenshots, demos, stack decisions, and lessons learned
  • Links to GitHub, personal sites, talks, and content
Luminous AI: the “digital twin”

Each profile has its own AI assistant that:
  • Answers questions about projects, stack choices, tradeoffs, and experience
  • Uses the portfolio, linked content, and structured data as context
  • Gives visitors rich answers without the developer having to repeat themselves
Analytics

The platform tracks:
  • Profile views and referrers
  • Per project engagement
  • Clickthroughs for contact and external links
This gives developers a clear sense of whether their profile is actually working.

Job discovery

A companion Jobs experience, plus a Chrome extension, pulls developer-relevant roles from LinkedIn and X into a distraction-free dashboard. Noise goes down, signal goes up.

Hiring dashboard

Clients can:
  • Search for developers by stack, seniority, and region
  • See consolidated portfolios that actually tell a story
  • Filter based on the type of work they need done
DeepResearch AI

For each developer, DeepResearch AI:
  • Crawls public signals like GitHub, LinkedIn, blogs, conference talks, and repos
  • Builds a structured profile including skills, patterns, activity, and interests
  • Produces a narrative summary that helps non-technical hiring managers understand a candidate quickly
Architecture

The system is built as a Next.js application deployed on Hetzner, with Supabase as the primary data and auth layer and AI features wired through the AI SDK. The high-level pieces:
  • Web application
    • Next.js App Router
    • Hybrid static and dynamic routes
    • React Server Components where possible
  • API and AI layer
    • Next.js route handlers as thin APIs
    • AI SDK for chat, tools, and streaming responses
    • Shared TypeScript types between UI and handlers
  • Data and auth
    • Supabase Postgres for core data
    • Supabase Auth for user accounts
    • Row Level Security (RLS) for multi-tenant safety
    • Supabase Storage for images and assets
    • Supabase vectors for embeddings that power Luminous AI and DeepResearch
  • Infrastructure
    • Hetzner cloud instances
    • Docker for app, worker, and proxy containers
    • Caddy on Hetzner for TLS and routing
    • Node.js runtime for Next.js server and background workers
  • Background processing
    • Worker process for:
      • Crawling external sources for DeepResearch
      • Building embeddings and refreshing AI context
      • Scheduled job imports and cleanup
The team chose Next.js as the primary web framework to keep everything in one codebase:
  • Marketing pages, product app, and docs share a single project
  • App Router allows streaming server components and data fetching patterns that fit AI use cases
The AI SDK is used for:
  • Luminous AI chat on profile pages
  • DeepResearch pipelines that:
    • Fetch external data
    • Extract structured signals
    • Generate summaries and scores
Key patterns:
  • Each AI feature has a dedicated route handler in app/api/...
  • Requests are validated with Zod schemas before anything touches AI models
  • Streaming responses power chat UIs so users see tokens as they are generated
  • Long running research tasks are split into:
    • A trigger endpoint that enqueues work
    • A worker that persists final results in Supabase
    • A polling or subscription-based UI that updates when the report is ready
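The trigger/worker/poll split above can be sketched with an in-memory stand-in for the Supabase-backed job table; names and shapes here are illustrative, not the production code:

```typescript
// In-memory sketch of the trigger → worker → poll pattern described above.
// A real deployment persists jobs in Supabase; these shapes are assumptions.
type JobStatus = "queued" | "running" | "done";

interface ResearchJob {
  id: number;
  handle: string;
  status: JobStatus;
  report?: string;
}

const jobs = new Map<number, ResearchJob>();
let nextId = 1;

// Trigger endpoint: enqueue work and return a job id immediately.
function startResearch(handle: string): number {
  const id = nextId++;
  jobs.set(id, { id, handle, status: "queued" });
  return id;
}

// Worker: consume queued jobs and persist the final result.
function runWorkerOnce(): void {
  for (const job of jobs.values()) {
    if (job.status !== "queued") continue;
    job.status = "running";
    // A real worker would crawl sources and call the AI SDK here.
    job.report = `Narrative summary for ${job.handle}`;
    job.status = "done";
  }
}

// Polling UI: check status until the report is ready.
function pollJob(id: number): ResearchJob | undefined {
  return jobs.get(id);
}
```

The key property is that the trigger returns as soon as the row exists, so the request never blocks on the slow research work.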
Supabase is the “brains” of the system. Major tables include:
  • profiles for user level metadata
  • projects with rich fields for description, stack, links, and media
  • research_jobs to track DeepResearch runs for a given identity
  • research_sources to store crawled artifacts (repo, post, talk, etc)
  • research_events as a log of steps in the pipeline
  • embeddings tables that hold vector representations of text for semantic search
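The shared TypeScript types mentioned earlier would roughly mirror these tables. A sketch follows; any field not named in the text is an illustrative assumption:

```typescript
// Hypothetical row types mirroring the Supabase tables described above.
// Field names beyond those in the text are assumptions, not the real schema.
interface Profile {
  id: string;
  username: string;
}

interface Project {
  id: string;
  profileId: string;
  description: string;
  stack: string[];
  links: string[];
}

interface ResearchJob {
  id: string;
  profileId: string;
  status: "queued" | "running" | "done" | "failed";
  reportJson?: unknown; // final narrative, stored as report_json
}

interface ResearchSource {
  id: string;
  jobId: string;
  kind: "repo" | "post" | "talk";
  raw: string; // crawled artifact
}

// Example row, to show the shapes in use:
const exampleJob: ResearchJob = {
  id: "job_1",
  profileId: "prof_1",
  status: "queued",
};
```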
RLS rules enforce that:
  • Developers can only edit their own profiles and projects
  • Clients can only see data that the developer has made public
  • Background jobs have service-role access for write-heavy tasks, but queries from the app are always filtered through policies
Supabase functions and triggers are used for:
  • Updating derived stats such as view counts
  • Maintaining search indexes
  • Keeping AI context snapshots in sync with the latest portfolio data
Instead of relying only on a managed platform, the team decided to run the production stack on Hetzner:
  • A main app server runs the Next.js app in production mode using Node.js
  • A worker container runs background queues
  • Caddy routes incoming traffic to the right service and handles certificates
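One plausible way to wire these three pieces together is a Docker Compose file. This is a sketch under assumed service names, images, and ports, not the team's actual configuration:

```yaml
# Hypothetical docker-compose.yml for the Hetzner setup described above.
services:
  app:
    build: .
    command: node server.js        # Next.js server in production mode
    expose:
      - "3000"
    env_file: .env.production
  worker:
    build: .
    command: node worker.js        # background queues (DeepResearch, job imports)
    env_file: .env.production
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"                  # Caddy obtains and renews TLS certificates
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
volumes:
  caddy_data:
```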
Reasons this has worked well:
  • More control over resource allocation and per region deployments
  • Easier to colocate additional services as the platform grows
  • Predictable costs compared to scaling vertically on a pure serverless model
Tradeoffs:
  • The team owns OS level security and patching
  • Zero downtime deploys and migrations need more careful planning
  • Monitoring and alerting had to be wired manually with off-the-shelf tools
Luminous AI chat: implementation pattern
  • User visits /[username] profile
  • Page loads profile, projects, and precomputed embeddings from Supabase
  • When the user opens the AI chat, the client sends:
    • The question
    • A profile identifier
  • Server side handler:
    • Retrieves relevant chunks from Supabase using vector search
    • Builds a prompt with system instructions plus retrieved context
    • Uses AI SDK to stream a response back to the UI
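The retrieval step can be sketched in miniature. In production the ranking runs inside Supabase's vector search rather than in application code, and the toy embeddings below stand in for real model output:

```typescript
// Simplified sketch of the vector-search step: rank stored chunks by
// cosine similarity to the question embedding and keep the top k.
interface Chunk {
  text: string;
  embedding: number[];
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the k chunks most similar to the query embedding.
function topKChunks(query: number[], chunks: Chunk[], k: number): Chunk[] {
  return [...chunks]
    .sort(
      (x, y) =>
        cosineSimilarity(query, y.embedding) -
        cosineSimilarity(query, x.embedding)
    )
    .slice(0, k);
}
```

The retrieved chunks are then concatenated into the prompt alongside the system instructions before the model is called.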
This keeps the AI cheap and targeted: it only searches data for that specific developer and limits tokens to what is actually needed.

DeepResearch pipeline

For DeepResearch, the goal was to take a minimal input, such as a GitHub username, and produce a rich profile. High-level steps:
  • Client hits POST /api/research/start with a handle
  • A research job row is created in Supabase
  • Worker consumes the job and:
    • Calls external APIs or scrapers for GitHub, LinkedIn, blogs, and talks
    • Stores raw artifacts in research_sources
    • Runs extraction prompts through AI SDK to produce normalized facts
    • Embeds text into vectors for later semantic queries
  • Final narrative is generated and stored in research_jobs as report_json
  • Frontend polls or subscribes to job status and renders the final report in the client dashboard
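The job tracking implied by research_jobs and research_events can be sketched as a small state machine; the status names here are assumptions, since the source does not list the actual column values:

```typescript
// Hypothetical status tracking for a research_jobs row. Each step is also
// appended to an event log, mirroring the research_events table.
type Status =
  | "created"
  | "crawling"
  | "extracting"
  | "embedding"
  | "done"
  | "failed";

// Allowed transitions: each stage may move to the next stage or fail.
const transitions: Record<Status, Status[]> = {
  created: ["crawling", "failed"],
  crawling: ["extracting", "failed"],
  extracting: ["embedding", "failed"],
  embedding: ["done", "failed"],
  done: [],
  failed: [],
};

interface JobLog {
  status: Status;
  events: Status[];
}

// Advance the job, rejecting transitions the pipeline does not allow.
function advance(job: JobLog, next: Status): JobLog {
  if (!transitions[job.status].includes(next)) {
    throw new Error(`invalid transition ${job.status} -> ${next}`);
  }
  return { status: next, events: [...job.events, next] };
}
```

Logging every transition makes a stuck or failed run easy to diagnose from the event history alone.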
This architecture keeps slow work in the background while the main app stays responsive.

Some concrete effects of this architecture and product design:
  • Developers get a single link that acts as their “home on the internet” instead of juggling multiple half-updated profiles
  • Luminous AI absorbs a large chunk of repetitive questions from recruiters and collaborators
  • Clients get a faster path from “who is this person” to “should we talk to them” because DeepResearch collapses scattered signals into one narrative
From an engineering perspective:
  • Using Next.js plus AI SDK meant less glue code and more focus on prompt and UX design
  • Supabase provided authentication, storage, SQL, and vectors in one place which reduced the need for additional services
  • Hetzner hosting allowed the team to scale predictably while still keeping full control of runtime and networking
What worked well:
  • A single repo for marketing, app, and AI routes simplified development and deployment
  • Strong typing between Next.js API handlers and front end components reduced integration bugs
  • Using embeddings for everything from search to AI context made features like “ask my portfolio” and DeepResearch feel coherent
  • Running on Hetzner with Docker and Caddy provided a nice middle ground between raw VPS work and expensive fully managed platforms
What still hurts:
  • Background research and scraping can be brittle due to upstream rate limits and markup changes
  • Keeping AI features fast while using heavier context often required multiple rounds of prompt and retrieval tuning
  • With more data, Supabase indexes and RLS rules had to be revisited for performance
  • Observability matters: logs, traces, and query insights are critical once AI features hit production traffic
Upcoming work on FindHackers builds on the same stack:
  • Richer developer onboarding flows that auto import and summarize GitHub and content
  • Deeper hiring workflows on the client side, such as shortlists, project briefs, and interview notes linked to profiles
  • More automation in DeepResearch, including scheduled refreshes of developer personas
Under the hood, the plan is to keep the core stack stable:
  • Next.js for the app
  • AI SDK for model access and tooling
  • Supabase for identity, data, and vectors
  • Hetzner for controlled, cost-effective hosting
  • Node.js for server and worker processes
The main focus now is not changing tools, but compounding product value on top of a stack that is already battle tested.
