FindHackers.co
1. Overview
- The problem FindHackers set out to solve
- How the team designed the product experience
- How the stack was implemented using Next.js, AI SDK, Supabase, Hetzner, and Node.js
- What worked well and what still hurts
2. The Problem
- GitHub is a code dump, not a story
- LinkedIn is noisy and keyword driven
- Personal sites are hard to maintain and rarely updated
- Hiring signals are fragmented across GitHub, LinkedIn, blogs, talks, and random links
- It is hard to quickly understand what a developer is actually good at
- Inbound applications are noisy, outbound sourcing is time consuming
The insight
- Developers can host their entire builder identity in one place
- Clients can see a real story, not just a CV and buzzwords
- AI can handle repetitive questions and research so humans do not have to
3. Product Solution
3.1 Developer Surface
Portfolio
- Projects with narrative descriptions
- Screenshots, demos, stack decisions, and lessons learned
- Links to GitHub, personal sites, talks, and content
Luminous AI chat
- Answers questions about projects, stack choices, tradeoffs, and experience
- Uses the portfolio, linked content, and structured data as context
- Gives visitors rich answers without the developer having to repeat themselves
Analytics
- Profile views and referrers
- Per project engagement
- Clickthroughs for contact and external links
3.2 Client Surface
Search and filtering
- Search for developers by stack, seniority, and region
- See consolidated portfolios that actually tell a story
- Filter based on the type of work they need done
DeepResearch
- Crawls public signals like GitHub, LinkedIn, blogs, conference talks, and repos
- Builds a structured profile including skills, patterns, activity, and interests
- Produces a narrative summary that helps non technical hiring managers understand a candidate quickly
4. Architecture And Tech Stack
4.1 High level architecture
Web application
- Next.js App Router
- Hybrid static and dynamic routes
- React Server Components where possible
API and AI layer
- Next.js route handlers as thin APIs
- AI SDK for chat, tools, and streaming responses
- Shared TypeScript types between UI and handlers
Data and auth
- Supabase Postgres for core data
- Supabase Auth for user accounts
- Row Level Security (RLS) for multi tenant safety
- Supabase Storage for images and assets
- Supabase vectors for embeddings that power Luminous AI and DeepResearch
Infrastructure
- Hetzner cloud instances
- Docker for app, worker, and proxy containers
- Caddy on Hetzner for TLS and routing
- Node.js runtime for Next.js server and background workers
Background processing
- Worker process for:
  - Crawling external sources for DeepResearch
  - Building embeddings and refreshing AI context
  - Scheduled job imports and cleanup
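The worker side of this setup can be sketched as a claim-process-persist loop. In production the queue would be a table in Supabase (such as research_jobs); this standalone sketch uses an in-memory array, and the job shape and function names are illustrative, not the actual implementation.

```typescript
// Minimal sketch of the worker loop: claim the oldest queued job,
// process it, persist the result. An in-memory array stands in for
// the real job table in Supabase; all names here are illustrative.
type JobStatus = "queued" | "running" | "done";

interface Job {
  id: string;
  kind: "deep_research" | "embed" | "cleanup";
  createdAt: number;
  status: JobStatus;
  result?: string;
}

// Claim the oldest queued job (FIFO) and mark it running so a second
// worker would skip it. Returns undefined when nothing is queued.
function claimNextJob(jobs: Job[]): Job | undefined {
  const next = jobs
    .filter((j) => j.status === "queued")
    .sort((a, b) => a.createdAt - b.createdAt)[0];
  if (next) next.status = "running";
  return next;
}

// Placeholder for the real work: crawling, embedding, or cleanup.
function processJob(job: Job): void {
  job.result = `processed:${job.kind}`;
  job.status = "done";
}

// One tick of the worker loop.
function tick(jobs: Job[]): Job | undefined {
  const job = claimNextJob(jobs);
  if (job) processJob(job);
  return job;
}
```

The real worker would run this in a loop with backoff and use row locking (or a status update with a conditional WHERE) so multiple workers can share one queue safely.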
4.2 Next.js and AI SDK
- Marketing pages, product app, and docs share a single project
- App Router allows streaming server components and data fetching patterns that fit AI use cases
- Luminous AI chat on profile pages
- DeepResearch pipelines that:
  - Fetch external data
  - Extract structured signals
  - Generate summaries and scores
- Each AI feature has a dedicated route handler in app/api/...
- Requests are validated with Zod schemas before anything touches AI models
- Streaming responses power chat UIs so users see tokens as they are generated
- Long running research tasks are split into:
  - A trigger endpoint that enqueues work
  - A worker that persists final results in Supabase
  - A polling or subscription based UI that updates when the report is ready
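The trigger/worker/poll split above can be sketched as a tiny state machine. An in-memory map stands in for the Supabase research_jobs table, and the function names and job id scheme are assumptions for illustration:

```typescript
// Sketch of the trigger -> worker -> poll split for long running research.
// An in-memory map stands in for the Supabase research_jobs table.
type ResearchStatus = "queued" | "running" | "done";

interface ResearchJob {
  id: string;
  handle: string;
  status: ResearchStatus;
  reportJson?: string; // stored as report_json in the real table
}

const researchJobs = new Map<string, ResearchJob>();

// Trigger endpoint: enqueue work and return immediately with a job id.
function startResearch(handle: string): string {
  const id = `job_${researchJobs.size + 1}`;
  researchJobs.set(id, { id, handle, status: "queued" });
  return id;
}

// Worker: picks up the job, does the research, persists the final report.
function runWorker(id: string): void {
  const job = researchJobs.get(id);
  if (!job || job.status !== "queued") return;
  job.status = "running";
  // Placeholder for crawling, extraction, and summarization.
  job.reportJson = JSON.stringify({ handle: job.handle, summary: "..." });
  job.status = "done";
}

// Polling endpoint: the UI calls this until status is "done".
function getStatus(
  id: string
): { status: ResearchStatus; reportJson?: string } | undefined {
  const job = researchJobs.get(id);
  return job && { status: job.status, reportJson: job.reportJson };
}
```

The key property is that the trigger endpoint never blocks on the research itself; the UI only ever sees a status and, eventually, the persisted report.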
4.3 Supabase schema design
Core tables
- profiles for user level metadata
- projects with rich fields for description, stack, links, and media
- research_jobs to track DeepResearch runs for a given identity
- research_sources to store crawled artifacts (repo, post, talk, etc)
- research_events as a log of steps in the pipeline
- embeddings tables that hold vector representations of text for semantic search
Row Level Security rules
- Developers can only edit their own profiles and projects
- Clients can only see data that the developer has made public
- Background jobs have service role access for write heavy tasks, but queries from the app are always filtered through policies
Derived data maintenance
- Updating derived stats such as view counts
- Maintaining search indexes
- Keeping AI context snapshots in sync with the latest portfolio data
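One small building block behind the embeddings tables is splitting portfolio text into chunks before it is embedded. A minimal sketch, with illustrative chunk size and overlap values (real limits depend on the embedding model):

```typescript
// Split long portfolio text into overlapping chunks before embedding.
// Sizes are illustrative; real limits depend on the embedding model.
function chunkText(text: string, maxLen = 200, overlap = 40): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    const end = Math.min(start + maxLen, text.length);
    chunks.push(text.slice(start, end));
    if (end === text.length) break;
    start = end - overlap; // overlap keeps context across chunk boundaries
  }
  return chunks;
}
```

Each chunk is embedded and stored with a reference back to its profile or project row, so vector search results can always be traced to a source.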
4.4 Hetzner and deployment story
- A main app server runs the Next.js app in production mode using Node.js
- A worker container runs background queues
- Caddy routes incoming traffic to the right service and handles certificates
Benefits
- More control over resource allocation and per region deployments
- Easier to colocate additional services as the platform grows
- Predictable costs compared to a pure serverless model
Tradeoffs
- The team owns OS level security and patching
- Zero downtime deploys and migrations need more careful planning
- Monitoring and alerting had to be wired manually with off the shelf tools
5. Implementation Highlights
5.1 Luminous AI on profile pages
- User visits /[username] profile
- Page loads profile, projects, and precomputed embeddings from Supabase
- When the user opens the AI chat, the client sends:
  - The question
  - A profile identifier
- Server side handler:
  - Retrieves relevant chunks from Supabase using vector search
  - Builds a prompt with system instructions plus retrieved context
  - Uses AI SDK to stream a response back to the UI
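The retrieval step can be sketched with plain cosine similarity. In production the ranking happens inside Supabase's vector search, so this in-memory version with toy two-dimensional vectors only illustrates the ranking logic, not the actual query path:

```typescript
// Rank profile chunks by cosine similarity to a question embedding.
// In production this runs as a vector search query in Supabase; the
// toy two-dimensional vectors here are purely illustrative.
interface Chunk {
  text: string;
  embedding: number[];
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the top k chunks to pack into the prompt as context.
function retrieve(question: number[], chunks: Chunk[], k = 3): Chunk[] {
  return [...chunks]
    .sort(
      (x, y) =>
        cosineSimilarity(question, y.embedding) -
        cosineSimilarity(question, x.embedding)
    )
    .slice(0, k);
}
```

The top chunks are concatenated into the prompt alongside the system instructions, which is what lets Luminous AI answer with specifics from the developer's own portfolio.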
5.2 DeepResearch pipeline
- Client hits POST /api/research/start with a handle
- A research job row is created in Supabase
- Worker consumes the job and:
  - Calls external APIs or scrapers for GitHub, LinkedIn, blogs, and talks
  - Stores raw artifacts in research_sources
  - Runs extraction prompts through AI SDK to produce normalized facts
  - Embeds text into vectors for later semantic queries
- Final narrative is generated and stored in research_jobs as report_json
- Frontend polls or subscribes to job status and renders the final report in the client dashboard
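The normalization step in the pipeline above can be sketched as a reducer that folds research_sources rows into structured facts. In production the extraction runs through AI SDK prompts; the source and fact shapes below are hypothetical, chosen only to show the data flow:

```typescript
// Sketch of normalizing crawled artifacts into structured facts.
// In production an extraction prompt does this via AI SDK; this
// reducer only shows the shape of the data flow, and the field
// names (kind, topics, topTopics) are hypothetical.
interface ResearchSource {
  kind: "repo" | "post" | "talk";
  title: string;
  topics: string[];
}

interface ProfileFacts {
  sourceCounts: Record<string, number>;
  topTopics: string[];
}

function normalizeSources(sources: ResearchSource[]): ProfileFacts {
  const sourceCounts: Record<string, number> = {};
  const topicFreq = new Map<string, number>();
  for (const s of sources) {
    sourceCounts[s.kind] = (sourceCounts[s.kind] ?? 0) + 1;
    for (const t of s.topics) topicFreq.set(t, (topicFreq.get(t) ?? 0) + 1);
  }
  // Most frequent topics across all crawled sources, capped at five.
  const topTopics = [...topicFreq.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, 5)
    .map(([topic]) => topic);
  return { sourceCounts, topTopics };
}
```

Facts like these feed the final narrative prompt, which is what gets stored in research_jobs as report_json.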
6. Results And Impact
- Developers get a single link that acts as their “home on the internet” instead of juggling multiple half updated profiles
- Luminous AI absorbs a large chunk of repetitive questions from recruiters and collaborators
- Clients get a faster path from “who is this person” to “should we talk to them” because DeepResearch collapses scattered signals into one narrative
- Using Next.js plus AI SDK meant less glue code and more focus on prompt and UX design
- Supabase provided authentication, storage, SQL, and vectors in one place which reduced the need for additional services
- Hetzner hosting allowed the team to scale predictably while still keeping full control of runtime and networking
7. What Worked Well
- Single repo for marketing, app, and AI routes simplified development and deployment
- Strong typing between Next.js API handlers and front end components reduced integration bugs
- Using embeddings for everything from search to AI context made features like “ask my portfolio” and DeepResearch feel coherent
- Running on Hetzner with Docker and Caddy provided a nice middle ground between raw VPS work and expensive fully managed platforms
8. Challenges And Lessons Learned
- Background research and scraping can be brittle due to upstream rate limits and markup changes
- Keeping AI features fast while using heavier context often required multiple rounds of prompt and retrieval tuning
- With more data, Supabase indexes and RLS rules had to be revisited for performance
- Observability matters: logs, traces, and query insights are critical once AI features hit production traffic
9. Next Steps
- Richer developer onboarding flows that auto import and summarize GitHub and content
- Deeper hiring workflows on the client side, such as shortlists, project briefs, and interview notes linked to profiles
- More automation in DeepResearch, including scheduled refreshes of developer personas
10. Stack Summary
- Next.js for the app
- AI SDK for model access and tooling
- Supabase for identity, data, and vectors
- Hetzner for controlled, cost effective hosting
- Node.js for server and worker processes


