FrictionLens
Full-stack AI review intelligence tool. Live at frictionlens.net. Designed and shipped the marketing site, the dashboard, and the shareable Vibe Report pages end-to-end on Next.js, TypeScript, Supabase, and Google Gemini.
Role
Solo Designer & Engineer
Duration
4 months
Type
Project
Location
Remote
Cost-aware classifier routing
AI only when needed
Review sources unified
App Store, Play Store, Reddit, CSV
Per-user key encryption
BYOK for unlimited analyses
Forever tier
free for indie devs
Overview
FrictionLens is a full-stack AI review intelligence tool live at frictionlens.net. Designed and shipped end-to-end — marketing site, dashboard, shareable Vibe Report pages — on Next.js, TypeScript, Supabase (Postgres + RLS + Edge Functions), Google Gemini, and Upstash Redis. The strategic bet: indie developers can't read every review, but the patterns that actually predict churn are buried in 2-3 star reviews. The hard part of shipping an AI tool isn't the AI. It's the unit economics.
The Challenge
Two problems made the obvious "summarize reviews with an LLM" pitch hard. First, naively sending every review through Gemini blows past free-tier limits in a single analysis, killing the freemium model the indie audience actually needs. Second, a black-box sentiment score feels magical but is useless — engineering needs to know what to fix, not how angry users feel.
LLM cost curves: routing every review through Gemini collapses the $0 freemium tier
Generic sentiment scores don't tell engineering what to ship
Indie tools die without a distribution channel; no marketing budget
Trust: black-box AI feels suspicious to the audience that's most cost-sensitive
The free Gemini API tier enforces a 10-RPM limit — naive batching fails immediately
My Approach
Cost-Aware Classifier Design
Made cost-efficiency the primary product constraint before any model selection. Benchmarked classifier tiers against Gemini API cost curves to stay within free-tier volume across typical review workloads. The whole architecture exists to answer "how do we ship this for free?"
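An illustrative back-of-envelope shows why routing, not model choice, dominates the cost curve. All numbers below (token counts, per-token price, LLM share) are hypothetical placeholders, not Gemini's actual pricing or FrictionLens's measured figures:

```typescript
// Hypothetical numbers only: chosen to illustrate the shape of the curve,
// not the real Gemini pricing or FrictionLens's measured token counts.
const TOKENS_PER_REVIEW = 120;      // assumed average tokens per review
const PRICE_PER_1K_TOKENS = 0.0005; // assumed paid-tier rate, USD

// Naive approach: every review is sent to the LLM.
function naiveCost(reviewCount: number): number {
  return (reviewCount * TOKENS_PER_REVIEW * PRICE_PER_1K_TOKENS) / 1000;
}

// Routed approach: only a fraction of reviews ever reach the LLM.
function routedCost(reviewCount: number, llmShare = 0.2): number {
  return naiveCost(reviewCount * llmShare);
}
```

With ~80% of reviews resolved by rules, the routed cost is a fifth of the naive cost at every volume, which is the difference between a freemium tier that holds and one that collapses.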
3-Tier Routing (Not Embeddings)
Short reviews → keyword + star rules, no AI. Medium-length reviews → keyword sentiment, still no AI. Long batches → Gemini with Zod-validated structured outputs and 6.5s inter-batch throttling to respect the 10-RPM free-tier limit. Roughly 80% of reviews never touch the AI.
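The routing logic reduces to something like this sketch. Thresholds, keyword lists, and function names are illustrative, not the shipped values:

```typescript
// Hypothetical sketch of the 3-tier router. Length cutoffs and keyword
// lists are placeholders, not FrictionLens's tuned values.
type Review = { text: string; stars: number };
type Verdict =
  | { tier: "rules"; sentiment: "positive" | "negative" | "neutral" }
  | { tier: "keywords"; sentiment: "positive" | "negative" | "neutral" }
  | { tier: "llm" }; // batched to Gemini with Zod-validated output

const NEGATIVE = ["crash", "bug", "slow", "refund", "uninstall"];
const POSITIVE = ["love", "great", "awesome", "perfect"];

function classify(review: Review): Verdict {
  const words = review.text.toLowerCase();
  // Tier 1: very short reviews resolve on the star rating alone. No API call.
  if (review.text.length < 40) {
    const sentiment =
      review.stars >= 4 ? "positive" : review.stars <= 2 ? "negative" : "neutral";
    return { tier: "rules", sentiment };
  }
  // Tier 2: medium-length reviews use keyword sentiment. Still no API call.
  if (review.text.length < 200) {
    const neg = NEGATIVE.filter((k) => words.includes(k)).length;
    const pos = POSITIVE.filter((k) => words.includes(k)).length;
    return {
      tier: "keywords",
      sentiment: neg > pos ? "negative" : pos > neg ? "positive" : "neutral",
    };
  }
  // Tier 3: only long reviews are batched to the LLM.
  return { tier: "llm" };
}

// 10 requests/min on the free tier means one batch every 6s minimum;
// 6.5s adds headroom against clock skew.
const INTER_BATCH_DELAY_MS = 6_500;
const delay = (ms: number) => new Promise((r) => setTimeout(r, ms));

async function processLlmBatches(
  batches: Review[][],
  send: (b: Review[]) => Promise<void>
): Promise<void> {
  for (const batch of batches) {
    await send(batch);
    await delay(INTER_BATCH_DELAY_MS);
  }
}
```

The design choice is that the cheap tiers run synchronously with zero infrastructure, while only the expensive tier pays the throttling tax.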
BYOK Architecture
Built bring-your-own-key with AES-256-GCM encrypted per-user key storage so power users run unlimited analyses on their own Gemini key while the freemium tier serves curious visitors on 2 free runs. Lets the tool stay $0 forever without a paywall.
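The per-user key storage can be sketched with Node's built-in crypto module. The envelope layout (iv | authTag | ciphertext) and the dummy master key below are assumptions for illustration, not the shipped implementation:

```typescript
// Minimal AES-256-GCM sketch using Node's crypto module. The envelope
// layout and master-key handling are illustrative assumptions.
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Placeholder only: in production the 32-byte master key comes from a
// secret manager or environment variable, never a constant.
const MASTER_KEY = Buffer.alloc(32, 1);

export function encryptApiKey(plaintext: string, masterKey: Buffer = MASTER_KEY): string {
  const iv = randomBytes(12); // 96-bit nonce, the recommended size for GCM
  const cipher = createCipheriv("aes-256-gcm", masterKey, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag(); // 16-byte integrity tag
  // Store iv + tag + ciphertext together so decryption is self-contained.
  return Buffer.concat([iv, tag, ciphertext]).toString("base64");
}

export function decryptApiKey(envelope: string, masterKey: Buffer = MASTER_KEY): string {
  const buf = Buffer.from(envelope, "base64");
  const iv = buf.subarray(0, 12);
  const tag = buf.subarray(12, 28);
  const ciphertext = buf.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", masterKey, iv);
  decipher.setAuthTag(tag); // decryption throws if the ciphertext was tampered with
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```

GCM's authentication tag matters here: a tampered ciphertext fails loudly at decryption instead of yielding a garbage API key that silently breaks a user's analyses.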
Vibe Reports as Distribution
Designed shareable public Vibe Report pages with OG-image generation so every user sharing their report becomes acquisition. No ad budget, no growth team — the product itself is the distribution channel.
Marketing Site as Proof of Taste
Designed and shipped frictionlens.net (positioning, type pairing, motion, the search-driven hero) alongside the product so the landing page IS the demo. One cohesive thing, not "marketing site eventually."
Key Decisions
Cost-efficiency as the primary product constraint, not a backlog item
Context
Every "AI review analyzer" pitch deck assumes you can afford the API calls. The freemium model collapses in a week if every analysis costs $0.40. Treating cost as the first design constraint changed every downstream architectural choice.
Outcome
Stayed within Gemini free-tier limits for typical workloads, making the $0 forever tier real. The architecture is the moat, not the model choice.
3-tier rule-based classifier over embeddings or always-AI
Context
Embedding-based clustering is the "smart" move but adds latency, infrastructure, and cost. Always-AI is simpler but kills the freemium economics. Rule-based routing carries none of those costs and lets ~80% of reviews skip AI entirely.
Outcome
Short reviews resolve instantly with no API call. Long reviews still get the Gemini treatment with Zod-validated structured outputs. Free tier survives a real workload.
BYOK with AES-256-GCM, not a SaaS subscription
Context
The audience is indie devs — exactly the people who will run from a $29/mo subscription. A BYOK path lets them bring a free Google AI Studio key and run unlimited analyses.
Outcome
Pricing page reads "Free for indie devs. $0 forever." That positioning is only honest because the BYOK path exists.
Shareable public Vibe Report pages with OG-image generation
Context
Indie tools without a marketing budget need built-in distribution. Every share of a Vibe Report needs to look good as a link preview, not as a generic URL.
Outcome
Each Vibe Report is a public URL with a custom OG image showing the app icon, vibe score, and top friction. Sharing becomes acquisition.
Results & Impact
Live at frictionlens.net. Full-stack ship across marketing site, dashboard, and shareable report pages. 3 classifier tiers running. 4 review sources unified (App Store, Play Store, Reddit, CSV). 256-bit AES-GCM per-user key encryption. Freemium tier holds because the architecture earns the right to be free.
Learnings
What did this teach me?
Cost engineering is the unsung hero of free AI products. The hard part of shipping an AI tool isn't the AI — it's the unit economics. Routing transparency (showing users which tier their reviews hit) builds more trust than hiding the machinery; users repeatedly cited "I can see why this is free" as the thing that made them try it.
What would I do differently?
More iteration on classifier accuracy at the medium tier — keyword sentiment is brittle for sarcasm and dual-sentiment reviews. Add automatic cohort comparison across app versions so release impact is computed by default, not by manual diff. And ship a Linear/Jira export earlier; the gap between "here's the friction" and "here's a ticket" is where the value compounds.
Skills Used