The question we keep getting asked

"Why are you building so many things at once?"

Fair question. Here's the honest answer: we think the economics of software creation have changed so fundamentally that the old model (one team, one product, 18 months to product-market fit) is no longer the only viable path. With AI agents that can write code, review PRs, and manage deployments, a two-person team can maintain a portfolio of products that would have required 50 people three years ago.

We're calling it the 1,000 Startups thesis. Not because we'll build 1,000 companies — but because we want to find out how many products two people + AI can credibly ship and maintain.

What we shipped

42 product candidates documented. We spent the morning brainstorming and scoring product ideas. Each one got a structured evaluation: market size, technical feasibility, time to MVP, maintenance burden, revenue potential. We used a scoring framework we built in a spreadsheet (yes, a spreadsheet — not everything needs to be a SaaS app).
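If you want the spreadsheet logic in code form, it boils down to a weighted sum. A minimal sketch; the five axes are the ones listed above, but the ratings and weights here are placeholders, not our real numbers:

```python
# One spreadsheet row, roughly translated to Python. Ratings are 1-5.
# "Maintenance burden" and "time to MVP" are rated so that higher is
# better (i.e., already inverted: cheap to maintain = 5).
WEIGHTS = {
    "market_size": 0.2,
    "technical_feasibility": 0.2,
    "time_to_mvp": 0.2,
    "maintenance_burden": 0.2,
    "revenue_potential": 0.2,
}

def score(ratings: dict[str, float]) -> float:
    """Weighted sum of 1-5 ratings; higher means a stronger candidate."""
    return sum(WEIGHTS[axis] * ratings[axis] for axis in WEIGHTS)

print(score({
    "market_size": 2,
    "technical_feasibility": 5,
    "time_to_mvp": 4,
    "maintenance_burden": 5,   # cheap to keep alive
    "revenue_potential": 3,
}))  # -> 3.8
```

In a spreadsheet this is one SUMPRODUCT column per candidate, which is exactly why it didn't need to be an app.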

Scoring framework. The key insight: for a portfolio strategy, "maintenance burden" matters more than "market size." A product that takes 2 hours/month to maintain and generates $500/month is more valuable to us than one that needs constant attention for $5,000/month. We optimized for products that can run on autopilot.
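The arithmetic behind that insight, with one loud assumption: we're pegging "constant attention" at 40 hours/month purely for illustration.

```python
# Revenue per maintenance hour is the metric the insight falls out of.
# $500/month at 2 hours/month vs $5,000/month at "constant attention",
# assumed here to mean 40 hours/month.
autopilot = 500 / 2     # $250 per maintenance hour
needy = 5_000 / 40      # $125 per maintenance hour

# Attention is the scarce resource: two people have maybe 300 working
# hours a month between them. The autopilot product uses under 1% of
# that; the needy one eats more than 13%.
print(autopilot, needy)
```

For a portfolio, the second number is the one that caps how many products you can hold at once.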

14 PRs merged. Across all our repos, we merged 14 pull requests today. Seven of them were authored or co-authored by AI agents. Not rubber-stamped — reviewed by humans, with real feedback cycles. But the initial code, the tests, the documentation? AI-generated. The quality is good enough that we sometimes forget which PRs were human-authored.

StartupGraph hits 6,502 companies. Our open-source startup database crossed 6,500 tracked companies. We added data enrichment pipelines that pull from public sources — Crunchbase, LinkedIn, GitHub, SEC filings. The goal is a free, open alternative to the startup databases that charge $500/month.
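The merge step at the heart of enrichment is conceptually simple. Here's a sketch, not StartupGraph's actual code: the fetchers are stubbed out (the real ones hit the sources above), and the first-source-wins precedence order is an assumption.

```python
from typing import Callable

# Ordered by how much we trust each source; a field set by an earlier
# source is never overwritten by a later one.
SOURCES: list[tuple[str, Callable[[str], dict]]] = [
    ("sec_filings", lambda domain: {}),  # stub
    ("crunchbase",  lambda domain: {}),  # stub
    ("github",      lambda domain: {}),  # stub
    ("linkedin",    lambda domain: {}),  # stub
]

def enrich(company: dict) -> dict:
    """Merge fields from each source; the first source to supply a field wins."""
    merged = dict(company)
    provenance: dict[str, str] = {}
    for name, fetch in SOURCES:
        for field, value in fetch(company["domain"]).items():
            if field not in merged and value is not None:
                merged[field] = value
                provenance[field] = name
    merged["_provenance"] = provenance
    return merged

print(enrich({"name": "ExampleCo", "domain": "example.com"}))
```

First-source-wins keeps the pipeline idempotent: re-running enrichment never clobbers a trusted field with a weaker one.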

What went wrong

The scoring framework went through 4 iterations before we landed on something useful. The first version was too simple (just gut feel on 3 axes). The second was too complex (17 weighted factors). The third was close, but the weights were wrong: it ranked a crypto tax calculator above everything else, which felt off. The fourth nailed it.

Two of the seven AI-authored PRs had subtle bugs that made it through review. One was a timezone-handling issue (always timezones). The other was a SQL query that worked in SQLite but failed in Postgres. Both were caught in staging, not production. But it's a reminder that "AI-authored" doesn't mean "review-optional."
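We won't reproduce the exact query, but the SQLite/Postgres mismatch is a well-known class of bug: SQLite tolerates bare columns in aggregate queries, Postgres doesn't. A minimal reproduction:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT, total REAL);
    INSERT INTO orders VALUES (1, 'a', 10), (2, 'a', 20), (3, 'b', 5);
""")

# SQLite runs this without complaint: `customer` is grouped, but `id`
# is a bare column that is neither grouped nor aggregated, so SQLite
# silently picks it from an arbitrary row. Postgres rejects the same
# query with: column "orders.id" must appear in the GROUP BY clause
# or be used in an aggregate function.
rows = conn.execute(
    "SELECT id, customer, SUM(total) FROM orders GROUP BY customer"
).fetchall()
print(rows)  # e.g. [(1, 'a', 30.0), (3, 'b', 5.0)]
```

Tests that only run against SQLite will never catch this, which is how it got to staging.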

We also had a merge conflict apocalypse in one repo where 4 PRs all touched the same config file. Took an hour to untangle.

By the numbers

  • 220 commits across 12 repositories
  • 42 product candidates evaluated
  • 14 PRs merged (7 AI-authored)
  • 6,502 companies in StartupGraph
  • 4 iterations of the scoring framework
  • 2 subtle bugs in AI-authored code (caught pre-production)
  • 1 merge conflict apocalypse

What it means

Four days. 615 total commits. 5 products live. 42 more candidates in the pipeline.

We're not trying to prove that AI replaces developers. We're trying to prove that AI changes what's possible for a tiny team with strong opinions and a willingness to ship things that aren't perfect.

The 1,000 startups thesis isn't about building 1,000 things. It's about having the infrastructure, the process, and the tools to try 1,000 things and find the 10 that matter.

Tomorrow we keep building.