Getting Started
Ready to build your own stream? Here's the minimal path to get running.
Prerequisites
You'll need the following before getting started:
- Bun runtime
- Supabase account (free tier works)
- AWS account with Terraform installed (for Kafka infrastructure)
- OpenAI API key (for the enricher agent)
Clone the Repository
```bash
git clone https://github.com/irbull/vibedecoding.git
cd vibedecoding/stream-agents
bun install
```
Configure Environment
- Copy `.env.example` to `.env`
- Add your Supabase project URL and service role key
- Add your OpenAI API key
- Kafka credentials come after infrastructure setup (next step); see the sketch after this list for a quick way to verify your configuration
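Bun loads `.env` automatically, so a small startup check can catch missing configuration before any agent runs. A minimal sketch, assuming variable names like `SUPABASE_URL` and `OPENAI_API_KEY` (check `.env.example` for the names the repo actually uses):
```ts
// check-env.ts (run with: bun check-env.ts)
// Fails fast if required configuration is missing. The variable names
// below are assumptions; check .env.example for the ones the repo uses.
const required = [
  "SUPABASE_URL",
  "SUPABASE_SERVICE_ROLE_KEY",
  "OPENAI_API_KEY",
  // "KAFKA_BROKERS", // uncomment once Kafka is set up (next step)
];

const missing = required.filter((name) => !process.env[name]);
if (missing.length > 0) {
  console.error(`Missing env vars: ${missing.join(", ")}`);
  process.exit(1);
}
console.log("Environment looks complete");
```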
Deploy Database Schema
In your Supabase dashboard, open the SQL Editor and run the contents of `schema.sql`.
This creates the `lifestream` schema with all required tables.
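You can confirm the schema deployed with a quick query. A minimal sketch using supabase-js, assuming a `links` table exists in the `lifestream` schema (adjust both names to whatever `schema.sql` actually defines):
```ts
// verify-schema.ts (run with: bun verify-schema.ts)
// A sanity check; the `links` table name is an assumption, so adjust
// it to match schema.sql.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,              // assumed variable names
  process.env.SUPABASE_SERVICE_ROLE_KEY!,
  { db: { schema: "lifestream" } },       // query the lifestream schema, not public
);

const { error } = await supabase.from("links").select("*").limit(1);
console.log(error ? `Schema check failed: ${error.message}` : "Schema looks good");
```
Note that the Supabase API can only reach schemas exposed in your project's API settings, so you may need to add `lifestream` there first.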
Set Up Kafka
The system uses Kafka for durable event transport. The cheapest option is a single EC2 instance running Kafka in KRaft mode (~$15/month):
```bash
cd infrastructure/ec2-kafka
terraform init
terraform apply
```
After Terraform completes, copy the broker address from the output and add it to your `.env` file.
See the repo README for detailed setup instructions and the MSK alternative.
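Before starting the system, it's worth confirming the broker is reachable from your machine. A minimal sketch using kafkajs (the repo may well use a different Kafka client, and `KAFKA_BROKERS` is an assumed variable name):
```ts
// kafka-check.ts (run with: bun kafka-check.ts)
// A connectivity sketch using kafkajs; the repo's own Kafka client and
// env variable names may differ (KAFKA_BROKERS is an assumption).
import { Kafka } from "kafkajs";

const kafka = new Kafka({
  clientId: "setup-check",
  brokers: (process.env.KAFKA_BROKERS ?? "localhost:9092").split(","),
});

const admin = kafka.admin();
await admin.connect();
console.log("Topics:", await admin.listTopics());
await admin.disconnect();
```
If this prints a topic list (empty is fine before `kafka:init`), the broker is up and your EC2 security group allows traffic from your IP.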
Start the System
You'll need multiple terminal windows. Start each component:
```bash
# Terminal 1: Initialize and seed
bun run kafka:init    # Create Kafka topics
bun run seed          # Add test data

# Terminal 2: Publisher (DB → Kafka)
bun run kafka:publish

# Terminal 3: Fetcher agent
bun run agent:fetcher

# Terminal 4: Enricher agent
bun run agent:enricher

# Terminal 5: Materializer (Kafka → DB)
bun run kafka:materialize
```
Once running, any new links added to the database will flow through the pipeline: fetched, enriched with AI-generated tags and summaries, and materialized back to queryable state.
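To see that end to end, add a link of your own and watch it move through the terminals. A hedged sketch using supabase-js (the `links` table, `url` column, and `lifestream` schema are assumptions; match them to `schema.sql`):
```ts
// add-link.ts (run with: bun add-link.ts)
// Feeds one link into the pipeline. The table, column, and schema names
// are assumptions; match them to what schema.sql actually defines.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!,
  { db: { schema: "lifestream" } },
);

const { error } = await supabase
  .from("links")
  .insert({ url: "https://example.com/an-article-worth-reading" });

console.log(error ? `Insert failed: ${error.message}` : "Link queued for the pipeline");
```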
Next Steps
- Read the Start Here guide series to understand the architecture
- Check the repo README for troubleshooting
- Explore the Stream to see example outputs