AI-Generated Test Suite
Testing is where AI saves the most time relative to manual work. AI generates comprehensive test suites in seconds — test suites that would take an hour to write by hand. The key is giving AI the actual code to test, not just a description.
Backend API Tests
Write a complete test suite for the Taskflow API using Vitest and supertest.
Here's the Express app: [paste index.ts]
Auth routes: [paste auth.ts]
Project routes: [paste projects.ts]
Task routes: [paste tasks.ts]
Test structure:
1. Auth tests: register (success + duplicate email + validation), login (success + wrong password + wrong email), token validation
2. Project tests: create (success + missing name), list (only own projects), archive, verify archived projects hidden from list
3. Task tests: create (success + validation), update, delete, move (status change + position calculation), verify completed_at set when moved to done
4. Authorization tests: verify all endpoints reject requests without token, verify users can't access other users' projects
Use a fresh in-memory SQLite database for each test file. Create helper functions for registering a test user and getting a token.
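Before generating the suite, it helps to pin down what the move tests should assert about positions. A minimal sketch of the float-midpoint math (the names and GAP value here are illustrative assumptions, not the actual tasks.ts implementation):

```typescript
// Hedged sketch of the float-position math the move tests exercise.
// GAP and the function name are illustrative, not the real tasks.ts code.
const GAP = 1024; // spacing used when appending to a column

// Position for a task dropped between two neighbors (either may be absent).
function positionBetween(before: number | null, after: number | null): number {
  if (before === null && after === null) return GAP; // empty column
  if (before === null) return (after as number) / 2; // dropped at the top
  if (after === null) return before + GAP; // dropped at the bottom
  return (before + after) / 2; // midpoint of neighbors
}
```

The midpoint halves the available gap on every insert, so a real implementation eventually needs to rebalance the column; a test that moves a task repeatedly between the same two neighbors is a good way to surface that edge.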
// test/helpers.ts
import request from 'supertest';
import { randomUUID } from 'node:crypto';
import { app } from '../src/index';

export async function createTestUser(overrides = {}) {
  const userData = {
    email: `test-${randomUUID()}@taskflow.app`,
    password: 'testpassword123',
    name: 'Test User',
    ...overrides,
  };
  const res = await request(app)
    .post('/auth/register')
    .send(userData);
  return {
    user: res.body.user,
    token: res.body.token,
    ...userData,
  };
}

export async function createTestProject(token: string, name = 'Test Project') {
  const res = await request(app)
    .post('/projects')
    .set('Authorization', `Bearer ${token}`)
    .send({ name });
  return res.body.project;
}
Run the test suite:
✓ auth.test.ts (7 tests)
✓ projects.test.ts (6 tests)
✓ tasks.test.ts (10 tests)
✓ authorization.test.ts (5 tests)
Test Files 4 passed (4)
Tests 28 passed (28)
Duration 1.8s
28 tests, generated in under 2 minutes, running in under 2 seconds. If any fail, fix the issue (it's usually a test assumption that doesn't match the actual API response shape) and re-run.
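One assertion worth extra scrutiny is the completed_at rule from the task tests. The expected behavior can be isolated in a few lines (the task shape and status values here are assumed from the Taskflow schema; the real logic lives in tasks.ts):

```typescript
// Hedged sketch: the completed_at rule the "moved to done" tests assert.
type TaskStatus = 'todo' | 'in_progress' | 'done';

interface Task {
  id: string;
  status: TaskStatus;
  completed_at: string | null;
}

// Fields to update when a task changes status:
// stamp completed_at on entering "done", clear it on leaving.
function statusChangeFields(task: Task, next: TaskStatus): Partial<Task> {
  if (next === 'done' && task.status !== 'done') {
    return { status: next, completed_at: new Date().toISOString() };
  }
  if (next !== 'done' && task.status === 'done') {
    return { status: next, completed_at: null };
  }
  return { status: next };
}
```

If the generated endpoint only sets completed_at and never clears it when a task leaves done, that is exactly the kind of real bug a failing test should be allowed to expose.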
Pro Tip: Fix Failing Tests Critically
When AI-generated tests fail, don't blindly fix the test to match the code. Ask: is the code wrong or is the test wrong? About 30% of the time, the failing test has found a real bug. The test expected correct behavior, but the code has a flaw. This is the highest-value outcome of testing — let failing tests challenge your assumptions.
Security Audit
Before deploying anything, run a security-focused review. This is the checklist from Chapter 11 of the manual, applied to our specific codebase.
Perform a security audit on the complete Taskflow codebase.
Backend: [paste all server files]
Frontend: [paste all client files]
Check for:
- SQL injection vulnerabilities
- XSS in rendered content
- Hardcoded secrets or credentials
- Missing input validation
- Authentication bypasses
- Insecure dependencies
- Data exposure in API responses (e.g., returning password_hash)
Rate each finding as Critical, High, or Medium. Show the fix for each.
Security Audit Results
The critical finding — password_hash exposed in API responses — is exactly the kind of vulnerability that slips through when you're building fast. Fix it immediately: add a sanitizeUser function that strips password_hash before returning any user object.
// utils/sanitize.ts
import type { User } from '@taskflow/shared';

export function sanitizeUser(row: Record<string, unknown>): User {
  const { password_hash, ...user } = row;
  return user as User;
}

// Apply in auth routes:
// Before: res.json({ token, user })
// After:  res.json({ token, user: sanitizeUser(user) })
The password_hash exposure is not hypothetical — AI-generated auth code frequently returns the full database row including sensitive fields. This is exactly why the manual emphasizes reviewing auth code line by line. The security audit found it; deploy without it and you're leaking password hashes.
Deployment
For a weekend build, we want the simplest possible deployment. Two options, both prompt-driven:
Option A: Railway (Recommended)
Prepare the Taskflow app for deployment to Railway.
Current structure: monorepo with /client (Vite + React) and /server (Express + SQLite).
I need:
1. Production build script for the client (outputs to /server/public)
2. Express serves the static frontend in production
3. Environment variables: JWT_SECRET, NODE_ENV, PORT
4. CORS updated to allow the production domain
5. SQLite database file persists in a Railway volume
6. Dockerfile or railway.json config
// server/src/index.ts — production static serving
import path from 'path';

if (process.env.NODE_ENV === 'production') {
  app.use(express.static(path.join(__dirname, '../public')));

  // SPA fallback: serve index.html for all non-API routes
  app.get('*', (req, res, next) => {
    if (
      req.path.startsWith('/auth') ||
      req.path.startsWith('/projects') ||
      req.path.startsWith('/tasks')
    ) {
      return next();
    }
    res.sendFile(path.join(__dirname, '../public/index.html'));
  });
}
Option B: VPS with Docker
If you prefer a VPS, ask AI to generate a Dockerfile and docker-compose.yml. The same build process applies — client builds to static files, server serves them. Add a reverse proxy (Caddy is simplest) for HTTPS.
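For the Caddy piece, the entire HTTPS setup can be this small (domain and port are placeholders; Caddy provisions and renews certificates automatically):

```
taskflow.example.com {
    reverse_proxy localhost:3000
}
```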
-- Building client...
-- Client built to server/public (480KB)
-- Building server...
-- Server built to server/dist
$ railway up
-- Deployed to https://taskflow-abc123.railway.app
Demo vs Production: Draw the Line
A weekend build can be deployable without being production-grade in every dimension. Make the boundary explicit so you know what is "good enough for demo" versus what is required before real customer traffic.
Demo-Safe (Ship Tonight)
Production-Required (Before Real Users)
Demo deployment is for feedback and learning. Production deployment is a reliability and risk commitment. If you cannot confidently answer "yes" to every production-required item above, keep the launch labeled as beta/demo and limit exposure.
The Retrospective
The build is done. Before closing the laptop, take 5 minutes for the Meta Loop from Chapter 19: what worked, what didn't, what you'd do differently.
We just built Taskflow in a weekend using AI-first methodology. Retrospective time:
1. What was the most valuable prompt or technique?
2. Where did AI waste the most time (wrong output, needed heavy editing)?
3. What should I add to my personal prompt library?
4. If I started this project over, what would I do differently?
Most valuable: The Design → Critique step in Part 1. Finding the password_hash exposure, the float position issue, and the missing CORS before writing code saved at least 2 hours of debugging.
Most time wasted: The drag-and-drop integration. AI-generated DnD code often has subtle bugs with drop zone detection and position calculation. This required the most manual debugging. Next time: start with a minimal DnD prototype, test it works, then integrate into the full board.
Save to prompt library: The authFetch helper pattern, the security audit prompt, and the "fix top 5, leave the rest" refactor prompt.
Differently: Define the /auth/me endpoint in Part 1 instead of discovering it during frontend implementation. More generally: walk through the frontend component list in planning and identify every API endpoint they'll need — not just the CRUD endpoints.
The retrospective isn't optional feel-good reflection. It's the mechanism that makes your next weekend build faster. Every insight gets captured. Every mistake gets prevented next time. After 5-10 projects with retrospectives, your AI workflow is unrecognizably better than when you started.
Weekend Complete
In roughly 8 hours of focused work across one weekend, you built and deployed a full-stack task management application with authentication, a kanban board, drag-and-drop, filtering, responsive design, 28 API tests, and a security audit.
Not because you typed faster. Not because you skipped quality. Because you applied a methodology: design before code, critique before implementation, review before shipping — with AI accelerating every step.
That's AI-first development. Now go build something real.
Complete Tutorial — What We Built
- Full-stack app: React 18 + TypeScript + Tailwind frontend, Express + SQLite backend
- JWT authentication with bcrypt, protected routes, token persistence
- Projects with CRUD, soft-delete archiving, and ownership control
- Tasks with CRUD, kanban status management, and float-based positioning
- Drag-and-drop kanban board with optimistic updates
- Filtering by text search, priority, and assignee
- Toast notifications, responsive design, keyboard shortcuts
- 28 API tests covering auth, CRUD, authorization, and edge cases
- Security audit: password hash exposure found and fixed, all checks passed
- Deployed to production with static file serving and environment configuration