---
name: performance-profiler
description: >
  Identify performance bottlenecks systematically. Measure before optimising,
  profile runtime and bundle size, fix common antipatterns, and verify
  improvements with data. No premature optimisation.
---
# Performance Profiler
You optimise performance based on measurement, not intuition. Measure first, optimise second, verify third.
## The Performance Protocol
### Step 1: Define the Problem
Before touching code, answer these questions:
- **What is slow?** — Page load? API response? Interaction? Build time?
- **How slow is it?** — Get a baseline number (ms, MB, FPS)
- **What is the target?** — What would "fast enough" look like?
- **Who is affected?** — All users? Specific devices? Under load?
### Step 2: Measure (Baseline)
Take measurements BEFORE making any changes:
```typescript
// Backend: measure execution time
// (performance is a global in browsers and in Node.js 16+)
const start = performance.now()
const result = await expensiveOperation()
const duration = performance.now() - start
console.log(`Operation took ${duration.toFixed(2)}ms`)

// Or use marks and measures, collected by a PerformanceObserver
const { PerformanceObserver } = require("node:perf_hooks")
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`${entry.name}: ${entry.duration.toFixed(2)}ms`)
  }
})
observer.observe({ entryTypes: ["measure"] })

performance.mark("start")
await expensiveOperation()
performance.mark("end")
performance.measure("operation", "start", "end")
```
### Step 3: Profile (Find the Bottleneck)
Don't guess — use profiling tools to find where time is actually spent: the Chrome DevTools Performance panel for frontend work, `node --cpu-prof` (or Clinic.js's `clinic flame`) for Node.js, and the React DevTools Profiler for render performance.
### Step 4: Fix (Targeted Change)
Apply the minimal change that addresses the measured bottleneck.
### Step 5: Verify (Compare to Baseline)
Re-measure with the same method and inputs as the baseline. If the improvement isn't measurable, revert the change and profile again.
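A single timing run is noisy, so compare medians over several runs rather than one-off numbers. A minimal sketch (the `medianTime` name and default run count are illustrative):

```typescript
// Run the same operation several times and report the median duration,
// so one noisy run doesn't get mistaken for a real improvement
async function medianTime(fn: () => void | Promise<void>, runs = 11): Promise<number> {
  const durations: number[] = []
  for (let i = 0; i < runs; i++) {
    const start = performance.now()
    await fn()
    durations.push(performance.now() - start)
  }
  durations.sort((a, b) => a - b)
  return durations[Math.floor(runs / 2)]
}
```

Run it against the baseline and the changed code with identical inputs, and compare the two medians.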
## Common Performance Antipatterns
### N+1 Database Queries
```typescript
// SLOW: N+1 — one query per user
const users = await db.query("SELECT * FROM users LIMIT 100")
for (const user of users) {
  user.orders = await db.query("SELECT * FROM orders WHERE user_id = ?", [user.id])
}

// FAST: Single query with JOIN or IN clause
// (FILTER + COALESCE gives users with no orders an empty array, not [null])
const users = await db.query(`
  SELECT u.*, COALESCE(json_agg(o.*) FILTER (WHERE o.id IS NOT NULL), '[]') AS orders
  FROM users u
  LEFT JOIN orders o ON o.user_id = u.id
  GROUP BY u.id
  LIMIT 100
`)
```
### Unnecessary Re-renders (React)
```tsx
// SLOW: Creates new object reference every render
<Child style={{ color: "red" }} />
<Child onClick={() => handleClick(id)} />
// FAST: Stable references
const style = useMemo(() => ({ color: "red" }), [])
const handleItemClick = useCallback(() => handleClick(id), [id])
<Child style={style} />
<Child onClick={handleItemClick} />
```
But ONLY do this when:
- Child is wrapped in `React.memo`
- Profiler shows the child is re-rendering unnecessarily
- The re-render is actually expensive
### Blocking the Event Loop
```typescript
// SLOW: Blocks the event loop
const result = heavyComputation(largeDataset)

// FAST: Offload to a worker or break into chunks
const result = await runInWorker(heavyComputation, largeDataset)

// Or chunk the work
async function processInChunks<T>(items: T[], chunkSize: number, fn: (item: T) => void) {
  for (let i = 0; i < items.length; i += chunkSize) {
    const chunk = items.slice(i, i + chunkSize)
    chunk.forEach(fn)
    await new Promise(resolve => setTimeout(resolve, 0)) // Yield to the event loop
  }
}
```
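`runInWorker` is not a built-in; one way to sketch it with `node:worker_threads`, assuming the function is self-contained (no closed-over variables) and its input is structured-cloneable:

```typescript
import { Worker } from "node:worker_threads"

// Serialise a self-contained function and run it in a one-off worker
// thread; the main thread's event loop stays free while it runs
function runInWorker<T, R>(fn: (data: T) => R, data: T): Promise<R> {
  const source = `
    const { parentPort, workerData } = require("node:worker_threads");
    const fn = ${fn.toString()};
    parentPort.postMessage(fn(workerData));
  `
  return new Promise((resolve, reject) => {
    const worker = new Worker(source, { eval: true, workerData: data })
    worker.once("message", resolve)
    worker.once("error", reject)
  })
}
```

For repeated calls, a worker pool (e.g. piscina) avoids paying thread start-up cost on every task.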
### Missing Database Indexes
```sql
-- Check for slow queries (PostgreSQL)
SELECT query, mean_exec_time, calls
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 20;
-- Check if a query uses an index
EXPLAIN ANALYZE SELECT * FROM orders WHERE user_id = 123;
-- Look for "Seq Scan" (bad) vs "Index Scan" (good)
```
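If `EXPLAIN ANALYZE` shows a sequential scan on a selective filter, the usual fix is an index on the filtered column (table and column names follow the example above):

```sql
-- Add an index on the column used in the WHERE clause,
-- then re-run EXPLAIN ANALYZE to confirm an Index Scan
CREATE INDEX idx_orders_user_id ON orders (user_id);
```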
### Large Bundle Size
```bash
# Analyse what's in the bundle
npx webpack-bundle-analyzer stats.json
# Or for Next.js
ANALYZE=true npm run build
```
Common fixes:
- **Dynamic imports** for routes and heavy components
- **Tree-shaking** — Import specific functions: `import { debounce } from "lodash-es"`
- **Replace heavy libraries** — date-fns instead of moment, preact instead of react (if applicable)
- **Remove unused dependencies** — `npx depcheck`
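A dynamic import defers loading a module until first use. This sketch uses a built-in module as a stand-in for a heavy dependency:

```typescript
// The module is loaded and parsed only on the first call, not at startup
async function basenameOf(filePath: string): Promise<string> {
  const path = await import("node:path") // stand-in for a heavy library
  return path.basename(filePath)
}
```

In a bundler, `import()` also creates a separate chunk, so the code only ships to users who actually hit that path.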
## Performance Budgets
Set concrete limits and enforce them:
| Metric | Budget | Tool |
|--------|--------|------|
| First Contentful Paint | < 1.5s | Lighthouse |
| Time to Interactive | < 3.5s | Lighthouse |
| Total bundle size (JS) | < 200KB gzipped | webpack-bundle-analyzer |
| API response (p95) | < 500ms | Application metrics |
| Database query | < 100ms | Query logging |
| Memory (Node.js) | < 512MB RSS | process.memoryUsage() |
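Budgets only hold if something fails when they're exceeded. A sketch of a CI-style bundle-size check (the file path and budget constant are illustrative):

```typescript
import { readFileSync } from "node:fs"
import { gzipSync } from "node:zlib"

const BUNDLE_BUDGET_BYTES = 200 * 1024 // 200KB gzipped, per the table above

// Throw (failing the CI step) if the gzipped artefact exceeds the budget
function checkBundleBudget(filePath: string, budget = BUNDLE_BUDGET_BYTES): void {
  const gzippedSize = gzipSync(readFileSync(filePath)).length
  if (gzippedSize > budget) {
    throw new Error(`${filePath}: ${gzippedSize} bytes gzipped exceeds budget of ${budget}`)
  }
}
```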
## Caching Strategy
Only add caching when measurement shows it helps:
1. **Browser cache** — Static assets with content hashes (immutable)
2. **CDN cache** — HTML pages and API responses with appropriate TTLs
3. **Application cache** — In-memory for hot data (use LRU with size limits)
4. **Database cache** — Materialised views for expensive aggregations
5. **Query cache** — Redis for repeated expensive queries
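For point 3, a minimal in-memory LRU sketch that relies on `Map`'s insertion order (capacity handling only, no TTLs):

```typescript
// Size-bounded LRU: Map iteration order is insertion order, so the
// first key is always the least recently used entry
class LRUCache<K, V> {
  private map = new Map<K, V>()
  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    if (!this.map.has(key)) return undefined
    const value = this.map.get(key)!
    this.map.delete(key)
    this.map.set(key, value) // move to most-recently-used position
    return value
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) this.map.delete(key)
    this.map.set(key, value)
    if (this.map.size > this.capacity) {
      this.map.delete(this.map.keys().next().value!) // evict LRU entry
    }
  }
}
```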
### Cache Invalidation Rules
- Use cache-busting hashes for static assets
- Set reasonable TTLs — don't cache forever
- Invalidate on write — when data changes, bust the cache
- Monitor cache hit rates — a cache that's rarely hit isn't helping
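The invalidate-on-write rule, sketched with in-memory `Map`s standing in for Redis and the database (all names are illustrative):

```typescript
const cache = new Map<string, string>() // stand-in for Redis
const db = new Map<string, string>()    // stand-in for the database

// Read-through: serve from cache, populate on miss
function readUser(id: string): string | undefined {
  const key = `user:${id}`
  if (cache.has(key)) return cache.get(key)
  const value = db.get(id)
  if (value !== undefined) cache.set(key, value)
  return value
}

// Invalidate on write: when data changes, bust the cached entry
function writeUser(id: string, name: string): void {
  db.set(id, name)
  cache.delete(`user:${id}`)
}
```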
## What NOT to Do
- **Don't optimise without measuring** — You'll fix the wrong thing
- **Don't cache everything** — Cache invalidation is a source of bugs
- **Don't add indexes blindly** — Each index slows writes
- **Don't micro-optimise** — `for` vs `forEach` rarely matters in practice
- **Don't sacrifice readability** — An unreadable optimisation is future tech debt