
Performance Profiler

Prompte · 27 February 2026 · Advanced · Claude Code, Codex, Cursor, Windsurf, Aider
performance · profiling · optimization · bundle-size · bottlenecks

What This Skill Does

Transforms your AI coding assistant into a performance-focused engineer. Instead of applying premature optimisations, the assistant follows a measurement-first approach: identify the bottleneck, understand why it's slow, apply a targeted fix, and verify the improvement.

When to Use It

Activate this skill when performance matters:

  • Users are reporting slow page loads or interactions
  • API response times are exceeding SLA thresholds
  • Bundle size has grown and you need to understand why
  • Database queries are taking seconds instead of milliseconds
  • Memory usage is climbing over time (potential leak)
  • You want to proactively audit before a launch or scaling event

What Changes

Your AI assistant will:

  • Always measure before and after any optimisation
  • Profile using appropriate tools (Chrome DevTools, Lighthouse, perf_hooks)
  • Identify common antipatterns (N+1 queries, unnecessary re-renders, blocking I/O)
  • Suggest targeted fixes, not blanket "add caching everywhere" advice
  • Analyse bundle size and recommend tree-shaking or code splitting

Skill File

performance-profiler.skill.md
---
name: performance-profiler
description: >
  Identify performance bottlenecks systematically. Measure before optimising,
  profile runtime and bundle size, fix common antipatterns, and verify
  improvements with data. No premature optimisation.
---

# Performance Profiler

You optimise performance based on measurement, not intuition. Measure first, optimise second, verify third.

## The Performance Protocol

### Step 1: Define the Problem

Before touching code, answer these questions:

- **What is slow?** — Page load? API response? Interaction? Build time?
- **How slow is it?** — Get a baseline number (ms, MB, FPS)
- **What is the target?** — What would "fast enough" look like?
- **Who is affected?** — All users? Specific devices? Under load?

### Step 2: Measure (Baseline)

Take measurements BEFORE making any changes:

```typescript
// Backend: Measure execution time
const start = performance.now()
const result = await expensiveOperation()
const duration = performance.now() - start
console.log(`Operation took ${duration.toFixed(2)}ms`)

// Or use Node.js perf_hooks with an observer that logs each measure
const { performance, PerformanceObserver } = require("perf_hooks")
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`${entry.name}: ${entry.duration.toFixed(2)}ms`)
  }
}).observe({ entryTypes: ["measure"] })
performance.mark("start")
await expensiveOperation()
performance.mark("end")
performance.measure("operation", "start", "end")
```

### Step 3: Profile (Find the Bottleneck)

Don't guess — use profiling tools to find where time is actually spent.
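
For example, a backend CPU profile can be captured straight from the Node.js CLI (`app.js` here is a placeholder for your entry point), and a page audit from Lighthouse:

```shell
# Capture a CPU profile; on exit Node writes a CPU.*.cpuprofile file
# into the working directory, which you can load in Chrome DevTools
node --cpu-prof app.js

# Frontend: audit a page with Lighthouse from the command line
npx lighthouse https://example.com --view
```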

### Step 4: Fix (Targeted Change)

Apply the minimal change that addresses the measured bottleneck.

### Step 5: Verify (Compare to Baseline)

Re-measure using the same method. If the improvement isn't measurable, reconsider.
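
A single run is noisy (GC pauses, JIT warm-up), so one way to make before/after comparisons trustworthy is to repeat the measurement and compare medians. A sketch — `measureMedian` is an illustrative helper, not part of any library:

```typescript
// Sketch: take the median of several timed runs to smooth out noise
async function measureMedian(fn: () => Promise<unknown>, runs = 5): Promise<number> {
  const durations: number[] = []
  for (let i = 0; i < runs; i++) {
    const start = performance.now()
    await fn()
    durations.push(performance.now() - start)
  }
  durations.sort((a, b) => a - b)
  return durations[Math.floor(durations.length / 2)]
}
```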

## Common Performance Antipatterns

### N+1 Database Queries

```typescript
// SLOW: N+1 — one query per user
const users = await db.query("SELECT * FROM users LIMIT 100")
for (const user of users) {
  user.orders = await db.query("SELECT * FROM orders WHERE user_id = ?", [user.id])
}

// FAST: Single query with JOIN or IN clause
const users = await db.query(`
  SELECT u.*, json_agg(o.*) as orders
  FROM users u
  LEFT JOIN orders o ON o.user_id = u.id
  GROUP BY u.id
  LIMIT 100
`)
```

### Unnecessary Re-renders (React)

```tsx
// SLOW: Creates new object reference every render
<Child style={{ color: "red" }} />
<Child onClick={() => handleClick(id)} />

// FAST: Stable references
const style = useMemo(() => ({ color: "red" }), [])
const handleItemClick = useCallback(() => handleClick(id), [id])
<Child style={style} />
<Child onClick={handleItemClick} />
```

But ONLY do this when:
- Child is wrapped in `React.memo`
- Profiler shows the child is re-rendering unnecessarily
- The re-render is actually expensive

### Blocking the Event Loop

```typescript
// SLOW: Blocks the event loop
const result = heavyComputation(largeDataset)

// FAST: Offload to a worker or break into chunks
const result = await runInWorker(heavyComputation, largeDataset)

// Or chunk the work
async function processInChunks<T>(items: T[], chunkSize: number, fn: (item: T) => void) {
  for (let i = 0; i < items.length; i += chunkSize) {
    const chunk = items.slice(i, i + chunkSize)
    chunk.forEach(fn)
    await new Promise(resolve => setTimeout(resolve, 0)) // Yield to event loop
  }
}
```
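
The `runInWorker` helper above is not a built-in; one way to sketch it with Node's `worker_threads` is shown below. The function passed in must be self-contained (no closed-over variables), because its source is stringified and re-evaluated inside the worker:

```typescript
import { Worker } from "node:worker_threads"

// Sketch: run a pure, self-contained function on a worker thread so
// heavy computation does not block the main event loop
function runInWorker<T, R>(fn: (data: T) => R, data: T): Promise<R> {
  // The worker re-creates the function from its source text
  const source = `
    const { parentPort, workerData } = require("node:worker_threads");
    const fn = ${fn.toString()};
    parentPort.postMessage(fn(workerData));
  `
  return new Promise((resolve, reject) => {
    const worker = new Worker(source, { eval: true, workerData: data })
    worker.once("message", resolve)
    worker.once("error", reject)
  })
}
```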

### Missing Database Indexes

```sql
-- Check for slow queries (PostgreSQL)
SELECT query, mean_exec_time, calls
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 20;

-- Check if a query uses an index
EXPLAIN ANALYZE SELECT * FROM orders WHERE user_id = 123;
-- Look for "Seq Scan" (bad) vs "Index Scan" (good)
```
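
If the plan shows a sequential scan on a selective filter column, the usual fix is an index (illustrative table/column names; `CONCURRENTLY` lets PostgreSQL build the index without blocking writes):

```sql
CREATE INDEX CONCURRENTLY idx_orders_user_id ON orders (user_id);
```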

### Large Bundle Size

```bash
# Analyse what's in the bundle
npx webpack-bundle-analyzer stats.json
# Or for Next.js
ANALYZE=true npm run build
```

Common fixes:
- **Dynamic imports** for routes and heavy components
- **Tree-shaking** — Import specific functions: `import { debounce } from "lodash-es"`
- **Replace heavy libraries** — date-fns instead of moment, preact instead of react (if applicable)
- **Remove unused dependencies** — `npx depcheck`
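
A dynamic import can be sketched without any framework — the heavy dependency is loaded on first use instead of at startup (here `node:zlib` stands in for a genuinely heavy library):

```typescript
// Sketch: defer loading a heavy module until first use so it stays
// out of the startup (or initial-bundle) path
async function compress(data: string): Promise<Uint8Array> {
  const { gzipSync } = await import("node:zlib") // loaded lazily, cached after the first import
  return gzipSync(data)
}
```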

## Performance Budgets

Set concrete limits and enforce them:

| Metric | Budget | Tool |
|--------|--------|------|
| First Contentful Paint | < 1.5s | Lighthouse |
| Time to Interactive | < 3.5s | Lighthouse |
| Total bundle size (JS) | < 200KB gzipped | webpack-bundle-analyzer |
| API response (p95) | < 500ms | Application metrics |
| Database query | < 100ms | Query logging |
| Memory (Node.js) | < 512MB RSS | process.memoryUsage() |
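
Budgets only help if something checks them. For the memory row, a minimal runtime check might look like this (the 512MB figure mirrors the table; wiring it into a health check or alert is up to you):

```typescript
// Sketch: warn when resident set size exceeds the budget above
const MEMORY_BUDGET_BYTES = 512 * 1024 * 1024

function checkMemoryBudget(): boolean {
  const { rss } = process.memoryUsage()
  if (rss > MEMORY_BUDGET_BYTES) {
    console.warn(`RSS ${(rss / 1024 / 1024).toFixed(0)}MB exceeds 512MB budget`)
    return false
  }
  return true
}
```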

## Caching Strategy

Only add caching when measurement shows it helps:

1. **Browser cache** — Static assets with content hashes (immutable)
2. **CDN cache** — HTML pages and API responses with appropriate TTLs
3. **Application cache** — In-memory for hot data (use LRU with size limits)
4. **Database cache** — Materialised views for expensive aggregations
5. **Query cache** — Redis for repeated expensive queries
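
For layer 3, an in-memory LRU with a hard size limit can be sketched with a plain `Map`, which preserves insertion order. A minimal sketch, not a production cache — no TTLs, no hit-rate stats:

```typescript
// Sketch: LRU cache bounded by entry count; the oldest key in the
// Map's insertion order is the least recently used
class LruCache<K, V> {
  private map = new Map<K, V>()
  constructor(private maxSize: number) {}

  get(key: K): V | undefined {
    if (!this.map.has(key)) return undefined
    const value = this.map.get(key)!
    this.map.delete(key) // re-insert to mark as most recently used
    this.map.set(key, value)
    return value
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) this.map.delete(key)
    else if (this.map.size >= this.maxSize) {
      this.map.delete(this.map.keys().next().value!) // evict least recently used
    }
    this.map.set(key, value)
  }
}
```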

### Cache Invalidation Rules

- Use cache-busting hashes for static assets
- Set reasonable TTLs — don't cache forever
- Invalidate on write — when data changes, bust the cache
- Monitor cache hit rates — a cache that's rarely hit isn't helping

## What NOT to Do

- **Don't optimise without measuring** — You'll fix the wrong thing
- **Don't cache everything** — Cache invalidation is a source of bugs
- **Don't add indexes blindly** — Each index slows writes
- **Don't micro-optimise** — `for` vs `forEach` rarely matters in practice
- **Don't sacrifice readability** — An unreadable optimisation is future tech debt

Install

Claude Code

Save to your project's .claude/skills/ directory. Claude Code picks it up automatically.

Save to:
.claude/skills/performance-profiler.skill.md
Or use the command line:
mkdir -p .claude/skills/ && curl -o .claude/skills/performance-profiler.skill.md https://prompte.app/skill-shed/performance-profiler/raw
