This function returns the wrong total when the input list contains duplicates.
Expected: sum of unique values. Actual: counts duplicates twice.
Walk through the logic step by step and tell me which line introduces the bug.
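A minimal example of the kind of bug this prompt targets. The function is hypothetical, not from any real codebase: a total that should sum unique values but iterates the raw list.

```typescript
// Hypothetical buggy function: sums every element, so duplicates count twice.
function totalUnique(values: number[]): number {
  let total = 0;
  for (const v of values) { // bug: iterates the raw list, duplicates included
    total += v;
  }
  return total;
}

// Fixed version: deduplicate before summing.
function totalUniqueFixed(values: number[]): number {
  let total = 0;
  for (const v of new Set(values)) { // Set drops duplicates
    total += v;
  }
  return total;
}

console.log(totalUnique([1, 2, 2, 3]));      // 8, wrong
console.log(totalUniqueFixed([1, 2, 2, 3])); // 6, correct
```

Pasting the buggy version under the prompt above should produce a walkthrough that points at the `for (const v of values)` line.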
Review this PR for issues at three severity levels:
- Critical — will break in production (security, data loss, race conditions)
- Major — will break under load or edge cases (performance, error handling)
- Minor — worth fixing but won't block merge
Ignore style, naming, and "consider" suggestions. If you find nothing critical or major, say so explicitly.
Compare three architectures for authentication in a Next.js app with Postgres:
1. JWT with refresh tokens, stored client-side
2. Server-side sessions with HTTP-only cookies
3. Auth provider (Clerk/Auth0)
For each, tell me:
- When it's the right call
- The failure mode I'll hit at scale
- Real cost (eng hours + $/month at 10k users)
Don't recommend one. I'll pick after I see the comparison.
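For orientation, option 2 in miniature: a server-side session keyed by an HTTP-only cookie. The in-memory store and cookie attributes are illustrative; a production version would persist sessions in Postgres and likely sign the ID.

```typescript
import { randomUUID } from "node:crypto";

// Illustrative in-memory store; a real app would back this with Postgres.
const sessions = new Map<string, { userId: string }>();

// On login: create a session server-side and return the Set-Cookie header value.
function createSession(userId: string): { sessionId: string; setCookie: string } {
  const sessionId = randomUUID();
  sessions.set(sessionId, { userId });
  // HttpOnly keeps the ID out of reach of client-side JS; the browser just echoes it back.
  const setCookie = `sid=${sessionId}; HttpOnly; Secure; SameSite=Lax; Path=/`;
  return { sessionId, setCookie };
}

// On each request: resolve the cookie back to a user, or null if unknown.
function getUser(sessionId: string): string | null {
  return sessions.get(sessionId)?.userId ?? null;
}
```

The trade-off the prompt is probing: this design needs a store lookup on every request, which is exactly the scaling cost you want the comparison to surface.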
Write unit tests for this function. Cover:
- Happy path (3 tests with realistic inputs)
- Edge cases (empty input, null, very large input, unicode)
- Error handling (each thrown error type, with the exact error message asserted)
- Boundary conditions (off-by-one on any numeric ranges)
Use Vitest. Each test name should describe the scenario, not the function.
Refactor this function for readability. Constraints:
- Public signature MUST NOT change
- Return values MUST be identical for all current inputs
- No new dependencies
- Show me the diff, not the whole file
- List any behavioural changes you made (there should be zero)
Write a GitHub Actions workflow for this app:
Stack: Next.js 15, deployed to Vercel
Database: Postgres on Neon, migrations via Drizzle
Tests: Vitest (unit) + Playwright (e2e)
Triggers: PR (lint + test only), main (full deploy)
Secrets available: VERCEL_TOKEN, DATABASE_URL, PLAYWRIGHT_TEST_BASE_URL
Caching: pnpm store + Playwright browsers
Do NOT include Docker, AWS, or any service not listed above.
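A sketch of the PR-trigger half of such a workflow. Job names, Node version, script names, and action versions are assumptions; verify them against your repo before use.

```yaml
# Sketch: PR checks only (lint + unit tests), with pnpm store caching.
name: ci
on:
  pull_request:
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: pnpm        # caches the pnpm store between runs
      - run: pnpm install --frozen-lockfile
      - run: pnpm lint
      - run: pnpm test       # Vitest unit tests
```

The "Secrets available" line in the prompt is what keeps the generated `main` job from inventing credentials it does not have.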
Write a README for this repo.
Reader: a senior engineer evaluating whether to use this library in production, who has 5 minutes. They need to leave knowing:
1. What problem this solves (one paragraph)
2. What it does NOT do (bulleted list of out-of-scope cases)
3. Install + minimal working example (under 10 lines)
4. Production considerations (perf, error handling, observability)
5. Where to look for more (link map, not full docs)
No emoji. No badges. No "Why I built this."
Production symptom: API returns 502s intermittently, ~5% of requests, started 20 minutes ago.
Stack: Node.js + Express behind nginx, on EC2, Postgres RDS.
Give me a diagnostic runbook:
Step 1: What to check first (with the exact command/dashboard)
Step 2-N: Branching based on what step 1 returned
For each step: "if you see X, the cause is likely Y, fix is Z"
Do not explain causes I haven't asked about. Triage first, theorise later.
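A sketch of what a step 1 might look like in practice: scanning the nginx error log for upstream failures, since intermittent 502s from nginx usually mean the Node process behind it refused or timed out. The log path is the common Debian/Ubuntu default; a synthetic log stands in below so the pipeline is demonstrable end to end.

```shell
# On the box this would be:
#   sudo tail -n 1000 /var/log/nginx/error.log | grep -c 'upstream'
# Synthetic log lines stand in here for demonstration.
log='2024/01/01 12:00:01 [error] 123#0: *1 connect() failed (111: Connection refused) while connecting to upstream
2024/01/01 12:00:02 [notice] 123#0: some unrelated notice
2024/01/01 12:00:03 [error] 123#0: *2 upstream timed out (110: Connection timed out) while reading response header'

# Count upstream-related errors. A non-zero count points at the Node process
# (crashed, saturated, or slow), not at nginx itself.
printf '%s\n' "$log" | grep -c 'upstream'
```

"Connection refused" suggests the process is down or restarting; "upstream timed out" suggests it is up but blocked, which is where the Postgres connection pool becomes the next branch to check.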
Every prompt above specifies the same four things:
- The failure mode you care about (debug, refactor)
- The bar for inclusion (severity, category, scope)
- What to leave out (the "do NOT" list)
- What artifact you want at the end (diff, runbook, comparison, README for X reader)