Insights Reader

What makes these prompts work

Editorial annotations only — one insight at a time, grouped like chapters.

System Prompt

The core identity, behavioral rules, and task guidance that define how Claude Code operates

Identity & Introduction

The identity section is deliberately minimal — a single sentence that establishes role (interactive agent), domain (software engineering), and deference to the rest of the prompt ('use the instructions below'). This avoids over-constraining the persona early, which can cause the model to ignore later instructions. The CYBER_RISK_INSTRUCTION is inlined immediately to establish security boundaries before any task-specific guidance. The URL guardrail prevents a common failure mode where models hallucinate plausible-looking links.

System Rules & Permissions

This section establishes the ground rules for how the agent interacts with its environment. The key insight is teaching the model about its own UI — that text output is visible but tool calls may not be, that permissions exist and denials should be respected adaptively rather than retried. The prompt injection defense ('flag it directly to the user') turns the model into a security-aware collaborator rather than a blindly obedient executor. The hooks guidance and context compression note prevent confusion about external system behaviors the model will encounter.

Task Execution & Code Style

This is the largest and most nuanced prompt section, combining positive framing ('you are highly capable') with precise negative constraints ('don't add features beyond what was asked'). The code style sub-items encode an entire engineering philosophy — YAGNI, minimal diffs, no speculative abstraction — through concrete negative examples that counter the model's natural tendency to over-engineer. The heuristic 'three similar lines of code is better than a premature abstraction' is a particularly effective anchor. The error-handling guidance teaches diagnostic reasoning ('read the error, check your assumptions') rather than pattern-matching retries, and the escalation rule prevents both premature user interruption and infinite failure loops.

Action Safety & Reversibility

This section introduces a 'reversibility and blast radius' framework that gives the model a principled mental model for evaluating risk without exhaustively enumerating every dangerous action. The asymmetric cost framing ('cost of pausing is low, cost of unwanted action is high') creates a strong prior toward caution. The explicit note that 'approving once does NOT mean approving in all contexts' prevents the model from over-generalizing permissions — a subtle but critical safety boundary. The concrete examples serve as few-shot calibration for what 'risky' means in practice, and the closing 'measure twice, cut once' aphorism reinforces the spirit of the rules.
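The reversibility-and-blast-radius framing can be read as a simple decision rule. A minimal sketch, assuming illustrative categories and a pause threshold that are not Claude Code's actual implementation:

```typescript
// Sketch of the reversibility × blast-radius framing as a decision rule.
// The categories and the pause rule here are illustrative assumptions,
// not Claude Code's actual policy.
interface ActionRisk {
  reversible: boolean;
  blastRadius: "local" | "shared" | "public"; // e.g. edit vs push vs deploy
}

function shouldPauseForConfirmation(risk: ActionRisk): boolean {
  // The cost of pausing is low; the cost of an unwanted action is high,
  // so anything irreversible or beyond the local sandbox gets a pause.
  return !risk.reversible || risk.blastRadius !== "local";
}
```

The asymmetry in the prompt maps directly onto the rule: only the reversible, locally scoped quadrant proceeds without confirmation.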

Tool Usage & Parallelism

This section steers the model away from its natural tendency to reach for bash as a universal tool, instead routing operations through dedicated tools that provide better UX (diffs, syntax highlighting, permission control). The 'CRITICAL' emphasis and exhaustive mapping of bash alternatives to dedicated tools (cat→FileRead, sed→FileEdit, find→Glob) leaves no ambiguity. The parallelism guidance is a practical optimization — teaching the model to batch independent calls while serializing dependent ones mirrors how a developer thinks about async operations and significantly reduces latency in multi-step workflows.
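The batch-independent / serialize-dependent distinction mirrors ordinary async code. A minimal sketch, with hypothetical tool calls standing in for real ones:

```typescript
// Sketch of the parallelism guidance: independent calls are batched,
// dependent calls run in order. Tool shapes are illustrative, not
// Claude Code's actual API.
type ToolCall<T> = () => Promise<T>;

// Independent reads can be issued together in one batch...
async function batchIndependent<T>(calls: ToolCall<T>[]): Promise<T[]> {
  return Promise.all(calls.map((call) => call()));
}

// ...while dependent steps must be serialized, each consuming the
// previous result.
async function runDependent<T>(
  seed: T,
  steps: Array<(prev: T) => Promise<T>>,
): Promise<T> {
  let result = seed;
  for (const step of steps) {
    result = await step(result);
  }
  return result;
}
```

A model that internalizes this pattern issues one message with many tool calls when the calls don't feed each other, and falls back to one-at-a-time only when they do.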

Tone & Style Guidelines

This section constrains stylistic choices that would otherwise drift across conversations. The no-emoji default is notable — it reflects that developer tools should feel professional, with the opt-in mechanism preserving user agency. The file_path:line_number and owner/repo#123 conventions are format-forcing instructions that make the model's output directly actionable in the terminal and IDE. The colon-before-tool-call rule addresses a subtle UX issue where dangling colons appear when tool calls are hidden from the user's view.

Output Efficiency & Communication

This section exists in two variants optimized for different audiences. The external version is brutally concise ('go straight to the point') because most users want actions, not explanations. The internal Anthropic variant is a masterclass in technical writing guidance: 'assume the person has stepped away and lost the thread,' 'avoid semantic backtracking,' 'use inverted pyramid.' Both share the crucial exemption 'this does not apply to code or tool calls,' preventing the conciseness directive from truncating code output. The internal version's emphasis on 'cold restart' readability reflects hard-won lessons from long agentic sessions where context is easily lost between updates.

Cyber Risk & Security Boundaries

This instruction is injected near the top of the system prompt (inlined into the identity section) and uses a precise taxonomy of allowed vs. disallowed security activities rather than a blanket ban. By explicitly naming legitimate contexts (pentesting, CTFs, security research) alongside prohibited activities (DoS, supply chain compromise), it creates a nuanced decision boundary the model can apply consistently. The 'dual-use' framing for tools like C2 frameworks mirrors real-world security policy — the same tool is acceptable or not depending on authorization context, which the model is taught to evaluate.

Environment Context Injection

This section dynamically injects runtime context — working directory, git status, OS, shell, model identity — so the model can tailor its behavior to the actual environment. The bracketed placeholders are populated at runtime via TypeScript. Including the model's own identity and knowledge cutoff prevents hallucination about its capabilities. The model family catalog with exact IDs is a clever grounding technique: when users ask the model to build AI applications, it can reference real, current model identifiers rather than guessing outdated ones. The worktree detection (conditional, not shown) adds isolation instructions when operating in a git worktree.
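The bracketed-placeholder mechanism can be sketched in a few lines. The field names below are invented for illustration; the real template and population logic are internal to Claude Code:

```typescript
// Minimal sketch of bracketed-placeholder context injection. Field names
// ([cwd], [platform]) are hypothetical examples, not the real template.
function injectContext(
  template: string,
  values: Record<string, string>,
): string {
  return template.replace(/\[(\w+)\]/g, (match, key) =>
    key in values ? values[key] : match, // unknown placeholders survive as-is
  );
}

const rendered = injectContext(
  "Working directory: [cwd]\nPlatform: [platform]",
  { cwd: "/home/user/project", platform: "linux" },
);
```

Leaving unknown placeholders intact rather than blanking them makes missing runtime data visible instead of silently dropped.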

Summarize Tool Results Reminder

Teaches the model to externalize durable facts from tool outputs into its own reply text before older tool results may be dropped from context—an explicit handoff from ephemeral tool state to conversational memory.

Brief / SendUserMessage (Kairos registry section)

Structures how the model should use SendUserMessage (or equivalent) so users actually see answers: ack-then-work for latency, checkpoints that carry information, and tight second-person phrasing—reducing silent tool churn and invisible replies.

Function Result Clearing (FRC)

Explains that older tool results may be cleared while keeping the N most recent—sets expectations so the model does not assume infinite tool history and aligns with summarize-tool-results behavior.
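The keep-N-most-recent policy can be sketched as a pass over the message history. The message shape and the cleared-marker text below are assumptions for illustration:

```typescript
// Sketch of function-result clearing: the N most recent tool results are
// kept, older ones are replaced with a cleared marker. Message shape and
// marker text are hypothetical, not Claude Code's internal format.
interface Message {
  role: "assistant" | "tool";
  content: string;
}

function clearOldToolResults(history: Message[], keep: number): Message[] {
  const toolIndices = history
    .map((m, i) => (m.role === "tool" ? i : -1))
    .filter((i) => i >= 0);
  const survivors = new Set(toolIndices.slice(-keep)); // N most recent
  return history.map((m, i) =>
    m.role === "tool" && !survivors.has(i)
      ? { ...m, content: "[tool result cleared]" }
      : m,
  );
}
```

This is exactly why the summarize-tool-results reminder matters: anything the model wants to keep must be copied into assistant text before its source falls outside the survivor window.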

Session-Specific Guidance (documentation summary)

Educational digest of the dynamic session-guidance registry: it gates behavior on which tools exist so the model does not hallucinate capabilities. Compare with system-discover-skills for the discover-tool slice.

Discover Skills Guidance

Verbatim shape from getDiscoverSkillsGuidance: nudges the model to invoke skill discovery on real pivots without spamming the tool when surfaced skills already suffice—classic conditional tool-use guidance.

Subagent Environment Notes

Subagent-only constraints: absolute paths (cwd resets), minimal code quoting, no emojis, and the colon-before-tool-call anti-pattern—practical formatting rules that reduce broken relative paths and awkward tool choreography.

Tool Prompts

Instructions given to the model for each of the 36+ built-in tools

Bash Tool

The Bash tool prompt is one of the most elaborate in Claude Code, combining tool-use guidance with extensive guardrails. It steers the model away from shell commands when dedicated tools exist (Glob, Grep, Read, Edit, Write), uses priority-ordering to list preferred alternatives, and includes detailed behavioral constraints for git safety. The sandbox section uses conditional logic to adapt restrictions based on configuration. The sleep guidance prevents common anti-patterns like polling loops and unnecessary delays.

File Read Tool

The Read tool prompt uses scope-limiting to clearly define what the tool can and cannot do (files yes, directories no). It applies behavioral constraints around path requirements (absolute only) and handles edge cases like empty files, large PDFs, and image files. The multimodal capability mention serves as context-injection so the model knows it can process visual content.

File Edit Tool

The Edit tool prompt enforces a strict prerequisite chain — requiring a Read before any Edit — which prevents blind modifications. It uses guardrails around uniqueness requirements for the old_string parameter and provides concrete guidance on handling the line number prefix format. The constraint against creating new files steers toward minimal, surgical changes.

File Write Tool

The Write tool prompt is deliberately minimal but uses strong behavioral constraints to steer toward the Edit tool for modifications. The mandatory Read-before-Write prerequisite prevents accidental overwrites. The prohibition against documentation file creation is a notable guardrail that prevents the model from proactively generating unwanted files.

Glob Tool

The Glob tool prompt is concise by design — it's a simple discovery tool. The key technique is the escalation hint at the end, directing the model to use the Agent tool for complex multi-round searches. This is a form of tool-use guidance that prevents the model from getting stuck in inefficient search loops.

Grep Tool

The Grep tool prompt uses strong behavioral constraints to prevent the model from falling back to raw shell grep/rg commands, ensuring the purpose-built tool is always used. It provides concrete regex examples as a form of few-shot guidance for pattern syntax, and includes an important disambiguation between grep and ripgrep syntax conventions.

Notebook Edit Tool

The Notebook Edit tool prompt is tightly scoped, providing just enough context about what Jupyter notebooks are and how cell indexing works. It uses structured output guidance by specifying the edit_mode parameter values (insert/delete) and enforces the absolute path constraint consistently with other file tools.

Web Search Tool

The Web Search tool prompt uses a 'CRITICAL REQUIREMENT' label to enforce mandatory source citation — a structured output technique ensuring every search-backed response includes attribution. The date injection via [CurrentMonthYear] is a context-injection technique that prevents the model from using stale year references in queries. The few-shot example of the Sources section format makes the expected output structure unambiguous.

Web Fetch Tool

The Web Fetch tool prompt demonstrates tool-use guidance by directing the model to prefer MCP-provided alternatives and the gh CLI for GitHub URLs. It uses scope-limiting to clarify the tool is read-only. The redirect handling instruction is a form of conditional logic, teaching the model to handle a multi-step fetch pattern when redirects occur.

Agent Tool

The Agent tool prompt is a masterclass in meta-prompting — it teaches the model how to write good prompts for sub-agents. The 'Writing the prompt' section uses persona-based framing ('brief like a smart colleague') and includes negative examples of what not to do ('never delegate understanding'). The fork semantics section introduces a qualitative decision framework for when to fork vs. spawn fresh agents. The 'Don't peek' and 'Don't race' directives are critical guardrails preventing the model from fabricating results or polluting its context with fork output.

Task Create Tool

The TaskCreate tool prompt uses a taxonomy of when-to-use vs when-not-to-use scenarios to help the model decide appropriately. The structured task fields section defines a mini-schema with examples of proper imperative vs. present-continuous forms. The 3-step threshold rule is a concrete heuristic that prevents over-engineering simple tasks.

Todo Write Tool

The TodoWrite tool prompt is the most example-rich tool prompt in Claude Code, using extensive few-shot examples with both positive and negative cases. Each example includes a <reasoning> block that teaches the model the decision-making process, not just the outcome. The XML-tagged examples and state machine for task management (pending → in_progress → completed) create a rigorous workflow pattern. The dual-form requirement (imperative content + present-continuous activeForm) ensures proper UI rendering.
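The state machine and dual-form requirement can be made concrete. The transition table below is an assumption (in particular, whether a task can move back from in_progress to pending is my guess, not the prompt's rule):

```typescript
// Sketch of the TodoWrite state machine and dual-form fields. The exact
// allowed transitions are an assumption for illustration.
type TodoState = "pending" | "in_progress" | "completed";

interface TodoItem {
  content: string;    // imperative form, e.g. "Run tests"
  activeForm: string; // present continuous, e.g. "Running tests"
  status: TodoState;
}

const allowed: Record<TodoState, TodoState[]> = {
  pending: ["in_progress"],
  in_progress: ["completed", "pending"], // backing out is assumed legal
  completed: [],                         // terminal
};

function canTransition(from: TodoState, to: TodoState): boolean {
  return allowed[from].includes(to);
}
```

Encoding the workflow as a table makes the prompt's rule visible: pending work cannot jump straight to completed, and completed work never reopens.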

Enter Plan Mode Tool

The EnterPlanMode prompt uses an extensive taxonomy of 7 numbered conditions to help the model decide when planning is warranted. Each condition includes concrete examples that ground abstract categories in real scenarios. The 'good vs bad' examples section uses contrastive pairs — notably showing that seemingly simple requests ('add a delete button') can have hidden complexity. The bias toward planning ('err on the side of planning') is a deliberate guardrail.

Exit Plan Mode Tool

The ExitPlanMode prompt carefully disambiguates itself from AskUserQuestion to prevent tool confusion — a common problem when multiple tools have overlapping use cases. The research vs. implementation distinction uses concrete examples to teach the model that planning mode is only for code implementation, not exploration. The explicit prohibition against using AskUserQuestion for plan approval prevents a redundant confirmation loop.

MCP Tool

The MCP tool is unique in that it has no static prompt content — it's a dynamic passthrough that surfaces descriptions from connected MCP servers at runtime. The empty prompt/description pattern with runtime override is an architectural choice that allows a single tool definition to represent any number of external MCP tools. This is a form of context-injection where the prompt content is entirely determined by the external tool ecosystem.

Ask User Question Tool

The AskUserQuestion prompt uses tool-use guidance to establish the UX pattern — always having an 'Other' escape hatch, marking recommendations with '(Recommended)', and supporting visual previews for comparing alternatives. The plan mode note is a critical disambiguation that prevents the model from using this tool for plan approval (which is ExitPlanMode's job). The note about users not seeing the plan until ExitPlanMode is called prevents a subtle UX bug where the model references something invisible to the user.

Config Tool

The Config tool prompt dynamically generates its settings list from a SUPPORTED_SETTINGS registry at runtime, making it a self-documenting tool. The get/set pattern uses structured output with concrete JSON examples that serve as few-shot templates. The separation of global vs. project settings teaches the model about configuration scoping, and the examples cover diverse setting types (strings, booleans, enums).
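A self-documenting registry of this kind can be sketched as a typed table plus a validator. The setting names and schema fields below are invented; the real SUPPORTED_SETTINGS registry is internal:

```typescript
// Sketch of a self-documenting settings registry with set-validation.
// Setting names and schema fields are hypothetical examples.
type SettingType = "boolean" | "string" | "enum";

interface SettingSpec {
  type: SettingType;
  scope: "global" | "project";
  values?: string[]; // enum options only
}

const SUPPORTED_SETTINGS: Record<string, SettingSpec> = {
  verbose: { type: "boolean", scope: "global" },
  theme: { type: "enum", scope: "global", values: ["dark", "light"] },
};

// Returns null when the set is valid, otherwise a human-readable error.
function validateSet(name: string, value: string): string | null {
  const spec = SUPPORTED_SETTINGS[name];
  if (!spec) return `unknown setting: ${name}`;
  if (spec.type === "boolean" && value !== "true" && value !== "false")
    return `expected boolean for ${name}`;
  if (spec.type === "enum" && !spec.values!.includes(value))
    return `expected one of: ${spec.values!.join(", ")}`;
  return null;
}
```

Because the prompt's settings list is generated from the same registry the validator reads, documentation and enforcement cannot drift apart.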

Skill Tool

The Skill tool prompt uses a BLOCKING REQUIREMENT pattern to ensure skills are invoked immediately when matched, preventing the model from merely discussing a skill instead of executing it. The slash command translation ('/<something>' → skill invocation) maps user mental models to tool actions. The re-invocation guard (checking for <command_name> tags) prevents infinite loops where the model would keep calling the skill tool after it has already loaded.

Brief / SendUserMessage Tool

The Brief tool encodes a visibility contract: user-visible answers must go through the tool, not plain assistant text. BRIEF_PROACTIVE_SECTION (Kairos) adds pacing rules (ack → work → result) and checkpointing to avoid silent spinners.

Sleep Tool

Sleep is the pacing primitive for proactive/daemon modes: ties idle behavior to tick reminders, contrasts with bash sleep, and surfaces cache-expiry tradeoffs.

Remote Trigger Tool

Scoped API surface with explicit HTTP verbs; emphasizes in-process auth so the model never handles tokens in shell.

List MCP Resources Tool

Minimal MCP discovery prompt; server scoping parameter documented for federation across multiple MCP connections.

Read MCP Resource Tool

Pair with List MCP Resources; requires explicit server + uri to disambiguate multi-server setups.

Task Stop Tool

Uses DESCRIPTION only (no separate PROMPT export) — short imperative list for killing async agent work.

Enter Worktree Tool

Strict gating on the word 'worktree' to avoid conflating with normal branch workflows; documents hook-based path for non-git VCS.

Exit Worktree Tool

Scope-limited no-op outside EnterWorktree sessions; destructive remove requires explicit discard_changes after surfacing conflicts.

LSP Tool

LSP operations are exposed via DESCRIPTION only; enumerates symbolic operations and 1-based editor coordinates.

Task Get Tool

Emphasizes dependency graph fields (blocks/blockedBy) before starting implementation.

Task List Tool

Merged default and swarm-on variants: adds teammate claiming flow and ID-order prioritization for parallel teams.

Task Update Tool

Strict completion criteria (no premature done); JSON examples for common update patterns; dependency edges via addBlockedBy.

Team Delete Tool

Ordering constraint: shutdown teammates before delete to avoid failing cleanup with active members.

Tool Search Tool

Explains deferred-tool lazy loading and query DSL; hint line reflects either system-reminder or available-deferred-tools UX.

Send Message Tool

Core swarm IPC: names vs broadcast; optional UDS/bridge addressing; legacy structured protocol for shutdown/plan approval.

Cron Create Tool

buildCronCreatePrompt has durable and session-only variants; includes fleet-friendly jitter guidance and max-age copy.
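Fleet-friendly jitter usually means deriving a stable per-machine offset so a fleet of installations does not fire simultaneously. A minimal sketch, with a hashing scheme that is purely illustrative:

```typescript
// Sketch of fleet-friendly jitter: a stable per-machine offset within a
// window, so identical schedules spread out across a fleet. The hash is
// an illustrative choice, not the prompt's actual recipe.
function jitterMinutes(machineId: string, windowMinutes: number): number {
  let hash = 0;
  for (const ch of machineId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // stable 32-bit string hash
  }
  return hash % windowMinutes;
}
```

The key property is determinism: each machine re-derives the same offset on every run, so its own schedule stays regular while the fleet as a whole is spread out.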

Cron Delete Tool

Thin delete prompt; wording branches on durable cron gate in source.

Cron List Tool

Lists both durable and session jobs when feature stack allows durable storage.

PowerShell Tool

The largest tool prompt on the Windows code path; its async getPrompt() branches on the detected PowerShell edition and env flags for background/sleep guidance.

Team Create Tool

Long-form swarm bootstrap: agent-type matching, lifecycle, idle semantics, peer DM summaries, config discovery.

Agent Prompts

System prompts for built-in sub-agent types like Explore, Plan, and Verification

Default Agent Prompt

The default system prompt given to every subagent. It establishes a task-completion mindset with two guardrails: no gold-plating (don't over-engineer) and no half-measures (finish what you start). The final-report instruction ensures the parent agent gets a concise summary rather than raw output.

Explore Agent — Read-Only Codebase Search Specialist

The explore agent is a speed-optimized, read-only search specialist. The prompt uses an exhaustive deny-list of prohibited operations to enforce immutability — listing every way a file could be modified rather than just saying 'read-only.' This negative-example technique is more robust against creative workarounds. The performance note at the end nudges the model toward parallel tool calls.

General-Purpose Agent

The general-purpose agent combines the default task-completion framing with explicit search strategy guidance. The 'start broad and narrow down' instruction teaches a funnel-shaped investigation pattern. The documentation prohibition prevents a common LLM failure mode where models proactively generate README files nobody asked for.

Plan Agent — Software Architect

The plan agent enforces a structured four-phase workflow (understand → explore → design → detail) while being strictly read-only. The 'assigned perspective' mechanism enables multi-perspective planning where different plan agents can approach the same problem from different angles. The required output format with 'Critical Files' gives downstream implementation agents exactly what they need to start working.

Verification Agent — Adversarial Testing Specialist

The most psychologically sophisticated agent prompt. It pre-empts the model's known failure modes by naming them explicitly — 'verification avoidance' and 'seduced by the first 80%.' The rationalization catalog is a powerful anti-pattern technique: by listing the exact excuses the model will generate, it makes the model self-aware of its tendency to skip checks. The structured output format with good/bad examples ensures machine-parseable verdicts.

Claude Code Guide Agent

The guide agent demonstrates a retrieval-augmented generation (RAG) pattern within a prompt. Rather than embedding documentation content, it teaches the model a structured lookup workflow: determine domain → fetch docs map → find specific pages → answer. The three-domain taxonomy (CLI, SDK, API) acts as a router, and the detailed topic lists under each documentation source help the model match queries to the right source.

Status Line Setup Agent

Dedicated subagent for PS1 → statusLine command migration: regex extraction, escape mapping, full stdin JSON schema for session/workspace/model/context/rate limits/vim/agent/worktree, jq examples, settings.json and symlink handling.

Coordinator

The multi-worker orchestration system that manages parallel task execution

Coordinator System Prompt

The coordinator prompt is the most architecturally significant prompt in Claude Code, defining a multi-agent orchestration pattern. It establishes a four-phase workflow (Research → Synthesis → Implementation → Verification) and makes synthesis the coordinator's 'most important job.' The anti-patterns section ('based on your findings' is banned) prevents lazy delegation. The continue-vs-spawn decision table teaches context-aware worker management. The full example session demonstrates the complete lifecycle from bug report to fix.

Memory System

How Claude Code stores and retrieves long-term memories across sessions

Memory Type Taxonomy (Combined Mode)

The memory taxonomy defines four distinct memory types (user, feedback, project, reference) using richly structured XML with description, when_to_save, how_to_use, and examples for each. The feedback type is particularly nuanced — it instructs the model to record both corrections AND confirmations, preventing a negativity bias where only mistakes are remembered. The scope guidance (private vs team) teaches the model to reason about information visibility. The body_structure fields enforce a 'rule + why + how to apply' format that makes memories self-documenting.

What NOT to Save in Memory

Negative catalog of what must not be memorized (discoverable structure, transient plans, secrets) plus an explicit-save gate so low-signal lists are not persisted even if the user asks.

When to Access Memories

Recall policy: when relevance triggers loading, when to ignore the memory block, and an explicit drift caveat so stale recall is not treated as ground truth.

Before Recommending From Memory

Self-check instructions before acting on memory: confirm files, APIs, and commands still exist—reduces confident wrong answers from outdated notes.

Memory Frontmatter Example

Concrete frontmatter pattern (name, description, type) used to classify and retrieve memory files consistently.
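Consistent classification depends on the frontmatter being machine-readable. A deliberately minimal parser sketch, assuming well-formed `key: value` lines between `---` fences (the real implementation may differ):

```typescript
// Sketch of parsing the memory frontmatter pattern (name, description,
// type). Assumes well-formed "key: value" lines; not the real parser.
interface MemoryFrontmatter {
  name?: string;
  description?: string;
  type?: string;
}

function parseFrontmatter(doc: string): MemoryFrontmatter {
  const match = doc.match(/^---\n([\s\S]*?)\n---/);
  if (!match) return {};
  const fm: MemoryFrontmatter = {};
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx < 0) continue;
    const key = line.slice(0, idx).trim();
    if (key === "name" || key === "description" || key === "type") {
      fm[key] = line.slice(idx + 1).trim();
    }
  }
  return fm;
}
```

Restricting the parsed keys to the three known fields means stray frontmatter lines degrade gracefully instead of polluting retrieval.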

Session Memory Default Template

Italicized section skeleton for session-local notes that survive compaction—encourages structured handoff without duplicating the main transcript.

Session Memory Update Instructions

Edit-only update pass for session notes: preserve headings and template lines, batch reads then writes, and respect token limits—operational discipline for multi-file note hygiene.

Memory Extraction Agent (auto-only, with index)

End-to-end fork prompt for auto-only memory: individual taxonomy, exclusions, two-step MEMORY.md indexing, and throughput rules—mirrors buildExtractAutoOnlyPrompt with illustrative tool names.

Output Styles

Configurable response modes like Explanatory and Learning that change Claude's behavior

Explanatory Output Style

The Explanatory style layers educational content onto the standard coding assistant behavior. The Insight box format with Unicode decorators (★ and ─) creates a visually distinct block in the terminal that separates educational content from code output. The instruction to focus on codebase-specific insights rather than general concepts prevents the model from becoming a generic programming tutorial.

Learning Output Style

The Learning style is the most interactive output mode, implementing a pair-programming pedagogy. It uses a TODO(human) marker system to create structured handoff points where the user writes real code. The three example types (whole function, partial function, debugging) demonstrate decreasing scope to match different learning scenarios. The 2-10 line contribution size is carefully calibrated — enough to be meaningful, not so much it becomes overwhelming. The 'don't take any action after the request' instruction prevents the model from answering its own question.

Special Features

Autonomous mode, scratchpad, proactive behavior, and other advanced capabilities

Autonomous Work (Proactive Mode)

The proactive/autonomous mode prompt transforms Claude Code from a request-response tool into a persistent daemon. The tick-based wake-up system with sleep controls creates an event loop pattern. The 'bias toward action' section inverts the usual LLM tendency to ask permission, while the terminal focus heuristic provides a clever UX signal — when the user's terminal is unfocused, act more autonomously; when focused, be more collaborative. The prompt cache expiry mention (5 minutes) teaches the model to reason about its own infrastructure costs.

Scratchpad Directory Instructions

The scratchpad prompt redirects all temporary file operations to a controlled, session-specific directory. This solves two problems: it prevents the model from polluting the user's project with temp files, and it avoids permission prompts that would interrupt workflow. The explicit enumeration of use cases helps the model recognize when to use the scratchpad rather than defaulting to /tmp.

Hooks System Instructions

The hooks instruction is deliberately minimal but carries significant weight. By telling the model to treat hook feedback 'as coming from the user,' it elevates automated hook output to the same trust level as direct user messages. The two-step escalation pattern (try to adapt → ask user) prevents the model from either ignoring hooks or immediately giving up when blocked by one.

Services & Utilities

Background pipelines: compaction, Chrome, permissions, hooks, bundled skills, and other service-level LLM prompts

Compaction: No-Tools Preamble

Hard-blocks tools on compaction turns so the model cannot burn its single allowed turn on denied tool calls—an extreme behavioral constraint paired with explicit output structure expectations.

Compaction: Detailed Analysis (Base)

Requires a private <analysis> scratchpad before the public <summary>, forcing chronological reconstruction and explicit coverage checks—chain-of-thought containment that keeps reasoning structured and auditable.

Compaction: Detailed Analysis (Partial)

Same analysis-first pattern as the base variant but scoped to recent messages when earlier context is retained verbatim—teaches the model what slice of history to reason over.

Compaction: Base Prompt

Full-session compaction spec with numbered sections (intent, concepts, files, errors, todos) plus embedded analysis instructions—high-recall summarization for handoff without losing implementation detail.

Compaction: Partial Window Prompt

Partial-window compaction: summarizes only the tail of the transcript when the head is preserved—reduces duplication and keeps the model focused on recent deltas.

Compaction: Partial Up-To Prompt

Summarizes up to a cut point for continuing sessions: written as if newer messages will arrive later, so density and forward pointers matter more than closure.

Compaction: No-Tools Trailer

Short trailing reminder after templates in no-tools compaction paths—reinforces analysis+summary shape and the cost of attempting tools.

Claude in Chrome — Base System Prompt

Browser-automation playbook: GIF capture hygiene, console filtering, and tool-specific habits—domain prompt that shapes how the model uses MCP browser tools safely and observably.

Agentic Session Search

Retrieval-oriented system prompt for finding past sessions from natural language—teaches query formulation, relevance signals, and honest uncertainty when transcripts are incomplete.

Permission Explainer System Prompt

Ultra-compact Haiku system line: the model must explain shell commands, intent, and risk before approval—trust and safety through forced articulation rather than volume.

Magic Docs Update Prompt

Document-update agent instructions emphasizing surgical edits, header preservation, and minimal diffs—reduces wholesale rewrites and teaches respect for existing doc structure.

AutoDream Consolidation Prompt

Dream/consolidation task framing: orient from logs and transcripts, then merge durable memories—long-horizon recall with explicit prioritization and anti-hallucination cues.

Prompt Suggestion

Forked follow-up suggestion prompt: elicits concrete next prompts from recent context—lightweight meta-prompting for UX continuity without touching the main thread.

Swarm Teammate System Addendum

Swarm visibility rules: teammates must use SendMessage for coordination because plain assistant text is not broadcast—clarifies multi-agent information flow and reduces silent work.

Buddy Companion Intro

Separates the main agent from a companion character (name/species placeholders): prevents role collapse and sets one-line vs bubble response boundaries.

Side Question Wrapper

/btw fork wrapper: no tools, single turn, answer only from context—prevents the side thread from spawning investigation loops or breaking cache assumptions.

Agent Progress Summary

Ultra-short present-tense blurbs for parallel worker activity—teaches extreme compression without losing which agent did what.

Tool Use Summary Generator

Mobile-oriented micro-labels for batched tool rows—forces consistent tone and length for glanceable UI.

Skill Improvement Hook (system)

Tiny classifier system line: watch for durable skill edits from conversational corrections—meta-behavior for incremental process capture.

Skill Improvement Hook (user)

XML-delimited skill body + recent messages with explicit <updates> JSON contract—few-shot-shaped extraction without separate examples.

Skill Improvement Apply (user message)

Full apply template with XML wrappers for current file, improvement bullets, and <updated_file>—teaches staged editing with non-negotiable structural tags.

Away Summary (user turn template)

Recap user turn with optional session-memory prefix: orients returning users with task + next step, explicitly deprioritizing commit play-by-play.

Bundled Skill: remember (memory review)

Promotion-oriented memory review: routing tables for where each memory belongs, duplicate/conflict scanning, and explicit user approval before any file change.

Bundled Skill: stuck

Reliability field guide for frozen sessions—process state, child processes, RSS, and optional sampling—framed as a report back to an internal channel.

Bundled Skill: simplify

Orchestrates three parallel review agents on the current diff (reuse, quality, security); source embeds the agent tool name as a build-time placeholder.

Bundled Skill: skillify

Skill-authoring assistant: turns a short user brief into a structured SKILL.md with frontmatter discipline and runnable steps.

Bundled Skill: updateConfig

Project settings and hooks construction skill: long-form procedural prompt with schema checks, jq validation, and live-proof steps.

Agent Creation (generateAgent)

JSON-schema-driven agent architect prompt: constrains output shape so generated agents are machine-validated before registration.

Bash: Haiku Prefix Policy Spec

Risk policy + prefix extraction spec for Bash: defines command_injection_detected, worked examples, and the Haiku task framing—safety-critical structured classification at the shell boundary.
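The prefix-extraction idea can be sketched loosely. Everything below is an assumption for illustration: the detection heuristics, the two-word-prefix rule, and the output shape are guesses at the spirit of the spec, not its contents:

```typescript
// Loose sketch of prefix extraction with an injection flag. Heuristics
// and output shape are assumptions, not the actual Haiku policy spec.
interface PrefixResult {
  prefix: string | null;
  command_injection_detected: boolean;
}

function extractPrefix(command: string): PrefixResult {
  // Substitution syntax means the literal prefix can't be trusted.
  if (/\$\(|`/.test(command)) {
    return { prefix: null, command_injection_detected: true };
  }
  const words = command.trim().split(/\s+/);
  // For git-style CLIs, assume the first two words form the permission
  // prefix; otherwise just the executable name.
  const prefix = words[0] === "git" ? words.slice(0, 2).join(" ") : words[0];
  return { prefix, command_injection_detected: false };
}
```

The point of the flag is that a permission system keyed on prefixes is only sound if substitution can force the whole command to re-approval.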

Natural-Language Date/Time Parser (Haiku)

Strict ISO-8601 extraction from natural language with INVALID sentinel—reduces silent garbage dates and encodes ambiguity preferences (future bias, today default).
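The contract is easiest to see from the consumer's side: output is either a strict timestamp or the literal sentinel, never a guess. A validation sketch, where the exact accepted format is an assumption:

```typescript
// Sketch of the INVALID-sentinel contract from the consumer's side: the
// parser output is either strict ISO-8601 or the literal "INVALID".
// The accepted format here is an assumption about the contract.
function checkParserOutput(output: string): boolean {
  if (output === "INVALID") return true; // sentinel is always acceptable
  const iso = /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}$/;
  return iso.test(output) && !Number.isNaN(Date.parse(output));
}
```

A sentinel that downstream code must check explicitly is what prevents silent garbage dates: free-text like "tomorrow at noon" fails validation instead of being coerced.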

Commands & CLI

Slash commands and CLI handlers such as /init, /insights, auto-mode critique, and session naming

/init NEW_INIT Prompt

Multi-phase /init onboarding via AskUserQuestion: separates discovery, gap-fill, artifact types (hooks vs skills vs notes), and strict user-choice gating — a large, structured workflow prompt.

/init OLD_INIT Prompt

Legacy single-shot CLAUDE.md authoring with strong anti-boilerplate rules—teaches information discipline and provenance over generic advice.

Auto-Mode Critique

Reviewer persona for user-authored auto-mode rules: clarity, completeness, conflicts, actionability—teaches the model to critique prompts that will themselves become classifier input.

Session Title Generation

Sentence-case session titles with JSON schema and contrastive good/bad examples—few-shot shaping for metadata that must stay scannable in lists.

/insights Facet Extraction

Per-session facet schema with strict user-signal rules (ignore autonomous exploration)—reduces label noise in downstream analytics.

/insights Transcript Chunk Summary

Chunk summarization for oversized transcripts—preserves filenames, errors, and outcomes under tight length, acting as a map-reduce shim.

/insights: Project Areas

Asks for JSON areas with session counts and narratives—clustering usage into thematic projects for the HTML report.

/insights: Interaction Style

Second-person behavioral analysis with bolded key insights—persona + structured JSON for narrative + headline pattern.

/insights: What Works

Highlights impressive workflows as titled vignettes—positive reinforcement grounded in session evidence.

/insights: Suggestions & Features

Large JSON artifact tying CLAUDE.md additions, feature picks from a fixed CC feature menu, and copyable prompts—bridges analytics to actionable product adoption.

/insights: On the Horizon

Forward-looking opportunities with copyable prompts—speculative but grounded coaching for emerging workflows.

/insights: Memorable Moment

Qualitative human moment extraction—deliberately non-metric to surface memorable transcript color for report delight.

/insights At a Glance (template)

Coaching-tone synthesis with four JSON fields and explicit bans on stat name-drops—balances honesty with usability in a report UI.
