Chapter 1 The Architecture
Every AI setup you've tried fails for one reason: no persistent context. Each session starts from zero. The fix is three things working together:
LLM vs. Wrapper
The LLM (Claude, GPT, Gemini) is the brain — no memory, no tool access, no idea who you are. The wrapper (Claude Code, Cursor, Windsurf) gives the brain access to your files, terminal, and APIs. Same brain, different wrapper = different capabilities.
At my company, teams run "Code Puppy" — same Claude model, wrapped with access to Confluence and Jira. I run my own with Slack and meeting transcripts. The value is in the workspace structure, not the model. Set it up right and you can swap Claude for GPT or Gemini.
Why this works
A brain file teaches the AI who you are at session start. An intelligence hub gives it reference files to read before answering. Cron jobs pull fresh data without you asking. The system compounds — more use, better output.
More depth: beginner brain file guide | full masterclass
Chapter 2 Git + Folder Organization
The repo IS the workspace. Not a code project — this is where your brain file, intelligence hub, meeting notes, cron scripts, and project folders all live. When you run claude from this folder, the AI reads everything in it.
Git gives you:
- History — see how team status changed week over week, diff decisions over time
- Backup — GitHub is the remote. Laptop dies, nothing lost.
- Portability — clone on a new machine, run `claude`, same AI experience instantly
- Rollback — bad brain file rule breaks things? `git revert` and you're back
One-time setup
```
mkdir ~/work-hub && cd ~/work-hub
git init
gh repo create work-hub --private --source=. --push
```
Folder organization — pick your style
Option A: PARA Method (what I use)
```
work-hub/
  00-inbox/       # Quick capture, unprocessed items
  01-projects/    # Active work with deadlines
  02-areas/       # Ongoing responsibilities (teams, countries, functions)
  03-resources/   # Reference (templates, intelligence-hub, scripts)
  04-archive/     # Completed or inactive work
  CLAUDE.md       # Your brain file (Chapter 3)
  .gitignore
```
My workspace has 39 intelligence files, 66 custom skills, and 17 AI agents. It started with exactly this structure and five empty folders. Start simple.
Option B: Domain-First
```
work-hub/
  brazil-team/
  india-team/
  us-team/
  shared-resources/
  intelligence-hub/
  CLAUDE.md
```
Option C: Timeline-First
```
work-hub/
  2026-Q2/
    active-initiatives/
    team-status/
    decisions/
  2026-Q1/
  intelligence-hub/
  CLAUDE.md
```
Key rule: The parent folder is the workspace root. The AI reads everything from that root down.
The .gitignore
```
.DS_Store
.env
*.key
personal/
node_modules/
```
First commit
```
git add -A && git commit -m "Initial workspace structure" && git push
```
Chapter 3 The Brain File
Sits at workspace root. AI reads it every session. Claude Code calls it CLAUDE.md, Cursor uses .cursorrules, Windsurf uses .windsurfrules.
What goes in it
- Your role — title, responsibilities, what you own
- Your team — names, roles, what each person manages
- Your priorities — what matters this quarter, what's on fire
- Your tools — what systems you use (Slack, Jira, Confluence, etc.)
- Your conventions — how you name files, how you structure documents, how you communicate
- Routing rules — "when I ask about X, look in folder Y first"
Starter template for a 16-person team manager
```
# Workspace Context

## Role
- Title: [Your Title]
- Team Size: 16 people across [countries/regions]
- Reports To: [Manager Name, Title]
- Key Tools: Slack, [Project Tracker], [Meeting Notes Tool]

## Team Structure
| # | Name   | Role   | Location | Focus Area      |
|---|--------|--------|----------|-----------------|
| 1 | [Name] | [Role] | [City]   | [What they own] |
| 2 | [Name] | [Role] | [City]   | [What they own] |
...

## Current Priorities (This Quarter)
1. [Priority 1 - what and why]
2. [Priority 2 - what and why]
3. [Priority 3 - what and why]

## Intelligence Hub
When answering questions, check `intelligence-hub/` first:
- Team status questions → `intelligence-hub/team-status/`
- Decision history → `intelligence-hub/decisions/`
- Meeting context → `intelligence-hub/meeting-notes/`
- Slack context → `intelligence-hub/slack-digest/`

## Conventions
- Documents: UPPER_SNAKE_CASE.md
- Meeting notes: YYYY-MM-DD_topic.md
- Status updates: weekly, stored in team-status/
```
How it grows
Start at 20 lines. When the AI gets something wrong, add a rule. Mine is now 163 lines core + 500 lines of skills and routing. Or let Claude build it for you:
```
Interview me and build a CLAUDE.md brain file for this workspace. Ask me about
my role, team structure, priorities, tools, and communication preferences.
Build the file section by section.
```
Skills, hooks, and memory (intermediate)
Once the brain file is working, you can add:
- Skills — reusable prompts that trigger on keywords. Example: typing "draft email" auto-loads your writing style and communication rules.
- Hooks — scripts that run before or after AI actions. Example: auto-validate every document against your formatting standards.
- Memory — the AI stores corrections between sessions. Tell it "never use passive voice in emails" once, and it remembers for every future session.
You don't need any of this on day one. The brain file alone gets you 80% of the value.
Chapter 4 The Intelligence Hub
A single folder the AI reads before making any recommendation. Synthesized summaries, not raw data.
| File | What It Contains | Update Frequency |
|---|---|---|
| `team-status.md` | What each person is working on, blockers, capacity | Weekly |
| `decisions.md` | Recent decisions, who made them, rationale | As they happen |
| `meeting-notes/` | Key takeaways from important meetings | After each meeting |
| `slack-digest.md` | Important threads, decisions made in channels | Weekly |
| `priorities.md` | This quarter's focus, what's changed | Monthly |
| `country-context/` | Per-country nuances, regulations, local contacts | As needed |
Auto-query routing
Add routing rules to your brain file — the AI reads the right file based on question type.
Example routing rule (from my CLAUDE.md)
```
## Intelligence Hub Query Routing

When answering questions, check intelligence-hub/ FIRST:

| Question Type               | Read This File First              |
|-----------------------------|-----------------------------------|
| Team capacity / who's free  | intelligence-hub/team-status.md   |
| Why we decided X            | intelligence-hub/decisions.md     |
| Meeting follow-ups          | intelligence-hub/meeting-notes/   |
| Slack threads / context     | intelligence-hub/slack-digest.md  |
| Country-specific questions  | intelligence-hub/country-context/ |
| Initiative status           | intelligence-hub/priorities.md    |
```
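Mechanically, this kind of routing is a keyword lookup. A minimal sketch of the same idea in Python — the trigger words and fallback are my own illustrative assumptions, not how the AI actually parses the brain file:

```python
# Keyword-based router mirroring the brain-file routing table.
# Trigger phrases are illustrative assumptions; tune them to your questions.
ROUTES = [
    (("capacity", "who's free", "bandwidth"), "intelligence-hub/team-status.md"),
    (("why we decided", "decision", "rationale"), "intelligence-hub/decisions.md"),
    (("meeting", "follow-up"), "intelligence-hub/meeting-notes/"),
    (("slack", "thread"), "intelligence-hub/slack-digest.md"),
    (("brazil", "india", "country"), "intelligence-hub/country-context/"),
    (("initiative", "status", "priority"), "intelligence-hub/priorities.md"),
]

def route(question: str) -> str:
    """Return the first hub file whose trigger words appear in the question."""
    q = question.lower()
    for triggers, path in ROUTES:
        if any(t in q for t in triggers):
            return path
    return "intelligence-hub/"  # no match: fall back to browsing the whole hub
```

First match wins, so order the routes from most to least specific — the same discipline applies when you write the routing rules in prose.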
Project-level context override
Put a smaller brain file inside individual project folders. "Brazil market expansion" gets its own CLAUDE.md pointing to intelligence-hub/country-context/brazil.md instead of global team status.
Start with five files
- `team-status.md` — one paragraph per person, updated weekly
- `decisions.md` — a running log of decisions with dates and rationale
- `priorities.md` — this quarter's top 5
- `meeting-notes/` — a folder for important meeting summaries
- `slack-digest.md` — a weekly summary of important Slack activity
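A throwaway scaffold script can create those five starters in one shot — a sketch, where the placeholder headers are my assumptions:

```python
from pathlib import Path

# The five starter files from this chapter; header text is a placeholder assumption.
STARTERS = {
    "team-status.md": "# Team Status\n\n<!-- one paragraph per person, updated weekly -->\n",
    "decisions.md": "# Decision Log\n\n<!-- date | decision | rationale -->\n",
    "priorities.md": "# Quarterly Priorities\n\n1. \n2. \n3. \n4. \n5. \n",
    "slack-digest.md": "# Slack Digest\n\n<!-- weekly summary of important threads -->\n",
    "meeting-notes/.gitkeep": "",  # empty placeholder so git tracks the folder
}

def scaffold(root: str = "intelligence-hub") -> list[str]:
    """Create any missing starter files; return the paths that were created."""
    created = []
    for rel, body in STARTERS.items():
        path = Path(root) / rel
        if not path.exists():
            path.parent.mkdir(parents=True, exist_ok=True)
            path.write_text(body)
            created.append(str(path))
    return created
```

It only creates what's missing, so it's safe to re-run after you start filling the files in.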
NotebookLM as a research engine
NotebookLM (free, Google) — upload intelligence hub files, get citation-backed answers without consuming the AI's context window.
Chapter 5 Connecting Your Tools
MCP (Model Context Protocol) is an open standard that lets AI models talk to external tools. If a tool has an MCP server, Claude Code can read from and write to it directly.
Available MCPs
| Tool Category | MCP Server | What It Does |
|---|---|---|
| Google Workspace | google-workspace | Read/write Gmail, Calendar, Drive, Sheets, Docs |
| Slack | slack-mcp | Read channels, search messages, post updates |
| Jira / Linear / Asana | Various | Read tickets, update status, create issues |
| Meeting transcripts | Various | Search meeting recordings and transcripts |
| Confluence / Notion | Various | Read/write knowledge base pages |
| GitHub | mcp-github | Repos, issues, PRs, code search |
How to add an MCP
```
# Example: add Slack MCP
claude mcp add slack npx -y @anthropic/slack-mcp

# Example: add Google Workspace
claude mcp add google-workspace npx -y google-workspace-mcp
```
Where to find MCPs: github.com/modelcontextprotocol/servers has the official registry. Search for "[your tool] MCP server" and you'll usually find one.
CLI-Anything
No MCP exists for your tool? Doesn't matter. Claude Code runs in a terminal. If a tool has a CLI, the AI can call it directly — no MCP wrapper needed. This is the escape hatch that makes the system work for any stack.
How it works
Claude Code has access to your shell. When you describe what you want, it writes and executes the CLI commands for you. You can also teach it reusable patterns by adding them to your brain file or creating a skill (a reusable prompt template that fires on keywords).
Examples
| Tool | CLI Command the AI Runs | What It Does for You |
|---|---|---|
| Jira | jira issue list --project TEAM --status "In Progress" | Pulls active tickets into your intelligence hub |
| Slack | curl -H "Authorization: Bearer $TOKEN" https://slack.com/api/conversations.history | Reads channel messages, feeds into slack-digest.md |
| AWS | aws cloudwatch get-metric-data ... | Pulls system metrics for incident reports |
| Internal API | curl https://internal.company.com/api/team-status | Fetches live team data, updates team-status.md |
| GitHub | gh pr list --repo your-org/repo --state open | Tracks open PRs across your team's repos |
| kubectl | kubectl get pods -n production | Checks deployment status for incident triage |
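Every row in that table follows the same shape: run a CLI, reshape its output, write markdown into the hub. A hedged sketch using the GitHub row — the `--json` field names are assumptions to verify against `gh pr list --help`, and the glue line at the bottom is illustrative:

```python
import json
import subprocess

def fetch_open_prs(repo: str) -> list[dict]:
    """Run the GitHub CLI; assumes `gh` is installed and authenticated."""
    out = subprocess.run(
        ["gh", "pr", "list", "--repo", repo, "--state", "open",
         "--json", "number,title,author"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

def render_pr_digest(prs: list[dict]) -> str:
    """Reshape the CLI's JSON into a markdown section for the intelligence hub."""
    lines = ["## Open PRs", ""]
    for pr in prs:
        lines.append(f"- #{pr['number']} {pr['title']} ({pr['author']['login']})")
    return "\n".join(lines) + "\n"

# Typical glue (hypothetical path, matching the hub layout in Chapter 4):
# Path("intelligence-hub/pr-digest.md").write_text(render_pr_digest(fetch_open_prs("your-org/repo")))
```

Keeping the fetch and the rendering separate means you can swap `gh` for `jira` or `kubectl` without touching the markdown side.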
Best way to use it
- Start manual — run the CLI command yourself first. Make sure it returns what you need.
- Teach the AI — add the command pattern to your brain file: "When I ask about team tickets, run `jira issue list --project TEAM` and summarize the output."
- Automate it — wrap it in a cron job (Chapter 6) that writes the output to your intelligence hub on a schedule.
```
## CLI Tools Available

When I ask about active tickets, run:
  jira issue list --project TEAM --status "In Progress" -o json

When I ask about Slack activity, run:
  python3 scripts/slack-digest.py --channel general --days 7

When I ask about deployment status, run:
  kubectl get deployments -n production -o wide

Always summarize the output and update the relevant intelligence-hub file.
```
The power move: combine CLI-anything with cron jobs. A script runs `jira issue list` and `slack-digest.py` every Friday and writes the results to your intelligence hub; your AI reads them Monday morning. You walk in knowing what your 16 people did last week without asking anyone.
Meeting transcripts
- Otter.ai — integrates with Zoom/Teams/Meet
- Fireflies.ai — auto-joins calls, transcribes and summarizes
- Built-in — Teams, Zoom, and Meet all have native transcription
Pattern: record, transcribe, pull into intelligence hub, let AI search and synthesize.
Chapter 6 Automated Intelligence
Manual intelligence hub is good. Automated is a force multiplier. Use `crontab -e` on Mac/Linux, Task Scheduler on Windows. Claude Code also has built-in CronCreate.
Patterns I run
| What | When | What It Does |
|---|---|---|
| Weekly team status aggregation | Friday 2pm | Pulls Slack + meeting notes, synthesizes into team-status.md |
| Decision health report | Sunday 9am | Scans decisions.md for stalled items, surfaces them |
| Morning briefing | Monday 7am | Generates a digest of what happened over the weekend |
| Initiative status monitor | Daily 5pm | Checks if any initiative has gone silent for >5 days |
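The "gone silent for >5 days" check in the last row is just date arithmetic over a log file. A sketch — the line format (`- Name (last update: YYYY-MM-DD)` in `priorities.md`) is a convention I'm inventing for illustration:

```python
import re
from datetime import date, timedelta

# Assumed line format: "- Initiative name (last update: YYYY-MM-DD)"
LINE = re.compile(r"^- (?P<name>.+?) \(last update: (?P<d>\d{4}-\d{2}-\d{2})\)$")

def stale_initiatives(text: str, today: date, max_age_days: int = 5) -> list[str]:
    """Return initiatives whose last update is older than max_age_days."""
    stale = []
    for line in text.splitlines():
        m = LINE.match(line.strip())
        if m and today - date.fromisoformat(m["d"]) > timedelta(days=max_age_days):
            stale.append(m["name"])
    return stale
```

The cron job would run this daily at 5pm and append anything stale to a report the AI surfaces next morning.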
How to set up a cron job
```
# Open the cron scheduler
crontab -e

# Add a weekly Friday 2pm job:
0 14 * * 5 cd ~/work-hub && python3 scripts/weekly_status.py

# Add a daily 5pm check:
0 17 * * * cd ~/work-hub && python3 scripts/initiative_monitor.py
```
Starter crons for your situation
With 16 people across multiple countries, I'd start with three:
- Weekly team status aggregation — Friday afternoon, pull Slack activity per person, update team-status.md
- Daily decision log — End of day, scan for decisions made in meetings/Slack, append to decisions.md
- Monthly country review — First of each month, generate a summary per country from the past 30 days
Get the folders and brain file working first. Use it manually for a few weeks, then automate what's worth automating.
Remote triggers
Claude Code can run scheduled agents even when you're not in a session:
```
# Inside Claude Code, create a scheduled task:
/schedule create --cron "0 14 * * 5" --prompt "Generate the weekly team status report and update intelligence-hub/team-status.md"
```
Chapter 7 Skills, Agents & the Learning Loop
This is where the system stops being a static setup and starts improving itself. Three concepts: skills (reusable prompts), agents (subprocesses with their own context), and the learning loop (the AI gets better from your corrections).
Skills — why and when
A skill is a reusable prompt that triggers on keywords. Instead of typing the same complex instruction every time, you write it once and it fires automatically.
| You Say | Skill Fires | What It Does |
|---|---|---|
| "weekly digest" | weekly-digest | Reads intel hub, generates team summary |
| "draft email to [name]" | draft-email | Loads your writing style + communication rules, drafts email |
| "meeting prep for [topic]" | meeting-prep | Pulls relevant intel hub files, generates briefing doc |
| "blocker report" | blocker-report | Scans team-status.md for red/yellow items, surfaces them |
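Of the four, `blocker-report` is the easiest to picture as code: scan `team-status.md` for status markers. A sketch, assuming entries are tagged `[red]` / `[yellow]` / `[green]` — my own illustrative convention, not a standard:

```python
def blocker_report(status_md: str) -> dict[str, list[str]]:
    """Group team-status lines by [red]/[yellow] tags; ignore green/untagged."""
    report = {"red": [], "yellow": []}
    for line in status_md.splitlines():
        lowered = line.lower()
        for level in report:
            if f"[{level}]" in lowered:
                report[level].append(line.strip())
    return report
```

The skill itself would just run this over the hub file and hand the grouped lines back for the AI to summarize.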
How to pick what becomes a skill
If you've typed the same kind of request 3+ times, it's a skill. The test: "Would I explain this the same way to a new hire?" If yes, write it down once and let the AI reuse it.
```
---
name: weekly-digest
description: |
  Generate weekly team status digest from intelligence hub files.
  Use when: "weekly digest", "team status", "what happened this week".
---

# Weekly Digest

1. Read intelligence-hub/team-status.md
2. Read intelligence-hub/slack-digest.md
3. Read intelligence-hub/decisions.md
4. Read any meeting notes from past 7 days
5. Generate summary: Highlights, Blockers, Decisions, Action Items
```
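Under the hood, keyword triggering amounts to matching the quoted "Use when" phrases from that frontmatter against your message. A simplified sketch — real wrappers do richer matching, and this regex parsing assumes exactly the frontmatter shape shown:

```python
import re

def parse_skill(skill_md: str) -> tuple[str, list[str]]:
    """Extract the skill name and its quoted "Use when" trigger phrases
    from a frontmatter block shaped like the example above."""
    name = re.search(r"^name:\s*(\S+)", skill_md, re.M).group(1)
    use_when = re.search(r"Use when:(.+)", skill_md)
    triggers = re.findall(r'"([^"]+)"', use_when.group(1)) if use_when else []
    return name, triggers
```

This is why well-chosen trigger phrases matter: the phrases you quote in the description are the only thing standing between "fires every time" and "never fires".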
Agents — why a research agent matters
Your main conversation has a finite context window. Long research pollutes it — you burn tokens on investigation and have less room for the actual work. An agent is a subprocess with its own context window. It does the research, synthesizes a brief, and returns just the result.
When to use an agent vs. asking directly
| Situation | Approach |
|---|---|
| Quick factual question | Ask Claude directly |
| Need 3+ sources compared | Research agent |
| Need current web data | Research agent (can search the web) |
| Building a recommendation or case | Research agent |
| Parallel independent tasks | Multiple agents (each gets own context) |
```
---
name: deep-research
description: Research agent — investigates a topic across multiple sources, returns a synthesized brief with citations.
model: opus
maxTurns: 25
---

# Deep Research Agent

1. Take the research question
2. Search web, read documents, query APIs
3. Cross-reference multiple sources
4. Produce brief: Key Findings, Analysis, Recommendations, Sources
5. Return brief to main session
```
The learning loop
This is what makes the system compound over time instead of staying flat.
- You correct the AI — "don't use passive voice in emails" or "always check team-status.md before recommending priorities"
- The AI stores the correction — in its memory system, persisted between sessions
- Next session it applies it — without you repeating yourself
- You review periodically — a `/reflect` command surfaces stored corrections so you can promote them to permanent brain file rules
The brain file is manual. The learning loop is automatic. Together, the AI gets measurably better every week you use it. My workspace has 918 logged corrections that turned into rules, skills, and routing changes over 5 months.
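The storage side of the loop can be as plain as an append-only log. A minimal sketch — the file name, tab-separated format, and "promote after 2 repeats" threshold are all my assumptions:

```python
from datetime import date
from pathlib import Path

def log_correction(text: str, log_path: str = "corrections.log") -> None:
    """Append a dated correction; one line per rule candidate."""
    with open(log_path, "a") as f:
        f.write(f"{date.today().isoformat()}\t{text}\n")

def corrections_to_promote(log_path: str = "corrections.log",
                           min_count: int = 2) -> list[str]:
    """Corrections logged min_count+ times are candidates to become
    permanent brain-file rules — a crude stand-in for the review step."""
    counts: dict[str, int] = {}
    for line in Path(log_path).read_text().splitlines():
        _, _, text = line.partition("\t")
        counts[text] = counts.get(text, 0) + 1
    return [t for t, n in counts.items() if n >= min_count]
```

Review the promote list weekly; anything that keeps recurring earns a permanent line in CLAUDE.md.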
Start simple. You don't need skills or agents on day one. Use the brain file and intelligence hub for a few weeks. When you notice yourself repeating instructions, that's when you write a skill. When research starts eating your context, that's when you spawn an agent.
Chapter 8 The Walk-Through
Checklist for our call. ~2 hours total. Start by forking the template repo:
```
gh repo create my-workspace --template HR-AR/ai-workspace-starter --private --clone
cd my-workspace
```
Step 1: Install Claude Code CLI
Requires Node.js first, then:
```
npm install -g @anthropic-ai/claude-code
claude
```
Step 2: Review folder structure
The template comes with PARA (Chapter 2). Rename or restructure if you prefer Domain-First or Timeline-First.
Step 3: Customize your CLAUDE.md brain file
The template has a starter. Customize it, or let Claude interview you:
```
Interview me and build a CLAUDE.md brain file. I manage 16 people across
multiple countries. Ask me about my role, team structure, priorities, tools,
and how I want to communicate with you.
```
Step 4: First real conversation — verify the AI knows you
Ask it something that requires context. If it answers generically, the brain file needs more detail.
```
What are the top three things I should focus on this week based on my
priorities and team structure?
```
Step 5: Fill in the intelligence hub
The template has empty files in intelligence-hub/. Fill in team-status.md, decisions.md, priorities.md with real data. See Chapter 4.
Step 6: Connect your first MCP
Start with whatever tool leaks the most context. Usually Slack or Google Workspace. See Chapter 5.
Step 7: Set up your first cron
A weekly team status aggregation is a good first one. See Chapter 6.
Step 8: Iterate
Use it for a week. Add rules when the AI gets something wrong. Add files when you notice gaps. It compounds.
Quick reference
| Action | Command |
|---|---|
| Start Claude Code | claude |
| Save your work | git add -A && git commit -m "message" && git push |
| Add an MCP | claude mcp add [name] [command] |
| List MCPs | claude mcp list |
| Open Claude Code docs | docs.anthropic.com/en/docs/claude-code |
| MCP registry | github.com/modelcontextprotocol/servers |
| NotebookLM | notebooklm.google.com |
Pricing changes. Claude Code requires an Anthropic API key or a Claude Max subscription (~$100-200/month depending on plan). Check anthropic.com/pricing for current rates. NotebookLM is free.