A Leadership Guide

THE 5 LEVELS
OF AI

From Chat to Autonomous Agents —
What Every Leader Needs to Know

Harish · April 2026

01 / 45

The Landscape

Making Sense of the Buzzwords

When people say "AI," they usually mean one specific layer. Here's how it all fits together — each ring builds on the one inside it.

What matters for us

Generative AI & LLMs are where the current revolution lives. Everything we discuss today sits in these two rings.

AI technology layers — concentric rings from AI to Agents
02 / 45

The Timeline

How We Got Here

The AI explosion wasn't one breakthrough — it was scale + data + a chat interface that made it accessible to everyone.

2017
Transformer paper "Attention Is All You Need"
2018
GPT-1 117M params
2020
GPT-3 175B params "It got useful"
Nov 2022
ChatGPT launches. 100M users in 2 months
2023
GPT-4, Claude, Gemini. Multimodal era
2024
Reasoning models, agent frameworks
2025-26
Autonomous agents. AI that does work

Key insight: Parameter count went from 117M to a reported ~1.8T in five years, roughly a 15,000x increase. But the real unlock was RLHF (reinforcement learning from human feedback) plus the chat interface, which made AI usable by non-engineers.

03 / 45

The Imperative

Why This Matters Now

$4.4T
Annual productivity potential from GenAI
— McKinsey, 2024
87%
of Fortune 500 CEOs say AI is a top-3 priority
— PwC CEO Survey, 2025
16%
of organizations are actually executing on AI strategy
— BCG Global AI Report

The gap between knowing and doing is where competitive advantage lives. Today we'll walk through 5 levels of AI adoption — from basics to autonomous agents.

04 / 45

Level 1

The Conversation

AI as your thinking partner. You ask, it answers.

This is where 90% of people are today.

05 / 45

Level 1

What It Looks Like

You type a question, you get an answer. It's Google that talks back — but understands context, nuance, and follow-ups.

ChatGPT — OpenAI
Gemini — Google
Claude — Anthropic
Perplexity — AI-powered search
ChatGPT conversation interface
06 / 45

Level 1 · Industry Example

Even Simple AI Has Massive ROI

Klarna

2/3

of all customer chats handled by AI in the first month. Equivalent to 700 full-time agents. Resolution time dropped from 11 minutes to 2 minutes.

Klarna AI Assistant, Feb 2024

Bain & Company

4+ hrs

saved per week per consultant using ChatGPT for email drafting, meeting prep, and research summaries. Across 18,000 employees, that's 3.7M hours/year.

Bain internal productivity study, 2024

The lesson: Level 1 isn't trivial. Companies that rolled out even basic chat AI saw measurable impact within weeks — not months.

07 / 45

Level 1 · How I Use This

"I Use This Every Day"

Daily thinking partner for brainstorming, summarizing long documents, drafting communications, and exploring ideas before committing time.

✓ Draft investor communications in minutes
✓ Summarize 50-page reports into 1-page briefs
✓ Brainstorm strategies before team meetings
✓ Get second opinions on decisions
Gemini AI conversation
08 / 45

Level 1 · The Skill That Matters Most

Prompting Best Practices

The quality of AI output is directly proportional to the quality of your input. A good prompt turns a generic chatbot into a domain expert.

1. Give Context & Role

Weak: "Write a market update."
Strong: "You are a senior equity analyst at an AMC. Write a 200-word market update for our IFA partners covering this week's Nifty movement and top 3 sector trends."

2. Be Specific & Structured

Tell AI the format, length, audience, and tone you want. Use bullet points for multi-part requests. The more precise your ask, the less you'll need to revise.

3. Iterate, Don't Start Over

AI remembers the conversation. Say "Make it more concise," "Add data points," or "Rewrite for a CXO audience" — don't retype the entire prompt. Build on what you have.

4. Feed It Examples

Show AI what "good" looks like. Paste a previous report you liked and say "Write in this style." Few-shot examples dramatically improve output quality.

The 80/20 rule of prompting: Spending 2 extra minutes crafting a detailed prompt saves 20 minutes of back-and-forth editing. Prompting is the new business literacy.
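The four practices above can be expressed in code. A minimal sketch of a structured, few-shot prompt as a chat-style message list — the function name, roles, and example texts are illustrative, but most chat-completion APIs accept this general shape:

```python
# Sketch: role + context, specifics, and few-shot examples assembled
# into a chat-style message list. Illustrative only — the provider
# call itself is omitted.

def build_prompt(role, task, examples, constraints):
    """Assemble a structured prompt: role, constraints, few-shot pairs, then the task."""
    messages = [{"role": "system", "content": f"You are {role}. {constraints}"}]
    # Few-shot: show the model what "good" looks like before asking.
    for sample_input, sample_output in examples:
        messages.append({"role": "user", "content": sample_input})
        messages.append({"role": "assistant", "content": sample_output})
    messages.append({"role": "user", "content": task})
    return messages

messages = build_prompt(
    role="a senior equity analyst at an AMC",
    task="Write a 200-word market update for our IFA partners covering "
         "this week's Nifty movement and top 3 sector trends.",
    examples=[("Last week's update request", "A previous update in the house style...")],
    constraints="Keep it to 200 words, professional tone, bullets for trends.",
)
```

Everything after the system message is just the conversation history, which is why "iterate, don't start over" works: each follow-up is appended, not retyped.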

09 / 45

Level 1 · Risk

What's at Stake

● MEDIUM RISK

Everything you type goes to the provider's servers

At Level 1, the AI provider (OpenAI, Google, Anthropic) processes your queries on their infrastructure. Free-tier conversations may be used for training.

✗ Don't paste confidential financials, client data, or trade secrets
✗ Don't share employee PII or internal strategy documents
✓ Safe for general research, public information, brainstorming
✓ Business and enterprise plans exclude your data from training by default; on consumer plans (ChatGPT Plus, Claude Pro), check your data-control settings

Rule of thumb: If you wouldn't say it in a coffee shop, don't type it into a free AI chatbot.

10 / 45

Level 2

The Creator

AI as your research analyst and content engine. You stop just asking questions and start producing.

The output is no longer text in a chat — it's finished artifacts you can use.

11 / 45

Level 2 · The Toolkit

What Unlocks at Level 2

Deep Research

ChatGPT, Gemini, and Perplexity can read 50+ sources and produce cited research reports in minutes.

Image Generation

DALL-E, Midjourney, Imagen — create professional visuals, diagrams, marketing assets from text descriptions.

NotebookLM

Google's tool that turns documents into AI podcasts, study guides, and interactive Q&A — from your own data.

Document Drafting

Long-form reports, proposals, emails, and memos — not just suggestions, but complete first drafts.

Presentation AI

Tools like Gamma, Beautiful.ai, and Claude Artifacts create complete slide decks from bullet points.

Video & Audio

Sora, HeyGen, ElevenLabs — generate videos, voiceovers, and talking-head presentations from text.

12 / 45

Level 2 · Deep Research

Research in Minutes, Not Days

Deep Research modes read dozens of sources, cross-reference information, and produce structured reports with citations. Days of junior-analyst work, done in 15 minutes.

What changed

Traditional: brief an analyst → 2-3 days → review → revise.
Now: describe what you need → 15 min → done.

ChatGPT Deep Research
13 / 45

Level 2 · Industry Example

Morgan Stanley: AI for 16,000 Advisors

98%

Advisor Adoption

Nearly every financial advisor uses the AI Assistant daily — trained on 100,000+ internal research reports.

$64B

Net New Assets

Q3 2024 net new assets — advisors armed with instant research spend more time with clients, less time searching.

30sec

Research Access

Questions that used to require calling the research desk are now answered in 30 seconds: the firm's entire intellectual capital, instantly accessible.

The model: Morgan Stanley didn't build new AI — they gave advisors Level 2 access to their existing research via an AI interface. The AI is the distribution layer for institutional knowledge.

14 / 45

Level 2 · How I Use This

From Hours to Minutes

I use Level 2 tools for research, content creation, and document generation — work that used to take half a day now takes 20 minutes.

✓ Competitive analysis reports via Deep Research
✓ NotebookLM audio summaries of long strategy docs
✓ AI-generated images for presentations
✓ First drafts of board memos and investor updates
NotebookLM and AI content tools
15 / 45

Level 2 · Risk

The Data Exposure Escalates

● MEDIUM-HIGH RISK

Now you're uploading documents — PDFs, spreadsheets, internal reports

The AI provider processes your uploaded files to generate analysis. The more context you give AI, the more powerful it gets — but the more exposed your data becomes.

✗ Don't upload client lists, deal pipelines, or M&A documents
✗ Don't feed proprietary financial models into free-tier tools
✓ Use enterprise plans with data processing agreements
✓ NotebookLM keeps your sources inside Google's enterprise boundary and does not train on them

The tension that runs through all 5 levels: more context = more power = more risk.

16 / 45

Level 3

The Power User

AI that knows your work, your files, and your tools. A persistent collaborator, not a one-off assistant.

17 / 45

Level 3

What Changes at Level 3

At Levels 1-2, every conversation starts from zero. At Level 3, AI has memory, projects, and connections.

Projects

Persistent context — upload company docs, set custom instructions. The AI "knows" your business across conversations. Like briefing a consultant once instead of every meeting.

Artifacts

AI builds things live — interactive charts, working applications, formatted documents, code — right inside the chat. Not just answers, but usable deliverables.

Plugins & MCP

AI connects to your tools — Google Drive, Slack, databases, APIs. Model Context Protocol (MCP) is like USB-C for AI: one standard adapter for everything.

Platform example: Claude Pro — with Projects, Artifacts, and MCP integrations. This is where AI transitions from tool to teammate.

18 / 45

Level 3 · In Action

Claude Projects & Artifacts

Upload your documents, set the context, and Claude responds with deep institutional knowledge — then builds live, interactive deliverables.

Claude Projects workspace
19 / 45

Level 3 · Connections

MCP: The Universal Adapter for AI

Model Context Protocol (MCP) lets AI plug into any tool — the way USB-C works for hardware. One standard, infinite connections.

Google Drive
Slack
Databases
GitHub
CRM
Claude with MCP
Research reports
Data analysis
Automated actions
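Under the hood, MCP messages are JSON-RPC 2.0: the AI (client) asks your tool (server) to run a named tool with structured arguments. A sketch of what one tool call looks like on the wire — the `search_drive` tool name and its arguments are hypothetical, invented for illustration:

```python
import json

# Sketch of an MCP tool-call request as JSON-RPC 2.0.
# The tool name and arguments are illustrative, not from a real server.
tool_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_drive",                       # hypothetical tool
        "arguments": {"query": "Q3 board memo", "max_results": 5},
    },
}

wire_message = json.dumps(tool_call)  # what actually crosses the connection
```

The "USB-C" point is that every server answers the same small set of methods (list tools, call a tool), so one client can talk to Drive, Slack, or a database without bespoke integration code.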
20 / 45

Level 3 · How I Use This

My AI Workspace

I run Claude Projects loaded with company context — so every conversation starts with deep knowledge of our business, our data, and our strategy.

✓ Project loaded with all strategy documents
✓ Artifacts for live dashboards and presentations
✓ Connected to Google Sheets for real-time data
✓ Background agents that work while I'm in meetings
Claude Co-Work background tasks
21 / 45

Level 3 · Risk

Access Governance Enters the Picture

● MEDIUM-HIGH RISK

AI now has persistent memory and connects to live company systems

Plugins connect to real data sources. Projects remember past conversations. Co-work tasks run autonomously in the background. This isn't just data privacy anymore — it's access governance.

⚠ Who controls what the AI can access?
⚠ What happens when an employee leaves?
⚠ Can AI-connected plugins be exploited?
✓ Enterprise plans offer SSO, audit logs, and admin controls
22 / 45

Level 4

The Command Line

AI that can see your entire codebase, run programs, and make changes directly. This is where AI becomes a hands-on worker.

23 / 45

Level 4

The Key Shift: From Chat to Computer

Imagine the difference between describing a house to an architect over the phone vs. walking through the house together.

Levels 1-3: Chat Window

→ You describe the problem
→ AI suggests a solution in text
→ You copy-paste and implement manually
→ You go back to report errors
→ Repeat

Level 4: Full Computer Access

→ AI reads every file in the project
→ AI edits multiple files simultaneously
→ AI runs the code and tests it
→ AI sees errors and fixes them itself
→ You review the finished work

The tools: Claude Code (terminal), Cursor (IDE), GitHub Copilot (IDE), Windsurf (IDE). These give AI direct access to your file system, terminal, and development environment.

24 / 45

Level 4 · Claude Code

AI in the Terminal

Claude Code runs directly in your terminal. It reads files, understands context, makes changes, runs tests, and iterates — like a senior developer pair-programming with you.

Claude Code terminal session
25 / 45

Level 4 · IDE

AI Inside Your Editor

Cursor and VS Code put AI directly in the developer's workspace — reading one file while editing another, with the developer watching every change in real time.

The business case: Every developer in your company could have a tireless pair programmer who knows the entire codebase and never takes a break.

Cursor AI IDE
26 / 45

Level 4 · Example

Building a Complete Application

A real example: I asked Claude Code to build a full web application — from database to frontend to deployment.

Terminal — Building mutual-fund-dost
You: Build a complete mutual fund advisory app. Users can ask about funds, compare performance, and get AI-powered recommendations. Use Next.js, Supabase, and Gemini API.

Claude: I'll build this end-to-end. Here's my plan:
1. Set up Next.js project with Tailwind CSS
2. Create Supabase schema for funds, users, queries
3. Build API routes for fund data and AI recommendations
4. Create frontend components
5. Test and deploy

▶ Created 15 files across 4 directories
▶ Database schema: 4 tables, 3 indexes, 2 RLS policies
▶ API routes: /api/funds, /api/recommend, /api/compare
▶ Components: FundSearch, CompareView, AIAdvisor, FundCard

Running tests... ✓ 12/12 passed
Building... ✓ Build successful
Deploying to Vercel... ✓ Live at mutual-fund-dost.vercel.app

Total time: 47 minutes. Would have taken a team ~1 week.
27 / 45

Level 4 · Example

Automated Code Review

AI reviews every pull request — finding bugs, suggesting improvements, checking security vulnerabilities. It doesn't get tired, doesn't have ego, and reviews in seconds.

Business impact: Code review is a bottleneck in every engineering team. AI reviews free up senior engineers for architecture work instead of line-by-line review.

GitHub AI code review
28 / 45

Level 4 · Example

Multi-File Orchestration

The real power: AI working across an entire project simultaneously — reading docs, editing config, updating tests, and running builds in one session.

Terminal — claude (multi-file refactor)
You: Migrate our authentication from JWT to OAuth 2.0. Update all API routes, middleware, tests, and documentation.

Claude: Analyzing the codebase for all auth touchpoints...

📄 Reading 23 files with auth references
📄 Found: 4 middleware files, 12 API routes, 5 test files, 2 config files

▶ Edited src/middleware/auth.ts — replaced JWT verify with OAuth token validation
▶ Edited src/lib/session.ts — new session management with refresh tokens
▶ Edited 12 API routes — updated auth checks to use new middleware
▶ Edited 5 test files — updated mocks and assertions for OAuth flow
▶ Edited .env.example — added OAUTH_CLIENT_ID, OAUTH_SECRET
▶ Updated docs/authentication.md — new auth flow documentation

Running full test suite... ✓ 47/47 tests passed
✓ Migration complete across 23 files. Zero breaking changes.
29 / 45

Level 4 · Industry Example

This Isn't Coming — It's Here

Google

25%

of all new code at Google is now AI-generated, then reviewed by engineers. Across one of the world's largest codebases.

Google CEO Sundar Pichai, Q3 2024

GitHub Copilot

55%

faster task completion for developers using Copilot. Over 77,000 organizations have adopted it. 1.8M+ paid users.

GitHub Copilot Impact Study, 2024

Cognition (Devin)

End-to-end

AI software engineer handling complete engineering tickets autonomously — from reading the issue to writing code to submitting the PR.

Cognition Labs, 2024-25

The takeaway: The top tech companies have already shifted their development workflow. AI isn't replacing developers — it's making each one 2-3x more productive.

30 / 45

Level 4 · Risk

The Keys to the Server Room

● HIGH RISK

AI now has access to your terminal, files, source code, and can execute commands

At Level 4, AI reads environment variables, API keys, credentials, and can make system calls. It has the same access as the developer running it.

⚠ Can read API keys and database credentials
⚠ Can execute arbitrary shell commands
⚠ Can modify production configuration
✓ Permission models limit what AI can do without approval
✓ Sandboxing restricts file system and network access

Would you hand a contractor the keys to your server room on day one? Sandboxing and permissions are non-negotiable.
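A sketch of what "permission models limit what AI can do" means in practice: an allowlist gate that every command the AI proposes must pass before execution. The commands and the three-way policy here are illustrative, not any specific tool's configuration:

```python
import shlex

# Illustrative permission gate for a Level 4 coding tool.
ALLOWED = {"ls", "cat", "pytest", "git"}          # read/test commands run freely
REQUIRES_APPROVAL = {"rm", "curl", "npm", "pip"}  # a human must confirm these

def gate(command: str) -> str:
    """Classify a proposed shell command: run, ask-human, or block."""
    program = shlex.split(command)[0]
    if program in ALLOWED:
        return "run"
    if program in REQUIRES_APPROVAL:
        return "ask-human"
    return "block"  # anything unrecognised is denied by default
```

Deny-by-default is the design choice that matters: the AI earns access command by command, the same way a new contractor would.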

31 / 45

Level 5

The Autonomous Agent

AI that works while you sleep. You define the goal — it plans, executes, and reports back.

32 / 45

Level 5

The Fundamental Shift

Levels 1-4 are human-in-the-loop — you ask, AI does, you review. Level 5 is AI-in-the-loop — you define the goal, AI handles everything else.

Levels 1-4: Human Drives

🧑
instruction
🤖
result
🧑
Human reviews, gives next instruction

Level 5: AI Drives

🧑
single goal
🤖 🤖 🤖 🤖
plan → research → build → test → deploy
🧑
Human reviews finished work
33 / 45

Level 5 · Architecture

Multi-Agent Systems

Instead of one AI doing everything, specialized agents coordinate like a project team — each with a role.

🎓 Orchestrator Agent
🔎 Research Agent
✎ Writing Agent
✓ QA Agent
🚀 Deploy Agent
📦 Final Output

It's like hiring a department, not a person. Tools: Claude Agent SDK, LangGraph, CrewAI, AutoGen.
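The division of labour above can be sketched in a few lines, with stub functions standing in for real LLM-backed agents. All names here are illustrative, not any framework's API:

```python
from concurrent.futures import ThreadPoolExecutor

# Orchestrator pattern sketch: each "agent" is a function with one role;
# the orchestrator fans work out in parallel, then gates it through QA.

def research_agent(topic):
    return f"research notes on {topic}"      # stub for an LLM research call

def writing_agent(topic):
    return f"draft section on {topic}"       # stub for an LLM drafting call

def qa_agent(drafts):
    # A real QA agent would critique drafts and request revisions.
    return all(d.startswith("draft") for d in drafts)

def orchestrator(topics):
    with ThreadPoolExecutor() as pool:
        notes = list(pool.map(research_agent, topics))   # parallel research
        drafts = list(pool.map(writing_agent, topics))   # parallel drafting
    assert qa_agent(drafts), "QA gate failed"
    return {"notes": notes, "drafts": drafts}

result = orchestrator(["pricing", "competitors", "regulation"])
```

The frameworks named above (Claude Agent SDK, LangGraph, CrewAI, AutoGen) provide the same skeleton with LLM calls, state, and retries filled in.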

34 / 45

Level 5 · Example

Automated Daily Research Pipeline

An agent monitors 50+ industry sources every morning, identifies relevant news, cross-references with our portfolio, and emails a summary brief by 7 AM. Every day. No human involved.

50+ Sources
Scrape & Filter
Cross-Reference
Analyze & Rank
Draft Brief
Email by 7 AM

Sources

AMFI, SEBI circulars, Morningstar, Value Research, ET Markets, Bloomberg, competitor press releases, regulatory filings

Intelligence

Identifies trends, competitor moves, regulatory changes, and market shifts relevant to our specific portfolio and strategy

Output

Structured morning brief with priority flags, action items, and links to sources — in your inbox before your first coffee

35 / 45

Level 5 · Example

Agents That Never Sleep

Scheduled agents run on cron — daily, weekly, or continuously. Work that happens 24/7, not just during business hours.

Scheduled Agents Dashboard
Morning Brief Agent · Daily 6:00 AM · Last run: ✓ 6:12 AM today
Security Scanner · Daily 2:00 AM · Last run: ✓ 2:04 AM today
Competitor Monitor · Weekly, Mon 8:00 AM · Last run: ✓ Mon 8:07 AM
Compliance Checker · Daily 11:00 PM · Last run: ⚠ 1 finding
Report Generator · Weekly, Fri 5:00 PM · Last run: ✓ Fri 5:23 PM
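"Runs on cron" is literal. A sketch of a crontab matching the schedule above (fields are minute, hour, day-of-month, month, weekday; the script paths are placeholders):

```
0 6  * * *   /opt/agents/morning_brief.sh        # daily, 6:00 AM
0 2  * * *   /opt/agents/security_scan.sh        # daily, 2:00 AM
0 8  * * 1   /opt/agents/competitor_monitor.sh   # Mondays, 8:00 AM
0 23 * * *   /opt/agents/compliance_check.sh     # daily, 11:00 PM
0 17 * * 5   /opt/agents/report_generator.sh     # Fridays, 5:00 PM
```

The scheduler is decades-old infrastructure; what's new is that each line now triggers an agent that reads, reasons, and writes, not just a script that copies files.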
36 / 45

Level 5 · Example

Full Project Execution

The most impressive capability: give an agent a project brief and it breaks the work into tasks, spins up parallel sub-agents, and delivers the finished product.

Agent Orchestrator — Project Execution
Brief: "Create a complete competitive analysis presentation for the board meeting next week."

Orchestrator: Breaking into 8 tasks. Dispatching parallel agents...

Agent 1: Scraped 5 competitor websites
Agent 2: Pulled financial data from filings
Agent 3: Analyzed product feature matrices
Agent 4: Researched market share data
Agent 5: Drafted SWOT analysis
Agent 6: Created data visualizations
Agent 7: Built 32-slide presentation
Agent 8: QA review and formatting

✓ Complete. 32-slide board-ready deck delivered in 1 hour 47 minutes.
Estimated manual effort: 3-5 person-days.
37 / 45

Level 5 · Example

The AI Assembly Line

Real workflows where multiple AI systems chain together — each step feeding the next. Not one clever chatbot, but industrial automation for knowledge work.

Data Ingestion
Analysis
Report Draft
Presentation
Review
Distribution

Ingest

Agent pulls raw data from APIs, databases, emails, and document stores automatically

Transform

Analysis agent cleans, structures, cross-references, and identifies insights

Deliver

Formatting agent creates reports, presentations, and emails — then distributes to the right people

Each step is a different agent or tool. The pipeline runs end-to-end without human intervention.
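The three stages above can be sketched as a chained pipeline, with stubs standing in for the real ingestion, analysis, and formatting agents (all names here are illustrative):

```python
# Assembly-line sketch: each stage is a separate agent/tool, and each
# stage's output is the next stage's input. Stubs stand in for real agents.

def ingest(sources):
    # Real version: pull from APIs, databases, emails, document stores.
    return [{"source": s, "raw": f"data from {s}"} for s in sources]

def analyze(records):
    # Real version: clean, cross-reference, and rank for relevance.
    return {"insights": [r["raw"].upper() for r in records]}

def deliver(analysis):
    # Real version: format a report and distribute to the right people.
    return "MORNING BRIEF\n" + "\n".join(analysis["insights"])

PIPELINE = [ingest, analyze, deliver]

def run_pipeline(sources):
    result = sources
    for stage in PIPELINE:   # output of each stage feeds the next
        result = stage(result)
    return result

brief = run_pipeline(["AMFI", "SEBI"])
```

Because the stages only share data, any one of them can be swapped — a new model, a new formatter — without touching the rest, which is what makes this industrial automation rather than one clever chatbot.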

38 / 45

Level 5 · Industry Example

The Agentic Future Is Already Here

Salesforce Agentforce

380M+

customer interactions handled autonomously. Agents resolve cases, qualify leads, and manage commerce — without human handoff for routine tasks.

Bloomberg

Minutes

to process financial filings that took analysts days. AI agents extract data, generate summaries, and flag anomalies across thousands of filings simultaneously.

McKinsey Projection

"Agentic AI could automate 60-70% of current knowledge worker tasks by 2028." — McKinsey Global Institute, The State of AI 2025. The companies building these pipelines now will have a 3-5 year structural advantage.

39 / 45

Level 5 · Risk

The Highest Stakes

● CRITICAL RISK

Autonomous agents make decisions without human review

They chain actions — read data, make API calls, send emails, modify systems. One misconfigured agent can cause cascading damage at 3 AM when no one is watching.

⚠ What's the blast radius if an agent goes wrong?
⚠ Who is accountable for an AI's autonomous decision?
⚠ How do you audit what happened at 3 AM?
✓ Implement kill switches and human-approval gates
✓ Comprehensive logging and audit trails are mandatory

This is where governance stops being a "nice to have" and becomes the foundation.

40 / 45

Governance

The Three-Zone Data Framework

You don't need to pick one. You need a policy for when to use which.

🟢 Public AI (Green Zone)

What: ChatGPT, Gemini, Perplexity — public cloud. Provider processes your data.

Use for: General research, brainstorming, public info, personal productivity.

Risk: Low — but never paste confidential data.

Cost: $20-100/month per user.

🟠 Managed AI (Amber Zone)

What: AWS Bedrock, Azure OpenAI, Google Vertex — your cloud, your encryption.

Use for: Internal documents, customer data analysis, proprietary research.

Risk: Medium — data in your VPC, provider doesn't train on it.

Cost: Usage-based, typically $1-5K/month.

🔴 Self-Hosted (Red Zone)

What: Open-source models (Llama, Mistral, Qwen) on your own infrastructure.

Use for: Regulated data, trade secrets, compliance-critical tasks.

Risk: Lowest exposure — but highest ops burden.

Cost: GPU infra, $5-50K+/month.

41 / 45

Governance

The 10-Second Decision

Every employee should be able to answer "which zone?" in 10 seconds. Here's the flowchart.

Is the data confidential?
NO
🟢 Public AI — Go ahead
YES
Is it regulated data?
NO
🟠 Managed (Bedrock)
YES
🔴 Self-Hosted
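The flowchart above fits in a five-line function. A sketch — zone labels match the slides, and the two yes/no inputs are a deliberate simplification of a real data-classification policy:

```python
# The 10-second decision as code. Zone names follow the slides;
# real policy would carry more nuance than two booleans.

def choose_zone(confidential: bool, regulated: bool) -> str:
    if not confidential:
        return "green"   # Public AI — go ahead
    if not regulated:
        return "amber"   # Managed (Bedrock / Azure OpenAI / Vertex)
    return "red"         # Self-hosted
```

If the decision can be written as a function this small, every employee can learn it; that is the point of the 10-second test.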
42 / 45

The Path Forward

Where Should We Be?

Most orgs
L1
Chat
L2
Content
Advantage starts
L3
Power User
L4
CLI & IDE
Market leaders
L5
Agents

The question isn't whether to use AI. It's how fast we climb. Each level multiplies the productivity gains of the one below it. The gap between Level 1 and Level 5 isn't incremental — it's exponential.

43 / 45

Behind the Scenes

This Presentation Was Built by AI — In This Session

claude — ~/aidecks
harish: I want to talk about 5 levels of using AI... Level 1 simple ChatGPT... Level 5 agentic frameworks... real-life examples, not theory. Make this top-class.

claude: Who is your leadership team? CXO-level, non-technical
claude: How long? 45-60 minutes, ~35-45 slides
claude: Examples? Mix of industry + personal proof points
claude: Governance weight? Woven throughout, not a separate section
claude: Intro depth? Quick and sharp, 2-3 slides

claude: Presenting design in sections...
Section 1: Overall Structure & Narrative Arc ✓ approved
Section 2: The Landscape (Slides 1-4) ✓ approved
Section 3-8: Levels 1-5 + Guardrails ✓ approved

harish: for CLI and Agent, add more slides for examples
claude: Expanded L4 and L5 to 8-9 slides each.

▶ Building presentation...
Created ai-levels.html — 44 slides, 1800+ lines
Built 7 CSS mockups (ChatGPT, Claude, VS Code, terminal...)
Built SVG diagrams (onion ring, timeline, agent architecture...)
Zero external dependencies — everything inline
✓ Live at aidecks.xisunknown.com

harish: add a slide on prompting best practices
claude: ✓ Inserted slide 9, renumbered 44 → 45 slides

harish: save this session as a slide so I can show how this was made
claude: ✓ You're looking at it.

Total time from brief to finished 45-slide presentation: one conversation. This is Level 4.

44 / 45

The companies that win the next decade won't have the best AI —

They'll be the ones who learned to use it first.

Harish · April 2026

Let's discuss.

45 / 45