AI agents are LLMs that can take actions, not just generate text. This page explains agent types, architectural patterns, and the tool-use paradigm that transforms a chatbot into an autonomous system.

An AI agent is an LLM-powered system that can:

  1. Receive a goal from a user

  2. Plan how to achieve it (break it into steps)

  3. Use tools to interact with the real world (search, APIs, databases, code execution)

  4. Observe results and adjust its approach

  5. Complete the task with minimal human intervention

The key difference from a chatbot: a chatbot generates text in response to prompts. An agent takes actions – it reads files, queries databases, sends emails, writes code, and makes decisions about what to do next.

```mermaid
graph LR
    subgraph Chatbot
        A1["User Prompt"] --> A2["LLM"] --> A3["Text Response"]
    end
    subgraph Agent
        B1["User Goal"] --> B2["LLM (Reasoning)"]
        B2 --> B3["Tool Call"]
        B3 --> B4["Observation"]
        B4 --> B2
        B2 --> B5["Final Result"]
    end
    style A2 fill:#e3f2fd,stroke:#1976D2
    style B2 fill:#fff3e0,stroke:#FF9800
    style B3 fill:#e8f5e9,stroke:#4CAF50
```

Agent Architecture Patterns

ReAct (Reason + Act)

The most widely used agent pattern. The LLM alternates between reasoning about what to do next and acting by calling tools.

Loop:
  1. THINK: "I need to find the customer's order status"
  2. ACT: Call order_lookup(customer_id="12345")
  3. OBSERVE: Order #789 shipped on March 1, tracking: XYZ
  4. THINK: "I have the info, now I can respond"
  5. RESPOND: "Your order shipped on March 1..."

Strengths: Transparent reasoning, easy to debug, works with any tool set.
Weaknesses: Can get stuck in loops; each step costs tokens.
Used by: LangChain agents, n8n AI Agent node, most production agents.
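The loop above can be sketched in a few lines. This is a minimal illustration, not any framework's real API: `fake_llm` is a scripted stand-in for the model, and `order_lookup` is a hypothetical tool.

```python
def order_lookup(customer_id: str) -> str:
    """Stand-in tool: a real version would query an order system."""
    return "Order #789 shipped on March 1, tracking: XYZ"

TOOLS = {"order_lookup": order_lookup}

def fake_llm(history: list) -> dict:
    """Scripted stand-in for the model's think/act/respond decision."""
    if not any(line.startswith("OBSERVE:") for line in history):
        # THINK: no observation yet, so ACT by requesting a tool call
        return {"action": "order_lookup", "args": {"customer_id": "12345"}}
    # THINK: the needed info has arrived, so RESPOND
    return {"respond": "Your order shipped on March 1, tracking XYZ."}

def react_loop(goal: str, max_steps: int = 5) -> str:
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):            # cap steps so the agent cannot loop forever
        decision = fake_llm(history)
        if "respond" in decision:         # final answer reached
            return decision["respond"]
        result = TOOLS[decision["action"]](**decision["args"])  # ACT
        history.append(f"OBSERVE: {result}")                    # OBSERVE feeds the next THINK
    return "Stopped: step budget exhausted."

print(react_loop("Find the customer's order status"))
```

The `max_steps` cap is the standard guard against the "stuck in loops" weakness noted above.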

Plan-and-Execute

The agent creates a full plan upfront, then executes each step sequentially. This works better for complex, multi-step tasks where order matters.

1. PLAN:
   a. Search knowledge base for product specs
   b. Compare with competitor data
   c. Generate comparison table
   d. Draft recommendation email

2. EXECUTE: Steps a -> b -> c -> d in order

3. VERIFY: Check output against original goal

Strengths: More structured, better for complex workflows.
Weaknesses: Less adaptive to unexpected results mid-execution.
Used by: CrewAI, AutoGPT-style systems.
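A rough sketch of the plan/execute/verify phases, with a hard-coded planner and dummy steps standing in for LLM calls and real tools:

```python
def plan(goal: str) -> list:
    """Stand-in planner: a real system would ask the LLM for this list."""
    return ["search_specs", "compare_competitors", "build_table", "draft_email"]

def run_step(name: str, state: dict) -> dict:
    """Stand-in executor: each step reads shared state and adds its output."""
    state[name] = f"output of {name}"
    return state

def plan_and_execute(goal: str) -> dict:
    steps = plan(goal)                    # 1. PLAN: full step list upfront
    state = {"goal": goal}
    for step in steps:                    # 2. EXECUTE: steps run in planned order
        state = run_step(step, state)
    missing = [s for s in steps if s not in state]
    assert not missing                    # 3. VERIFY: output covers the whole plan
    return state

result = plan_and_execute("Recommend a product to a prospect")
```

Because the plan is fixed before execution starts, a surprising result in step 2 cannot reshape steps 3 and 4, which is exactly the adaptivity trade-off noted above.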

Function Calling

The simplest agent pattern. The LLM decides which function to call and with what parameters; the system executes the function and returns the result.

User: "What's the weather in Paris?"
LLM decides: call get_weather(city="Paris")
System executes: -> returns {temp: 18, condition: "cloudy"}
LLM responds: "It's 18°C and cloudy in Paris."

Strengths: Simple, predictable, easy to implement.
Weaknesses: Limited to single-step tool use.
Used by: OpenAI function calling, Claude tool use, Gemini function calling.
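The round trip above fits in a few lines. The tool-call shape here is a simplified sketch of what these APIs emit (a name plus JSON-encoded arguments), and `get_weather` is a hypothetical stub:

```python
import json

def get_weather(city: str) -> dict:
    """Stand-in tool; a real version would call a weather API."""
    return {"temp": 18, "condition": "cloudy"}

TOOLS = {"get_weather": get_weather}

# What the model emits: a tool name plus JSON-encoded arguments.
tool_call = {"name": "get_weather", "arguments": '{"city": "Paris"}'}

# What your system does: look up the function, parse the arguments,
# execute, and hand the result back to the model for the final reply.
fn = TOOLS[tool_call["name"]]
result = fn(**json.loads(tool_call["arguments"]))
print(result)  # {'temp': 18, 'condition': 'cloudy'}
```

Note that the model never executes anything itself; it only produces the `tool_call` structure, and your code decides whether and how to run it.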


Multi-Agent Systems

When tasks are too complex for a single agent, multiple specialized agents can collaborate:

```mermaid
graph TD
    U["User Request"] --> O["Orchestrator Agent"]
    O --> R["Research Agent<br/>(web search, documents)"]
    O --> A["Analysis Agent<br/>(data processing, math)"]
    O --> W["Writing Agent<br/>(content generation)"]
    R --> O
    A --> O
    W --> O
    O --> F["Final Output"]

    style O fill:#fff3e0,stroke:#FF9800
    style R fill:#e3f2fd,stroke:#1976D2
    style A fill:#e8f5e9,stroke:#4CAF50
    style W fill:#f3e5f5,stroke:#9C27B0
```

How Multi-Agent Systems Work

Each agent has a specific role, set of tools, and system prompt. An orchestrator agent receives the user’s goal, decides which specialist agents to invoke, coordinates their work, and synthesizes the final output.

Example: A market research request might trigger:

  1. Research Agent searches the web and retrieves competitor data

  2. Analysis Agent processes the data and identifies trends

  3. Writing Agent composes the final report

The orchestrator manages handoffs, resolves conflicts, and ensures coherence.
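The market-research flow above can be sketched with plain functions standing in for the specialist agents. In a real system each specialist would be its own LLM with its own role, tools, and system prompt; the names here are illustrative only:

```python
def research_agent(topic: str) -> str:
    """Specialist stub: would search the web and retrieve competitor data."""
    return f"raw findings on {topic}"

def analysis_agent(findings: str) -> str:
    """Specialist stub: would process the data and identify trends."""
    return f"trends extracted from [{findings}]"

def writing_agent(analysis: str) -> str:
    """Specialist stub: would compose the final report."""
    return f"Report: {analysis}"

def orchestrator(request: str) -> str:
    findings = research_agent(request)   # delegate research
    analysis = analysis_agent(findings)  # hand off to analysis
    return writing_agent(analysis)       # synthesize the final output

report = orchestrator("EV charging market")
```

The orchestrator owns the handoffs: each specialist only sees the output of the previous one, never the whole conversation.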

When to Use Multi-Agent vs. Single-Agent

| Factor | Single Agent | Multi-Agent |
| --- | --- | --- |
| Task complexity | Simple to moderate | Complex, multi-domain |
| Tool count | < 10 tools | 10+ tools across domains |
| Required expertise | One domain | Multiple specialized domains |
| Latency tolerance | Low | Higher (more LLM calls) |
| Cost sensitivity | Budget-conscious | Can afford more tokens |
| Debugging ease | Simpler | More complex |

Rule of thumb: Start with a single agent. Only split into multi-agent when a single agent consistently fails because the task requires too many different skills or the tool set is too large for one system prompt to manage effectively.


The Tool-Use Paradigm

Tools are what transform an LLM from a text generator into an agent that can interact with the world.

1. Define Available Tools

You define a set of tools the agent can use, each with a name, description, and parameter schema:

```json
{
  "name": "search_knowledge_base",
  "description": "Search the company knowledge base for relevant documents",
  "parameters": {
    "query": "string - the search query",
    "max_results": "integer - number of results (default: 5)"
  }
}
```

2. LLM Decides Which Tool to Call

Based on the user’s request and the tool descriptions, the LLM decides which tool (if any) to invoke and generates the parameters.
3. System Executes the Tool

Your application (n8n, LangChain, custom code) receives the LLM’s tool call, validates it, and executes the actual function – making the API call, running the database query, etc.
4. Result Returns to LLM

The tool’s output is sent back to the LLM as an “observation.” The LLM can then generate a final response or decide to call another tool.
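The four steps can be traced end to end in a few lines. The tool mirrors the `search_knowledge_base` definition shown earlier, with a stand-in implementation, and `llm_decide` is a scripted stand-in for the model:

```python
def search_knowledge_base(query: str, max_results: int = 5) -> list:
    """Step 1: one of the defined tools (stand-in implementation)."""
    return [f"doc about {query}"][:max_results]

TOOLS = {"search_knowledge_base": search_knowledge_base}

def llm_decide(user_request: str) -> dict:
    """Step 2: the model picks a tool and fills in parameters (scripted here)."""
    return {"name": "search_knowledge_base",
            "parameters": {"query": user_request, "max_results": 3}}

call = llm_decide("vacation policy")
observation = TOOLS[call["name"]](**call["parameters"])  # Step 3: system executes
final_answer = f"Found: {observation[0]}"                # Step 4: observation goes back to the LLM
```

In production, step 3 is also where you validate the parameters against the schema and enforce permissions before anything runs.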

Common Agent Tools

| Tool Category | Examples | What They Enable |
| --- | --- | --- |
| Search | Web search, knowledge base search | Finding current information |
| Data | SQL queries, API calls, file reading | Accessing structured data |
| Action | Send email, create ticket, update CRM | Taking real-world actions |
| Code | Python execution, shell commands | Computation, data processing |
| Memory | Vector DB read/write | Persistent knowledge across sessions |


Agentic AI in Production

Customer Support

Agent receives: Customer complaint about a billing issue

Agent does:

  1. Looks up customer account (CRM tool)

  2. Retrieves billing history (database tool)

  3. Identifies the discrepancy

  4. Drafts a resolution email

  5. Creates a support ticket with resolution details

Result: Issue resolved without human intervention for 60-70% of common cases.
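The five steps above can be expressed as a fixed tool chain. Every function here is a hypothetical stub for the real CRM, billing database, and ticketing integrations:

```python
def crm_lookup(customer_id: str) -> dict:
    return {"name": "Acme Co", "plan": "Pro"}

def billing_history(customer_id: str) -> list:
    return [{"month": "Feb", "charged": 120, "expected": 99}]

def find_discrepancies(history: list) -> list:
    return [row for row in history if row["charged"] != row["expected"]]

def draft_email(account: dict, issues: list) -> str:
    return f"Hi {account['name']}, we found {len(issues)} billing issue(s) and are correcting them."

def create_ticket(issues: list) -> dict:
    return {"ticket_id": 1001, "issues": issues}

def handle_complaint(customer_id: str) -> dict:
    account = crm_lookup(customer_id)       # 1. look up customer (CRM tool)
    history = billing_history(customer_id)  # 2. retrieve billing history (database tool)
    issues = find_discrepancies(history)    # 3. identify the discrepancy
    email = draft_email(account, issues)    # 4. draft a resolution email
    ticket = create_ticket(issues)          # 5. create a support ticket
    return {"email": email, "ticket": ticket}

outcome = handle_complaint("12345")
```

In a real agent the LLM chooses and orders these calls itself; for the remaining 30-40% of cases, the chain would end by escalating to a human instead of sending the email.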

Data Analysis

Agent receives: “Analyze last quarter’s sales by region”

Agent does:

  1. Queries the sales database (SQL tool)

  2. Calculates regional breakdowns (code execution)

  3. Generates charts (visualization tool)

  4. Writes an executive summary

Result: Analysis that would take an analyst hours, completed in minutes.

Incident Response

Agent receives: Alert about server performance degradation

Agent does:

  1. Checks server metrics (monitoring API)

  2. Reviews recent deployments (Git tool)

  3. Identifies the problematic deployment

  4. Rolls back the change (deployment tool)

  5. Verifies recovery

  6. Sends notification to the team (Slack tool)

Result: Automated incident response with documented reasoning.


Building Your First Agent

The fastest path to a working agent in 2025 is through n8n’s AI Agent node or LangChain/LangGraph in Python. Both support ReAct-style agents with tool use out of the box.

Agent Complexity Ladder

| Level | What It Does | Tools Needed | Example |
| --- | --- | --- | --- |
| Level 0 | Chat only, no tools | None | Basic chatbot |
| Level 1 | Single tool use | 1-2 tools | FAQ bot with knowledge base search |
| Level 2 | Multi-step reasoning | 3-5 tools | Support agent with CRM + KB + email |
| Level 3 | Autonomous workflow | 5-10 tools | Research agent that searches, analyzes, writes |
| Level 4 | Multi-agent system | 10+ tools, multiple agents | Full business process automation |


Key Takeaways