Agent Architectures: ReAct, Planning, and Reasoning Patterns

Overview

A master architect doesn't just follow blueprints—they understand the structural principles that make buildings stable, functional, and beautiful. Similarly, AI agent architectures are not just code patterns but fundamental approaches to organizing perception, reasoning, and action that determine how intelligently an agent can behave.

In this lesson, we'll explore the key architectural patterns that have emerged in AI agents, from the elegant simplicity of ReAct (Reasoning + Acting) to sophisticated multi-layer systems that can handle complex, multi-step reasoning and planning.

Learning Objectives

After completing this lesson, you will be able to:

  • Understand and implement the ReAct (Reasoning + Acting) pattern
  • Design Plan-and-Execute architectures for complex tasks
  • Compare different reasoning patterns and choose appropriate ones for specific use cases
  • Implement reflection and self-correction mechanisms in agents
  • Build agents that can decompose complex goals into manageable subtasks

The ReAct Architecture: Thinking and Acting in Harmony

The Power of Interleaving Thought and Action

Traditional approaches either plan everything before acting (deliberative) or act without deliberating at all (reactive). ReAct takes a different path by interleaving reasoning and acting—thinking a bit, acting a bit, observing the results, then thinking some more.

Analogy: Think of a skilled detective solving a case. They don't plan every step in advance, nor do they act randomly. Instead, they:

  1. Think: "Based on the evidence, the suspect might be at the coffee shop"
  2. Act: Go to the coffee shop and ask questions
  3. Observe: "The barista says they haven't seen the suspect, but mentions they often go to the library"
  4. Think: "Let me check the library next"
  5. Act: Head to the library...

This iterative process allows for adaptive problem-solving that pure planning or pure reaction cannot achieve.

Architecture Pattern Comparison

Implementing ReAct: The Basic Pattern

```python
import re
from typing import Callable, Dict


class ReActAgent:
    def __init__(self, llm, tools: Dict[str, Callable]):
        self.llm = llm
        self.tools = tools
        self.max_iterations = 10

    def solve(self, task: str) -> str:
        """Main ReAct loop: alternate Thought, Action, and Observation."""
        context = f"Task: {task}\n\n"
        for _ in range(self.max_iterations):
            # THOUGHT: let the LLM reason about what to do next
            thought_prompt = f"""{context}
Think step by step about what you should do next to complete this task.
If you have enough information to provide a final answer, say "FINAL ANSWER: ..."
Otherwise, choose an action from: {list(self.tools.keys())}

Thought:"""
            thought = self.llm.complete(thought_prompt)
            context += f"Thought: {thought}\n"

            # Check whether we already have a final answer
            if "FINAL ANSWER:" in thought:
                return thought.split("FINAL ANSWER:")[1].strip()

            # ACTION: ask the LLM which tool to invoke
            action_prompt = f"""{context}
Based on your thought, what specific action should you take?
Format: ACTION: tool_name(parameter)

Action:"""
            action = self.llm.complete(action_prompt)
            context += f"Action: {action}\n"

            # OBSERVATION: execute the action and record the result
            try:
                tool_name, parameter = self.parse_action(action)
                result = self.tools[tool_name](parameter)
                context += f"Observation: {result}\n\n"
            except Exception as e:
                context += f"Observation: Error executing action: {e}\n\n"

        return "Stopped: reached the iteration limit without a final answer."

    def parse_action(self, action: str) -> tuple:
        """Parse "ACTION: tool_name(parameter)" into (tool_name, parameter)."""
        match = re.search(r"(\w+)\((.*)\)", action)
        if not match:
            raise ValueError(f"Could not parse action: {action}")
        return match.group(1), match.group(2).strip("'\" ")
```

Usage Example

```python
# `llm` is any client exposing a .complete(prompt) -> str method
tools = {
    "search_web": lambda query: f"Found results for: {query}",
    "calculate": lambda expr: f"Result: {eval(expr)}",  # demo only—never eval untrusted input
    "write_file": lambda content: f"File written with {len(content)} characters",
}

agent = ReActAgent(llm, tools)
result = agent.solve("Find the current price of Bitcoin and calculate 10% of it")
print(result)
```

ReAct Pattern Variations

| Variation | Key Feature | Best Use Case | Complexity |
|---|---|---|---|
| Basic ReAct | Simple reasoning-action cycles | Well-defined tasks | Low |
| Chain-of-Thought ReAct | Extended reasoning steps | Complex problem solving | Medium |
| Multi-step ReAct | Long action sequences | Multi-stage workflows | High |
| Parallel ReAct | Concurrent reasoning paths | Time-sensitive decisions | Very High |

Enhanced ReAct Patterns:

  • ReAct-SC (Self-Correction): Adds self-reflection steps where the agent evaluates its own reasoning and actions
  • ReAct-Memory: Incorporates long-term memory to remember useful patterns from previous tasks
  • Multi-Modal ReAct: Extends ReAct to handle images, audio, and other modalities beyond text
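One way to sketch the ReAct-SC idea: after each action-observation pair, ask the model to critique the step and substitute a revised thought when the critique rejects it. The prompt wording and the stub model below are illustrative assumptions, not a standard API:

```python
def self_correct(llm, context: str, thought: str, action: str, observation: str) -> str:
    """Ask the model to critique the last ReAct step.

    Returns the original thought if the critique approves it,
    otherwise returns the model's suggested replacement thought.
    """
    prompt = (
        f"{context}\n"
        f"Thought: {thought}\nAction: {action}\nObservation: {observation}\n"
        "Critique the step above. If it moved the task forward, reply OK. "
        "Otherwise suggest a better next thought.\nCritique:"
    )
    critique = llm(prompt).strip()
    return thought if critique.upper().startswith("OK") else critique

# Stub model for illustration: approves any step whose observation is not an error.
stub_llm = lambda p: "OK" if "Error" not in p else "Retry with a different tool."

print(self_correct(stub_llm, "Task: demo", "search the web", "search_web('x')",
                   "Found results for: x"))  # → search the web
```

In a full agent, the returned thought replaces the rejected one in the context before the next action is chosen.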

Plan-and-Execute: Deliberative Architecture

When You Need a Master Plan

Some tasks require comprehensive planning before execution—like organizing a conference or debugging a complex software system. Plan-and-Execute architectures first create a detailed plan, then execute it step by step.

Analogy: Building a house requires careful planning—you can't just start hammering and hope for the best. You need architectural drawings, permits, material lists, and a construction schedule before breaking ground.

Planning Process Visualization

Plan-and-Execute Implementation

```python
from typing import List


class PlanAndExecuteAgent:
    def __init__(self, llm, tools):
        self.llm = llm
        self.tools = tools

    def solve(self, task: str) -> str:
        # Phase 1: Planning
        plan = self.create_plan(task)

        # Phase 2: Execution, replanning when a step fails
        results: List[str] = []
        while plan:
            step = plan.pop(0)
            result = self.execute_step(step, results)
            results.append(result)
            if self.should_replan(step, result):
                plan = self.replan(task, results, plan)

        return self.synthesize_results(results)

    def create_plan(self, task: str) -> List[str]:
        """Ask the LLM for a numbered, sequential plan."""
        prompt = f"""Break down this task into specific, actionable steps:
Task: {task}

Provide a numbered list of steps that can be executed sequentially.
Each step should be clear, specific, and achievable.
Consider dependencies between steps.

Plan:"""
        response = self.llm.complete(prompt)
        # Parse the response into individual steps (keep only numbered lines)
        return [line.strip() for line in response.split('\n')
                if line.strip() and line.strip()[0].isdigit()]

    def execute_step(self, step: str, previous_results: List[str]) -> str:
        """Execute a single step of the plan."""
        context = "\n".join(previous_results) if previous_results else ""
        prompt = f"""Previous context: {context}

Execute this step: {step}

Based on the available tools {list(self.tools.keys())}, determine what action to take and execute it.

Result:"""
        return self.llm.complete(prompt)

    def should_replan(self, step: str, result: str) -> bool:
        """Determine whether replanning is needed based on a step's result."""
        prompt = f"""Step: {step}
Result: {result}

Did this step succeed? Answer with just "SUCCESS" or "FAILURE"."""
        response = self.llm.complete(prompt).strip().upper()
        return "FAILURE" in response

    def replan(self, original_task: str, completed_results: List[str],
               remaining_plan: List[str]) -> List[str]:
        """Create a new plan given partial completion."""
        context = "\n".join(completed_results)
        remaining = "\n".join(remaining_plan)
        prompt = f"""Original task: {original_task}
Completed so far: {context}
Remaining plan: {remaining}

Given what has been completed and any failures, create a new plan for the remaining work.

New Plan:"""
        response = self.llm.complete(prompt)
        return [line.strip() for line in response.split('\n')
                if line.strip() and line.strip()[0].isdigit()]

    def synthesize_results(self, results: List[str]) -> str:
        """Combine all step results into a final answer."""
        combined = "\n".join(results)
        prompt = f"""Synthesize these step results into a final answer: {combined}

Final Answer:"""
        return self.llm.complete(prompt)
```

Usage Example

```python
agent = PlanAndExecuteAgent(llm, tools)
result = agent.solve("Research and write a summary of renewable energy trends")
print(result)
```

Hybrid Architectures: Multi-Layer Intelligence

Combining the Best of All Worlds

Hybrid architectures combine multiple approaches in a layered system where different layers handle different types of reasoning and response.

Analogy: Think of a skilled emergency room doctor who operates on multiple levels:

  • Reflexive layer: Immediate life-saving responses (check airways, stop bleeding)
  • Diagnostic layer: Systematic analysis and planning (run tests, analyze symptoms)
  • Strategic layer: Long-term treatment planning (recovery plan, follow-up care)

Multi-Layer Architecture


Agent Architecture Patterns

Different approaches to organizing agent intelligence

Reactive Agents
  • Simple condition-action rules
  • Fast response times
  • No internal state
  • Example: Thermostat, Alarm system

Deliberative Agents
  • Plan before acting
  • Complex reasoning
  • Internal world model
  • Example: Chess AI, Route planner

ReAct Agents
  • Interleaved reasoning and acting
  • Adaptive problem solving
  • Tool integration
  • Example: LLM-powered assistants

Hybrid Agents
  • Multiple reasoning layers
  • Best of all approaches
  • Complex coordination
  • Example: Autonomous vehicles
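For contrast with the LLM-driven patterns, a reactive agent can be nothing more than ordered condition-action rules with no model calls and no internal state. A minimal sketch (the thermostat thresholds are illustrative values):

```python
def make_reactive_agent(rules):
    """Build a stateless agent from an ordered list of (condition, action) rules.

    The first rule whose condition matches the percept fires; otherwise idle.
    """
    def act(percept):
        for condition, action in rules:
            if condition(percept):
                return action
        return "idle"
    return act

# A thermostat as condition-action rules over a temperature reading (°C)
thermostat = make_reactive_agent([
    (lambda temp: temp < 18, "heat_on"),
    (lambda temp: temp > 24, "cool_on"),
])

print(thermostat(15))  # heat_on
print(thermostat(21))  # idle
```

This is why reactive agents are fast: each decision is a constant-time rule scan, with no planning or text generation in the loop.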

Hybrid Architecture Implementation

```python
import re
from typing import Dict


class HybridAgent:
    def __init__(self, llm, tools):
        self.llm = llm
        self.tools = tools
        self.memory = {}

        # Layer configurations (target response-time budgets)
        self.reactive_threshold = 0.1  # seconds
        self.tactical_threshold = 5.0  # seconds

    def solve(self, task: str, urgency: str = "normal") -> str:
        """Route task to the appropriate layer based on urgency and complexity."""
        # Analyze task complexity and urgency
        analysis = self.analyze_task(task, urgency)

        # Route on the ratings (threshold values here are illustrative)
        if analysis["urgency"] >= 4 and analysis["complexity"] <= 2:
            return self.reactive_layer(task)
        if analysis["complexity"] >= 4 or analysis["reasoning"] >= 4:
            return self.strategic_layer(task)
        return self.tactical_layer(task)

    def analyze_task(self, task: str, urgency: str) -> Dict[str, int]:
        prompt = f"""Analyze this task:
Task: {task}
Stated urgency: {urgency}

Rate the following (1-5 scale):
- Complexity (1=simple, 5=very complex)
- Urgency (1=can wait, 5=immediate)
- Required reasoning (1=simple lookup, 5=multi-step analysis)

Format: complexity:X, urgency:Y, reasoning:Z"""
        response = self.llm.complete(prompt)

        # Simple parsing logic; fall back to the midpoint when a rating is missing
        complexity_match = re.search(r'complexity:(\d)', response)
        urgency_match = re.search(r'urgency:(\d)', response)
        reasoning_match = re.search(r'reasoning:(\d)', response)
        return {
            "complexity": int(complexity_match.group(1)) if complexity_match else 3,
            "urgency": int(urgency_match.group(1)) if urgency_match else 3,
            "reasoning": int(reasoning_match.group(1)) if reasoning_match else 3,
        }

    def reactive_layer(self, task: str) -> str:
        """Handle urgent, simple tasks with a single direct response."""
        prompt = f"""Provide an immediate response to: {task}
Use simple, direct action. Available tools: {list(self.tools.keys())}
Response:"""
        return self.llm.complete(prompt)

    def tactical_layer(self, task: str) -> str:
        """Handle multi-step execution."""
        # Use the ReAct pattern for tactical decisions
        react_agent = ReActAgent(self.llm, self.tools)
        return react_agent.solve(task)

    def strategic_layer(self, task: str) -> str:
        """Handle complex, long-term planning."""
        # Delegate to the deliberative Plan-and-Execute pattern
        planner = PlanAndExecuteAgent(self.llm, self.tools)
        return planner.solve(task)
```

Usage Examples

```python
hybrid_agent = HybridAgent(llm, tools)

# Emergency response (reactive layer)
result1 = hybrid_agent.solve("Stop the server immediately", urgency="critical")

# Research task (tactical layer)
result2 = hybrid_agent.solve("Find recent papers on transformers", urgency="normal")

# Complex project (strategic layer)
result3 = hybrid_agent.solve("Design and implement a complete ML pipeline", urgency="low")
```

Advanced Reasoning Patterns

Tree of Thoughts

Tree of Thoughts extends chain-of-thought reasoning by exploring multiple reasoning paths simultaneously, like a chess player considering multiple moves ahead.
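The search skeleton behind this idea can be shown without a model at all: expand each partial "thought", score the candidates, and keep only the best few paths at every depth (a beam search). The toy `expand` and `score` functions below stand in for LLM calls and are purely illustrative:

```python
def tree_of_thoughts(expand, score, root, beam_width=2, depth=3):
    """Explore multiple reasoning paths at once.

    At each depth, generate children of every surviving path and keep
    only the `beam_width` highest-scoring candidates.
    """
    frontier = [root]
    for _ in range(depth):
        candidates = [child for state in frontier for child in expand(state)]
        if not candidates:
            break
        frontier = sorted(candidates, key=score, reverse=True)[:beam_width]
    return max(frontier, key=score)

# Toy problem: build a two-digit string whose digit sum is as close to 15 as possible.
expand = lambda s: [s + d for d in "123456789"] if len(s) < 2 else []
score = lambda s: -abs(15 - sum(int(c) for c in s or "0"))

print(tree_of_thoughts(expand, score, ""))  # → 96
```

In an LLM-based version, `expand` would sample several candidate next thoughts and `score` would ask the model to rate each partial path, exactly as a chess player weighs candidate lines before committing.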

Self-Correction and Reflection
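Reflection wraps any of these architectures in a critique-and-revise loop: the agent drafts an output, asks the model to critique it, and revises until the critique passes or a round limit is hit. A minimal sketch with a stubbed-in model (the prompt wording and stub behavior are illustrative assumptions):

```python
def reflect_and_revise(llm, task: str, max_rounds: int = 2) -> str:
    """Draft an answer, critique it, and revise until the critique approves."""
    draft = llm(f"Task: {task}\nDraft an answer:")
    for _ in range(max_rounds):
        critique = llm(f"Task: {task}\nDraft: {draft}\n"
                       "Critique the draft. Reply OK if acceptable:")
        if critique.strip().upper().startswith("OK"):
            break
        draft = llm(f"Task: {task}\nDraft: {draft}\n"
                    f"Critique: {critique}\nRevised answer:")
    return draft

# Stub model: rejects drafts that lack the word "renewable", then accepts.
def stub_llm(prompt):
    if prompt.endswith("Draft an answer:"):
        return "Energy trends are mixed."
    if "Critique the draft" in prompt:
        return "OK" if "renewable" in prompt else "Mention renewable sources."
    return "Renewable energy adoption, led by renewable solar and wind, keeps rising."

print(reflect_and_revise(stub_llm, "Summarize energy trends"))
```

The same loop slots into ReAct (critiquing each step, as in ReAct-SC) or Plan-and-Execute (critiquing the plan before execution begins).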

Choosing the Right Architecture

Architecture Selection Guide
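A selection guide like this can be condensed into a small routing heuristic over the same 1-5 ratings the HybridAgent uses. The threshold values here are assumptions for illustration, not fixed rules:

```python
def choose_architecture(complexity: int, urgency: int, variability: int) -> str:
    """Pick an architecture from 1-5 ratings of the task.

    Rough heuristic: urgent-and-simple -> reactive; highly variable
    workloads -> hybrid; deep multi-step tasks -> plan-and-execute;
    everything else -> ReAct as the adaptable default.
    """
    if urgency >= 4 and complexity <= 2:
        return "reactive"
    if variability >= 4:
        return "hybrid"
    if complexity >= 4:
        return "plan-and-execute"
    return "react"

print(choose_architecture(complexity=1, urgency=5, variability=1))  # reactive
print(choose_architecture(complexity=5, urgency=2, variability=2))  # plan-and-execute
```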

Performance Comparison

| Architecture | Speed | Quality | Adaptability | Complexity | Best For |
|---|---|---|---|---|---|
| Reactive | ⭐⭐⭐⭐⭐ | ⭐ | ⭐ | ⭐ | Simple, fast responses |
| ReAct | ⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐ | General problem solving |
| Plan-Execute | ⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐ | ⭐⭐⭐ | Complex, structured tasks |
| Hybrid | ⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | Variable, production systems |

Summary and Next Steps

Key Architecture Principles

  1. Match Pattern to Problem: Reactive for speed, Plan-Execute for complexity, ReAct for adaptability
  2. Layer When Needed: Hybrid architectures handle diverse requirements
  3. Enable Self-Correction: All patterns benefit from reflection and revision
  4. Consider Trade-offs: Speed vs. quality vs. adaptability

Architecture Evolution Path

In our next lesson, we'll explore tool integration—the mechanisms that allow agents to extend their capabilities through external APIs, databases, and services. This is where agents truly become powerful by leveraging the vast ecosystem of available tools and services.

Practice Exercises

  1. Pattern Implementation: Implement each architecture pattern with a simple example
  2. Performance Testing: Compare response times and quality across patterns
  3. Hybrid Design: Design a hybrid system for a specific use case
  4. Self-Evaluation: Add reflection capabilities to any architecture

Additional Resources