Feedback-driven AI agents
An agent loop is the control pattern that lets an AI agent plan, act, observe, reflect, and improve until a real task is finished.
Definition
An agent loop is a repeated decision cycle for AI systems. Instead of producing one final answer, the agent keeps a compact state, chooses a next action, calls tools when useful, reads the result, and decides whether to continue or stop.
The pattern is useful when the work needs multiple steps: research, coding, data cleanup, customer operations, testing, or any task where the next move depends on what just happened.
Core workflow
Plan: Translate the goal into the next small decision, with constraints and a stopping condition.
Act: Send a message, call a tool, query data, write code, or make another bounded change.
Observe: Read the output, error, page state, test result, or human response produced by the action.
Reflect: Compare the observation against the goal and decide whether the loop is still on track.
Improve: Update state, narrow the next action, recover from failure, or finish with evidence.
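As a sketch, the five steps above can be written as a minimal loop. Every name here (`choose_action`, `run_tool`, `goal_satisfied`, and so on) is an illustrative placeholder standing in for a model call, tool call, or evaluation step, not a real API:

```python
# Minimal agent-loop skeleton: plan, act, observe, reflect, improve.

def agent_loop(goal, max_steps=8):
    state = {"goal": goal, "history": []}               # compact state
    for _ in range(max_steps):                          # hard step budget
        action = choose_action(state)                   # plan: next small decision
        observation = run_tool(action)                  # act: bounded change
        state["history"].append((action, observation))  # observe: record the result
        if goal_satisfied(state):                       # reflect: on track or done?
            break                                       # improve/stop
    return finish_with_evidence(state)

# Toy implementations so the sketch runs: count up toward a numeric target.
def choose_action(state):
    return {"op": "increment"}

def run_tool(action):
    return 1

def goal_satisfied(state):
    return sum(obs for _, obs in state["history"]) >= state["goal"]

def finish_with_evidence(state):
    return {"steps": len(state["history"]),
            "total": sum(obs for _, obs in state["history"])}

result = agent_loop(goal=3)
```

The loop stops either when the goal check passes or when the step budget runs out, so a misbehaving tool can never run forever.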
Patterns
Tool loop: Best for search, code, databases, and file operations where the model needs fresh observations before continuing.
Approval loop: Best for payments, destructive operations, publishing, or any workflow where a person must confirm intent.
Review loop: Best for drafts, generated plans, test output, and quality gates that improve with explicit review.
Recovery loop: Best when failures are expected and the agent needs retries, narrower actions, or a fallback path.
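The failure-handling pattern above can be sketched as a bounded retry wrapper with a fallback path. This is a hypothetical example: `flaky_search` stands in for any tool call that is expected to fail sometimes.

```python
def with_retries(action, fallback, max_retries=3):
    """Try an action a bounded number of times, then take a fallback path."""
    last_error = None
    for attempt in range(max_retries):
        try:
            return action(attempt)
        except RuntimeError as err:   # the class of tool failure we expect
            last_error = err          # record it and retry
    return fallback(last_error)       # narrower or safer alternative

# Toy tool that times out twice, then succeeds on the third call.
calls = {"n": 0}

def flaky_search(attempt):
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("timeout")
    return "result"

outcome = with_retries(flaky_search, fallback=lambda err: f"fallback: {err}")
```

The key property is the hard retry ceiling: the agent either recovers within the budget or hands off to a cheaper, safer action instead of looping indefinitely.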
Use cases
The agent loop pattern is strongest when the system can inspect reality after each step and improve the next move.
Checklist
Goal: State what "done" means before the loop starts.
State: Keep only the context needed for the next decision.
Tools: Give every action a clear input, output, and failure mode.
Limits: Use budgets, retries, and human approval where risk rises.
Evidence: End with tests, citations, logs, or another verifiable result.
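For the Limits item, one common shape is an approval gate on high-impact actions. This is illustrative only: the `HIGH_IMPACT` set and the `approve` callback are assumptions, standing in for whatever confirmation channel the system actually uses.

```python
# Actions that must never run without a human confirming intent.
HIGH_IMPACT = {"delete", "publish", "payment"}

def execute(action, approve):
    """Run an action, pausing for human approval when the risk is high."""
    if action["op"] in HIGH_IMPACT:
        if not approve(action):          # human did not confirm intent
            return {"status": "blocked", "op": action["op"]}
    return {"status": "done", "op": action["op"]}

# Stubbed approver that rejects everything, for demonstration.
deny_all = lambda action: False

blocked = execute({"op": "publish"}, approve=deny_all)
allowed = execute({"op": "search"}, approve=deny_all)
```

Low-risk actions pass straight through; high-risk ones are blocked unless a person explicitly approves, which keeps the loop autonomous where it is safe and supervised where it is not.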
FAQ
Is an agent loop just a chatbot? No. A chatbot usually answers a single message. An agent loop keeps working across steps, using observations to choose the next action.
Does every task need an agent loop? No. A direct model call is often better for simple classification, extraction, or one-shot writing. Use a loop when feedback changes the next step.
What makes an agent loop reliable? Clear state, narrow tool interfaces, observable outputs, retry limits, tests, and human approval for high-impact actions.