Agentic Workflow Patterns

From Chaos to Pipeline: Choosing the Right Workflow for Every Task

Most AI tools default to maximum complexity. We learned that complexity must be earned, not assumed. Here's how HubAI uses three workflow patterns — prompt chaining, routing, and evaluator-optimizer — in a strict hierarchy.


The Problem We Found

When we first designed HubAI’s generation pipeline, we reached for the most powerful abstraction available: fully autonomous agents with planning, tool use, and self-directed execution.

It was a mistake.

The autonomous agents worked — sometimes. They would analyze requirements, decide on a database schema, generate frontend components, and validate the output. But their behavior was unpredictable. One run would produce a clean Apollo service. The next run, with identical input, would skip the validation step, reorder the generation phases, or spend 40% of its token budget on unnecessary re-analysis.

The root cause was a mismatch between the task and the abstraction. Fullstack code generation is not an open-ended research task. It is a well-defined pipeline with known phases, known inputs, and known outputs. Using fully autonomous agents for a structured pipeline is like using a search engine to read a book you already own.

We learned a principle that now governs every architectural decision in HubAI: complexity must be earned, not assumed. Anthropic’s own guidance on building effective agents echoes this: start with the simplest solution, and add complexity only when it demonstrably improves outcomes.

The Complexity Decision Framework

Before building any component, we now ask three questions:

  1. Can a single LLM call with retrieval solve this? If yes, stop. Do not add complexity.
  2. Can a predefined workflow (fixed code paths) solve this? If the task decomposes into known subtasks with predictable outputs, use a workflow. Workflows trade flexibility for consistency and predictability.
  3. Does this require model-driven decision-making with dynamic tool use? Only then use a full agent. Agents trade latency and cost for flexibility.

Most components in HubAI are workflows, not agents. The AI makes architectural decisions (which require reasoning). The pipeline handles execution (which requires consistency).

Three Patterns That Drive the Pipeline

Pattern 1: Prompt Chaining

The core of HubAI’s generation pipeline is a prompt chain — a fixed sequence of steps where each step consumes the previous step’s structured output.

Requirements Analysis
       ▼ (structured requirements doc)
Database Schema Design
       ▼ (validated schema)
Frontend Architecture
       ▼ (component tree + layouts)
QA Validation
       ▼ (validated artifacts)
Code Generation
       ▼ (production code)

Each transition point is a gate — a programmatic check that validates the intermediate result before the pipeline advances. If the Database Architect produces a schema with unresolved foreign keys, the gate catches it before the Frontend Architect ever sees it.

This is not a conversation. It is a compiler pipeline. Each phase has:

  • Known input schema — The exact structure it expects from the previous phase
  • Known output schema — The exact structure it must produce for the next phase
  • Validation gate — A programmatic check that must pass before proceeding

The tradeoff is explicit: prompt chaining adds latency (multiple LLM calls instead of one). But each call operates with maximum precision because it handles exactly one domain with exactly the context it needs.
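The chain-with-gates structure can be sketched in a few lines. This is a minimal illustration, not HubAI's actual code: `Phase`, `run_chain`, and `schema_gate` are hypothetical names, and `design_schema` stands in for a real LLM call.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Phase:
    name: str
    run: Callable   # payload dict -> output dict (an LLM call in practice)
    gate: Callable  # output dict -> bool (programmatic validation)

class GateError(Exception):
    pass

def run_chain(phases, payload):
    """Run phases in order; each gate must pass before the chain advances."""
    for phase in phases:
        payload = phase.run(payload)
        if not phase.gate(payload):
            raise GateError(f"gate failed after phase: {phase.name}")
    return payload

# Hypothetical gate for the schema phase: every foreign key must point
# at an entity that actually exists in the schema.
def schema_gate(schema):
    entities = {e["name"] for e in schema["entities"]}
    return all(fk["target"] in entities for fk in schema["foreign_keys"])

# Stand-in for the Database Schema Design LLM call.
def design_schema(requirements):
    return {
        "entities": [{"name": "User"}, {"name": "Post"}],
        "foreign_keys": [{"target": "User"}],
    }

chain = [Phase("Database Schema Design", design_schema, schema_gate)]
result = run_chain(chain, {"requirements": "a blog with users and posts"})
```

The key property is that a gate failure stops the chain immediately, so a bad schema never reaches the next phase.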

Pattern 2: Routing

Not every task requires the same treatment. A single-entity scaffold does not need the same pipeline depth as a multi-entity project with complex relationships.

HubAI uses routing to classify input and direct it to the appropriate pipeline:

  • Simple entity — Streamlined path with fewer agent calls
  • Complex relationships — Full pipeline with relationship analysis
  • Frontend-only — Skip database architecture, route directly to frontend agents
  • Backend-only — Skip frontend architecture, focus on API and data layer

The routing decision is made early — before any expensive LLM calls — based on structured input from the visual forms. This is not LLM-based classification; it is deterministic classification based on the project configuration.

Each route leads to specialized prompts, tools, and context tuned for that specific scenario. The Database Architect for a MongoDB project has different tools and examples than one for a SQL project. Routing ensures every agent operates in its optimal configuration.
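Because the decision is deterministic, the router is just a conditional over the project configuration. A sketch, with illustrative config keys (the real form fields differ):

```python
def route(config):
    """Classify a project from its form configuration alone.

    A plain conditional over structured input -- no LLM call involved.
    The keys ("needs_backend", "relationships", ...) are illustrative.
    """
    if not config.get("needs_backend", True):
        return "frontend-only"
    if not config.get("needs_frontend", True):
        return "backend-only"
    if config.get("relationships"):
        return "complex"
    return "simple"

assert route({"entities": ["User"]}) == "simple"
assert route({"relationships": [("User", "Post")]}) == "complex"
```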

Pattern 3: Evaluator-Optimizer

The QA Agent implements an evaluator-optimizer loop. It does not simply check the final output — it evaluates artifacts at each pipeline stage and provides structured feedback.

The loop works in two phases:

  1. Generate — A specialized agent produces an artifact (schema, component tree, configuration)
  2. Evaluate — The QA Agent scores the artifact against quality criteria (completeness, consistency, adherence to patterns, relationship integrity)

If evaluation fails, the pipeline can re-run the specific phase with the QA Agent’s feedback injected into the agent’s context. This is not a global retry — it is a targeted correction that preserves all the work from other phases.

The evaluation criteria are concrete and measurable:

  • Are all entity fields typed and validated?
  • Do all relationships reference existing entities?
  • Are naming conventions consistent across the schema?
  • Does the frontend component tree match the backend entity structure?
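The generate-evaluate loop with feedback injection can be sketched as follows. The criteria checked here cover only two of the bullets above, and every name (`evaluate_schema`, `generate_until_valid`) is hypothetical:

```python
def evaluate_schema(artifact):
    """Check concrete criteria; return (passed, feedback list)."""
    feedback = []
    entities = {e["name"] for e in artifact["entities"]}
    # Do all relationships reference existing entities?
    for fk in artifact["foreign_keys"]:
        if fk["target"] not in entities:
            feedback.append(f"relationship references unknown entity: {fk['target']}")
    # Are all entity fields typed?
    for e in artifact["entities"]:
        for field in e.get("fields", []):
            if "type" not in field:
                feedback.append(f"untyped field on {e['name']}: {field['name']}")
    return (not feedback, feedback)

def generate_until_valid(generate, evaluate, max_rounds=3):
    """Targeted correction: re-run only this phase, feedback in context."""
    feedback = []
    for _ in range(max_rounds):
        artifact = generate(feedback)  # feedback injected into the prompt
        passed, feedback = evaluate(artifact)
        if passed:
            return artifact
    raise RuntimeError("still failing after retries: " + "; ".join(feedback))
```

Because the loop wraps a single phase, a failed schema evaluation re-runs only schema design; validated artifacts from other phases are untouched.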

How the Patterns Compose

The power of HubAI’s architecture is not any single pattern — it is how they compose:

Routing (classify task complexity)
  ├─ Simple → Short prompt chain (2-3 phases)
  └─ Complex → Full prompt chain (5 phases)
                  └─ Each phase → Evaluator-optimizer loop
                       ├─ Pass → Next phase
                       └─ Fail → Re-run with feedback

Routing selects the right pipeline. Prompt chaining executes it. Evaluator-optimizer validates each step. Three patterns, cleanly composed, producing consistent results.
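The composition itself fits in one function: the route selects a phase list, and every phase runs inside its own evaluation loop. A self-contained sketch with illustrative phase names and stubbed-out callbacks:

```python
# Hypothetical phase lists per route; the real pipeline's phases differ.
PHASES = {
    "simple":  ["requirements", "codegen"],
    "complex": ["requirements", "schema", "frontend", "qa", "codegen"],
}

def run_pipeline(kind, run_phase, evaluate, max_retries=2):
    """Route -> prompt chain -> per-phase evaluator-optimizer loop.

    run_phase(name, artifacts, feedback) produces one artifact;
    evaluate(name, artifact) returns (passed, feedback).
    """
    artifacts = {}
    for phase in PHASES[kind]:
        feedback = []
        for _ in range(max_retries + 1):
            artifact = run_phase(phase, artifacts, feedback)
            passed, feedback = evaluate(phase, artifact)
            if passed:
                break
        else:
            raise RuntimeError(f"phase {phase!r} failed after retries")
        artifacts[phase] = artifact
    return artifacts
```

Each artifact is keyed by phase name, so later phases consume earlier outputs through `artifacts` exactly as the chain diagram describes.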

The Key Insight

The instinct when building AI systems is to reach for maximum abstraction. Autonomous agents. Dynamic planning. Flexible tool use. It feels like you are building something powerful.

But power without discipline is chaos. A fully autonomous agent is powerful — and unpredictable. A structured pipeline is less flexible — and completely reliable.

HubAI uses AI where AI excels: reasoning about architecture, understanding requirements, making design decisions. It uses deterministic workflows where determinism excels: sequencing phases, validating outputs, rendering code.

Start with the simplest solution. Add complexity only when it demonstrably improves outcomes. Measure first. For well-defined tasks, a structured workflow will always outperform an autonomous agent — because predictability is not a limitation. It is the feature.