Velocity Ascent

Looking toward tomorrow today

Agent-to-Agent

Scaling Digital Production Pipelines

Velocity Ascent Live · February 11, 2026

Agentic Infrastructure in Practice

Enterprise AI conversations still over-index on models, focusing on benchmarks, parameter counts, feature comparisons, and release cycles. Yet production environments rarely fail because a model lacks capability. They often fail because workflow architecture was never designed to absorb autonomy in the first place.

When digital production scales without structural discipline, governance erodes quietly. When governance tightens reactively, innovation stalls. Both outcomes stem from the same architectural flaw: layering AI onto systems that were not built for persistent context, background execution, and policy-bound automation.

The competitive advantage is not in the model – it is in the pipeline.

The institutions that succeed will not be those experimenting most aggressively. They will be those that design structured agentic systems capable of increasing throughput while preserving accountability. In that environment, the competitive advantage is not the model itself but the production pipeline that governs how intelligence moves through the organization.

The question is not whether to use AI. The question is whether your infrastructure is designed for autonomy under constraint.


Metaphor: The Factory Floor, Modernized


Think of a legacy archive or production system as a dormant factory. The machinery exists. The materials are valuable. The workforce understands the craft. But everything runs manually, station by station. Modernization does not mean replacing the factory. It means upgrading the control system.

CASE STUDY: SandSoft Digital Archiving at Scale
In the SandSoft case study, the transformation began with physical ingestion and structured digitization. Assets were scanned, tagged, layered into archival and working formats, and indexed with AI-assisted metadata.

That was not digitization for convenience. It was input normalization. Once the inputs were stable, LoRA-based model adaptation was introduced: lightweight, domain-specific training anchored entirely in owned source material.

Then came the critical layer: agentic governance.

Watermarking at creation. Embedded licensing metadata. Monitoring agents scanning for IP misuse. Automated compliance reporting. This is not AI as a creative distraction. It is AI as a controlled production subsystem.

Each agent has a bounded mandate. No single node controls the entire flow. Every output is logged. Escalation paths are predefined. Like a well-run enterprise desk, authority is layered. Execution is distributed. Accountability remains human.

That is the difference between experimentation and infrastructure.
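
To make the structure concrete, here is a minimal sketch of a bounded-mandate agent with logging and a predefined escalation path. It is illustrative only, not the SandSoft implementation; the agent role, accepted formats, and metadata fields are assumptions.

```python
import logging
from dataclasses import dataclass
from typing import Optional

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

@dataclass
class AgentResult:
    output: Optional[dict]
    escalated: bool
    reason: str = ""

class MetadataAgent:
    """Hypothetical agent with one bounded mandate: tag assets it can parse,
    and escalate everything else to a human queue. It never publishes directly."""

    MANDATE = "metadata-tagging"

    def run(self, asset: dict) -> AgentResult:
        log.info("agent=%s asset=%s start", self.MANDATE, asset["id"])
        if asset.get("format") not in {"tiff", "png"}:  # outside the mandate
            log.warning("agent=%s asset=%s escalated", self.MANDATE, asset["id"])
            return AgentResult(None, escalated=True, reason="unsupported format")
        tags = {"source": "owned-archive", "license": asset.get("license", "unknown")}
        log.info("agent=%s asset=%s tagged", self.MANDATE, asset["id"])
        return AgentResult({"id": asset["id"], "tags": tags}, escalated=False)

# Every run is logged; escalations land in a human-reviewed queue downstream.
print(MetadataAgent().run({"id": "A-0042", "format": "pdf"}))
```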

Why This Matters to Senior Leadership

For CIOs, operating partners, and infrastructure decision-makers, the core risk is not technical failure but unmanaged velocity. Agentic systems accelerate output, and if governance architecture does not scale in parallel, exposure compounds quietly and often invisibly.

A disciplined production pipeline does three things:

  1. Reduces manual drag without decentralizing control
  2. Creates persistent institutional memory through logged workflows
  3. Converts AI from cost center experiment to auditable operational asset

In regulated or credibility-driven environments, autonomy without traceability creates risk. When agentic systems are deliberately structured, staged in maturity, and governed by explicit policy constraints, they shift from liability to resilience infrastructure. The distinction is not cosmetic. It is structural. This is not about layering AI tools onto existing workflows. It is about redesigning how work moves through the institution – with autonomy embedded inside accountability rather than operating outside it.

For leaders responsible for credibility, the most significant risk of agentic AI is not technical failure per se but unmanaged success – systems that move faster than oversight can absorb create risk exposure that quietly accumulates. A recent McKinsey analysis on agentic AI warns that AI initiatives can proliferate rapidly without adequate governance structures, making it difficult to manage risk unless oversight frameworks are deliberately redesigned for autonomous systems. Similarly, enterprise practitioners have cautioned that rapid deployment without structural guardrails can create a shadow governance problem, in which velocity outpaces policy enforcement and exposure compounds before leadership has visibility.

Agentic systems do not create exposure through failure. They create exposure when success outpaces oversight.

The opportunity, however, is substantial. Well-designed agentic workflows reduce manual drag, surface meaningful signal earlier in the lifecycle, and preserve human judgment for decisions that matter most. By embedding traceability, auditability, and policy enforcement directly into operational workflows, organizations create durable institutional assets – documented reasoning, consistent standards, and reusable analysis that withstand turnover and regulatory scrutiny.

This is how legacy organizations scale responsibly without eroding trust or sacrificing control.



Elevator Pitch

We are not automating judgment. We are structuring production pipelines where agents ingest, analyze, monitor, and validate under explicit policy constraints, while humans remain accountable for consequential decisions. The objective is scalable output with embedded governance, not speed for its own sake.


Less Theory, More Practice: Agentic AI in Legacy Organizations

Velocity Ascent Live · December 22, 2025

How disciplined adoption, ethical guardrails, and human accountability turn agentic systems into usable tools

Agentic AI does not fail in legacy organizations because the technology is immature. It fails when theory outruns practice. Large, credibility-driven institutions do not need sweeping reinvention or speculative autonomy. They need systems that fit into existing workflows, respect established governance, and improve decision-making without weakening accountability. The real work is not imagining what agents might do in the future, but proving what they can reliably do today – under constraint, under review, and under human ownership.

From Manual to Agentic: The New Protocols of Knowledge Work


Most legacy organizations already operate with deeply evolved protocols for managing risk. Research, analysis, review, and publication are intentionally separated. Authority is layered. Accountability is explicit. These structures exist because the cost of error is real.

Agentic AI introduces continuity across these steps. Context persists. Intent carries forward. Decisions can be staged rather than re-initiated. This continuity is powerful, but only when paired with restraint.

In practice, adoption follows a progression:

  • Manual – Human-led execution with discrete software tools
  • Assistive – Agents surface signals, summaries, and anomalies
  • Supervised – Agents execute bounded tasks with explicit review
  • Conditional autonomy – Agents act independently within strict policy and audit constraints

Legacy organizations that succeed treat these stages as earned, not assumed. Capability expands only when trust has already been established.
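
One way to treat these stages as earned rather than assumed is to encode the granted stage as policy that gates what an agent may do. A minimal sketch, assuming the four stages above; the gate logic and action labels are illustrative, not a prescribed implementation.

```python
from enum import IntEnum

class AutonomyStage(IntEnum):
    MANUAL = 0        # human-led execution with discrete tools
    ASSISTIVE = 1     # agents surface signals, summaries, anomalies
    SUPERVISED = 2    # agents execute bounded tasks with explicit review
    CONDITIONAL = 3   # independent action within policy and audit constraints

def permitted_mode(granted_stage: AutonomyStage) -> str:
    """Hypothetical gate: what an agent may do depends on the stage the
    organization has granted it, not on what the underlying model could do."""
    if granted_stage <= AutonomyStage.ASSISTIVE:
        return "draft-only"            # output goes to a human, never downstream
    if granted_stage == AutonomyStage.SUPERVISED:
        return "execute-with-review"   # the action runs, a reviewer signs off
    return "execute-within-policy"     # the action runs, audit logging is mandatory

print(permitted_mode(AutonomyStage.ASSISTIVE))   # draft-only
```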

Metaphor: The Enterprise Desk

How Agentic Roles Interact

    A useful way to understand agentic systems is to compare them to a well-run enterprise desk.

    Information is gathered, not assumed. Analysis is performed, not published. Risk is evaluated, not ignored. Final decisions are made by accountable humans who understand the consequences.

    An agentic pipeline mirrors this structure. Each agent has a narrow mandate. No agent controls the full flow. Authority is distributed, logged, and reversible. Outputs emerge from interaction rather than a single opaque decision point.

    This alignment is not cosmetic. It is what allows agentic systems to be introduced without breaking institutional muscle memory.



    Visual Media: Where Restraint Becomes Non-Negotiable

    Textual workflows benefit from established norms of review and correction. Visual media does not. Images and video carry implied authority, even when labeled. Errors propagate faster and linger longer.

    For this reason, ethical image and video generation cannot be treated as a creative convenience. It must be governed as a controlled capability. Generation should be conditional. Provenance must be explicit. Review must be unavoidable.

    In many cases, the correct agentic action is refusal or escalation, not output. The value of an agentic system is not that it can generate, but that it knows when it should not.
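
    A minimal sketch of that posture, assuming a simple request object; the policy checks, sensitive category, and provenance fields are invented for illustration rather than drawn from any specific product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GenerationRequest:
    prompt: str
    requester: str
    approved_use_case: bool = False

@dataclass
class GenerationDecision:
    action: str                      # "generate", "refuse", or "escalate"
    provenance: dict = field(default_factory=dict)

def gate_image_generation(req: GenerationRequest) -> GenerationDecision:
    """Hypothetical gate: generation is conditional, provenance is explicit,
    and anything ambiguous defaults to refusal or escalation, not output."""
    if not req.approved_use_case:
        return GenerationDecision(action="refuse")
    if "logo" in req.prompt.lower():                 # example sensitive category
        return GenerationDecision(action="escalate")
    return GenerationDecision(
        action="generate",
        provenance={
            "requester": req.requester,
            "created_at": datetime.now(timezone.utc).isoformat(),
            "review_required": True,                 # review stays unavoidable
        },
    )

print(gate_image_generation(GenerationRequest("campaign hero image", "design-desk", True)))
```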

    Why This Matters to Senior Leadership

    For leaders responsible for credibility, the primary risk of agentic AI is not technical failure. It is ungoverned success. Systems that move faster than oversight can absorb create exposure that compounds quietly.

    The opportunity, however, is substantial. Well-designed agentic workflows reduce manual drag, surface meaningful signal earlier, and preserve human judgment for decisions that actually matter. They also create durable institutional assets – documented reasoning, consistent standards, and reusable analysis that survives turnover and scrutiny.

    This is how legacy organizations scale without eroding trust.


    Elevator Pitch (Agentic Workflows):

    We are not automating decisions. We are structuring workflows where agents gather, analyze, and validate information under clear rules, while humans remain accountable for every consequential call. The goal is reliability, clarity, and trust – not speed for its own sake.

    Agentic AI will not transform legacy organizations through ambition alone. It will do so through discipline. The institutions that succeed will not be the ones that adopt the most autonomy the fastest. They will be the ones that prove, step by step, what agents can do responsibly today. Less theory. More practice. And accountability at every turn.

    AI Agents Don’t Work Like Humans – And That’s the Point

    Joe Skopek · November 14, 2025

    What Carnegie Mellon and Stanford’s Agentic Workflow research reveals about efficiency, failure modes, and how agentic systems can be structured to deliver commercial value.

    A Clearer View of How Agents Actually Work

    Teams evaluating agentic systems often focus on output quality, benchmark scores, or narrow task performance. Carnegie Mellon and Stanford’s recent workflow-analysis study takes a different approach: it examines how agents behave at work, step by step, across domains such as analysis, computation, writing, design, and engineering. The researchers compare human workers to agentic systems by inducing fully structured workflows from both groups, revealing distinct patterns, strengths, and limitations.

    “AI agents are continually optimized for tasks related to human work, such as software engineering and professional writing, signaling a pressing trend with significant impacts on the human workforce. However, these agent developments have often not been grounded in a clear understanding of how humans execute work, to reveal what expertise agents possess and the roles they can play in diverse workflows.”

    How Do AI Agents Do Human Work? Comparing AI and Human Workflows Across Diverse Occupations
    Zora Zhiruo Wang, Yijia Shao, Omar Shaikh, Daniel Fried, Graham Neubig, Diyi Yang
    Carnegie Mellon University and Stanford University
    arXiv:2510.22780v1

    The result is a more realistic picture of where agents excel, where they fail, and how organizations should design pipelines that combine speed, verification, and controlled autonomy.

    The Programmatic Bias: A Feature, Not a Defect

    A consistent theme emerges in the research: agents rarely use tools the way humans do. Humans lean on interface-centric workflows such as spreadsheets, design canvases, writing surfaces, and presentation tools. Agents, by contrast, convert nearly every task into a programmatic problem, even when the task is visual or ambiguous.

    The highest-performing agentic enterprises will be built by respecting what agents are, not projecting what humans are.

    This is not a quirk of a single framework. It is a systemic pattern across architectures and models. Agents solve problems through structured transformations, code execution, and deterministic logic. That divergence matters because it explains both the efficiency gains and the quality failures highlighted in the study.

    Agents move quickly because they bypass the interface layer.
    Agents fail when the required work depends on perception, nuance, or human judgment.

    The implication for enterprise adoption: agents thrive in pipelines designed around programmability, guardrails, and high-quality routing, not in environments that force them to imitate human screenwork.
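
    As a small illustration of what bypassing the interface layer looks like (the data and rule here are invented for the example), the reconciliation a human would do in a spreadsheet becomes a deterministic transformation that an agent can execute and a verifier can re-run:

```python
import csv
import io

# Work a human would do cell by cell in a spreadsheet, expressed instead as a
# repeatable, checkable transformation an agent can run programmatically.
RAW = """account,amount
A-100,250.00
A-100,250.00
A-200,90.50
"""

def deduplicate_and_total(raw_csv: str) -> dict:
    seen, totals = set(), {}
    for row in csv.DictReader(io.StringIO(raw_csv)):
        key = (row["account"], row["amount"])
        if key in seen:              # drop exact duplicate rows deterministically
            continue
        seen.add(key)
        totals[row["account"]] = totals.get(row["account"], 0.0) + float(row["amount"])
    return totals

print(deduplicate_and_total(RAW))    # {'A-100': 250.0, 'A-200': 90.5}
```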


    Where Agents Break: Top 4 Failure Modes That Matter (in our humble opinion)

    The research identifies several recurring failure modes that executives and decision makers should treat as predictable rather than as edge cases (arXiv:2510.22780v1).

    1. Fabricated Outputs

    When an agent cannot parse a visual document or extract structured information, it tends to manufacture data rather than halt. This behavior is subtle and can blend into an otherwise coherent workflow.

    2. Misuse of Advanced Tools

    When faced with a blocked step, such as an unreadable PDF or complex instructions, agents often pivot to external search tools, sometimes replacing user-provided files with unrelated material.

    3. Weakness in Visual Tasks

    Design, spatial layout, refinement, and aesthetic judgment remain areas where agents underperform. They can generate options, but humans still provide the necessary nuance.

    4. Interpretation Drift

    Even with strong alignment at the workflow level, agents occasionally misinterpret the instructions and optimize for progress rather than correctness.

    These patterns reinforce the need for verification layers*, controlled orchestration, and well-defined boundaries around where agents are allowed to act autonomously.

    [*] This is where the NANDA framework is essential
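
    A minimal sketch of a verification layer in that spirit (it is not the NANDA framework itself, and the field names are assumptions): every intermediate value must be traceable to a document the user actually supplied, otherwise the step halts instead of letting fabricated data propagate.

```python
class VerificationError(RuntimeError):
    """Raised when an agent's intermediate output cannot be traced to a source."""

def verify_extraction(extracted: dict, source_documents: dict) -> dict:
    """Hypothetical check: every extracted value must cite a supplied document,
    and the cited document must actually contain that value."""
    for field_name, item in extracted.items():
        doc_id, value = item["source_doc"], str(item["value"])
        if doc_id not in source_documents:
            raise VerificationError(f"{field_name}: cites unknown document {doc_id}")
        if value not in source_documents[doc_id]:
            raise VerificationError(f"{field_name}: value not found in {doc_id}")
    return extracted  # only verified intermediates continue downstream

sources = {"10-K.txt": "Total revenue was 4.2 billion in fiscal 2024."}
verified = verify_extraction(
    {"revenue": {"value": "4.2 billion", "source_doc": "10-K.txt"}}, sources
)
print(verified)
```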


    Where Agents Excel: Efficiency at Scale

    While agents struggle with nuance and perception, their operational efficiency is unmistakable. Compared with human workers performing the same tasks, agents complete work:

    • 88 percent faster
    • With over 90 percent lower cost
    • Using two orders of magnitude fewer actions (arXiv:2510.22780v1)

    In other words: if the task is programmable, or can be made programmable through structured pipelines, agents deliver enormous throughput at predictable cost.

    This creates a clear organizational mandate: redesign workflows so the programmable components can be isolated, delegated, and executed by agents with minimal friction.


    Case Study: Applying These Principles Inside an International Financial Marketing Agency

    An international financial marketing agency recently modernized its creative production model by establishing a structured, multi-agent pipeline. Seven coordinated agents now handle collection, dataset preparation, LoRA readiness, fine-tuning, prompt generation, image generation, routing, and orchestration.

    Nothing in this system depends on agents behaving like humans. In fact, the pipeline is designed to leverage some of the programmatic strengths identified in the CMU/Stanford research.

    Key Architectural Principles

    1. Programmatic First

    Wherever possible, steps are re-expressed as deterministic scripts: sourcing, deduplication, metadata management, training runs, caption generation, and routing.

    2. Verification Layering

    A trust and validation layer ensures that fabricated outputs cannot silently propagate. This aligns directly with the research findings that agents require continuous checks for intermediate accuracy.

    3. Zero-Trust Boundaries

    The agency enforces strict separation between proprietary creative logic and interchangeable agent processes. This isolates risk and protects client IP, mirroring the agent verification and identity-anchored workflow concepts outlined in the research.

    4. Packet-Switched Execution

    Tasks are broken into small, routable fragments. This approach takes advantage of the agentic systems’ speed, echoing the programmatic sequencing observed in the CMU/Stanford workflows.

    5. Human Oversight at the Right Granularity

    Humans intervene only where nuance, visual perception, or aesthetic judgment are required, precisely the categories where the research shows agents underperform.

    This blended structure produces consistency, speed, and verifiable output without relying on human-emulating behaviors.
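
    A minimal sketch of that blended structure; the fragment names and routing rule are assumptions rather than the agency's actual pipeline. Deterministic fragments execute immediately, while fragments flagged as judgment-dependent are queued for a human.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Fragment:
    name: str
    needs_human_judgment: bool
    run: Callable[[dict], dict]

def orchestrate(fragments, payload: dict) -> dict:
    """Route each fragment: programmatic steps run automatically,
    judgment-dependent steps are parked in a human review queue."""
    human_queue = []
    for frag in fragments:
        if frag.needs_human_judgment:
            human_queue.append(frag.name)   # humans intervene only where needed
            continue
        payload = frag.run(payload)         # deterministic, repeatable execution
    payload["pending_human_review"] = human_queue
    return payload

pipeline = [
    Fragment("dedupe_assets", False, lambda p: {**p, "assets": sorted(set(p["assets"]))}),
    Fragment("generate_captions", False, lambda p: {**p, "captions": len(p["assets"])}),
    Fragment("aesthetic_review", True, lambda p: p),   # visual judgment stays human
]
print(orchestrate(pipeline, {"assets": ["a.png", "a.png", "b.png"]}))
```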


    Why This Matters for Commercial Teams

    Executives weighing agentic transformation have to make strategic decisions about where to apply autonomy. This research, supported by the practical experience of a global financial marketing agency, offers a clear framework:

    Agents excel at:

    • Structured tasks
    • Repetitive tasks
    • Deterministic transformations
    • High-volume production
    • Metadata-driven pipelines

    Humans remain essential for:

    • Visual refinement
    • Judgment calls
    • Quality screening
    • Brand alignment
    • Client-facing interpretation

    The correct model is neither replace nor replicate. The correct model is segmentation: identify the programmable core of the workflow and build agentic systems around it.


    The Path Forward

    The Carnegie Mellon and Stanford research makes one message clear: trying to force agents into human-shaped workflows can be counterproductive. They are not UI workers. They do not navigate ambiguity the way humans do. They operate through code, structure, and deterministic logic.

    Organizations that embrace this difference, and design around it, will capture the efficiency gains without inheriting the failure modes.

    Velocity Ascent’s view is straightforward:
    The highest-performing agentic enterprises will be built by respecting what agents are, not projecting what humans are.


    MCP (Model Context Protocol) vs A2A (Agent-to-Agent)

    Velocity Ascent Live · April 22, 2025

    Claude’s coordinated minds compared to Google’s free-thinking agents

    As artificial intelligence evolves from monolithic models to modular, multi-agent ecosystems, two distinct coordination philosophies are emerging – each backed by a leading AI innovator.

    Google’s Agent-to-Agent (A2A) Protocol

    Google’s A2A protocol is a pioneering framework that enables independent AI agents to communicate, delegate tasks, and reason together dynamically. Built for scale and flexibility, it supports emergent, collaborative intelligence – where specialized agents work together like an adaptive digital team. It’s Google’s blueprint for distributed cognitive systems, aiming to unlock the next frontier of AI-driven autonomy and problem-solving.

    Claude’s Model Context Protocol (MCP)

    Anthropic’s MCP takes a different approach. It enables Claude to safely coordinate sub-agents under tightly governed rules and ethical constraints. Task decomposition, execution, and reintegration are centrally managed – ensuring outputs remain reliable, aligned, and auditable. Rooted in Anthropic’s “Constitutional AI” philosophy, MCP prioritizes trust and transparency over autonomy.

    Two Visions, One Destination

    While both protocols seek to advance multi-agent systems, their contrasting designs reflect broader strategic trade-offs:

    • A2A favors creative autonomy and scalability.
    • MCP emphasizes governance, safety, and alignment.

    Enterprises evaluating AI strategy must understand these paradigms – not just as technical choices, but as directional bets on how intelligence should be organized, managed, and trusted in mission-critical environments.

    Everyman Metaphor

    Google A2A is like a team of smart coworkers in a brainstorming session.

    • Each one jumps in when they have something to add.
    • They chat, debate, hand off work.
    • The A2A protocol is their shared meeting rules so no one talks over anyone else, and ideas flow.

    Claude MCP is like a project manager assigning tasks to freelancers with checklists.

    • Each sub-agent has a clear role and safety constraints.
    • Claude ensures alignment, checks results, and approves before anything ships.
    • MCP is the project charter + checklist system that keeps things on track and ethical.
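
    A purely conceptual sketch of the two coordination styles the metaphor describes. This is not the real A2A or MCP API – the classes and message shapes are invented for illustration – but it captures the contrast between agent-initiated delegation and centrally managed orchestration:

```python
# Conceptual contrast only; neither class uses the actual A2A or MCP wire formats.

class PeerAgent:
    """A2A-style framing: peers discover one another and delegate directly."""
    def __init__(self, name, registry):
        self.name, self.registry = name, registry
        registry[name] = self

    def delegate(self, peer_name, task):
        return self.registry[peer_name].handle(task)    # agent-initiated hand-off

    def handle(self, task):
        return f"{self.name} handled '{task}'"

class ManagedOrchestrator:
    """MCP-style framing as this article uses it: one coordinator assigns bounded
    tasks, checks each result, and approves it before anything ships."""
    def __init__(self, workers):
        self.workers = workers

    def run(self, tasks):
        results = []
        for task, worker in zip(tasks, self.workers):
            result = worker.handle(task)                # bounded, assigned task
            results.append(("approved", result))        # central check before release
        return results

registry = {}
research = PeerAgent("research", registry)
writer = PeerAgent("writer", registry)
print(research.delegate("writer", "draft summary"))
print(ManagedOrchestrator([research, writer]).run(["gather sources", "draft summary"]))
```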

    Why This Matters to a CEO

    1. Different Models of Intelligence

    • A2A (Google): Builds toward emergent, distributed problem-solving – good for R&D, dynamic workflows, and creative automation.
    • MCP (Claude): Optimized for safe, auditable, structured outputs – great for legal, financial, or sensitive business processes.

    2. Innovation vs Control

    • A2A allows for fast exploration across agents.
    • MCP ensures high reliability and governance in outputs.

    3. Strategic Advantage

    • Choosing the right model can define your org’s AI maturity and risk posture.
    • A2A is agile and experimental. MCP is compliant and dependable.

    Elevator Pitch (AI Strategy Lens):

    We’re seeing two AI philosophies crystallize. Google’s A2A Protocol is about autonomous AI agents reasoning and working together – modular intelligence at scale. Anthropic’s Claude MCP is a more structured approach, where sub-agents are coordinated safely and transparently under alignment protocols. A2A is the future of creative, emergent AI systems; MCP is the foundation for trustworthy AI in sensitive, high-stakes environments. The real unlock? Enterprises will need both – creativity where it’s safe, and constraint where it’s critical.

    Feature | Google A2A | Claude MCP (Anthropic)
    Philosophy | Autonomous agents reasoning together | Managed coordination for safe, aligned outcomes
    Style of Collaboration | Open-ended, agent-initiated delegation | Controlled, system-managed orchestration
    Use Case Example | One agent researches, another writes, a third validates facts | Claude delegates parts of a legal doc to specialized agents for summary, tone, and risk-check
    Agent Autonomy | High – agents reason and request help | Moderate – agents act under system-defined guardrails
    Trust & Alignment Focus | Flexible reasoning, goal-directed | Guardrails, safety, constitutional AI principles
    Goal | Scalable collective intelligence | Trustworthy coordination of AI-driven tasks

    Claude’s Model Context Protocol (MCP)

    Anthropic’s Model Context Protocol (MCP) is Claude’s system for structured, safe, and efficient task delegation across sub-agents and tool-using capabilities within the Claude ecosystem.

    Google’s Agent-to-Agent (A2A) Protocol

    Google’s A2A protocol is a pioneering framework that enables independent AI agents to communicate, delegate tasks, and reason together dynamically. Built for scale and flexibility, it supports emergent, collaborative intelligence—where specialized agents work together like an adaptive digital team.

    SOURCES: A2A developer documentation · MCP developer documentation
