Velocity Ascent


Looking toward tomorrow today


Agentic AI

AI Agents Don’t Work Like Humans – And That’s the Point

Joe Skopek · November 14, 2025 ·

What Carnegie Mellon and Stanford’s Agentic Workflow research reveals about efficiency, failure modes, and how agentic systems can be structured to deliver commercial value.

A Clearer View of How Agents Actually Work

Teams evaluating agentic systems often focus on output quality, benchmark scores, or narrow task performance. Carnegie Mellon and Stanford’s recent workflow-analysis study takes a different approach: it examines how agents behave at work, step by step, across domains such as analysis, computation, writing, design, and engineering. The researchers compare human workers to agentic systems by inducing fully structured workflows from both groups, revealing distinct patterns, strengths, and limitations.

“AI agents are continually optimized for tasks related to human work, such as software engineering and professional writing, signaling a pressing trend with significant impacts on the human workforce. However, these agent developments have often not been grounded in a clear understanding of how humans execute work, to reveal what expertise agents possess and the roles they can play in diverse workflows.”

How Do AI Agents Do Human Work? Comparing AI and Human Workflows Across Diverse Occupations
Zora Zhiruo Wang, Yijia Shao, Omar Shaikh, Daniel Fried, Graham Neubig, Diyi Yang
Carnegie Mellon University / Stanford University
arXiv:2510.22780v1

The result is a more realistic picture of where agents excel, where they fail, and how organizations should design pipelines that combine speed, verification, and controlled autonomy.

The Programmatic Bias: A Feature, Not a Defect

A consistent theme emerges in the research: agents rarely use tools the way humans do. Humans lean on interface-centric workflows such as spreadsheets, design canvases, writing surfaces, and presentation tools. Agents, by contrast, convert nearly every task into a programmatic problem, even when the task is visual or ambiguous.

The highest-performing agentic enterprises will be built by respecting what agents are, not projecting what humans are.

This is not a quirk of a single framework. It is a systemic pattern across architectures and models. Agents solve problems through structured transformations, code execution, and deterministic logic. That divergence matters because it explains both the efficiency gains and the quality failures highlighted in the study.

Agents move quickly because they bypass the interface layer.
Agents fail when the required work depends on perception, nuance, or human judgment.

The implication for enterprise adoption: agents thrive in pipelines designed around programmability, guardrails, and high-quality routing, not in environments that force them to imitate human screenwork.
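That routing idea can be sketched in a few lines. The task fields and the binary agent/human split below are illustrative assumptions for this post, not the study's taxonomy:

```python
from dataclasses import dataclass

# Hypothetical task metadata; the field names are illustrative.
@dataclass
class Task:
    name: str
    deterministic: bool      # output fully specified by inputs and rules
    needs_perception: bool   # requires visual or aesthetic judgment

def route(task: Task) -> str:
    """Send programmable work to agents; keep perception-heavy work with humans."""
    if task.deterministic and not task.needs_perception:
        return "agent"
    return "human"

tasks = [
    Task("deduplicate dataset", deterministic=True, needs_perception=False),
    Task("refine layout", deterministic=False, needs_perception=True),
]
assignments = {t.name: route(t) for t in tasks}
```

The point of the sketch is the asymmetry: the agent branch is the default only when both conditions hold, mirroring the study's finding that agents excel exactly where work can be made programmatic.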


Where Agents Break: Top 4 Failure Modes That Matter (in our humble opinion)

The research identifies several recurring failure modes that executives and decision makers should treat as predictable rather than as edge cases (arXiv:2510.22780v1).

1. Fabricated Outputs

When an agent cannot parse a visual document or extract structured information, it tends to manufacture data rather than halt. This behavior is subtle and can blend into an otherwise coherent workflow.

2. Misuse of Advanced Tools

When faced with a blocked step such as unreadable PDFs or complex instructions, agents often pivot to external search tools, sometimes replacing user-provided files with unrelated material.

3. Weakness in Visual Tasks

Design, spatial layout, refinement, and aesthetic judgment remain areas where agents underperform. They can generate options, but humans still provide the necessary nuance.

4. Interpretation Drift

Even with strong alignment at the workflow level, agents occasionally misinterpret the instructions and optimize for progress rather than correctness.

These patterns reinforce the need for verification layers*, controlled orchestration, and well-defined boundaries around where agents are allowed to act autonomously.

[*] This is where the NANDA framework is essential
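A verification layer of the kind described above can be sketched as a simple provenance check: every extracted value must be traceable to the source material, or the step halts instead of letting a plausible-looking fabrication propagate downstream. The function and field names here are illustrative, not from the study:

```python
# Minimal provenance check: reject any extracted field whose value cannot
# be found verbatim in the source document, and halt instead of passing
# fabricated data to the next pipeline stage.
def verify_extraction(source_text: str, extracted: dict) -> dict:
    rejected = [field for field, value in extracted.items()
                if str(value) not in source_text]
    if rejected:
        raise ValueError(f"unverifiable fields: {sorted(rejected)}")
    return extracted

source = "Invoice 4412, total 1,250.00 EUR, due 2025-12-01"
verified = verify_extraction(source, {"invoice": "4412", "total": "1,250.00"})
```

A verbatim-match check is deliberately naive; production systems would verify against structured parses. The design point survives, though: the failure mode is silent fabrication, so the check must fail loudly.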


Where Agents Excel: Efficiency at Scale

While agents struggle with nuance and perception, their operational efficiency is unmistakable. Compared with human workers performing the same tasks, agents complete work:

• 88 percent faster
• With over 90 percent lower cost
• Using two orders of magnitude fewer actions (arXiv:2510.22780v1)

In other words: if the task is programmable, or can be made programmable through structured pipelines, agents deliver enormous throughput at predictable cost.

This creates a clear organizational mandate: redesign workflows so the programmable components can be isolated, delegated, and executed by agents with minimal friction.


Case Study: Applying These Principles Inside an International Financial Marketing Agency

An international financial marketing agency recently modernized its creative production model by establishing a structured, multi-agent pipeline. Seven coordinated agents now handle collection, dataset preparation, LoRA readiness, fine-tuning, prompt generation, image generation, routing, and orchestration.

Nothing in this system depends on agents behaving like humans. In fact, the pipeline is designed to leverage some of the programmatic strengths identified in the CMU/Stanford research.

Key Architectural Principles

1. Programmatic First

Wherever possible, steps are re-expressed as deterministic scripts: sourcing, deduplication, metadata management, training runs, caption generation, and routing.

2. Verification Layering

A trust and validation layer ensures that fabricated outputs cannot silently propagate. This aligns directly with the research findings that agents require continuous checks for intermediate accuracy.

3. Zero-Trust Boundaries

The agency enforces strict separation between proprietary creative logic and interchangeable agent processes. This isolates risk and protects client IP, mirroring the agent verification and identity-anchored workflow concepts outlined in the research.

4. Packet-Switched Execution

Tasks are broken into small, routable fragments. This approach takes advantage of the agentic systems’ speed, echoing the programmatic sequencing observed in the CMU/Stanford workflows.

5. Human Oversight at the Right Granularity

Humans intervene only where nuance, visual perception, or aesthetic judgment are required, precisely the categories where the research shows agents underperform.

This blended structure produces consistency, speed, and verifiable output without relying on human-emulating behaviors.
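The packet-switched pattern above can be sketched as a router over small fragments, with only flagged fragments escalated to a human review queue. The fragment schema is an assumption for illustration, not the agency's actual data model:

```python
from collections import deque

# "Packet-switched" execution sketch: work arrives as small, independently
# routable fragments. Deterministic fragments are auto-executed by agents;
# fragments flagged as needing human judgment land in a review queue.
def run_pipeline(fragments):
    agent_done, human_queue = [], deque()
    for frag in fragments:
        if frag.get("needs_human"):
            human_queue.append(frag["id"])
        else:
            agent_done.append(frag["id"])
    return agent_done, list(human_queue)

frags = [
    {"id": "dedupe-001"},
    {"id": "caption-002"},
    {"id": "layout-003", "needs_human": True},
]
done, review = run_pipeline(frags)
```

Keeping fragments small is what lets the system exploit agent speed: the two programmable fragments clear immediately, while only the visual-layout fragment waits on a human.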


Why This Matters for Commercial Teams

Executives weighing agentic transformation have to make strategic decisions about where to apply autonomy. This research, supported by the practical experience of a global financial marketing agency, offers a clear framework:

Agents excel at:

• Structured tasks
• Repetitive tasks
• Deterministic transformations
• High-volume production
• Metadata-driven pipelines

Humans remain essential for:

• Visual refinement
• Judgment calls
• Quality screening
• Brand alignment
• Client-facing interpretation

The correct model is neither replace nor replicate. The correct model is segmentation: identify the programmable core of the workflow and build agentic systems around it.


The Path Forward

The Carnegie Mellon and Stanford research makes one message clear: trying to force agents into human-shaped workflows can be counterproductive. They are not UI workers. They do not navigate ambiguity the way humans do. They operate through code, structure, and deterministic logic.

Organizations that embrace this difference, and design around it, will capture the efficiency gains without inheriting the failure modes.

Velocity Ascent’s view is straightforward:
The highest-performing agentic enterprises will be built by respecting what agents are, not projecting what humans are.


NANDA: Networked Agents And Decentralized AI

Velocity Ascent Live · April 16, 2025 ·

Pioneering the Future of Decentralized Intelligence

Imagine a network of specialized AI agents working together across a secure, decentralized architecture. Each agent handles specific tasks, communicates effortlessly, and operates autonomously—enabling your business to innovate, streamline processes, and make data-driven decisions in real-time.

“Just as DNS revolutionized the internet by providing a neutral framework for web access, we need a similar infrastructure for the ‘Internet of Agents.’ We’re launching NANDA – an open protocol for registry, verification, and reputation among AI agents – in collaboration with national labs and global universities (decentralized across 8 time zones!)
NANDA will pave the way for seamless collaboration across diverse systems, fully compatible with enterprise protocols like MCP and A2A. This initiative is a step toward democratizing agentic AI, creating an ecosystem where specialized agents can work together to solve complex challenges—just like DNS did for the web.”

Ayush Chopra
PhD Candidate at MIT

This dynamic ecosystem operates within a secure, decentralized infrastructure that ensures privacy, trust, and accountability at every level. This concept is brought to life through the NANDA (Networked Agents And Decentralized AI) initiative, which aims to create a truly decentralized Internet of AI Agents.

The Internet of AI Agents

At the MIT Decentralized AI Summit, the Model Context Protocol (MCP) was introduced as a standardized method for enabling communication between AI agents, tools, and resources. While MCP serves as a foundational interaction protocol, NANDA goes beyond the basics by addressing the infrastructural challenges required to support a truly decentralized, large-scale network of AI agents.

NANDA builds upon MCP to provide the critical components needed for a distributed ecosystem where potentially billions of AI agents can collaborate across organizational and data boundaries. The protocol extends the capabilities of traditional AI systems, fostering seamless agent collaboration at scale—something that current centralized models struggle to achieve due to rigid data structures and lack of transparency.
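MCP messages are JSON-RPC 2.0 envelopes. The sketch below shows the general shape of a tool-call request an agent might send over an MCP transport; the `tools/call` method name follows the public MCP specification, but the tool name and arguments here are hypothetical:

```python
import json

# Simplified shape of an MCP tool-call request (JSON-RPC 2.0).
# "lookup_agent" and its arguments are illustrative, not a real MCP tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "lookup_agent",
        "arguments": {"capability": "image-generation"},
    },
}
wire = json.dumps(request)       # what actually crosses the transport
decoded = json.loads(wire)
```

This envelope is the "ocean current" of the metaphor below: a consistent, structured way for any two parties to exchange requests, regardless of which agents sit at either end.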

“Enter the landscape of existing paradigms and the path towards decentralized AI. ML algorithms like foundation models excel in AI capabilities but remain centralized. Decentralized systems, like blockchains and volunteer computing, distribute storage and computation but lack intelligence. We argue that bringing the two capabilities together can have an outsized impact. We call upon the AI community to focus on the open challenges in the upper-right quadrant, where decentralized architectures can give rise to a new generation of AI systems that are both highly capable and aligned with the values of a decentralized society.”

A Perspective on Decentralizing AI
Abhishek Singh, Charles Lu, Gauri Gupta, Nikhil Behari, Ayush Chopra, Jonas Blanc, Tzofi Klinghoffer, Kushagra Tiwary, and Ramesh Raskar
MIT Media Lab

Everyman Metaphor

Imagine a vast coral reef ecosystem.

Each coral polyp, tiny but specialized, is like an individual AI agent in this massive decentralized network. Some filter nutrients, others build the reef, and still others host symbiotic relationships with fish, algae, and crustaceans—each with its unique role.

Similarly, AI agents in NANDA perform specific tasks—learning, navigating, transacting, and interacting—each contributing to the broader ecosystem.

The Model Context Protocol (MCP) is similar to the ocean currents that flow through the reef. These currents are consistent, structured, and essential—they allow nutrients, larvae, and signals to move through the system. In the same way, MCP ensures that information flows smoothly, securely and predictably between agents, tools, and resources.

But ocean currents alone don’t make a thriving reef.

That’s where NANDA comes in—it’s the reef structure itself, the intricate, interconnected framework built over time that supports the life within. NANDA provides the scaffolding—the decentralized architecture—where all these AI agents (like reef dwellers) can thrive together. It allows for scalability, resilience, and collaboration across countless agents, just like a healthy reef sustains an immense variety of life.

So in this metaphor:

  • AI agents = reef creatures and coral polyps
  • MCP = ocean currents and nutrient flows
  • NANDA = the coral reef’s skeleton, enabling life to flourish at scale

Together, they form a self-sustaining, adaptive ecosystem—an Internet of AI agents as vibrant and alive as a coral reef teeming with collaborative intelligence.


Why NANDA Matters to a CEO

Strategic Advantage
Adopting NANDA positions organizations to lead in AI-driven markets by supporting everything from R&D to regulated processes. Its infrastructure enables flexibility for creative automation and ensures reliability for mission-critical applications. By aligning AI maturity with business goals, NANDA facilitates a smoother path to AI adoption, providing long-term competitive advantages in industries where agility and scalability are key.

Decentralized Intelligence at Scale
NANDA’s approach transforms traditional AI systems by decentralizing both the data and control, enabling an intelligent ecosystem of agents that collaborate seamlessly. This enables secure, dynamic workflows across industries, from healthcare to finance. Unlike standalone AI systems, NANDA offers enhanced capabilities for discovery, search, authentication, and interaction traceability—ensuring secure and scalable intelligence for enterprise environments.

Innovation with Governance
With NANDA, organizations can embrace innovation while maintaining full control over security and compliance. NANDA balances rapid development with the need for governance by providing developers with tools for building secure, verifiable applications and agents. Secure authentication protocols and verifiable interaction logs (“Trace”) ensure that the system remains accountable, transparent, and aligned with regulatory standards for sensitive operations.

Why NANDA Will Quickly Provide a Secure Solution

For institutions like hospitals or financial organizations, adopting a decentralized system like NANDA may initially raise concerns regarding security and compliance. However, NANDA is designed to address these concerns head-on. Built from the ground up with trust and accountability at its core, NANDA leverages secure multi-layered encryption, authentication mechanisms, and immutable trace logs to ensure the integrity of data and interactions.

Additionally, NANDA’s infrastructure is designed to scale while meeting the most stringent privacy and regulatory requirements. By incorporating real-time verification, verifiable agent-to-agent interactions, and decentralized control, NANDA provides a robust security framework that enables organizations to trust its decentralized agents with mission-critical tasks while ensuring compliance with industry standards. The framework’s focus on decentralized trust eliminates the need for a single point of failure, further strengthening its suitability for high-security environments like healthcare or finance.

Core Value Proposition and Enabling Technology

NANDA explicitly positions itself as not just an interaction protocol (like MCP) but as a comprehensive infrastructure designed to support decentralized, large-scale AI collaboration. By providing a network fabric with critical components such as:

  • Registries for discovering agents, tools, and resources
  • Interaction databases for auditing and referencing agent interactions
  • Developer tools and SDKs to integrate third-party applications

NANDA creates the foundation for building a secure, scalable ecosystem where AI agents can collaborate across industries with confidence.
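Since NANDA's concrete interfaces are not given in this post, the following in-memory sketch only illustrates the registry/discovery idea; every class and method name is hypothetical:

```python
# Hypothetical sketch of a registry/discovery component: agents register
# their capabilities, and peers discover collaborators by capability.
class AgentRegistry:
    def __init__(self):
        self._agents: dict[str, set[str]] = {}

    def register(self, agent_id: str, capabilities: set[str]) -> None:
        self._agents[agent_id] = capabilities

    def discover(self, capability: str) -> list[str]:
        """Return agents advertising the requested capability."""
        return sorted(a for a, caps in self._agents.items()
                      if capability in caps)

registry = AgentRegistry()
registry.register("summarizer-01", {"summarize", "translate"})
registry.register("vision-07", {"caption", "ocr"})
matches = registry.discover("ocr")
```

A real decentralized registry would replace this single dictionary with replicated, verifiable records, but the lookup contract (register by capability, discover by capability) is the part the ecosystem builds on.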

Key Differentiation Factors in the AI Agent Ecosystem

NANDA distinguishes itself from other AI agent frameworks through its explicit focus on decentralization, its large-scale infrastructure, and its strong academic foundation. By prioritizing decentralized trust, NANDA addresses the core limitations of centralized AI models and networks. Furthermore, NANDA’s traceable accountability systems ensure that every action is verifiable, creating a trustworthy environment for enterprise-scale applications.

Unlike frameworks like LangChain or AutoGen, which focus on individual agents or small-team coordination, NANDA aims to build the “interstate highway system” for decentralized AI—creating the infrastructure needed for billions of agents to collaborate seamlessly across the globe. This vision, coupled with a deep academic research foundation, positions NANDA as a true leader in the development of decentralized intelligence.


Elevator Pitch (AI Strategy Lens):

NANDA is a secure AI framework that helps businesses innovate with confidence. It connects specialized AI agents across a decentralized network, enabling them to collaborate, learn, and make decisions autonomously. Built on Anthropic’s MCP, NANDA offers strong security with encrypted communication, real-time authentication, and verifiable logs, ensuring that sensitive operations stay secure and compliant. With easy integration and developer tools, NANDA supports rapid innovation, making it a scalable and reliable solution for your business to harness AI safely and effectively.

NANDA Ecosystem

Discovery within the NANDA ecosystem enables agents to find and interact with one another efficiently across the network. This includes robust Search functionality for querying distributed knowledge, Authentication mechanisms to ensure secure and trustworthy agent interactions, and a Trace layer that supports verifiable accountability in agent-to-agent exchanges. The system is built on a modular architecture:

  • Protocol Layer: the foundation for AI communication
  • Developer Tools: empowering builders within the ecosystem
  • Infrastructure Layer: a registry of agents, resources, and interactions
  • Applications: third-party integrations via SDKs, registries, and databases



Reality Check: Federated Agentic and Decentralized Artificial Intelligence

Joe Skopek · September 12, 2024 ·

AI is evolving rapidly, and a new approach is gaining momentum: agentic AI. Unlike current tools like ChatGPT, which require human input to operate, agentic AI is designed to act independently – monitoring competitors’ marketing efforts, scheduling real-time content updates, or predicting real-world equipment needs – without waiting for instructions. As these systems take on more autonomy, robust security measures become essential to ensure their actions remain safe, aligned, and trustworthy.

MIT Media Lab – Ramesh Raskar: A Perspective on Decentralizing AI

“A.I. co-pilots, assistants and agents promise to boost productivity with helpful suggestions and shortcuts.”

New York Times, September 2024

While this technology is still in the early stages, it’s being hyped as the next big thing in AI, promising to boost productivity and innovation. However, we’re not there yet – agentic AI is mostly a vision for the future that’s rapidly approaching.

Governance Challenges: Accountability, Regulation, and Security

Governance issues with Agentic AI and decentralized computing stem from a lack of centralized control, making regulation and enforcement difficult across jurisdictions. In decentralized systems, no single authority oversees operations, while in Agentic AI, autonomous decisions raise questions about accountability, such as who is responsible when things go wrong – developers, users, or the AI itself.

Ethical and legal compliance is a significant challenge, as both agentic AI and decentralized systems often operate beyond traditional frameworks, making it difficult to ensure they adhere to laws or ethical guidelines.

Security is another concern. Decentralized systems may suffer from vulnerabilities due to inconsistent protocols, while agentic AI can be manipulated or exhibit harmful behaviors. Existing regulatory frameworks are frequently outdated, creating oversight gaps for these emerging technologies.

Both technologies also face issues with coordination and standardization. Decentralized systems require consensus among many participants, which can slow progress, and agentic AI currently lacks widely accepted standards.

Finally, the lack of transparency in AI decision-making, combined with the difficulty of auditing decentralized systems, further complicates governance and accountability.

This is where Federated Machine Learning offers a compelling solution.

Federated Machine Learning (FedML)

FedML is an approach that enables organizations with limited data – so-called “small data” organizations – to collaboratively train and benefit from sophisticated machine learning models. The definition of “small data” depends on the complexity of the AI task being addressed.

In Pharma, for example, having access to a million annotated molecules for drug discovery is relatively small in view of the vast chemical space.

In Marketing that small data set might be in the form of brand specific visual data – brand guidelines scattered across PDFs, emails, and shared drives.

Image: Jing Jing Tsong/theispot.com

Is Federated Agentic AI the answer?

Federated Agentic AI refers to a blend of two advanced AI concepts: federated learning and agentic AI.

Federated learning enables AI models to be trained across decentralized devices or data sources while keeping the data local and secure, thereby enhancing privacy and scalability. Meanwhile, agentic AI refers to self-contained systems that, once implemented by humans, operate autonomously around the clock. These systems are capable of controlled decision-making and can adapt based on real-time data without further human intervention.

When combined, Federated Agentic AI allows multiple autonomous agents to collaborate across a secure distributed network. These agents can handle tasks independently while continuously learning from local data sources, without needing to share sensitive information across the network. This setup is particularly useful in environments like healthcare, finance, or IoT, where data privacy is critical but complex tasks still require intelligent automation.

For instance, a federated agentic system might be deployed in a network of smart devices where each device autonomously manages specific tasks (e.g., thermostats optimizing energy use) while learning from local data (e.g., weather conditions). These devices can also share insights without revealing user data, improving overall system efficiency and privacy.
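The federated-learning half of this combination can be illustrated with the classic federated-averaging step: each device trains locally on its private data and shares only model weights, and a coordinator averages them into a global model. Plain floats stand in for real parameter tensors:

```python
# Federated averaging sketch: devices share weights, never raw data.
# Each inner list is one device's locally trained weight vector.
def federated_average(local_weights: list[list[float]]) -> list[float]:
    n = len(local_weights)
    return [sum(ws) / n for ws in zip(*local_weights)]

# Three devices each produce locally trained weights from private data.
device_updates = [
    [0.2, 1.0],
    [0.4, 1.2],
    [0.6, 0.8],
]
global_model = federated_average(device_updates)  # ≈ [0.4, 1.0]
```

The privacy property falls out of the data flow: the coordinator only ever sees `device_updates`, never the weather readings or usage logs that produced them.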

Final Thoughts: Designing for new technology is a completely different challenge from a traditional design project.

Typically, users already know how to interact with familiar products – like swiping a credit card at a payment terminal or using a TV remote to change channels. But with emerging technology, there are no familiar cues, making it harder for users to figure out how to engage with it effectively. Think back to when users first encountered smartphones – there was no clear precedent for touchscreens or gestures, making it challenging to learn entirely new interactions.

Adoption of new tech often lags amid confusion and speculation, so creating a seamless, intuitive user experience is essential for success.

That said, designing something entirely new isn’t easy – but it’s exactly where the team at Velocity Ascent excels. We navigate the space between excitement for emerging technology and the need to deliver real, secure, user-centered value. By following key principles, we transform unfamiliar tech into products that people and teams love to work with.

For CMOs, this means a potential shift in how to approach marketing and customer engagement. As AI becomes more autonomous, the question will be: how do we control and guide these powerful tools to enhance our strategies while still ensuring privacy? Ultimately it is about using AI in smarter, more effective ways to drive business growth.

Sources:

Analytics Vidhya, The GitHub Blog

MIT Media Lab: Decentralized AI Overview

A Perspective on Decentralizing AI
