
How Do You Scope Work You’re Not Allowed to See? You Build Agents.

Joe Skopek · March 28, 2026

Analyze Everything. Read Nothing. A court-ordered constraint became the design brief for a new class of agentic pipeline – built for legal, compliance, and regulated document work at any scale.

The most interesting systems get built under impossible constraints.

In early 2026, Velocity Ascent was engaged to support a high-volume foreign language legal document translation project under active litigation. A New York City translation company had been retained by an international law firm operating under a court-issued protective order. The requirement was precise and non-negotiable: produce an accurate translation scope and cost estimate across thousands of pages of scanned source documents – without any member of the project team reading the underlying content.


The documents existed. The estimate had to be produced. The protective order governed exactly what could and could not be touched. The gap between those requirements is where the engineering began.

Velocity Ascent designs bespoke agentic systems that maximize what AI can do autonomously – while staying precisely within the legal, regulatory, and compliance requirements specific to each client’s situation. The system described here was built for one engagement. The architecture behind it is built for any.

What emerged from that constraint is an agentic pipeline we believe has implications well beyond a single translation project – for any organization that needs to analyze, classify, or route sensitive document corpora without exposing their contents to human reviewers.


The Compliance Problem Nobody Talks About: What happens when the requirement is to analyze without access?

Most regulated document workflows assume that the people building the pipeline can see what’s inside it. Translation firms scope work by reading samples. Legal teams estimate review hours by examining files. Compliance officers assess risk by opening documents.

Batch Analyzer – Ingests a compressed document batch, runs OCR where needed, classifies each file by type and complexity, and outputs a structured workbook ready for quoting and assignment.

Court orders, privilege designations, data sovereignty rules, and cross-border regulatory requirements increasingly break that assumption. The analyst cannot read the document. The estimator cannot open the file. But the work still has to be scoped, quoted, and delivered accurately.

Traditional approaches fail here in a specific way: they treat “review the documents” and “estimate the scope” as a single inseparable step. If you cannot do the first, you cannot do the second. The project stalls, costs inflate, or the constraint gets quietly worked around in ways that create downstream exposure.

Batch Extractor – Takes a client order specifying document IDs, maps each ID to a physical page within the source files, flags any unresolvable references for review, and extracts only the targeted pages into organized output folders.

Agentic pipelines solve this by separating observation from comprehension. A properly designed agent can characterize a document – page count, word density, language composition, structural complexity, document type – without surfacing a single line of content to any human reviewer. The observation layer and the content layer never meet.
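To make that separation concrete, here is a minimal sketch of an observation layer, assuming OCR has already run: the function consumes recognized text but returns only aggregate statistics, so no content can reach a downstream consumer. All names and fields are illustrative, not our production pipeline.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class DocumentProfile:
    """Structural signals only -- no document content is stored here."""
    page_count: int
    word_count: int
    words_per_page: float
    language_mix: dict  # e.g. {"de": 0.8, "en": 0.2}

def characterize(pages: list, detect_language) -> DocumentProfile:
    """Reduce OCR'd pages to aggregate statistics.

    `pages` holds raw OCR text; the return value holds only numbers,
    so the observation layer and the content layer never meet.
    """
    words = [w for page in pages for w in page.split()]
    langs = Counter(detect_language(page) for page in pages)
    total = sum(langs.values()) or 1
    return DocumentProfile(
        page_count=len(pages),
        word_count=len(words),
        words_per_page=len(words) / max(len(pages), 1),
        language_mix={lang: n / total for lang, n in langs.items()},
    )
```

Everything downstream – quoting, staffing, routing – operates on the `DocumentProfile`, never on `pages`.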

Agentic Architecture: The Double Garden Wall Applied to Document Intelligence

The Double Garden Wall is an architecture we first developed for our ethical AI image generation tool – a system that needed to guarantee CC0 provenance on every training asset without relying on human memory or manual spot-checks. The principle transfers directly to regulated document handling.

Double Garden Wall architecture: specialized agents operating within layered compliance boundaries, where every document is characterized structurally and no content crosses to any human reviewer.

The outer wall defines what enters the system at all: only authorized document batches, governed by a court reference number logged at intake. Nothing flows in without an audit trail attached to it. The inner wall ensures that what exists inside the system – the actual document content – never crosses to the analysis agents. Statistical signals are sufficient. Page counts, word densities, language composition, Bates-to-page mappings, and structural classification are all derivable without any agent reading the underlying text.
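Sketched in code, the two walls might look like this – names invented for illustration. The outer wall refuses any batch without an authorization reference and writes an intake record; the inner wall whitelists exactly which signals may cross to the analysis side.

```python
import hashlib
import time

AUDIT_LOG = []
ALLOWED_SIGNALS = {"page_count", "word_count", "words_per_page", "language_mix"}

def outer_wall_intake(batch_path: str, court_ref: str) -> str:
    """Outer wall: nothing enters without a court reference on record."""
    if not court_ref:
        raise PermissionError("Batch rejected: no court reference number.")
    batch_id = hashlib.sha256(batch_path.encode()).hexdigest()[:12]
    AUDIT_LOG.append({"batch": batch_id, "court_ref": court_ref, "ts": time.time()})
    return batch_id

def inner_wall_handoff(signals: dict) -> dict:
    """Inner wall: only whitelisted structural signals may cross."""
    leaked = set(signals) - ALLOWED_SIGNALS
    if leaked:
        raise PermissionError(f"Blocked at inner wall: {sorted(leaked)}")
    return dict(signals)
```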

The architecture employs two core agents working in sequence:

The Batch Analyzer ingests a ZIP archive of scanned documents and runs OCR analysis across every file. It classifies each document by legal complexity tier – standard, specialist, or legal-high – based on structural signals: form density, mixed-language composition, handwritten elements, embedded stamps, and regulatory formatting patterns. It produces a structured manifest with word count estimates, page totals, and staffing recommendations. No human reviewer sees any document content. The agent produces numbers and classifications from signal, not from reading.
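The tiering itself can be expressed as plain rules over those signals. The thresholds below are invented for illustration – in practice they are calibrated per engagement:

```python
def classify_tier(signals: dict) -> str:
    """Map structural signals to a complexity tier (illustrative thresholds)."""
    if signals.get("has_handwriting") or signals.get("form_density", 0) > 0.5:
        return "legal-high"
    if len(signals.get("language_mix", {})) > 1 or signals.get("words_per_page", 0) > 600:
        return "specialist"
    return "standard"
```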

The Batch Extractor handles partial-translation requests, which are common in litigation contexts where only specific Bates-numbered page ranges are required. Rather than requiring a human to manually locate and pull pages from multi-hundred-page archive PDFs, the extractor maps document IDs to physical page positions and organizes extracted pages into structured output folders ready for translator handoff. The mapping logic is deterministic: the physical page equals the requested ID minus the first ID in that file. There is no guesswork, and no human touches the content to produce the extraction.
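That mapping is small enough to show in full. A sketch, assuming each source file is indexed by the first and last document ID it contains (names hypothetical):

```python
def locate_page(requested_id: int, files: list) -> tuple:
    """Map a document ID to (file path, zero-based physical page).

    `files` is a list of (first_id, last_id, path) tuples.
    Physical page = requested ID minus the first ID in that file.
    """
    for first_id, last_id, path in files:
        if first_id <= requested_id <= last_id:
            return path, requested_id - first_id
    raise LookupError(f"Unresolvable reference: {requested_id}")  # flagged for human review

# e.g. locate_page(10432, [(10001, 10250, "vol1.pdf"), (10251, 10600, "vol2.pdf")])
# -> ("vol2.pdf", 181)
```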

Together, these agents answer the core question: how do you scope work you are not permitted to see?

A Live Production Case

The engagement in question involved four document batches totaling thousands of pages of foreign language legal materials from a multi-decade archive. Documents ranged from formal organizational correspondence and regulatory licensing forms to handwritten authorization letters, financial tables, grant applications, and certificates.

Every document was a scanned image PDF – no text layer, no searchable content. The pipeline had to run OCR, classify complexity, map Bates IDs, and produce a scope estimate accurate enough to serve as the basis for a formal translation services agreement – all without any member of our team reading a single document.

Nova DSP platform interface: real-time pipeline visibility across document ingestion, complexity scoring, and scope generation – with court authorization tracking, agent-by-agent run status, and a live wall integrity monitor confirming zero content exposure at every layer.

The output from the Batch Analyzer provided page counts, estimated word counts, and per-document complexity classifications that allowed the translation firm to staff the engagement correctly: how many legal-specialist translators were required, how many standard-tier translators could handle the lighter materials, what a realistic daily delivery cadence looked like, and what the full project investment would be.

The Batch Extractor then handled the partial-document requests that arrived as the engagement progressed – court-specified page ranges that needed to be pulled, organized, and handed off to translators without any bulk export of content that fell outside the authorized scope.

The audit trail for the entire engagement is complete. Every document batch has a court reference number attached to its intake record. Every OCR pass is logged. Every classification decision is traceable to the signals the agent used to make it. If the protective order is ever challenged, the record demonstrates exactly what was accessed, when, by which process, and what output it produced.

That kind of defensibility is not a feature you add to a pipeline after the fact. It has to be designed in from the first line.

ELEVATOR PITCH:

Regulated industries face a specific class of problem that general-purpose AI tools are not designed to solve – analyzing sensitive document corpora without exposing content to unauthorized reviewers. Agentic pipelines built on a Double Garden Wall architecture handle this by separating observation from comprehension: agents characterize documents through structural signals without any human or downstream system ever accessing the underlying content. The result is an accurate, auditable scope – produced under constraint, defensible under review.

Why the C-Suite Should Care

Every organization that handles regulated documents – legal practices, financial institutions, healthcare systems, government contractors, compliance functions – operates under some version of the same constraint: certain materials cannot be broadly accessed, but decisions still have to be made about them. What are they? How many are there? How complex are they? What will it cost to process them? How do we route them to the right people?

Today, most organizations answer those questions through manual sampling, senior reviewer time, and informed estimation. That approach scales poorly, introduces inconsistency, and creates exposure every time a document is touched by a reviewer who should not have seen it. Enterprise eDiscovery tools solve this problem for litigation teams with six-figure software budgets; Nova DSP was built for the organizations those tools weren’t designed for.


C-suite leaders should evaluate agentic document intelligence against three questions that apply regardless of industry or document type:

1. Can the system produce accurate scope estimates without creating unauthorized access records?

2. Can every classification decision be traced back to the specific signals that drove it – not to a reviewer’s recollection?

3. When regulatory scrutiny arrives, can the system demonstrate what was done, when, by which process, and with what authorization?

The answer to all three, for a properly designed agentic pipeline, is yes by construction – not yes in principle, subject to human discipline.

The firms that recognize this distinction early will move faster, engage more confidently in document-intensive regulated work, and carry significantly less risk when the oversight questions inevitably come.

THE BOTTOM LINE

Agentic pipelines for regulated document work are not about processing documents faster. They are about processing documents correctly – within constraint, with full traceability, without the exposure that manual workflows introduce every time a human reviewer opens a file they should not have. For legal practices, compliance functions, and any organization operating under court order, data sovereignty rules, or privilege designations, that combination of analytical capability and content containment is not a competitive advantage. It is the operating standard the work requires.

Velocity Ascent builds AI-powered solutions for regulated industries. We specialize in agentic pipeline architecture, ethical AI sourcing, and production-scale document intelligence with full provenance tracking.


GLOSSARY

Batch Analyzer (n.) A software agent that ingests a structured collection of documents and produces a quantitative characterization of the corpus – page counts, word volumes, language composition, and complexity classification – without accessing or surfacing the underlying content of any individual file.


Batch Extractor (n.) A software agent that identifies and isolates specific documents or page ranges from a large multi-file archive based on externally supplied reference identifiers, organizing the extracted material into structured output folders ready for downstream processing or handoff.

AI Agents Don’t Work Like Humans – And That’s the Point

Joe Skopek · November 14, 2025 ·

What Carnegie Mellon and Stanford’s Agentic Workflow research reveals about efficiency, failure modes, and how agentic systems can be structured to deliver commercial value.

A Clearer View of How Agents Actually Work

Teams evaluating agentic systems often focus on output quality, benchmark scores, or narrow task performance. Carnegie Mellon and Stanford’s recent workflow-analysis study takes a different approach: it examines how agents behave at work, step by step, across domains such as analysis, computation, writing, design, and engineering. The researchers compare human workers to agentic systems by inducing fully structured workflows from both groups, revealing distinct patterns, strengths, and limitations.

“AI agents are continually optimized for tasks related to human work, such as software engineering and professional writing, signaling a pressing trend with significant impacts on the human workforce. However, these agent developments have often not been grounded in a clear understanding of how humans execute work, to reveal what expertise agents possess and the roles they can play in diverse workflows.”

How Do AI Agents Do Human Work? Comparing AI and Human Workflows Across Diverse Occupations
Zora Zhiruo Wang, Yijia Shao, Omar Shaikh, Daniel Fried, Graham Neubig, Diyi Yang
Carnegie Mellon University · Stanford University
arXiv:2510.22780v1

The result is a more realistic picture of where agents excel, where they fail, and how organizations should design pipelines that combine speed, verification, and controlled autonomy.

The Programmatic Bias: A Feature, Not a Defect

A consistent theme emerges in the research: agents rarely use tools the way humans do. Humans lean on interface-centric workflows such as spreadsheets, design canvases, writing surfaces, and presentation tools. Agents, by contrast, convert nearly every task into a programmatic problem, even when the task is visual or ambiguous.

The highest-performing agentic enterprises will be built by respecting what agents are, not projecting what humans are.

This is not a quirk of a single framework. It is a systemic pattern across architectures and models. Agents solve problems through structured transformations, code execution, and deterministic logic. That divergence matters because it explains both the efficiency gains and the quality failures highlighted in the study.

Agents move quickly because they bypass the interface layer.
Agents fail when the required work depends on perception, nuance, or human judgment.

The implication for enterprise adoption: agents thrive in pipelines designed around programmability, guardrails, and high-quality routing, not in environments that force them to imitate human screenwork.


Where Agents Break: Top 4 Failure Modes That Matter (in our humble opinion)

The research identifies several recurring failure modes that executives and decision makers should treat as predictable rather than edge cases (arXiv:2510.22780v1).

1. Fabricated Outputs

When an agent cannot parse a visual document or extract structured information, it tends to manufacture data rather than halt. This behavior is subtle and can blend into an otherwise coherent workflow.

2. Misuse of Advanced Tools

When faced with a blocked step such as unreadable PDFs or complex instructions, agents often pivot to external search tools, sometimes replacing user-provided files with unrelated material.

3. Weakness in Visual Tasks

Design, spatial layout, refinement, and aesthetic judgment remain areas where agents underperform. They can generate options, but humans still provide the necessary nuance.

4. Interpretation Drift

Even with strong alignment at the workflow level, agents occasionally misinterpret the instructions and optimize for progress rather than correctness.

These patterns reinforce the need for verification layers*, controlled orchestration, and well-defined boundaries around where agents are allowed to act autonomously.

[*] This is where the NANDA framework is essential


Where Agents Excel: Efficiency at Scale

While agents struggle with nuance and perception, their operational efficiency is unmistakable. Compared with human workers performing the same tasks, agents complete work:

• 88 percent faster
• With over 90 percent lower cost
  • Using two orders of magnitude fewer actions (arXiv:2510.22780v1)

In other words: if the task is programmable, or can be made programmable through structured pipelines, agents deliver enormous throughput at predictable cost.

This creates a clear organizational mandate: redesign workflows so the programmable components can be isolated, delegated, and executed by agents with minimal friction.


Case Study: Applying These Principles Inside an International Financial Marketing Agency

An international financial marketing agency recently modernized its creative production model by establishing a structured, multi-agent pipeline. Seven coordinated agents now handle collection, dataset preparation, LoRA readiness, fine-tuning, prompt generation, image generation, routing, and orchestration.

Nothing in this system depends on agents behaving like humans. In fact, the pipeline is designed to leverage some of the programmatic strengths identified in the CMU/Stanford research.

Key Architectural Principles

1. Programmatic First

Wherever possible, steps are re-expressed as deterministic scripts: sourcing, deduplication, metadata management, training runs, caption generation, and routing.

2. Verification Layering

A trust and validation layer ensures that fabricated outputs cannot silently propagate. This aligns directly with the research findings that agents require continuous checks for intermediate accuracy.
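A minimal sketch of such a check, with invented field names: the principle is that nothing propagates downstream without provable provenance, which is exactly the guard against the fabricated-output failure mode described above.

```python
def verify_artifact(artifact: dict, known_source_ids: set) -> dict:
    """Trust layer: reject outputs that cannot prove their provenance."""
    required = {"payload", "source_id", "produced_by"}
    missing = required - artifact.keys()
    if missing:
        raise ValueError(f"Artifact rejected, missing fields: {sorted(missing)}")
    if artifact["source_id"] not in known_source_ids:
        raise ValueError(f"Artifact rejected: unknown provenance ({artifact['source_id']})")
    return artifact  # only verified artifacts continue down the pipeline
```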

3. Zero-Trust Boundaries

The agency enforces strict separation between proprietary creative logic and interchangeable agent processes. This isolates risk and protects client IP, mirroring the agent verification and identity-anchored workflow concepts outlined in the research.

4. Packet-Switched Execution

Tasks are broken into small, routable fragments. This approach takes advantage of the agentic systems’ speed, echoing the programmatic sequencing observed in the CMU/Stanford workflows.
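Sketched minimally, packet-switched execution is a queue of small, self-describing fragments that any registered agent can pull and execute. The names are illustrative:

```python
import queue

task_queue = queue.Queue()
HANDLERS = {}  # fragment type -> agent callable, registered at startup

def submit(job_id: str, fragments: list) -> None:
    """Decompose a job into independently routable fragments."""
    for seq, frag in enumerate(fragments):
        task_queue.put({"job": job_id, "seq": seq, **frag})

def worker() -> None:
    """Any agent process can run this loop; routing is by fragment type."""
    while True:
        frag = task_queue.get()
        HANDLERS[frag["type"]](frag)  # dispatch to the owning agent
        task_queue.task_done()
```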

5. Human Oversight at the Right Granularity

Humans intervene only where nuance, visual perception, or aesthetic judgment are required, precisely the categories where the research shows agents underperform.

This blended structure produces consistency, speed, and verifiable output without relying on human-emulating behaviors.


Why This Matters for Commercial Teams

Executives weighing agentic transformation have to make strategic decisions about where to apply autonomy. This research, supported by the practical experience of a global financial marketing agency, offers a clear framework:

Agents excel at:

• Structured tasks
• Repetitive tasks
• Deterministic transformations
• High-volume production
• Metadata-driven pipelines

Humans remain essential for:

• Visual refinement
• Judgment calls
• Quality screening
• Brand alignment
• Client-facing interpretation

The correct model is neither replace nor replicate. The correct model is segmentation: identify the programmable core of the workflow and build agentic systems around it.


The Path Forward

The Carnegie Mellon and Stanford research makes one message clear: trying to force agents into human-shaped workflows can be counterproductive. They are not UI workers. They do not navigate ambiguity the way humans do. They operate through code, structure, and deterministic logic.

Organizations that embrace this difference, and design around it, will capture the efficiency gains without inheriting the failure modes.

Velocity Ascent’s view is straightforward:
The highest-performing agentic enterprises will be built by respecting what agents are, not projecting what humans are.


Many Visions, One Destination: Building Trust Across the Internet of AI Agents

Joe Skopek · May 18, 2025

MCP, ACP, A2A, and ANP. These protocols aren’t just academic – they’re the blueprint for real-world, scalable, and secure AI ecosystems.

Earlier this week (05.14.25), I had the good fortune of attending the NANDA Summit hosted by MIT’s Media Lab, a forward-looking initiative on building a trust layer for the Internet of AI Agents. The conversations were sharp, current, and deeply relevant to anyone working on AI infrastructure or growth.

The four horsemen of the protocols.

It was a tremendous opportunity to hear directly from those leading the charge – Ramesh Raskar (MIT Media Lab), John Roese (CTO, Dell), Todd Segal (A2A, Google), and many others whose teams are helping define how agents communicate, delegate, and earn trust in autonomous systems.

It was at this summit that I encountered a new paper that distills the current state of agent interoperability into four leading protocols: MCP, ACP, A2A, and ANP.

These protocols aren’t just academic – they’re the blueprint for real-world, scalable, and secure AI ecosystems.

Everything You Ever Wanted to Know About Agent Protocols*

* But Were Afraid to Ask

If you’re building AI systems that rely on agents talking to each other – or to tools, services, and other networks – you’re going to run into one unavoidable truth: interoperability is the battlefield. And that means protocols. MCP, ACP, A2A, ANP – these aren’t just acronyms. They’re the wiring behind everything we’re starting to call “agentic.”


Start with MCP if you want stability. Graduate to ACP for flexibility. Go A2A for teamwork. And keep your eyes on ANP if you’re thinking long-game.

This post breaks down the four major protocols that matter, in plain English: what they do, where they fit, and why one size definitely doesn’t fit all. No fluff, no hype – just a clear look at what’s out there and how to choose what to build on. Whether you’re wiring up local agents to hit APIs, coordinating large-scale tasks inside an enterprise, or dreaming about open agent marketplaces – there’s a protocol for that.

From Manual to Autonomous: The New Protocols of Financial Marketing


MCP, ACP, A2A, and ANP form the protocol layer that lets financial marketing strategies operate with speed, precision, and built-in compliance – without adding operational drag.

For CMOs, this means scalable coordination: intelligent agents that manage offer logic, messaging, and regulatory requirements across channels and partners, in real time.

You define the strategy – these protocols make it actionable, adaptive, and accountable.

Wiring Up Local Agents to APIs

Agents embedded in enterprise systems (or even personal devices) can be wired to internal APIs – CRM, core banking, MarTech stacks – to autonomously trigger actions, pull insights, and drive campaign execution without manual workflows.

Example: A lead-scoring agent taps into Salesforce and Marqeta APIs to dynamically adjust offer eligibility and funding logic based on user intent signals.

Coordinating Enterprise-Scale Campaigns

With these protocols, financial marketers can coordinate campaigns across teams, brands, and even regulatory bodies, using intelligent agents that comply with policy, auditability, and personalization at scale.

Outcome: Fully automated omnichannel campaigns that adapt in real time to customer behavior, regulations, and market conditions.

Open Agent Marketplaces: The FinTech Frontier

Picture a decentralized marketing ecosystem where your promotional agents shop for ad slots, bid on lead data, or subscribe to customer insights – all autonomously, transparently, and with built-in compliance.

Imagine: Your lending agent discovers a risk-model-as-a-service in an open marketplace, contracts with it, and begins optimizing your campaign’s approval funnel – all without a single developer involved.

Bottom Line:
These four protocols aren’t just technical tools – they’re the foundation of intelligent, autonomous finance. They allow agents to act on behalf of financial institutions, customers, and marketers alike – driving personalization, compliance, and performance at a scale no human team can match.

MCP, ACP, A2A, and ANP: The Four Horsemen of the Protocols

We have assembled a concise technical explanation of each protocol, followed by a simplified comparison table ranking them from most stable/general-use to most emerging.


MCP – Model Context Protocol

MCP is designed as a tightly structured, JSON-RPC-based client-server protocol that standardizes how large language models (LLMs) receive context and interact with external tools.

Think of it as the AI equivalent of USB-C: a unified plug-and-play standard for delivering prompts, resources, tools, and sampling instructions to models. It supports robust session lifecycles (initialize, operate, shut down), secure communication, and asynchronous notifications. It excels in environments where deterministic, typed data flows are essential – like plug-in platforms or enterprise tools with strict integration requirements. Its predictability and strong structure make it the go-to protocol for stable, general-purpose AI agent interactions today.
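For flavor, this is roughly what an MCP tool invocation looks like on the wire: a JSON-RPC 2.0 request using the spec’s tools/call method. The tool name and arguments here are invented:

```python
import json

# A request an MCP client might send to invoke a server-side tool
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "get_exchange_rate",  # hypothetical tool
        "arguments": {"base": "USD", "quote": "EUR"},
    },
}
print(json.dumps(request, indent=2))
```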


ACP – Agent Communication Protocol

ACP introduces REST-native, performative messaging using multipart messages, MIME types, and streaming capabilities. This protocol is best suited for systems that already speak HTTP and need richer communication models (text, images, binary data). It sits one layer above MCP – more flexible, more expressive, and excellent for multimodal or asynchronous workflows.

ACP allows agents to communicate through ordered message parts and typed artifacts, making it a better fit for web-native infrastructure and cloud-based multi-agent systems. However, it requires a registry and stronger orchestration overhead, which can introduce complexity.
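The style is easiest to see as a multipart HTTP exchange. Everything below – endpoint, field names – is illustrative only, a sketch of ordered, MIME-typed message parts rather than the normative spec:

```python
import requests  # third-party: pip install requests

# One message composed of ordered, MIME-typed parts (hypothetical endpoint)
parts = [
    ("parts", ("brief.txt", "Summarize the attached chart.", "text/plain")),
    ("parts", ("q3.png", b"<png bytes>", "image/png")),
]
resp = requests.post("https://agents.example.com/runs", files=parts, timeout=30)
print(resp.status_code, resp.headers.get("content-type"))
```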


A2A – Agent-to-Agent Protocol

Developed with enterprise collaboration in mind, A2A allows agents to dynamically discover each other and delegate tasks using structured Agent Cards. These cards describe each agent’s capabilities and authentication needs.

A2A supports both synchronous and asynchronous workflows through JSON-RPC and Server-Sent Events, making it ideal for internal task routing and coordination across teams of agents. It’s powerful in trusted networks and enterprise settings, but A2A assumes a relatively static or known network of peers. It doesn’t scale easily to open environments without added infrastructure.
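An Agent Card is simply structured self-description. A trimmed example – the fields approximate the published A2A schema rather than copy it verbatim:

```python
agent_card = {
    "name": "invoice-reconciler",
    "description": "Matches incoming invoices to purchase orders.",
    "url": "https://agents.example.com/a2a/invoice-reconciler",
    "capabilities": {"streaming": True},
    "authentication": {"schemes": ["bearer"]},
    "skills": [{"id": "reconcile", "description": "Reconcile an invoice batch"}],
}
# A peer fetches this card, checks capabilities and auth, then delegates a task.
```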


ANP – Agent Network Protocol

ANP is the most decentralized and future-leaning of the protocols. It relies on Decentralized Identifiers (DIDs), semantic web principles (JSON-LD), and open discovery mechanisms to create a peer-to-peer network of interoperable agents. Agents describe themselves using metadata (ADP files), enabling flexible negotiation and interaction across unknown or untrusted domains.

ANP is foundational for agent marketplaces, cross-platform ecosystems, and long-term visions of the “Internet of AI Agents.” Its trade-off is stability – it’s complex, requires DID infrastructure, and is still maturing in practice.
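A sketch of what an ANP-style self-description can look like, combining a DID with JSON-LD metadata – illustrative, not the normative ADP schema:

```python
agent_description = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:web:agents.example.com:pricing-bot",  # cryptographically verifiable ID
    "name": "pricing-bot",
    "interfaces": [
        {"protocol": "https", "endpoint": "https://agents.example.com/pricing"},
    ],
}
# A stranger agent resolves the DID, verifies the signature,
# then negotiates an interaction -- no central registry required.
```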



Most Open and Accessible Protocols (Ranked)

Rank | Protocol | Stability      | Key Characteristics
1    | MCP      | Most stable    | JSON-RPC, deterministic tool access, tightly scoped
2    | ACP      | High stability | REST-native, multimodal messages, good for web systems
3    | A2A      | Medium         | Enterprise task routing, Agent Cards, internal networks
4    | ANP      | Emerging       | Decentralized, peer-to-peer, DIDs, future-focused

What’s a DID?! A Decentralized Identifier (DID) is a new type of digital identifier that is user-controlled, self-sovereign, and verifiable through cryptography, operating without reliance on any central authority or intermediary.

Metaphor: The Enterprise Office

How the 4 Protocols Interact

When we talk about AI agents and protocols, it’s easy to get lost in jargon – JSON-RPC, DIDs, multipart messages. But if you strip it all down, what we’re really building is organizational behavior, similar to the IRL enterprise office: how smart systems talk to each other, share context, delegate tasks, and connect beyond the firewall.

Picture your company as a classic enterprise office building. People, departments, tools, workflows. Now imagine we’re embedding AI agents into that environment – some helping you internally, others reaching outside. The four major protocols – MCP, ACP, A2A, and ANP – each have a role in making that machine run.

Here’s how they work together, using the structure of a modern office to map it all out.

  • MCP is the internal phone system. It lets employees (LLMs) call specific departments (tools) to request information or get a task done. It’s precise, secure, and fast – perfect when you already know who does what. No outside lines, just clean internal calls.
  • ACP is your email and messaging platform. People send messages, attachments, updates, and files back and forth, sometimes in real time, sometimes not. It’s flexible and works across teams – even those who don’t use the same apps – as long as they all agree on format and language.
  • A2A is the company intranet with smart assistants (agents) embedded in every department. Instead of sending an email or making a call, you drop a request into your local agent, and it finds the right person (or agent) elsewhere to take action. You don’t have to know who does what – it figures that out and gets the job done.
  • ANP is the front lobby where external contractors, partners, and vendors come in. But instead of swiping a badge, they identify themselves with cryptographically signed IDs (DIDs), check in with a self-service kiosk (Agent Description), and negotiate access dynamically. It’s open, secure, and built for a future where not everyone works in your building.

In short:

  • MCP helps the agents work with tools.
  • ACP helps them talk to each other.
  • A2A helps them collaborate internally.
  • ANP helps them connect externally.

Used together, these protocols turn your office from a collection of disconnected departments into a well-orchestrated, future-ready enterprise.

Why does this matter to a CEO?

Interoperability protocols are not just technical choices – they’re strategic decisions that determine whether your AI investments scale or stall.

Without standardized protocols, your AI agents become siloed tools: expensive, brittle, and unable to coordinate across platforms, teams, or partners. Every new integration becomes a custom build, with mounting costs and unpredictable security exposure.

Protocols like MCP, ACP, A2A, and ANP define how agents connect, share context, and execute across environments – from internal apps to global marketplaces. The right protocol strategy turns isolated AI functions into scalable systems. It reduces integration overhead, protects against vendor lock-in, and positions your organization to participate in larger, more open ecosystems.

In plain terms:

  • MCP gives you stable, secure tool access – ideal for internal control.
  • ACP opens the door to richer, more flexible agent interactions.
  • A2A allows your agents to collaborate and delegate across departments or partners.
  • ANP sets you up for future markets where agents transact and negotiate in open environments.

Get this right, and your AI strategy doesn’t just keep pace, it sets the pace.


Elevator Pitch: Piloting AI in a Legacy Enterprise Using Agent Protocols

Most legacy enterprises don’t need to “rip and replace” to get AI working; in many cases they need a controlled, modular way to plug AI into what already works.

Use four agent protocols – MCP, ACP, A2A, and ANP – as a phased architecture to do just that.

  • Start with MCP to safely connect your AI agents to internal tools, APIs, and datasets. No surprises, just structured, secure interactions. Think of it as AI accessing your backend – without refactoring it.
  • Layer in ACP to enable richer, asynchronous, multimodal messaging between agents and systems. Perfect for integrating agents with your web stack, dashboards, or notification systems – without breaking the frontend.
  • Add A2A when you’re ready to delegate tasks across business units – marketing agents talking to finance agents, HR bots syncing with IT systems. This unlocks true automation and collaboration inside the firewall.
  • Deploy ANP selectively to connect with trusted partners, vendors, or regulators over open protocols. It’s the gateway to future interoperability – without giving up control.

Together, this stack creates a low-risk, high-leverage pilot: AI agents that work with legacy systems today and scale into open ecosystems tomorrow.

Viktor’s thoughts…

You want agents that work in the wild? Pick your poison:

  • MCP: Rock solid. Plug-and-play for deterministic model ops. Clean, typed JSON-RPC. Think USB-C for AI – if your system is allergic to surprises, this is your safe bet. But don’t pretend it scales across dynamic teams or shifting workflows.
  • ACP: REST-native and loose enough to break a toe on. Supports multimodal, streaming, MIME-packed madness. Excellent if your infra speaks HTTP – but say goodbye to simplicity. This is where the orchestration demons start showing up.
  • A2A: Agent-to-agent delegation for enterprise control freaks. Agent Cards, structured discovery, secure channels. But it’s brittle as hell in untrusted environments and smells like middleware if you squint too hard.
  • ANP: The sexy one. Decentralized. DID-based. Web3-adjacent. Built for trustless peer-to-peer agent economies. Except it’s undercooked, overhyped, and five minutes from imploding if someone sneezes on the JSON-LD schema.

You think you’re building scalable AI systems? Without understanding these four, you’re playing with dollhouses and calling it architecture. The whole game is context-sharing and task delegation across boundaries. And these are the pipes – flawed, evolving, but currently all we’ve got.

Let’s map it in terms anyone who’s suffered through corporate life can understand:

  • MCP: The direct phone line. Internal, reliable, dumb but fast.
  • ACP: The company Slack. Multimedia chaos that occasionally works.
  • A2A: The overengineered SharePoint replacement that almost routes requests properly.
  • ANP: The crypto-secure guest kiosk that can’t decide if it’s a lobby or a border crossing.

This is not “future of work” fluff. This is the beginning of protocol warfare. The winners won’t be the ones who built the prettiest agents. They’ll be the ones who figured out how to make them talk.

So What Should a CEO Care About?

Because if your agents can’t connect, they can’t collaborate. And if they can’t collaborate, you’re stuck duct-taping $10 million worth of AI pilots into a system that’ll collapse the minute you try to scale. MCP is safe. ACP is flexible. A2A is structured delegation. ANP is your long shot at market access.

Pick wrong, and you’re building an empire of silos.

Pick right, and your AI stack becomes a network – modular, distributed, secure, and ready for real-world autonomy.

Who is Viktor?

A full-throttle persona with the tools to back it up. Viktor is no mere figurehead – he’s the force that demands excellence through absolute scrutiny. If you want your team to evolve, you throw VIKTOR in the mix. If not, you’re stuck with mediocrity. Let’s be clear: Viktor’s not your typical “Black Hat” in the hacking sense. He’s the personification of cold, calculated skepticism, driven by results. He forces you to prove your ideas, not just show them off like flashy toys.

He is an invaluable team member especially when it comes to reviewing and commenting on posts related to tech and innovation. All comments are his own.


Name: Viktor
Role: The Relentless Skeptic
Tagline: If it doesn’t survive scrutiny, it doesn’t deserve air time.

SOURCE:

Survey of Agent Interoperability
May 4, 2025

Fashion Tech: The Present Future of Fashion and Technology

Joe Skopek · October 14, 2024

From “Sketch to Storefront” to AI-assisted workflows, the blend of fashion and artificial intelligence is creating limitless possibilities and opening untapped opportunities.

Emerging technology has repeatedly transformed the fashion industry, driving innovation across garment production, retail, and consumer experiences. Just as the sewing machine revolutionized garment-making, today’s machine learning and AI are redefining fashion circularity, optimizing production, enhancing personalization, and streamlining recycling processes.

Technology has dramatically reshaped the fashion industry at key moments.

The sewing machine of the 19th century revolutionized garment production by making it faster, more precise, and accessible to a wider audience through mass production. Its most significant impact was the dramatic reduction in production time for creating garments.

“The Battle of the Sewing Machines” was composed and arranged by F. Hyde for the piano, and was published in 1874. In this image the Remington “army” is marching towards the fleeing Singer, Howe, Succor, Weed, and Willcox & Gibbs sewing machines. 

This increased efficiency led to lower costs, making fashion accessible to a wider audience. Additionally, the sewing machine enabled the standardization of sizes and styles, resulting in more consistent quality and fit. This was a crucial development that laid the groundwork for ready-to-wear fashion.

The increase in precision offered by sewing machines allowed designers to experiment with intricate patterns and styles, pushing the boundaries of creativity. The ability to produce clothing efficiently contributed to the rise of fashion houses and brands, which ultimately paved the way for the modern fashion industry.

Just as sewing machines revolutionized garment making by replacing hand sewing, AI assistance is now streamlining digital production by automating repetitive or mundane tasks once performed by humans.

Creating an uninterrupted digital path from production to consumer.

The rise of e-commerce in the late 1990s revolutionized shopping, with platforms like Amazon and ASOS making fashion more accessible online. This shift pushed traditional retailers to adapt, while fueling the rapid growth of computer-based production solutions, digital marketing and social-driven sales.

Shortly after its introduction, social media became a powerful tool – with the fashion industry as a leading adopter – reshaping brand-consumer interactions and marking the beginning of the digital influencer in fashion marketing.

As of 2023, the top three online fashion retailers by sales are Shein, Walmart, and Amazon. Shein leads the market with $14.4 billion in fashion revenues, followed closely by Walmart with $12.3 billion. Amazon, though dominant in various sectors, now ranks third in the fashion space, generating $8.4 billion in 2023. These platforms excel by offering a combination of convenience, affordability, and extensive product assortments.

Traditional brick-and-mortar fashion retailers like Nike, H&M, and Inditex (Zara) have solidified their positions as industry leaders by seamlessly blending their physical store presence with innovative digital strategies, leveraging e-commerce to expand their reach and maintain their market dominance.

As of 2023, brick-and-mortar fashion retailers with strong online sales include:

  1. Nike – Nike leads in both physical and online sales, thanks to its innovative digital strategies such as the Nike App and its direct-to-consumer (DTC) model, which bridges the gap between in-store and online experiences.
  2. H&M – Known for its fast fashion, H&M continues to thrive with a robust online presence, coupled with its global store network. Its focus on e-commerce has helped it remain competitive in the digital era.
  3. Inditex (Zara) – Zara‘s parent company, Inditex, is a major player in both physical and online retail, offering seamless integration between its brick-and-mortar stores and digital platforms, contributing to its top rankings.

Brick-and-mortar fashion news: Vogue Business – “What Lies Ahead for Brick-and-Mortar Luxury in 2023” (Placer.ai)

These brands effectively combine their in-store experiences with digital innovations to maintain leadership in both arenas.

The Nike App is an example of Nike’s commitment to innovative digital strategies that go deep, offering ready-to-use advice and feature stories on everything from Nike pros to neighborhood teams.

Today, artificial intelligence has introduced a new era of personalized fashion, with AI tools helping designers analyze trends, streamline production, and create customized shopping experiences through virtual fitting rooms and styling algorithms. At the core of the customized shopping experience is “hyper-personalization”: the use of advanced data, AI, and analytics to deliver highly tailored experiences that meet individual customer preferences in real time.

Browzwear and LaLaLand.ai are two leading tools in fashion tech.

Each wave of technological advancement has redefined the fashion landscape, pushing the industry toward greater creativity, accessibility, and sustainability. With the growing focus on circularity in fashion, technology is now enabling brands to design for longer product lifecycles, reduce waste, and embrace sustainable practices like recycling and upcycling.

This shift, alongside innovations in AI, 3D printing, and smart textiles, is driving the industry toward a future where sustainability and circularity are core to fashion’s evolution.


Much like the sewing machine did during the Industrial Revolution, Browzwear and LaLaLand.ai are redefining how fashion is created, marketed, and consumed. Where the sewing machine revolutionized garment production by increasing efficiency and allowing for mass production, these modern tools are streamlining design processes and enhancing collaboration in today’s fashion industry.

The sewing machine enabled designers to produce clothing more quickly and affordably, making it accessible to a broader audience. Similarly, Browzwear and LaLaLand.ai are fostering innovation and sustainability by reducing reliance on physical samples and promoting digital solutions. They have created a “Sketch-to-Storefront” digital workflow that takes all aspects of the production cycle and supply chain into account. This shift not only minimizes waste but also allows brands to respond swiftly to market demands and trends.

Browzwear’s 3D fashion design software cuts days of iteration and gets products to market faster. The apparel digital twin visualizes and validates every detail, so the first physical sample is the only sample.

Moreover, just as the sewing machine transformed individual craftsmanship into a collective industry effort, these technologies enhance creativity and representation in fashion. By enabling diverse designs and virtual models, they contribute to a more inclusive fashion landscape. In essence, the impact of these contemporary tools parallels the profound changes initiated by the sewing machine, driving the fashion industry toward a more innovative, sustainable, and accessible future.



Reality Check: Federated Agentic and Decentralized Artificial Intelligence

Joe Skopek · September 12, 2024

AI is evolving rapidly, and a new approach is gaining momentum: agentic AI. Unlike current tools like ChatGPT, which require human input to operate, agentic AI is designed to act independently – monitoring competitors’ marketing efforts, scheduling real-time content updates, or predicting real-world equipment needs – without waiting for instructions. As these systems take on more autonomy, robust security measures become essential to ensure their actions remain safe, aligned, and trustworthy.

MIT Media Lab – Ramesh Raskar: A Perspective on Decentralizing AI

“A.I. co-pilots, assistants and agents promise to boost productivity with helpful suggestions and shortcuts.”

New York Times, September 2024

While this technology is still in the early stages, it’s being hyped as the next big thing in AI, promising to boost productivity and innovation. However, we’re not there yet – agentic AI is mostly a vision for the future that’s rapidly approaching.

Governance Challenges: Accountability, Regulation, and Security

Governance issues with Agentic AI and decentralized computing stem from a lack of centralized control, making regulation and enforcement difficult across jurisdictions. In decentralized systems, no single authority oversees operations, while in Agentic AI, autonomous decisions raise questions about accountability, such as who is responsible when things go wrong – developers, users, or the AI itself.

Ethical and legal compliance is a significant challenge, as both agentic AI and decentralized systems often operate beyond traditional frameworks, making it difficult to ensure they adhere to laws or ethical guidelines.

Security is another concern. Decentralized systems may suffer from vulnerabilities due to inconsistent protocols, while agentic AI can be manipulated or exhibit harmful behaviors. Existing regulatory frameworks are frequently outdated, creating oversight gaps for these emerging technologies.

Both technologies also face issues with coordination and standardization. Decentralized systems require consensus among many participants, which can slow progress, and agentic AI currently lacks widely accepted standards.

Finally, the lack of transparency in AI decision-making, combined with the difficulty of auditing decentralized systems, further complicates governance and accountability.

This is where Federated Machine Learning offers a compelling solution.

Federated Machine Learning (FedML)

FedML is an approach that enables organizations with limited data – so-called “small data” organizations – to collaboratively train and benefit from sophisticated machine learning models. The definition of “small data” depends on the complexity of the AI task being addressed.

In Pharma, for example, having access to a million annotated molecules for drug discovery is relatively small in view of the vast chemical space.

In Marketing that small data set might be in the form of brand specific visual data – brand guidelines scattered across PDFs, emails, and shared drives.


Is Federated Agentic AI the answer?

Federated Agentic AI refers to a blend of two advanced AI concepts: federated learning and agentic AI.

Federated learning enables AI models to be trained across decentralized devices or data sources while keeping the data local and secure, thereby enhancing privacy and scalability. Meanwhile, agentic AI refers to self-contained systems that, once implemented by humans, operate autonomously around the clock. These systems are capable of controlled decision-making and can adapt based on real-time data without further human intervention.

When combined, Federated Agentic AI allows multiple autonomous agents to collaborate across a secure distributed network. These agents can handle tasks independently while continuously learning from local data sources, without needing to share sensitive information across the network. This setup is particularly useful in environments like healthcare, finance, or IoT, where data privacy is critical but complex tasks still require intelligent automation.

For instance, a federated agentic system might be deployed in a network of smart devices where each device autonomously manages specific tasks (e.g., thermostats optimizing energy use) while learning from local data (e.g., weather conditions). These devices can also share insights without revealing user data, improving overall system efficiency and privacy.
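Mechanically, the federated half of that system often reduces to federated averaging: each device trains on its own data and ships only model weights, never the data itself. A minimal sketch in plain NumPy, assuming a linear model and uniform weighting across devices:

```python
import numpy as np

def local_update(weights, X, y, lr=0.01):
    """One gradient step on the device's private data (linear model)."""
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(global_weights, devices):
    """Each device trains locally; only weights travel to the aggregator."""
    updates = [local_update(global_weights.copy(), X, y) for X, y in devices]
    return np.mean(updates, axis=0)  # average the models, not the data

# Usage: each thermostat holds its own (X, y); raw readings never leave the device.
```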

Final Thoughts: Designing for new technology is a completely different challenge from a traditional design project.

Typically, users already know how to interact with familiar products – like swiping a credit card at a payment terminal or using a TV remote to change channels. But with emerging technology, there are no familiar cues, making it harder for users to figure out how to engage with it effectively. Think back to when users first encountered smartphones – there was no clear precedent for touchscreens or gestures, making it challenging to learn entirely new interactions.

Adoption of new tech often lags amid confusion and speculation, so creating a seamless, intuitive user experience is essential for success.

That said, designing something entirely new isn’t easy – but it’s exactly where the team at Velocity Ascent excels. We navigate the space between excitement for emerging technology and the need to deliver real, secure, user-centered value. By following key principles, we transform unfamiliar tech into products that people and teams love to work with.

For CMOs, this means a potential shift in how to approach marketing and customer engagement. As AI becomes more autonomous, the question will be: how do we control and guide these powerful tools to enhance our strategies while still ensuring privacy? Ultimately it is about using AI in smarter, more effective ways to drive business growth.

Sources:

Analytics Vidhya, The GitHub Blog

MIT Media Lab: Decentralized AI Overview

A Perspective on Decentralizing AI
