Thursday, May 07, 2026
Bottomline x FFNews

The AI Chatbot Trap in Deal Intelligence: Why Bolt-On AI Is Not the Same as AI-Native Platforms

By the time a deal surfaces in a banker-run auction, every competitor has already seen it. Finding targets before the process starts requires a platform built to understand business context, and that starts with architecture.

AI-native platforms are built with AI as a foundational component of the workflow engine. Bolt-on AI layers intelligence on top of systems designed before modern AI existed, operating on different assumptions about how data moves and how quickly a search needs to return results.

Grata, the leading private market intelligence platform, is one of the few examples where that architecture has been stress-tested at scale, and the broader research on enterprise AI explains why that distinction is valuable.

Most Enterprise AI is Still Not Producing Results

Enterprise AI research published in the past 18 months consistently points to the same structural issue: bolt-on implementations underperform AI-native ones, often by a significant margin. 

Boston Consulting Group surveyed 1,000 CxOs and senior executives across 59 countries in October 2024 and found 74% of companies still can’t show tangible value from their use of AI. Only 4% had developed what BCG called cutting-edge AI capabilities across functions. Companies in that cohort achieved 1.5 times higher revenue growth, 1.6 times greater shareholder returns, and 1.4 times higher returns on invested capital over three years compared to other companies.

BCG’s framework places 70% of the weight of AI success on people and processes, 20% on technology and data infrastructure, and only 10% on the AI algorithms themselves, meaning algorithmic sophistication alone rarely determines outcomes.

In June 2025, Gartner reached a similar conclusion and predicted that more than 40% of agentic AI projects would be canceled by the end of 2027. The main causes they cited were escalating costs, unclear business value, and inadequate risk controls. 

Gartner’s analysis pointed specifically to the technical complexity of integrating agents into legacy systems, which typically disrupts existing workflows and requires costly modifications that add up over time. The recommendation was to design workflows around agentic AI from the ground up rather than forcing new capabilities into existing infrastructure. 

Bolt-On AI Compounds the Debt It Sits On 

Legacy systems were built for predictable, batch-oriented data processing, where a report runs overnight and a platform query returns results in seconds. That architecture works well when data moves slowly and decisions have long lead times. AI workloads operate differently: generative and agentic systems need to process requests instantly and handle multiple tasks simultaneously.

Google Cloud's 2025 DORA report on the state of AI-assisted software development, which collected data from nearly 5,000 technology professionals globally, found that AI amplifies whatever is already in the system. This means that a brittle architecture with bolt-on AI becomes more brittle over time, not less.

When AI is added to a system that was never designed to handle these demands, friction accumulates. Latency bottlenecks emerge at every handoff between the legacy layer and the AI service, API designs built for request-response patterns struggle with streaming AI workloads, and batch processing generates stale data that makes recommendations outdated by the time they surface.
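The cost of those handoffs can be made concrete with a back-of-the-envelope model. The sketch below is purely illustrative: the per-hop overhead and inference times are assumed numbers, not measurements from any specific platform, and the two functions stand in for a bolt-on design (every step crosses the legacy/AI boundary) versus a native design (AI runs inside the workflow engine).

```python
# Hypothetical latency model for a multi-step AI workflow.
# All timing constants are assumptions for illustration only.

LEGACY_HOP_MS = 50   # assumed overhead per boundary crossing: serialization,
                     # API-gateway queueing, auth, batch-layer translation
AI_CALL_MS = 200     # assumed model inference time, identical in both designs

def bolt_on_latency_ms(steps: int) -> int:
    """Bolt-on: each step crosses the legacy/AI boundary twice
    (request out, response back), so per-hop overhead accumulates."""
    return steps * (2 * LEGACY_HOP_MS + AI_CALL_MS)

def native_latency_ms(steps: int) -> int:
    """AI-native: reasoning runs inside the workflow engine, so there
    are no boundary crossings between steps."""
    return steps * AI_CALL_MS

# A five-step workflow pays 500 ms of pure handoff overhead on the
# bolt-on path before any model work is counted.
print(bolt_on_latency_ms(5))  # 1500
print(native_latency_ms(5))   # 1000
```

The point of the toy model is that the gap grows linearly with workflow depth: the more agentic steps a task involves, the more the bolt-on path pays for crossings the native path never makes.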

The Data Problem That Bolt-On Can’t Fix

Data integration is one of the most persistent structural problems with bolt-on implementations. The 2025 MuleSoft Connectivity Benchmark Report found 95% of organizations face challenges integrating AI into existing processes, with 80% identifying data integration as the primary obstacle. The report also revealed the average enterprise manages 897 applications, and only 29% of those applications connect to each other. 

For dealmakers, the consequences are concrete. Private market intelligence requires pulling from dozens of sources. A bolt-on AI system sitting on top of a fragmented dataset can surface what’s in the connected tables, but synthesizing across unstructured data—the kind of reasoning that makes intelligence actionable—requires a system architected for that complexity from the outset.

What Changes When AI Isn’t an Afterthought

An AI-native platform is one where intelligence is embedded in every step of the process, not called on as a separate service. 

Nevin Raj, General Manager at Grata, describes two failure modes that private equity firms encounter when evaluating platforms. “You get two flavors of AI that can be problematic in today’s world,” Raj said. “AI-wannabes: think legacy databases that inject a chatbot layer on top of their data. It actually makes the UI less user-friendly and doesn’t add much value as a result, but the company thinks it tricks users into believing they are AI-native. LLM wrappers (often marketed as AI-native): these companies take ChatGPT, Claude, etc. and create unsupervised content that is AI slop. The quality is low but seems plausible at first glance.” 

What distinguishes a genuinely AI-native platform, Raj adds, is being “built on a scalable foundation of AI annotated by humans and quality-controlled to provide information that is accurate and trustworthy,” with features that “actually automate verticalized workflows without adding another chatbot that is rarely used.” 

A dealmaker on a genuinely AI-native platform doesn't prompt the system separately or toggle between modules. The platform understands the professional context of a query and processes it through AI reasoning instead of keyword matching. Search results reflect business similarity rather than literal term overlap, and screening narrows candidates by structured attributes. The reasoning is embedded in the product's core architecture.

Gartner estimated that in 2024, fewer than 1% of enterprise software applications included genuinely agentic AI capabilities. By 2028, analysts project this figure will reach 33%. 

Architecture Determines What AI Can Actually Do

The broader lesson from the BCG, Gartner, and MuleSoft research is consistent: a platform's architecture determines how far AI can take it. Bolt-on implementations hit the ceiling of the systems they sit on. AI-native platforms are bounded by data quality and model sophistication, both of which improve over time.

BCG’s survey also found that the 4% of companies generating consistent value from AI treated it as an architectural decision they made early. The 74% still waiting for results are running AI on infrastructure never designed for it. For dealmakers competing on speed, that architectural distinction carries more weight as AI capabilities advance and the cost of retrofitting grows. 

FAQ

Q: How do the machine learning capabilities compare across platforms?

Answer: Algorithmic sophistication matters less than buyers typically expect. A platform's data infrastructure (how it's structured, how frequently it updates, and how well sources are integrated) drives the quality of AI outputs far more than the model itself. A sophisticated algorithm operating on fragmented or siloed data will consistently underperform a simpler one built on a clean, unified foundation.

Q: How do the search capabilities differ between AI-powered and traditional platforms?

Answer: Traditional platforms match keywords against indexed fields, so results depend on how well a query maps to existing terminology. AI-powered platforms interpret queries as business intent, returning contextually relevant results even when the language doesn’t align precisely. For dealmakers, this means surfacing companies by what they actually do and not just how they describe themselves.

Q: How do modern deal sourcing platforms compare to traditional research methods?

Answer: AI-native platforms process thousands of companies simultaneously using business context such as sector fit, ownership structure, and leadership signals, replacing the manual filtering and analyst time that defined earlier approaches. Earlier-stage discovery is the practical result: deal teams reach conviction on targets before a formal process narrows the field and compresses timelines.

Q: What should I consider when evaluating different options for private market intelligence?

Answer: Start with data infrastructure before evaluating features. How a platform structures, updates, and integrates its data determines what AI can actually deliver. Bolt-on AI built on fragmented systems exacerbates existing limitations rather than resolving them. Evaluate whether AI is embedded in core workflows or added as a separate layer. That distinction shows up in speed, accuracy, and daily usability.
