The conversational analytics market is buzzing with players, from data management platforms and BI and data visualization tools to nimble AI startups, all offering simplified, chat-like access to data insights. Using natural language processing (NLP), users can ask questions of their data and get answers, just as they would in a conversation.

For business users, this wave of innovation promises self-service data analytics without the pain of grappling with technicalities, dashboard complexity, delays, and an endless back-and-forth with IT teams for answers.

While the excitement is certainly justified, not all conversational analytics interfaces are created equal. What should conversational analytics be capable of delivering? And how do we ensure that it is not AI for AI’s sake, but delivers truly insightful answers to busy decision makers with agility and accuracy?

A Canvas of Products

While the use case for conversational analytics is common across players, each takes a different path to the solution, leading to strong capabilities in some areas while others are still maturing. Products like ThoughtSpot Spotter and Zing Data lead from the consumption layer down into the data foundations, leveraging agentic AI. Users can ask queries in natural language from mobile or web applications and get answers powered by LLMs. Their strength lies in intuitive interfaces that offer features like real-time collaboration and location-aware responses, derived from thoughtful UI/UX design. Plug-and-play connectors link the UI layer with common cloud data platforms like Snowflake, Databricks, BigQuery, AWS or HANA, among others.

Power BI and Tableau, key players in the data analytics and visualization space, have also extended their features with a conversational interface. Power BI already had a mobile app, which is now augmented with Q&A and voice input capabilities. Similarly, Tableau Agent adds an AI chat interface for exploring existing visualizations. The assistant uses generative AI to streamline analysis, from data curation to exploration.

With data insights-on-tap becoming a critical capability, cloud-based data storage platforms are extending their reach upstream into the conversational analytics space. Databricks’ AI/BI Genie and Snowflake’s Cortex are two assistants that offer self-service analytics with natural language querying. Predefined functions can be set up in Genie with parameterized SQL queries to ensure users receive verified answers to commonly asked questions. Agentic reasoning and expert-curated instructions are used to improve accuracy and context.
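As a rough illustration of the parameterized-query idea, the Python sketch below shows a registry of vetted question templates with bound parameters. Every name in it (the question key, the `sales` table, the `:month` parameter) is hypothetical; this is not Genie’s actual API, just the general pattern of returning verified SQL instead of free-form generated text.

```python
# Sketch of a "trusted query" registry: commonly asked questions map to
# vetted, parameterized SQL so the assistant returns verified answers.
# All table, column and key names here are illustrative only.
TRUSTED_QUERIES = {
    "monthly_sales_by_region": (
        "SELECT region, SUM(amount) AS total "
        "FROM sales WHERE month = :month GROUP BY region"
    ),
}

def resolve(question_key: str, **params) -> tuple[str, dict]:
    """Return the vetted SQL plus bound parameters, never raw user text."""
    sql = TRUSTED_QUERIES[question_key]
    return sql, params
```

Because the SQL is fixed ahead of time and only parameters vary, the same question always produces the same verified logic.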

Semantics Hold the Key

Data analysts traditionally worked directly with the data tables, knowing their structures and relationships. They also understood the dimensions, metrics and calculations behind the reports. Conversational analytics aims to hide this complexity, and therefore needs a strong translation mechanism between natural language queries and actual data operations.

Conversational analytics is effective and trusted only when it does more than strap on a chat window or an NLQ/LLM interface: it needs a deep, structured understanding of the business meaning hidden in the queries. In other words, what matters is how well the platform translates user intent into correct, optimized queries that work on complex, enterprise-grade data.

For example, if a user asks, “Show me top customers by revenue growth this quarter” the system must:

Interpret “top customers” in terms of ranking logic.

Discern that “revenue growth” requires calculating percentage change over time.

Work out “this quarter” from the current date.

Stitch together relevant tables (customer data, revenue data, time periods) in the background.

Without this business-aware context and technical mapping, conversational interfaces return incomplete or incorrect results, eroding trust and usability. The semantic layer is the hidden engine behind this process: semantics translate business language into accurate SQL queries that run complex operations across multiple data tables. Products that lead from the consumption layer or the data layer need to add this translation layer to make conversational analytics effective.
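To make the translation concrete, here is a minimal Python sketch of how the example question above could be resolved into SQL: working out “this quarter” from the current date, computing percentage change between quarters, and applying ranking logic. The `customers` and `orders` tables, their columns, and the query shape are illustrative assumptions, not any vendor’s actual implementation.

```python
from datetime import date

def quarter_bounds(today: date) -> tuple[date, date]:
    """Resolve 'this quarter': return (previous quarter start, current quarter start)."""
    q_start_month = 3 * ((today.month - 1) // 3) + 1
    start = date(today.year, q_start_month, 1)
    prev_month = q_start_month - 3
    if prev_month < 1:
        prev = date(today.year - 1, prev_month + 12, 1)
    else:
        prev = date(today.year, prev_month, 1)
    return prev, start

def build_query(today: date, limit: int = 10) -> str:
    """Map 'top customers by revenue growth this quarter' to SQL.
    Table and column names (customers, orders) are hypothetical."""
    prev_q, this_q = quarter_bounds(today)
    return f"""
        SELECT c.name,
               (SUM(CASE WHEN o.order_date >= DATE '{this_q}' THEN o.amount END)
                - SUM(CASE WHEN o.order_date < DATE '{this_q}' THEN o.amount END))
               / NULLIF(SUM(CASE WHEN o.order_date < DATE '{this_q}' THEN o.amount END), 0)
               AS revenue_growth              -- percentage change over time
        FROM customers c
        JOIN orders o ON o.customer_id = c.id  -- stitch relevant tables together
        WHERE o.order_date >= DATE '{prev_q}'  -- previous + current quarter only
        GROUP BY c.name
        ORDER BY revenue_growth DESC           -- ranking logic for 'top'
        LIMIT {limit}
    """
```

A real semantic layer does this resolution from governed metric definitions rather than hard-coded strings, but the steps, interpreting terms, resolving dates and joining tables, are the same.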

Zing leverages advanced algorithms to automatically extract meaning and structure from data fields, or it can plug in an existing semantic model. Spotter’s tech stack includes an AI trust layer that verifies the generated SQL against the business query and uses human-in-the-loop feedback to refine answers over time. Data analysts and domain experts set up Databricks’ Genie “spaces” with datasets, sample queries and semantic guidelines; Genie uses these spaces to translate business questions into analytical queries. Similarly, Snowflake requires analysts to use the semantic model generator within Snowsight to accurately map business metrics to data tables and operations.
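A semantic model of this kind can be pictured, in miniature, as a mapping from business vocabulary to physical tables and expressions. The structure below is a generic Python illustration, not Snowflake’s or Databricks’ actual model format, and every name in it is hypothetical.

```python
# Miniature semantic model: business vocabulary on the left, physical
# tables/expressions on the right. Generic illustration only — not any
# vendor's actual model format; all names are hypothetical.
SEMANTIC_MODEL = {
    "metrics": {
        "revenue": {"table": "orders", "expr": "SUM(amount)"},
        "order_count": {"table": "orders", "expr": "COUNT(*)"},
    },
    "dimensions": {
        "customer": {"table": "customers", "column": "name"},
        "region": {"table": "customers", "column": "region"},
    },
    "synonyms": {"sales": "revenue", "turnover": "revenue"},
}

def lookup_metric(term: str) -> dict:
    """Resolve a business term (or a synonym) to its physical definition."""
    canonical = SEMANTIC_MODEL["synonyms"].get(term, term)
    return SEMANTIC_MODEL["metrics"][canonical]
```

The synonym table is what lets “sales”, “revenue” and “turnover” all resolve to the same governed calculation, which is the consistency guarantee these platforms advertise.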

Advancing From Semantics to Conversations

For another group of companies that already offered semantic layer solutions—like Kyvos, AtScale and Denodo—adding a conversational analytics capability has been a logical and strategic evolution.

They have a head start: their platforms are already designed to make complex, multi-source, enterprise-scale data easy to consume through consistent, business-friendly models. Adding a conversational interface on this foundation is an intuitive way to let business users dynamically tap into their data without interacting with pre-built dashboards.

Kyvos has introduced Kyvos Dialogs, a conversational analytics product that works with the platform’s own semantic layer. Dialogs goes beyond a conversational interface that provides context-aware answers, adding two-way conversations that generate KPIs on the fly. This allows users to create, refine and explore key business metrics that may not have been pre-defined.

Along with automatic generation of charts and visualizations, Dialogs adds natural language summarization, building compelling data narratives instantly. This translates raw data into executive-ready insights, so business leaders no longer need to pore over dense reports. Kyvos’ high-performance semantic layer uses a smart pre-aggregation engine to deliver insights quickly, with accuracy of around 95%, while retaining context for follow-up questions and maintaining enterprise-wide governance.

AtScale also emphasizes the use of its own semantic layer to enhance the speed, accuracy and security of conversational analytics—strengths that are undeniably valuable. This foundation helps ensure consistent query interpretation, while maintaining data governance practices. They have introduced natural language query (NLQ) capabilities, connecting to their own robust semantic layer platform, which provides a centralized repository of business logic and metadata.

AtScale’s NLQ experience, however, currently lacks a native conversational interface. Instead, it draws on Databricks’ and Snowflake’s capabilities. Their demo relies on Snowflake’s Cortex AI rather than showcasing an in-house application. At present, this positions them as an enabler of conversational capabilities rather than a standalone provider.

Traditionally focused on data virtualization, Denodo, too, has extended its capabilities with conversational BI features that tap into virtualized views through natural language interfaces. However, for a true conversational experience, Denodo must still add the ability to retain context across follow-up questions, natural language summaries of key insights, and data visualization within conversations.

An Intuitive Experience

Analytics providers must realize that, beyond the ease of text or voice input, business users expect a deeper, more intuitive experience that empowers them to explore and understand their data seamlessly. They should be able to revise queries with ease, revisit previous responses and use context-aware prompts to dig deeper into their data.

Needless to say, speed, accuracy and trust in the responses must be maintained consistently, even as data scales up. The platform should also learn from feedback and be able to create, refine and generate business metrics as needed. Data stories should be presented with powerful visuals and well-articulated, human-like summaries for busy executives.

The expectation from a true conversational analytics application is thus nothing short of an intelligent, human-like data assistant that discovers the right business insights in time to enhance decisions.