How AI Changed the Way We View BI


People love to say AI changes everything. Does that include one of tech’s most rigid domains, BI? At GoodData, we think it already has. Even if the AI hype died tomorrow, the expectations it has set would remain: answers should be fast, contextual, and trustworthy. That pressure alone is forcing BI to shed its old skin.
For years, BI meant dashboards refreshed overnight. It was great for monthly reviews, not so great for Tuesday at 3:17 p.m. when something breaks. You needed engines that could chew through huge datasets and run serious SQL (Snowflake or Databricks). That foundation is still essential, but the job description of BI has changed.
Why AI changed the way we view BI
The majority of “AI for BI” demos remain very similar and leave you disappointed rather than excited. Not because AI isn’t useful, but because when you hand it the steering wheel and hope for the best, you essentially create a huge data casino. Slapping a chat box on top of a pile of data and calling it insightful is sub-optimal at best. AI is an interface, not an oracle.
When you want to understand the story behind your data, precision matters, and “Might be right” is not a strategy — especially when a decimal place can swing millions of dollars. Sending everything to a model and hoping it computes the math correctly is a gamble. The model might summarize, hypothesize, and guide, but the numbers themselves must come from deterministic, auditable computation.
For that reason, if you want to succeed when integrating AI, you have to plan for a model that would lose a trivia quiz to a Magic 8 Ball. And no, I don’t say this because I don’t believe in AI; I say it because I don’t want to bet on AI not hallucinating with my own data. Why should your company be any different?
Although AI is not the sole reason BI has changed, it has definitely helped speed things up. To understand what this means, let’s have a look at what has changed.
What has changed?
While there are many parts of BI that have changed, let's focus on one use case: creating visualizations. Through this, we can see four pivotal changes:
- Reliability
- Simplicity
- Speed
- Accessibility
Reliability
When AI first started making visualizations, people usually claimed (mostly on LinkedIn) that they could now easily talk to their data: simply give the AI access to your database and you have all the knowledge you need at your fingertips.
While this sounds like a great idea, have you tried connecting your database to an AI? I did, and while the first impressions were very positive, I soon realized that as the number of tables grew, the AI started having trouble understanding my data.
Funnily enough, this problem is not unique to AI; even people can get lost in a whole (often complex) data schema. It’s actually a widespread problem across the market. At GoodData, we tackle it with our semantic layer (the Logical Data Model). It’s not only about making the whole data schema easier to understand; it’s about making everything simpler, abstracting away unnecessary details, and focusing solely on the meaning of the data.
It essentially helps users and AI navigate the data much like a manual from IKEA helps you build a chair or a cupboard. Sure, you might want to try and build it just based on your intuition, but to be honest, I wouldn’t really recommend it.
But even with that, AI can struggle, so the next best step is to add even more context and create rules. Much like you would create rules for your Cursor, you can create rules the AI abides by, and with them it can understand the language specific to your field or company, for example. These rules are not about covering up flaws; they’re about tweaking the behavior to your needs. A little like what ChatGPT does with its memory, which you can always access.
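To make the idea of rules concrete, here is a minimal sketch of what collecting such context and handing it to a model might look like. The rule texts, function name, and prompt format are hypothetical placeholders for illustration, not GoodData’s actual rule mechanism.

```python
# Hypothetical sketch of "rules as context": collect company-specific rules
# and prepend them to every question so the model speaks your language.
# The rule texts, function name, and prompt format are illustrative only.

BUSINESS_RULES = [
    "ARR means annual recurring revenue, reported in USD.",
    "An 'active customer' has at least one order in the last 90 days.",
    "The fiscal year starts on February 1st.",
]

def build_prompt(question: str) -> str:
    """Prepend the agreed-upon rules so the model interprets terms our way."""
    rules = "\n".join(f"- {rule}" for rule in BUSINESS_RULES)
    return (
        "Follow these company-specific rules when interpreting the data:\n"
        f"{rules}\n\n"
        f"Question: {question}"
    )

print(build_prompt("How many active customers did we have last fiscal year?"))
```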
Simplicity
When you have your data structured, with a little elbow grease the AI can finally understand it, but the battle is not won. Suddenly, you realize that when you want the AI to create visualizations, it usually needs SQL to get at your data.
And SQL can get very messy, costly, and in extreme cases can even damage your data. I am not saying that AI would suddenly drop all your tables, but SQL injection is very real, writing optimal and correct SQL is a genuinely hard task, and debugging generated SQL can be even worse than writing it yourself.
One solution to this is GoodData's read-only query language, MAQL, which runs on top of the LDM. MAQL makes querying safe and simple: no need to worry about SQL dialects for a specific database, as it is database-agnostic. You can even connect any API to it through FlexConnect (the whole concept came from the same developer as FlexQuery). And best of all, you can reuse pre-existing metrics to create new ones, so you (or the AI) can work iteratively and don’t have to build the whole logic in one step.
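To illustrate the metric-reuse point, here is a small, self-contained sketch of defining a base metric and deriving a new one from it. The MAQL-style expressions and object identifiers are simplified examples, not definitions pulled from a real workspace.

```python
# Illustrative sketch of metric reuse: a derived metric references an existing
# one, so logic is built iteratively instead of in one step. The MAQL-style
# expressions and object identifiers are simplified examples, not definitions
# from a real GoodData workspace.

metrics: dict[str, str] = {}

def define_metric(name: str, maql: str) -> None:
    """Register a metric definition; it may reference earlier metrics."""
    metrics[name] = maql

# Base metric computed from a fact.
define_metric("revenue", "SELECT SUM({fact/price})")

# Derived metric that reuses the base metric instead of repeating its logic.
define_metric("revenue_eu", 'SELECT {metric/revenue} WHERE {label/region} = "EU"')

for name, maql in metrics.items():
    print(f"{name}: {maql}")
```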
Speed
With visualizations being accurate and simple to audit, another pressing problem has emerged. In the past few years, the time users are willing to wait for already computed data has shrunk dramatically. It’s partly due to AI making it extremely easy to create a PoC and get results fast, even if only sometimes correct. But you can’t really mock the computations, right?
This is why our main engine is written on top of Apache Arrow. While it won’t help with the speed at which you fetch data from your database (although MAQL optimizations might), you can definitely feel the difference once the data is loaded.
Apache Arrow is a columnar format with zero-copy read support and extremely fast data access. On top of it, we built a very ambitious project: FlexQuery, a framework for building data services powered by Apache Arrow and Flight RPC. If you want to learn more, I highly recommend reading the introductory article on the whole architecture.
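For a feel of what “columnar with zero-copy reads” buys you, here is a tiny standalone pyarrow example; it is not FlexQuery code, just the underlying Arrow building blocks.

```python
# Standalone pyarrow example (not FlexQuery code) showing the columnar,
# zero-copy behavior the text refers to.
import pyarrow as pa
import pyarrow.ipc as ipc

table = pa.table({
    "region": ["EU", "US", "EU", "APAC"],
    "revenue": [120.0, 340.5, 99.9, 210.0],
})

# Zero-copy slice: the new table shares the same underlying column buffers.
last_two = table.slice(2)
print(last_two.to_pydict())

# Round-trip through the Arrow IPC stream format (the same format Flight RPC
# ships over the wire); reading from the buffer does not re-parse row by row.
sink = pa.BufferOutputStream()
with ipc.new_stream(sink, table.schema) as writer:
    writer.write_table(table)

received = ipc.open_stream(sink.getvalue()).read_all()
print(received.equals(table))  # True
```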
Creating FlexQuery wasn’t about gluing “a bunch of technologies” together and hoping for the best; it was a strategic, long-term investment, made long before AI.
Accessibility
And now that our data can be crunched reliably and fast, we have moved on to the notion that insights should be consumable anywhere, anytime. It started with “anywhere my AI can go, my data can follow,” and now there are even experiments with sending your daily digest as a podcast to your inbox each morning, so you can check your data while you sip your coffee.
Ease of access to your data still lags behind the other aspects I’ve mentioned, because not many BI companies treat their platform as a modular engine on which you can base all your computations. Luckily, GoodData with its API-first approach is very well prepared to be hooked up to virtually any frontend or backend. Take the OpenAPI specification as an example: if you have a good, descriptive OpenAPI specification, developers will have a much easier time hooking up your product, and so will AI, which definitely needs that extra context.
Anything that can be done in GoodData can be done through APIs and SDKs as well. While they are not perfect (nothing is), they are open source and under very healthy development. The strength of the modular, API-first approach can be seen, for example, in articles like Hand Drawn Visualizations, Turning Your Dashboard into a Scheduled Podcast, and Hyperpersonalized Analytics.
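As a rough illustration of the API-first idea, the sketch below scripts a read over HTTP. The endpoint path, workspace id, and response shape are hypothetical placeholders, not documented GoodData routes; the real ones are described by the OpenAPI specification mentioned above.

```python
# Rough sketch of the API-first idea: anything the UI can do can be scripted.
# The endpoint path, workspace id, and response shape are hypothetical
# placeholders, not documented GoodData routes.
import os
import requests

HOST = os.environ.get("GOODDATA_HOST", "https://example.gooddata.cloud")
TOKEN = os.environ.get("GOODDATA_API_TOKEN", "<api-token>")

def list_dashboards(workspace_id: str) -> list:
    """Fetch dashboards for a workspace through a (made-up) REST route."""
    response = requests.get(
        f"{HOST}/api/v1/workspaces/{workspace_id}/dashboards",  # placeholder path
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("data", [])

if __name__ == "__main__":
    for dashboard in list_dashboards("demo-workspace"):
        print(dashboard)
```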
New AI-Assisted Features
So, PoCs aside, that is what would stay if AI collapsed tomorrow. But there are also many new AI-assisted features that we couldn’t even fathom before AI. From reactive KDA to the Semantic Quality Checker, there are quite a few use cases that would simply be impossible without it.
AI-assisted KDA
One of the use cases closest to me is AI-assisted KDA (key driver analysis). The premise is simple: imagine there is an anomaly somewhere in your data. It can happen at any time, even when you are asleep. And while a notification that your data needs attention is nice, there is only so much a simple notification can do, especially at 3 a.m.
So you can let your notifications trigger AI-assisted workflows, such as KDA. Instead of a very robust and often expensive exhaustive KDA, you can use AI to help you navigate the search space, saving a lot of time and computational power. Even with AI it can be on the order of a few thousand queries, but most of them can be cached, e.g., through FlexQuery.
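A toy sketch of that idea follows: instead of exhaustively slicing the anomalous metric by every dimension combination, let a model propose a shortlist and cache each slice so repeated drill-downs are cheap. The dimensions, function names, and the stubbed “model call” are all hypothetical; this is not GoodData’s KDA implementation.

```python
# Toy sketch (not GoodData's KDA implementation): prune the search space with
# a model-suggested shortlist of dimensions and cache each slice query.
from functools import lru_cache
from itertools import combinations

# The full search space an exhaustive KDA would have to cover.
ALL_DIMENSIONS = ["region", "product", "channel", "customer_tier", "campaign"]

def ai_suggest_dimensions(anomaly_description: str) -> list:
    """Stand-in for a model call that ranks likely explanatory dimensions."""
    # A real version would ask an LLM, with the semantic model as context.
    return ["region", "product"]

@lru_cache(maxsize=None)
def query_metric(metric: str, dimensions: tuple) -> float:
    """Placeholder for a cached slice query (FlexQuery would play this role)."""
    print(f"running query: {metric} by {dimensions}")
    return 0.0  # pretend result

def key_driver_analysis(metric: str, anomaly: str) -> None:
    shortlist = ai_suggest_dimensions(anomaly)
    # Slice only by the suggested dimensions, not every combination of all of them.
    for dim in shortlist:
        query_metric(metric, (dim,))
    for dims in combinations(shortlist, 2):
        query_metric(metric, dims)
    # A repeated slice is served from the cache; no second query runs.
    query_metric(metric, (shortlist[0],))

key_driver_analysis("revenue", "revenue dropped 30% overnight")
```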
MCP / A2A
A feature that is entirely AI-driven is the use of AI-centric protocols to connect to agents and tools. The change in BI is definitely not about chasing the next big protocol, which might be obsolete in a few months, but there is no harm in implementing new ways to connect to your product. This is true not only for BI, but for virtually any platform you can think of.
While you might wonder why you would want to make your platform able to connect to AI (or vice versa), think about the ease of use for your users. And keep in mind: giving an AI a hammer and nails while hoping it won’t hit any thumbs is much more dangerous than giving it a sandbox (a tool) where you can guarantee the correctness of the outcomes.
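Here is a generic sketch of that “sandbox, not hammer” principle: the agent never sees SQL, only a narrow tool whose inputs are validated against an allow-list. The tool name, schema shape, and allow-lists are made up for illustration and are not tied to any specific MCP or A2A server.

```python
# Generic sketch of the "sandbox, not hammer" idea (not a specific MCP or A2A
# server): the agent never sees SQL, only a narrow, validated tool.

ALLOWED_METRICS = {"revenue", "active_customers"}
ALLOWED_DIMENSIONS = {"region", "product"}

def create_visualization(metric: str, dimension: str) -> dict:
    """The only operation exposed to the agent: a validated, read-only request."""
    if metric not in ALLOWED_METRICS:
        raise ValueError(f"unknown metric: {metric}")
    if dimension not in ALLOWED_DIMENSIONS:
        raise ValueError(f"unknown dimension: {dimension}")
    # A real platform would hand this off to the deterministic engine;
    # here we just echo the validated request back.
    return {"metric": metric, "dimension": dimension, "status": "queued"}

# The kind of schema an agent protocol would advertise for this tool (made up).
TOOL_SCHEMA = {
    "name": "create_visualization",
    "description": "Create a chart for an approved metric sliced by an approved dimension.",
    "input_schema": {
        "type": "object",
        "properties": {
            "metric": {"type": "string", "enum": sorted(ALLOWED_METRICS)},
            "dimension": {"type": "string", "enum": sorted(ALLOWED_DIMENSIONS)},
        },
        "required": ["metric", "dimension"],
    },
}

print(create_visualization("revenue", "region"))
```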
Semantic Quality Checker
And lastly, a feature that is both enabled by and enabling for AI is the Semantic Quality Checker. It is actually a small miracle that this feature is finally possible. Data management can get very messy, and the meaning of your data can get blurry.
When it comes to the cleanliness of data (or rather the lack of it), there are three cardinal sins:
- Unexplained abbreviations - AI will not understand your ASDU without an explanation, or was it just SDU…?
- Duplicate names across different tables - Is the “status” in orders the same as the “status” in payments?
- Lack of business context - Is your Revenue net, gross, or recurring…?
And while duplicate names are quite easy to catch programmatically (see the small sketch below), I wouldn’t dare try to programmatically solve the lack of business context or unexplained abbreviations. This is where AI actually comes into play. It might not be perfect (as you might know, AI never is), but you can’t build semantic models, or Rome, in one go. You have to work on them iteratively and slowly improve the simplicity, or rather the understandability, of your semantics.
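A small sketch of that easy, programmatic part: flag column names that repeat across tables, which is exactly where meaning starts to blur. The schema below is a made-up example.

```python
# Flag column names that repeat across tables; the schema is a made-up example.
from collections import defaultdict

schema = {
    "orders": ["id", "status", "revenue", "created_at"],
    "payments": ["id", "status", "amount"],
    "customers": ["id", "segment", "region"],
}

owners = defaultdict(list)
for table, columns in schema.items():
    for column in columns:
        owners[column].append(table)

for column, tables in sorted(owners.items()):
    if len(tables) > 1:
        print(f"'{column}' appears in: {', '.join(tables)}")
# 'id' appears in: orders, payments, customers
# 'status' appears in: orders, payments
```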
With better semantics, the AI will actually have a better understanding of how you want to use your data, and suddenly it can pick up on more minute details. And with AI on board, you can have fewer review cycles and shorter onboarding times.
Conclusion
AI hasn’t replaced BI, but it certainly raised the bar for it. The winners won’t be the teams that hand their data to a chat box and hope; they’ll be the teams that pair deterministic, auditable computation with AI as the interface and accelerator. Reliability, simplicity, speed, and accessibility aren’t nice-to-haves anymore; they’re the scaffolding that lets AI be useful without turning your numbers into a casino.
That’s why the shape of modern BI looks different. A semantic layer (LDM) gives humans and models the same map. A safe, read-only, metric-centric language (MAQL) keeps logic consistent and guards the warehouse. A columnar, Arrow-native runtime and FlexQuery move results at interactive speed. An API-first surface lets insights show up wherever people work, be it dashboards, apps, agents, even a morning “podcast” of your KPIs. On top of that foundation, AI becomes practical: guiding KDA workflows to narrow search space, checking semantic quality to keep meaning tight, and speaking through agent protocols without punching holes in governance.
If AI hype vanished tomorrow, this stack would still matter. The expectations it set (fast, contextual, trustworthy answers) are now permanent. The path forward is incremental: harden your semantics, codify metrics, instrument speed, and then let AI help with the last mile (explanations, navigation, triage), not the math. Treat AI as an interface, not an oracle, and BI stops being a once-a-month report and becomes a dependable, real-time decision companion.