Defining the artificial intelligence trends we’ll see in 2026 is, to say the least, a risky exercise. In an ecosystem where every week brings new announcements, new models, and new tools to try… predicting what’s coming is no easy task.
Still, our DatIA and Goodly teams have rolled up their sleeves and put together the 7 AI trends we must absolutely keep an eye on over the coming months.
1 GenBI: you can finally “talk” to your database
Could one of the biggest problems in Business Intelligence finally be coming to an end? For years—if not decades—whenever someone needed access to a piece of data that wasn’t already on a dashboard, they were almost entirely dependent on a technical analyst who had the time to create and share that report.
The GenBI (Generative Business Intelligence) trend is here to break that bottleneck. The idea is simple but incredibly powerful: connect LLMs to enterprise data so people can ask questions in natural language.
The challenge? Preventing AI from making up data (the infamous hallucinations). The solution that will prevail in 2026 is the use of a semantic layer. It acts as a “translator,” providing the model with context, business rules, data models, and verified query examples—everything it needs so that when we ask about the “ROI of the summer campaign”, the AI knows exactly which tables to join and returns a precise answer, not a creative estimate.
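To make the idea concrete, here is a minimal sketch of what a semantic layer hands the model. All table names, columns, and rules below are hypothetical; the point is that the LLM answers from verified context, not from imagination.

```python
# Minimal semantic-layer sketch: hypothetical tables, business rules, and a
# vetted SQL example are bundled into the context sent to the LLM.

SEMANTIC_LAYER = {
    "tables": {
        "campaigns": ["campaign_id", "name", "start_date", "end_date", "spend"],
        "orders": ["order_id", "campaign_id", "revenue", "order_date"],
    },
    "business_rules": [
        "ROI = (SUM(orders.revenue) - campaigns.spend) / campaigns.spend",
        "Join orders to campaigns on campaign_id.",
    ],
    "verified_examples": [
        "-- ROI per campaign\n"
        "SELECT c.name, (SUM(o.revenue) - c.spend) / c.spend AS roi\n"
        "FROM campaigns c JOIN orders o ON o.campaign_id = c.campaign_id\n"
        "GROUP BY c.name, c.spend;"
    ],
}

def build_prompt(question: str, layer: dict = SEMANTIC_LAYER) -> str:
    """Wrap a natural-language question with the semantic-layer context."""
    schema = "\n".join(f"{t}({', '.join(cols)})" for t, cols in layer["tables"].items())
    rules = "\n".join(layer["business_rules"])
    examples = "\n".join(layer["verified_examples"])
    return (
        f"Schema:\n{schema}\n\nBusiness rules:\n{rules}\n\n"
        f"Verified SQL examples:\n{examples}\n\n"
        f"Answer ONLY with SQL grounded in the schema above.\n"
        f"Question: {question}"
    )

prompt = build_prompt("What was the ROI of the summer campaign?")
```

With this context in front of the question, "ROI of the summer campaign" maps to a concrete join and formula instead of a creative estimate.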
Our colleague Roberto Torena already covered GenBI on the blog last year. If you’d like to go deeper, we recommend his post. And if you prefer video, here’s a 2-minute explainer with everything you need to know about Generative Business Intelligence.
2 The return to on-premises (or why AI is coming back home)
For years, the cloud was the default destination for development teams. But many organizations are now pulling their AI workloads out of the public cloud and bringing them back to their own data centers.
Why? Privacy, cost control, and sovereignty. No one likes sending sensitive data to a third-party API with fluctuating pricing.
Thanks to model optimization (capable models need less and less hardware) and tools like Ollama or vLLM, deploying powerful AI on local servers is no longer an engineering nightmare.
By 2026, having our own “on-prem GPT” will be the norm. One example is L’Oréal, which has developed L’Oréal GPT—an internal AI platform that enables task automation, content creation, and customer support. They told us on our podcast how they built it and the impact it’s had on the company.
If you regularly follow our blog, you’ll know that we’ve published several posts related to this trend, especially around Ollama. Here they are in case you’d like to try it yourself:
- Running LLMs locally: first steps with Ollama
- Running LLMs locally: advanced Ollama
- Running LLMs locally: LM Studio
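As a taste of how simple local inference has become, here is a small sketch against Ollama's default local REST endpoint (`/api/generate`). The model name is just an example; you need a running Ollama server with that model pulled.

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot (non-streaming) generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Request body for Ollama's /api/generate endpoint, streaming disabled."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_llm(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return the answer."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With the server up (for example after `ollama pull llama3`), calling `ask_local_llm("llama3", "Why run LLMs on-prem?")` returns the model's answer without a single byte leaving your network.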
3 The end of artificial amnesia: automated feedback
If you’ve ever deployed a chatbot, you know the frustration of giving a “thumbs down” to an incorrect answer—only to see the model repeat the same mistake the next day.
Until now, a correction made today did nothing to prevent the same mistake tomorrow, because the assistant's "brain" (the trained model) is static. Changing it meant retraining, which was slow and expensive.
The automated feedback virtuous cycle changes this. It’s not about retraining the model every night—it’s about allowing the system to update its context or “instruction manual” based on what actually happens.
If an agent fails and the user gives it a thumbs-down, the system analyzes the error and creates a new rule in the context to avoid repeating it. It’s the shift from static AI to AI that truly learns from day-to-day experience.
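A toy sketch of that loop, with illustrative names: each thumbs-down becomes a rule appended to the agent's context, so the next conversation starts with the lesson already learned. In a real system the error analysis would itself come from an LLM reviewing the failed exchange; here it is hard-coded.

```python
# Sketch of an automated-feedback loop: instead of retraining, failed answers
# become rules injected into the agent's context on the next run.

class FeedbackMemory:
    def __init__(self, base_instructions: str):
        self.base_instructions = base_instructions
        self.learned_rules: list[str] = []

    def thumbs_down(self, question: str, error_analysis: str) -> None:
        """Turn a negative rating into a rule the agent will see next time."""
        self.learned_rules.append(
            f"When asked about '{question}', avoid this mistake: {error_analysis}"
        )

    def system_prompt(self) -> str:
        """Base instructions plus everything learned from day-to-day feedback."""
        if not self.learned_rules:
            return self.base_instructions
        rules = "\n".join(f"- {r}" for r in self.learned_rules)
        return f"{self.base_instructions}\n\nLearned rules:\n{rules}"

memory = FeedbackMemory("You are a billing support agent.")
memory.thumbs_down("refund deadlines", "quoted 30 days; the policy is 14 days")
```

After that single thumbs-down, `memory.system_prompt()` already includes the correction, so the static model receives an updated "instruction manual" on its very next call.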
To support this approach, observability tools like LangSmith or Langfuse come into play. We've covered them in more detail on our blog as well—here's the post if you'd like to understand how they work and what benefits they bring.
4 David vs. Goliath: open-source models are no longer the “little brothers”
Just two years ago, if you wanted real intelligence, you had to pay OpenAI or Anthropic. Open-source models were interesting, but clearly inferior in reasoning.
Today, models like the DeepSeek series (which we discussed earlier this year on our podcast) or the latest iterations of Llama have proven that you can achieve strong performance without paying proprietary licenses or handing over your data.
This democratizes innovation. In 2026, we’ll see startups and large enterprises building amazing products on top of open models—without paying the token toll of the big APIs.
One of those large enterprises is Red Hat, which shared on our podcast how they are developing open, enterprise-grade AI platforms to support this trend.
5 AI as a software architecture specialist (Vibe Coding and low code)
Building robust AI workflows by writing code line by line is slow and error-prone. The industry is moving toward abstraction—either through low-code platforms or a new paradigm of AI-assisted programming, where the AI itself “writes” the code.
By combining both approaches, instead of manually dragging boxes around, we simply tell the AI: “I need an agent that reads Jira and summarizes the tasks.” The AI translates that intent into the technical “blueprint” (JSON or YAML) required by the low-code platform.
We move from AI that writes scripts (telling it how to do something) to AI that generates blueprints (telling it what to do). The result is more robust, easier to maintain, and dramatically faster development.
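To illustrate the "blueprint" idea, here is what an AI-generated blueprint for the Jira example could look like, together with a cheap structural check before it is handed to the platform. The node types and field names are invented for illustration; real low-code platforms each define their own schema.

```python
# Hypothetical intent-to-blueprint example: the user states WHAT they want,
# the AI emits a declarative blueprint a low-code platform can execute.

import json

intent = "I need an agent that reads Jira and summarizes the tasks."

# One possible blueprint an AI could generate for that intent:
blueprint = {
    "name": "jira-task-summarizer",
    "trigger": {"type": "schedule", "cron": "0 9 * * MON-FRI"},
    "steps": [
        {"id": "fetch", "type": "jira.search", "params": {"jql": "sprint in openSprints()"}},
        {"id": "summarize", "type": "llm.summarize", "input": "fetch.results"},
        {"id": "notify", "type": "slack.post", "input": "summarize.text"},
    ],
}

def validate(bp: dict) -> bool:
    """Check that every step input references an existing step id."""
    step_ids = {s["id"] for s in bp["steps"]}
    refs = [s["input"].split(".")[0] for s in bp["steps"] if "input" in s]
    return all(r in step_ids for r in refs) and "trigger" in bp

assert validate(blueprint)
print(json.dumps(blueprint, indent=2))
```

Because the artifact is declarative data rather than imperative code, it can be validated, versioned, and diffed, which is exactly why blueprint generation is more robust than script generation.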
If you’re exploring AI-powered tools, last year we covered several of them on the blog:
- What’s behind the Vibe Coding hype?
- Cursor AI, the IDE for productive people
- Windsurf Cascade: guide and best practices
If you're interested in low-code platforms for building generative AI solutions, we also analyzed that trend on the blog.
6 From chat to the browser: agents that “do” things
Asking an AI for things in a chat window is now completely normal. The next step is for AI to leave the chat and take control of the browser.
We’re talking about agents capable of “seeing” the web just like we do, logging in with our credentials, and executing complex workflows.
To make it concrete, imagine telling your browser: “Download this month’s supplier invoices from the portal and upload them to the Drive folder.” The AI doesn’t explain how to do it—it does it. This is the natural evolution of robotic process automation (RPA).
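Under the hood, these agents run a plan-then-execute loop. Here is a toy sketch of that loop: a planner (in reality an LLM looking at the page) emits actions, and an executor performs them. The action vocabulary and the `FakeBrowser` are invented for illustration; real implementations drive an actual browser via tools such as Playwright or a vendor's agent API.

```python
# Toy sketch of a browser-agent loop: planner-emitted actions are dispatched
# to browser operations. FakeBrowser just records what it would do.

from dataclasses import dataclass, field

@dataclass
class FakeBrowser:
    log: list = field(default_factory=list)

    def goto(self, url):
        self.log.append(f"goto {url}")

    def click(self, selector):
        self.log.append(f"click {selector}")

    def download(self, selector):
        self.log.append(f"download {selector}")

def run_agent(browser, plan):
    """Execute a list of (action, target) pairs against the browser."""
    handlers = {
        "goto": browser.goto,
        "click": browser.click,
        "download": browser.download,
    }
    for action, target in plan:
        handlers[action](target)

# A plan an LLM planner might emit for "download this month's invoices":
plan = [
    ("goto", "https://portal.example.com/invoices"),
    ("click", "#filter-current-month"),
    ("download", "a.invoice-pdf"),
]

browser = FakeBrowser()
run_agent(browser, plan)
```

The interesting part in production is the feedback edge this sketch omits: after each action the agent re-reads the page and re-plans, which is what lets it survive portals it has never seen before.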
If you want to understand how this shift can be brought into an enterprise environment, our Goodly team discussed Google’s solution on the podcast: “Discover Google Agentspace! The future of AI agents in your company”.
Additionally, in another episode of our podcast “Apasionados por la tecnología,” we had the chance to speak with Pol Algueró from AWS about AWS solutions such as Bedrock Agents and Agent Core.
7 The end of tedious code maintenance
It’s estimated that development teams spend half their time maintaining legacy code or updating libraries. It’s necessary—but boring. Autonomous maintenance agents are here to help.
We’re not talking about a linter that flags an error. We’re talking about proactive agents that detect a vulnerability, find the patched version, update the code, refactor if something breaks, run the tests, and leave a pull request ready for approval. Development teams will stop being “code janitors” and focus instead on building new value.
There are already tools like OpenRewrite that allow safe, large-scale automation of Java code refactoring. If you’re curious, we have an introductory post on the topic: Learning to write our own recipes with OpenRewrite.
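To give a flavor of what such a recipe looks like, here is a small declarative OpenRewrite recipe in YAML. The recipe name is hypothetical; `org.openrewrite.java.ChangePackage` is one of the library's built-in recipes, shown here applied to the classic `javax` to `jakarta` migration.

```yaml
# Hypothetical composite recipe; ChangePackage is a built-in OpenRewrite recipe.
type: specs.openrewrite.org/v1beta/recipe
name: com.example.MigrateToJakarta
displayName: Example maintenance recipe
recipeList:
  - org.openrewrite.java.ChangePackage:
      oldPackageName: javax.persistence
      newPackageName: jakarta.persistence
```

An autonomous maintenance agent could generate, run, and test recipes like this one across an entire codebase, then open the pull request for a human to approve.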
What’s next?
If these seven trends tell us anything, it’s that AI is becoming invisible yet omnipresent. It’s no longer about which model is the smartest, but about who integrates it best into real business processes. 2026 will be the year AI moves from “talking” to “working.”
Which of these trends do you think will have the biggest impact on your industry? Do you think we’ve missed any key ones? We’d love to read your thoughts in the comments.