Defining the artificial intelligence trends we'll see in 2026 is, to say the least, a risky exercise. In an ecosystem where every week brings new announcements, new models, and new tools to try, predicting what's coming is no easy task.
Still, our DatIA and Goodly teams have rolled up their sleeves and put together the 7 AI trends we must absolutely keep an eye on over the coming months.
1 GenBI: you can finally "talk" to your database
Could one of the biggest problems in Business Intelligence finally be coming to an end? For years (if not decades), whenever someone needed access to a piece of data that wasn't already on a dashboard, they were almost entirely dependent on a technical analyst who had the time to create and share that report.
The GenBI (Generative Business Intelligence) trend is here to break that bottleneck. The idea is simple but incredibly powerful: connect LLMs to enterprise data so people can ask questions in natural language.
The challenge? Preventing the AI from making up data (the infamous hallucinations). The solution that will prevail in 2026 is the use of a semantic layer. It acts as a "translator," providing the model with context, business rules, data models, and verified query examples: everything it needs so that when we ask about the "ROI of the summer campaign," the AI knows exactly which tables to join and returns a precise answer, not a creative estimate.
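As a minimal sketch of the idea, imagine the semantic layer as a dictionary of verified definitions that gets injected into the prompt. The metric names, tables, and joins below are invented for illustration; in practice this context would come from a real semantic-layer tool (dbt's semantic layer, Cube, etc.), not a hand-written dict.

```python
import json

# Hypothetical semantic layer: business terms mapped to verified tables,
# joins, and metric definitions. All names here are illustrative.
SEMANTIC_LAYER = {
    "ROI of the summer campaign": {
        "tables": ["marketing.campaigns", "finance.revenue"],
        "join": "campaigns.id = revenue.campaign_id",
        "metric": "SUM(revenue.amount) / SUM(campaigns.spend)",
        "filters": ["campaigns.name = 'summer'"],
    },
}

def build_prompt(question: str) -> str:
    """Ground the LLM with verified context instead of letting it guess."""
    context = SEMANTIC_LAYER.get(question)
    if context is None:
        # Refusing is safer than hallucinating a query.
        return f"Answer only: 'No verified definition exists for: {question}'"
    return (
        "Generate SQL using ONLY the definitions below.\n"
        f"Question: {question}\n"
        f"Verified context: {json.dumps(context, indent=2)}"
    )

print(build_prompt("ROI of the summer campaign"))
```

The key design choice is the refusal branch: when a question has no verified definition, the system declines rather than letting the model improvise a join.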
Our colleague Roberto Torena already covered GenBI on the blog last year. If you'd like to go deeper, we recommend his post. And if you prefer video, here's a 2-minute explainer with everything you need to know about Generative Business Intelligence.
2 The return to on-premises (or why AI is coming back home)
For years, the cloud was the default destination for development teams. But many organizations are now pulling their AI workloads out of the public cloud and bringing them back to their own data centers.
Why? Privacy, cost control, and sovereignty. No one likes sending sensitive data to a third-party API with fluctuating pricing.
Thanks to model optimization (which increasingly requires less hardware) and tools like Ollama or vLLM, deploying powerful AI on local servers is no longer an engineering nightmare.
By 2026, having our own "on-prem GPT" will be the norm. One example is L'Oréal, which has developed L'Oréal GPT, an internal AI platform that enables task automation, content creation, and customer support. They told us on our podcast how they built it and the impact it's had on the company.
If you regularly follow our blog, you'll know that we've published several posts related to this trend, especially around Ollama. Here they are in case you'd like to try it yourself:
- Running LLMs locally: first steps with Ollama
- Running LLMs locally: advanced Ollama
- Running LLMs locally: LM Studio
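To give a feel for how simple local inference has become, here is a small sketch that talks to Ollama's default REST endpoint (`http://localhost:11434/api/generate`) using only the standard library. It assumes you have run `ollama serve` and pulled a model such as `llama3`; the model name is an example, not a requirement.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generation request for a locally served model."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def ask_local_llm(model: str, prompt: str) -> str:
    """Send the prompt to the local server; no data ever leaves the machine."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires `ollama serve` running and the model pulled, e.g. `ollama pull llama3`.
    print(ask_local_llm("llama3", "Summarize our on-prem AI strategy in one line."))
```

That is the whole privacy argument in code: the request never crosses your network boundary, and the pricing never fluctuates.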
3 The end of artificial amnesia: automated feedback
If you've ever deployed a chatbot, you know the frustration of giving a "thumbs down" to an incorrect answer, only to see the model repeat the same mistake the next day.
Until recently, correcting an assistant today meant it would make the same error tomorrow, because its "brain" (the trained model) is static. Changing that required retraining, which was slow and expensive.
The automated feedback virtuous cycle changes this. It's not about retraining the model every night; it's about allowing the system to update its context or "instruction manual" based on what actually happens.
If an agent fails and the user gives it a thumbs-down, the system analyzes the error and creates a new rule in the context to avoid repeating it. It's the shift from static AI to AI that truly learns from day-to-day experience.
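The mechanism can be sketched in a few lines: negative feedback is turned into an explicit rule, and the accumulated rules are injected into every future request. This is a toy illustration of the pattern, not any particular product's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackLoop:
    """Minimal sketch of context-level learning: failures become rules,
    with no retraining of the underlying model."""
    rules: list = field(default_factory=list)

    def record_feedback(self, question: str, bad_answer: str, correction: str) -> None:
        # A thumbs-down becomes an explicit instruction for next time.
        self.rules.append(
            f"When asked about '{question}', do not answer '{bad_answer}'; {correction}."
        )

    def system_prompt(self) -> str:
        # The accumulated rules ride along with every future request.
        return "Follow these learned rules:\n" + "\n".join(self.rules)

loop = FeedbackLoop()
loop.record_feedback(
    "refund policy", "refunds take 30 days", "state that refunds take 14 days"
)
print(loop.system_prompt())
```

Because only the context changes, the fix takes effect on the very next request; the model weights are never touched.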
To support this approach, observability tools like LangSmith or Langfuse come into play. We've covered them in more detail on our blog as well; here's the post if you'd like to understand how they work and what benefits they bring.
4 David vs. Goliath: open-source models are no longer the "little brothers"
Just two years ago, if you wanted real intelligence, you had to pay OpenAI or Anthropic. Open-source models were interesting, but clearly inferior in reasoning.
Today, models like the DeepSeek series (which we discussed earlier this year on our podcast) or the latest iterations of Llama have proven that you can achieve strong performance without paying proprietary licenses or handing over your data.
This democratizes innovation. In 2026, we'll see startups and large enterprises building amazing products on top of open models, without paying the token toll of the big APIs.
One of those large enterprises is Red Hat, which shared on our podcast how they are developing open, enterprise-grade AI platforms to support this trend.
5 AI as a software architecture specialist (Vibe Coding and low code)
Building robust AI workflows by writing code line by line is slow and error-prone. The industry is moving toward abstractionâeither through low-code platforms or a new paradigm of AI-assisted programming, where the AI itself âwritesâ the code.
By combining both approaches, instead of manually dragging boxes around, we simply tell the AI: "I need an agent that reads Jira and summarizes the tasks." The AI translates that intent into the technical "blueprint" (JSON or YAML) required by the low-code platform.
We move from AI that writes scripts (telling it how to do something) to AI that generates blueprints (telling it what to do). The result is more robust, easier to maintain, and dramatically faster development.
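A toy version of the "intent to blueprint" step makes the distinction concrete. The blueprint schema below is invented for illustration (real low-code platforms such as n8n or Langflow each define their own), and the keyword matching stands in for the LLM that would do the translation in production.

```python
import json

def intent_to_blueprint(intent: str) -> dict:
    """Toy translator from a natural-language intent to a workflow blueprint.
    A production system would delegate this step to an LLM."""
    blueprint = {"description": intent, "nodes": [], "edges": []}
    if "jira" in intent.lower():
        blueprint["nodes"].append({"id": "read_jira", "type": "jira.search"})
    if "summar" in intent.lower():
        blueprint["nodes"].append({"id": "summarize", "type": "llm.summarize"})
    # Chain the nodes in the order they were added.
    ids = [node["id"] for node in blueprint["nodes"]]
    blueprint["edges"] = [{"from": a, "to": b} for a, b in zip(ids, ids[1:])]
    return blueprint

print(json.dumps(intent_to_blueprint("Read Jira and summarize the tasks"), indent=2))
```

The point of the exercise: the output is a declarative artifact the platform can validate, version, and re-run, which is why blueprints beat ad hoc generated scripts for maintainability.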
If youâre exploring AI-powered tools, last year we covered several of them on the blog:
- What's behind the Vibe Coding hype?
- Cursor AI, the IDE for productive people
- Windsurf Cascade: guide and best practices
If youâre interested in low-code platforms for building generative AI solutions, we also analyzed this trend here:
6 From chat to the browser: agents that "do" things
Asking an AI questions in a chat window is now completely normal. The next step is for AI to leave the chat and take control of the browser.
We're talking about agents capable of "seeing" the web just like we do, logging in with our credentials, and executing complex workflows.
To make it concrete, imagine telling your browser: "Download this month's supplier invoices from the portal and upload them to the Drive folder." The AI doesn't explain how to do it; it does it. This is the natural evolution of robotic process automation (RPA).
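Under the hood, most browser agents are a plan-act loop: an LLM looks at the current page state, picks the next action, and repeats until the goal is met. The sketch below captures just that loop; the tool names (`open_portal`, `download_invoices`, `upload_to_drive`) are hypothetical stand-ins for real browser automation (e.g. Playwright) plus an LLM planner.

```python
from typing import Callable

def run_agent(goal: str, plan: Callable[[str, list], str], tools: dict) -> list:
    """Repeatedly ask the planner for the next action until it says 'done'."""
    history = []
    while True:
        action = plan(goal, history)  # in production: an LLM call with page state
        if action == "done":
            return history
        tools[action]()               # execute the chosen browser action
        history.append(action)

# A scripted planner stands in for the LLM so the loop runs offline.
script = iter(["open_portal", "download_invoices", "upload_to_drive", "done"])
tools = {name: (lambda: None)
         for name in ("open_portal", "download_invoices", "upload_to_drive")}
steps = run_agent("Download this month's invoices and upload them to Drive",
                  lambda goal, history: next(script), tools)
print(steps)
```

The separation matters: the planner decides *what* to do next, the tools know *how*, and the history gives the planner memory of what has already happened.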
If you want to understand how this shift can be brought into an enterprise environment, our Goodly team discussed Google's solution on the podcast: "Discover Google Agentspace! The future of AI agents in your company."
Additionally, in another episode of our podcast "Apasionados por la tecnología," we had the chance to speak with Pol Algueró from AWS about AWS solutions such as Bedrock Agents and AgentCore.
7 The end of tedious code maintenance
It's estimated that development teams spend half their time maintaining legacy code or updating libraries. It's necessary, but boring. Autonomous maintenance agents are here to help.
We're not talking about a linter that flags an error. We're talking about proactive agents that detect a vulnerability, find the patched version, update the code, refactor if something breaks, run the tests, and leave a pull request ready for approval. Development teams will stop being "code janitors" and focus instead on building new value.
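Stripped down, that pipeline is "scan dependencies, match advisories, propose a fix." The sketch below stubs every step: a real agent would wire these to a vulnerability scanner, a package registry, a test runner, and the Git hosting API rather than to in-memory dicts.

```python
# Illustrative sketch of an autonomous-maintenance cycle; every input is a stub.
def maintenance_cycle(dependencies: dict, advisories: dict) -> list:
    """Return the pull requests the agent would open: one per vulnerable pin."""
    pull_requests = []
    for package, version in dependencies.items():
        advisory = advisories.get((package, version))
        if advisory is None:
            continue  # this dependency is already clean
        pull_requests.append({
            "title": f"chore: bump {package} {version} -> {advisory['fixed_in']}",
            "body": f"Fixes {advisory['id']}. Tests must pass before merge.",
        })
    return pull_requests

deps = {"requests": "2.19.0", "numpy": "1.26.4"}
advisories = {("requests", "2.19.0"): {"id": "CVE-2018-18074", "fixed_in": "2.20.0"}}
print(maintenance_cycle(deps, advisories))
```

The humans stay in the loop exactly once, at pull-request review; everything before that point is the boring work the agent absorbs.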
There are already tools like OpenRewrite that allow safe, large-scale automation of Java code refactoring. If you're curious, we have an introductory post on the topic: Learning to write our own recipes with OpenRewrite.
What's next?
If these seven trends tell us anything, it's that AI is becoming invisible yet omnipresent. It's no longer about which model is the smartest, but about who integrates it best into real business processes. 2026 will be the year AI moves from "talking" to "working."
Which of these trends do you think will have the biggest impact on your industry? Do you think we've missed any key ones? We'd love to read your thoughts in the comments.