<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
<channel>
  <title>Paradigma Digital</title>
  <link>https://www.paradigmadigital.com/blog/</link>
  <atom:link href="https://www.paradigmadigital.com/feed.xml" rel="self" type="application/rss+xml" />
  <description>Big Data, Blockchain, agile culture, development, design… We offer all the information you need to stay up to date in technology.</description>
  <generator>Eleventy - 11ty.dev</generator>
  <language>en-US</language>
  <lastBuildDate>Mon, 13 Apr 2026 06:47:22 GMT</lastBuildDate>
  <image>
    <url>https://www.paradigmadigital.com/assets/img/logo/favicon.png</url>
    <title>Paradigma Digital</title>
    <link>https://www.paradigmadigital.com/blog/</link>
    <width>192</width>
    <height>192</height>
  </image>
  <item>
        <dc:creator>
            <![CDATA[ Nacho Badenes ]]>
        </dc:creator>
        <title>Purpose-Driven Technology Platforms: Governance, Security, and Digital Resilience</title>
        <link>https://en.paradigmadigital.com/techbiz/purpose-driven-technology-platforms-governance-security-digital-resilience/</link>
        <pubDate>Thu, 09 Apr 2026 06:00:00 GMT</pubDate>
        <guid isPermaLink="true">https://en.paradigmadigital.com/techbiz/purpose-driven-technology-platforms-governance-security-digital-resilience/</guid>
        <description>A purpose-driven technology platform doesn’t end with architecture. Data governance, cybersecurity, traceability, and software efficiency are key decisions to truly align technology, business, and ESG criteria.
</description>
        <content:encoded>
            <![CDATA[
                <p>In the previous post, we explored in depth <a href="https://en.paradigmadigital.com/techbiz/purpose-driven-technology-platforms-aligning-technology-business-sustainability/" target="_blank">how decisions around cloud, AI, inclusive design, and interoperability lay the foundations for a purpose-driven architecture</a>. However, for a platform to truly have purpose and align with ESG principles, it’s not enough to focus on how it is built; it must also be <strong>managed and protected</strong> with the same level of awareness.</p>
<p>Today, we complete our <strong>map of 8 key technology decisions</strong> by analyzing the final four pillars that ensure the integrity, security, and long-term impact of our digital ecosystem.</p>
<figure class="block block-caption  -inline-block -like-text-width -center"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/esg_technological_decision_map_b2b84140e8.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/esg_technological_decision_map_b2b84140e8.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/esg_technological_decision_map_b2b84140e8.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/esg_technological_decision_map_b2b84140e8.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/esg_technological_decision_map_b2b84140e8.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="ESG technology decision map" title="undefined"/><figcaption>ESG technology decision map</figcaption></figure>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">5 <span class="enum-header"></span> Data governance and traceability</h2>
<p>This is the engine that ensures data remains a trustworthy asset, directly impacting the company’s <strong>Governance (G)</strong>. Reliable data doesn’t just ensure compliance—it accelerates decision-making and builds market trust.</p>
<ul>
<li><strong>Key selection factors</strong></li>
</ul>
<p>Data must have a clear identity and ownership. It is essential to choose technologies that guarantee <strong>end-to-end traceability</strong> for critical data and provide governance capabilities that ensure quality and integrity.</p>
<ul>
<li><strong>Key usage factors</strong></li>
</ul>
<p>Transparency is non-negotiable. It requires clearly defined <strong>access and retention policies</strong>, along with continuous security monitoring and audit processes for responsible usage.</p>
<ul>
<li><strong>Organizational impact</strong></li>
</ul>
<p>Faster decisions driven by trusted data. The result is strong regulatory compliance and <strong>support for ESG transparency</strong> through verifiable and auditable reporting.</p>
<figure class="block block-caption  -inline-block -like-text-width -center"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/data_governance_traceability_8e21dee232.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/data_governance_traceability_8e21dee232.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/data_governance_traceability_8e21dee232.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/data_governance_traceability_8e21dee232.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/data_governance_traceability_8e21dee232.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="Data governance and traceability" title="undefined"/><figcaption>Data governance and traceability</figcaption></figure>
<blockquote class="block-blockquote -like-cms-text-width"><p><em>“<strong>Trusted data</strong> doesn’t just ensure compliance: it accelerates decisions, builds market confidence, and opens the door to <strong>new opportunities</strong>.”</em></p>
</blockquote>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">6 <span class="enum-header"></span> Cybersecurity and resilience</h2>
<p>Security is not a cost; it is the insurance policy that protects business continuity and value, making it a critical pillar of <strong>Governance (G)</strong> and <strong>Social Responsibility (S)</strong>.</p>
<ul>
<li><strong>Key selection factors</strong></li>
</ul>
<p>Prepare for “when,” not “if.” We must evaluate <strong>incident response and recovery capabilities</strong>, ensuring compliance with standards such as <strong>ISO 27001, NIS2, or DORA</strong>.</p>
<ul>
<li><strong>Key usage factors</strong></li>
</ul>
<p>No one enters without permission. Implement <strong>Zero Trust-based access management</strong> and foster a security culture through continuous training and strict protocols.</p>
<ul>
<li><strong>Organizational impact</strong></li>
</ul>
<p>Peace of mind knowing your business is resilient. It minimizes financial risks from <strong>cyberattacks</strong>, ensures <strong>operational continuity</strong>, and strengthens trust among customers and investors.</p>
<figure class="block block-caption  -inline-block -like-text-width -center"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/cybersecurity_resilience_e5f78b2756.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/cybersecurity_resilience_e5f78b2756.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/cybersecurity_resilience_e5f78b2756.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/cybersecurity_resilience_e5f78b2756.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/cybersecurity_resilience_e5f78b2756.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="Cybersecurity and resilience" title="undefined"/><figcaption>Cybersecurity and resilience</figcaption></figure>
<blockquote class="block-blockquote -like-cms-text-width"><p><em>“Security is no longer a cost: it is the insurance policy that protects <strong>continuity, reputation, and business value</strong>.”</em></p>
</blockquote>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">7 <span class="enum-header"></span> ESG integration in the supply chain</h2>
<p>Our technological responsibility extends to every partner; a supply chain aligned with <strong>Environmental (E), Social (S), and Governance (G)</strong> criteria is more competitive and reliable.</p>
<ul>
<li><strong>Key selection factors</strong></li>
</ul>
<p>Tell me who you work with, and I’ll tell you who you are. It is essential to adopt platforms that enable <strong>supplier traceability</strong> and integrate with ESG risk management systems.</p>
<ul>
<li><strong>Key usage factors</strong></li>
</ul>
<p>Your suppliers’ commitment is your own commitment. It requires continuous monitoring of their <strong>ESG performance</strong> and clear criteria for <strong>selection and retention</strong>.</p>
<ul>
<li><strong>Organizational impact</strong></li>
</ul>
<p>A “clean” platform end to end. It reduces <strong>reputational risks</strong> across the value chain and ensures compliance with governance best practices.</p>
<figure class="block block-caption  -inline-block -like-text-width -center"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/esg_integration_supply_chain_fcb00ca9db.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/esg_integration_supply_chain_fcb00ca9db.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/esg_integration_supply_chain_fcb00ca9db.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/esg_integration_supply_chain_fcb00ca9db.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/esg_integration_supply_chain_fcb00ca9db.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="ESG integration in the supply chain" title="undefined"/><figcaption>ESG integration in the supply chain</figcaption></figure>
<blockquote class="block-blockquote -like-cms-text-width"><p><em>“An ESG-aligned supply chain is <strong>more competitive</strong>, more reliable, and more attractive to customers and investors.”</em></p>
</blockquote>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">8 <span class="enum-header"></span> Software lifecycle optimization</h2>
<p>Efficient code means lower costs and faster delivery; it contributes positively to the <strong>Environmental (E)</strong> pillar by optimizing resource consumption.</p>
<ul>
<li><strong>Key selection factors</strong></li>
</ul>
<p>Build today with tomorrow in mind. Choose frameworks that facilitate maintainability and scalability, while supporting energy efficiency metrics.</p>
<ul>
<li><strong>Key usage factors</strong></li>
</ul>
<p>Deploy fast, but thoughtfully. Implement <strong>CI/CD processes</strong> for agile delivery and apply refactoring strategies to eliminate obsolete code that consumes unnecessary resources.</p>
<ul>
<li><strong>Organizational impact</strong></li>
</ul>
<p>Sustainability translated into profitability. It reduces operational costs, improves delivery quality, and extends the lifespan of technological assets.</p>
<figure class="block block-caption  -inline-block -like-text-width -center"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/software_lifecycle_optimization_1086f3863f.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/software_lifecycle_optimization_1086f3863f.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/software_lifecycle_optimization_1086f3863f.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/software_lifecycle_optimization_1086f3863f.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/software_lifecycle_optimization_1086f3863f.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="Software lifecycle optimization" title="undefined"/><figcaption>Software lifecycle optimization</figcaption></figure>
<blockquote class="block-blockquote -like-cms-text-width"><p><em>“<strong>Efficient code</strong> means lower costs, faster delivery, and development that leaves a… <strong>positive footprint</strong>.”</em></p>
</blockquote>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Conclusion: choosing technology is choosing the future</h2>
<p>As I shared at the <strong>Smart Energy Congress 2025</strong>, choosing technology is choosing the future. By integrating these 8 decision points into our strategy, we transform technology from a support function into a driver that aligns business with social, environmental, and governance challenges.</p>
<ul>
<li><strong>Enabler of real impact</strong>: technology drives everything from energy efficiency to digital inclusion.</li>
<li><strong>Competitive advantage</strong>: conscious decisions in architecture, data, security, AI, and accessibility enable new business models.</li>
<li><strong>Investment in resilience</strong>: purpose-driven design is not a cost, but an investment that fosters innovation and trust.</li>
<li><strong>Engine of transformation</strong>: platforms don’t just support the business—they connect it to a positive global impact.</li>
</ul>
<p><em><strong>&quot;True technological disruption lies not in what machines can do, but in the purpose we choose when designing them.&quot;</strong></em></p>
<p>After everything shared in this three-post series on technology platforms, <strong>what will you do from today onwards?</strong> See you in the comments.</p>

            ]]>
        </content:encoded>
    </item><item>
        <dc:creator>
            <![CDATA[ Simón Rodríguez ]]>
        </dc:creator>
        <title>Running LLMs Locally: Docker</title>
        <link>https://en.paradigmadigital.com/dev/running-llms-locally-docker/</link>
        <pubDate>Tue, 07 Apr 2026 06:00:00 GMT</pubDate>
        <guid isPermaLink="true">https://en.paradigmadigital.com/dev/running-llms-locally-docker/</guid>
        <description>With the launch of Model Runner, Docker officially enters the world of generative AI, enabling you to run models locally, integrate them into pipelines, and deploy them as just another service within your stack. In this post, we’ll show you how.
</description>
        <content:encoded>
            <![CDATA[
                <p>Closing this series of posts on running LLMs locally, we now arrive at <strong>one of the latest players to join the trend of local LLM/AI execution: Docker!</strong></p>
<p>Being one of the last to arrive does not mean it should be overlooked: Docker has a track record as a true <em>game-changer</em>, having completely revolutionized the development world with transparent, portable application execution.</p>
<p><a href="https://docs.docker.com/ai/model-runner/" target="_blank">Model Runner</a> is the new tool that <a href="https://www.docker.com/blog/announcing-docker-model-runner-ga/" target="_blank">Docker has released</a> for running AI models locally, and in this article we will explore its main features.</p>
<p>As a quick reminder, in case you missed any of the previous articles, you can check out the rest of the <strong>local LLM execution series</strong> here:</p>
<ul>
<li><a href="https://en.paradigmadigital.com/dev/running-llms-locally-getting-started-ollama/" target="_blank">Running LLMs locally: getting started with Ollama</a></li>
<li><a href="https://en.paradigmadigital.com/dev/running-llms-locally-getting-started-ollama/" target="_blank">Running LLMs locally: advanced Ollama</a></li>
<li><a href="https://en.paradigmadigital.com/dev/running-llms-locally-lm-studio/" target="_blank">Running LLMs locally: LM Studio</a></li>
<li><a href="https://en.paradigmadigital.com/dev/running-llms-locally-llamafile/" target="_blank">Running LLMs locally with Llamafile</a></li>
</ul>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">How it works and key features</h2>
<p><strong>Docker Model Runner enables AI model execution by embedding an inference engine</strong> (built on top of the llama.cpp library) <strong>as part of the Docker runtime environment</strong>. At a high level, the <strong>architecture is composed of three main components</strong>:</p>
<figure class="block block-caption  -inline-block -like-text-width -center"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/arquitectura_docker_model_runner_54c4c99cfb.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/arquitectura_docker_model_runner_54c4c99cfb.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/arquitectura_docker_model_runner_54c4c99cfb.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/arquitectura_docker_model_runner_54c4c99cfb.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/arquitectura_docker_model_runner_54c4c99cfb.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="Docker Model Runner architecture" title="undefined"/><figcaption>Docker Model Runner architecture</figcaption></figure>
<ul>
<li><strong>Model distribution</strong> (model storage and client): the <em>model store</em> is the core component of the architecture, where tensor files are stored. The <em>client</em> performs operations (such as downloading) against <a href="https://opencontainers.org/" target="_blank">OCI</a> registries.</li>
<li><strong>Model Runner</strong>: maps API requests to processes that run inference engines (/engines) and models (/models). It includes components such as the <em>scheduler, loader</em>, and <em>runner</em>, which coordinate loading and unloading models from memory (both inference engines and models operate as ephemeral processes). For each combination of inference engine (e.g., llama.cpp) and model (e.g., ai/llama3.2:3B-Q4_0), a separate process is executed depending on incoming API requests.</li>
<li><strong>Model CLI</strong>: the main user interaction component. This is a Docker CLI plugin that provides an interface similar to running Docker images. Under the hood, the CLI communicates with the Model Runner API to execute most operations.</li>
</ul>
<p>An important note is that, although the overall architecture remains the same, <strong>depending on the platform where it is deployed, these three components are packaged, stored, and executed differently</strong> (sometimes on the host, sometimes in a virtual machine, and sometimes inside a container).</p>
<p>Some of the <strong>main features of Docker Model Runner</strong> include:</p>
<ul>
<li>Ability to <strong>download and upload</strong> models to/from <a href="https://hub.docker.com/u/ai" target="_blank">Docker Hub</a>.</li>
<li><strong>Model execution</strong> via endpoints compatible with the OpenAI API.</li>
<li><a href="https://www.docker.com/blog/oci-artifacts-for-ai-model-packaging/" target="_blank">Packaging GGUF files as OCI artifacts</a> to publish them in any container registry.</li>
<li><strong>Running and interacting</strong> with models directly from the command line.</li>
<li><strong>Managing</strong> local models.</li>
<li>Defining <strong>input prompt details</strong> as well as model responses.</li>
<li><strong>Support</strong> for multi-turn interactions (chat).</li>
</ul>
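<p>As a quick illustration of the <strong>OpenAI-compatible API</strong>, once a model is available you can query it over HTTP. The host port and path below (<em>12434</em> and <em>/engines/v1</em>) are the defaults described in Docker&#8217;s documentation for Docker Engine; adjust them to your own setup:</p>
<pre><code class="language-none">curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ai/llama3.2:3B-Q4_0",
    "messages": [
      {"role": "user", "content": "Hello, what can you tell me about Docker Model Runner?"}
    ]
  }'
</code></pre>
<p>The response follows the standard OpenAI chat completions schema, so existing OpenAI client libraries can simply be pointed at this endpoint.</p>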
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Installation</h2>
<p><strong>Model Runner is available for major operating systems</strong> (Windows, macOS, and Linux), either through <a href="https://docs.docker.com/ai/model-runner/get-started/#docker-desktop" target="_blank">Docker Desktop</a> or <a href="https://docs.docker.com/ai/model-runner/get-started/#docker-engine" target="_blank">Docker Engine</a>. In this article, we will run Docker Model Runner on <strong>Ubuntu</strong> using <strong>Docker Engine</strong>.</p>
<p>After installing <a href="https://docs.docker.com/engine/install/" target="_blank">Docker Engine</a> if necessary, you can proceed to install Model Runner by executing the following command:</p>
<pre><code class="language-none">sudo apt-get install docker-model-plugin
</code></pre>
<p>Verify the installation with:</p>
<pre><code class="language-none">docker model version
</code></pre>
<figure class="block block-caption  -inline-block -like-text-width -center"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/comprobacion_instalacion_model_runner_7fd9b2d8ab.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/comprobacion_instalacion_model_runner_7fd9b2d8ab.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/comprobacion_instalacion_model_runner_7fd9b2d8ab.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/comprobacion_instalacion_model_runner_7fd9b2d8ab.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/comprobacion_instalacion_model_runner_7fd9b2d8ab.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="Model Runner installation verification" title="undefined"/><figcaption>Model Runner installation verification</figcaption></figure>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">CLI Commands</h2>
<p>Once Docker Model Runner is installed, you can <strong>interact with models</strong> using the following commands:</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">1 <span class="enum-header"></span> INSPECT</h3>
<p>This command displays <strong>detailed information about a model</strong>.</p>
<pre><code class="language-none">docker model inspect ai/llama3.2:3B-Q4_0

docker model inspect ai/llama3.2:3B-Q4_0 --openai #Display the information in OpenAI format
</code></pre>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/comandos_cli_inspect_1537dbd737.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/comandos_cli_inspect_1537dbd737.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/comandos_cli_inspect_1537dbd737.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/comandos_cli_inspect_1537dbd737.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/comandos_cli_inspect_1537dbd737.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="CLI Commands: inspect docker model" title="Inspect"/></article>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">2 <span class="enum-header"></span> LIST</h3>
<p>Command to <strong>list the models</strong> downloaded to the local environment.</p>
<pre><code class="language-none">docker model list

docker model list --json #List the models in JSON format

docker model list --openai #List the models in OpenAI format

docker model list --quiet #Show only the model IDs
</code></pre>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/comandos_cli_list_cd905f5b70.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/comandos_cli_list_cd905f5b70.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/comandos_cli_list_cd905f5b70.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/comandos_cli_list_cd905f5b70.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/comandos_cli_list_cd905f5b70.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="CLI commands: list" title="List"/></article>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">3 <span class="enum-header"></span> LOGS</h3>
<p>Command to <strong>display</strong> logs.</p>
<pre><code class="language-none">docker model logs

docker model logs --follow #View logs in real time
</code></pre>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/comandos_cli_logs_1b567aa4b5.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/comandos_cli_logs_1b567aa4b5.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/comandos_cli_logs_1b567aa4b5.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/comandos_cli_logs_1b567aa4b5.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/comandos_cli_logs_1b567aa4b5.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="CLI commands: logs" title="Logs"/></article>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">4 <span class="enum-header"></span> PACKAGE</h3>
<p>Command to <strong>package a file in GGUF format</strong> into a Docker Model OCI artifact.</p>
<pre><code class="language-none">docker model package --gguf &lt;path&gt; [--license &lt;path&gt;...] [--context-size &lt;tokens&gt;] [--push] MODEL

docker model package --gguf /home/simonrodriguez/dockerModelRunner/model.gguf my_new_llama_model
</code></pre>
<p>The <strong>available options</strong> for this command are:</p>
<ul>
<li><strong>--chat-template</strong>: absolute path to the chat template file (the template must be in Jinja format).</li>
<li><strong>--context-size</strong>: size of the context window.</li>
<li><strong>--gguf</strong> (required): absolute path to the file in GGUF format.</li>
<li><strong>--license</strong>: absolute path to the license file.</li>
<li><strong>--push</strong>: upload to the registry.</li>
</ul>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/comando_cli_package_b70c56f82c.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/comando_cli_package_b70c56f82c.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/comando_cli_package_b70c56f82c.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/comando_cli_package_b70c56f82c.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/comando_cli_package_b70c56f82c.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="CLI command: package" title="Package"/></article>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">5 <span class="enum-header"></span> PULL</h3>
<p>Command to download a model from <a href="https://hub.docker.com/u/ai" target="_blank">Docker Hub</a> or <a href="https://huggingface.co/models?library=gguf" target="_blank">Hugging Face</a>.</p>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/comandos_cli_pull_4cf9846bc6.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/comandos_cli_pull_4cf9846bc6.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/comandos_cli_pull_4cf9846bc6.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/comandos_cli_pull_4cf9846bc6.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/comandos_cli_pull_4cf9846bc6.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="CLI commands: pull" title="Pull"/></article>
<p>When downloading from Hugging Face, <strong>if no tag is specified</strong>, it will attempt to download the <strong><em>Q4_K_M</em></strong> version of the model. If this version does not exist, it will download the <strong>first GGUF file found in the model’s <em>Files</em> section</strong> on Hugging Face. To specify the model quantization, you simply need to <strong>add the corresponding tag</strong>.</p>
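<p>Unlike the previous commands, no example invocation is shown above, so here are a couple of illustrative ones (the Hugging Face repository name is only an example):</p>
<pre><code class="language-none">docker model pull ai/llama3.2:3B-Q4_0

docker model pull hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF:Q4_K_M #Hugging Face model with an explicit quantization tag
</code></pre>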
<h3 class="block block-header h--h20-175-500 left  add-last-dot">6 <span class="enum-header"></span> PUSH</h3>
<p>Command to <strong>upload a model</strong> to Docker Hub.</p>
<pre><code class="language-none">docker model push ai/llama3.3
</code></pre>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">7 <span class="enum-header"></span> RM</h3>
<p>Command to <strong>delete</strong> local models.</p>
<pre><code class="language-none">docker model rm ai/llama3.2:3B-Q4_0

docker model rm ai/llama3.2:3B-Q4_0 --force #Force model deletion
</code></pre>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/comandos_cli_rm_5d2619a29f.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/comandos_cli_rm_5d2619a29f.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/comandos_cli_rm_5d2619a29f.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/comandos_cli_rm_5d2619a29f.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/comandos_cli_rm_5d2619a29f.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="CLI commands: rm" title="RM"/></article>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">8 <span class="enum-header"></span> RUN</h3>
<p>Command to <strong>run a model and interact with it</strong> by sending a prompt or via chat mode.</p>
<pre><code class="language-none">docker model run ai/llama3.2:3B-Q4_0 #A prompt opens for an interactive chat, which you can exit with the command /bye

docker model run ai/llama3.2:3B-Q4_0 "Hello, what can you tell me about Docker Model Runner?"

docker model run ai/llama3.2:3B-Q4_0 --debug #Enables debug mode

docker model run ai/llama3.2:3B-Q4_0 --ignore-runtime-memory-check #Do not block execution if the model is estimated to exceed available system memory
</code></pre>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/comando_cli_run_1_a0832a9dba.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/comando_cli_run_1_a0832a9dba.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/comando_cli_run_1_a0832a9dba.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/comando_cli_run_1_a0832a9dba.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/comando_cli_run_1_a0832a9dba.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="CLI command: run" title="Run"/></article>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/comando_cli_run_2_0ce0199301.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/comando_cli_run_2_0ce0199301.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/comando_cli_run_2_0ce0199301.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/comando_cli_run_2_0ce0199301.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/comando_cli_run_2_0ce0199301.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="comando cli: run" title="Run"/></article>
<p>When a model is run, the request goes to the API endpoint of the inference server hosted by Model Runner. The model remains loaded in memory until another model is requested or the inactivity timeout is reached.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">9 <span class="enum-header"></span> PS</h3>
<p>Command that <strong>displays the models</strong> currently running.</p>
<pre><code class="language-none">docker model ps
</code></pre>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/comando_cli_ps_c235a35577.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/comando_cli_ps_c235a35577.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/comando_cli_ps_c235a35577.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/comando_cli_ps_c235a35577.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/comando_cli_ps_c235a35577.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="CLI command: ps" title="PS"/></article>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">10 <span class="enum-header"></span> UNLOAD</h3>
<p>Command to <strong>unload a running model</strong>.</p>
<pre><code class="language-none">docker model unload ai/llama3.2:3B-Q4_0
</code></pre>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/comandos_cli_unload_be1e8a6d63.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/comandos_cli_unload_be1e8a6d63.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/comandos_cli_unload_be1e8a6d63.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/comandos_cli_unload_be1e8a6d63.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/comandos_cli_unload_be1e8a6d63.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="comandos cli: unload" title="Unload"/></article>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">11 <span class="enum-header"></span> DF</h3>
<p>Command that displays the <strong>disk space</strong> occupied by the models.</p>
<pre><code class="language-none">docker model df
</code></pre>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/comando_cli_df_6c72548655.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/comando_cli_df_6c72548655.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/comando_cli_df_6c72548655.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/comando_cli_df_6c72548655.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/comando_cli_df_6c72548655.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="comando cli: DF" title="DF"/></article>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">12 <span class="enum-header"></span> STATUS</h3>
<p>Command to <strong>check</strong> if Docker Model Runner is running.</p>
<pre><code class="language-none">docker model status

docker model status --json # Displays the information in JSON format
</code></pre>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/comando_cli_status_1cddd8195e.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/comando_cli_status_1cddd8195e.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/comando_cli_status_1cddd8195e.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/comando_cli_status_1cddd8195e.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/comando_cli_status_1cddd8195e.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="comando cli: status" title="Status"/></article>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">13 <span class="enum-header"></span> TAG</h3>
<p>Command to <strong>create a specific tag</strong> for a model.</p>
<pre><code class="language-none">docker model tag ai/llama3.2:3B-Q4_0 quantized-model
</code></pre>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/comando_cli_tag_932b73aab5.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/comando_cli_tag_932b73aab5.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/comando_cli_tag_932b73aab5.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/comando_cli_tag_932b73aab5.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/comando_cli_tag_932b73aab5.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="cli-command: tag" title="TAG"/></article>
<p>If the tag is not specified, the default value is <em>latest</em>.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">14 <span class="enum-header"></span> VERSION</h3>
<p>Command to <strong>check which version</strong> of Docker Model Runner is installed on the system.</p>
<pre><code class="language-none">docker model version
</code></pre>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/comandos_cli_version_71473b2cc3.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/comandos_cli_version_71473b2cc3.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/comandos_cli_version_71473b2cc3.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/comandos_cli_version_71473b2cc3.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/comandos_cli_version_71473b2cc3.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="Comando cli: version" title="Version"/></article>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">API</h2>
<p>Once Model Runner is enabled, <strong>API endpoints are automatically exposed</strong> (both native Docker Model Runner endpoints and OpenAI-compatible endpoints), which can be used to <strong>interact</strong> with models programmatically.</p>
<p><strong>When making requests to the exposed API, it is important to consider the origin of the request</strong>:</p>
<ul>
<li><strong>From other containers</strong>: send requests to http://172.17.0.1:12434/ (the default Docker bridge gateway). This address is not always reachable from containers; if it is not, add the <em>extra_hosts</em> entry to the Docker Compose configuration file:</li>
</ul>
<pre><code class="language-none">extra_hosts:
  - &quot;model-runner.docker.internal:host-gateway&quot;
</code></pre>
<p>With the previous instruction, the API can be accessed through the address http://model-runner.docker.internal:12434/</p>
<ul>
<li><strong>From the host</strong>: send requests to http://localhost:12434/</li>
</ul>
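<p>For example, the OpenAI-compatible chat endpoint (described later in this post) can be called from the host with a short Python sketch. This is a minimal illustration, not part of the official tooling; the model name is the one used throughout this post and must already be pulled:</p>
<pre><code class="language-none">import json
import urllib.request

BASE_URL = 'http://localhost:12434'  # from the host; use model-runner.docker.internal:12434 inside a container

def chat_payload(model, prompt):
    '''Build an OpenAI-style chat completions request body.'''
    return {'model': model, 'messages': [{'role': 'user', 'content': prompt}]}

def ask(model, prompt):
    '''Send a prompt to Model Runner and return the reply text.'''
    req = urllib.request.Request(
        BASE_URL + '/engines/llama.cpp/v1/chat/completions',
        data=json.dumps(chat_payload(model, prompt)).encode(),
        headers={'Content-Type': 'application/json'},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)['choices'][0]['message']['content']

# Example (requires Model Runner enabled and the model pulled):
# ask('ai/llama3.2:3B-Q4_0', 'Hello, what can you tell me about Docker Model Runner?')
</code></pre>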
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Native endpoints</h3>
<p>The <strong>available endpoints</strong> are:</p>
<ul>
<li><strong>/models/create (POST)</strong>: endpoint to <strong>download</strong> a model.</li>
</ul>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/endpoints_propios_model_create_1_c73b16e565.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/endpoints_propios_model_create_1_c73b16e565.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/endpoints_propios_model_create_1_c73b16e565.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/endpoints_propios_model_create_1_c73b16e565.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/endpoints_propios_model_create_1_c73b16e565.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="custom endpoints: /models/create" title="/models/create"/></article>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/endpoints_propios_models_create_2_6a4577f348.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/endpoints_propios_models_create_2_6a4577f348.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/endpoints_propios_models_create_2_6a4577f348.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/endpoints_propios_models_create_2_6a4577f348.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/endpoints_propios_models_create_2_6a4577f348.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="custom endpoints: /models/create" title="/models/create"/></article>
<ul>
<li><strong>/models (GET)</strong>: endpoint to <strong>list</strong> existing models in the system along with their information.</li>
</ul>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/endpoints_propios_models_get_81094cb4f6.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/endpoints_propios_models_get_81094cb4f6.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/endpoints_propios_models_get_81094cb4f6.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/endpoints_propios_models_get_81094cb4f6.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/endpoints_propios_models_get_81094cb4f6.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="custom endpoints: /models (get)" title="/models (get)"/></article>
<ul>
<li><strong>/models/{namespace}/{name} (GET)</strong>: endpoint to <strong>display information</strong> about a model.</li>
</ul>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/endpoints_propios_models_namespace_name_0f800d2955.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/endpoints_propios_models_namespace_name_0f800d2955.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/endpoints_propios_models_namespace_name_0f800d2955.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/endpoints_propios_models_namespace_name_0f800d2955.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/endpoints_propios_models_namespace_name_0f800d2955.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="custom endpoints: /models/{namespace}/{name} (get)" title="/models/{namespace}/{name} (get)"/></article>
<ul>
<li><strong>/models/{namespace}/{name} (DELETE)</strong>: endpoint to <strong>delete</strong> a local model.</li>
</ul>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/models_namespace_name_delete_3e95e27ab3.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/models_namespace_name_delete_3e95e27ab3.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/models_namespace_name_delete_3e95e27ab3.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/models_namespace_name_delete_3e95e27ab3.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/models_namespace_name_delete_3e95e27ab3.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="/models/{namespace}/{name} (DELETE)" title="/models/{namespace}/{name} (DELETE)"/></article>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">OpenAI-compatible endpoints</h3>
<p>The exposed endpoints are:</p>
<ul>
<li><strong>/engines/llama.cpp/v1/models (GET)</strong>: endpoint to <strong>list</strong> available models in the system.</li>
</ul>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/engines_llama_v1_models_79f4d33c1d.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/engines_llama_v1_models_79f4d33c1d.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/engines_llama_v1_models_79f4d33c1d.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/engines_llama_v1_models_79f4d33c1d.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/engines_llama_v1_models_79f4d33c1d.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="/engines/llama.cpp/v1/models (GET)" title="/engines/llama.cpp/v1/models (GET)"/></article>
<ul>
<li><strong>/engines/llama.cpp/v1/models/{namespace}/{name} (GET)</strong>: endpoint to <strong>expose information</strong> about a model.</li>
</ul>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/engines_v1_models_name_09b51bb285.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/engines_v1_models_name_09b51bb285.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/engines_v1_models_name_09b51bb285.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/engines_v1_models_name_09b51bb285.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/engines_v1_models_name_09b51bb285.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="/engines/llama.cpp/v1/models/{namespace}/{name} (GET)" title="/engines/llama.cpp/v1/models/{namespace}/{name} (GET)"/></article>
<ul>
<li><strong>/engines/llama.cpp/v1/chat/completions (POST)</strong>: endpoint to <strong>send</strong> a chat interaction and receive the assistant’s response. Multiple parameters can be specified, such as temperature, stream, seed, etc.</li>
</ul>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/engines_v1_chat_completions_109f433701.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/engines_v1_chat_completions_109f433701.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/engines_v1_chat_completions_109f433701.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/engines_v1_chat_completions_109f433701.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/engines_v1_chat_completions_109f433701.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="/engines/llama.cpp/v1/chat/completions (POST)" title="/engines/llama.cpp/v1/chat/completions (POST)"/></article>
<ul>
<li><strong>/engines/llama.cpp/v1/completions (POST)</strong>: endpoint to <strong>generate a completion</strong> from a raw text prompt. Note that OpenAI has <strong>deprecated</strong> this legacy endpoint in favor of chat completions.</li>
</ul>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/engines_v1_completions_22bcc1fea1.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/engines_v1_completions_22bcc1fea1.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/engines_v1_completions_22bcc1fea1.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/engines_v1_completions_22bcc1fea1.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/engines_v1_completions_22bcc1fea1.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="/engines/llama.cpp/v1/completions (POST)" title="/engines/llama.cpp/v1/completions (POST)"/></article>
<ul>
<li><strong>/engines/llama.cpp/v1/embeddings (POST)</strong>: endpoint to <strong>retrieve</strong> embeddings from a text.</li>
</ul>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/engines_v1_embeddings_30040dd096.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/engines_v1_embeddings_30040dd096.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/engines_v1_embeddings_30040dd096.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/engines_v1_embeddings_30040dd096.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/engines_v1_embeddings_30040dd096.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="/engines/llama.cpp/v1/embeddings (POST)" title="/engines/llama.cpp/v1/embeddings (POST)"/></article>
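<p>The request body follows the standard OpenAI embeddings schema (a model and an input list), so it can be built with a small sketch like the one below. The model name is only illustrative; it must be an embedding-capable model already present on the system:</p>
<pre><code class="language-none">import json

def embeddings_payload(model, texts):
    '''Build an OpenAI-style embeddings request body as a JSON string.'''
    return json.dumps({'model': model, 'input': texts})

# POST this body to /engines/llama.cpp/v1/embeddings with Content-Type: application/json
body = embeddings_payload('ai/embeddinggemma', ['Docker Model Runner runs models locally'])
</code></pre>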
<p>Since only one inference engine (llama.cpp) is currently supported, <strong>the engine segment can be omitted from the URLs above</strong> (for example, /engines/llama.cpp/v1/models becomes /engines/v1/models).</p>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/omitir_parte_url_2cd5efa0c0.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/omitir_parte_url_2cd5efa0c0.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/omitir_parte_url_2cd5efa0c0.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/omitir_parte_url_2cd5efa0c0.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/omitir_parte_url_2cd5efa0c0.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="URL simplification" title="URL simplification"/></article>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Docker Compose</h2>
<p><strong>Docker Compose allows you to define models as core components of your application, so they can be declared alongside services</strong>, enabling the application to run on any platform compatible with the Compose specification. <strong>To run models in Docker Compose, you need at least version 2.38.0 of the tool</strong>, as well as a platform that supports models in Compose, such as Docker Model Runner.</p>
<p><strong>To use models in Docker Compose, the <em>models</em> element has been introduced</strong>, which allows you to:</p>
<ul>
<li><strong>Declare AI models</strong> required by the application.</li>
<li><strong>Specify configurations and requirements</strong> for each model.</li>
<li><strong>Make the application portable</strong> across different platforms.</li>
<li>Allow the platform to <strong>manage</strong> the model lifecycle.</li>
</ul>
<p>The <strong>configuration options for the <em>models</em> element</strong> are:</p>
<ul>
<li><strong>model</strong> (required): the OCI artifact identifier for the model. This is what will be downloaded and executed by Model Runner.</li>
<li><strong>context_size</strong>: defines the maximum context window size for the model.</li>
<li><strong>runtime_flags</strong>: list of parameters passed to the inference engine when the model starts. For example, for llama.cpp, the parameters can be found <a href="https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md#usage" target="_blank">here</a>.</li>
<li><strong>x-</strong>*: extensible properties for platform-specific options.</li>
</ul>
<p>A simple example of a <em>models</em> definition could be:</p>
<pre><code class="language-none">models:
  llm:
    model: ai/llama3.2:3B-Q4_0
    context_size: 4096
    runtime_flags:
      - &quot;--temp&quot;                # Temperature
      - &quot;0.1&quot;
      - &quot;--top-p&quot;               # Top-p sampling
      - &quot;0.9&quot;
</code></pre>
<p><strong>Services can reference models in two ways</strong>:</p>
<ul>
<li><strong>Short form</strong>: the simplest approach. With this method, the platform automatically generates environment variables derived from each model’s name (here, <em>llm</em> and <em>embedding-model</em>):
<ul>
<li><strong>LLM_URL</strong>: URL to access the LLM model.</li>
<li><strong>LLM_MODEL</strong>: identifier of the LLM model.</li>
<li><strong>EMBEDDING_MODEL_URL</strong>: URL to access the embedding model.</li>
<li><strong>EMBEDDING_MODEL_MODEL</strong>: identifier of the embedding model.</li>
</ul>
</li>
</ul>
<pre><code class="language-none">services:
  app:
    image: my-app
    models:
      - llm
      - embedding-model

models:
  llm:
    model: ai/llama3.2:3B-Q4_0
  embedding-model:
    model: ai/embeddinggemma
</code></pre>
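<p>Inside the service, those generated variables can then be read to configure the client. A minimal Python sketch (the injected values below are illustrative; at runtime they are provided by Compose):</p>
<pre><code class="language-none">import os

def model_config(prefix, env=None):
    '''Return (url, model_id) from the variables Compose injects for a model.'''
    env = os.environ if env is None else env
    return env.get(prefix + '_URL'), env.get(prefix + '_MODEL')

# Illustrative values; in a real container, read them from os.environ.
url, model_id = model_config('LLM', {
    'LLM_URL': 'http://model-runner.docker.internal:12434/engines/v1',
    'LLM_MODEL': 'ai/llama3.2:3B-Q4_0',
})
</code></pre>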
<ul>
<li><strong>Long form</strong>: with this configuration, the service is explicitly provided with:
<ul>
<li><strong>AI_MODEL_URL</strong> and <strong>AI_MODEL_NAME</strong> for the LLM model.</li>
<li><strong>EMBEDDING_URL</strong> and <strong>EMBEDDING_NAME</strong> for the embedding model.</li>
</ul>
</li>
</ul>
<pre><code class="language-none">services:
  app:
    image: my-app
    models:
      llm:
        endpoint_var: AI_MODEL_URL
        model_var: AI_MODEL_NAME
      embedding-model:
        endpoint_var: EMBEDDING_URL
        model_var: EMBEDDING_NAME

models:
  llm:
    model: ai/llama3.2:3B-Q4_0
  embedding-model:
    model: ai/embeddinggemma
</code></pre>
<p>Here you can find <a href="https://docs.docker.com/ai/compose/models-and-compose/#common-runtime-configurations" target="_blank">some configurations for specific use cases</a> of the <em>models</em> element in Docker Compose.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Demo</h2>
<p>To see this new <em>models</em> element in Docker Compose in action, we created a <strong>simple application to interact with an LLM</strong>. The application uses the following components:</p>
<ul>
<li>Java 21</li>
<li>Spring Boot 3.4.4 (with built-in support for <a href="https://buildpacks.io/" target="_blank">Buildpacks</a> to create Docker images for applications)</li>
<li><a href="https://www.paradigmadigital.com/dev/deep-learning-spring-ai-primeros-pasos/" target="_blank">Spring AI</a></li>
<li>Maven 3.8.5</li>
<li>Docker version 28.4.0</li>
<li>Docker Model Runner 0.1.40</li>
<li>Docker Compose 2.39.4</li>
</ul>
<p>The application simply exposes a <strong>/chat endpoint</strong> that receives user input and sends it to the corresponding LLM.</p>
<figure class="block block-caption  -inline-block -like-text-width -center"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/docker_demo_models_2245d3e6ce.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/docker_demo_models_2245d3e6ce.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/docker_demo_models_2245d3e6ce.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/docker_demo_models_2245d3e6ce.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/docker_demo_models_2245d3e6ce.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="Demo Models" title="Demo Models"/><figcaption>Demo Models</figcaption></figure>
<p>Here you can <a href="https://github.com/paradigmadigital/local-llms" target="_blank">download the sample application code and the README file with the steps to run it</a>.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Conclusions</h2>
<p>In this final post of the series, we explored how to run LLMs with Docker and the <strong>ease it provides</strong> to integrate them into our applications thanks to <strong>Docker Compose integration</strong>.</p>
<p>Throughout this series focused on running LLMs locally, we have reviewed the most widely used tools and their particularities, all of which offer core functionalities such as commands and API endpoints to interact with models. Currently, <strong><a href="https://www.paradigmadigital.com/dev/ejecutando-llms-local-primeros-pasos-ollama/" target="_blank">Ollama</a> arguably stands out among the rest in terms of available features and advanced model customization</strong>.</p>
<p>Based on what we have seen with Ollama and Docker, will we soon see custom AI models (containerized or not) running in the cloud alongside our microservices? Only time will tell.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">References</h3>
<ul>
<li><a href="https://docs.docker.com/ai/model-runner/" target="_blank">Docker Model Runner Documentation</a></li>
</ul>

            ]]>
        </content:encoded>
    </item><item>
        <dc:creator>
            <![CDATA[ Santiago López ]]>
        </dc:creator>
        <title>The Green QA Framework: Quality that Breathes</title>
        <link>https://en.paradigmadigital.com/dev/green-qa-framework-quality-breaths/</link>
        <pubDate>Tue, 31 Mar 2026 06:00:00 GMT</pubDate>
        <guid isPermaLink="true">https://en.paradigmadigital.com/dev/green-qa-framework-quality-breaths/</guid>
        <description>Green QA is not just about reducing tests or consumption, but about changing how we understand software quality. It involves integrating energy and carbon metrics into the entire software development lifecycle. Sustainability becomes another quality criterion, just like performance or functionality.
</description>
        <content:encoded>
            <![CDATA[
                <p>After understanding <a href="https://en.paradigmadigital.com/dev/what-is-green-qa-quality-that-breathes/" target="_blank">what it means to be eco-conscious in the world of QA</a> and recognizing the regulatory and corporate pressures pushing us toward more sustainable practices, the inevitable question arises: <strong>how do we actually implement it?</strong> The transition to a sustainable testing model requires structure, methodology, and, above all, a clear objective.</p>
<p>In this second article of our Green Quality Assurance (GQA) series, we leave behind the “why” and move into the <strong>“how.”</strong> If in the <a href="https://en.paradigmadigital.com/dev/what-is-green-qa-quality-that-breathes/" target="_blank">first article</a> we established that measuring quality in watts and CO2 is just as important as ensuring software works properly, now it is time to <strong>build the foundations that will support this new way of working</strong>. It is not just about reducing the energy consumption of our tests or running fewer test cases; it is about <strong>completely reimagining our approach to software quality</strong>.</p>
<p>The software industry must understand that <strong>technical excellence and environmental responsibility are not mutually exclusive goals</strong>. In fact, the most innovative organizations are discovering that sustainable QA practices often lead to more efficient processes, more productive teams, and, surprisingly, <strong>better final product quality</strong>.</p>
<p>But to achieve this balance, we need a <strong>robust framework</strong> that allows us to assess, implement, and continuously improve our practices.</p>
<p><strong>The journey toward Green QA is evolutionary</strong>. We cannot expect an organization to go from zero to one hundred overnight. That is why, in the following sections, we explain a <strong>gradual approach</strong> that recognizes different maturity levels, provides concrete tools for each stage, and enables us to measure our progress.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Framework layers</h2>
<p>Addressing cultural change in an organization regarding quality is usually an evolutionary process in which awareness plays a major role. The shift proposed by GQA (Green QA) introduces a <strong>new dimension to that awareness</strong>: the goal of doing things in the greenest way possible. But are we really aware of what must change to make this happen? Let’s explore it.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Governance</h3>
<p>First of all, it is important to have governance that is appropriate for this context. This is the organization’s official “mandate” and the basis of the entire process. Without it, Green QA remains an isolated initiative. Governance addresses the following points:</p>
<ul>
<li><strong>Sustainable quality policy</strong></li>
</ul>
<p>This is not just a document; it is about defining the “Green Acceptance Threshold.” It establishes the sustainability goals the company is pursuing, setting targets by resource type. Once these goals are defined, everything else is aimed at meeting them. For example: <em>“No production deployment may increase the energy consumption of the microservice by more than 5%.”</em></p>
<ul>
<li><strong>Responsibility matrix</strong></li>
</ul>
<p>This defines what each member of the organization involved in the delivery lifecycle is responsible for.</p>
<ol>
<li><strong>QA</strong>: designs efficiency test cases, defines baseline metrics, and is responsible for executing measurements in each cycle.</li>
<li><strong>Architecture</strong>: validates that design decisions do not introduce structural energy debt before reaching the testing phase.</li>
<li><strong>DevOps / Platform Engineering</strong>: ensures that the instrumentation needed to measure consumption is available in test environments. Without observability infrastructure, QA cannot measure.</li>
<li><strong>Sustainability</strong>: provides energy-to-CO2 conversion factors and ensures data traceability into the ESG reporting system.</li>
<li><strong>Compliance</strong>: verifies that data collection, processing, and reporting comply with current regulations, particularly the CSRD.</li>
<li><strong>Product Owner</strong>: formally accepts that acceptance criteria include efficiency metrics, not only functionality and traditional performance.</li>
</ol>
<ul>
<li><strong>Alignment with ESG and corporate strategy</strong>. In the <a href="https://www.paradigmadigital.com/dev/que-es-green-qa-calidad-que-respira/" target="_blank">previous post</a>, we discussed ESG extensively and its connection to Green QA, and it is something that must be considered within governance.</li>
</ul>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Processes</h3>
<p>This layer establishes how Green QA is integrated into the project’s day-to-day work.</p>
<ul>
<li><strong>Shift-Left Green</strong>: integrating environmental criteria during the refinement phase. If a feature requires unnecessarily large amounts of data, it is rejected before development begins.</li>
<li><strong>Green Gates in the Pipeline</strong>: adding “quality gates” to CI/CD. If automated tests detect an unusual spike in CPU or RAM usage, the build fails. Environmental controls must be present in design, development, testing, and deployment.</li>
<li><strong>Suppliers</strong>: evaluating whether our service providers (Cloud, SaaS) operate using renewable energy.</li>
</ul>
<p>It is important that governance also defines the consequences of non-compliance with the policy, whether by blocking a release, creating technical debt, or establishing the appropriate mechanisms to prevent bypassing the defined goals.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Data and metrics</h3>
<p>This framework layer analyzes the <strong>quality of the data</strong> that will support decision-making.</p>
<ul>
<li><strong>Data traceability</strong>: ensuring that ESG data has a clear lineage, from the sensor or server log to the annual report.</li>
<li><strong>QA Data Cleaning</strong>: a process for deleting test environments, temporary databases, and old execution logs that bloat storage and consume energy unnecessarily.</li>
<li><strong>Accuracy vs. estimation</strong>: defining which data is measured (real) and which is estimated (mathematical models), applying different QA approaches to each.</li>
<li><strong>Definition of environmental and ESG KPIs</strong></li>
<li><strong>Data quality controls</strong> (accuracy, completeness, traceability)</li>
<li><strong>Audit and reporting</strong></li>
</ul>
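<p>The QA Data Cleaning process described above can be sketched as a retention job. This is an illustrative example only (the directory layout and the 30-day window are assumptions, not a prescribed setup):</p>

```python
# Minimal sketch of a "QA Data Cleaning" job, assuming one directory tree of
# test artifacts and logs: delete files older than a retention window so stale
# data stops consuming storage and energy. Paths and the 30-day window are
# illustrative.
import time
from pathlib import Path

RETENTION_DAYS = 30

def clean_old_artifacts(root: Path, now=None) -> list:
    """Remove files older than RETENTION_DAYS and return what was deleted."""
    now = now or time.time()
    cutoff = now - RETENTION_DAYS * 24 * 3600
    deleted = []
    for path in root.rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            deleted.append(path)
    return deleted
```

<p>In practice, a job like this would run on a schedule, and the list of deleted artifacts would feed the audit trail described in the “Data and metrics” layer.</p>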
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Technology</h3>
<p>This is the layer of <strong>tools</strong>. Green QA needs technical eyes to “see” energy. It is therefore advisable to have tools that:</p>
<ul>
<li><strong>Measure the impact of programming languages</strong> (for example, comparing Python vs. Rust consumption in critical processes).</li>
<li><strong>Optimize test suites</strong> so they do not run 2,000 tests if only 2 lines of code changed (risk-based test selection).</li>
<li><strong>Configure the framework</strong> so test environments “self-destruct” immediately after execution.</li>
<li><strong>Provide measurement</strong> (energy, carbon, resources), green test automation, and infrastructure optimization (cloud, hardware).</li>
</ul>
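<p>The risk-based test selection mentioned above can be illustrated with a few lines. In a real setup the file-to-test mapping would come from coverage data; here it is a hand-written stand-in, and all names are hypothetical:</p>

```python
# Illustrative sketch of risk-based test selection: run only the tests mapped
# to the files that changed, instead of the full suite. COVERAGE_MAP is a
# hand-written stand-in for real coverage data.

COVERAGE_MAP = {  # hypothetical file -> tests mapping
    "billing.py": {"test_invoice", "test_tax"},
    "auth.py": {"test_login"},
}

def select_tests(changed_files: list) -> set:
    """Return the minimal set of tests covering the changed files."""
    selected = set()
    for f in changed_files:
        selected |= COVERAGE_MAP.get(f, set())
    return selected

# Changing only billing.py selects 2 tests instead of the whole suite.
```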
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Continuous improvement</h3>
<p>Within the framework layers, it is advisable to invest in the <strong>cultural and improvement aspect</strong> so that the framework becomes circular rather than linear.</p>
<p>Elements that help here include “Retro-Green,” where each sprint retrospective adds a question such as “what process or code did we make more efficient this month?”; gamification, through rankings of the development/QA teams that have reduced their digital carbon footprint the most; and the regular updating of standards.</p>
<p>As ESG laws evolve, this layer changes and is typically reviewed quarterly. In this way, the framework continues to comply with new regulations and allows organizations to establish measurable reduction goals, whether quantitative or qualitative.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">The path toward Zero-waste Testing</h2>
<p>One of the points mentioned above is <strong>measurement</strong>. It is essential to understand at every moment where we stand relative to the objectives in order to <strong>take the necessary actions and decisions</strong> while working within the defined governance. If you are familiar with <a href="https://www.tmmi.org/" target="_blank">TMMi</a>, you know it is a model that <strong>measures maturity levels</strong> in relation to testing within an organization.</p>
<p>In this regard, below is an <strong>analysis of existing levels in relation to GQA</strong>, so that we can understand where we are and what the goals should be to consolidate or advance through them. <strong>Each of these levels should be further developed to make evaluations more concrete.</strong></p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Level 1: initial (we start walking)</h3>
<p>At this level, the company is not aware of the environmental impact of its testing. If there is efficiency, it is for cost savings, not because of purpose. Compliance is reactive, without metrics.</p>
<p>At this early stage, all tests are always executed in environments where we have no visibility into their consumption and which generally remain on 24/7.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Level 2: basic (awareness has awakened)</h3>
<p>Stakeholders understand what Green QA is, and the first manual efforts begin to document impact. Manual controls and basic reporting appear.</p>
<p>At this intermediate level, assets such as <strong>servers and tools</strong> are identified in order to request sustainability reports from providers, creating an inventory of digital assets. Testing begins to be handled more consistently with this policy, for example by deleting old test data.</p>
<p>At this stage, actions still depend on people rather than automated processes.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Level 3: defined (the green standard)</h3>
<p>Sustainability is officially integrated into the quality manual. The first standardized processes and KPIs appear.</p>
<p>At this level, having a green production readiness checklist should be one of the goals. To support this, <strong>green KPIs</strong> are defined (for example, watt-hours consumed per test-suite run).</p>
<p>At the same time, this level seeks to ensure that the QA team has sufficient training in Green Coding and energy efficiency.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Level 4: managed (automated and measurable quality)</h3>
<p>A set of metrics has been established, and continuous reporting automation through pipelines is in place.</p>
<p>Once certain aspects are consolidated, including cultural ones, the goal becomes the use of “real-time carbon dashboards” integrated into tools like Jira or Grafana, so teams can see their daily impact.</p>
<p>The challenge is <strong>integrating this data with the rest of the company (ESG)</strong>. CI/CD pipelines include tools that automatically measure the CPU/RAM consumption of tests. If a test is inefficient, an alert is generated.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Level 5: optimized (Green QA as DNA)</h3>
<p>GQA is no longer an “extra”; it is the only way of working. <strong>It is aligned with the company’s overall strategy.</strong></p>
<p>Now the company uses <strong>AI to predict and minimize the energy consumption</strong> of tests. The carbon savings achieved by the QA team are reported directly in the company’s annual sustainability report (ESG). The challenge is <strong>maintaining innovation and leading standards in the industry</strong>.</p>
<p>Aspects such as the “circular economy of data,” where test data is intelligently reused to avoid generating new loading processes, begin to be taken seriously in order to consolidate green goals.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Strategy and methodology</h2>
<p>The <strong>strategy</strong> is the <strong>decision-making framework</strong> in which the elements needed to address GQA in practice are defined, aligned with all the framework layers described above.</p>
<p>The <strong>methodology</strong> describes <strong>how we implement the strategy</strong> we have defined and is responsible for defining <strong>how to execute GQA</strong>.</p>
<p>In the context of <strong>GQA</strong> (or sustainability in the software lifecycle), this tooling does not only measure emissions; it also integrates into the quality process to ensure that software is efficient and complies with environmental regulations (ESG).</p>
<p>Once a strategy aligned with objectives has been established, the <strong>tools and frameworks</strong> that will be used for its implementation are selected. Below are some examples.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Measuring software efficiency (Green Testing)</h3>
<p>This is where <strong>QA has direct control</strong>. The energy consumption of a process or test suite is measured. These measurements are used to establish the baseline.</p>
<p>Before optimizing a single line of code, QA needs to know <strong>how many grams of CO2 the hardware running the application generates</strong>. Without that, we cannot measure improvement after an optimization.</p>
<p><strong>Tool: Scaphandre (energy metrology)</strong></p>
<p>An open-source power consumption metrics agent designed for Kubernetes and bare-metal servers.</p>
<ul>
<li><strong>What it is used for</strong></li>
</ul>
<p>It measures exactly how many watts a specific process consumes (for example, your Selenium suite or a microservice under load).</p>
<ul>
<li><strong>Setup and use</strong></li>
</ul>
<ol>
<li><strong>Installation</strong>: it is installed as a binary or Docker container on the server where tests run.</li>
<li><strong>Usage</strong>: it exposes metrics in Prometheus format.</li>
<li><strong>Green QA Step</strong>: configure a Grafana dashboard that correlates CPU usage with power draw in watts. If, after a code optimization, the tests take the same time but consume fewer watts, Green QA has succeeded.</li>
</ol>
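<p>As a concrete illustration of the Scaphandre step: its Prometheus endpoint can be queried for per-process power, and the reading converted into the energy figure QA tracks. The metric name below is the one Scaphandre documents for per-process power, but you should verify it against your version; the conversion itself is plain arithmetic:</p>

```python
# Sketch of turning a Scaphandre/Prometheus power reading into a QA metric.
# PROMQL is an example query (verify the metric name against your Scaphandre
# version); the conversion from average power to energy is straightforward.

PROMQL = 'sum(scaph_process_power_consumption_microwatts{exe="pytest"})'

def microwatts_to_wh(microwatts: float, seconds: float) -> float:
    """Convert an average power reading in microwatts to energy in watt-hours."""
    watts = microwatts / 1_000_000
    return watts * seconds / 3600

# A suite averaging 15 W (15,000,000 µW) for 20 minutes consumed 5 Wh.
```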
<p><strong>Tool: Eco-Code / SonarQube (Green Rules)</strong></p>
<ul>
<li><strong>What it is used for</strong></li>
</ul>
<p>Static code analysis focused on energy efficiency.</p>
<ul>
<li><strong>Setup</strong></li>
</ul>
<ol>
<li><strong>Installation</strong>: add the &quot;Green IT&quot; or &quot;Eco-Code&quot; plugin to your SonarQube instance.</li>
<li><strong>Usage</strong>: QA defines quality gates. If the code contains patterns that wake up the CPU unnecessarily (inefficient loops, redundant API calls), the quality check fails.</li>
</ol>
<p><strong>Tool: SimaPro and GaBi</strong></p>
<p>These are industrial standards that can be adapted to the QA ecosystem in the following way:</p>
<ul>
<li><strong>What they mean for IT environments</strong></li>
</ul>
<p>They allow us to model the impact of our digital infrastructure (servers, test mobile devices, or networks). They do not only measure energy expenditure, but also the “carbon debt” of the hardware supporting our software.</p>
<ul>
<li><strong>Their strategic use in Green QA</strong></li>
</ul>
<p>They are used to establish the baseline mentioned earlier: the carbon cost of the hardware itself, known before a single line of code is optimized, so that any later improvement can actually be measured.</p>
<ul>
<li><strong>Setup and data sources</strong></li>
</ul>
<p>To control the environmental footprint of our applications, Green QA maps:</p>
<ul>
<li><strong>Hardware inventories</strong>: CPUs, RAM, and storage systems used in staging and production environments.</li>
<li><strong>Environmental databases</strong>: sources such as Ecoinvent are integrated to calculate the impact of the energy mix (it is not the same to run a test on a server in Norway powered by hydroelectric energy as in a coal-dependent region).</li>
</ul>
<ul>
<li><strong>Practical decision example</strong></li>
</ul>
<p>Thanks to these tools, the QA team can make comparisons based on real data:</p>
<p>Case study: is it more sustainable to run our regression suite on old on-premise servers or migrate testing to a cloud instance with Energy Star certification and auto-scaling? Green QA uses LCA to demonstrate that migration reduces carbon footprint by X%.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Measuring Cloud Carbon Footprint (CCF)</h2>
<p>If you do not want to rely on native tools (which are sometimes opaque), <strong>Cloud Carbon Footprint is the open standard</strong>.</p>
<p><strong>Tool: Cloud Carbon Footprint (CCF)</strong></p>
<ul>
<li><strong>What it is used for</strong></li>
</ul>
<p>To visualize emissions from AWS, Azure, and GCP in one place with a transparent calculation methodology.</p>
<ul>
<li><strong>Setup</strong></li>
</ul>
<ol>
<li><strong>Connection</strong>: you need read permissions for billing files (CUR in AWS, Billing Export in GCP).</li>
<li><strong>Usage</strong>: it allows the QA team to compare regions.</li>
<li><strong>Technical decision</strong>: QA can demonstrate that moving the staging environment from a coal-based region (for example, Virginia, US-East-1) to one powered by cleaner energy (for example, Sweden or France) instantly reduces carbon footprint without changing a single line of code.</li>
</ol>
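<p>The region comparison behind that technical decision is simple arithmetic once the grid intensities are known. The figures below are illustrative placeholders, not official data; real values come from CCF’s methodology or your cloud provider:</p>

```python
# Back-of-the-envelope comparison of the same staging workload in two cloud
# regions. The grid-intensity figures (gCO2 per kWh) are hypothetical
# placeholders; substitute the values from CCF or your provider.

GRID_INTENSITY = {"us-east-1": 380.0, "eu-north-1": 40.0}  # illustrative gCO2/kWh

def monthly_co2_grams(kwh_per_month: float, region: str) -> float:
    """Estimate monthly CO2 for a workload of known energy use in a region."""
    return kwh_per_month * GRID_INTENSITY[region]

# For a 500 kWh/month staging environment, the same workload has a footprint
# roughly 9.5x lower in the cleaner region, with zero code changes.
```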
<p><strong>Tools: Watershed, Persefoni, Plan A</strong></p>
<p>These are SaaS platforms for measuring corporate carbon footprint.</p>
<ul>
<li><strong>What they are</strong></li>
</ul>
<p>They automate the calculation of Scope 1, 2, and 3 emissions.</p>
<ul>
<li><strong>Usage in Green QA</strong></li>
</ul>
<p>Software falls under Scope 3 (indirect emissions). These tools collect data from your electricity bills and cloud providers.</p>
<ul>
<li><strong>Setup</strong></li>
</ul>
<p>They connect via API to your inventory systems. The QA team reports the energy consumption of test server farms here so the company has real data on IT department impact.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Measuring sustainability in the Frontend</h2>
<p>GQA also measures the <strong>impact on the end-user device</strong>.</p>
<p><strong>Tool: GreenFrame.io or Lighthouse (Carbon Indicator)</strong></p>
<ul>
<li><strong>What it is used for</strong></li>
</ul>
<p>To measure the carbon footprint of a user session in the browser.</p>
<ul>
<li><strong>Setup and use</strong></li>
</ul>
<ol>
<li><strong>CI/CD Integration</strong>: it integrates with GitHub Actions or Jenkins.</li>
<li><strong>Usage</strong>: every time a visual regression test is launched, GreenFrame estimates the grams of CO2 produced by loading the page (data transfer + JS execution on the client).</li>
<li><strong>QA Metric</strong>: “this new Home version weighs 2MB more and generates 0.5g of extra CO2 per visit.” This is reported as a Sustainability Bug.</li>
</ol>
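<p>A rough version of that Sustainability Bug metric can be estimated from transfer size alone. The energy-per-gigabyte and grid-intensity coefficients below are illustrative assumptions (different models, including the one a tool like GreenFrame applies, will yield different figures, which is why the result here differs from the 0.5&nbsp;g example above):</p>

```python
# Rough estimator: grams of CO2 added per visit when a page grows by N bytes.
# Both coefficients are assumed, illustrative values; real tools apply their
# own, more detailed models.

KWH_PER_GB = 0.81          # assumed end-to-end energy per GB transferred
GRID_G_PER_KWH = 442.0     # assumed average grid intensity (gCO2/kWh)

def extra_co2_per_visit(extra_bytes: int) -> float:
    """Grams of CO2 attributable to the extra transfer of one page view."""
    gb = extra_bytes / 1_000_000_000
    return gb * KWH_PER_GB * GRID_G_PER_KWH

# 2 MB of extra payload comes to roughly 0.7 g CO2 per visit under these
# assumptions.
```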
<h2 class="block block-header h--h30-15-400 left  add-last-dot">QA and audit: green quality management</h2>
<p>This is where <strong>you connect data with the testing process</strong>.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Jira and TestRail</h3>
<ul>
<li><strong>Usage in Green QA</strong>: they do not measure carbon by themselves, but they are configured to manage green requirements.</li>
<li><strong>Setup</strong>
<ul>
<li><strong>Jira</strong>: create custom fields such as Estimated Carbon Cost in User Stories.</li>
<li><strong>TestRail</strong>: create a “Green Test Cases” section where you validate that the app enters power-saving mode or does not make unnecessary API requests.</li>
</ul>
</li>
</ul>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">ESG data platforms (Environmental, Social, and Governance)</h3>
<ul>
<li><strong>What they are</strong></li>
</ul>
<p>Repositories where all evidence for legal audits is stored.</p>
<ul>
<li><strong>Usage in Green QA</strong></li>
</ul>
<p>The result of your green tests is uploaded here as proof of regulatory compliance (for example, to comply with the CSRD directive in Europe).</p>
<ul>
<li><strong>Pipeline configuration</strong></li>
</ul>
<p>For this to be true “Green QA,” the flow must be:</p>
<ol>
<li><strong>Define thresholds</strong>: in Jira, set an energy consumption limit per feature.</li>
<li><strong>Measure</strong>: during execution (Cucumber/Playwright), monitor consumption with tools such as Scaphandre or Intel Power Gadget.</li>
<li><strong>Visualize</strong>: cross that data with the Azure Emissions Dashboard.</li>
<li><strong>Audit</strong>: export reports to Persefoni or Plan A for annual accounting.</li>
</ol>
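<p>Condensed into code, the four steps reduce to a verdict plus an evidence payload. This is a sketch only; the field names are hypothetical, and in a real pipeline the threshold would come from Jira (step 1), the measurement from Scaphandre or similar (step 2), and the payload would feed the dashboard and the ESG audit trail (steps 3 and 4):</p>

```python
# Minimal sketch of the four-step Green QA flow: compare a measured run
# against its energy limit and build the evidence record that downstream
# dashboards and ESG tooling would consume. Field names are hypothetical.

def green_verdict(feature: str, limit_wh: float, measured_wh: float) -> dict:
    """Compare a measured run against its energy limit and build the evidence."""
    return {
        "feature": feature,
        "limit_wh": limit_wh,
        "measured_wh": measured_wh,
        "passed": measured_wh <= limit_wh,
    }

report = green_verdict("checkout-v2", limit_wh=10.0, measured_wh=12.4)
# report["passed"] is False: the release is blocked and the evidence exported.
```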
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Conclusions</h2>
<p>The transition process within an organization to embrace GQA involves a series of unavoidable steps and an adaptation process at every level.</p>
<p>It requires involvement from both the business and technical sides, beginning with cultural change and continuing through methodological and strategic adaptation.</p>
<p>All these changes will help the company comply with the legal framework already mentioned in the previous post.</p>

            ]]>
        </content:encoded>
    </item><item>
        <dc:creator>
            <![CDATA[ Eva Ferrer ]]>
        </dc:creator>
        <title>What Should Be Considered Before the First Sprint?</title>
        <link>https://en.paradigmadigital.com/organizational-transformation-rev/what-should-be-considered-before-first-sprint/</link>
        <pubDate>Thu, 26 Mar 2026 07:00:00 GMT</pubDate>
        <guid isPermaLink="true">https://en.paradigmadigital.com/organizational-transformation-rev/what-should-be-considered-before-first-sprint/</guid>
        <description>Many times, a project doesn’t fail because of execution, but due to a lack of initial alignment. Misaligned expectations, unclear goals, or decisions made without context can create friction. Taking the time to understand each other before starting brings clarity and leads to better outcomes. In this post, we show you how to do it.
</description>
        <content:encoded>
            <![CDATA[
                <p>If you are familiar with Agile methodologies in general and Scrum in particular (and especially if you practice it), I know you will tell me that <strong>Scrum does not recognize the existence of a Sprint Zero</strong>. This post is not about explaining what a Sprint is or clarifying what the term “time-box” implies or does not imply, but about <strong>giving a place to this period of time that is needed at the start of a project</strong>. We can call it whatever we want, but <a href="https://www.paradigmadigital.com/techbiz/sprint-0-clave-la-gestion-proyectos-agiles/" target="_blank">Sprint Zero</a> is a necessary good if we want to avoid surprises halfway through… or even at the end.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">My experience</h2>
<p>I have been managing projects for almost fifteen years — first under a waterfall methodology, and now I am <strong>an agile practitioner</strong> (or at least I try to be), working with <strong>Scrum</strong>. When humanity took that great leap toward true value delivery, leaving behind endless Project Management Plan documents printed in A3, we also forgot to reserve, through a time-box, this <strong>space to understand the project, the client, and the team</strong> — to make initial contact, put faces to names, and start getting to know each other.</p>
<p>Defining the rules of the game, understanding needs, feeding the backlog, making initial estimates… <strong>all of this is just as important as delivering a Minimum Viable Product (MVP)</strong>. In fact, it is <strong>essential</strong> for the MVP to meet the expectations of all stakeholders. Because sometimes — just sometimes — in that first contact, one side realizes the other cannot deliver what was expected, and we avoid discovering this when half the work is already done.</p>
<p>It is true that <strong>working in consulting is not the same as working for a final client</strong>. Consultants serve clients who have a product to sell, which somehow also becomes our product. But this service has an end date, so generally <strong>in consulting we tend to request these time-boxes</strong> to align expectations and ensure deliverables with greater reliability. On the client side, where information is more direct and immediate, there is often a tendency to think that these spaces — where the team reflects, plans, aligns, and gathers context — are unnecessary.</p>
<p>This initial phase of the project provides a series of <strong>advantages</strong> that allow us to start the <strong>first Sprint with a higher chance of success</strong>, compared to diving straight into it without fully understanding what we are facing:</p>
<ol>
<li><strong>You will get to know your client</strong></li>
</ol>
<p>And to do so, you will make time — even when there is none — for kickoff meetings where the team presents what has been understood from the agreed scope (the project charter) and the conclusions drawn from this initial phase, where sometimes people only understand what they want (or are able) to understand.</p>
<p>From these meetings, questions will arise that may (or may not) change the product scope and even increase its value — because, honestly, they had not even been considered before. And what better value delivery than adding value to your own product?</p>
<ol start="2">
<li><strong>You will get to know the team and start building relationships</strong></li>
</ol>
<p>I’m not talking about having “a good team” in abstract terms, but about <strong>creating the feeling of moving as one</strong>, rowing in the same direction. In those early days, you begin to see — and be seen: strengths, habits, styles, and how each person reacts under pressure or uncertainty.</p>
<p>That knowledge is invaluable because it allows you to anticipate and manage risks and <a href="https://www.paradigmadigital.com/transformacion-organizacional-rev/toma-control-estrategias-gestion-dependencias-activa/" target="_blank">dependencies</a> that are not visible at first: misunderstandings, misaligned expectations, misinterpreted silences, or decisions no one dares to make without a clear framework.</p>
<ol start="3">
<li><strong>You will help the team and the client get to know each other</strong></li>
</ol>
<p>In those early meetings filled with questions and clarifications (business, technical, and “how do we actually do this?”), the <strong>real starting point</strong> is built. If mutual trust is established during this period, everything flows better afterward: when the team proposes something unexpected, the client will at least listen; and when business priorities change mid-way, the team will treat it as what it is — an adjustment, not an impossible drama.</p>
<ol start="4">
<li><strong>Together, we establish the rules of the game</strong></li>
</ol>
<p>This already accounts for half the success of the deliverable: sprint duration, daily stand-up time, planning/review/retro agendas, estimation approach, and how we write user stories. We also set up the basics: repositories, access, tools (Jira or not?), and initial technical decisions to avoid improvisation. With all this, we outline a <strong>lightweight initial roadmap</strong> to understand where we are starting and what milestones lie ahead.</p>
<ol start="5">
<li><strong>The Product Owner can leave this phase with an initial Product Backlog</strong></li>
</ol>
<p>A first version — but enough to build an MVP that becomes the foundation for everything else. As Scrum dictates, the backlog <strong>will evolve throughout the product lifecycle</strong>, but defining initial epics, stories, and tasks gives us a solid starting point to begin building (and learning) from Sprint 1.</p>
<ol start="6">
<li><strong>From all meetings, impressions, ideas, and decisions, an initial <a href="https://www.paradigmadigital.com/dev/gestion-riesgos-entornos-agiles/" target="_blank">risk management</a> can be established</strong></li>
</ol>
<p>This allows us to create an <strong>initial picture</strong> of potential scenarios the team may face and try to eliminate or mitigate them before they become real problems.</p>
<p>All of this <strong>lays the foundation of the project</strong>, from which the path begins — not only toward successful deliverables but toward doing things properly, always, always through alignment.</p>
<h2 class="block block-header h--h30-15-400 left  ">In real life…</h2>
<p>Let me share a real case that reinforced all of this for me. I was assigned to a project with a client we had worked with before — in theory, nothing should have surprised us. <strong>A short, urgent project</strong>: the client wanted a deliverable by early year, and we received it in mid-November. December in between, with holidays already scheduled… so the margin was tight.</p>
<p>Looking ahead, we realized it was <strong>necessary to discover each other</strong> to strengthen the product and build as much as possible at the highest possible speed, delivering a quality MVP that could grow incrementally.</p>
<p>We spent two weeks getting to know each other: 10 working days where, based on the project initiation document, <strong>we worked closely with the client</strong> to resolve requirement doubts, extract all necessary User Stories, estimate them, and therefore estimate the entire project.</p>
<p>Given the impossibility of delivering everything desired by the client’s deadline, the idea was to <strong>propose an MVP</strong> that would evolve through value deliveries every two weeks, with new features and deployment milestones until completing everything defined in the requirements document.</p>
<p>We started with enthusiasm, determined to do it right. We explained that sprints would be 2-week time-boxes, used every available moment to gather more information, clarify doubts, define how features could be more effective, and build a roadmap with milestones and value deliveries.</p>
<p>The <strong>first week went smoothly</strong>: committed client, motivated team… what could go wrong? By the <strong>second week</strong>, the client stopped attending meetings, became vague in responses, and stopped replying. Before we could finalize the roadmap and delivery plan, they canceled the project.</p>
<p>A failure? For me, quite the opposite: it was an early signal that saved us weeks of pressure and wasted effort. Without that initial phase, we would have discovered it much later — with work already done, an exhausted team, and a frustrated client.</p>
<h2 class="block block-header h--h30-15-400 left  ">What conclusion do we draw?</h2>
<p>So, <strong>was Scrum wrong for not “inventing” a Sprint Zero?</strong> From a strict Scrum perspective, it may not make sense: a Sprint is a Sprint. But real life — especially in consulting — rarely starts under ideal conditions, and that’s where this discovery time-box becomes an ally.</p>
<p>In today’s context, with the growing <strong>need for adaptability</strong>, it makes sense to go through this discovery phase with the client — something that has existed since the origins of project management.</p>
<p>What would have happened in that project where the client withdrew in week two? Would we have realized so quickly that we were not aligned? Probably not. We would have gone through <strong>six weeks of intense pressure</strong>, trying to deliver something unachievable, with a stressed team, a frustrated client, and quality and methodology both questioned.</p>
<p>Traditional project management already accounted for something similar: <strong>before execution, you needed to initiate and plan — understand the context, align expectations, and define a roadmap</strong>.</p>
<p>Call it what you want, but the need to understand the client, the problem, and the project conditions is not new — what has changed is how we do it. Today, we don’t aim for exhaustive waterfall-style planning, but for a lightweight and adaptive start: empathizing with goals, clarifying needs, making assumptions visible, identifying risks, and leaving with a first working vision.</p>
<p>In short: <strong>think just enough at the beginning so you don’t pay later for not thinking at all</strong>.</p>
<p>At Paradigma, we apply this mindset through <a href="https://en.paradigmadigital.com/offering/polaris/" target="_blank">Polaris</a>, our framework, where we maintain Agile principles while adapting to each client’s lifecycle:</p>
<ul>
<li><strong>We keep the Agile essence</strong>, applying the <strong>80/20 rule</strong>: 80% is team attitude, 20% is practices, tools, and experience.</li>
<li><strong>The Scrum Master evolves into an Agile Delivery Leader</strong>, ensuring coordination, value focus, and clarity.</li>
<li><strong>We follow a shared True North</strong>, guiding decisions and maintaining direction.</li>
<li><strong>We work in a 100% adaptive framework</strong> with clear phases: pre-project, kickoff (Sprint Zero), iterations, and closure.</li>
<li><strong>We proactively manage risks</strong> from day one.</li>
<li><strong>We aim for productivity</strong>, doing what is necessary first, then what is possible, and eventually what seemed impossible.</li>
<li><strong>We rely on metrics to make decisions</strong>, adapting them to each context.</li>
<li><strong>Quality is non-negotiable</strong>, ensuring superior products.</li>
<li>We understand that software is about solving business problems, so <strong>our approach is always aligned with the client</strong>.</li>
<li><strong>Team experience matters</strong>, and relationships with clients are key to success.</li>
</ul>
<p>And I’ll close with something simple: life gives us the ability to choose — also when managing projects and trusting others with our products. <strong>Nothing guarantees 100% success</strong>, but if we can choose, I’m clear: better to get to know each other before jumping to conclusions, better to empathize before blaming.</p>
<p>Better to discover things early than end up with a list of “I told you so” that leads nowhere… and certainly not to product success.</p>

            ]]>
        </content:encoded>
    </item><item>
        <dc:creator>
            <![CDATA[ Andrés Macarrilla ]]>
        </dc:creator>
        <title>AI Platforms: From Theory to Practice</title>
        <link>https://en.paradigmadigital.com/techbiz/ai-platforms-from-theory-to-practice/</link>
        <pubDate>Tue, 24 Mar 2026 07:00:00 GMT</pubDate>
        <guid isPermaLink="true">https://en.paradigmadigital.com/techbiz/ai-platforms-from-theory-to-practice/</guid>
        <description>Artificial Intelligence has already proven its potential in proofs of concept. Now the challenge is different: bringing it into production in an efficient, scalable, and secure way. In this post, we explain how to move from theory to practice in applying AI.
</description>
        <content:encoded>
            <![CDATA[
                <p>In the previous post, we were overwhelmed by the sheer number of <a href="https://en.paradigmadigital.com/techbiz/from-poc-to-production/" target="_blank">ticketing tools, quality issues, and bottlenecks when trying to bring everything into production</a>. We analyzed how the gap between a successful PoC and industrialized production is real — and very costly.</p>
<p>The good news is that the industry has already found the solution, and it’s not something invented overnight. The answer, <a href="https://en.paradigmadigital.com/techbiz/2026-the-year-ai-platform/" target="_blank">as we have been advocating</a>, lies in <strong>applying the most rigorous discipline we have</strong>: <a href="https://en.paradigmadigital.com/techbiz/how-to-release-platform-as-a-product/" target="_blank">Platform Engineering</a>. The days of risk-free tinkering (the AI preparation phase) are over; now it’s time for the <strong>application of AI in production</strong>, at scale, with responsibility and speed.</p>
<p>The need for an AI platform is not just a trend — it is the <strong>AI-native infrastructure</strong> that leading companies, including some of our clients, are already building in the market today.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">The industry is already there, and there are reference solutions</h2>
<p>The truth is, we don’t need to be guinea pigs. The heavyweights are already clearly stating what we are seeing: <strong>AI needs Platform Engineering</strong>.</p>
<p><strong>Google and Thoughtworks</strong> are just two examples of how the industry’s vision is converging. The message is clear: if you want to scale MLOps consistently, you need <a href="https://en.paradigmadigital.com/techbiz/understanding-golden-paths-practical-guide/" target="_blank">Golden Paths</a> and the abstraction that only Platform Engineering can provide. It’s not just about having more GPUs — it’s about <strong>making those GPUs consumable</strong> without a 300-page manual.</p>
<p><strong>What does this mean?</strong> It means that the goal is not just to support current models, but to <strong>create AI-native infrastructure</strong>. In other words, a platform designed from the ground up to orchestrate the complexity of AI agents, vector data, and the MLOps lifecycle. We are talking about a platform where automation and intelligence are built in.</p>
<p>If industry leaders have already validated this <em>blueprint</em>, the question is not <em>if</em> we should build it, but <strong>when</strong> we start.</p>
<p>An AI platform is rarely built from scratch using a single product. It is based on an <strong>intelligent orchestration of components</strong> that reduce friction and provide flexibility. For teams starting this journey, it is <strong>crucial to identify the stack</strong> that will serve as the foundation. An example of an industrialized Golden Path includes tools such as:</p>
<ul>
<li><strong>Pipeline orchestration: Kubeflow or MLflow</strong>, to standardize workflows (training, packaging, and model deployment).</li>
<li><strong>Training data management (Feature Store)</strong>: tools like <strong>Feast</strong>, ensuring data consistency between training (offline) and prediction (online).</li>
<li><strong>Model serving and MLOps core</strong>: using Kubernetes for deployment is just the beginning. Specialized solutions like <strong>KServe</strong> help manage scaling and model monitoring complexity.</li>
</ul>
<p>The key is not adopting all these tools, but ensuring that <strong>the AI Platform abstracts them</strong>, so Data Scientists only interact with the simplified Golden Path we define.</p>
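<p>As a rough illustration of that abstraction (not any specific vendor’s API; every class name and step below is invented), the Data Scientist makes one call and the platform fans it out to the orchestrator, the Feature Store, and the serving layer:</p>

```python
from dataclasses import dataclass, field

# Hypothetical sketch: one Golden Path entry point that hides the stack
# (orchestrator, Feature Store, serving backend) behind a single call.
# All names are illustrative, not a real platform API.

@dataclass
class GoldenPathRequest:
    model_name: str
    dataset: str
    serving: str = "kserve"        # backend chosen by the platform team
    orchestrator: str = "kubeflow"

@dataclass
class AIPlatform:
    deployments: list = field(default_factory=list)

    def deploy_model(self, request: GoldenPathRequest) -> dict:
        """Translate one simple request into the concrete stack steps."""
        steps = [
            f"{request.orchestrator}: run training pipeline on {request.dataset}",
            f"feast: materialize features for {request.model_name}",
            f"{request.serving}: create inference service {request.model_name}",
        ]
        deployment = {"model": request.model_name, "steps": steps, "status": "deployed"}
        self.deployments.append(deployment)
        return deployment

platform = AIPlatform()
result = platform.deploy_model(GoldenPathRequest("churn-predictor", "s3://datalake/churn"))
print(result["status"])  # the Data Scientist only ever sees this one call
```

<p>The design point is that swapping Kubeflow for MLflow, or KServe for another serving backend, changes only the platform internals, never the Golden Path the Data Scientist interacts with.</p>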
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Real use cases: tangible ROI</h2>
<p>CxOs love the word <strong>ROI</strong>. How do we evangelize the value of this platform? Simple: <strong>with facts</strong> — showing where self-service and traceability generate revenue or mitigate risk.</p>
<p>Let’s ground this with three examples where the <strong>AI Platform provides the strongest foundation</strong>:</p>
<ul>
<li><strong>Retail and personalization at scale</strong></li>
</ul>
<p>A platform that enables deploying a <strong>recommendation engine</strong> for each customer segment (or even each individual) in a matter of hours. The advantage: if Data Scientists can iterate on recommendations ten times faster thanks to an automated Golden Path, the company sells more. The business focus is on improving the algorithm’s Time-to-Market.</p>
<ul>
<li><strong>Dynamic fraud detection</strong></li>
</ul>
<p>Consider risk analysis: here, the AI platform not only deploys the detection model but also <strong>ensures auditability</strong> and <strong>real-time monitoring of model drift</strong>. If the model starts to fail, the platform detects it and automatically rolls it back, protecting the company from losses running into the millions or from regulatory failures.</p>
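<p>A drift guard of this kind can be sketched in a few lines (a toy illustration with invented model names, metrics, and thresholds, not a real monitoring product):</p>

```python
# Illustrative sketch: a platform-side guard that watches a fraud model's
# live metric and rolls back automatically when drift exceeds a tolerance
# agreed with the business. All values are invented.

def check_drift(baseline_auc: float, live_auc: float, tolerance: float = 0.05) -> bool:
    """Drift = live performance falling more than `tolerance` below baseline."""
    return (baseline_auc - live_auc) > tolerance

def guard_model(active: str, previous: str, baseline_auc: float, live_auc: float) -> str:
    """Return the model version that should keep serving traffic."""
    if check_drift(baseline_auc, live_auc):
        # The platform records the event and restores the last healthy version.
        return previous
    return active

serving = guard_model("fraud-v7", "fraud-v6", baseline_auc=0.92, live_auc=0.81)
print(serving)  # drift detected: the previous version takes over
```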
<ul>
<li><strong>Internal IT and automation</strong></li>
</ul>
<p>We can use the AI platform to enable Data Scientists to build internal AI agents that automate support ticket management or optimize cloud resources. Moving from “we have an AI agent in testing” to “the agent resolves 20% of L1 incidents” is only possible with a platform that reliably manages its lifecycle and state.</p>
<p>There are countless use cases. As experts in your vertical and business domain, you will surely identify many more opportunities to capitalize on.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">The CxO roadmap: beyond code and technology</h2>
<p>This is where the CxO shifts into strategic consultant mode. Technology is the <strong>how</strong>, but strategy and people are the <strong>what</strong>. We cannot talk about an AI Platform without addressing <strong>change management</strong>.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Strategic alignment: connecting DORA to the balance sheet</h3>
<p>We need to translate <em>Feature Store</em> jargon into business language — the language of those who unlock investment.</p>
<p>Using principles from our <a href="https://en.paradigmadigital.com/techbiz/what-platform-engineering-is-what-it-is-not/" target="_blank">Platform Engineering series</a>:</p>
<ul>
<li><strong>Business KPIs and MLOps metrics</strong>: demonstrating that reducing Lead Time for Changes (a DORA metric we should use to measure the platform) directly translates into faster product launches and reduced risk. This is the only way to justify the investment with evidence.</li>
</ul>
<p>Additionally, for <strong>AI and models</strong>, we can introduce specific metrics:</p>
<ul>
<li><strong>PoC-to-Prod Ratio</strong>: the percentage of Proofs of Concept that reach production within 90 days. The platform should increase this ratio from the current 10–20% to over 70%.</li>
<li><strong>Model Lift</strong>: measuring the incremental improvement in business metrics (e.g., conversion increase, fraud reduction) before and after deploying the model through the platform.</li>
<li><strong>MLOps TCO Reduction</strong>: quantifying operational cost savings achieved through self-service and automation, eliminating dependency on TicketOps and manual infrastructure configuration.</li>
</ul>
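<p>These three metrics are deliberately simple ratios, which is exactly why they work in a board meeting. A sketch with invented numbers:</p>

```python
# Back-of-the-envelope sketch of the three platform metrics.
# All figures are invented for illustration.

def poc_to_prod_ratio(pocs_started: int, in_prod_within_90d: int) -> float:
    """Share of PoCs that reach production within 90 days."""
    return in_prod_within_90d / pocs_started

def model_lift(metric_before: float, metric_after: float) -> float:
    """Relative improvement in a business metric after platform deployment."""
    return (metric_after - metric_before) / metric_before

def tco_reduction(cost_ticketops: float, cost_selfservice: float) -> float:
    """Operational cost saved by replacing TicketOps with self-service."""
    return (cost_ticketops - cost_selfservice) / cost_ticketops

print(f"PoC-to-Prod: {poc_to_prod_ratio(20, 14):.0%}")     # 70%
print(f"Model Lift:  {model_lift(0.031, 0.036):.1%}")       # conversion rate before/after
print(f"TCO saved:   {tco_reduction(480_000, 310_000):.0%}")
```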
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Change management: bridging Data Science and engineering</h3>
<p>This is probably the hardest part. We need to <strong>reconcile and empower both worlds</strong>:</p>
<ul>
<li><strong>Data Scientists are not DevOps — and they shouldn’t be</strong>. That’s why the platform is essential as a bridge. We must establish an <strong>internal product culture</strong>: the platform team provides services and capabilities, and the Data Science team is the customer.</li>
<li><strong>Training and new roles</strong>: invest in developing <strong>Machine Learning Engineers</strong>, the glue between Data Scientists and Platform Engineers.</li>
</ul>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/gestion_cambio_data_scientist_ingenieria_f0be4bd0fa.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/gestion_cambio_data_scientist_ingenieria_f0be4bd0fa.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/gestion_cambio_data_scientist_ingenieria_f0be4bd0fa.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/gestion_cambio_data_scientist_ingenieria_f0be4bd0fa.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/gestion_cambio_data_scientist_ingenieria_f0be4bd0fa.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="Change management, ROI use cases and the CxO roadmap: strategy and people" title="Change management"/></article>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">The first step is technological abstraction and addressing real business needs</h2>
<p>The industry is doing it. At Paradigma, we are already doing it. We see that <strong>the benefits are tangible</strong>, and above all, we see that <strong>the risk of falling behind is unacceptable</strong>. AI is the new competitive battlefield.</p>
<p>So, where do we start?</p>
<p><strong>My recommendation for all C-level executives, decision-makers, and business unit leaders responsible for technology decisions is simple</strong>: start with the <strong>highest-friction components</strong> — those that consume the most effort without directly impacting the business.</p>
<p>Focus on what is <strong>most valuable to optimize and industrialize</strong>, and what accelerates the journey from idea to production.</p>
<ul>
<li><strong>Do not try to build the entire Internal Developer Platform (IDP) at once</strong>. That would be a mistake. Be pragmatic.</li>
<li>Start with capabilities like a <strong>Feature Store</strong> to solve data quality issues, or a <strong>Model Registry</strong> to address traceability and governance of model usage within your organization.</li>
</ul>
<p>These are just a couple of examples, but those familiar with the process and its pain points will be able to identify “small wins” that gradually build momentum and shape the AI Platform.</p>
<p>Any effort that replaces manual work with self-service and increases standardization is paving the way toward that <strong>inevitable AI Platform</strong>. The platform is not built in a day; it is a journey. The best way to evangelize and generate traction is to demonstrate value through small but solid Golden Paths.</p>
<p>I hope this series of posts has sparked your curiosity about why this topic is so important, provided strategic justification through real-world challenges, and most importantly, given you ideas on where to start building a practical roadmap that delivers value and helps lead the AI conversation in your organization.</p>
<p>If you’d like to revisit the previous two posts in the series, here they are:</p>
<ul>
<li><a href="https://en.paradigmadigital.com/techbiz/2026-the-year-ai-platform/" target="_blank">2026 will be the year of AI platforms</a></li>
<li><a href="https://en.paradigmadigital.com/techbiz/from-poc-to-production/" target="_blank">From PoC to production: many proofs of concept, but few in production</a></li>
</ul>

            ]]>
        </content:encoded>
    </item><item>
        <dc:creator>
            <![CDATA[ Andrés Macarrilla ]]>
        </dc:creator>
        <title>From PoC to Production: Many Proofs of Concept, but Few in Production</title>
        <link>https://en.paradigmadigital.com/techbiz/from-poc-to-production/</link>
        <pubDate>Tue, 17 Mar 2026 07:00:00 GMT</pubDate>
        <guid isPermaLink="true">https://en.paradigmadigital.com/techbiz/from-poc-to-production/</guid>
        <description>AI only generates value when it reaches production and becomes integrated into real products and processes. Without platforms that automate deployments, data governance, and security, models remain experiments. The difference between an interesting PoC and a competitive advantage lies in the ability to bring AI into production. The question is: how? We explain it in this post.
</description>
        <content:encoded>
            <![CDATA[
<p>We have spent more than two very intense years with Artificial Intelligence everywhere and at every level. This has triggered and driven <strong>an enormous number of proofs of concept</strong>, applying <strong>different technologies to different use cases</strong> to demonstrate potential returns for businesses and customers.</p>
<p>All of this has worked relatively well. Beyond the constant change and evolution across the entire technological landscape related to AI, <strong>there are no major problems until the moment comes to make it production-ready</strong>.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">The reality of AI in production</h2>
<p>As we already mentioned in the first post of this three-part series, <a href="https://en.paradigmadigital.com/techbiz/2026-the-year-ai-platform/" target="_blank">the AI Platform is no longer an option but a necessity</a>. But <strong>why is this need so urgent?</strong> In my opinion, it’s because we are losing money, time, and opportunities every single day.</p>
<p>Most companies are trapped in a discouraging <strong>“Groundhog Day” loop</strong>: <strong>proofs of concept tend to succeed, but moving them into production is painful</strong>. The reality is that Data Scientists perform magic in their notebooks and the results are promising, but when trying to move those models into the real world, they hit a wall of bureaucracy, tickets, lack of standardized tools, missing automation, and security team bottlenecks.</p>
<p>It becomes a nightmare. And that nightmare translates into <strong>very concrete bottlenecks and problems</strong> that the AI Platform must eliminate.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Problem 1: self-service and the pain of TicketOps</h2>
<p>Let’s put ourselves in the shoes of the AI team. They have just trained a model that could save millions, and all they need, surprisingly enough, is <strong>to deploy it</strong>.</p>
<p>What happens next? In 90% of cases, they need to contact an infrastructure team, open a ticket, wait three days to get a namespace assigned, request network permissions, and hope for access to a feature database. This is what we call <strong>&quot;TicketOps&quot;</strong>, and it is a major problem for Time-to-Market.</p>
<p><strong>⚠️ Important:</strong> this process is necessary. We should not underestimate the potential security or regulatory compliance issues that could arise.</p>
<p>However, the situation is becoming even worse. With the rise of LLMs, the problem has grown significantly. Deploying an LLM is very different from deploying a simple regression model. It may require GPUs, specialized storage, and APIs that manage the complexity of prompting and memory usage. Without a self-service tool that allows this to be deployed in minutes, it simply does not happen. The democratization of LLMs ends right there.</p>
<p>Previously, a Data Scientist could request a server and that solved everything. Today, it is necessary to deploy a microservice that consumes an LLM, uses a <strong>vector database</strong> for long-term memory, and also requires <strong>dedicated access to a GPU node</strong> in the cluster. If that process cannot be reduced to a single command in our IDP, we are creating a <strong>technical barrier</strong> for every new AI idea — directly impacting the business and, therefore, our customers.</p>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/autoservicio_ticketops_8e39da2e00.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/autoservicio_ticketops_8e39da2e00.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/autoservicio_ticketops_8e39da2e00.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/autoservicio_ticketops_8e39da2e00.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/autoservicio_ticketops_8e39da2e00.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="Self-service and TicketOps" title="Self-service and TicketOps"/></article>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Problem 2: compliance and regulatory requirements</h2>
<p>When we talk about AI, <strong>the risk is enormous</strong>. A model that makes biased decisions or operates with outdated (or private) data can cost a fortune in fines or even <strong>destroy a company’s reputation</strong>.</p>
<p>Here, the problem is twofold:</p>
<ol>
<li><strong>Quality</strong></li>
</ol>
<p>The AI Platform must guarantee that <strong>training data is consistent</strong>. If we do not standardize how data is ingested and versioned, the production model will inevitably drift. It is <strong>Garbage In, Garbage Out</strong> taken to the extreme.</p>
<p>The consistency problem is caused by the absence of a <strong>Feature Store</strong>. Data Science teams often calculate the mean or standard deviation of a data column during training, but the code that calculates that feature in real-time in production is different.</p>
<p>A <strong>Feature Store</strong>, managed by the platform, guarantees that the code used to compute a feature is <strong>the same during training and serving (production)</strong>. It is the only way to ensure mathematical consistency.</p>
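<p>The guarantee can be illustrated with a toy z-score feature (invented data; a real Feature Store such as Feast registers the definition centrally rather than as a local function, but the principle is the same):</p>

```python
import statistics

# Sketch of the consistency guarantee a Feature Store gives: the SAME
# function computes the feature offline (training) and online (serving).
# Data and feature choice are illustrative.

def zscore(value: float, mean: float, stdev: float) -> float:
    """The single feature definition, registered once in the store."""
    return (value - mean) / stdev

# Offline: statistics computed over the training dataset and stored.
training_amounts = [120.0, 80.0, 200.0, 95.0, 105.0]
mean = statistics.mean(training_amounts)
stdev = statistics.stdev(training_amounts)

# Online: a live transaction is transformed with the SAME code and the
# SAME stored statistics, so train/serve skew cannot appear.
live_feature = zscore(310.0, mean, stdev)
print(round(live_feature, 2))
```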
<ol start="2">
<li><strong>Security</strong></li>
</ol>
<p>Who has access to the model? How are changes in code and datasets tracked? Without <strong>traceability and security policies by design</strong> (integrated into the <a href="https://en.paradigmadigital.com/techbiz/understanding-golden-paths-practical-guide/" target="_blank">Golden Paths</a>), auditing becomes impossible.</p>
<p>And honestly, the idea of a Data Science team manually configuring firewall rules gives me chills. It’s a disaster waiting to happen.</p>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/cumplimiento_compilance_normativo_16cc14c0c1.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/cumplimiento_compilance_normativo_16cc14c0c1.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/cumplimiento_compilance_normativo_16cc14c0c1.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/cumplimiento_compilance_normativo_16cc14c0c1.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/cumplimiento_compilance_normativo_16cc14c0c1.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="compliance and regulatory requirements" title="Compliance / regulatory requirements"/></article>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Problem 3: Time-to-Market (TTM) pressure</h2>
<p>This is the point that matters most to the business. <strong>What is the value of having an innovative model if it takes six months to reach the customer?</strong> Competitors are not waiting, and Time-to-Market becomes the most critical business metric.</p>
<p>Today, platform teams face enormous pressure to <strong>operationalize AI</strong> (that is, to make it reliable and fast). They must move at the speed of innovation, not at the speed of infrastructure.</p>
<p>The solution, and this is where <a href="https://en.paradigmadigital.com/techbiz/how-to-release-platform-as-a-product/" target="_blank">Platform Engineering</a> starts to feel almost like science fiction, is moving from TicketOps to <strong>Intent-to-Infrastructure</strong>.</p>
<p>In other words, giving a Data Scientist the ability to tell the platform: <em>&quot;I want an environment to train a classification model with a specific dataset.&quot;</em></p>
<p>The platform automatically <strong>translates that intent into all the required infrastructure</strong> (Kubernetes, networking, secure storage) in a matter of minutes.</p>
<p>This removes the biggest bottleneck: <strong>friction and dependency between teams</strong>. It allows us to move from idea to production at the speed the business demands.</p>
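<p>Conceptually, Intent-to-Infrastructure looks like this (a hypothetical sketch: the intent fields and the provisioning steps are invented for illustration, not a real platform API):</p>

```python
# Hypothetical Intent-to-Infrastructure sketch: the Data Scientist declares
# WHAT they need; the platform derives the infrastructure, with security
# policies applied by default rather than by ticket.

INTENT = {
    "goal": "train",
    "model_type": "classification",
    "dataset": "s3://datalake/transactions",
    "needs_gpu": False,
}

def provision(intent: dict) -> list[str]:
    """Translate a declared intent into concrete, policy-compliant steps."""
    steps = [
        f"create namespace ds-{intent['goal']}",
        f"mount read-only dataset {intent['dataset']}",
        "apply default network and security policies",
    ]
    if intent["needs_gpu"]:
        steps.append("schedule on GPU node pool")
    return steps

for step in provision(INTENT):
    print(step)
```

<p>Note that the GPU node, the vector database, or any other specialized resource becomes just one more field in the intent, instead of a new round of tickets.</p>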
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/presion_time_to_market_237899e3e0.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/presion_time_to_market_237899e3e0.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/presion_time_to_market_237899e3e0.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/presion_time_to_market_237899e3e0.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/presion_time_to_market_237899e3e0.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="from TicketOps (bottleneck) to Intent-to-Infrastructure (fast, automated and self-service)" title="Intent-to-Infrastructure"/></article>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Conclusion</h2>
<p>If we combine bureaucracy and rigid processes (often tied to ticket-based workflows), the importance of data quality, and relentless Time-to-Market pressure, <strong>we quickly understand that AI without a platform becomes an expensive and frustrating research project, not a competitive advantage</strong>.</p>
<p>AI platforms are the only tool that allows organizations to maintain <strong>control</strong> without sacrificing <strong>speed</strong>.</p>
<p>We have seen the real problems and bottlenecks that slow down the ability of AI to deliver value to businesses and customers. In the final post of this series, we will stop talking about pain and start focusing on <strong>solutions</strong>.</p>
<p>We will explore <strong>what the industry is doing</strong>, with concrete examples and use cases, and define the <strong>first steps of a practical roadmap</strong> so your team can start building that model factory today.</p>

            ]]>
        </content:encoded>
    </item><item>
        <dc:creator>
            <![CDATA[ Santiago López ]]>
        </dc:creator>
        <title>What Is Green QA? Quality That Breathes</title>
        <link>https://en.paradigmadigital.com/dev/what-is-green-qa-quality-that-breathes/</link>
        <pubDate>Tue, 17 Mar 2026 07:00:00 GMT</pubDate>
        <guid isPermaLink="true">https://en.paradigmadigital.com/dev/what-is-green-qa-quality-that-breathes/</guid>
        <description>Green QA introduces a new way of measuring software quality. In addition to validating functionality and performance, it aims to analyze the energy consumption and carbon footprint of the process. This makes it possible to build more efficient digital products aligned with sustainability criteria.
</description>
        <content:encoded>
            <![CDATA[
                <p>Making software work correctly and ensuring that work processes follow a quality-oriented methodology are not the only goals that should be pursued within QA, QC &amp; Testing and in the software world in general. There is a new frontier: <strong>measuring quality in watts and CO₂</strong>.</p>
<p>Testing everything, all the time, is rarely the most efficient or the most ecological approach. Improving <strong>testing cycles</strong> and adapting both <strong>process quality and product quality</strong> to reduce emissions becomes a new challenge for teams.</p>
<p>In this series of three posts about <strong>Green Quality Assurance</strong>, we will explore this new perspective to understand what it consists of, which frameworks may be most suitable to achieve our goals, and how to establish KPIs and metrics within an organization or project to guide our practices toward greater efficiency and sustainability.</p>
<p>In this first part, we will focus on explaining <strong>what it means to be environmentally conscious in the QA world and how this need arises</strong>, driven by European regulations such as the CSRD (Corporate Sustainability Reporting Directive) and concepts like ESG (Environmental, Social, and Governance).</p>
<h2 class="block block-header h--h30-15-400 left  ">What is Green QA?</h2>
<p>Imagine your quality process becoming an “athlete”: faster, stronger, and much more efficient. <strong>Green Quality Assurance (GQA)</strong> redefines QA, QC, and testing processes so that each one matters, <strong>reducing energy consumption and carbon footprint</strong> without losing the rigor required.</p>
<p>Integrating sustainability into the <strong>quality of the process</strong> (QA activities during software construction) and into the <strong>quality of the product</strong> (QC and testing, validating and verifying the built product) means ensuring that software is not only good in terms of quality, but also <strong>efficient in terms of energy consumption and sustainability</strong>, and aligned with <strong>ESG principles</strong>, which we will explore later.</p>
<p>Organizations that <strong>integrate social responsibility into their development lifecycle</strong>, while maintaining performance and controlling costs, present a <strong>competitive advantage</strong> to potential clients. This allows those clients to improve their sustainability metrics without sacrificing other objectives.</p>
<p>Aligned with the concept of GQA is <strong>GreenCode</strong>, which focuses on coding practices that seek energy efficiency through techniques such as <strong>lazy loading</strong>, <strong>microservices instead of monoliths</strong>, and <strong>lightweight code</strong> that extends the lifespan of the devices on which it runs, among other aspects.</p>
<p><strong>Green IT</strong> goes hand in hand with the above concepts and serves as their foundation. It refers to the use of <strong>hardware with high energy efficiency certifications</strong> and the use of <strong>cloud infrastructure</strong>, since large providers such as Amazon and Google often rely partly on renewable energy sources like solar power to maintain their data centers.</p>
<p>Currently, from a legal perspective, there are <strong>regulations</strong> that require companies to present CSR reports (or Sustainability Reports). At the European level, this is the <strong>CSRD directive</strong>, which in Spain has been mainly integrated through the <strong>Corporate Sustainability Reporting Law (LIES)</strong>. It is important to remember that the CSRD requires the use of the <strong>ESRS (European Sustainability Reporting Standards)</strong>. These standards require certain companies to <strong>break down their energy consumption and emissions</strong> depending on the size and type of company. As you can see, Green QA has legal coverage and, if its objectives are achieved, it helps companies comply with these requirements.</p>
<p>As an example of the use of these regulations and Green QA, a traditional CSR report might say: “we want to be green,” while a report under the <strong>CSRD</strong> would require something like: “our <strong>Green QA</strong> suite reduced CPU consumption by 12%, saving X tons of CO₂ this year.”</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">ESG as DNA</h2>
<p>Continuing with the concepts surrounding GQA, and since we will refer to it frequently, let’s briefly explain what ESG is.</p>
<p><strong>ESG (Environmental, Social, and Governance)</strong> is the set of criteria that investors, governments, and customers use to <strong>measure</strong> whether a company is responsible in terms of <strong>environmental impact, social responsibility, and internal governance</strong>.</p>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/esg_software_development_13ef1bb249.jpeg"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/esg_software_development_13ef1bb249.jpeg 1920w,https://www.paradigmadigital.com/assets/img/resize/big/esg_software_development_13ef1bb249.jpeg 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/esg_software_development_13ef1bb249.jpeg 910w,https://www.paradigmadigital.com/assets/img/resize/small/esg_software_development_13ef1bb249.jpeg 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="ESG in software development" title="ESG in software development"/></article>
<p>When we talk about <strong>Environmental</strong>, we refer to the environmental impact a company has on the planet. Green QA plays a significant role here, helping with aspects such as:</p>
<ul>
<li><strong>Key topics</strong>: carbon footprint, energy efficiency, waste management, and climate change.</li>
<li><strong>In software</strong>: how much energy do your servers consume? Is your code optimized to avoid unnecessarily heating processors?</li>
</ul>
<p>When we talk about <strong>Social</strong>, we refer to how the company manages its relationships with people and society.</p>
<ul>
<li><strong>Key topics</strong>: diversity, human rights, workplace safety, and data protection.</li>
<li><strong>In software</strong>: this includes accessibility (ensuring that people with disabilities can use your application) and ethical data practices (user privacy).</li>
</ul>
<p>Finally, when we talk about <strong>Governance</strong>, we refer to how the company is managed internally — the rules and transparency that guide its operations.</p>
<ul>
<li><strong>Key topics</strong>: business ethics, executive compensation transparency, anti-corruption measures, and legal compliance.</li>
<li><strong>In software</strong>: quality audits, compliance with software regulations, and transparency in development processes.</li>
</ul>
<p>Now that we understand ESG, we can see how the technical quality role can evolve — from guaranteeing the method and the product to also ensuring <strong>ESG compliance</strong>, thereby becoming a guardian of sustainability as well.</p>
<p>Understanding the major ESG pillars (environmental impact, social impact, and internal governance), GQA helps <strong>achieve objectives in the technological domain</strong>:</p>
<ul>
<li><strong>🍃 Environmental</strong>: optimizing automated test suites and improving cloud efficiency to directly reduce the software’s carbon footprint.</li>
<li><strong>🤝 Social</strong>: promoting digital inclusion. Efficient code consumes fewer resources and performs better on older devices, helping combat planned obsolescence.</li>
<li><strong>⚖️ Governance</strong>: generating technical metrics and transparent reports on energy consumption, facilitating compliance with green audits.</li>
</ul>
<p>Organizations that align their quality lifecycle management processes with ESG goals without compromising speed, cost, or performance will gain a <strong>decisive advantage in a market with increasingly demanding CSR requirements</strong>.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Main objectives</h2>
<p>The main objectives of this philosophy are <strong>resource optimization, operational efficiency, and profitability</strong>. Among the goals pursued are <strong>reducing environmental impact</strong> by lowering consumption, <strong>reducing waste and emissions</strong>, and <strong>optimizing the use of material and computational resources</strong>.</p>
<p>We know that every time a test suite runs (especially in the cloud or on large servers), electricity is consumed, which generates a carbon footprint. Reducing this impact through <strong>selective testing</strong> and <strong>efficient test code</strong> should be a key goal.</p>
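<p>As a rough sketch of that selective-testing idea, the snippet below keeps a map from source modules to the test suites that cover them, and selects only the suites affected by a change set. All the file and suite names are invented for illustration:</p>

```javascript
// Hypothetical map from source modules to the test suites that cover them.
const coverageMap = {
  'src/cart.js': ['cart.spec', 'checkout.spec'],
  'src/auth.js': ['auth.spec'],
  'src/ui/banner.js': ['banner.spec'],
};

// Return the minimal set of suites to run for a given change set,
// so untouched suites are skipped and their compute cost is avoided.
function selectSuites(changedFiles) {
  const suites = new Set();
  for (const file of changedFiles) {
    for (const suite of coverageMap[file] ?? []) {
      suites.add(suite);
    }
  }
  return [...suites].sort();
}
```

<p>A change touching only <code>src/auth.js</code> would run a single suite; the energy cost of the unaffected suites is simply never incurred.</p>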
<p>Another objective of Green QA is <strong>alignment with ESG</strong>, providing environmental metrics:</p>
<ul>
<li><strong>Enabling auditable sustainability reports</strong> (test cycle consumption, etc.).</li>
<li><strong>Providing insights for sustainable data governance</strong> (reducing data duplication in TDM test environments), which lowers physical storage requirements in data centers.</li>
<li><strong>Providing audit trails</strong> proving that quality processes comply with the organization’s “Net Zero” policies.</li>
</ul>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Impact areas: where quality becomes green</h2>
<p>To achieve Green QA objectives, we must <strong>intervene in key digital assets</strong>. It’s not only about verifying that software works — it’s about <strong>ensuring it is sustainable</strong>. To do so, we must act in the following areas:</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Validation of digital products and processes</h3>
<ul>
<li><strong>Software lifecycle analysis (S-LCA)</strong>: QA no longer only validates the “Go-Live”; it audits environmental impact from development and testing through deployment and eventual software decommissioning. Its role is to ensure that energy consumption calculations at each stage are accurate and realistic.</li>
<li><strong>Code sustainability verification</strong>: implementing protocols to measure the “energy density” of functions. QA performs efficiency tests to prevent bloatware and ensure the product does not force user hardware obsolescence.</li>
</ul>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Sustainability audits in infrastructure</h3>
<ul>
<li><strong>Cloud provider validation</strong>: QA verifies that the infrastructure hosting the software runs on certified renewable energy and meets efficiency targets such as PUE (Power Usage Effectiveness). We don’t just test in the cloud — we audit that the cloud is green.</li>
<li><strong>Optimization of the digital supply chain</strong>: reviewing third-party libraries and dependencies. Green QA detects inefficient dependencies that consume resources in the background without delivering value.</li>
</ul>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Validation of eco-design in software</h3>
<ul>
<li><strong>Repairability and modularity criteria</strong>: validating that code is structured so it can be easily maintained and updated without refactoring the entire system, saving unnecessary computation cycles.</li>
<li><strong>Data transfer efficiency</strong>: specific tests to reduce the weight of API requests and data traffic, directly lowering electricity consumption in data centers and telecom networks.</li>
<li><strong>Carbon-aware load testing</strong>: measuring not only how many users the system supports but also how much CO₂ the server emits under that load.</li>
</ul>
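<p>The carbon-aware idea above can be reduced to a back-of-the-envelope formula: energy drawn during the test run multiplied by the grid's carbon intensity. The figures below are illustrative placeholders, not measured values:</p>

```javascript
// Estimate CO2e for a load-test run: energy (kWh) x grid intensity (gCO2e/kWh).
// avgPowerWatts, durationHours, and gridIntensityGramsPerKWh are illustrative inputs.
function estimateLoadTestCO2({ avgPowerWatts, durationHours, gridIntensityGramsPerKWh }) {
  const energyKWh = (avgPowerWatts / 1000) * durationHours;
  return energyKWh * gridIntensityGramsPerKWh; // grams of CO2e
}

// A 2-hour run at an average of 400 W on a 250 gCO2e/kWh grid:
const grams = estimateLoadTestCO2({
  avgPowerWatts: 400,
  durationHours: 2,
  gridIntensityGramsPerKWh: 250,
});
// 0.4 kW x 2 h = 0.8 kWh; 0.8 x 250 = roughly 200 gCO2e
```

<p>Comparing this figure across runs makes regressions in energy behavior visible alongside the usual throughput numbers.</p>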
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Ensuring ESG data integrity</h3>
<p>For a sustainability report to be valid, the data must be highly <strong>accurate</strong>. Green QA becomes the <strong>technical auditor</strong> that guarantees the integrity of every metric, a role that requires specific training before it can certify the reported information.</p>
<ol>
<li><strong>Validation of emissions KPIs</strong></li>
</ol>
<p>The QA profile must ensure that calculation algorithms and data sources reflect the real energy consumption of the digital ecosystem:</p>
<ul>
<li><strong>Direct emissions (Scope 1)</strong>: the first level. Validation of data from proprietary infrastructure and on-premise servers.</li>
<li><strong>Purchased energy (Scope 2)</strong>: the second level. Verification of electricity consumption reports from contracted data centers and their energy mix (renewable vs. fossil).</li>
<li><strong>Value chain (Scope 3)</strong>: the third level and the biggest challenge. QA audits the efficiency of third-party APIs and the energy consumption generated by the software on end-user devices.</li>
</ul>
<ol start="2">
<li><strong>Accuracy and reliability of reports</strong></li>
</ol>
<p>Having data is not enough — it must be correct. Techniques are applied to prevent bias in sustainability reports:</p>
<ul>
<li><strong>Stress testing</strong> of carbon footprint calculation models.</li>
<li><strong>Validation</strong> to ensure there are no duplicated emissions counted across departments.</li>
</ul>
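<p>As a minimal sketch of that double-counting check (the record shape is a hypothetical example), the snippet below flags any emission ID reported by more than one department:</p>

```javascript
// Flag emission records counted by more than one department.
// The record shape ({ id, department, kgCO2e }) is a hypothetical example.
function findDuplicatedEmissions(records) {
  const seen = new Map(); // id -> first department that reported it
  const duplicates = [];
  for (const { id, department } of records) {
    if (seen.has(id) && seen.get(id) !== department) {
      duplicates.push(id);
    } else {
      seen.set(id, department);
    }
  }
  return duplicates;
}

const duplicated = findDuplicatedEmissions([
  { id: 'dc-01', department: 'IT', kgCO2e: 120 },
  { id: 'dc-01', department: 'Facilities', kgCO2e: 120 },
  { id: 'hq-02', department: 'Facilities', kgCO2e: 40 },
]);
// duplicated -> ['dc-01']: the data-center record was reported twice
```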
<ol start="3">
<li><strong>Traceability and auditing (Data Lineage)</strong></li>
</ol>
<p>Implementation of traceability tests to ensure that every data point in the annual report can be traced back to its technical origin (server logs, CPU metrics, etc.). If an auditor asks where a number comes from, Green QA has the documented answer.</p>
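<p>A simple way to sketch this lineage is a registry mapping every reported figure to its technical sources, so the auditor's question has a programmatic answer. The KPI names and source identifiers below are hypothetical:</p>

```javascript
// Hypothetical lineage registry: reported KPI -> technical origins.
const lineage = new Map([
  ['annual_cloud_kwh', ['cloudwatch:energy-export-2026', 'invoice:aws-2026']],
  ['ci_pipeline_kwh', ['prometheus:node_power_watts', 'ci:job-duration-logs']],
]);

// Answer "where does this number come from?" for a reported KPI.
function traceKPI(kpi) {
  const sources = lineage.get(kpi);
  if (!sources) throw new Error(`No lineage recorded for "${kpi}"`);
  return sources;
}
```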
<ol start="4">
<li><strong>Consistency in ESG disclosure</strong></li>
</ol>
<p>Ensuring that the data published on the website, in the app, and in the legal <strong>CSRD report</strong> are identical. Automated cross-validation processes should be implemented to avoid discrepancies that could result in legal penalties.</p>
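<p>That cross-validation can be automated along these lines: collect the same figure from each publication channel and report any channel that diverges. The channel names and values below are a hypothetical snapshot:</p>

```javascript
// Compare one ESG figure across publication channels and report mismatches.
// The first channel listed is taken as the reference value.
function checkDisclosureConsistency(valuesByChannel) {
  const entries = Object.entries(valuesByChannel);
  const [, reference] = entries[0];
  return entries
    .filter(([, value]) => value !== reference)
    .map(([channel]) => channel);
}

const mismatches = checkDisclosureConsistency({
  website: 1240,     // tCO2e published on the site
  app: 1240,         // same figure in the app
  legalReport: 1250, // figure in the legal report
});
// mismatches -> ['legalReport']
```

<p>A non-empty result is exactly the kind of discrepancy that should block publication until the sources are reconciled.</p>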
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Application of standards and regulations</h2>
<p>Several frameworks and standards support the implementation of these “green” processes. Here we briefly review them; in future articles we will explain how they can be applied strategically and methodologically:</p>
<ul>
<li><strong>ISO frameworks (14001 and 50001)</strong>: ensuring QA processes are documented to pass external environmental and energy management audits.</li>
<li><strong>CSRD (Corporate Sustainability Reporting Directive)</strong>: the new European directive whose legal sustainability reporting requirements must be technically fulfilled.</li>
<li><strong>EU Taxonomy</strong>: validating that company activities are correctly classified as “green” according to EU technical criteria.</li>
<li><strong>GHG Protocol (Greenhouse Gas Protocol)</strong>: establishing the standard methodology for calculating carbon emissions derived from the software lifecycle. QA must validate energy consumption data collection in Scope 2 (server/cloud energy) and Scope 3 (third-party cloud services and end-user software usage), ensuring emission factors are accurate for carbon footprint reports.</li>
</ul>
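<p>The Scope 2 and Scope 3 arithmetic mentioned for the GHG Protocol boils down to energy consumed multiplied by an emission factor, aggregated per scope. The readings and factors below are invented purely for illustration, not real emission factors:</p>

```javascript
// Aggregate CO2e by GHG Protocol scope: kWh x emission factor (kgCO2e/kWh).
// All readings and factors are illustrative placeholders.
const energySources = [
  { name: 'contracted data center', scope: 2, kWh: 5000, factorKgPerKWh: 0.25 },
  { name: 'third-party cloud services', scope: 3, kWh: 3000, factorKgPerKWh: 0.5 },
  { name: 'end-user devices', scope: 3, kWh: 2000, factorKgPerKWh: 0.25 },
];

function emissionsByScope(sources) {
  const totals = {};
  for (const { scope, kWh, factorKgPerKWh } of sources) {
    totals[scope] = (totals[scope] ?? 0) + kWh * factorKgPerKWh;
  }
  return totals; // kgCO2e per scope
}

const totals = emissionsByScope(energySources);
// totals -> { 2: 1250, 3: 2000 } kgCO2e (illustrative figures)
```

<p>Green QA's job is then to verify that the inputs to this calculation (meter readings, factors, scope assignments) are accurate and traceable.</p>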
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Conclusion</h2>
<p>We have seen how, within the world of software quality, there is an aspect that is rarely considered yet has a significant impact.</p>
<p>Applying GQA not only <strong>improves sustainability</strong> but also <strong>enhances efficiency</strong>, which ultimately reduces costs in both processes and product development.</p>
<p>All these processes are supported by a set of European regulations and their transposition into Spanish law, making them even more relevant since non-compliance can result in financial penalties for companies.</p>
<p>Ultimately, <strong>Green QA is about doing our part to improve life through quality practices.</strong></p>

            ]]>
        </content:encoded>
    </item><item>
        <dc:creator>
            <![CDATA[ Eider Ogueta ]]>
        </dc:creator>
        <title>Angular Signals: The Evolution of Reactivity and Change Detection in Angular</title>
        <link>https://en.paradigmadigital.com/dev/angular-signals-evolution-reactivity-change-detection/</link>
        <pubDate>Tue, 10 Mar 2026 07:00:00 GMT</pubDate>
        <guid isPermaLink="true">https://en.paradigmadigital.com/dev/angular-signals-evolution-reactivity-change-detection/</guid>
        <description>Angular Signals changes the reactivity model in Angular. Instead of traversing the entire component tree on every event, the framework updates only the bindings that depend on the state that has changed. This improves performance and simplifies state management, especially in large applications. Here’s how it works.
</description>
        <content:encoded>
            <![CDATA[
                <p><strong>Reactivity in Angular</strong> is the process by which the framework determines that the <strong>application state has changed</strong> and, if necessary, refreshes the view or the DOM. If you’ve read the article <a href="https://www.paradigmadigital.com/dev/estrategia-deteccion-cambios-la-magia-de-angular/" target="_blank">Change detection strategy, the magic of Angular</a>, you’ll know that Angular bases this detection on a <strong>component tree</strong> and mechanisms such as <strong>NgZone</strong> or <strong>ChangeDetectionStrategy</strong> to control when and how the UI is updated.</p>
<p>Today we’ll go one step further: we’ll see how <strong>Angular Signals</strong>, the new <strong>reactivity API in Angular</strong> introduced experimentally in <strong>Angular 16</strong>, makes it possible to improve UI efficiency, update only what is necessary, and <strong>simplify state management in Angular components</strong>, optimizing the performance of your applications.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">The problem with classic change detection in Angular</h2>
<p>In the <strong>traditional approach</strong>:</p>
<ul>
<li>Each component has its own change detector.</li>
<li>Angular traverses the component tree every time an asynchronous event occurs (click, timer, HTTP request).</li>
<li>In applications with many components or large lists, this creates <strong>unnecessary change detection cycles</strong>, affecting performance.</li>
</ul>
<p>For example, in the previous article we created a <strong>clock that updates every second and a table of users</strong>. The clock update triggered a full <strong>Angular change detection cycle</strong>, recalculating random values in each row, even though the data had not changed.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Angular Signals: how they improve reactivity</h2>
<p>With Angular Signals:</p>
<ul>
<li>Each signal represents a <strong>reactive value</strong>.</li>
<li>Angular updates <strong>only the bindings that depend on that signal</strong>.</li>
<li>Derived values (computed) are recalculated automatically.</li>
<li>There is no need for ChangeDetectionStrategy.OnPush or ChangeDetectorRef to optimize performance.</li>
</ul>
<p>This makes Signals a <strong>key tool for improving performance in Angular</strong>, especially in <strong>large applications</strong> with lists and complex components.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Signals, effects, and derived values</h2>
<p>To better understand it, let’s see how Signals work in Angular.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Create a signal</h3>
<pre><code class="language-javascript">import { signal } from '@angular/core';

const counter = signal(0);
</code></pre>
<ul>
<li>A signal represents a reactive value.</li>
<li>To read it, it is invoked as a function: counter().</li>
<li>To write to it, .set() or .update() is used.</li>
</ul>
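<p>To make that read/write contract concrete outside an Angular project, here is a toy plain-JavaScript stand-in with the same surface (call to read, .set() to replace, .update() to derive). It is not Angular's implementation and does no dependency tracking:</p>

```javascript
// Toy stand-in for Angular's signal(): same read/set/update surface,
// none of the reactive dependency tracking. For illustration only.
function toySignal(initial) {
  let value = initial;
  const read = () => value;
  read.set = (next) => { value = next; };
  read.update = (fn) => { value = fn(value); };
  return read;
}

const counter = toySignal(0);
counter();                  // read -> 0
counter.set(5);             // replace the value
counter.update(v => v + 1); // derive from the current value
counter();                  // read -> 6
```

<p>Angular's real signal() looks the same from the caller's perspective, with the dependency tracking layered on top.</p>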
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Effects (effect)</h3>
<p>Effects are functions that run automatically when the signals they use <strong>change</strong>.</p>
<pre><code class="language-javascript">import { signal, effect } from '@angular/core';
const counter = signal(0);
effect(() =&gt; {
  console.log(`The counter value is ${counter()}`);
});
counter.set(1); // The effect runs and prints “The counter value is 1”
</code></pre>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Computed values</h3>
<p>You can compute a value from one or more signals without having to rewrite additional logic.</p>
<pre><code class="language-javascript">import { computed } from '@angular/core';

const double = computed(() =&gt; counter() * 2);
</code></pre>
<ul>
<li>When counter changes, Angular automatically recalculates double.</li>
<li>computed is <strong>read-only</strong> and cannot be modified manually.</li>
</ul>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Advanced features of Angular Signals</h2>
<p>In addition to what we have seen, Angular Signals includes several <strong>important features</strong>, according to the official documentation:</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Computed is lazy and memoized</h3>
<ul>
<li>It does not execute until someone reads it.</li>
<li>It memoizes its last value.</li>
<li>It only recalculates if one of its dependencies changes.</li>
</ul>
<pre><code class="language-javascript">const counter = signal(0);
const double = computed(() =&gt; {
  console.log('Recalculating...');
  return counter() * 2;
});
</code></pre>
<ul>
<li>If you never call double(), the function is not executed.</li>
</ul>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Dynamic dependency tracking</h3>
<p>Angular only tracks signals that are actually read within a computed or effect.</p>
<pre><code class="language-javascript">const show = signal(false);
const counter = signal(0);

const conditional = computed(() =&gt; show() ? counter() : 0);
</code></pre>
<ul>
<li>As long as show() is false, conditional <strong>does not depend on counter</strong>.</li>
<li>This optimizes performance and avoids unnecessary recalculations.</li>
</ul>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Signals in templates</h3>
<pre><code class="language-html">{{ counter() }}
</code></pre>
<ul>
<li>Angular automatically detects that the template depends on the signal.</li>
<li>The UI updates only when the value changes.</li>
<li>You don't need async pipe, ChangeDetectorRef, or markForCheck.</li>
</ul>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">untracked(): avoid reactive dependencies</h3>
<p>Allows you to read a signal without registering it as a dependency within an effect or computed.</p>
<pre><code class="language-javascript">effect(() =&gt; {
  console.log(`User set to ${currentUser()} and the counter is ${untracked(counter)}`);
});
</code></pre>
<ul>
<li>The effect runs only when <strong>currentUser</strong> changes.</li>
<li>Changes in <strong>counter</strong> do not trigger the effect, but we can still read its current value.</li>
<li>This is useful when you want to read data incidentally without turning it into a trigger for reactivity.</li>
</ul>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Live practical example: clock and user table with Angular Signals</h2>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg" data-src="https://www.paradigmadigital.com/assets/cms/angular_caso_real_a01d6014fb.gif" class="lazy-img" title="Practical example" alt="Real case Angular Signals"></article>
<p>Instead of showing all the code here, we’ve prepared an <strong>interactive example on StackBlitz</strong> so you can see <strong>Angular Signals</strong> working in real time.</p>
<p>In this example you will observe:</p>
<ul>
<li>A <strong>clock</strong> that updates every second.</li>
<li>A <strong>user table</strong> where each row depends on a <strong>signal</strong>, avoiding unnecessary updates.</li>
<li>How the UI updates <strong>only when the relevant state changes</strong>, improving performance compared to classic change detection.</li>
</ul>
<p>Try the live example on <a href="https://stackblitz.com/edit/stackblitz-starters-mpxjrrta?file=src%2Fuser%2Frow%2Frow.component.ts" target="_blank">StackBlitz</a>.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Visual comparison</h2>
<table>
<thead>
<tr>
<th style="text-align:center">Before (Default CD)</th>
<th style="text-align:center">Now (Signals)</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align:center">Full change detection cycle every second</td>
<td style="text-align:center">Only what depends on a signal is updated</td>
</tr>
<tr>
<td style="text-align:center">randomId was recalculated unnecessarily</td>
<td style="text-align:center">randomId remains stable</td>
</tr>
<tr>
<td style="text-align:center">OnPush required for optimization</td>
<td style="text-align:center">No need for OnPush or ChangeDetectorRef</td>
</tr>
<tr>
<td style="text-align:center">Performance affected with many rows</td>
<td style="text-align:center">Performance remains stable even when the clock updates</td>
</tr>
</tbody>
</table>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Conclusion</h2>
<p><strong>Angular Signals</strong> represents the evolution of the change detection model explained in <a href="https://www.paradigmadigital.com/dev/estrategia-deteccion-cambios-la-magia-de-angular" target="_blank">our previous article</a>.</p>
<p>Previously, any asynchronous event could trigger a full change detection cycle. Now, Angular updates <strong>only the bindings that depend on the signals that change</strong>, avoiding unnecessary computations and improving efficiency.</p>
<p>With Signals and derived values (computed), we can build <strong>faster, more predictable, and easier-to-maintain applications</strong>, especially as the component tree grows.</p>

            ]]>
        </content:encoded>
    </item><item>
        <dc:creator>
            <![CDATA[ Iria Calvelo ]]>
        </dc:creator>
        <title>From Vision to Action: Build Your First Roadmap</title>
        <link>https://en.paradigmadigital.com/organizational-transformation-rev/from-vision-to-action-build-first-roadmap/</link>
        <pubDate>Tue, 10 Mar 2026 07:00:00 GMT</pubDate>
        <guid isPermaLink="true">https://en.paradigmadigital.com/organizational-transformation-rev/from-vision-to-action-build-first-roadmap/</guid>
        <description>Have you ever wondered what you should do next, what path to follow, or how to move forward while creating a digital product? If so, today we’ll tell you what you need: a roadmap! In this post, we’ll explain what a roadmap is, and what it isn’t, and you’ll also learn how to build one, always keeping the focus on achieving your product vision and delivering value to your users.
</description>
        <content:encoded>
            <![CDATA[
                <p><strong>Have you ever, while creating a digital product, wondered what you should do next, what path to follow, or how to move forward?</strong> If so, today we’ll tell you what you need: <strong>a roadmap!</strong> In this post, we’ll explain what a roadmap is — and what it isn’t — and you’ll also learn how to build one, always keeping the focus on <strong>achieving your product vision and delivering value</strong> to your users.</p>
<h2 class="block block-header h--h30-15-400 left  ">Before we begin…</h2>
<p>Throughout this post we will use some concepts that are very common when talking about digital product development, but that’s okay if you’re not familiar with all of them. Here are a few to make sure we can provide as much value as possible:</p>
<p><strong>Product vision</strong>: what we want to achieve with our product and how we see it in the future. It’s an inspiring statement that indicates where we want to go and where we want to be in the future.</p>
<p><strong>Product strategy</strong>: the <strong>how</strong> we will achieve our vision. Strategy includes aspects such as differentiation from competitors, the type of users we are creating the product for, and the value proposition. Strategy aligns what we can do with what we should do.</p>
<p><strong>Product backlog</strong>: a prioritized list of the features we need to develop to create solutions for our users’ problems and needs — in other words, the product.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">What is a roadmap</h2>
<p>A roadmap is the <strong>plan to follow</strong> in the development of your product in order to <strong>achieve the vision and meet the objectives</strong>.</p>
<p>Contrary to what some may think, <strong>a roadmap and a project plan are two different tools</strong>, although both are used in software product management and can complement each other.</p>
<p>While a <strong>project plan</strong> is very <strong>operational</strong> and reflects aspects such as scope, budget, dependencies, and tasks in great detail, a roadmap is much more strategic and focuses on <strong>how to achieve the product vision</strong>.</p>
<p>Here are some of the <strong>most significant characteristics</strong> of roadmaps:</p>
<ul>
<li><strong>It is the plan to achieve the vision.</strong> It is not a list of tasks but an action plan that guides the path toward our vision.</li>
<li><strong>Flexible and adaptable tool.</strong> It is dynamic and evolves over time as we receive feedback from users, allowing us to adapt our strategy to achieve the vision.</li>
<li><strong>A communication tool.</strong> It helps align different stakeholders through transparency, ensuring that everyone shares a common vision.</li>
<li><strong>Connects business goals with the plan to achieve them.</strong> The initiatives included in the roadmap, individually or together, aim to achieve the objectives of the product and the company.</li>
</ul>
<p>We can say that a <strong>roadmap</strong> is how we present our <strong>strategy</strong> to achieve our <strong>vision</strong>.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">What a roadmap is NOT</h2>
<p>Now that we’ve discussed what a roadmap is, let’s also clarify what it is not:</p>
<ul>
<li><strong>It is not a list of tasks.</strong> Our roadmap will not include every task we need to develop, but rather a set of initiatives that will guide product or project development.</li>
<li><strong>It is not a timeline.</strong> Although a roadmap may include timeframes and dates, they are more flexible and can change if the strategy evolves based on feedback. A roadmap should always be agile, flexible, and adaptable.</li>
<li><strong>It is not an execution plan.</strong> It does not explain how to carry out each initiative; instead, it focuses on what needs to be achieved.</li>
</ul>
<p>For example, an <strong>initiative in the roadmap</strong> could be improving the onboarding process. The <strong>tasks required to achieve it</strong> might include adding an interactive demo or integrating an assistant to guide users through the process. These tasks might have specific dates, but those would not appear directly in the roadmap.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Steps to create a roadmap</h2>
<p>To <strong>start working on your roadmap</strong>, consider the following:</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Product vision</h3>
<p>The first thing you need to do is ensure that <strong>you clearly understand your product vision</strong>. If not, try asking yourself the following questions to help define it:</p>
<ul>
<li>Why does our product exist?</li>
<li>What problems does it solve or what needs does it cover?</li>
<li>Who will use it?</li>
<li>What would have to happen for us to consider it a success?</li>
</ul>
<p>For example, Slack’s vision is: <em>Make work life simpler, more pleasant and more productive.</em></p>
<p>Here is a <a href="https://www.romanpichler.com/tools/product-vision-board/" target="_blank">template to help you develop your product vision</a>.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Themes</h3>
<p>These are <strong>initiatives or specific projects that help achieve business goals</strong>. These themes will later be broken down into more specific tasks such as user stories.</p>
<p>Example: “Redesign the purchase funnel.”</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Choosing the type of roadmap</h3>
<p>There are several ways to design your roadmap, which will determine the next steps you need to follow. Here are two examples to help you get an idea — although many more exist.</p>
<p><strong>Now–Next–Later roadmap</strong></p>
<p>If we focus on the flexibility of roadmaps, we can talk about a well-known type: <strong>Now, Next, Later</strong>.</p>
<p>In this type of roadmap, no specific timeframes are defined. Instead, we focus on <strong>what we are currently working on</strong> (Now), <strong>what we will work on next</strong> (Next), and <strong>what we plan to do in the future</strong> (Later).</p>
<p>Here is a visual example of a Now–Next–Later roadmap 👇</p>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/roadmap_now_next_later_113cebf3dd.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/roadmap_now_next_later_113cebf3dd.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/roadmap_now_next_later_113cebf3dd.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/roadmap_now_next_later_113cebf3dd.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/roadmap_now_next_later_113cebf3dd.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="example of a now next later roadmap" title="Now–Next–Later"/></article>
<p>As you can see, each theme is represented, and within each column (Now / Next / Later), projects are shown in the color corresponding to their initiative.</p>
<p>Additionally, below the roadmap we display the product vision, since it is essential to keep it in mind when building the roadmap.</p>
<p><strong>Timeline-based roadmap</strong></p>
<p>These roadmaps use a <strong>time scale</strong> or timeframe, which can vary depending on the needs of your product or project: <strong>3 months</strong> (short term), <strong>6 months</strong> (mid term), <strong>12 months</strong> (long term), or even <strong>per sprint</strong> if you work with Scrum.</p>
<p>Our advice is <strong>not to plan your roadmap more than six months ahead</strong>, because the idea behind a roadmap is agility. You should be able to adapt it based on feedback and new needs that arise.</p>
<p>Here is a visual example of a <strong>timeline-based roadmap</strong> 👇</p>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/timeline_based_roadmap_765df01d25.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/timeline_based_roadmap_765df01d25.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/timeline_based_roadmap_765df01d25.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/timeline_based_roadmap_765df01d25.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/timeline_based_roadmap_765df01d25.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="example of a timeline based roadmap" title="Timeline-based roadmap"/></article>
<p>In this example, we see a <strong>roadmap with four different initiatives</strong> that will be developed during Q2. Each initiative or theme has a different color indicating which project or epic it belongs to.</p>
<p>Just like in the previous type of roadmap, the vision is included to ensure it is always considered during roadmap development.</p>
<p>In this case, the <strong>months</strong> are also included, helping provide transparency and alignment with stakeholders and the team.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Key elements</h3>
<p>As you’ve seen, regardless of the type of roadmap that best fits your product or project, <strong>its core elements are the same</strong>:</p>
<ul>
<li><strong>Time scale</strong>, whether a timeline or a Now–Next–Later format.</li>
<li><strong>Themes or initiatives</strong> at a high level.</li>
<li><strong>Projects or epics</strong> for each initiative.</li>
</ul>
<p>And don’t forget to <strong>reflect your product vision</strong>!</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Next steps</h2>
<p>Although we haven’t mentioned it earlier in this post, there are <strong>other important aspects</strong> when creating a roadmap:</p>
<ul>
<li><strong>Prioritization.</strong> We cannot do everything, and certainly not all at once. That’s why we must make decisions and prioritize initiatives. There are different techniques and frameworks that can help with this, as we explained in this post 👉 <a href="https://www.paradigmadigital.com/transformacion-organizacional-rev/desbloqueando-potencial-priorizacion-frameworks-mas-importantes/" target="_blank">Unlocking the potential of prioritization: the most important frameworks</a>.</li>
<li><strong>Communication.</strong> The main value of building a roadmap is that it acts as a tool for alignment and transparency. It helps everyone understand the focus and why certain decisions are made.</li>
<li><strong>Connection with the business.</strong> Every initiative must be linked to a business objective; otherwise, we risk investing time in developments that generate no real value. To achieve this connection, it’s important to stop thinking about outputs and start focusing on outcomes — generating real impact by solving specific problems and needs.</li>
</ul>
<p>There is still much more to say about this last point, which we’ll cover in future posts 😉.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Conclusion</h2>
<p>A roadmap is a <strong>strategic tool</strong> that aligns product vision with strategy.</p>
<p>Also keep in mind that <strong>no roadmap is better than another</strong>. You simply need to adapt it to the specific needs of each project and continue <strong>iterating</strong> until you find the roadmap that best fits your product.</p>
<p>A roadmap is a compass. <strong>Build yours!</strong></p>

            ]]>
        </content:encoded>
    </item><item>
        <dc:creator>
            <![CDATA[ Andrés Macarrilla ]]>
        </dc:creator>
        <title>2026 Will Be the Year of AI Platforms</title>
        <link>https://en.paradigmadigital.com/techbiz/2026-the-year-ai-platform/</link>
        <pubDate>Thu, 05 Mar 2026 07:00:00 GMT</pubDate>
        <guid isPermaLink="true">https://en.paradigmadigital.com/techbiz/2026-the-year-ai-platform/</guid>
        <description>A platform with AI is not the same as a platform for AI. The first enhances the developer experience. The second builds the factory that enables you to deploy dozens of models with control, traceability, and real ROI. In this post, we explore how AI platforms will evolve this year.
</description>
        <content:encoded>
            <![CDATA[
                <p>This year I didn’t have the chance to publish my prediction on <a href="https://en.paradigmadigital.com/dev/from-conversation-to-execution-7-ai-trends-2026/" target="_blank">2026 trends</a>, so I’d like to take this opportunity to share a <strong>series of three posts about what I see as a key trend</strong> in <strong>Artificial Intelligence</strong>, of course.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">From Cognitive Load to Model Load</h2>
<p>If you’ve been following what we’ve published at Paradigma, you already know our mantra and our <a href="https://en.paradigmadigital.com/techbiz/what-platform-engineering-is-what-it-is-not/" target="_blank">Platform Engineering series</a>: the discipline was born to free development teams from operational hell.</p>
<p>Do you remember the maxim <em>You Build It, You Run It</em>? It created an unbearable <strong>cognitive load</strong>. Countless teams, instead of inventing the next feature that brings business value, spent their days fighting with Kubernetes, Prometheus, and a thousand other tools.</p>
<p>The truth is, Platform Engineering has been our great lever to bring order to that chaos and tell development teams: <em>“Focus on the secret sauce, we’ll take care of the kitchen.”</em></p>
<p>The <strong>pattern is always the same</strong> when a technological shift or new trend emerges: we start experimenting, applying it to use cases, and then we feel the need to scale and bring it to production as quickly as possible.</p>
<figure class="block block-caption  -inline-block -like-text-width -center"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/introducing_org_tech_change_b315b81189.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/introducing_org_tech_change_b315b81189.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/introducing_org_tech_change_b315b81189.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/introducing_org_tech_change_b315b81189.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/introducing_org_tech_change_b315b81189.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="Introducing organizational and technological change" title="undefined"/><figcaption>Introducing organizational and technological change</figcaption></figure>
<p>But you know what? Just when we begin to breathe again, the next big wave arrives and it’s… <strong>even more complex</strong>.</p>
<p>In this case, I’m talking (as you might expect) about Artificial Intelligence and Machine Learning. This isn’t just about code and infrastructure; it’s about <strong>data, expiring models, versioning, explainability, and an MLOps lifecycle</strong> that is frankly a <em>superset</em> of what we already had in DevOps.</p>
<p>That’s why my bet is clear: <strong>2026 will not simply continue the “AI everywhere” narrative. It will be the year when the AI Platform becomes a strategic imperative for both business and technology inside companies, not an option.</strong></p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">The Platform Engineering legacy: friction reduction as the foundation for MLOps</h2>
<p><strong>Platform Engineering is our recipe for success</strong>: the use of <a href="https://en.paradigmadigital.com/techbiz/understanding-golden-paths-practical-guide/" target="_blank">Golden Paths</a> to standardize, and the <a href="https://en.paradigmadigital.com/techbiz/platform-as-a-product-scaling-devops-idps/" target="_blank">Internal Developer Platform (IDP)</a> as a self-service entry point so no one has to open a ticket just to spin up an environment.</p>
<p>Now we add ML to the equation. And the problem is that a data scientist cannot spend their days configuring CI/CD pipelines for a model or dealing with networking. They simply can’t (and even if they could, they shouldn’t).</p>
<p><strong>MLOps needs the same Golden Paths</strong> we built for traditional software but extended to more delicate tasks. Imagine a Golden Path that guides you from the experimental notebook where you play with algorithms and models, all the way to production deployment with traceability, feature stores, and model monitoring… all in one go!</p>
<p>And here we see the <strong>first major specialization</strong> emerging. In the coming years, I believe we’ll see a surge in the <strong>Data Platform Engineer</strong> role. This role doesn’t just manage the IDP; it focuses on ensuring the <strong>reliability and governance of the data</strong> that feeds the model.</p>
<p>If the model is a black box, at least let’s make sure the data inputs feeding it are flawless! This is not a minor technical detail; it’s a matter of <strong>corporate trust and compliance</strong>.</p>
<figure class="block block-caption  -inline-block -like-text-width -center"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/mlops_golden_path_e83ed8fdfa.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/mlops_golden_path_e83ed8fdfa.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/mlops_golden_path_e83ed8fdfa.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/mlops_golden_path_e83ed8fdfa.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/mlops_golden_path_e83ed8fdfa.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="MLOps Golden Path" title="undefined"/><figcaption>MLOps Golden Path</figcaption></figure>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Fundamental distinction: Platforms with AI vs. Platforms for AI</h2>
<p>At the CxO level, we must be precise with terminology. Because <strong>using AI to improve your platform is not the same as building a platform to do AI</strong>. The difference determines where you should place the bulk of your AI budget.</p>
<ul>
<li><strong>Platforms with AI (AI-powered platforms).</strong> This is cool, trendy (and like all trends, it will eventually become BAU). It’s about using AI <em>inside</em> the platform to enhance developer experience. Think automated troubleshooting or predictive auto-scaling. Great, but it’s an <strong>add-on</strong> that improves DevEx.</li>
<li><strong>Platforms for AI (AI Platforms).</strong> <strong>This is the real focus.</strong> This means building infrastructure, tools, and Golden Paths <strong>specifically</strong> to support the entire MLOps lifecycle end-to-end. It’s the factory that allows you to move from three models in production to fifty. <strong>It’s the layer that generates new business value.</strong></li>
</ul>
<p><strong>If your AI investment exceeds ten projects per year, you need the second one. There’s no alternative.</strong></p>
<figure class="block block-caption  -inline-block -like-text-width -center"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/plataformas_de_ia_vs_plataformas_para_ia_bf946e3197.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/plataformas_de_ia_vs_plataformas_para_ia_bf946e3197.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/plataformas_de_ia_vs_plataformas_para_ia_bf946e3197.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/plataformas_de_ia_vs_plataformas_para_ia_bf946e3197.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/plataformas_de_ia_vs_plataformas_para_ia_bf946e3197.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="Platforms with AI vs. Platforms for AI" title="undefined"/><figcaption>Platforms with AI vs. Platforms for AI</figcaption></figure>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Why 2026 is the tipping point</h2>
<p>Why 2026 (and 2027)? Because the market will force us. We’re at a tipping point driven by <strong>risk and ambition</strong>.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">The Proof-of-Concept “Groundhog Day” (And the C-Level's Fear)</h3>
<p>Many companies are stuck in an <strong>endless PoC loop</strong>: AI projects that work beautifully in a sandbox or controlled environment but take months to reach production, or worse, reach production and fail due to a lack of monitoring or security policy compliance.</p>
<p>Here’s where the rebel in me speaks up: the <strong>AI Platform</strong> is the only answer to <strong>scale and prove ROI</strong>. When the business demands speed, the platform must guarantee that a model goes into production in hours, not weeks.</p>
<p>And let’s not forget <strong>governance</strong>. As AI makes critical decisions, <strong>auditability and explainability</strong> become vital. You need a platform that forces you to document training datasets and monitor bias. You can’t rely on each team’s goodwill. The risk is simply too high.</p>
<p>As if this weren’t complex enough, the future of AI is not just static classification models. It’s <strong>AI agents</strong>: applications that maintain state, have memory, plan, and act, and their architectures are, unsurprisingly, incredibly complex.</p>
<p><strong>Who will orchestrate their state, persistence, and interactions?</strong> For me, the answer is clear: Kubernetes. But who will make Kubernetes consumable for AI teams? Exactly, the AI Platform. This is, <strong>without a doubt</strong>, the final catalyst that will make the platform a fundamental requirement.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Recap: putting everything on the table</h2>
<p>We’ve reached the point where <strong>we can no longer apply patches</strong> or continue running experiments that bring no real business return. Extending the Platform Engineering mindset, the AI Platform will teach us how to <strong>master AI at scale</strong>. It’s a matter of <strong>competitive survival and risk management</strong>.</p>
<p>The AI Platform allows us to build the model factory with <strong>security, control, and the speed</strong> the market demands.</p>
<p>But of course, if this were easy, we would have done it already, right? In the next post, I want to get more concrete and explore the <strong>real, raw, everyday pain points</strong> faced by data scientists and MLOps teams and how the AI Platform finally becomes the necessity that <strong>business and technology</strong> will be compelled to drive forward.</p>
<p>I’ll read you in the comments! 👇</p>

            ]]>
        </content:encoded>
    </item><item>
        <dc:creator>
            <![CDATA[ Cristina Redondo ]]>
        </dc:creator>
        <title>ITIL® 5: Digital Ecosystems</title>
        <link>https://en.paradigmadigital.com/organizational-transformation-rev/itil-5-digital-ecosystems/</link>
        <pubDate>Tue, 03 Mar 2026 07:00:00 GMT</pubDate>
        <guid isPermaLink="true">https://en.paradigmadigital.com/organizational-transformation-rev/itil-5-digital-ecosystems/</guid>
        <description>ITIL 5 is not just a simple update. It changes the way we understand ITSM, it’s no longer just about managing services, but about coordinating an entire digital ecosystem where providers, cloud, and AI are all part of the same value stream.
</description>
        <content:encoded>
            <![CDATA[
                <p>With the recent release of <strong>ITIL 5</strong> (February 2026), the ITSM landscape evolves from “managing services” to “orchestrating digital ecosystems.”</p>
<p>Exactly seven years ago, we <a href="https://www.paradigmadigital.com/techbiz/que-es-itil-v4/" target="_blank">wrote about the launch of ITIL 4</a>, a well-intentioned and agile version that received a lukewarm welcome: the ITIL <em>die-hards</em> were not convinced by the shift from “processes” to “practices,” and agilists were not thrilled either, given the extensive number of practices to master (34, no less).</p>
<p>With version 5, things get interesting. If you work in technology, this is not just an update: <strong>it’s a paradigm shift!</strong></p>
<p>If this is your first time approaching ITIL, <a href="https://www.paradigmadigital.com/techbiz/que-es-itil-v4/" target="_blank">you can read what it’s about here</a>. A subtle nuance is that it is <strong>no longer defined as a library of best practices</strong>, but directly as a <strong>framework</strong>.</p>
<p>ITIL 5 has been long awaited, and my impression is that they wanted to see <strong>how AI evolved before integrating it properly</strong>, rather than forcing AI into the framework just for the sake of trendiness.</p>
<p><strong>Its progression scheme remains</strong>, with a Foundations level covering the fundamentals and specialized modules aligned across strategy, business, and technical roles.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">The conceptual leap: from service to digital ecosystem</h2>
<p>The concept of digital ecosystems in ITIL 5 marks the end of the corporate view of ITSM as a technological island.</p>
<p>While <strong>ITIL 4</strong> focused on how an organization co-created <strong>value with its customers</strong>, <strong>ITIL 5</strong> recognizes that today we operate within a <strong>hyperconnected network</strong> of partners, cloud providers, external AI platforms, and third-party APIs. We no longer manage “our” services in isolation; we now <strong>orchestrate an interdependent network</strong> where the success or failure of an external partner directly impacts our value proposition.</p>
<p>ITIL 5 provides the <strong>framework to govern this complexity</strong>, ensuring that value flows seamlessly across organizational boundaries. This evolution transforms IT’s role: from being an “internal provider” to becoming an “ecosystem orchestrator.”</p>
<p><strong>In ITIL 5, optimization does not stop at your company’s walls — it extends to intelligent, dynamic integration with strategic partners.</strong> By adopting this mindset, organizations shift from managing fixed assets to managing dynamic capabilities, enabling unprecedented agility. Those who master this concept do not just deliver software or support — they <strong>guarantee the resilience and growth of the entire digital fabric</strong> in which their business operates. Version 5 introduces <strong>powerful and differentiating ideas</strong>:</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">1 <span class="enum-header"></span> From “Digital Native” to “AI Native”</h3>
<p>While ITIL 4 laid the groundwork for agility and value, <strong>ITIL 5 integrates Artificial Intelligence into the framework’s DNA</strong>. It’s not just about using bots — it offers <strong>practical and ethical governance</strong> so that AI can make decisions and optimize value flows responsibly.</p>
<p>To achieve this, it logically leverages <a href="https://en.wikipedia.org/wiki/Process_mining" target="_blank">process mining</a>. AI tools create a graph where process objects interact and move across events. A data-driven window opens, allowing real-time tracing of slowdowns, accelerations, or bottlenecks. Rather than a set of interconnected lines (processes), it becomes connective tissue.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">2 <span class="enum-header"></span> Goodbye to the barrier between product and service</h3>
<p>In ITIL 4, we talked about the <strong>Service Value System (SVS)</strong>. ITIL 5 evolves this into the <strong>ITIL Value System (VS)</strong>, unifying the lifecycle of products and services inseparably: they are no longer separate entities but are designed, delivered, and improved under a single continuous End-to-End flow. It consists of eight activities (Discover, Design, Acquire, Build, Transition, Operate, Deliver, Support).</p>
<p>With this, ITIL acknowledges that <strong>the traditional division between product and service must be overcome</strong>, since in practice they constantly merge, and those managing them must be prepared to understand and articulate both dimensions.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">3 <span class="enum-header"></span> Practical certification (prove what you can do)</h3>
<p>A radical shift for professionals: <strong>ITIL 5 prioritizes application over memorization</strong>. Advanced exams (not Foundations) are now “open book,” focusing on real cases and the ability to transform organizations through optimization and articulation of variables and factors.</p>
<p>In my view, this gives certification much greater value compared to analogous titles. Professionals will be assessed based on <strong>deep understanding, interconnection capability, and practical deployment of practices</strong>, not on memorizing the manual and applying it rigidly — which has caused so many project management frustrations.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">4 <span class="enum-header"></span> Optimization as the mantra</h3>
<p>If in version 4 the most repeated concept was “holistic,” in version 5 it is <strong>“optimization.”</strong> In my view, where this new version converges with 2026 framework trends is in <strong>how it redefines optimization as an activity</strong>:</p>
<p><strong>From manual optimization to “hyper-optimization”</strong></p>
<p>In ITIL 4, we applied the principle “optimize and automate.” In ITIL 5, we move toward <strong>AI-driven continuous optimization</strong>. The system no longer waits for a consultant to find a bottleneck — <a href="https://www.paradigmadigital.com/techbiz/machine-learning-dummies/" target="_blank">machine learning</a> algorithms and process mining tools analyze value flows in real time and suggest immediate adjustments to eliminate waste.</p>
<p><strong>Self-healing processes</strong></p>
<p>The major competitive advantage of ITIL 5 is its focus on <strong>operational resilience</strong>. Optimization is not just about doing things “faster,” but about ensuring that <strong>processes adjust themselves</strong> during demand spikes or technical failures, guaranteeing uninterrupted value flow. A proactive and adaptive sustainability that seeks automated homeostasis.</p>
<p><strong>Optimized unified data model</strong></p>
<p>ITIL 5 introduces the premise (which will surely be debated) of a <strong>unified data model</strong>, signaling the end of “data silos.” This enables transversal optimization: if you optimize development processes, the impact is automatically reflected in operations and support.</p>
<h2 class="block block-header h--h30-15-400 left  ">What are the advantages of the “New ITIL”?</h2>
<ul>
<li><strong>Speed of adaptation</strong>: designed for high-velocity IT environments.</li>
<li><strong>Lean and Agile hybridized</strong>: combines the best of Lean within complex and adaptive contexts.</li>
<li><strong>AI governance</strong>: a solid framework to implement automation without losing ethical or legal control.</li>
<li><strong>Resilience and sustainability</strong>: aligned with Industry 5.0, placing humans and the planet at the center of technology.</li>
</ul>
<h2 class="block block-header h--h30-15-400 left  ">Who can benefit from ITIL v5?</h2>
<ul>
<li><strong>Digital transformation leaders</strong> who need a common language to bridge IT and business.</li>
<li><strong>Product Owners / Product Managers</strong> who now find in ITIL the tools to manage the full lifecycle of digital products.</li>
<li><strong>ITSM professionals</strong> looking to scale their careers toward strategy and AI governance.</li>
</ul>
<h2 class="block block-header h--h30-15-400 left  ">What about ITIL 4?</h2>
<p>ITIL 4 and ITIL 5 will run in parallel for at least 12 months, and ITIL 4 certifications remain valid and recognized as prerequisites for advanced ITIL 5 modules. However, <strong>there is no “automatic upgrade.”</strong></p>
<p>A <strong>bridge exam</strong> from ITIL 4 to ITIL 5 is available from late February 2026 for certified professionals.</p>
<p><strong>Recertification credits</strong>: if you already hold ITIL 4 certifications, check your PeopleCert dashboard; some credits may be transferable to accelerate your path to <strong>ITIL 5 Master</strong>.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Conclusion</h2>
<p>In my view, the ITIL committee continues to perform a commendable exercise in self-questioning, applying double-loop learning and inspect-and-adapt thinking. It has been necessary, because the more specific and less abstract a theoretical corpus becomes, the worse it ages.</p>
<p>Version 5 repositions ITIL within innovation: knowledge becomes practical only when ideas are truly usable, and novelty becomes progress.</p>
<p>The real competitiveness of project management areas and consultancies in 2026 will lie in mastering complex digital ecosystems. <strong>Is your IT team still managing tickets — or can it orchestrate value for your organization?</strong></p>

            ]]>
        </content:encoded>
    </item><item>
        <dc:creator>
            <![CDATA[ Cristina Redondo ]]>
        </dc:creator>
        <title>Process Optimization: A Training Roadmap</title>
        <link>https://en.paradigmadigital.com/organizational-transformation-rev/process-optimization-training-roadmap/</link>
        <pubDate>Thu, 26 Feb 2026 07:00:00 GMT</pubDate>
        <guid isPermaLink="true">https://en.paradigmadigital.com/organizational-transformation-rev/process-optimization-training-roadmap/</guid>
        <description>The natural evolution of Agile Coaches and process specialists is no longer just about optimizing flows, but about orchestrating entire ecosystems. If you’re looking to broaden your horizons toward orchestration beyond certifications, this is your roadmap
</description>
        <content:encoded>
            <![CDATA[
                <p>Professionals who already master agility and process management need <strong>updates and incentives</strong> that enable them to move from <strong>optimization to the challenge of orchestrating flows and processes</strong>. A strong option is to integrate <strong>regulatory rigor (ISOs)</strong>, <strong>operational efficiency</strong> (<a href="https://www.paradigmadigital.com/rev/podcast-filosofia-lean-aplicada-proyectos-desarrollo/" target="_blank">Lean</a>), and <strong>exponential automation capabilities (AI)</strong> into a single working ecosystem.</p>
<p>If you are ready to broaden your horizons toward <strong>Ecosystem Orchestration beyond certifications</strong>, we propose a roadmap designed for professionals seeking these <strong>integrated and systemic outcomes</strong>.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">1 <span class="enum-header"></span> Lay the Foundation</h2>
<p>Before automating with AI, your understanding of flow must be <strong>solid and efficient</strong>. Strengthen your foundations in process optimization. This phase should formalize everything you know and embed it within <strong>recognized international standards</strong>:</p>
<ul>
<li><strong>Quality and Continuous Improvement (ISO + Lean)</strong></li>
</ul>
<p>The key here is Operational Efficiency as described in ISO 9001 (Quality Management Systems). Don’t see it as bureaucracy, but as a governance framework that allows optimization and quality experts to align foundations and standards for shared understanding. A plus: learn how to align OKRs with standard requirements using a <strong>Hoshin Kanri matrix</strong>.</p>
<ul>
<li><strong>Lean Thinking</strong></li>
</ul>
<p>Focus on mastering the identification of the 8 wastes. Training your mindset in a disciplined way to automatically detect inefficiencies in any system and group them for prioritized action is a differentiating factor when designing and deploying frameworks as an Agile Coach.</p>
<ul>
<li><strong>ISO 18404 (Lean &amp; Six Sigma)</strong></li>
</ul>
<p>This standard defines competencies in Lean and Six Sigma; that Lean Six Sigma has earned its own ISO standard is a testament to its significance. Mastering it provides international recognition as an optimizer under global standards and enables you to measure dispersion and variance to scientifically identify causes and effects.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">2 <span class="enum-header"></span> Secure</h2>
<p>Flow resilience (ISO + Agile). Once a flow is efficient, it must also be <strong>secure and adaptable</strong>: secure in terms of compliance and risk mitigation, adaptable in terms of understanding human relational dynamics and their needs.</p>
<ul>
<li><strong>ISO/IEC 27001 (Information Security)</strong></li>
</ul>
<p>Essential today for any optimization activity. If you optimize a process but expose data, you have failed.</p>
<ul>
<li><strong>ISO 22301 (Business Continuity)</strong></li>
</ul>
<p>An ideal complement in Agile environments. It teaches how to maintain value flow even during crises.</p>
<ul>
<li><strong>Lean Change Management (LCM)</strong></li>
</ul>
<p>To manage the cultural transition often required by these standards in teams accustomed to “total freedom” due to lack of governance, train in the change management model proposed by LCM. While maintaining efficiency, it emphasizes a systemic vision of people, teams, tools, and processes, focusing on their interactions.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">3 <span class="enum-header"></span> Orchestrate</h2>
<p>Combining intelligences and perspectives. This is where an experienced Agile Coach can integrate Intelligent Process Modeling to connect flows into a diagram that maps the territory. Flows must be connected and dynamic.</p>
<ul>
<li><a href="https://www.paradigmadigital.com/dev/industrializando-ia-generativa-prompt-engineering-promptmeteo" target="_blank">Prompt Engineering for Process Analysts</a></li>
</ul>
<p>It’s not just “chatting.” It’s about learning how to leverage AI and augmented intelligence tools to analyze Value Stream Maps (VSMs), SIPOCs, any diagrammatic representation, or Business Process Automations (BPAs) in seconds. You obtain a process map, an organizational infrastructure…</p>
<ul>
<li><strong>RAG for Automated Inefficiency Detection</strong></li>
</ul>
<p><a href="https://en.paradigmadigital.com/techbiz/retrieval-augmented-generation-corporate-usage/" target="_blank">RAG (Retrieval-Augmented Generation)</a> connects LLM applications with proprietary data sources. Instead of fragmented information in silos (PDFs, emails, tools, manuals), RAG unifies these assets into an accessible layer easily managed by analysts.</p>
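<p>To make the retrieval step concrete, here is a minimal sketch of the idea behind RAG, using a toy bag-of-words similarity in place of real embeddings; the documents and query are invented for illustration:</p>

```python
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    """Bag-of-words term counts over lowercase alphanumeric tokens."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = vectorize(query)
    return sorted(documents, key=lambda d: cosine(q, vectorize(d)), reverse=True)[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Assemble the augmented prompt an LLM would receive."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The incident process requires a post-mortem within 48 hours.",
    "Purchase orders above 10k need two approvals.",
    "Deployment windows are Tuesday and Thursday mornings.",
]
print(build_prompt("When are deployment windows?", docs))
```

<p>A production setup would replace the similarity function with an embedding model and a vector store, but the flow is the same: retrieve the relevant fragments of your silos, then ground the model’s answer in them.</p>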
<ul>
<li><strong>Process Mining</strong></li>
</ul>
<p>Extracts data from event logs in systems like ERP or CRM to create a “digital twin” showing how the company truly operates at the system level, delegating supervised analysis to these platforms. Not what manuals say, but the real operational flow.</p>
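<p>The core of process mining can be sketched in a few lines: from an event log of (case id, activity, timestamp) rows, build a directly-follows graph showing how work actually moves. The log below is invented:</p>

```python
from collections import defaultdict

def directly_follows(event_log):
    """Count how often activity A is directly followed by activity B
    within the same case (one ticket, one purchase order...)."""
    cases = defaultdict(list)
    # Group events by case, preserving timestamp order.
    for case_id, activity, ts in sorted(event_log, key=lambda e: e[2]):
        cases[case_id].append(activity)
    graph = defaultdict(int)
    for trace in cases.values():
        for a, b in zip(trace, trace[1:]):
            graph[(a, b)] += 1
    return dict(graph)

log = [
    ("PO-1", "create", 1), ("PO-1", "approve", 2), ("PO-1", "pay", 3),
    ("PO-2", "create", 1), ("PO-2", "approve", 2), ("PO-2", "reject", 3),
]
print(directly_follows(log))
```

<p>Real tools ingest millions of such rows from an ERP or CRM and render this graph as the “digital twin” of the operation, with frequencies and durations on every edge.</p>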
<ul>
<li><a href="https://www.iso.org/es/norma/42001?browse=ics" target="_blank">ISO/IEC 42001</a> <strong>(Artificial Intelligence Management System)</strong></li>
</ul>
<p>The first international AI standard — today’s “Holy Grail” for management teams. It teaches how to implement AI ethically, securely, and efficiently. You need it as a new vector within your VSMs. <a href="https://eur-lex.europa.eu/content/help/search/predefined-rss.html" target="_blank">Subscribe to EU updates to stay current</a>.</p>
<ul>
<li><strong>Transcontextuality with Warm Data</strong></li>
</ul>
<p>Often grouped under Systemic Symbiosis or Ecology of Mind, “Transcontextuality” studies how information gains meaning only when it crosses multiple contexts simultaneously (biological, cultural, economic, emotional, etc.). Familiarize yourself with these concepts — they can reveal inefficiencies invisible to tools and AI, enabling you to provide brilliant, holistic analyses to clients.</p>
<h2 class="block block-header h--h30-15-400 left  ">Why this order?</h2>
<p>If you automate (AI) an inefficient process (Lean) that is not under control (ISO), you will only make mistakes at unprecedented speed. If you ignore the relationships within the ecosystem where processes operate, you will oversimplify in a dangerous and ineffective way.</p>
<p><strong>This roadmap ensures that each layer of knowledge strengthens the previous one: ISO provides the structure, Lean clears the path, and AI accelerates the engine.</strong></p>
<p>Do you have this need in your area or organization? At Paradigma Digital, we have professionals who integrate all dimensions of Flow and Process Orchestration to ensure the proposed model has consistent technical, operational, and relational foundations within your organizational ecosystem.</p>
<p>Interested in Process Optimization and already familiar with Lean? <a href="https://en.paradigmadigital.com/contact/" target="_blank">Contact us!</a></p>

            ]]>
        </content:encoded>
    </item><item>
        <dc:creator>
            <![CDATA[ Marcos Martín ]]>
        </dc:creator>
        <title>AIOps applied to CI/CD: when the pipeline decides something is wrong</title>
        <link>https://en.paradigmadigital.com/dev/aiops-applied-ci-cd-when-pipeline-decides-something-wrong/</link>
        <pubDate>Tue, 24 Feb 2026 07:00:00 GMT</pubDate>
        <guid isPermaLink="true">https://en.paradigmadigital.com/dev/aiops-applied-ci-cd-when-pipeline-decides-something-wrong/</guid>
        <description>Not every problematic pipeline fails in red. Sometimes it passes in green… but takes 5 minutes when it usually takes 40 seconds. AIOps makes it possible to detect those anomalies without relying on rigid timeouts or arbitrary thresholds, integrating it directly into the pipeline itself. Want us to show you how?
</description>
        <content:encoded>
            <![CDATA[
                <p>Nowadays, it seems that <strong>everything comes with artificial intelligence</strong>: your phone has AI, your car has AI… and if we believe some ads, even your hair dryer “learns from you” to style your bangs better.</p>
<p>The word <strong>AI</strong> has become a <strong>catch-all term</strong>. It’s used to describe both deep learning models and simple rule-based systems with a couple of if statements. And amid all that noise, it’s becoming increasingly difficult to answer a simple but important question: <strong>what does it really mean to use AI in engineering systems?</strong></p>
<p>In the world of <strong>infrastructure</strong>, <strong>CI/CD</strong>, and <strong>platform operations</strong>, AI usually has nothing to do with giant neural networks or “thinking” models. Here, it shows up in a much more humble — and, interestingly, much more useful — way: <strong>as a tool to detect anomalous behavior and make automated decisions</strong>.</p>
<p>This article is not about grand promises or dashboards full of futuristic charts. It’s about a very concrete problem: <strong>how to detect anomalies in pipelines that, at first glance, seem to work perfectly</strong>.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">A practical case: detecting pipelines that get “stuck thinking”</h2>
<p>The use case we built is quite down-to-earth: <strong>detecting pipelines that take too long to do things that should, in theory, be fast</strong>. No science fiction. No future-predicting models. Just identifying those moments when a pipeline lingers longer than usual… and we pretend it’s normal.</p>
<p>Instead of manually deciding what counts as “too long” using the classic number pulled out of thin air, we let <strong>the system observe how pipelines usually behave</strong>. If the normal execution time is a few seconds and, suddenly, one decides to take several minutes (to reflect on the meaning of life, perhaps), the system raises its hand.</p>
<p>Not because it crossed some magical threshold, but <strong>because it doesn’t fit the usual pattern</strong>.</p>
<p>The result is <strong>a pipeline that monitors itself</strong>. When it behaves as expected, it passes without issue. When it starts getting creative with waiting times, it automatically fails. No debates, no ignored dashboards, and no humans deciding whether “this time we’ll let it slide.”</p>
<p>In the end, it’s not about making pipelines smarter; it’s about <strong>making them a bit less naive</strong>. And above all, about <strong>letting the data decide when something stops being normal</strong>.</p>
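<p>A minimal sketch of the detection idea, assuming the historical durations have already been collected: a robust z-score (based on the median absolute deviation) flags runs that don’t fit the learned baseline, with no hand-picked timeout. The numbers are illustrative:</p>

```python
import statistics

def is_anomalous(duration_s: float, history: list[float], z_threshold: float = 3.0) -> bool:
    """Flag a run whose duration deviates too far from the observed baseline.
    A robust z-score (median absolute deviation) keeps a few past outliers
    from inflating the baseline itself."""
    median = statistics.median(history)
    mad = statistics.median(abs(x - median) for x in history)
    if mad == 0:
        # Nearly constant history: fall back to a simple ratio check.
        return duration_s > 5 * max(median, 1e-9)
    robust_z = 0.6745 * (duration_s - median) / mad
    return robust_z > z_threshold

history = [38, 41, 40, 39, 42, 40, 43, 39]  # seconds, typical runs
print(is_anomalous(41, history))   # → False: fits the usual pattern
print(is_anomalous(300, history))  # → True: the pipeline "stuck thinking"
```

<p>Note that nothing here mentions a threshold in seconds: the baseline is whatever the pipeline’s own history says it is, and it updates as more runs are observed.</p>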
<h2 class="block block-header h--h30-15-400 left  ">What tools did we use to achieve this?</h2>
<p>To make this solution work, we need a set of tools that complete the loop of <strong>pipeline execution, anomaly detection, and model training</strong> for automation.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Argo Workflows: pipeline execution and control</h3>
<p><a href="https://www.paradigmadigital.com/dev/que-es-argo-cd-por-que-deberias-dejar-usar-kubectl-apply-a-mano/" target="_blank">Argo Workflows</a> is the execution engine. This is where the <strong>real pipelines live</strong>: in this case, workflows that simulate jobs with variable durations and, once finished, execute an automatic validation step.</p>
<p>A <strong>key design point</strong> is the use of <em>onExit</em>, which allows:</p>
<ul>
<li>The workflow to execute its main logic.</li>
<li>Once finished, an AIOps evaluation step to run.</li>
<li>If that step fails, the entire workflow is marked as failed.</li>
</ul>
<p>This ensures that <strong>anomaly detection</strong> becomes part of the <strong>pipeline lifecycle</strong>, not an external observer system.</p>
<p>One way to apply this is to <strong>build a workflow that simulates a pipeline</strong>. The pipeline configuration includes a one-second sleep, but we add a random factor so that, occasionally, the sleep lasts 300 seconds instead.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Prometheus: the source of truth for behavior</h3>
<p>Prometheus acts as the <strong>historical data source</strong>. It doesn’t make decisions, but it collects key metrics such as <strong>workflow duration</strong> and exposes them consistently.</p>
<p>It allows us to:</p>
<ul>
<li><strong>Build</strong> the model’s training dataset.</li>
<li><strong>Understand</strong> how the system behaves under normal conditions.</li>
<li><strong>Separate</strong> real signals from noise.</li>
</ul>
<p>An important point is that Prometheus is not used as an ML engine, but as a <strong>time-series database</strong>.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Exporting metrics from Prometheus to learn</h2>
<p>We built a custom exporter because we needed a <strong>simple and reliable metric</strong> representing the total duration of each workflow. Argo Workflows does not expose this directly, and for an AIOps system, <strong>duration</strong> is a <strong>key behavioral signal</strong>.</p>
<p>The exporter simply queries the workflow state, calculates its execution time, and exposes it as a Prometheus metric. It doesn’t make decisions or apply logic — it <strong>just converts internal platform state into reusable observability data</strong>.</p>
<p>This way, we keep <strong>responsibilities clearly separated</strong>: the exporter generates data, Prometheus stores it, and the AI layer decides how to interpret it.</p>
<pre><code class="language-python">from kubernetes import client, config
from prometheus_client import Gauge, start_http_server
from kubernetes.config.config_exception import ConfigException
from datetime import datetime
import time

print(&quot;Exporter starting...&quot;, flush=True)

try:
   config.load_incluster_config()
   print(&quot;Using in-cluster config&quot;, flush=True)
except ConfigException:
   config.load_kube_config()
   print(&quot;Using local kubeconfig&quot;, flush=True)

api = client.CustomObjectsApi()

DURATION = Gauge(
   &quot;argo_pipeline_duration_seconds&quot;,
   &quot;Workflow duration&quot;,
   [&quot;workflow&quot;]
)

def parse(ts):
   return datetime.fromisoformat(ts.replace(&quot;Z&quot;, &quot;+00:00&quot;))

start_http_server(8000)
print(&quot;Metrics server listening on :8000&quot;, flush=True)

while True:
   wfs = api.list_namespaced_custom_object(
       group=&quot;argoproj.io&quot;,
       version=&quot;v1alpha1&quot;,
       namespace=&quot;argo&quot;,
       plural=&quot;workflows&quot;
   )

   count = 0
   for wf in wfs[&quot;items&quot;]:
       status = wf.get(&quot;status&quot;, {})
       if status.get(&quot;phase&quot;) == &quot;Succeeded&quot;:
           start = parse(status[&quot;startedAt&quot;])
           end = parse(status[&quot;finishedAt&quot;])
           duration = (end - start).total_seconds()
           DURATION.labels(
               workflow=wf[&quot;metadata&quot;][&quot;name&quot;]
           ).set(duration)
           count += 1

   print(f&quot;Updated {count} workflows&quot;, flush=True)
   time.sleep(30)
</code></pre>
<p>With the query <em>argo_pipeline_duration_seconds{workflow=~&quot;sleep-random-.*&quot;}</em> we can view the exported data in Prometheus.</p>
<figure class="block block-caption  -inline-block -like-text-width -center"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/datos_exportados_prometheus_171397cbb8.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/datos_exportados_prometheus_171397cbb8.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/datos_exportados_prometheus_171397cbb8.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/datos_exportados_prometheus_171397cbb8.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/datos_exportados_prometheus_171397cbb8.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="Data exported in Prometheus" title="Data exported in Prometheus"/><figcaption>Data exported in Prometheus</figcaption></figure>
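<p>One link the post leaves implicit is how this metric becomes the CSV dataset that the training step consumes. A minimal sketch, assuming a Prometheus server reachable at <em>http://prometheus:9090</em> (a hypothetical in-cluster address) and a <em>workflow,duration</em> CSV layout; the embedded payload is only an illustration of the instant-query response shape, not real data:</p>

```python
import csv
import json
import urllib.parse
import urllib.request

# Shape of a Prometheus instant-query response: each result carries
# the label set and a [timestamp, "value-as-string"] pair.
SAMPLE_RESPONSE = {
    "status": "success",
    "data": {
        "resultType": "vector",
        "result": [
            {"metric": {"workflow": "sleep-random-abc12"}, "value": [1712900000, "1.02"]},
            {"metric": {"workflow": "sleep-random-def34"}, "value": [1712900030, "300.41"]},
        ],
    },
}

def fetch_durations(base_url="http://prometheus:9090",
                    query='argo_pipeline_duration_seconds{workflow=~"sleep-random-.*"}'):
    """Run an instant query against the Prometheus HTTP API and return its JSON."""
    url = f"{base_url}/api/v1/query?query={urllib.parse.quote(query)}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def to_rows(response):
    """Flatten a Prometheus vector response into (workflow, duration) rows."""
    return [
        {"workflow": r["metric"].get("workflow", "unknown"),
         "duration": float(r["value"][1])}  # the sample value arrives as a string
        for r in response["data"]["result"]
    ]

def write_dataset(rows, path="/data/dataset.csv"):
    """Write the CSV layout (workflow,duration) that the training job reads."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["workflow", "duration"])
        writer.writeheader()
        writer.writerows(rows)

rows = to_rows(SAMPLE_RESPONSE)
```

<p>Run periodically (for example as a CronJob alongside the trainer), this keeps the dataset in sync with what Prometheus has actually observed.</p>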
<h2 class="block block-header h--h30-15-400 left  add-last-dot">The trainer: how we teach the system what “normal” looks like</h2>
<p>We implemented a <strong>simple and explicit trainer</strong> whose sole purpose is to <strong>learn what “normal” workflow duration looks like</strong> based on real historical data.</p>
<p>The <strong>process</strong> is straightforward:</p>
<ul>
<li>We load a dataset with historical durations.</li>
<li>We validate that there is enough data for training.</li>
<li>We compute basic statistics to gain visibility into system behavior.</li>
<li>We train an anomaly detection model (<em>Isolation Forest</em>).</li>
<li>We store the trained model in a persistent volume.</li>
</ul>
<p>Training runs as a <strong>Kubernetes Job/CronJob</strong>, allowing us to refresh the model periodically without impacting pipeline execution. There’s no complex logic or heavy dependencies: <strong>the value lies in learning the system’s real distribution</strong>, not in algorithmic sophistication.</p>
<p>This way, the definition of “normal” evolves over time and <strong>automatically adapts</strong> to environmental changes.</p>
<pre><code class="language-python">import pandas as pd
from sklearn.ensemble import IsolationForest
import joblib
import os
import time

def log(msg):
   print(f&quot;[TRAINER] {msg}&quot;, flush=True)

DATASET_PATH = &quot;/data/dataset.csv&quot;
MODEL_PATH = &quot;/models/sleep-random.pkl&quot;
MIN_SAMPLES = 20

log(&quot;Starting model training&quot;)

if not os.path.exists(DATASET_PATH):
   log(&quot;Dataset not found, aborting training&quot;)
   exit(1)

df = pd.read_csv(DATASET_PATH)

log(f&quot;Loaded dataset with {len(df)} samples&quot;)

if len(df) &lt; MIN_SAMPLES:
   log(f&quot;Not enough samples (&lt;{MIN_SAMPLES}), skipping training&quot;)
   exit(0)

min_d = df[&quot;duration&quot;].min()
max_d = df[&quot;duration&quot;].max()
mean_d = df[&quot;duration&quot;].mean()

log(f&quot;Duration stats → min={min_d:.2f}s max={max_d:.2f}s mean={mean_d:.2f}s&quot;)

X = df[[&quot;duration&quot;]]

log(&quot;Training IsolationForest model&quot;)

model = IsolationForest(
   contamination=0.02,
   random_state=42
)

start = time.time()
model.fit(X)
elapsed = time.time() - start

log(f&quot;Model trained in {elapsed:.2f}s&quot;)

# Save the latest model to the shared volume (fixed path, consumed by the check step)
joblib.dump(model, MODEL_PATH)
log(f&quot;Model saved to {MODEL_PATH}&quot;)

ts = int(time.time())
versioned_path = f&quot;/models/sleep-random-{ts}.pkl&quot;
joblib.dump(model, versioned_path)
log(f&quot;Versioned model saved to {versioned_path}&quot;)

log(&quot;Training job completed successfully&quot;)
</code></pre>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">The test workflow: putting the system to work</h2>
<p>To validate the entire architecture, we created a <strong>simple but intentionally variable Argo Workflows workflow</strong>. Its purpose is not to perform real work, but to <strong>generate executions with different behaviors</strong> so we can verify that the AIOps system works as expected.</p>
<p>The workflow runs <strong>a single step</strong> that sleeps for a random amount of time:</p>
<ul>
<li>Most of the time it simulates a fast execution.</li>
<li>Occasionally, it forces an abnormally slow execution.</li>
</ul>
<p>This way, we introduce <strong>controlled noise</strong> and create a clear signal that the system can learn from.</p>
<p>The key part lies in the <em>onExit</em> section. When the workflow finishes, an <strong>additional step</strong> is executed that:</p>
<ul>
<li><strong>Loads</strong> the trained machine learning model.</li>
<li><strong>Evaluates</strong> the total workflow duration.</li>
<li><strong>Decides</strong> whether the execution is normal or anomalous.</li>
</ul>
<p>If the model <strong>detects an anomaly, the workflow automatically fails</strong>. Otherwise, it is considered valid.</p>
<p>This turns <strong>anomaly detection into part of the pipeline itself</strong>, rather than an external, after-the-fact analysis.</p>
<p>This workflow acts as a <strong>test bench</strong> to run the system multiple times, observe how the model evolves, and verify that automatic detection works consistently under real conditions.</p>
<pre><code class="language-yaml">apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
 generateName: sleep-random-
 namespace: argo
spec:
 serviceAccountName: mmartin
 entrypoint: main
 onExit: aiops-check

 templates:
   - name: main
     steps:
       - - name: random-sleep
           template: sleep

   - name: sleep
     container:
       image: alpine:3.19
       command: [sh, -c]
       args:
         - |
           R=$((RANDOM % 5))
           if [ &quot;$R&quot; -eq 0 ]; then
             SLEEP_TIME=300
             echo &quot;🔥 ANOMALOUS RUN: sleeping ${SLEEP_TIME}s&quot;
           else
             SLEEP_TIME=1
             echo &quot;Normal run: sleeping ${SLEEP_TIME}s&quot;
           fi

           sleep ${SLEEP_TIME}

   - name: aiops-check
     container:
       image: quitos90/argo-aiops-ml-check:0.1
       imagePullPolicy: Always
       command: [&quot;python&quot;, &quot;-u&quot;, &quot;/app/aiops-check-ml.py&quot;]
       env:
         - name: STEP_DURATION
           value: &quot;{{workflow.duration}}&quot;
       volumeMounts:
         - name: models
           mountPath: /models

 volumes:
   - name: models
     persistentVolumeClaim:
       claimName: aiops-models
</code></pre>
<p><strong>We ran the job a few times and…</strong></p>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/resultados_workflow_prueba_748f82fcf3.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/resultados_workflow_prueba_748f82fcf3.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/resultados_workflow_prueba_748f82fcf3.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/resultados_workflow_prueba_748f82fcf3.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/resultados_workflow_prueba_748f82fcf3.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="results of the test workflow in the code" title="Test Workflow"/></article>
<p>Voilà! We can see the <strong>results</strong> in Argo Workflows.</p>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/resultados_workflow_prueba_argo_workflows_9586266869.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/resultados_workflow_prueba_argo_workflows_9586266869.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/resultados_workflow_prueba_argo_workflows_9586266869.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/resultados_workflow_prueba_argo_workflows_9586266869.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/resultados_workflow_prueba_argo_workflows_9586266869.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="results of the test workflow in Argo Workflows" title="Results in Argo Workflows"/></article>
<p>If we look at the two executions that failed, they are the ones that <strong>lasted more than 5 minutes</strong> and, upon closer inspection, we can see that they are the ones we deliberately <strong>forced</strong> to behave as anomalies.</p>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/logs_anomalias_argo_workflows_f81e4c1bb2.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/logs_anomalias_argo_workflows_f81e4c1bb2.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/logs_anomalias_argo_workflows_f81e4c1bb2.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/logs_anomalias_argo_workflows_f81e4c1bb2.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/logs_anomalias_argo_workflows_f81e4c1bb2.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="logs of anomalies in Argo Workflows showing the forced ones" title="Anomalies in Argo Workflows"/></article>
<p>With this, even though the pipeline appears to have finished successfully at first glance (its main task did not fail), we can clearly see that <strong>something unusual happened</strong>: it took 5 minutes, while most pipelines complete in approximately 40 seconds.</p>
<h2 class="block block-header h--h30-15-400 left  ">Why not simply use a timeout?</h2>
<p>The most common way to control slow pipelines is to <strong>add a fixed timeout</strong>. It’s simple, easy to understand, and <strong>works… until it doesn’t</strong>. The AIOps-based approach we followed addresses limitations that timeouts simply cannot solve.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">A timeout is static</h3>
<p>A timeout defines a <strong>rigid limit</strong>: if the pipeline takes longer, it fails. This forces you to choose an <strong>arbitrary value</strong> that is almost never perfect:</p>
<ul>
<li><strong>Too low</strong> → false positives.</li>
<li><strong>Too high</strong> → real issues go unnoticed.</li>
</ul>
<p>Our approach, instead, <strong>learns</strong> what normal system behavior looks like and <strong>adapts over time</strong>.</p>
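<p>The difference is easy to see with a few numbers. A toy comparison with illustrative values (not from the real system): a fixed limit tuned for yesterday's behavior misfires as soon as the baseline legitimately shifts, while a threshold derived from recent history moves with it:</p>

```python
import statistics

def static_check(duration, limit=60.0):
    """Fixed timeout: pass/fail against a hand-picked limit."""
    return duration <= limit

def adaptive_check(history, duration, k=4.0):
    """Pass if the duration stays within k standard deviations of recent history."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid a zero width on constant history
    return abs(duration - mean) <= k * stdev

# The baseline drifted from ~5s to ~40s (say, an input dataset grew):
history = [40.0, 41.5, 39.2, 40.8, 40.3]

# A limit of 10s made sense when runs took ~5s...
static_check(42.0, limit=10.0)   # False: a perfectly normal run now fails
adaptive_check(history, 42.0)    # True: the learned baseline has moved with the data
adaptive_check(history, 300.0)   # False: still far outside the observed distribution
```

<p>A standard-deviation band is the simplest stand-in for what the Isolation Forest does here; the point is that in both cases the threshold comes from the data, not from a number chosen in a meeting.</p>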
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Conclusions</h2>
<p>After walking through the entire process, the main conclusion is clear: <strong>the difficulty of applying AIOps lies not in the model itself, but in training it properly</strong>. The algorithm we used is simple, but its behavior depends entirely on the quality and representativeness of the data it learns from.</p>
<p>For this type of system to work, it is essential to have <strong>good observability and sufficient historical data</strong>. Without reliable metrics and temporal context, any attempt at “intelligence” quickly degrades into arbitrary rules or constant false positives. AI does not fix a lack of visibility; it simply amplifies what already exists.</p>
<p>This exercise leaves us with an important lesson: before thinking about more complex models or more sophisticated architectures, it is crucial to invest in <strong>understanding the real behavior of the system</strong>. In AIOps, intelligence begins long before training and is often much closer to observability than to machine learning.</p>

            ]]>
        </content:encoded>
    </item><item>
        <dc:creator>
            <![CDATA[ Jorge de la Llana ]]>
        </dc:creator>
        <title>AI will not solve your organizational problems if you’re not clear about the “why” behind the transformation.</title>
        <link>https://en.paradigmadigital.com/organizational-transformation-rev/ai-not-solve-organizational-problems-not-clear-why-transformation/</link>
        <pubDate>Thu, 19 Feb 2026 07:00:00 GMT</pubDate>
        <guid isPermaLink="true">https://en.paradigmadigital.com/organizational-transformation-rev/ai-not-solve-organizational-problems-not-clear-why-transformation/</guid>
        <description>AI is a tool with great potential, but it does not generate alignment, trust, or clarity on its own. Before asking how to use it, ask yourself why you are transforming your organization. “Why do we want to transform?” That is the real starting point.
</description>
        <content:encoded>
            <![CDATA[
                <p>For years, we have talked about organizational transformation as if it were a destination. But transforming an organization has never been about changing processes, implementing methodologies, or digitizing tasks.</p>
<p>It has always been about something deeper: <strong>how an organization behaves, how it makes decisions, and how it learns</strong>.</p>
<p>Five years ago, we identified the <a href="https://www.paradigmadigital.com/techbiz/para-que-transformacion-organizacional/" target="_blank">main challenges holding modern organizations back</a>. Today, with the massive arrival of Artificial Intelligence, those challenges have not disappeared: <strong>they have become more visible… and more costly</strong>.</p>
<p>AI does not replace organizational transformation. It is an <strong>accelerator</strong>. And like any accelerator, it does not only boost what works: <strong>it also amplifies what is broken</strong>.<br>
To understand this, we need to revisit the <strong>core challenges</strong> that are still present.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">1 <span class="enum-header"></span> Communication: the root of almost every problem</h2>
<p>In many organizations, formal communication does not flow to decision-makers. Information arrives <strong>late, biased, or incomplete</strong>. This leads to endless meetings, poorly informed decisions, and documents that nobody reads.</p>
<p>Informal communication, which should act as a lubricant, turns into rumor when <strong>trust and psychological safety</strong> are missing.</p>
<h3 class="block block-header h--h20-175-500 left  ">What happens when AI arrives?</h3>
<ul>
<li>You have more information than ever.</li>
<li>But not more clarity.</li>
<li>Nor more alignment.</li>
</ul>
<p>AI does not fix poor communication; it makes it more evident. You can have spectacular dashboards, but if people do not speak honestly, the <strong>quality of decisions</strong> does not improve.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">2 <span class="enum-header"></span> Alignment: if every area optimizes its own goals, the organization loses</h2>
<p>Misalignment is not a lack of good intentions; it is <strong>a lack of a shared framework</strong>. Objectives defined in closed committees, departments competing with each other, priorities changing without warning… This generates <strong>duplication, conflict, fatigue, and loss of focus</strong>.</p>
<h3 class="block block-header h--h20-175-500 left  ">What happens when you introduce AI in this context?</h3>
<ul>
<li>AI optimizes locally.</li>
<li>But the organization needs global optimization.</li>
<li>And without alignment, AI generates more noise, not more impact.</li>
</ul>
<p>AI can provide valuable insights, but it cannot decide why one objective is more important than another, nor resolve tensions between departments. <strong>That is still leadership</strong>.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">3 <span class="enum-header"></span> Lack of focus: doing everything is the fastest way to achieve nothing</h2>
<p>A lack of focus is a clear symptom of an <strong>organization without real priorities</strong>. More projects are started than can be sustained. People attend <strong>ten meetings and make zero decisions</strong>. Initiatives drag on because nobody knows what should stop or what should continue.</p>
<h3 class="block block-header h--h20-175-500 left  ">What happens when we add AI?</h3>
<ul>
<li>AI generates more possibilities.</li>
<li>But more options without focus = more dispersion.</li>
</ul>
<p>AI does not provide focus. The organization must define it, sustain it, and protect it.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">4 <span class="enum-header"></span> Adaptation to change: structures designed for stability, not learning</h2>
<p>Organizations have innovated in technology… but <strong>they continue operating with structures designed for a different time and context</strong>. Strong structures, yes, but rigid ones. Structures that slow down today’s market speed.</p>
<h3 class="block block-header h--h20-175-500 left  ">What about AI?</h3>
<p>AI requires fast cycles, experimentation, and distributed decision-making. If your structure does not allow this, AI will simply collide with existing bureaucracy. <strong>AI does not make a slow organization agile</strong>. It makes its slowness more visible.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">5 <span class="enum-header"></span> Talent: attracting is difficult, wasting is easy</h2>
<p>New generations seek purpose, autonomy, growth, and balance. Rigid structures, lack of clarity, and limited internal mobility generate frustration and talent loss.</p>
<h3 class="block block-header h--h20-175-500 left  ">What happens when AI arrives?</h3>
<p>Talent wants to leverage technology to <a href="https://www.paradigmadigital.com/transformacion-organizacional-rev/gestion-personas-evolucionar-organizaciones-adaptativas/" target="_blank">learn, create, and deliver more value</a>. But if the organization does not enable it, AI becomes a threat rather than an opportunity. Wasting talent is, more than ever, losing competitive advantage.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">6 <span class="enum-header"></span> Customer proximity: more data does not mean more understanding</h2>
<p>Paradoxically, the more many organizations grow, the further they move away from real customers. No direct contact, no feedback, no deep understanding.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">And here AI can be misleading</h3>
<ul>
<li>You may have thousands of data points about your users.</li>
<li>But if you do not understand their context, emotions, or intent… you will not make the right decisions.</li>
</ul>
<p><strong>AI does not replace empathy</strong>. It complements it.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">The real problem is not AI</h2>
<p>The problem is <strong>not being clear about why</strong> we want to transform.</p>
<p>AI <strong>can</strong>:</p>
<ul>
<li>Automate</li>
<li>Accelerate</li>
<li>Recommend</li>
<li>Optimize</li>
</ul>
<p>But it <strong>cannot</strong>, on its own:</p>
<ul>
<li>Generate clarity / build trust</li>
<li>Align teams</li>
<li>Resolve tensions</li>
<li>Set priorities</li>
<li>Define purpose</li>
<li>Rebuild culture</li>
</ul>
<p>…unless we are all AI agents (Skynet muhahaha).</p>
<p>Organizational transformation remains a human process. AI is a powerful tool that only creates real impact when the <strong>organizational system is ready to adopt it</strong>.</p>
<p>That is why, before asking <em>“How do we implement AI?”</em>, the critical question is:<br>
<strong>“Why do we want to transform as an organization?”</strong><br>
<strong>“What behaviors, beliefs, and dynamics must change for AI to create impact?”</strong></p>
<p>If we do not address cultural, structural, and leadership challenges, AI will not be the transformation lever we expect. It will be a mirror that amplifies our inefficiencies.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Conclusion</h3>
<p><strong>AI</strong> is the <strong>multiplier</strong>.<br>
<strong>Culture</strong> is the <strong>enabler</strong>.<br>
<strong>Transformation</strong> is the <strong>purpose</strong>.</p>
<p>The organizations that will survive in the coming years will not be those that adopt AI the fastest. They will be those that have:</p>
<ul>
<li>A <strong>culture</strong> that enables learning,</li>
<li>A <strong>structure</strong> that enables decision-making,</li>
<li>The <strong>clarity</strong> that enables prioritization,</li>
<li><strong>Leadership</strong> that enables trust,</li>
<li>And a <strong>purpose</strong> that guides every decision.</li>
</ul>
<p><strong>AI needs all of that to create real impact</strong>.</p>
<p>Is your organization preparing the ground for AI… or trying to implement it on structures that cannot sustain it? I’d love to read your thoughts 👇</p>

            ]]>
        </content:encoded>
    </item><item>
        <dc:creator>
            <![CDATA[ Rodrigo Zavala ]]>
        </dc:creator>
        <title>The Dark Side of Microfrontends</title>
        <link>https://en.paradigmadigital.com/dev/dark-side-microfrontends/</link>
        <pubDate>Tue, 17 Feb 2026 07:00:00 GMT</pubDate>
        <guid isPermaLink="true">https://en.paradigmadigital.com/dev/dark-side-microfrontends/</guid>
        <description>The biggest risk of microfrontends isn’t technical, it’s organizational. Without clear agreements and shared ownership, this architecture only amplifies collaboration issues. Sometimes, the best architecture is the simplest one that works.
</description>
        <content:encoded>
            <![CDATA[
                <p>When I first discovered microfrontends a few years ago, I thought I had found the ultimate solution to all my frontend architecture problems. Spoiler: it wasn’t. Not because they’re a bad technology, but because, like everything in software development, <strong>they come with a price that isn’t always mentioned when people talk about implementing them</strong>.</p>
<p>If you’re considering implementing microfrontends in your project, or if you already have them and feel identified with the chaos I’m about to describe, this post is for you.</p>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/microfrontends_5eea485f80.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/microfrontends_5eea485f80.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/microfrontends_5eea485f80.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/microfrontends_5eea485f80.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/microfrontends_5eea485f80.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="microfrontends vs. chaos" title="Microfrontends"/></article>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">The Promised Dream</h2>
<p>Let’s start with the beautiful part. Microfrontends arrive with a brilliant <strong>promise</strong>:</p>
<ul>
<li><strong>Full autonomy</strong> for each team</li>
<li><strong>Independent deployments</strong> without stepping on others’ code</li>
<li><strong>Organizational and technical scalability</strong></li>
<li><strong>Different technologies</strong> coexisting in harmony</li>
<li><strong>Clear ownership</strong> by business domains</li>
</ul>
<p>It sounds perfect, right? And in theory, it is. Each team builds its own slice of the frontend, deploys whenever it wants, uses the framework it prefers, and everything works like a Swiss watch.</p>
<p><em>Until it doesn’t.</em></p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">When Reality Hits</h2>
<p>The truth is that <strong>the real complexity of microfrontends isn’t in the code, but in everything else</strong>: teams, blurry boundaries, maintenance, coordination.</p>
<p>Let me tell you what no one mentions when they sell this architecture as the silver bullet.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">1 <span class="enum-header"></span> The Complexity You Didn’t Expect</h3>
<p><em>&quot;We went from having one big monolith that was hard to maintain to having 10 small monoliths that are hard to coordinate.&quot;</em></p>
<p>Suddenly you have:</p>
<ul>
<li>A <strong>multiplication of CI/CD pipelines</strong> (one per microfrontend).</li>
<li><strong>Versioning nightmares</strong> with shared libraries.</li>
<li><strong>Communication strategies</strong> between microfrontends that no one properly documented.</li>
<li><strong>Dependency synchronization issues</strong> breaking things everywhere.</li>
</ul>
<p>What used to be a problem in one place is now ten distributed problems. And distributing problems doesn’t make them disappear, <strong>it makes them harder to find</strong>.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">2 <span class="enum-header"></span> The User Experience Becomes Fragmented</h3>
<p><em>&quot;Each microfrontend is independent… until the user notices they all have different spinners.&quot;</em></p>
<p>Every team has its own priorities, pace, and way of doing things. The result:</p>
<ul>
<li><strong>Visual inconsistencies</strong> everywhere.</li>
<li><strong>Uneven performance</strong> (one loads in 200ms, another in 3 seconds).</li>
<li><strong>Interactions that don’t feel cohesive</strong>.</li>
<li><strong>Global changes</strong> (like a rebranding or accessibility improvements) turning into near-impossible missions.</li>
</ul>
<p>In the end, users don’t know or care that you’re using microfrontends. <strong>They just know your app feels weird.</strong></p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">3 <span class="enum-header"></span> Integration and Security: The Constant Headache</h3>
<p>Sharing authentication between microfrontends is more complex than it seems:</p>
<ul>
<li><strong>Token handling</strong> across domains or contexts.</li>
<li><strong>Cookies</strong> that work in development but break in production.</li>
<li><strong>Sandboxing issues</strong> when using iframes (and all the problems that come with them).</li>
<li><strong>Permissions and roles</strong> that must stay synchronized.</li>
</ul>
<p>And all of this while trying to keep things secure. Guaranteed fun.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">4 <span class="enum-header"></span> The Invisible Cost</h3>
<p>Microfrontends aren’t free. They carry a cost that isn’t always visible at the beginning:</p>
<ul>
<li><strong>Onboarding time</strong> multiplies (now you need to understand N architectures).</li>
<li><strong>Duplicated dependencies</strong> that bloat your bundles.</li>
<li><strong>Longer build times</strong> because everything is built separately.</li>
<li><strong>More complex infrastructure</strong> to maintain and monitor.</li>
</ul>
<p>Oh, and your cloud bill? It grows too.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">5 <span class="enum-header"></span> The Human Factor (The Most Important One)</h3>
<p><em>&quot;Microfrontends don’t fix misaligned teams, they just make the problems more visible.&quot;</em></p>
<p>This was the hardest lesson for me: <strong>architecture does not solve organizational problems</strong>.</p>
<p>If your teams don’t communicate, if there are no clear agreements, if ownership boundaries are blurry… microfrontends will only amplify those issues:</p>
<ul>
<li><strong>Conflicts</strong> over who owns what.</li>
<li><strong>Duplicated work</strong> because teams don’t know what others have already built.</li>
<li><strong>Technical inconsistency</strong> that turns into distributed technical debt.</li>
</ul>
<p>In the end, you’re left with frustrated development teams arguing with each other.</p>
<h2 class="block block-header h--h30-15-400 left  ">So, What Do We Do?</h2>
<p>I won’t leave you with a bitter aftertaste. Microfrontends <strong>can work,</strong> and despite how it may sound, I don’t hate them at all. But you need to carefully evaluate whether you truly need them and <strong>be willing to invest in doing them right</strong>.</p>
<p>Here are some lessons I’ve learned:</p>
<ol>
<li><strong>Evaluate whether you really need them</strong></li>
</ol>
<p>If your team is small or medium-sized, you probably don’t. A well-structured monolith can take you very far.</p>
<ol start="2">
<li><strong>Establish clear contracts</strong></li>
</ol>
<p>APIs, events, interfaces… everything clearly defined and documented. Without this, you’re heading straight into chaos. A well-organized Architecture team is your best ally.</p>
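<p>A cheap, effective way to make those contracts explicit is to publish the event types as their own shared package, so the compiler enforces them. A hedged sketch, with invented event names and payload shapes:</p>

```typescript
// Sketch of a typed event contract shared between teams as its own
// package. Event names and payloads are invented examples; the point is
// that the compiler rejects any payload that drifts from the agreement.
interface CartItemAdded {
  type: "cart:item-added";
  sku: string;
  quantity: number;
}

interface AuthLoggedOut {
  type: "auth:logged-out";
  reason: string;
}

type MfeEvent = CartItemAdded | AuthLoggedOut;

// Exhaustive switch: adding a new event without handling it here becomes
// a compile-time error, so the contract stays documented in one place.
function describeEvent(event: MfeEvent): string {
  switch (event.type) {
    case "cart:item-added":
      return `added ${event.quantity} x ${event.sku}`;
    case "auth:logged-out":
      return `logged out: ${event.reason}`;
  }
}

console.log(
  describeEvent({ type: "cart:item-added", sku: "SKU-1", quantity: 2 })
); // "added 2 x SKU-1"
```

<p>Versioning that package like any other API (semver, changelogs, deprecation windows) gives the Architecture team a concrete artifact to govern.</p>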
<ol start="3">
<li><strong>Communication</strong></li>
</ol>
<p>Having people who act as bridges between different development teams (business domains) is essential for things to flow smoothly.</p>
<ol start="4">
<li><strong>Unify the experience</strong></li>
</ol>
<p>Invest in a solid design system and shared base components. Consistency is non-negotiable.</p>
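<p>Even before a full component library, a shared design-token module goes a long way: every fragment imports its colors and spacing from one source of truth. A minimal sketch (token names and values are illustrative):</p>

```typescript
// Sketch of a shared design-token module that every microfrontend
// imports, so spacing and colors come from one source of truth.
// Token names and values here are made-up examples.
export const tokens = {
  color: {
    primary: "#0050b3",
    danger: "#c0392b",
  },
  // 4px base grid: spacing(3) yields "12px"
  spacing: (n: number): string => `${n * 4}px`,
} as const;

console.log(tokens.spacing(3)); // "12px"
```

<p>Publishing tokens separately from components also means a rebranding becomes one package bump instead of N coordinated releases.</p>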
<ol start="5">
<li><strong>Automate everything you can</strong></li>
</ol>
<p>Your pipelines, deployments, versioning, tests. Complexity is only manageable if it’s automated.</p>
<ol start="6">
<li><strong>Organization first, technology second</strong></li>
</ol>
<p>Microfrontends should reflect how your organization works, not the other way around. Remember Conway’s Law: your architecture will inevitably mirror your organizational structure.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">In Conclusion</h2>
<p>As architects, we shouldn’t focus only on the technical impact on the final product (scalability, resilience, etc.), but also on the <strong>Developer Experience</strong>. Our goal should be to provide the <strong>right tools</strong> and the <strong>necessary support</strong> so development teams can work efficiently and effectively.</p>
<p>Microfrontends are not the villain, but they’re not the hero that will save your project either. <strong>They’re just a tool.</strong> And like any powerful tool, you need to <strong>know when to use it… and when to leave it in the box</strong>.</p>
<p>The real question isn’t “Are microfrontends good or bad?” but rather:<br><strong>“Do they solve the concrete problems we have right now and are we willing to pay their cost?”</strong></p>
<p>If the answer is yes, then great: use them. If not, that’s fine too: sometimes the best architecture is <strong>the simplest one that works</strong>.</p>

            ]]>
        </content:encoded>
    </item>
</channel>
</rss>
