<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
<channel>
  <title>Paradigma Digital</title>
  <link>https://www.paradigmadigital.com/blog/</link>
  <atom:link href="https://www.paradigmadigital.com/feed.xml" rel="self" type="application/rss+xml" />
  <description>Big Data, Blockchain, agile culture, development, design… We offer all the information you need to stay up to date in technology.</description>
  <generator>Eleventy - 11ty.dev</generator>
  <language>en-US</language>
  <lastBuildDate>Fri, 01 May 2026 06:47:24 GMT</lastBuildDate>
  <image>
    <url>https://www.paradigmadigital.com/assets/img/logo/favicon.png</url>
    <title>Paradigma Digital</title>
    <link>https://www.paradigmadigital.com/blog/</link>
    <width>192</width>
    <height>192</height>
  </image>
  <item>
        <dc:creator>
            <![CDATA[ David García Luna ]]>
        </dc:creator>
        <title>The Speed Paradox: Why XP (Not AI) Will Keep Your Team from Burning Out</title>
        <link>https://en.paradigmadigital.com/organizational-transformation-rev/speed-paradox-xp-not-ai-will-keep-team-burn-out/</link>
        <pubDate>Thu, 30 Apr 2026 06:00:00 GMT</pubDate>
        <guid isPermaLink="true">https://en.paradigmadigital.com/organizational-transformation-rev/speed-paradox-xp-not-ai-will-keep-team-burn-out/</guid>
        <description>AI can accelerate delivery, but it shouldn’t turn teams into machines. In this new context, XP becomes relevant again for more than just its technical practices—it sets healthy boundaries on pace and protects team sustainability. We’ll walk you through how in this post.
</description>
        <content:encoded>
            <![CDATA[
                 <h2 class="block block-header h--h30-15-400 left  add-last-dot">The digital rat race</h2>
<p>The promise of Generative Artificial Intelligence is seductive: instant code, solutions in seconds, and skyrocketing productivity. But for project management roles, this raises an uncomfortable question: <strong>if machines don’t get tired, do we expect our human teams to keep up with that pace?</strong> In this productivity boom, XP’s “sustainable pace” is the only guarantee that your team won’t implode from burnout.</p>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/carrera_rata_digital_ia_52b5b15ddb.jpg"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/carrera_rata_digital_ia_52b5b15ddb.jpg 1920w,https://www.paradigmadigital.com/assets/img/resize/big/carrera_rata_digital_ia_52b5b15ddb.jpg 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/carrera_rata_digital_ia_52b5b15ddb.jpg 910w,https://www.paradigmadigital.com/assets/img/resize/small/carrera_rata_digital_ia_52b5b15ddb.jpg 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="the digital rat race: work, innovation, financial pressure, burnout" title="Digital rat race"/></article>
<p>This is where <a href="https://en.paradigmadigital.com/organizational-transformation-rev/ai-xp-from-craftsmanship-manifesto-to-ai-era/" target="_blank">Extreme Programming (XP) comes back into the spotlight</a>, not as an engineering methodology, but as a <strong>manifesto for human sustainability</strong>. Paradoxically, to move at AI speed, we need the <strong>safety brakes</strong> and <strong>psychological well-being</strong> that Kent Beck designed in the ’90s.</p>
<h2 class="block block-header h--h30-15-400 left  ">How does XP ensure well-being in “high-speed” environments?</h2>
<p>Unlike other agile frameworks that sometimes turn into a “ticket factory,” <strong>XP has explicit principles around team mental health</strong>. These are not HR suggestions—they are strict technical rules.</p>
<ul>
<li><strong>The Sustainable Pace rule</strong></li>
</ul>
<p>XP explicitly forbids heroics. The premise is simple: a tired person introduces more <em>bugs</em> in one hour than they can fix in two. XP states that if overtime happens one week, it’s an exception. If it happens two weeks in a row, the problem lies in the plan, not the team.</p>
<ul>
<li><strong>Psychological safety (the value of “courage”)</strong></li>
</ul>
<p>XP requires the courage to say <em>“no”</em> or <em>“I don’t know”</em>, but it also requires leadership to respect and listen to that truth. By eliminating fear-based estimation and replacing it with “accepted responsibility”—where the person doing the work estimates the work—anxiety around unrealistic deadlines is drastically reduced.</p>
<ul>
<li><strong>The end of the “lone wolf” concept</strong></li>
</ul>
<p>Through <em>Collective Code Ownership</em>, XP removes the stress of being “the only person who knows how this works.” If someone gets sick or goes on vacation, the project doesn’t stop—and no one has to take emergency calls from the beach.</p>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/bienestar_entornos_alta_velocidad_2d2956b5fc.jpg"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/bienestar_entornos_alta_velocidad_2d2956b5fc.jpg 1920w,https://www.paradigmadigital.com/assets/img/resize/big/bienestar_entornos_alta_velocidad_2d2956b5fc.jpg 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/bienestar_entornos_alta_velocidad_2d2956b5fc.jpg 910w,https://www.paradigmadigital.com/assets/img/resize/small/bienestar_entornos_alta_velocidad_2d2956b5fc.jpg 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="well-being in high-speed environments" title="Well-being in high-speed environments"/></article>
<h2 class="block block-header h--h30-15-400 left  ">Is AI the end of Pair Programming or its rebirth?</h2>
<p>One of the most controversial XP practices has always been <em>Pair Programming</em> (two people, one computer). With tools like Copilot, ChatGPT, Windsurf, Cursor, and many others, many managers ask: <em>“Why pay for two people if I have AI?”</em></p>
<p>To the question of whether AI will replace developers, we can respond with an analogy: <em>“Autopilot in airplanes has been around for decades, optimizing work—but it has never replaced the pilot. Why is the human factor still necessary?”</em></p>
<p>In software teams, we cannot rely solely on <em>Vibe Coding</em> (just as we cannot rely only on autopilot in aviation).</p>
<p><em>Vibe Coding</em> produces code that seems to work but is incomprehensible and ultimately unmaintainable, leading to <strong>production failures due to lack of human oversight</strong>, with much higher long-term cost and impact.</p>
<p>To avoid this scenario, XP proposes a <strong>crucial evolution</strong>:</p>
<ul>
<li><strong>From “Driver and Navigator” to “Cyborg Pairing”</strong>. AI doesn’t replace people—it elevates the pair. In this new model, AI acts as a “learning accelerator” for junior profiles, reducing the time spent searching for information—but (and this is critical) <strong>without removing the need for human validation</strong>.</li>
<li><strong>The danger of “Vibe Coding”</strong>. Martin Fowler and Kent Beck warn about the risk of accepting AI-generated code just because it “seems to work.” XP acts as a counterbalance: human <em>Pair Programming</em> remains essential for strategy and architectural design—areas where AI tends to hallucinate or introduce complex technical debt. AI writes, humans navigate, and XP sets the traffic rules.</li>
</ul>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/pair_programming_06934ea815.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/pair_programming_06934ea815.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/pair_programming_06934ea815.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/pair_programming_06934ea815.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/pair_programming_06934ea815.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="Pair Programming with AI" title="Pair Programming with AI"/></article>
<h2 class="block block-header h--h30-15-400 left  ">TDD: why work twice as hard to go faster?</h2>
<p>XP requires writing the test <em>before</em> the code (Test-Driven Development).</p>
<ul>
<li><strong>TDD is not a testing technique—it’s an anxiety management technique</strong>. Knowing you have an automated safety net that alerts you in seconds if something breaks gives the team “extreme” confidence.</li>
<li><strong>AI excels at generating tests</strong>. Using AI to generate the TDD test suite before writing the code is the ultimate productivity hack. It turns AI into the strictest quality gatekeeper.</li>
</ul>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">The death of documentation (as we know it)</h2>
<p>XP has a reputation for lacking documentation. That reputation is unfounded, but it has a clear origin: the Agile Manifesto's break with the traditional approach of the ’80s, when teams produced pages and pages of documentation that were rarely used.</p>
<ul>
<li><strong>XP relies on clean code practices and self-explanatory code</strong> that others can understand without context.</li>
<li><strong>Today, AI tools can read XP’s clean code and automatically generate business documentation</strong>. But this only works if the team follows XP’s discipline of “Simplicity” and proper naming. If the code is a mess, AI will document a mess.</li>
</ul>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Conclusion: returning to the human to survive the artificial</h2>
<p>In the age of AI, speed is a <em>commodity</em>. Anyone can generate code quickly. A team’s competitive advantage is no longer <strong>how much</strong> code it produces, but its ability to <strong>maintain it, adapt it</strong>, and—above all—avoid imploding from burnout in the process. XP is the methodology that reminds us that software is built by people, for people.</p>

            ]]>
        </content:encoded>
    </item><item>
        <dc:creator>
            <![CDATA[ Vanessa Davo Parreño ]]>
        </dc:creator>
        <title>How to Use GSAP to Create Particle Effects in the DOM</title>
        <link>https://en.paradigmadigital.com/dev/how-use-gsap-create-particle-effects-dom/</link>
        <pubDate>Tue, 28 Apr 2026 06:00:00 GMT</pubDate>
        <guid isPermaLink="true">https://en.paradigmadigital.com/dev/how-use-gsap-create-particle-effects-dom/</guid>
        <description>You don’t need to fill a website with animations to improve the experience. Sometimes, a bit of well-timed visual feedback is enough. A button, a burst of particles, and GSAP can turn a simple action like hitting “like” into something much more memorable. In this post, we’ll show you how to do it step by step.
</description>
        <content:encoded>
            <![CDATA[
                <p>Admit it: you’ve also clicked a “Like” button a thousand times just because it triggered a burst of color or a shower of hearts. I don’t blame you—I’ve done it too. These kinds of microinteractions are what create that satisfying feeling that <strong>the interface is alive</strong>.</p>
<p>This detail, which might seem trivial, is precisely what separates a well-crafted, thoughtfully designed digital product from a generic, dull application.</p>
<p>That said, there’s no need to overload your website with unnecessary animations. But when you use them in the right place (like on that primary action button), you achieve something very powerful: giving users immediate and satisfying feedback.<br>
And the truth is, creating these kinds of animations is much easier than it seems. So in today’s post, we’ll build, step by step, <strong>a particle-based microinteraction for a like button</strong>.</p>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img data-src="https://www.paradigmadigital.com/assets/cms/like_liked_transicion_b54a037d63.gif" class="lazy-img" title="Like button" alt="like button to liked transition"></article>
<h2 class="block block-header h--h30-15-400 left  ">What tools are we going to need?</h2>
<p>For this project, we’ll use <strong>HTML, CSS, and JavaScript</strong>, but the real animation engine will be <a href="https://gsap.com/" target="_blank">GSAP</a>.</p>
<p>If you haven’t heard of GSAP, it’s one of the <strong>most popular tools for animating almost anything on the web</strong>. It saves a lot of time (and lines of code) compared to doing it natively.</p>
<p>I’ve prepared a CodePen example so you can try it out and see the final result right away, but we’ll still go through it step by step. You can check it out here: <a href="https://codepen.io/editor/vanessadavo/pen/019d7c5e-bd13-744a-b078-08f91a48f5f8" target="_blank">Like Button (GSAP)</a>.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">HTML Structure: defining the semantic structure of the button</h2>
<p>To get started, we’ll <strong>set up a basic HTML structure</strong>. The most important part here—besides linking our local style and logic files—is <strong>importing the required libraries</strong> so the particle animation can work properly.</p>
<pre><code class="language-html">&lt;html&gt;
  &lt;head&gt;
    &lt;meta charset=&quot;UTF-8&quot;&gt;
    &lt;link rel=&quot;stylesheet&quot; href=&quot;./style.css&quot;&gt;
    &lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1.0&quot;&gt;
    &lt;title&gt;Like Button&lt;/title&gt;
  &lt;/head&gt;
  &lt;body&gt;
    &lt;!-- GSAP --&gt;
    &lt;script src=&quot;https://unpkg.com/gsap@3/dist/gsap.min.js&quot;&gt;&lt;/script&gt;
    &lt;script src=&quot;https://assets.codepen.io/16327/Physics2DPlugin3.min.js&quot;&gt;&lt;/script&gt;

    &lt;!-- SCRIPT --&gt;
    &lt;script src=&quot;./script.js&quot;&gt;&lt;/script&gt;
  &lt;/body&gt;
&lt;/html&gt;
</code></pre>
<p>For this effect to work properly, we need to import <strong>two key pieces</strong> from the GSAP ecosystem:</p>
<ul>
<li><strong>GSAP Core</strong>: this is the main engine of the library. It allows us to create timelines and manage the basic properties of the elements we’re going to animate.</li>
<li><strong>Physics2DPlugin</strong>: this plugin acts as the physics simulation engine for the animation. It lets us define parameters such as gravity, velocity, and dispersion angle, delegating all the complex mathematical calculations to GSAP to achieve realistic trajectories.</li>
</ul>
<p><em><strong>Note:</strong> the Physics2D plugin is a premium GSAP tool. In this example, we use it via a specific URL for CodePen usage, but it requires a license if you plan to implement it in a commercial project.</em></p>
<p>Now, inside our &lt;body&gt; (and always before the &lt;script&gt; tags), we’ll add the <strong>button structure</strong>.</p>
<pre><code class="language-html">&lt;button id=&quot;like-btn&quot;&gt;
  &lt;div id=&quot;emitter&quot;&gt;
    &lt;svg xmlns=&quot;http://www.w3.org/2000/svg&quot; fill=&quot;none&quot; viewBox=&quot;0 0 24 24&quot; stroke-width=&quot;2&quot; stroke=&quot;currentColor&quot;&gt;
      &lt;path stroke-linecap=&quot;round&quot; stroke-linejoin=&quot;round&quot; d=&quot;M21 8.25c0-2.485-2.099-4.5-4.688-4.5-1.935 0-3.597 1.126-4.312 2.733-.715-1.607-2.377-2.733-4.313-2.733C5.1 3.75 3 5.765 3 8.25c0 7.22 9 12 9 12s9-4.78 9-12Z&quot; /&gt;
    &lt;/svg&gt;
  &lt;/div&gt;
  &lt;span&gt;Like&lt;/span&gt;
&lt;/button&gt;
</code></pre>
<p>In this structure, we’ve defined <strong>two important identifiers</strong>:</p>
<ul>
<li><strong>like-btn</strong>: this is the ID of the main button. We’ll use it in JavaScript to listen for the click event and to manage style or text changes when the user interacts with it.</li>
<li><strong>emitter</strong>: this element is essential. It acts as the container for the icon, but its main purpose is to serve as the &quot;origin point&quot; for the particles. By keeping it separate, we can ensure that the explosion starts exactly from the position of the heart, rather than from any part of the button.</li>
</ul>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">CSS Styles: preparing the look &amp; feel and the container</h2>
<p>To define the styles, we’ll create a file called <strong>style.css</strong>. The first step is to import the typography and define a set of CSS variables (<a href="https://developer.mozilla.org/en-US/docs/Web/CSS/Guides/Cascading_variables/Using_custom_properties" target="_blank">Custom Properties</a>). This will allow us to <strong>manage colors and reusable values</strong> from a single place, making our design much easier to maintain.</p>
<pre><code class="language-css">@import url('https://fonts.googleapis.com/css2?family=DM+Sans:ital,opsz,wght@0,9..40,100..1000;1,9..40,100..1000&amp;display=swap');

:root{
  --color-bg: #e9ecef;
  --color-primary: #495057;
  --color-accent: #ff3562;
  --color-on-accent: #ffffff;

  --text-sm: 1.1em;
  --text-md: 1.8rem;
    
  --transition-default: all 0.25s ease;
}
</code></pre>
<p>Next, we'll define the <strong>global style</strong> to clear the margins and center our button on the screen.</p>
<pre><code class="language-css">*{
  font-family: &quot;DM Sans&quot;, sans-serif;
}

body,
html {
  height: 100%;
  margin: 0;
  padding: 0;
  overflow: hidden;
}

body {
  display: flex;
  align-items: center;
  justify-content: center;
  flex-direction: column;
  min-height: 100vh;
  background-color: var(--color-bg);
}
</code></pre>
<p>In addition, we'll set up the <strong>.particles-container</strong> class. Although we'll create this container dynamically in JavaScript, for the sake of code clarity, we'll define its style here:</p>
<pre><code class="language-css">.particles-container {
  position: absolute;
  left: 0;
  top: 0;
  overflow: visible;
  z-index: 2;
  pointer-events: none;
  opacity: 0;
}
</code></pre>
<p>In the .particles-container class, there are a few properties that deserve special attention:</p>
<ul>
<li><strong>z-index: 2:</strong> the z-index property controls the stacking order of elements along the Z-axis (depth). By default, elements have a value of auto, which stacks as if it were 0. If we want the particles to explode <em>above</em> the button and not be hidden behind it, we need to assign them a higher value, in this case 2.</li>
<li><strong>pointer-events: none:</strong> this property is key. It prevents the particles from interfering with mouse interactions. This way, even if a particle passes over the cursor, the user can still click the button without any issues.</li>
<li><strong>opacity: 0:</strong> initially, the container is invisible. We’ll only reveal it with GSAP at the exact moment of the animation.</li>
</ul>
<p>The next step is to <strong>define the style of our button</strong>. You’ll notice that we make use of the <strong>CSS variables defined earlier</strong>, which helps keep the code cleaner and more consistent.</p>
<p>We’ll also prepare the <strong>.clicked</strong> state. This class will be added via JavaScript when the button is clicked, allowing us to modify the style of the button and its inner elements (such as the icon and the text):</p>
<pre><code class="language-css">button{
  border-radius: 8px;
  cursor: pointer;
  padding: 12px 26px;
  font-size: var(--text-md);
  border: 2px solid var(--color-primary);
  display: flex;
  align-items: center;
  gap: 12px;
  color: var(--color-primary);
  background-color: #e9ecef;
  transition: var(--transition-default);
}

button.clicked{
  border-color: var(--color-accent);
  background-color: var(--color-accent);
  color: var(--color-on-accent);
}

button svg{
  width: var(--text-sm);
  fill: var(--color-primary);
  stroke: var(--color-primary);
  transition: var(--transition-default);
}

button.clicked svg{
  fill: var(--color-on-accent);
  stroke: var(--color-on-accent);
}
</code></pre>
<p>Finally, we need to <strong>define the particle's appearance</strong>. In this example, we've chosen a design featuring circular dots. However, you can customize this style however you like.</p>
<pre><code class="language-css">.dot {
  position: absolute;
  border-radius: 50%;
}
</code></pre>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">JavaScript Logic: building the particle system with GSAP</h2>
<p>It’s time to take action. Now we’ll set up the <strong>logic required for the button to respond to clicks</strong> and generate the particle explosion. We’ll manage this entire process from our <strong>script.js</strong> file.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Step 1. Register GSAP plugins</h3>
<p>The first thing we need to do is <strong>register the plugin with the GSAP core engine</strong>. This is essential so the library can recognize the physical properties we’ll use later.</p>
<pre><code class="language-javascript">gsap.registerPlugin(Physics2DPlugin);
</code></pre>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Step 2. Declare the elements</h3>
<p>In this step, we’ll <strong>capture the DOM elements</strong> we’re going to interact with. We’ll also take the opportunity to dynamically create the container where our particles will live:</p>
<pre><code class="language-javascript">// Main emitter element (used as explosion origin)
const emitter = document.getElementById(&quot;emitter&quot;);

// Like button + text
const likeBtn = document.getElementById(&quot;like-btn&quot;);
const text = likeBtn.querySelector(&quot;span&quot;);

// Container that holds all particles
const container = document.createElement(&quot;div&quot;);
container.className = &quot;particles-container&quot;;
document.body.appendChild(container);
</code></pre>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Step 3. GSAP configuration and states</h3>
<p>At this point, we’ll <strong>define the variables that control the behavior of our animation</strong>. What’s interesting about this approach is that, by simply tweaking these values, you can completely transform the effect.</p>
<p>I encourage you to experiment with parameters like <strong>velocity or gravity</strong> to see how the physics of the motion changes. In addition, in this step we’ll also <strong>declare the initial state</strong> of the button:</p>
<pre><code class="language-javascript">const emitterSize = 2;   // spawn area radius
const dotQuantity = 12;  // number of particles
const dotSizeMax = 12;   // max particle size
const dotSizeMin = 4;    // min particle size
const speed = 0.6;       // initial velocity multiplier
const gravity = 0.1;     // gravity strength

// Toggle state (liked / not liked)
let liked = false;
</code></pre>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Step 4. Explosion setup</h3>
<p>In this step, we’ll create a <strong>reusable function</strong> responsible for generating the &quot;explosion&quot; every time it is called. The idea is to build a <strong>GSAP timeline</strong> that contains the individual animations for each particle.</p>
<pre><code class="language-javascript">const explosion = createExplosion(container);

function createExplosion(container) {
  const tl = gsap.timeline({ paused: true });

  for (let i = 0; i &lt; dotQuantity; i++) {

    // Create particle element
    const dot = document.createElement(&quot;div&quot;);
    dot.className = &quot;dot&quot;;

    // Assign random color
    const colors = [&quot;#ff4d6d&quot;, &quot;#fb6f92&quot;, &quot;#ff8fab&quot;, &quot;#ffb3c1&quot;, &quot;#cdb4db&quot;, &quot;#a2d2ff&quot;];
    dot.style.backgroundColor = colors[Math.floor(Math.random() * colors.length)];

    container.appendChild(dot);

    // Random size &amp; direction (angle in radians)
    const size = gsap.utils.random(dotSizeMin, dotSizeMax, 1);
    const angle = Math.random() * Math.PI * 2;

    // Initial offset inside the emitter radius (clamped so it can't go
    // negative when the particle is larger than the emitter)
    const length = Math.random() * Math.max(0, emitterSize / 2 - size / 2);


    gsap.set(dot, {
      x: Math.cos(angle) * length,
      y: Math.sin(angle) * length,
      width: size,
      height: size,
      xPercent: -50,
      yPercent: -50,
      force3D: true,
      scale: gsap.utils.random(0.8, 1.2)
    });

    tl.to(
      dot,
      {
        physics2D: {
          angle: (angle * 180) / Math.PI, // convert to degrees
          velocity: (120 + Math.random() * 200) * speed,
          gravity: 500 * gravity
        },
        duration: 1 + Math.random()
      },
      0
    )
    .to(
      dot,
      {
        opacity: 0,
        scale: 0.5,
        duration: 0.3,
        ease: &quot;power2.in&quot;,
      },
      0.3
    );
  }

  return tl;
}
</code></pre>
<p>In this function, there are a few <strong>technical concepts</strong> we need to understand:</p>
<ol>
<li><strong>Creation loop</strong>. We use a <code>for</code> loop based on the <code>dotQuantity</code> variable to generate all particles at once. Each one gets a random color and size so the explosion doesn’t look artificial or repetitive.</li>
<li><strong>Trajectory calculation</strong>. We use mathematical functions (<code>Math.cos</code> and <code>Math.sin</code>) to position the particles in a circle around the center point. This allows them to shoot out in all directions (360 degrees).</li>
<li><strong>The power of Physics2D</strong>. Instead of manually animating the <code>x</code> and <code>y</code> coordinates, we simply pass an angle, velocity, and gravity to the plugin. GSAP takes care of drawing the perfect parabolic motion.</li>
<li><strong>Synchronization (position parameters)</strong>. You’ll notice that the <code>.to()</code> functions include a <code>0</code> or a <code>0.3</code> at the end. In GSAP, this is called a <a href="https://gsap.com/resources/position-parameter/" target="_blank">Position Parameter</a>.</li>
</ol>
<ul>
<li><strong>The &quot;0&quot;</strong>: forces all particles to start their animation exactly at second zero on the timeline. Without it, particles would appear one after another; with it, they all fire simultaneously, creating the explosion effect.</li>
<li><strong>The &quot;0.3&quot;</strong>: indicates that the particle should start fading out (opacity 0) when the timeline reaches 0.3 seconds, allowing it to disappear while still moving, resulting in a much smoother and more natural effect.</li>
</ul>
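<p>If you want to poke at the trajectory math on its own, here is a minimal sketch that needs no DOM or GSAP (the <code>Math.max</code> clamp is a defensive addition of this sketch, so the offset never goes negative when the particle is larger than the emitter):</p>

```javascript
// Standalone sketch of the spawn math used in the explosion loop.
function spawnOffset(emitterSize, size) {
  const angle = Math.random() * Math.PI * 2;  // random direction, in radians
  // Random distance from the center, clamped to stay non-negative
  const length = Math.random() * Math.max(0, emitterSize / 2 - size / 2);
  return {
    x: Math.cos(angle) * length,              // offset inside the emitter
    y: Math.sin(angle) * length,
    angleDeg: (angle * 180) / Math.PI         // degrees, as Physics2D expects
  };
}

const p = spawnOffset(40, 8);
console.log(p);
```

<p>Feeding <code>angleDeg</code> plus a velocity and gravity into <code>physics2D</code> is all the plugin needs to trace the parabola.</p>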
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Step 5. Explosion function</h3>
<p>Once we have the animation logic ready, we need a <strong>function that positions the particle container in the correct place and triggers the movement</strong>. Without this step, the particles might appear in a corner of the screen instead of originating from the center of the button.</p>
<pre><code class="language-javascript">function explode(element) {
  const bounds = element.getBoundingClientRect();

  // Move container to element center
  gsap.set(container, {
    x: bounds.left + bounds.width / 2,
    y: bounds.top + bounds.height / 2,
    opacity: 1
  });

  // Restart animation from the beginning
  explosion.restart();
}
</code></pre>
<p>This function is responsible for <strong>positioning the effect in space</strong>: first, we use <strong>getBoundingClientRect()</strong> to calculate the exact coordinates of the button on the screen, and then, using <strong>gsap.set()</strong>, we instantly move the particle container to its center.</p>
<p>Finally, we execute <strong>explosion.restart()</strong> so the animation timeline resets and plays from the beginning with each new click, ensuring the explosion always originates from the correct position.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Step 6. Click event and final interaction</h3>
<p>To wrap up, we need to <strong>detect when the user interacts with the button and trigger all the logic we’ve implemented</strong>. With the following code, we handle the state change, visual update, and particle launch:</p>
<pre><code class="language-javascript">likeBtn.addEventListener(&quot;click&quot;, () =&gt; {

  // Toggle like state
  liked = !liked;
  text.textContent = liked ? &quot;Liked&quot; : &quot;Like&quot;;

  // Toggle active class (for styling)
  likeBtn.classList.toggle(&quot;clicked&quot;, liked);

  // Button press animation (quick scale feedback)
  gsap.fromTo(
    likeBtn,
    { scale: 1 },
    { scale: 0.9, duration: 0.1, yoyo: true, repeat: 1 }
  );

  // Trigger particle explosion
  liked &amp;&amp; explode(emitter);
});
</code></pre>
<p>In this final block, <strong>two key things happen</strong>:</p>
<ul>
<li><strong>Tactile feedback</strong>: using <code>gsap.fromTo</code>, we create a real press sensation by slightly scaling the button. The use of <code>yoyo: true</code> makes the button automatically return to its original size.</li>
<li><strong>Conditional logic</strong>: thanks to the <code>&amp;&amp;</code> operator, the <code>explode</code> function only runs when the user hits &quot;Like&quot;, preventing particles from triggering when the action is undone.</li>
</ul>
<p>I hope this post has helped you better understand how these kinds of effects work and that, from now on, you feel confident enough to <strong>implement your own particle animations</strong> to add a touch of “magic” to your interfaces. Time to experiment!</p>

            ]]>
        </content:encoded>
    </item><item>
        <dc:creator>
            <![CDATA[ José Luis Palomino ]]>
        </dc:creator>
        <title>The Importance of Prompt Engineering</title>
        <link>https://en.paradigmadigital.com/dev/importance-prompt-engineering/</link>
        <pubDate>Thu, 23 Apr 2026 06:00:00 GMT</pubDate>
        <guid isPermaLink="true">https://en.paradigmadigital.com/dev/importance-prompt-engineering/</guid>
        <description>A prompt is not a nicely written piece of text or an improvised instruction—it’s code. And like any other code, it requires testing, versioning, continuous evaluation, and a clear operational strategy. Especially when working with agentic systems, multiple models, and workflows where a small error can drastically increase latency, cost, and complexity. We’ll walk you through it in this post.
</description>
        <content:encoded>
            <![CDATA[
                <p>Many people think that generative AI is about throwing questions into the air and crossing your fingers hoping the answer is correct. But have you stopped to think whether the model’s response is actually accurate? Have you considered whether the execution process is efficient? Whether you’re even using the right model?</p>
<p>In this post, we break down the reality. <strong>What happens if you don’t measure latency, cost per token, and the traceability of each call?</strong> Simply put, you end up with an experiment that’s not very useful in a professional environment.</p>
<p>Integrating LLMs into production requires an engineering mindset where the prompt is just the tip of the iceberg of a complex and auditable system.</p>
<h2 class="block block-header h--h30-15-400 left  ">Is prompting a trend or an engineering discipline?</h2>
<p>There is a misconception that simplifies prompt engineering to just writing polite instructions for language models. The reality in development teams is very different. When you try to scale an agentic system, you quickly realize that <strong>the prompt is code</strong> and, as such, <strong>it requires versioning, testing, and constant monitoring</strong>.</p>
<p>We’ve seen organizations embedding prompts directly into their source code, creating an operational nightmare every time they want to tweak a comma or test a new model. <strong>Treating the prompt as an external, managed asset</strong> allows iteration without redeploying the entire microservices infrastructure.</p>
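<p>To make the idea concrete, here is a minimal, vendor-neutral sketch of such an external prompt store. The <code>PromptRegistry</code> name and in-memory layout are our own illustration, not any specific tool’s API; in production this role is played by a managed service (for example, a prompt-management platform, a database, or a config repository):</p>
<pre><code class="language-python">from dataclasses import dataclass

@dataclass(frozen=True)
class PromptVersion:
    name: str
    version: int
    template: str

class PromptRegistry:
    """Hypothetical in-memory store: prompts live outside the app code,
    so tweaking a comma never requires redeploying a microservice."""
    def __init__(self):
        self._store = {}  # name -> list of PromptVersion, oldest first

    def publish(self, name: str, template: str) -> PromptVersion:
        versions = self._store.setdefault(name, [])
        pv = PromptVersion(name, len(versions) + 1, template)
        versions.append(pv)
        return pv

    def get(self, name: str, version: int = 0) -> PromptVersion:
        # version 0 means "latest"; pinning a version keeps runs reproducible
        versions = self._store[name]
        return versions[-1] if version == 0 else versions[version - 1]

registry = PromptRegistry()
registry.publish("classify-intent", "Classify the intent of: {text}")
registry.publish("classify-intent", "You are a router. Classify the intent of: {text}")

latest = registry.get("classify-intent")
prompt = latest.template.format(text="cancel my card")
</code></pre>
<p>Because callers resolve the prompt at runtime, you can A/B test a new wording or roll back a bad one without touching the service that uses it.</p>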
<p>The real problem arises when we move from a simple chat to a network of specialized agents. Here, a <strong>misinterpretation error</strong> in the “supervisor agent” can lead to an endless loop of API calls that skyrocket your monthly bill. Designing a robust AI system means controlling which model responds to each task based on its complexity.</p>
<h2 class="block block-header h--h30-15-400 left  ">How much does each word your model generates actually cost?</h2>
<p>If you don’t have <strong>visibility into token consumption</strong>, you’re navigating blind. Model selection is a matter of process economics. Using high-performance models for simple classification tasks is a waste of resources, negatively impacting both budget and user experience due to accumulated latency.</p>
<p>In our implementations, we <strong>segment tasks</strong>: we reserve heavier models for complex reasoning and use “Flash” versions or local models for metadata extraction or intent classification.</p>
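<p>As an illustration of that segmentation, the routing table below is a hedged sketch: the model names, the task list, and the per-token prices are placeholders we made up, not real pricing:</p>
<pre><code class="language-python"># Illustrative tiers: model names and prices are placeholders.
TIERS = {
    "light": {"model": "flash-model", "usd_per_1k_tokens": 0.0001},
    "heavy": {"model": "pro-model", "usd_per_1k_tokens": 0.005},
}

# Cheap, well-bounded tasks that do not need deep reasoning.
SIMPLE_TASKS = {"intent_classification", "metadata_extraction", "language_detection"}

def route(task: str) -> dict:
    """Reserve the heavy model for complex reasoning only."""
    return TIERS["light"] if task in SIMPLE_TASKS else TIERS["heavy"]

def estimated_cost(task: str, tokens: int) -> float:
    """Expected cost of one call, given a token budget."""
    return route(task)["usd_per_1k_tokens"] * tokens / 1000
</code></pre>
<p>With these illustrative prices, a 2,000-token classification call costs 50x less on the light tier than it would on the heavy model.</p>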
<p>Why settle for the first response? Prompt optimization using <strong>advanced techniques like Few-Shot or Chain of Thought</strong> improves accuracy and reduces the need for retries.</p>
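<p>A minimal sketch of what those techniques look like in practice (the wording of the reasoning cue is one common phrasing, not a canonical formula):</p>
<pre><code class="language-python">def few_shot_prompt(instruction, examples, query, chain_of_thought=False):
    """Build a few-shot prompt; optionally add a Chain-of-Thought cue
    asking the model to reason before answering."""
    parts = [instruction]
    for sample_input, sample_output in examples:
        parts.append(f"Input: {sample_input}\nOutput: {sample_output}")
    if chain_of_thought:
        parts.append("Think step by step before giving the final output.")
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

prompt = few_shot_prompt(
    "Classify the sentiment as positive or negative.",
    [("Great service!", "positive"), ("Never again.", "negative")],
    "The latency was awful.",
)
</code></pre>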
<p>Every failed or hallucinated call means wasted time for the user and unnecessary cost. <strong>End-to-end traceability</strong> becomes essential to identify where efficiency is lost or where the model starts drifting.</p>
<h2 class="block block-header h--h30-15-400 left  ">Why operate a black box without real metrics?</h2>
<p>Observability is the Achilles’ heel of many AI projects. This is where tools like <strong>Langfuse</strong> come into play, shedding light on what happens behind each request.</p>
<p>It’s not enough to know that the system responded—we need to know <strong>how long each node in the execution graph took, which prompt version was used, and whether the retrieved context (<a href="https://en.paradigmadigital.com/techbiz/retrieval-augmented-generation-corporate-usage/" target="_blank">RAG</a>) was actually useful for the final answer</strong>.</p>
<p>Having a centralized prompt repository with caching policies reduces retrieval latency and ensures scalability under high demand.</p>
<p><strong>Evaluation cannot be subjective</strong>. We implement “LLM-as-a-judge” systems and test datasets to automatically score toxicity, conciseness, and factual accuracy against a ground truth. <strong>Automating performance evaluation</strong> allows us to detect quality regressions before end users encounter inconsistent responses.</p>
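<p>A sketch of the “LLM-as-a-judge” idea, with the model call injected as a plain callable so the same harness works with any provider (the grading prompt and the pass threshold here are our illustrative choices):</p>
<pre><code class="language-python">def judge_answer(model_fn, question, answer, ground_truth, threshold=7.0):
    """Ask a judge model to grade an answer against a ground truth.
    model_fn: any callable taking a prompt string and returning text."""
    grading_prompt = (
        "Grade the ANSWER against the GROUND TRUTH from 0 to 10 for "
        "factual accuracy. Reply with the number only.\n"
        f"QUESTION: {question}\nANSWER: {answer}\nGROUND TRUTH: {ground_truth}"
    )
    score = float(model_fn(grading_prompt).strip())
    return {"score": score, "passed": score >= threshold}

# A stubbed judge lets the evaluation harness itself be unit-tested:
stub_judge = lambda _prompt: "9"
verdict = judge_answer(stub_judge, "Capital of France?", "Paris.", "Paris")
</code></pre>
<p>Running this over a fixed test dataset on every prompt change is what turns subjective “it looks fine” into a regression gate.</p>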
<h2 class="block block-header h--h30-15-400 left  ">How does model size influence your prompt structure?</h2>
<p><strong>Not all LLMs process information with the same level of sophistication</strong>, and this is where cost efficiency collides with technical reality.</p>
<p>In our architecture, we orchestrate flash, mini, nano, or pro versions depending on the task, confirming an inverse rule: <strong>the larger the model, the less prompting effort you need</strong>. When using a powerful model, you rarely need complex prompt structures—it resolves things through brute force.</p>
<p>The challenge arises when optimizing for millisecond-level latency with smaller models. <strong>Technical demands grow exponentially: the lighter the model, the more precise and unambiguous your prompt must be to avoid inconsistent outputs</strong>.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Conclusions</h2>
<p>The success of a generative AI implementation does not depend on finding the “perfect prompt,” but on <strong>building an engineering ecosystem around it</strong>.</p>
<p>Controlling latency, optimizing costs through smart model selection, and measuring every interaction with observability tools are essential steps for any team aiming to move beyond the prototype phase.</p>
<p><strong>Operational transparency</strong> is the only way to build trust in systems that are, by nature, probabilistic.</p>
<p>If you still have doubts about whether just anyone can work as a prompt engineer, here’s a bonus:</p>
<h3 class="block block-header h--h20-175-500 left  ">Are you talking to the real engine or a polished version?</h3>
<p>There’s a reality shock when you move from interacting with a language model through its conversational interface to integrating it via code. The interface you use daily hides a massive system prompt that shapes everything.</p>
<p>That artificial politeness and tendency toward long explanations come from a layer designed for end users. <strong>When you connect directly to the API, you face a raw environment that assumes nothing for you</strong>.</p>
<p>The proactive assistance disappears, and the responses become vaguer and less polished. Operating at this level forces you to build your own guardrails. Every detail matters.</p>
<p>If you still think this role isn’t necessary in an AI team and want to give it a try—good luck 🍀. See you in the comments 👇.</p>
<p><strong>References and links</strong></p>
<ul>
<li><a href="https://langfuse.com/docs" target="_blank">Langfuse integration documentation</a></li>
<li><a href="https://langfuse.com/docs/prompts/get-started" target="_blank">Prompt management and two-level caching guide</a></li>
<li><a href="https://langfuse.com/docs/evaluation/evaluation-methods/scores-via-sdk" target="_blank">Quality and fidelity evaluation systems in LLMs</a></li>
<li><a href="https://langchain-ai.github.io/langgraph/" target="_blank">Agent orchestration architecture with LangGraph</a></li>
</ul>

            ]]>
        </content:encoded>
    </item><item>
        <dc:creator>
            <![CDATA[ David García Luna ]]>
        </dc:creator>
        <title>AI-XP: from the Craftsmanship Manifesto to the AI era</title>
        <link>https://en.paradigmadigital.com/organizational-transformation-rev/ai-xp-from-craftsmanship-manifesto-to-ai-era/</link>
        <pubDate>Tue, 21 Apr 2026 06:00:00 GMT</pubDate>
        <guid isPermaLink="true">https://en.paradigmadigital.com/organizational-transformation-rev/ai-xp-from-craftsmanship-manifesto-to-ai-era/</guid>
        <description>AI accelerates software delivery, but it can also accelerate technical debt, confusion, and loss of control. That’s why it makes sense to talk about AI-XP: using the speed that AI provides within a framework that ensures quality, learning, and sustainability. We’ll show you how to leverage its potential with AI.
</description>
        <content:encoded>
            <![CDATA[
                <p>In the software development ecosystem, when we talk about <strong>agile methodologies</strong>, Scrum and Kanban tend to be the undisputed protagonists, often leaving <strong>Extreme Programming (XP) as “the forgotten one.”</strong></p>
<p>However, for roles like Manager / Scrum Master / Flow Master who aim not only to organize work but to <strong>ensure technical excellence and team sustainability</strong>, overlooking XP is a strategic mistake. While other frameworks focus on management flow, XP focuses on the trenches—on <strong>how the product is built to be robust, flexible, and sustainable</strong>.</p>
<h2 class="block block-header h--h30-15-400 left  ">What is XP?</h2>
<p>If you’ve never gone deep into XP, think of it not as a radical invention, but as a <strong>collection of what actually works</strong>. Just as J.R.R. Tolkien didn’t invent myths about elves and dwarves, but instead compiled and systematized centuries of European folklore into a coherent and functional mythology, Kent Beck did something similar with software engineering in the late ’90s.</p>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/que_es_extreme_programming_7b616f493f.jpg"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/que_es_extreme_programming_7b616f493f.jpg 1920w,https://www.paradigmadigital.com/assets/img/resize/big/que_es_extreme_programming_7b616f493f.jpg 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/que_es_extreme_programming_7b616f493f.jpg 910w,https://www.paradigmadigital.com/assets/img/resize/small/que_es_extreme_programming_7b616f493f.jpg 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="circular diagram. starting point is &quot;on site customer&quot; and you can move right or left. to the right: test driven development (tdd), two linked minds, sustainable pace, code as documentation, collective code ownership. to the left: pair programming and collective code ownership" title="What is Extreme Programming"/></article>
<p>Beck didn’t invent testing, code review, or customer collaboration. What XP did was <strong>gather these isolated “best practices”</strong>, already known to improve quality, and <strong>push them to the extreme of their effectiveness</strong>, organizing them under a unified philosophy.</p>
<p>Beyond technical excellence, XP also emphasized the importance of having what we now call a <strong>Product Owner physically present with the team</strong> (<a href="http://www.extremeprogramming.org/rules/customer.html" target="_blank">on-site customer</a>) to improve the conversation between business and development. Additionally, one of XP’s most remembered contributions to Agile is <strong>User Stories</strong> as we know them today (with the <a href="https://ronjeffries.com/xprog/articles/expcardconversationconfirmation/" target="_blank">3 C’s</a>), replacing heavy and hard-to-understand requirement documents.</p>
<p>In short, XP is a framework designed to <strong>reduce risk in projects with vague or changing requirements</strong>, applying strong technical practices, fostering communication, and discarding what is not needed at the moment. In summary: <strong>prioritizing customer satisfaction and software quality</strong>.</p>
<p>That’s the <strong>quick overview</strong> for those who missed that class.</p>
<p><strong>But be careful</strong>: thinking XP is <em>just</em> this (a set of programming practices) is like reading the synopsis <em>“a dysfunctional family in space”</em> and assuming you understand the entire Star Wars saga.</p>
<p>What that summary misses is the real value for leadership. <strong>XP is a system for risk management and psychological safety</strong>.<br>
XP assumes that <strong>agility without technical excellence becomes a factory of technical debt</strong>. That’s why it builds an unbreakable safety net (through automated testing, continuous integration, and simple design) that allows teams to move fast, adapt constantly, and avoid burnout. It ensures that speed doesn’t destroy quality.</p>
<p>And it is precisely this <strong>tension between speed and safety</strong> that hits us today.</p>
<p>We now have in our hands the <strong>greatest productivity accelerator in history: Generative AI</strong>. But generating code at high speed is not the same as generating maintainable or correct code. <strong>How do we govern this new hyper-speed without burning out teams or breaking products?</strong> The answer has existed for nearly 30 years.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">A new hope: evolving XP with Artificial Intelligence</h2>
<p>How does this ’90s framework fit into the era of Generative AI? The reality is that AI doesn’t make XP obsolete—on the contrary, it turns it into the <strong>necessary safety structure</strong> for this new speed. We are witnessing the <strong>birth of AI-XP</strong> (<a href="https://dev.to/dev3l/ai-concept-to-code-integrating-ai-into-agile-development-5ai8" target="_blank">term used by Justin Beall</a>).</p>
<p>Integrating AI into XP is not just about using autocomplete tools—it’s a <strong>systemic evolution</strong> that enhances the framework’s original principles through three modern feedback loops:</p>
<ol>
<li><strong>VISION (Planning)</strong>. Where we once relied only on human intuition to define long-term goals, AI can now analyze market data to align XP’s “Planning Game” with real needs and trend predictions.</li>
<li><strong>ADAPT (Agile iterations)</strong>. AI acts as a facilitator that improves responsiveness. It can help break down complex user stories and ensure the team understands acceptance criteria, reducing ambiguity.</li>
<li><strong>LEAP (Daily execution)</strong>. This is where classic <em>Pair Programming</em> evolves into <strong>“AI Pairing.”</strong> AI handles routine and boilerplate code, allowing developers (the “navigator”) to focus on strategy, security, and design.</li>
</ol>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">The new paradoxes of AI-XP</h2>
<p>Before celebrating and assuming AI plus ’90s practices will solve everything, as technical and delivery leaders we must reflect. This new model introduces <strong>paradoxes and uncertainties</strong> that will reshape the rules:</p>
<ul>
<li><strong>The user story paradox</strong></li>
</ul>
<p>Originally, XP freed us from heavy requirement documents with lightweight user stories (promises of conversation). Today, to prevent AI from hallucinating, we need extremely rich context (detailed specs, exhaustive prompts, explicit business rules).<br>
<strong>Are we returning to hyper-documentation just so machines understand us?</strong> The challenge is ensuring AI requirements don’t kill human conversation.</p>
<ul>
<li><strong>Redefining the “hybrid team”</strong></li>
</ul>
<p>Not long ago, hybrid meant remote vs. in-office. Today, <strong>a modern hybrid team is composed of human minds and AI agents</strong>. Managing trust, expectations, and collaboration in this new setup is the real challenge for Agile Coaches.</p>
<ul>
<li><strong>AI as the new “knowledge silo”</strong></li>
</ul>
<p>XP promoted Collective Code Ownership to avoid knowledge concentration. But what happens if AI writes most of the code and developers only review it? We risk turning AI into a <strong>black box</strong>, losing deep understanding and control of our product.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Back to humanity</h2>
<p>In conclusion, <strong>AI has the potential to accelerate software delivery—but also to accelerate chaos without proper governance</strong>, and XP provides that governance.</p>
<p>By automating technical complexity and routine code generation, AI enables XP to fulfill its most ambitious promise: <strong>humanity and respect</strong>.</p>
<p>Freed from repetitive coding tasks, teams can focus on what XP has always valued most: <strong>creativity</strong>, <strong>complex problem-solving</strong>, and <strong>high-value human collaboration</strong>. XP is not dead—it has simply found, thirty years later, the perfect tool to reach its full potential.</p>

            ]]>
        </content:encoded>
    </item><item>
        <dc:creator>
            <![CDATA[ Raúl Martínez ]]>
        </dc:creator>
        <title>NLP pipeline for risk analysis: from unstructured documents to structured JSON</title>
        <link>https://en.paradigmadigital.com/dev/nlp-pipeline-risk-analysis-unstructured-documents-structured-json/</link>
        <pubDate>Thu, 16 Apr 2026 06:00:00 GMT</pubDate>
        <guid isPermaLink="true">https://en.paradigmadigital.com/dev/nlp-pipeline-risk-analysis-unstructured-documents-structured-json/</guid>
        <description>Critical information rarely comes in a SQL table. It arrives in PDFs, urgent emails, or chaotic reports. In this post, we show how an NLP pipeline can classify, extract context, and turn all of that into structured JSON for real-time risk analysis.
</description>
        <content:encoded>
            <![CDATA[
                <p>In the real world (especially in banking or insurance), <strong>critical information does not come in a SQL table</strong>. It comes in PDFs, urgent emails, or incident reports written in a messy way. If you have to analyze 2,000 of these per day, either you have an army of people reading them—or you build something automated.</p>
<p>In this post, I want to show you <strong>how I designed an NLP pipeline</strong> to tackle this problem. It’s not about “adding AI just for the sake of it,” but about creating a logical flow that classifies, extracts, and cleans the information for us. The best part is that, with a few tweaks, this approach works just as well for risk analysis as it does for classifying support tickets or legal contracts.</p>
<p>What we’re going to build does four key things:</p>
<ul>
<li><strong>Prioritizes</strong>. Is this a fire or a routine alert?</li>
<li><strong>Labels</strong>. Is it fraud, a technical failure, or a legal issue?</li>
<li><strong>Detects data</strong>. Extracts IPs, accounts, emails... the “hard” stuff.</li>
<li><strong>Understands context</strong>. Identifies which laws or companies are mentioned.</li>
</ul>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">The flow architecture (Keep it simple)</h2>
<p>There’s no need to overcomplicate things with huge models for everything. Here we combine <strong>business logic (rules)</strong> with <strong>statistical models</strong>.</p>
<p>The <strong>flow</strong> is:</p>
<p>Raw document → Urgency classifier → Risk labeling → Technical data extractor → NER (Entities) → <strong>Structured JSON</strong>.</p>
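<p>In code, that flow is an ordered composition of stages. The skeleton below uses trivial stand-in implementations just to show the shape of the pipeline, not the real logic of each stage:</p>
<pre><code class="language-python">import re

# Trivial stand-ins for the real stages described in this post.
def classify_urgency(text):
    return "high" if "critical" in text.lower() else "medium"

def label_risks(text):
    return [k for k in ("fraud", "failure", "regulation") if k in text.lower()]

def extract_indicators(text):
    return {"ipv4": re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", text)}

def extract_entities(text):
    return {"organizations": [o for o in ("cnmv", "ecb") if o in text.lower()]}

def analyze(document: str) -> dict:
    """Raw document in, structured JSON-ready dict out."""
    return {
        "criticality": classify_urgency(document),
        "categories": label_risks(document),
        "indicators": extract_indicators(document),
        "entities": extract_entities(document),
    }
</code></pre>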
<h2 class="block block-header h--h30-15-400 left  add-last-dot">A real case to test it</h2>
<p>To avoid staying in theory, let’s use this <strong>incident alert</strong> (I made it up, but it’s very close to what you’d find in a real system):</p>
<p><strong>OPERATIONAL RISK ALERT - ID-2026-00142</strong><br>
A critical failure has been detected in the transaction processing system... it affects credit cards. It started at 14:23 UTC and impacts around 15,000 customers.<br>
<strong>Data</strong>: IP 192.168.1.50 | Error: ERR-DB-TIMEOUT | TXNs: TXN-A7B3C9D2, TXN-F4E8A1B6.<br>
<strong>Warning</strong>: It exceeds the 0.5% MiFID II threshold. CNMV must be notified.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Classifying the fire: criticality</h2>
<p>The first step is to know whether we need to escalate immediately or if it can wait until tomorrow. For this, we use a <strong>weight-based system</strong>. If words like “penalty” or “fraud” appear, the score skyrockets.</p>
<pre><code class="language-python">class CriticalityClassifier:
    def __init__(self):
        # Define what keeps us up at night
        self.keywords_critico = {'fraud': 10, 'loss': 10, 'non-compliance': 10, 'fine': 10}
        self.keywords_alto = {'risk': 7, 'alert': 6, 'failure': 5}
        # ... (other levels)

    def _calculate_scores(self, text: str):
        text_lower = text.lower()
        # Add points based on matches
        return {
            'critical': sum(w for k, w in self.keywords_critico.items() if k in text_lower),
            'high': sum(w for k, w in self.keywords_alto.items() if k in text_lower),
            # ...
        }
</code></pre>
<p><strong>Result</strong>: in our example, it would yield a <strong>HIGH</strong> level with 57% confidence. Enough to trigger an automatic alert.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">What are we dealing with? (Categories)</h2>
<p>A document can belong to multiple categories at the same time. For example: <strong>technical</strong> (server outage) and <strong>compliance</strong> (regulatory breach). I used precompiled Regex patterns because, honestly, for specific keywords they are much faster and cheaper than a neural network.</p>
<pre><code class="language-python"># A small map of what to look for in each risk
self.category_keywords = {
    'operational': ['failure', 'error', 'outage', 'technical incident'],
    'compliance': ['regulation', 'penalty', 'cnmv', 'mifid'],
    'cybersecurity': ['attack', 'malware', 'hack', 'breach']
}
</code></pre>
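<p>“Precompiled” means building one alternation pattern per category at startup, so each document is classified with a handful of fast searches. A minimal sketch of that multi-label matcher:</p>
<pre><code class="language-python">import re

category_keywords = {
    "operational": ["failure", "error", "outage", "technical incident"],
    "compliance": ["regulation", "penalty", "cnmv", "mifid"],
    "cybersecurity": ["attack", "malware", "hack", "breach"],
}

# Compile once, reuse for every document.
category_patterns = {
    cat: re.compile(r"\b(?:" + "|".join(map(re.escape, words)) + r")\b", re.IGNORECASE)
    for cat, words in category_keywords.items()
}

def classify_categories(text: str) -> list:
    """Multi-label: one document can hit several categories at once."""
    return [cat for cat, pat in category_patterns.items() if pat.search(text)]
</code></pre>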
<h2 class="block block-header h--h30-15-400 left  ">Extraction of technical &quot;threads&quot;</h2>
<p>This is where <strong>NLP shines by saving time</strong>. Instead of having an analyst copy and paste IPs or error codes, we let regular expressions do the dirty work.</p>
<p>We extract simultaneously:</p>
<ul>
<li><strong>IPs</strong>: 192.168.1.50</li>
<li><strong>Transaction IDs</strong>: TXN-A7B3C9D2</li>
<li><strong>Emails</strong>: incidents@sample-bank.com</li>
</ul>
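<p>A sketch of that extractor; the patterns are deliberately simplified approximations (a production IPv4 pattern, for instance, would also validate octet ranges):</p>
<pre><code class="language-python">import re

INDICATOR_PATTERNS = {
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "transaction_id": re.compile(r"\bTXN-[A-Z0-9]{8}\b"),
    "error_code": re.compile(r"\bERR-[A-Z]+(?:-[A-Z]+)*\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def extract_indicators(text: str) -> dict:
    """One pass per pattern; no analyst copy-pasting involved."""
    return {name: pattern.findall(text) for name, pattern in INDICATOR_PATTERNS.items()}
</code></pre>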
<h2 class="block block-header h--h30-15-400 left  ">NER. Who’s who?</h2>
<p>To extract entities (organizations, laws, products), I use <strong>spaCy</strong>. It’s the Swiss Army knife for this. We add a custom dictionary because spaCy knows what a “person” is, but sometimes struggles to understand what “MiFID II” or “CNMV” are unless we explicitly tell it.</p>
<pre><code class="language-python"># We combine spaCy with our own domain-specific rules
self.custom_entities = {
    'regulations': ['mifid ii', 'gdpr', 'pci dss'],
    'organizations': ['cnmv', 'ecb', 'bank of spain']
}
</code></pre>
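<p>The dictionary layer itself needs nothing heavier than a lookup. Here is a library-free sketch of it (in the real pipeline it runs alongside spaCy’s statistical NER, which handles the generic entities):</p>
<pre><code class="language-python">custom_entities = {
    "regulations": ["mifid ii", "gdpr", "pci dss"],
    "organizations": ["cnmv", "ecb", "bank of spain"],
}

def match_custom_entities(text: str) -> dict:
    """Case-insensitive dictionary matching for domain terms that a
    general-purpose NER model tends to miss."""
    lowered = text.lower()
    return {
        label: [term for term in terms if term in lowered]
        for label, terms in custom_entities.items()
    }
</code></pre>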
<h2 class="block block-header h--h30-15-400 left  add-last-dot">The final result: JSON</h2>
<p>In the end, the pipeline outputs something as clean as this:</p>
<pre><code class="language-json">{
  &quot;criticality&quot;: &quot;high&quot;,
  &quot;categories&quot;: [&quot;operational&quot;, &quot;compliance&quot;, &quot;technical&quot;],
  &quot;indicators&quot;: {
    &quot;ipv4&quot;: [&quot;192.168.1.50&quot;],
    &quot;error_code&quot;: [&quot;ERR-DB-TIMEOUT&quot;]
  },
  &quot;entities&quot;: {
    &quot;regulations&quot;: [&quot;MiFID II&quot;],
    &quot;organizations&quot;: [&quot;CNMV&quot;]
  }
}
</code></pre>
<h2 class="block block-header h--h30-15-400 left  ">Does this scale?</h2>
<p>If you run it sequentially, it might put you to sleep. But using ThreadPoolExecutor in Python, I’ve managed to process around <strong>1,500 documents per minute</strong>. For most companies, that’s more than enough to handle the entire incoming flow in real time.</p>
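<p>Each document is independent, so parallelizing is straightforward. A minimal sketch of that batch runner, with a stub in place of the full pipeline:</p>
<pre><code class="language-python">from concurrent.futures import ThreadPoolExecutor

def analyze(document: str) -> dict:
    # Stub for the full pipeline (classification, extraction, NER).
    return {"critical": "critical" in document.lower()}

def analyze_batch(documents, max_workers=8):
    """executor.map runs documents concurrently and preserves input order."""
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        return list(executor.map(analyze, documents))
</code></pre>
<p>One caveat: for purely CPU-bound regex work the GIL limits thread speedups, so if profiling shows threads are not enough, a <code>ProcessPoolExecutor</code> is the usual drop-in alternative.</p>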
<p><strong>Real metrics:</strong></p>
<ul>
<li><strong>Latency</strong>: 38 ms (blazing fast).</li>
<li><strong>Accuracy</strong>: ~95% (the remaining 5% is usually very ambiguous language that requires human judgment).</li>
</ul>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Conclusion</h2>
<p>Automating this is not just about cost savings—it’s about <strong>reaction time</strong>. If a system detects a regulatory breach in 40 milliseconds, you can mitigate the risk before it turns into a million-dollar fine.</p>
<p>What do you think? Would you introduce heavier models like Transformers (BERT/LLMs), or do you believe this hybrid approach is more stable for risk classification? Let me know in the comments.</p>

            ]]>
        </content:encoded>
    </item><item>
        <dc:creator>
            <![CDATA[ Cristina Redondo ]]>
        </dc:creator>
        <title>The importance of warm data in change and optimization processes</title>
        <link>https://en.paradigmadigital.com/organizational-transformation-rev/importance-warm-data-change-optimization-processes/</link>
        <pubDate>Tue, 14 Apr 2026 06:00:00 GMT</pubDate>
        <guid isPermaLink="true">https://en.paradigmadigital.com/organizational-transformation-rev/importance-warm-data-change-optimization-processes/</guid>
        <description>KPIs explain what is happening in a process, but not why it is happening. To understand that, we need warm data: relationships, trust, culture, and context. Without them, any optimization stays superficial. In this post, we show you how to balance them with cold data.
</description>
        <content:encoded>
            <![CDATA[
                <p>In the age of AI, automation, and the obsession with what can be quantified, we still face a fundamental unresolved problem: <strong>the models we build do not explain the reality we experience</strong>. Why? Because representing data and processes is not enough—they fail to capture the complexity of <strong>living systems and interactions</strong>.</p>
<p><strong>One of Sociology’s core pursuits is precisely this: getting closer to an accurate representation of reality</strong>, and therefore of complexity (<strong>because reality is always complex</strong>).</p>
<p>This is where <strong>warm data</strong> comes into play: relational and transcontextual information that allows us to understand how a system is sustained (or blocked) beyond KPIs.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Cold data vs. warm data (a useful distinction)</h2>
<ul>
<li><strong>Cold data</strong> (Hard, Cold Data): what is measurable and comparable. Time, costs, throughput, errors, queues, deviations.</li>
<li><strong>Warm data</strong>: what connects and gives meaning to the above. Trust, implicit norms, tensions, commitments, “what is left unsaid,” fears, shared stories, social and cultural context—what Sociology refers to as “interactions.”</li>
</ul>
<p>Both are necessary. The problem begins when we try to optimize only what can be “easily” measured.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Three theses for viewing organizations as living systems</h2>
<p>This topic was a fundamental part of the work of <strong>Gregory Bateson</strong>, philosopher, sociologist, and cyberneticist, and later of his daughters Mary Catherine and <a href="https://neurocognitiveacademy.org/nora-bateson/" target="_blank">Nora Bateson</a>. The <a href="https://batesoninstitute.org/" target="_blank">Bateson family’s theses</a> on what to consider when analyzing ecosystems such as organizations are as follows:</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">1 <span class="enum-header"></span> The transcontextuality thesis</h3>
<p><strong><em>“No process exists in a single context.”</em></strong></p>
<p>In traditional process design (<a href="https://en.wikipedia.org/wiki/Business_Process_Model_and_Notation" target="_blank">Business Process Model and Notation</a> - BPMN), we tend to isolate processes (e.g., “credit approval”). For the Batesons, this is a <strong>functional simplification error</strong>. Warm data provides that <strong>transcontextual information</strong>: the approval process is “marinated” in economic context, workplace climate, the employee’s personal life, and the company’s technological culture.</p>
<p><strong>Application in processes</strong></p>
<p>Process Mining may detect technical inefficiencies such as delays (cold data). Warm data may explain that the delay occurs because the employee prioritizes the client relationship over system metrics to avoid cultural conflict.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">2 <span class="enum-header"></span> The relationality thesis</h3>
<p><strong><em>“Intelligence is not in the parts, but in the relationships between them.”</em></strong></p>
<p>Gregory Bateson argued that to understand a system, we must look not at the subjects but at the messages they exchange. Nora extends this idea, stating that warm data describes the <strong>interdependencies</strong> that keep the system alive.</p>
<p>Cold data measures silo performance; warm data measures the health of the connections between silos.</p>
<p><strong>Application in processes</strong></p>
<p>In a DTO (Digital Twin of the Organization), it is not enough to model tasks. You must also model “adhesion” and trust in the process. A process may be technically optimal but relationally toxic, ensuring long-term failure.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">3 <span class="enum-header"></span> The coalescence thesis</h3>
<p><strong><em>“Real changes in complex systems are invisible processes of coalescence.”</em></strong></p>
<p>This thesis states that before a change becomes visible in a KPI (cold data), an “<a href="https://journals.isss.org/index.php/jisss/article/view/3887" target="_blank">Aphanipoiesis</a>” occurs: an accumulation of small variations in relationships and perceptions (warm data), subtle but system-shaping. They “coalesce” (like raindrops merging into a larger drop). If we only manage what is measured (cold data) and ignore this coalescence, we will react too late to crises or opportunities.</p>
<p><strong>Application in processes</strong></p>
<p>For example, in their work, an Agile Coach does not only look for “burn rate velocity,” but for changes in team communication that precede efficiency gains. Before a team becomes formally “agile” (events, artifacts… cold data), there is a “coalescence of trust, shared language, and tacit rule understanding.” Without seeing this, a consultant risks deploying processes on unstable ground.</p>
<p>Warm data enables <strong>proactive orchestration</strong>, detecting system fatigue before logs reflect performance drops.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Warm data in a process: a practical example enriching VSM</h2>
<p>Imagine you <strong>map a process using a Value Stream Map</strong>: you extract a SIPOC matrix, measure activity times, classify inefficiencies, and validate hypotheses. You detect that an administrative process has <strong>30% waste in waiting time</strong>. Cold data may suggest automation, but warm data may reveal that delays stem from lack of trust between teams, fear of mistakes, underused tools, or implicit norms slowing agility.</p>
<p><strong>Without relational data, any optimization is superficial—and often counterproductive.</strong></p>
<p>For an optimization expert, ignoring warm data is like trying to understand a forest by analyzing only the wood (cold data), without considering root symbiosis and climate (warm data).</p>
<p><strong>A comprehensive orchestration in 2026 requires both</strong>. This approach can mean the difference between a solution that works on paper and a real, sustainable transformation.</p>
<p>Warm data has traditionally been dismissed as “soft,” subjective, or unscientific. Relational sociology proves otherwise. It captures <strong>relationships</strong>, not isolated variables. Combined with cold data (costs, time, performance), it helps explain <strong>how human and cultural dynamics shape those numbers</strong>, including interactions with processes and machines—as anticipated by Lean TPS autonomation (<a href="https://www.paradigmadigital.com/rev/lean-it-eficiencia-optimizacion/" target="_blank">Jidoka</a>).</p>
<h3 class="block block-header h--h20-175-500 left  ">How do we integrate warm data into process analysis?</h3>
<p>Detecting and measuring warm data is challenging, which is why it is often ignored. As a starting point:</p>
<ul>
<li><strong>Observe interactions, not just outcomes</strong>. How are decisions made? What conversations never happen? Why are certain tools chosen? How do teams relate? Are there implicit rules?</li>
<li><strong>Analyze behavioral patterns, not just outputs</strong>. Why do inefficiencies persist despite good solutions? Are we automating hidden inefficiencies?</li>
<li><strong>Include multiple perspectives</strong>. Operational data is not enough; we must understand lived experiences, emotions, and leadership dynamics. <em>#diversitymatters</em></li>
<li>Finally, <strong>challenge your own optimization efforts</strong>. Does optimization reduce cognitive load or just redistribute it? Is change adopted or merely tolerated?</li>
</ul>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Warm data and agility: a natural connection</h2>
<p>Agile and Lean frameworks emphasize adaptation and iterative improvement. <strong>Defining a framework is defining a way of working together</strong>, and agility has invested heavily in this. However, without understanding invisible relationships and contexts, improvement remains superficial.</p>
<p>Warm data reveals resistance to change, cultural values, and systemic impacts of interventions.</p>
<p>At <strong>Paradigma Lean Papers</strong>, we have discussed the <a href="https://www.paradigmadigital.com/rev/kata-ba-evolucion-lean-gestion-conocimiento/" target="_blank">importance of BA (context)</a> to understand systems, daily team life, and the observation of Gen-ba as a living ecosystem.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Conclusion</h2>
<p>In a world obsessed with numerical precision, we must remember that organizations and their processes are living systems.</p>
<p><strong>If you work in optimization, remember: complexity is an experience. If your models do not explain the reality teams live, you are not optimizing—you are dangerously simplifying.</strong></p>
<p>How do you integrate relational data into your agile projects? I’ll read your comments 👇</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">References</h3>
<ul>
<li><a href="https://neurocognitiveacademy.org/nora-bateson/" target="_blank">Nora Bateson, Neurocognitive Academy</a></li>
<li><a href="https://batesoninstitute.org/" target="_blank">Bateson Institute</a></li>
<li><a href="https://www.warmdata.life/international-bateson-institute" target="_blank">Warm Data Labs, Bateson Institute</a></li>
<li><a href="https://journals.isss.org/index.php/jisss/article/view/3887" target="_blank">“Aphanipoiesis”, Nora Bateson</a></li>
</ul>

            ]]>
        </content:encoded>
    </item><item>
        <dc:creator>
            <![CDATA[ Nacho Badenes ]]>
        </dc:creator>
        <title>Purpose-Driven Technology Platforms: Governance, Security, and Digital Resilience</title>
        <link>https://en.paradigmadigital.com/techbiz/purpose-driven-technology-platforms-governance-security-digital-resilience/</link>
        <pubDate>Thu, 09 Apr 2026 06:00:00 GMT</pubDate>
        <guid isPermaLink="true">https://en.paradigmadigital.com/techbiz/purpose-driven-technology-platforms-governance-security-digital-resilience/</guid>
        <description>A purpose-driven technology platform doesn’t end with architecture. Data governance, cybersecurity, traceability, and software efficiency are key decisions to truly align technology, business, and ESG criteria.
</description>
        <content:encoded>
            <![CDATA[
                <p>In the previous post, we explored in depth <a href="https://en.paradigmadigital.com/techbiz/purpose-driven-technology-platforms-aligning-technology-business-sustainability/" target="_blank">how decisions around cloud, AI, inclusive design, and interoperability lay the foundations for a purpose-driven architecture</a>. However, for a platform to truly have purpose and align with ESG principles, it’s not enough to focus on how it is built; it must also be <strong>managed and protected</strong> with the same level of awareness.</p>
<p>Today, we complete our <strong>map of 8 key technology decisions</strong> by analyzing the final four pillars that ensure the integrity, security, and long-term impact of our digital ecosystem.</p>
<figure class="block block-caption  -inline-block -like-text-width -center"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/esg_technological_decision_map_b2b84140e8.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/esg_technological_decision_map_b2b84140e8.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/esg_technological_decision_map_b2b84140e8.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/esg_technological_decision_map_b2b84140e8.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/esg_technological_decision_map_b2b84140e8.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="ESG technology decision map" title="undefined"/><figcaption>ESG technology decision map</figcaption></figure>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">5 <span class="enum-header"></span> Data governance and traceability</h2>
<p>This is the engine that ensures data remains a trustworthy asset, directly impacting the company’s <strong>Governance (G)</strong>. Reliable data doesn’t just ensure compliance—it accelerates decision-making and builds market trust.</p>
<ul>
<li><strong>Key selection factors</strong></li>
</ul>
<p>Data must have a clear identity and ownership. It is essential to choose technologies that guarantee <strong>end-to-end traceability</strong> for critical data and provide governance capabilities that ensure quality and integrity.</p>
<ul>
<li><strong>Key usage factors</strong></li>
</ul>
<p>Transparency is non-negotiable. It requires clearly defined <strong>access and retention policies</strong>, along with continuous security monitoring and audit processes for responsible usage.</p>
<ul>
<li><strong>Organizational impact</strong></li>
</ul>
<p>Faster decisions driven by trusted data. The result is strong regulatory compliance and <strong>support for ESG transparency</strong> through verifiable and auditable reporting.</p>
<figure class="block block-caption  -inline-block -like-text-width -center"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/data_governance_traceability_8e21dee232.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/data_governance_traceability_8e21dee232.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/data_governance_traceability_8e21dee232.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/data_governance_traceability_8e21dee232.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/data_governance_traceability_8e21dee232.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="Data governance and traceability" title="undefined"/><figcaption>Data governance and traceability</figcaption></figure>
<blockquote class="block-blockquote -like-cms-text-width"><p><em>“<strong>Trusted data</strong> doesn’t just ensure compliance: it accelerates decisions, builds market confidence, and opens the door to <strong>new opportunities</strong>.”</em></p>
</blockquote>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">6 <span class="enum-header"></span> Cybersecurity and resilience</h2>
<p>Security is not a cost; it is the insurance policy that protects business continuity and value, making it a critical pillar of <strong>Governance (G)</strong> and <strong>Social Responsibility (S)</strong>.</p>
<ul>
<li><strong>Key selection factors</strong></li>
</ul>
<p>Prepare for “when,” not “if.” We must evaluate <strong>incident response and recovery capabilities</strong>, ensuring compliance with standards such as <strong>ISO 27001, NIS2, or DORA</strong>.</p>
<ul>
<li><strong>Key usage factors</strong></li>
</ul>
<p>No one enters without permission. Implement <strong>Zero Trust-based access management</strong> and foster a security culture through continuous training and strict protocols.</p>
<ul>
<li><strong>Organizational impact</strong></li>
</ul>
<p>Peace of mind knowing your business is resilient. It minimizes financial risks from <strong>cyberattacks</strong>, ensures <strong>operational continuity</strong>, and strengthens trust among customers and investors.</p>
<figure class="block block-caption  -inline-block -like-text-width -center"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/cybersecurity_resilience_e5f78b2756.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/cybersecurity_resilience_e5f78b2756.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/cybersecurity_resilience_e5f78b2756.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/cybersecurity_resilience_e5f78b2756.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/cybersecurity_resilience_e5f78b2756.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="Cybersecurity and resilience" title="undefined"/><figcaption>Cybersecurity and resilience</figcaption></figure>
<blockquote class="block-blockquote -like-cms-text-width"><p><em>“Security is no longer a cost: it is the insurance policy that protects <strong>continuity, reputation, and business value</strong>.”</em></p>
</blockquote>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">7 <span class="enum-header"></span> ESG integration in the supply chain</h2>
<p>Our technological responsibility extends to every partner; a supply chain aligned with <strong>Environmental (E), Social (S), and Governance (G)</strong> criteria is more competitive and reliable.</p>
<ul>
<li><strong>Key selection factors</strong></li>
</ul>
<p>Tell me who you work with, and I’ll tell you who you are. It is essential to adopt platforms that enable <strong>supplier traceability</strong> and integrate with ESG risk management systems.</p>
<ul>
<li><strong>Key usage factors</strong></li>
</ul>
<p>Your suppliers’ commitment is your own commitment. It requires continuous monitoring of their <strong>ESG performance</strong> and clear criteria for <strong>selection and retention</strong>.</p>
<ul>
<li><strong>Organizational impact</strong></li>
</ul>
<p>A “clean” platform end to end. It reduces <strong>reputational risks</strong> across the value chain and ensures compliance with governance best practices.</p>
<figure class="block block-caption  -inline-block -like-text-width -center"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/esg_integration_supply_chain_fcb00ca9db.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/esg_integration_supply_chain_fcb00ca9db.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/esg_integration_supply_chain_fcb00ca9db.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/esg_integration_supply_chain_fcb00ca9db.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/esg_integration_supply_chain_fcb00ca9db.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="ESG integration in the supply chain" title="undefined"/><figcaption>ESG integration in the supply chain</figcaption></figure>
<blockquote class="block-blockquote -like-cms-text-width"><p><em>“An ESG-aligned supply chain is <strong>more competitive</strong>, more reliable, and more attractive to customers and investors.”</em></p>
</blockquote>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">8 <span class="enum-header"></span> Software lifecycle optimization</h2>
<p>Efficient code means lower costs and faster delivery; it contributes positively to the <strong>Environmental (E)</strong> pillar by optimizing resource consumption.</p>
<ul>
<li><strong>Key selection factors</strong></li>
</ul>
<p>Build today with tomorrow in mind. Choose frameworks that facilitate maintainability and scalability, while supporting energy efficiency metrics.</p>
<ul>
<li><strong>Key usage factors</strong></li>
</ul>
<p>Deploy fast, but thoughtfully. Implement <strong>CI/CD processes</strong> for agile delivery and apply refactoring strategies to eliminate obsolete code that consumes unnecessary resources.</p>
<ul>
<li><strong>Organizational impact</strong></li>
</ul>
<p>Sustainability translated into profitability. It reduces operational costs, improves delivery quality, and extends the lifespan of technological assets.</p>
<figure class="block block-caption  -inline-block -like-text-width -center"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/software_lifecycle_optimization_1086f3863f.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/software_lifecycle_optimization_1086f3863f.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/software_lifecycle_optimization_1086f3863f.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/software_lifecycle_optimization_1086f3863f.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/software_lifecycle_optimization_1086f3863f.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="Software lifecycle optimization" title="undefined"/><figcaption>Software lifecycle optimization</figcaption></figure>
<blockquote class="block-blockquote -like-cms-text-width"><p><em>“<strong>Efficient code</strong> means lower costs, faster delivery, and development that leaves a… <strong>positive footprint</strong>.”</em></p>
</blockquote>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Conclusion: choosing technology is choosing the future</h2>
<p>As I shared at the <strong>Smart Energy Congress 2025</strong>, choosing technology is choosing the future. By integrating these 8 decision points into our strategy, we transform technology from a support function into a driver that aligns business with social, environmental, and governance challenges.</p>
<ul>
<li><strong>Enabler of real impact</strong>: technology drives everything from energy efficiency to digital inclusion.</li>
<li><strong>Competitive advantage</strong>: conscious decisions in architecture, data, security, AI, and accessibility enable new business models.</li>
<li><strong>Investment in resilience</strong>: purpose-driven design is not a cost, but an investment that fosters innovation and trust.</li>
<li><strong>Engine of transformation</strong>: platforms don’t just support the business—they connect it to a positive global impact.</li>
</ul>
<p><em><strong>&quot;True technological disruption lies not in what machines can do, but in the purpose we choose when designing them.&quot;</strong></em></p>
<p>After everything shared in this series of three posts on technology platforms, <strong>what will you do starting today?</strong> I’ll read your comments.</p>

            ]]>
        </content:encoded>
    </item><item>
        <dc:creator>
            <![CDATA[ Simón Rodríguez ]]>
        </dc:creator>
        <title>Running LLMs Locally: Docker</title>
        <link>https://en.paradigmadigital.com/dev/running-llms-locally-docker/</link>
        <pubDate>Tue, 07 Apr 2026 06:00:00 GMT</pubDate>
        <guid isPermaLink="true">https://en.paradigmadigital.com/dev/running-llms-locally-docker/</guid>
        <description>With the launch of Model Runner, Docker officially enters the world of generative AI, enabling you to run models locally, integrate them into pipelines, and deploy them as just another service within your stack. In this post, we’ll show you how.
</description>
        <content:encoded>
            <![CDATA[
                <p>Closing this series of posts on running LLMs locally, we now arrive at <strong>one of the latest players to join the trend of local LLM/AI execution: Docker!</strong></p>
<p>Being one of the latest to arrive does not mean it should be overlooked: Docker has a track record as a true <em>game-changer</em>, having revolutionized the development world with transparent, portable application execution.</p>
<p><a href="https://docs.docker.com/ai/model-runner/" target="_blank">Model Runner</a> is the new tool that <a href="https://www.docker.com/blog/announcing-docker-model-runner-ga/" target="_blank">Docker has released</a> for running AI models locally, and in this article we will explore its main features.</p>
<p>As a quick reminder, in case you missed any of the previous articles, you can check out the rest of the <strong>local LLM execution series</strong> here:</p>
<ul>
<li><a href="https://en.paradigmadigital.com/dev/running-llms-locally-getting-started-ollama/" target="_blank">Running LLMs locally: getting started with Ollama</a></li>
<li><a href="https://en.paradigmadigital.com/dev/running-llms-locally-getting-started-ollama/" target="_blank">Running LLMs locally: advanced Ollama</a></li>
<li><a href="https://en.paradigmadigital.com/dev/running-llms-locally-lm-studio/" target="_blank">Running LLMs locally: LM Studio</a></li>
<li><a href="https://en.paradigmadigital.com/dev/running-llms-locally-llamafile/" target="_blank">Running LLMs locally with Llamafile</a></li>
</ul>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">How it works and key features</h2>
<p><strong>Docker Model Runner enables AI model execution by embedding an inference engine</strong> (built on top of the llama.cpp library) <strong>as part of the Docker runtime environment</strong>. At a high level, the <strong>architecture is composed of three main components</strong>:</p>
<figure class="block block-caption  -inline-block -like-text-width -center"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/arquitectura_docker_model_runner_54c4c99cfb.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/arquitectura_docker_model_runner_54c4c99cfb.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/arquitectura_docker_model_runner_54c4c99cfb.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/arquitectura_docker_model_runner_54c4c99cfb.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/arquitectura_docker_model_runner_54c4c99cfb.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="Docker Model Runner architecture" title="undefined"/><figcaption>Docker Model Runner architecture</figcaption></figure>
<ul>
<li><strong>Model distribution</strong> (model storage and client): the <em>model store</em> is the core component of the architecture, where tensor files are stored. The <em>client</em> performs operations (such as downloading) against <a href="https://opencontainers.org/" target="_blank">OCI</a> registries.</li>
<li><strong>Model Runner</strong>: maps API requests to processes that run inference engines (/engines) and models (/models). It includes components such as the <em>scheduler, loader</em>, and <em>runner</em>, which coordinate loading and unloading models from memory (both inference engines and models operate as ephemeral processes). For each combination of inference engine (e.g., llama.cpp) and model (e.g., ai/llama3.2:3B-Q4_0), a separate process is executed depending on incoming API requests.</li>
<li><strong>Model CLI</strong>: the main user interaction component. This is a Docker CLI plugin that provides an interface similar to running Docker images. Under the hood, the CLI communicates with the Model Runner API to execute most operations.</li>
</ul>
<p>An important note is that, although the overall architecture remains the same, <strong>depending on the platform where it is deployed, these three components are packaged, stored, and executed differently</strong> (sometimes on the host, sometimes in a virtual machine, and sometimes inside a container).</p>
<p>Some of the <strong>main features of Docker Model Runner</strong> include:</p>
<ul>
<li>Ability to <strong>download and upload</strong> models to/from <a href="https://hub.docker.com/u/ai" target="_blank">Docker Hub</a>.</li>
<li><strong>Model execution</strong> via endpoints compatible with the OpenAI API.</li>
<li><a href="https://www.docker.com/blog/oci-artifacts-for-ai-model-packaging/" target="_blank">Packaging GGUF files as OCI artifacts</a> to publish them in any container registry.</li>
<li><strong>Running and interacting</strong> with models directly from the command line.</li>
<li><strong>Managing</strong> local models.</li>
<li>Defining <strong>input prompt details</strong> as well as model responses.</li>
<li><strong>Support</strong> for multi-turn interactions (chat).</li>
</ul>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Installation</h2>
<p><strong>Model Runner is available for major operating systems</strong> (Windows, macOS, and Linux), either through <a href="https://docs.docker.com/ai/model-runner/get-started/#docker-desktop" target="_blank">Docker Desktop</a> or <a href="https://docs.docker.com/ai/model-runner/get-started/#docker-engine" target="_blank">Docker Engine</a>. In this article, we will run Docker Model Runner on <strong>Ubuntu</strong> using <strong>Docker Engine</strong>.</p>
<p>After installing <a href="https://docs.docker.com/engine/install/" target="_blank">Docker Engine</a> if necessary, you can proceed to install Model Runner by executing the following command:</p>
<pre><code class="language-none">sudo apt-get install docker-model-plugin
</code></pre>
<p>You can verify the installation with the following command:</p>
<pre><code class="language-none">docker model version
</code></pre>
<figure class="block block-caption  -inline-block -like-text-width -center"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/comprobacion_instalacion_model_runner_7fd9b2d8ab.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/comprobacion_instalacion_model_runner_7fd9b2d8ab.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/comprobacion_instalacion_model_runner_7fd9b2d8ab.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/comprobacion_instalacion_model_runner_7fd9b2d8ab.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/comprobacion_instalacion_model_runner_7fd9b2d8ab.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="Model Runner installation verification" title="undefined"/><figcaption>Model Runner installation verification</figcaption></figure>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">CLI Commands</h2>
<p>Once Docker Model Runner is installed, you can <strong>interact with models</strong> using the following commands:</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">1 <span class="enum-header"></span> INSPECT</h3>
<p>This command displays <strong>detailed information about a model</strong>.</p>
<pre><code class="language-none">docker model inspect ai/llama3.2:3B-Q4_0

docker model inspect ai/llama3.2:3B-Q4_0 --openai #Display the information in OpenAI format
</code></pre>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/comandos_cli_inspect_1537dbd737.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/comandos_cli_inspect_1537dbd737.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/comandos_cli_inspect_1537dbd737.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/comandos_cli_inspect_1537dbd737.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/comandos_cli_inspect_1537dbd737.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="CLI Commands: inspect docker model" title="Inspect"/></article>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">2 <span class="enum-header"></span> LIST</h3>
<p>Command to <strong>list the models</strong> downloaded to the local environment.</p>
<pre><code class="language-none">docker model list

docker model list --json #List the models in JSON format

docker model list --openai #List the models in OpenAI format

docker model list --quiet #Show only the model IDs
</code></pre>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/comandos_cli_list_cd905f5b70.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/comandos_cli_list_cd905f5b70.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/comandos_cli_list_cd905f5b70.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/comandos_cli_list_cd905f5b70.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/comandos_cli_list_cd905f5b70.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="CLI commands: list" title="List"/></article>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">3 <span class="enum-header"></span> LOGS</h3>
<p>Command to <strong>display</strong> logs.</p>
<pre><code class="language-none">docker model logs

docker model logs --follow #View logs in real time
</code></pre>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/comandos_cli_logs_1b567aa4b5.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/comandos_cli_logs_1b567aa4b5.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/comandos_cli_logs_1b567aa4b5.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/comandos_cli_logs_1b567aa4b5.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/comandos_cli_logs_1b567aa4b5.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="Cli Commands: logs" title="Logs"/></article>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">4 <span class="enum-header"></span> PACKAGE</h3>
<p>Command to <strong>package a file in GGUF format</strong> into a Docker Model OCI artifact.</p>
<pre><code class="language-none">docker model package --gguf &lt;path&gt; [--license &lt;path&gt;...] [--context-size &lt;tokens&gt;] [--push] MODEL

docker model package --gguf /home/simonrodriguez/dockerModelRunner/model.gguf my_new_llama_model
</code></pre>
<p>The <strong>available options</strong> for this command are:</p>
<ul>
<li><strong>--chat-template</strong>: absolute path to the chat template file (the template must be in Jinja format).</li>
<li><strong>--context-size</strong>: size of the context window.</li>
<li><strong>--gguf</strong> (required): absolute path to the file in GGUF format.</li>
<li><strong>--license</strong>: absolute path to the license file.</li>
<li><strong>--push</strong>: upload to the registry.</li>
</ul>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/comando_cli_package_b70c56f82c.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/comando_cli_package_b70c56f82c.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/comando_cli_package_b70c56f82c.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/comando_cli_package_b70c56f82c.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/comando_cli_package_b70c56f82c.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="CLI command: package" title="Package"/></article>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">5 <span class="enum-header"></span> PULL</h3>
<p>Command to <strong>download a model</strong> from <a href="https://hub.docker.com/u/ai" target="_blank">Docker Hub</a> or <a href="https://huggingface.co/models?library=gguf" target="_blank">Hugging Face</a>.</p>
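<p>As a quick illustration, the command follows the same pattern as the rest of the CLI. The model references below are examples (the <code>hf.co/</code> prefix is the form Docker’s documentation uses for Hugging Face pulls); substitute whatever model you actually want:</p>
<pre><code class="language-none">docker model pull ai/llama3.2:3B-Q4_0 #Download a model from Docker Hub

docker model pull hf.co/bartowski/Llama-3.2-3B-Instruct-GGUF #Download a GGUF model from Hugging Face
</code></pre>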
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/comandos_cli_pull_4cf9846bc6.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/comandos_cli_pull_4cf9846bc6.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/comandos_cli_pull_4cf9846bc6.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/comandos_cli_pull_4cf9846bc6.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/comandos_cli_pull_4cf9846bc6.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="CLI commands: pull" title="Pull"/></article>
<p>When downloading from Hugging Face, <strong>if no tag is specified</strong>, it will attempt to download the <strong><em>Q4_K_M</em></strong> version of the model. If this version does not exist, it will download the <strong>first GGUF file found in the model’s <em>Files</em> section</strong> on Hugging Face. To specify the model quantization, you simply need to <strong>add the corresponding tag</strong>.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">6 <span class="enum-header"></span> PUSH</h3>
<p>Command to <strong>upload a model</strong> to Docker Hub.</p>
<pre><code class="language-none">docker model push ai/llama3.3
</code></pre>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">7 <span class="enum-header"></span> RM</h3>
<p>Command to <strong>delete</strong> local models.</p>
<pre><code class="language-none">docker model rm ai/llama3.2:3B-Q4_0

docker model rm ai/llama3.2:3B-Q4_0 --force #Force model deletion
</code></pre>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/comandos_cli_rm_5d2619a29f.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/comandos_cli_rm_5d2619a29f.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/comandos_cli_rm_5d2619a29f.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/comandos_cli_rm_5d2619a29f.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/comandos_cli_rm_5d2619a29f.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="comandos cli: RM" title="RM"/></article>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">8 <span class="enum-header"></span> RUN</h3>
<p>Command to <strong>run a model and interact with it</strong> by sending a prompt or via chat mode.</p>
<pre><code class="language-none">docker model run ai/llama3.2:3B-Q4_0 #A prompt opens for an interactive chat, which you can exit with the command /bye

docker model run ai/llama3.2:3B-Q4_0 "Hello, what can you tell me about Docker Model Runner?"

docker model run ai/llama3.2:3B-Q4_0 --debug #Enables debug mode

docker model run ai/llama3.2:3B-Q4_0 --ignore-runtime-memory-check #Option to prevent the download from being blocked if the model is estimated to exceed system memory
</code></pre>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/comando_cli_run_1_a0832a9dba.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/comando_cli_run_1_a0832a9dba.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/comando_cli_run_1_a0832a9dba.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/comando_cli_run_1_a0832a9dba.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/comando_cli_run_1_a0832a9dba.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="CLI command: run" title="Run"/></article>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/comando_cli_run_2_0ce0199301.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/comando_cli_run_2_0ce0199301.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/comando_cli_run_2_0ce0199301.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/comando_cli_run_2_0ce0199301.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/comando_cli_run_2_0ce0199301.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="CLI command: run" title="Run"/></article>
<p>When a Docker model is executed, it calls the API endpoint of the inference server hosted by Model Runner. The model will remain in memory until another model is loaded or the inactivity timeout is reached.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">9 <span class="enum-header"></span> PS</h3>
<p>Command that <strong>displays the models</strong> currently running.</p>
<pre><code class="language-none">docker model ps
</code></pre>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/comando_cli_ps_c235a35577.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/comando_cli_ps_c235a35577.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/comando_cli_ps_c235a35577.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/comando_cli_ps_c235a35577.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/comando_cli_ps_c235a35577.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="CLI command: ps" title="PS"/></article>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">10 <span class="enum-header"></span> UNLOAD</h3>
<p>Command to <strong>unload a running model</strong>.</p>
<pre><code class="language-none">docker model unload ai/llama3.2:3B-Q4_0
</code></pre>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/comandos_cli_unload_be1e8a6d63.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/comandos_cli_unload_be1e8a6d63.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/comandos_cli_unload_be1e8a6d63.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/comandos_cli_unload_be1e8a6d63.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/comandos_cli_unload_be1e8a6d63.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="CLI command: unload" title="Unload"/></article>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">11 <span class="enum-header"></span> DF</h3>
<p>Command that displays the <strong>disk space</strong> occupied by the models.</p>
<pre><code class="language-none">docker model df
</code></pre>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/comando_cli_df_6c72548655.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/comando_cli_df_6c72548655.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/comando_cli_df_6c72548655.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/comando_cli_df_6c72548655.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/comando_cli_df_6c72548655.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="CLI command: df" title="DF"/></article>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">12 <span class="enum-header"></span> STATUS</h3>
<p>Command to <strong>check</strong> if Docker Model Runner is running.</p>
<pre><code class="language-none">docker model status

docker model status --json #Display the information in JSON format
</code></pre>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/comando_cli_status_1cddd8195e.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/comando_cli_status_1cddd8195e.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/comando_cli_status_1cddd8195e.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/comando_cli_status_1cddd8195e.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/comando_cli_status_1cddd8195e.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="CLI command: status" title="Status"/></article>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">13 <span class="enum-header"></span> TAG</h3>
<p>Command to <strong>create a specific tag</strong> for a model.</p>
<pre><code class="language-none">docker model tag ai/llama3.2:3B-Q4_0 quantized-model
</code></pre>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/comando_cli_tag_932b73aab5.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/comando_cli_tag_932b73aab5.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/comando_cli_tag_932b73aab5.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/comando_cli_tag_932b73aab5.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/comando_cli_tag_932b73aab5.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="cli-command: tag" title="TAG"/></article>
<p>If the tag is not specified, the default value is <em>latest</em>.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">14 <span class="enum-header"></span> VERSION</h3>
<p>Command to <strong>check which version</strong> of Docker Model Runner is installed on the system.</p>
<pre><code class="language-none">docker model version
</code></pre>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/comandos_cli_version_71473b2cc3.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/comandos_cli_version_71473b2cc3.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/comandos_cli_version_71473b2cc3.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/comandos_cli_version_71473b2cc3.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/comandos_cli_version_71473b2cc3.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="CLI command: version" title="Version"/></article>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">API</h2>
<p>Once Model Runner is enabled, <strong>API endpoints are automatically exposed</strong> (both native Docker Model Runner endpoints and OpenAI-compatible endpoints), which can be used to <strong>interact</strong> with models programmatically.</p>
<p><strong>When making requests to the exposed API, it is important to consider the origin of the request</strong>:</p>
<ul>
<li><strong>From other containers</strong>: send requests to http://172.17.0.1:12434/. This interface may not always be available for calls from containers. If that is the case, you must include the <em>extra_hosts</em> instruction in the Docker Compose configuration file:</li>
</ul>
<pre><code class="language-none">extra_hosts:
  - &quot;model-runner.docker.internal:host-gateway&quot;
</code></pre>
<p>With this instruction in place, the API can be accessed at http://model-runner.docker.internal:12434/.</p>
<ul>
<li><strong>From the host</strong>: send requests to http://localhost:12434/</li>
</ul>
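<p>As a quick sanity check from either origin, the native /models endpoint can be queried with curl (shown here against the default port 12434; the container variant assumes the <em>extra_hosts</em> entry described above):</p>
<pre><code class="language-none"># From the host:
curl http://localhost:12434/models

# From another container, through the alias added via extra_hosts:
curl http://model-runner.docker.internal:12434/models
</code></pre>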
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Native endpoints</h3>
<p>The <strong>available endpoints</strong> are:</p>
<ul>
<li><strong>/models/create (POST)</strong>: endpoint to <strong>download</strong> a model.</li>
</ul>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/endpoints_propios_model_create_1_c73b16e565.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/endpoints_propios_model_create_1_c73b16e565.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/endpoints_propios_model_create_1_c73b16e565.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/endpoints_propios_model_create_1_c73b16e565.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/endpoints_propios_model_create_1_c73b16e565.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="native endpoints: /models/create" title="/models/create"/></article>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/endpoints_propios_models_create_2_6a4577f348.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/endpoints_propios_models_create_2_6a4577f348.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/endpoints_propios_models_create_2_6a4577f348.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/endpoints_propios_models_create_2_6a4577f348.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/endpoints_propios_models_create_2_6a4577f348.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="native endpoints: /models/create" title="/models/create"/></article>
<ul>
<li><strong>/models (GET)</strong>: endpoint to <strong>list</strong> existing models in the system along with their information.</li>
</ul>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/endpoints_propios_models_get_81094cb4f6.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/endpoints_propios_models_get_81094cb4f6.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/endpoints_propios_models_get_81094cb4f6.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/endpoints_propios_models_get_81094cb4f6.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/endpoints_propios_models_get_81094cb4f6.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="native endpoints: /models (GET)" title="/models (get)"/></article>
<ul>
<li><strong>/models/{namespace}/{name} (GET)</strong>: endpoint to <strong>display information</strong> about a model.</li>
</ul>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/endpoints_propios_models_namespace_name_0f800d2955.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/endpoints_propios_models_namespace_name_0f800d2955.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/endpoints_propios_models_namespace_name_0f800d2955.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/endpoints_propios_models_namespace_name_0f800d2955.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/endpoints_propios_models_namespace_name_0f800d2955.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="native endpoints: /models/{namespace}/{name} (GET)" title="/models/{namespace}/{name} (get)"/></article>
<ul>
<li><strong>/models/{namespace}/{name} (DELETE)</strong>: endpoint to <strong>delete</strong> a local model.</li>
</ul>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/models_namespace_name_delete_3e95e27ab3.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/models_namespace_name_delete_3e95e27ab3.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/models_namespace_name_delete_3e95e27ab3.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/models_namespace_name_delete_3e95e27ab3.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/models_namespace_name_delete_3e95e27ab3.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="/models/{namespace}/{name} (DELETE)" title="/models/{namespace}/{name} (DELETE)"/></article>
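<p>Putting the four native endpoints together, a typical session from the host could look like the following sketch (the model reference and the body of the create request are illustrative; the screenshots above show the exact payloads):</p>
<pre><code class="language-none"># Download a model
curl http://localhost:12434/models/create \
  -H "Content-Type: application/json" \
  -d '{"from": "ai/llama3.2:3B-Q4_0"}'

# List local models
curl http://localhost:12434/models

# Show information about one model
curl http://localhost:12434/models/ai/llama3.2

# Delete a local model
curl -X DELETE http://localhost:12434/models/ai/llama3.2
</code></pre>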
<h3 class="block block-header h--h20-175-500 left  add-last-dot">OpenAI-compatible endpoints</h3>
<p>The exposed endpoints are:</p>
<ul>
<li><strong>/engines/llama.cpp/v1/models (GET)</strong>: endpoint to <strong>list</strong> available models in the system.</li>
</ul>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/engines_llama_v1_models_79f4d33c1d.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/engines_llama_v1_models_79f4d33c1d.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/engines_llama_v1_models_79f4d33c1d.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/engines_llama_v1_models_79f4d33c1d.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/engines_llama_v1_models_79f4d33c1d.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="/engines/llama.cpp/v1/models (GET)" title="/engines/llama.cpp/v1/models (GET)"/></article>
<ul>
<li><strong>/engines/llama.cpp/v1/models/{namespace}/{name} (GET)</strong>: endpoint to <strong>expose information</strong> about a model.</li>
</ul>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/engines_v1_models_name_09b51bb285.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/engines_v1_models_name_09b51bb285.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/engines_v1_models_name_09b51bb285.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/engines_v1_models_name_09b51bb285.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/engines_v1_models_name_09b51bb285.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="/engines/llama.cpp/v1/models/{namespace}/{name} (GET)" title="/engines/llama.cpp/v1/models/{namespace}/{name} (GET)"/></article>
<ul>
<li><strong>/engines/llama.cpp/v1/chat/completions (POST)</strong>: endpoint to <strong>send</strong> a chat interaction and receive the assistant’s response. Multiple parameters can be specified, such as temperature, stream, seed, etc.</li>
</ul>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/engines_v1_chat_completions_109f433701.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/engines_v1_chat_completions_109f433701.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/engines_v1_chat_completions_109f433701.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/engines_v1_chat_completions_109f433701.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/engines_v1_chat_completions_109f433701.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="/engines/llama.cpp/v1/chat/completions (POST)" title="/engines/llama.cpp/v1/chat/completions (POST)"/></article>
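<p>For instance, a minimal chat request from the host might look like this (model name and parameter values are illustrative):</p>
<pre><code class="language-none">curl http://localhost:12434/engines/llama.cpp/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ai/llama3.2:3B-Q4_0",
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "What is Docker Model Runner?"}
        ],
        "temperature": 0.7
      }'
</code></pre>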
<ul>
<li><strong>/engines/llama.cpp/v1/completions (POST)</strong>: endpoint that returns the model’s completion for a plain text prompt. Note that this endpoint is <strong>deprecated</strong> by OpenAI.</li>
</ul>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/engines_v1_completions_22bcc1fea1.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/engines_v1_completions_22bcc1fea1.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/engines_v1_completions_22bcc1fea1.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/engines_v1_completions_22bcc1fea1.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/engines_v1_completions_22bcc1fea1.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="/engines/llama.cpp/v1/completions (POST)" title="/engines/llama.cpp/v1/completions (POST)"/></article>
<ul>
<li><strong>/engines/llama.cpp/v1/embeddings (POST)</strong>: endpoint to <strong>retrieve</strong> embeddings from a text.</li>
</ul>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/engines_v1_embeddings_30040dd096.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/engines_v1_embeddings_30040dd096.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/engines_v1_embeddings_30040dd096.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/engines_v1_embeddings_30040dd096.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/engines_v1_embeddings_30040dd096.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="/engines/llama.cpp/v1/embeddings (POST)" title="/engines/llama.cpp/v1/embeddings (POST)"/></article>
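<p>A minimal embeddings request could look like this (it assumes an embedding-capable model such as ai/embeddinggemma has already been pulled):</p>
<pre><code class="language-none">curl http://localhost:12434/engines/llama.cpp/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ai/embeddinggemma",
        "input": "Docker Model Runner makes it easy to run LLMs locally."
      }'
</code></pre>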
<p>Since only one inference engine (llama.cpp) is currently supported, <strong>the engine segment can be omitted from the URLs above</strong> (for example, /engines/llama.cpp/v1/models becomes /engines/v1/models).</p>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/omitir_parte_url_2cd5efa0c0.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/omitir_parte_url_2cd5efa0c0.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/omitir_parte_url_2cd5efa0c0.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/omitir_parte_url_2cd5efa0c0.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/omitir_parte_url_2cd5efa0c0.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="URL simplification" title="URL simplification"/></article>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Docker Compose</h2>
<p><strong>Docker Compose allows you to define models as core components of your application, so they can be declared alongside services</strong>, enabling the application to run on any platform compatible with the Compose specification. <strong>To run models in Docker Compose, you need at least version 2.38.0 of the tool</strong>, as well as a platform that supports models in Compose, such as Docker Model Runner.</p>
<p><strong>To use models in Docker Compose, the <em>models</em> element has been introduced</strong>, which allows you to:</p>
<ul>
<li><strong>Declare AI models</strong> required by the application.</li>
<li><strong>Specify configurations and requirements</strong> for each model.</li>
<li><strong>Make the application portable</strong> across different platforms.</li>
<li>Allow the platform to <strong>manage</strong> the model lifecycle.</li>
</ul>
<p>The <strong>configuration options for the <em>models</em> element</strong> are:</p>
<ul>
<li><strong>model</strong> (required): the OCI artifact identifier for the model. This is what will be downloaded and executed by Model Runner.</li>
<li><strong>context_size</strong>: defines the maximum context window size for the model.</li>
<li><strong>runtime_flags</strong>: list of parameters passed to the inference engine when the model starts. For example, for llama.cpp, the parameters can be found <a href="https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md#usage" target="_blank">here</a>.</li>
<li><strong>x-</strong>*: extensible properties for platform-specific options.</li>
</ul>
<p>A simple example of a <em>models</em> definition could be:</p>
<pre><code class="language-none">models:
  llm:
    model: ai/llama3.2:3B-Q4_0
    context_size: 4096
    runtime_flags:
      - &quot;--temp&quot;                # Temperature
      - &quot;0.1&quot;
      - &quot;--top-p&quot;               # Top-p sampling
      - &quot;0.9&quot;
</code></pre>
<p><strong>Services can reference models in two ways</strong>:</p>
<ul>
<li><strong>Short form</strong>: the simplest approach. With this method, the platform automatically generates environment variables based on the model name:
<ul>
<li><strong>LLM_URL</strong>: URL to access the LLM model.</li>
<li><strong>LLM_MODEL</strong>: identifier of the LLM model.</li>
<li><strong>EMBEDDING_MODEL_URL</strong>: URL to access the embedding model.</li>
<li><strong>EMBEDDING_MODEL_MODEL</strong>: identifier of the embedding model.</li>
</ul>
</li>
</ul>
<pre><code class="language-none">services:
  app:
    image: my-app
    models:
      - llm
      - embedding-model

models:
  llm:
    model: ai/llama3.2:3B-Q4_0
  embedding-model:
    model: ai/embeddinggemma
</code></pre>
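<p>Inside the app container, those injected variables can be consumed directly. A minimal sketch (the exact values of LLM_URL and LLM_MODEL shown in the comments are hypothetical and depend on the platform):</p>
<pre><code class="language-none"># Hypothetical values injected by the platform for the model named "llm":
#   LLM_URL=http://model-runner.docker.internal:12434/engines/v1/
#   LLM_MODEL=ai/llama3.2:3B-Q4_0
curl "${LLM_URL}chat/completions" \
  -H "Content-Type: application/json" \
  -d "{\"model\": \"${LLM_MODEL}\", \"messages\": [{\"role\": \"user\", \"content\": \"Hello\"}]}"
</code></pre>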
<ul>
<li><strong>Long form</strong>: with this configuration, the service is explicitly provided with:
<ul>
<li><strong>AI_MODEL_URL</strong> and <strong>AI_MODEL_NAME</strong> for the LLM model.</li>
<li><strong>EMBEDDING_URL</strong> and <strong>EMBEDDING_NAME</strong> for the embedding model.</li>
</ul>
</li>
</ul>
<pre><code class="language-none">services:
  app:
    image: my-app
    models:
      llm:
        endpoint_var: AI_MODEL_URL
        model_var: AI_MODEL_NAME
      embedding-model:
        endpoint_var: EMBEDDING_URL
        model_var: EMBEDDING_NAME

models:
  llm:
    model: ai/llama3.2:3B-Q4_0
  embedding-model:
    model: ai/embeddinggemma
</code></pre>
<p>Here you can find <a href="https://docs.docker.com/ai/compose/models-and-compose/#common-runtime-configurations" target="_blank">some configurations for specific use cases</a> of the <em>models</em> element in Docker Compose.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Demo</h2>
<p>To see this new <em>models</em> element in Docker Compose in action, we created a <strong>simple application to interact with an LLM</strong>. The application uses the following components:</p>
<ul>
<li>Java 21</li>
<li>Spring Boot 3.4.4 (with built-in support for <a href="https://buildpacks.io/" target="_blank">Buildpacks</a> to create Docker images for applications)</li>
<li><a href="https://www.paradigmadigital.com/dev/deep-learning-spring-ai-primeros-pasos/" target="_blank">Spring AI</a></li>
<li>Maven 3.8.5</li>
<li>Docker version 28.4.0</li>
<li>Docker Model Runner 0.1.40</li>
<li>Docker Compose 2.39.4</li>
</ul>
<p>The application simply exposes a <strong>/chat endpoint</strong> that receives user input and sends it to the corresponding LLM.</p>
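<p>Once the stack is running with <em>docker compose up</em>, the endpoint can be exercised with a request like the following (the port and query parameter name are assumptions; the repository’s README documents the exact contract):</p>
<pre><code class="language-none">curl "http://localhost:8080/chat?message=What%20can%20you%20tell%20me%20about%20Docker%3F"
</code></pre>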
<figure class="block block-caption  -inline-block -like-text-width -center"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/docker_demo_models_2245d3e6ce.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/docker_demo_models_2245d3e6ce.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/docker_demo_models_2245d3e6ce.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/docker_demo_models_2245d3e6ce.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/docker_demo_models_2245d3e6ce.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="Demo Models" title="Demo Models"/><figcaption>Demo Models</figcaption></figure>
<p>Here you can <a href="https://github.com/paradigmadigital/local-llms" target="_blank">download the sample application code and the README file with the steps to run it</a>.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Conclusions</h2>
<p>In this final post of the series, we explored how to run LLMs with Docker and the <strong>ease it provides</strong> to integrate them into our applications thanks to <strong>Docker Compose integration</strong>.</p>
<p>Throughout this series focused on running LLMs locally, we have reviewed the most widely used tools and their particularities, all of which offer core functionalities such as commands and API endpoints to interact with models. Currently, <strong><a href="https://www.paradigmadigital.com/dev/ejecutando-llms-local-primeros-pasos-ollama/" target="_blank">Ollama</a> arguably stands out among the rest in terms of available features and advanced model customization</strong>.</p>
<p>Based on what we have seen with Ollama and Docker, will we soon see custom AI models (containerized or not) running in the cloud alongside our microservices? Only time will tell.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">References</h3>
<ul>
<li><a href="https://docs.docker.com/ai/model-runner/" target="_blank">Docker Model Runner Documentation</a></li>
</ul>

            ]]>
        </content:encoded>
    </item><item>
        <dc:creator>
            <![CDATA[ Santiago López ]]>
        </dc:creator>
        <title>The Green QA Framework: Quality that Breathes</title>
        <link>https://en.paradigmadigital.com/dev/green-qa-framework-quality-breaths/</link>
        <pubDate>Tue, 31 Mar 2026 06:00:00 GMT</pubDate>
        <guid isPermaLink="true">https://en.paradigmadigital.com/dev/green-qa-framework-quality-breaths/</guid>
        <description>Green QA is not just about reducing tests or consumption, but about changing how we understand software quality. It involves integrating energy and carbon metrics into the entire software development lifecycle. Sustainability becomes another quality criterion, just like performance or functionality.
</description>
        <content:encoded>
            <![CDATA[
                <p>After understanding <a href="https://en.paradigmadigital.com/dev/what-is-green-qa-quality-that-breathes/" target="_blank">what it means to be eco-conscious in the world of QA</a> and recognizing the regulatory and corporate pressures pushing us toward more sustainable practices, the inevitable question arises: <strong>how do we actually implement it?</strong> The transition to a sustainable testing model requires structure, methodology, and, above all, a clear objective.</p>
<p>In this second article of our Green Quality Assurance (GQA) series, we leave behind the “why” and move into the <strong>“how.”</strong> If in the <a href="https://en.paradigmadigital.com/dev/what-is-green-qa-quality-that-breathes/" target="_blank">first article</a> we established that measuring quality in watts and CO2 is just as important as ensuring software works properly, now it is time to <strong>build the foundations that will support this new way of working</strong>. It is not just about reducing the energy consumption of our tests or running fewer test cases; it is about <strong>completely reimagining our approach to software quality</strong>.</p>
<p>The software industry must understand that <strong>technical excellence and environmental responsibility are not mutually exclusive goals</strong>. In fact, the most innovative organizations are discovering that sustainable QA practices often lead to more efficient processes, more productive teams, and, surprisingly, <strong>better final product quality</strong>.</p>
<p>But to achieve this balance, we need a <strong>robust framework</strong> that allows us to assess, implement, and continuously improve our practices.</p>
<p><strong>The journey toward Green QA is evolutionary</strong>. We cannot expect an organization to go from zero to one hundred overnight. That is why, in the following sections, we explain a <strong>gradual approach</strong> that recognizes different maturity levels, provides concrete tools for each stage, and enables us to measure our progress.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Framework layers</h2>
<p>Addressing cultural change in an organization regarding quality is usually an evolutionary process in which awareness plays a major role. The shift proposed by GQA (Green QA) introduces a <strong>new dimension to that awareness</strong>: the goal of doing things in the greenest way possible. But are we really aware of what must change to make this happen? Let’s explore it.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Governance</h3>
<p>First of all, it is important to have governance that is appropriate for this context. This is the organization’s official “mandate” and the basis of the entire process. Without it, Green QA remains an isolated initiative. Governance addresses the following points:</p>
<ul>
<li><strong>Sustainable quality policy</strong></li>
</ul>
<p>This is not just a document; it is about defining the “Green Acceptance Threshold.” It establishes the sustainability goals the company is pursuing, setting targets by resource type. Once these goals are defined, everything else is aimed at meeting them. For example: <em>“No production deployment may increase the energy consumption of the microservice by more than 5%.”</em></p>
<ul>
<li><strong>Responsibility matrix</strong></li>
</ul>
<p>This defines what each member of the organization involved in the delivery lifecycle is responsible for.</p>
<ol>
<li><strong>QA</strong>: designs efficiency test cases, defines baseline metrics, and is responsible for executing measurements in each cycle.</li>
<li><strong>Architecture</strong>: validates that design decisions do not introduce structural energy debt before reaching the testing phase.</li>
<li><strong>DevOps / Platform Engineering</strong>: ensures that the instrumentation needed to measure consumption is available in test environments. Without observability infrastructure, QA cannot measure.</li>
<li><strong>Sustainability</strong>: provides energy-to-CO2 conversion factors and ensures data traceability into the ESG reporting system.</li>
<li><strong>Compliance</strong>: verifies that data collection, processing, and reporting comply with current regulations, particularly the CSRD.</li>
<li><strong>Product Owner</strong>: formally accepts that acceptance criteria include efficiency metrics, not only functionality and traditional performance.</li>
</ol>
<ul>
<li><strong>Alignment with ESG and corporate strategy</strong>. In the <a href="https://www.paradigmadigital.com/dev/que-es-green-qa-calidad-que-respira/" target="_blank">previous post</a>, we discussed ESG extensively and its connection to Green QA, and it is something that must be considered within governance.</li>
</ul>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Processes</h3>
<p>This layer establishes how Green QA is integrated into the project’s day-to-day work.</p>
<ul>
<li><strong>Shift-Left Green</strong>: integrating environmental criteria during the Refinement phase. If a feature consumes too much unnecessary data, it is rejected before development begins.</li>
<li><strong>Green Gates in the Pipeline</strong>: adding “quality gates” to CI/CD. If automated tests detect an unusual spike in CPU or RAM usage, the build fails. Environmental controls must be present in design, development, testing, and deployment.</li>
<li><strong>Suppliers</strong>: evaluating whether our service providers (Cloud, SaaS) operate using renewable energy.</li>
</ul>
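<p>A minimal sketch of such a green gate, assuming baseline figures recorded from previous runs (the metric names and the 10% tolerance are illustrative, not a standard):</p>

```python
import sys

# Illustrative baselines; a real pipeline would load these from a file
# or a metrics store populated by earlier builds.
BASELINE = {"cpu_seconds": 120.0, "peak_ram_mb": 900.0}
TOLERANCE = 1.10  # fail the build on a >10% regression (assumed policy)

def check_build(metrics: dict) -> list:
    """Return the names of metrics that regressed beyond tolerance."""
    return [name for name, value in metrics.items()
            if value > BASELINE[name] * TOLERANCE]

current = {"cpu_seconds": 150.0, "peak_ram_mb": 880.0}
failures = check_build(current)
if failures:
    print(f"Green gate failed: {failures}")
    # sys.exit(1)  # uncomment in a real CI step to break the build
```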
<p>It is important that governance also defines the consequences of non-compliance with the policy, whether by blocking a release, creating technical debt, or establishing the appropriate mechanisms to prevent bypassing the defined goals.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Data and metrics</h3>
<p>This framework layer analyzes the <strong>quality of the data</strong> that will support decision-making.</p>
<ul>
<li><strong>Data traceability</strong>: ensuring that ESG data has a clear lineage, from the sensor or server log to the annual report.</li>
<li><strong>QA Data Cleaning</strong>: a process for deleting test environments, temporary databases, and old execution logs that bloat storage and consume energy unnecessarily.</li>
<li><strong>Accuracy vs. estimation</strong>: defining which data is measured (real) and which is estimated (mathematical models), applying different QA approaches to each.</li>
<li><strong>Definition of environmental and ESG KPIs</strong></li>
<li><strong>Data quality controls</strong> (accuracy, completeness, traceability)</li>
<li><strong>Audit and reporting</strong></li>
</ul>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Technology</h3>
<p>This is the layer of <strong>tools</strong>. Green QA needs technical eyes to “see” energy. It is therefore advisable to have tooling that can:</p>
<ul>
<li><strong>Measure the impact of programming languages</strong> (for example, comparing Python vs. Rust consumption in critical processes).</li>
<li><strong>Optimize test suites</strong> so they do not run 2,000 tests when only 2 lines of code changed (risk-based test selection).</li>
<li><strong>Tear down test environments automatically</strong>, so they “self-destruct” immediately after execution.</li>
<li><strong>Measure energy, carbon, and resource consumption</strong>, automate green tests, and optimize infrastructure (cloud, hardware).</li>
</ul>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Continuous improvement</h3>
<p>Within the framework layers, it is advisable to invest in the <strong>cultural and improvement aspect</strong> so that the framework becomes circular rather than linear.</p>
<p>This includes elements such as a “Retro-Green” question in each sprint retrospective (“what process or code did we make more efficient this month?”), gamification through rankings of the development/QA teams that have reduced their digital carbon footprint the most, and regular updates to standards.</p>
<p>As ESG laws evolve, this layer changes and is typically reviewed quarterly. In this way, the framework continues to comply with new regulations and allows organizations to establish measurable reduction goals, whether quantitative or qualitative.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">The path toward Zero-waste Testing</h2>
<p>One of the points mentioned above is <strong>measurement</strong>. It is essential to understand at every moment where we stand relative to the objectives in order to <strong>take the necessary actions and decisions</strong> while working within the defined governance. If you are familiar with <a href="https://www.tmmi.org/" target="_blank">TMMi</a>, it is a process that <strong>measures maturity levels</strong> in relation to testing within an organization.</p>
<p>In this regard, below is an <strong>analysis of existing levels in relation to GQA</strong>, so that we can understand where we are and what the goals should be to consolidate or advance through them. <strong>Each of these levels should be further developed to make evaluations more concrete.</strong></p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Level 1: initial (we start walking)</h3>
<p>At this level, the company is not aware of the environmental impact of its testing. Any efficiency is driven by cost savings, not by environmental purpose. Compliance is reactive, without metrics.</p>
<p>At this early stage, all tests are always executed in environments where we have no visibility into their consumption and which generally remain on 24/7.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Level 2: basic (awareness has awakened)</h3>
<p>Stakeholders understand what Green QA is, and the first manual efforts begin to document impact. Manual controls and basic reporting appear.</p>
<p>At this intermediate level, assets such as <strong>servers and tools</strong> are identified in order to request sustainability reports from providers, creating an inventory of digital assets. Testing begins to be handled more consistently with this policy, for example by deleting old test data.</p>
<p>At this stage, actions still depend on people rather than automated processes.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Level 3: defined (the green standard)</h3>
<p>Sustainability is officially integrated into the quality manual. The first standardized processes and KPIs appear.</p>
<p>At this level, having a green production readiness checklist should be one of the goals. To support this, <strong>green KPIs</strong> are defined (for example, watts per test suite).</p>
<p>At the same time, this level seeks to ensure that the QA team has sufficient training in Green Coding and energy efficiency.</p>
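<p>A green KPI such as “watts per test suite” is easy to derive once power and duration are measured. A sketch, where the 300 g/kWh grid-intensity factor is illustrative rather than a real figure for any region:</p>

```python
def suite_energy_wh(avg_power_watts: float, duration_s: float) -> float:
    """Energy used by one suite run: power (W) x time (s) -> watt-hours."""
    return avg_power_watts * duration_s / 3600

def suite_co2_grams(energy_wh: float, grid_g_per_kwh: float) -> float:
    """Convert energy to CO2e using a grid-intensity factor (g/kWh)."""
    return energy_wh / 1000 * grid_g_per_kwh

# A 20-minute suite drawing an average of 85 W:
wh = suite_energy_wh(85, 20 * 60)          # ~28.3 Wh
print(round(suite_co2_grams(wh, 300), 1))  # ~8.5 g CO2e at 300 g/kWh
```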
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Level 4: managed (automated and measurable quality)</h3>
<p>A set of metrics has been established, and continuous reporting automation through pipelines is in place.</p>
<p>Once certain aspects are consolidated, including cultural ones, the goal becomes the use of “real-time carbon dashboards” integrated into tools like Jira or Grafana, so teams can see their daily impact.</p>
<p>The challenge is <strong>integrating this data with the rest of the company (ESG)</strong>. CI/CD pipelines include tools that automatically measure the CPU/RAM consumption of tests. If a test is inefficient, an alert is generated.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Level 5: optimized (Green QA as DNA)</h3>
<p>GQA is no longer an “extra”; it is the only way of working. <strong>It is aligned with the company’s overall strategy.</strong></p>
<p>Now the company uses <strong>AI to predict and minimize the energy consumption</strong> of tests. The carbon savings achieved by the QA team are reported directly in the company’s annual sustainability report (ESG). The challenge is <strong>maintaining innovation and leading standards in the industry</strong>.</p>
<p>Aspects such as the “circular economy of data,” where test data is intelligently reused to avoid generating new loading processes, begin to be taken seriously in order to consolidate green goals.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Strategy and methodology</h2>
<p>The <strong>strategy</strong> is the <strong>decision-making framework</strong> in which the elements needed to address GQA in practice are defined, aligned with all the framework layers described above.</p>
<p>The <strong>methodology</strong> describes <strong>how we implement the strategy</strong> we have defined and is responsible for defining <strong>how to execute GQA</strong>.</p>
<p>In the context of <strong>GQA</strong> (or sustainability in the software lifecycle), the supporting tools do not only measure emissions; they also integrate into the quality process to ensure that software is efficient and complies with environmental regulations (ESG).</p>
<p>Once a strategy aligned with objectives has been established, the <strong>tools and frameworks</strong> that will be used for its implementation are selected. Below are some examples.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Measuring software efficiency (Green Testing)</h3>
<p>This is where <strong>QA has direct control</strong>. The energy consumption of a process or test suite is measured. These measurements are used to establish the baseline.</p>
<p>Before optimizing a single line of code, QA needs to know <strong>how many grams of CO2 the hardware running the application generates</strong>. Without that, we cannot measure improvement after an optimization.</p>
<p><strong>Tool: Scaphandre (energy metrology)</strong></p>
<p>An open-source power consumption metrics agent designed for Kubernetes and bare-metal servers.</p>
<ul>
<li><strong>What it is used for</strong></li>
</ul>
<p>It measures exactly how many watts a specific process consumes (for example, your Selenium suite or a microservice under load).</p>
<ul>
<li><strong>Setup and use</strong></li>
</ul>
<ol>
<li><strong>Installation</strong>: it is installed as a binary or Docker container on the server where tests run.</li>
<li><strong>Usage</strong>: it exposes metrics in Prometheus format.</li>
<li><strong>Green QA Step</strong>: configure a Grafana dashboard that crosses “CPU consumption” with “Watts consumption.” If, after a code optimization, the tests take the same time but consume fewer watts, Green QA has succeeded.</li>
</ol>
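<p>As a sketch of step 3, a dashboard or script can read Scaphandre’s per-process metric through Prometheus’ HTTP API. The URL and the <code>exe</code> label are placeholders for your environment; check your Scaphandre version’s documentation for the exact metric names:</p>

```python
import json
import urllib.parse
import urllib.request

# Placeholders: adjust the Prometheus URL and label selector to your setup.
PROM = "http://prometheus:9090/api/v1/query"
QUERY = 'scaph_process_power_consumption_microwatts{exe="pytest"}'

def microwatts_to_watts(value_uw: float) -> float:
    """Scaphandre reports microwatts; convert to watts."""
    return value_uw / 1_000_000

def current_power_watts() -> float:
    """Sum the power draw of all processes matching the selector."""
    url = PROM + "?" + urllib.parse.urlencode({"query": QUERY})
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    results = data["data"]["result"]
    return sum(microwatts_to_watts(float(r["value"][1])) for r in results)
```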
<p><strong>Tool: Eco-Code / SonarQube (Green Rules)</strong></p>
<ul>
<li><strong>What it is used for</strong></li>
</ul>
<p>Static code analysis focused on energy efficiency.</p>
<ul>
<li><strong>Setup</strong></li>
</ul>
<ol>
<li><strong>Installation</strong>: add the &quot;Green IT&quot; or &quot;Eco-Code&quot; plugin to your SonarQube instance.</li>
<li><strong>Usage</strong>: QA defines quality gates. If the code contains patterns that wake up the CPU unnecessarily (inefficient loops, redundant API calls), the quality test fails.</li>
</ol>
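<p>For instance, one pattern such rules target is a remote call inside a loop. A hypothetical sketch, where the fetch functions are stand-ins for any API or database client:</p>

```python
calls = {"count": 0}

def fetch_one(item_id):
    """Pretend remote call: one round trip per item (hypothetical API)."""
    calls["count"] += 1
    return item_id * 2

def fetch_many(item_ids):
    """Pretend batched call: one round trip for the whole set."""
    calls["count"] += 1
    return {i: i * 2 for i in item_ids}

# Anti-pattern an efficiency rule flags: a remote call inside a loop.
def enrich_slow(item_ids):
    return [fetch_one(i) for i in item_ids]   # N round trips

# Batched variant: same result, a fraction of the CPU/network wake-ups.
def enrich_fast(item_ids):
    found = fetch_many(item_ids)              # 1 round trip
    return [found[i] for i in item_ids]

enrich_slow(range(100))   # 100 calls
enrich_fast(range(100))   # 1 call
print(calls["count"])     # 101
```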
<p><strong>Tool: SimaPro and GaBi</strong></p>
<p>These are industrial standards that can be adapted to the QA ecosystem in the following way:</p>
<ul>
<li><strong>What they mean for IT environments</strong></li>
</ul>
<p>They allow us to model the impact of our digital infrastructure (servers, test mobile devices, or networks). They do not only measure energy expenditure, but also the “carbon debt” of the hardware supporting our software.</p>
<ul>
<li><strong>Their strategic use in Green QA</strong></li>
</ul>
<p>They are used to establish the baseline: knowing how many grams of CO2 the hardware generates before optimizing a single line of code, so that the improvement after each optimization can actually be measured.</p>
<ul>
<li><strong>Setup and data sources</strong></li>
</ul>
<p>To control the environmental footprint of our applications, Green QA maps:</p>
<ul>
<li><strong>Hardware inventories</strong>: CPUs, RAM, and storage systems used in staging and production environments.</li>
<li><strong>Environmental databases</strong>: sources such as Ecoinvent are integrated to calculate the impact of the energy mix (running a test on a hydroelectric-powered server in Norway is not the same as running it in a coal-dependent region).</li>
</ul>
<ul>
<li><strong>Practical decision example</strong></li>
</ul>
<p>Thanks to these tools, the QA team can make comparisons based on real data:</p>
<p>Case study: is it more sustainable to run our regression suite on old on-premise servers or to migrate testing to a cloud instance with Energy Star certification and auto-scaling? Green QA uses life-cycle assessment (LCA) to demonstrate that migration reduces the carbon footprint by X%.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Measuring Cloud Carbon Footprint (CCF)</h2>
<p>If you do not want to rely on native tools (which are sometimes opaque), <strong>Cloud Carbon Footprint is the open standard</strong>.</p>
<p><strong>Tool: Cloud Carbon Footprint (CCF)</strong></p>
<ul>
<li><strong>What it is used for</strong></li>
</ul>
<p>To visualize emissions from AWS, Azure, and GCP in one place with a transparent calculation methodology.</p>
<ul>
<li><strong>Setup</strong></li>
</ul>
<ol>
<li><strong>Connection</strong>: you need read permissions for billing files (CUR in AWS, Billing Export in GCP).</li>
<li><strong>Usage</strong>: it allows the QA team to compare regions.</li>
<li><strong>Technical decision</strong>: QA can demonstrate that moving the staging environment from a coal-based region (for example, Virginia, US-East-1) to one powered by cleaner energy (for example, Sweden or France) instantly reduces carbon footprint without changing a single line of code.</li>
</ol>
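<p>The region comparison in step 3 boils down to multiplying energy use by the grid’s carbon intensity. A sketch with illustrative intensity figures; real values come from CCF’s data sources and vary over time:</p>

```python
# Illustrative grid-intensity figures (g CO2e per kWh), NOT real data.
GRID_G_PER_KWH = {"us-east-1": 380.0, "eu-north-1": 30.0}

def monthly_co2_kg(kwh_per_month: float, region: str) -> float:
    """Emissions for a month of usage in a given region, in kg CO2e."""
    return kwh_per_month * GRID_G_PER_KWH[region] / 1000

usage = 500  # kWh/month for a staging environment (assumed figure)
for region in GRID_G_PER_KWH:
    print(region, round(monthly_co2_kg(usage, region), 1), "kg CO2e")
```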
<p><strong>Tools: Watershed, Persefoni, Plan A</strong></p>
<p>These are SaaS platforms for measuring corporate carbon footprint.</p>
<ul>
<li><strong>What they are</strong></li>
</ul>
<p>They automate the calculation of Scope 1, 2, and 3 emissions.</p>
<ul>
<li><strong>Usage in Green QA</strong></li>
</ul>
<p>Software falls under Scope 3 (indirect emissions). These tools collect data from your electricity bills and cloud providers.</p>
<ul>
<li><strong>Setup</strong></li>
</ul>
<p>They connect via API to your inventory systems. The QA team reports the energy consumption of test server farms here so the company has real data on IT department impact.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Measuring sustainability in the Frontend</h2>
<p>GQA also measures the <strong>impact on the end-user device</strong>.</p>
<p><strong>Tool: GreenFrame.io or Lighthouse (Carbon Indicator)</strong></p>
<ul>
<li><strong>What it is used for</strong></li>
</ul>
<p>To measure the carbon footprint of a user session in the browser.</p>
<ul>
<li><strong>Setup and use</strong></li>
</ul>
<ol>
<li><strong>CI/CD Integration</strong>: it integrates with GitHub Actions or Jenkins.</li>
<li><strong>Usage</strong>: every time a visual regression test is launched, GreenFrame estimates the grams of CO2 produced by loading the page (data transfer + JS execution on the client).</li>
<li><strong>QA Metric</strong>: “this new Home version weighs 2MB more and generates 0.5g of extra CO2 per visit.” This is reported as a Sustainability Bug.</li>
</ol>
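<p>The kind of estimate GreenFrame produces can be roughly approximated from data transfer alone. A sketch, where both conversion factors are illustrative averages rather than GreenFrame’s actual model:</p>

```python
def page_co2_grams(transfer_mb: float,
                   kwh_per_gb: float = 0.81,
                   grid_g_per_kwh: float = 442.0) -> float:
    """Rough CO2e per page view from data transfer alone. Both default
    factors are illustrative averages; a tool like GreenFrame models far
    more (client CPU, device mix, caching)."""
    kwh = transfer_mb / 1024 * kwh_per_gb
    return kwh * grid_g_per_kwh

# The 2 MB regression from the example above:
print(round(page_co2_grams(2.0), 2))  # ~0.7 g CO2e per visit
```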
<h2 class="block block-header h--h30-15-400 left  add-last-dot">QA and audit: green quality management</h2>
<p>This is where <strong>you connect data with the testing process</strong>.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Jira and TestRail</h3>
<ul>
<li><strong>Usage in Green QA</strong>: they do not measure carbon by themselves, but they are configured to manage green requirements.</li>
<li><strong>Setup</strong>
<ul>
<li><strong>Jira</strong>: create custom fields such as Estimated Carbon Cost in User Stories.</li>
<li><strong>TestRail</strong>: create a “Green Test Cases” section where you validate that the app enters power-saving mode or does not make unnecessary API requests.</li>
</ul>
</li>
</ul>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">ESG data platforms (Environmental, Social, and Governance)</h3>
<ul>
<li><strong>What they are</strong></li>
</ul>
<p>Repositories where all evidence for legal audits is stored.</p>
<ul>
<li><strong>Usage in Green QA</strong></li>
</ul>
<p>The result of your green tests is uploaded here as proof of regulatory compliance (for example, to comply with the CSRD directive in Europe).</p>
<ul>
<li><strong>Pipeline configuration</strong></li>
</ul>
<p>For this to be true “Green QA,” the flow must be:</p>
<ol>
<li><strong>Define thresholds</strong>: in Jira, set an energy consumption limit per feature.</li>
<li><strong>Measure</strong>: during execution (Cucumber/Playwright), monitor consumption with tools such as Scaphandre or Intel Power Gadget.</li>
<li><strong>Visualize</strong>: cross that data with the Azure Emissions Dashboard.</li>
<li><strong>Audit</strong>: export reports to Persefoni or Plan A for annual accounting.</li>
</ol>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Conclusions</h2>
<p>The transition process within an organization to embrace GQA involves a series of unavoidable steps and an adaptation process at every level.</p>
<p>It requires involvement from both the business and technical sides, beginning with cultural change and continuing through methodological and strategic adaptation.</p>
<p>All these changes will help the company comply with the legal framework already mentioned in the previous post.</p>

            ]]>
        </content:encoded>
    </item><item>
        <dc:creator>
            <![CDATA[ Eva Ferrer ]]>
        </dc:creator>
        <title>What Should Be Considered Before the First Sprint?</title>
        <link>https://en.paradigmadigital.com/organizational-transformation-rev/what-should-be-considered-before-first-sprint/</link>
        <pubDate>Thu, 26 Mar 2026 07:00:00 GMT</pubDate>
        <guid isPermaLink="true">https://en.paradigmadigital.com/organizational-transformation-rev/what-should-be-considered-before-first-sprint/</guid>
        <description>Many times, a project doesn’t fail because of execution, but due to a lack of initial alignment. Misaligned expectations, unclear goals, or decisions made without context can create friction. Taking the time to understand each other before starting brings clarity and leads to better outcomes. In this post, we show you how to do it.
</description>
        <content:encoded>
            <![CDATA[
                <p>If you are familiar with Agile methodologies in general and Scrum in particular (and especially if you practice it), I know you will tell me that <strong>Scrum does not recognize the existence of a Sprint Zero</strong>. This post is not about explaining what a Sprint is or clarifying what the term “time-box” implies or does not imply, but about <strong>giving a place to this period of time that is needed at the start of a project</strong>. We can call it whatever we want, but <a href="https://www.paradigmadigital.com/techbiz/sprint-0-clave-la-gestion-proyectos-agiles/" target="_blank">Sprint Zero</a> is a necessary good if we want to avoid surprises halfway through… or even at the end.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">My experience</h2>
<p>I have been managing projects for almost fifteen years — first under a waterfall methodology, and now I am <strong>an agilist</strong> (or at least I try), working with <strong>Scrum</strong>. When humanity took that great leap toward true value delivery, leaving behind endless Project Management Plan documents printed in A3, we also forgot to ensure, through a time-box, this <strong>space to understand the project, the client, and the team</strong> — to make initial contact, put faces to names, and start getting to know each other.</p>
<p>Defining the rules of the game, understanding needs, feeding the backlog, making initial estimates… <strong>all of this is just as important as delivering a Minimum Viable Product (MVP)</strong>. In fact, it is <strong>essential</strong> for the MVP to meet the expectations of all stakeholders. Because sometimes — just sometimes — in that first contact, one side realizes the other cannot deliver what was expected, and we avoid discovering this when half the work is already done.</p>
<p>It is true that <strong>working in consulting is not the same as working for a final client</strong>. Consultants serve clients who have a product to sell, which somehow also becomes our product. But this service has an end date, so generally <strong>in consulting we tend to request these time-boxes</strong> to align expectations and ensure deliverables with greater reliability. On the client side, where information is more direct and immediate, there is often a tendency to think that these spaces — where the team reflects, plans, aligns, and gathers context — are unnecessary.</p>
<p>This initial phase of the project provides a series of <strong>advantages</strong> that allow us to start the <strong>first Sprint with a higher chance of success</strong>, compared to diving straight into it without fully understanding what we are facing:</p>
<ol>
<li><strong>You will get to know your client</strong></li>
</ol>
<p>And to do so, you will make time — even when there is none — for kickoff meetings where the team presents what has been understood from the agreed scope (the project charter) and the conclusions drawn from this initial phase, where sometimes people only understand what they want (or are able) to understand.</p>
<p>From these meetings, questions will arise that may (or may not) change the product scope and even increase its value — because, honestly, they had not even been considered before. And what better value delivery than adding value to your own product?</p>
<ol start="2">
<li><strong>You will get to know the team and start building relationships</strong></li>
</ol>
<p>I’m not talking about having “a good team” in abstract terms, but about <strong>creating the feeling of moving as one</strong>, rowing in the same direction. In those early days, you begin to see — and be seen: strengths, habits, styles, and how each person reacts under pressure or uncertainty.</p>
<p>That knowledge is invaluable because it allows you to anticipate and manage risks and <a href="https://www.paradigmadigital.com/transformacion-organizacional-rev/toma-control-estrategias-gestion-dependencias-activa/" target="_blank">dependencies</a> that are not visible at first: misunderstandings, misaligned expectations, misinterpreted silences, or decisions no one dares to make without a clear framework.</p>
<ol start="3">
<li><strong>You will help the team and the client get to know each other</strong></li>
</ol>
<p>In those early meetings filled with questions and clarifications (business, technical, and “how do we actually do this?”), the <strong>real starting point</strong> is built. If mutual trust is established during this period, everything flows better afterward: when the team proposes something unexpected, the client will at least listen; and when business priorities change mid-way, the team will treat it as what it is — an adjustment, not an impossible drama.</p>
<ol start="4">
<li><strong>Together, we establish the rules of the game</strong></li>
</ol>
<p>This already accounts for half the success of the deliverable: sprint duration, daily stand-up time, planning/review/retro agendas, estimation approach, and how we write user stories. We also set up the basics: repositories, access, tools (Jira or not?), and initial technical decisions to avoid improvisation. With all this, we outline a <strong>lightweight initial roadmap</strong> to understand where we are starting and what milestones lie ahead.</p>
<ol start="5">
<li><strong>The Product Owner can leave this phase with an initial Product Backlog</strong></li>
</ol>
<p>A first version — but enough to build an MVP that becomes the foundation for everything else. As Scrum dictates, the backlog <strong>will evolve throughout the product lifecycle</strong>, but defining initial epics, stories, and tasks gives us a solid starting point to begin building (and learning) from Sprint 1.</p>
<ol start="6">
<li><strong>From all meetings, impressions, ideas, and decisions, an initial <a href="https://www.paradigmadigital.com/dev/gestion-riesgos-entornos-agiles/" target="_blank">risk management</a> can be established</strong></li>
</ol>
<p>This allows us to create an <strong>initial picture</strong> of potential scenarios the team may face and try to eliminate or mitigate them before they become real problems.</p>
<p>All of this <strong>lays the foundation of the project</strong>, from which the path begins — not only toward successful deliverables but toward doing things properly, always, always through alignment.</p>
<h2 class="block block-header h--h30-15-400 left  ">In real life…</h2>
<p>Let me share a real case that reinforced all of this for me. I was assigned to a project with a client we had worked with before — in theory, nothing should have surprised us. <strong>A short, urgent project</strong>: the client wanted a deliverable by early year, and we received it in mid-November. December in between, with holidays already scheduled… so the margin was tight.</p>
<p>Looking ahead, we realized it was <strong>necessary to discover each other</strong> to strengthen the product and build as much as possible at the highest possible speed, delivering a quality MVP that could grow incrementally.</p>
<p>We spent two weeks getting to know each other: 10 working days where, based on the project initiation document, <strong>we worked closely with the client</strong> to resolve requirement doubts, extract all necessary User Stories, estimate them, and therefore estimate the entire project.</p>
<p>Given the impossibility of delivering everything desired by the client’s deadline, the idea was to <strong>propose an MVP</strong> that would evolve through value deliveries every two weeks, with new features and deployment milestones until completing everything defined in the requirements document.</p>
<p>We started with enthusiasm, determined to do it right. We explained that sprints would be 2-week time-boxes, used every available moment to gather more information, clarify doubts, define how features could be more effective, and build a roadmap with milestones and value deliveries.</p>
<p>The <strong>first week went smoothly</strong>: committed client, motivated team… what could go wrong? By the <strong>second week</strong>, the client stopped attending meetings, became vague in responses, and stopped replying. Before we could finalize the roadmap and delivery plan, they canceled the project.</p>
<p>A failure? For me, quite the opposite: it was an early signal that saved us weeks of pressure and wasted effort. Without that initial phase, we would have discovered it much later — with work already done, an exhausted team, and a frustrated client.</p>
<h2 class="block block-header h--h30-15-400 left  ">What conclusion do we draw?</h2>
<p>So, <strong>was Scrum wrong for not “inventing” a Sprint Zero?</strong> From a strict Scrum perspective, it may not make sense: a Sprint is a Sprint. But real life — especially in consulting — rarely starts under ideal conditions, and that’s where this discovery time-box becomes an ally.</p>
<p>In today’s context, with the growing <strong>need for adaptability</strong>, it makes sense to go through this discovery phase with the client — something that has existed since the origins of project management.</p>
<p>What would have happened in that project where the client withdrew in week two? Would we have realized so quickly that we were not aligned? Probably not. We would have gone through <strong>six weeks of intense pressure</strong>, trying to deliver something unachievable, with a stressed team, a frustrated client, and quality and methodology both questioned.</p>
<p>Traditional project management already accounted for something similar: <strong>before execution, you needed to initiate and plan — understand the context, align expectations, and define a roadmap</strong>.</p>
<p>Call it what you want, but the need to understand the client, the problem, and the project conditions is not new — what has changed is how we do it. Today, we don’t aim for exhaustive waterfall-style planning, but for a lightweight and adaptive start: empathizing with goals, clarifying needs, making assumptions visible, identifying risks, and leaving with a first working vision.</p>
<p>In short: <strong>think just enough at the beginning so you don’t pay later for not thinking at all</strong>.</p>
<p>At Paradigma, we apply this mindset through <a href="https://en.paradigmadigital.com/offering/polaris/" target="_blank">Polaris</a>, our framework, where we maintain Agile principles while adapting to each client’s lifecycle:</p>
<ul>
<li><strong>We keep the Agile essence</strong>, applying the <strong>80/20 rule</strong>: 80% is team attitude, 20% is practices, tools, and experience.</li>
<li><strong>The Scrum Master evolves into an Agile Delivery Leader</strong>, ensuring coordination, value focus, and clarity.</li>
<li><strong>We follow a shared True North</strong>, guiding decisions and maintaining direction.</li>
<li><strong>We work in a 100% adaptive framework</strong> with clear phases: pre-project, kickoff (Sprint Zero), iterations, and closure.</li>
<li><strong>We proactively manage risks</strong> from day one.</li>
<li><strong>We aim for productivity</strong>, doing what is necessary first, then what is possible, and eventually what seemed impossible.</li>
<li><strong>We rely on metrics to make decisions</strong>, adapting them to each context.</li>
<li><strong>Quality is non-negotiable</strong>, ensuring superior products.</li>
<li>We understand that software is about solving business problems, so <strong>our approach is always aligned with the client</strong>.</li>
<li><strong>Team experience matters</strong>, and relationships with clients are key to success.</li>
</ul>
<p>And I’ll close with something simple: life gives us the ability to choose — also when managing projects and trusting others with our products. <strong>Nothing guarantees 100% success</strong>, but if we can choose, I’m clear: better to get to know each other before jumping to conclusions, better to empathize before blaming.</p>
<p>Better to discover things early than end up with a list of “I told you so” that leads nowhere… and certainly not to product success.</p>

            ]]>
        </content:encoded>
    </item><item>
        <dc:creator>
            <![CDATA[ Andrés Macarrilla ]]>
        </dc:creator>
        <title>AI Platforms: From Theory to Practice</title>
        <link>https://en.paradigmadigital.com/techbiz/ai-platforms-from-theory-to-practice/</link>
        <pubDate>Tue, 24 Mar 2026 07:00:00 GMT</pubDate>
        <guid isPermaLink="true">https://en.paradigmadigital.com/techbiz/ai-platforms-from-theory-to-practice/</guid>
        <description>Artificial Intelligence has already proven its potential in proofs of concept. Now the challenge is different: bringing it into production in an efficient, scalable, and secure way. In this post, we explain how to move from theory to practice in applying AI.
</description>
        <content:encoded>
            <![CDATA[
                <p>In the previous post, we were overwhelmed by the sheer number of <a href="https://en.paradigmadigital.com/techbiz/from-poc-to-production/" target="_blank">ticketing tools, quality issues, and bottlenecks when trying to bring everything into production</a>. We analyzed how the gap between a successful PoC and industrialized production is real — and very costly.</p>
<p>The good news is that the industry has already found the solution, and it’s not something invented overnight. The answer, <a href="https://en.paradigmadigital.com/techbiz/2026-the-year-ai-platform/" target="_blank">as we have been advocating</a>, lies in <strong>applying the most rigorous discipline we have</strong>: <a href="https://en.paradigmadigital.com/techbiz/how-to-release-platform-as-a-product/" target="_blank">Platform Engineering</a>. The days of <em>no-stakes experimentation</em> (the AI preparation phase) are over — now it’s time for the <strong>application of AI in production</strong>, at scale, with responsibility and speed.</p>
<p>The need for an AI platform is not just a trend — it is the <strong>AI-native infrastructure</strong> that leading companies, including some of our clients, are already building in the market today.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">The industry is already there, and there are reference solutions</h2>
<p>The truth is, we don’t need to be guinea pigs. The heavyweights are already clearly stating what we are seeing: <strong>AI needs Platform Engineering</strong>.</p>
<p><strong>Google and Thoughtworks</strong> are just two examples of how the industry’s vision is converging. The message is clear: if you want to scale MLOps consistently, you need <a href="https://en.paradigmadigital.com/techbiz/understanding-golden-paths-practical-guide/" target="_blank">Golden Paths</a> and the abstraction that only Platform Engineering can provide. It’s not just about having more GPUs — it’s about <strong>making those GPUs consumable</strong> without a 300-page manual.</p>
<p><strong>What does this mean?</strong> It means that the goal is not just to support current models, but to <strong>create AI-native infrastructure</strong>. In other words, a platform designed from the ground up to orchestrate the complexity of AI agents, vector data, and the MLOps lifecycle. We are talking about a platform where automation and intelligence are built in.</p>
<p>If industry leaders have already validated this <em>blueprint</em>, the question is not <em>if</em> we should build it, but <strong>when</strong> we start.</p>
<p>An AI platform is rarely built from scratch using a single product. It is based on an <strong>intelligent orchestration of components</strong> that reduce friction and provide flexibility. For teams starting this journey, it is <strong>crucial to identify the stack</strong> that will serve as the foundation. An example of an industrialized Golden Path includes tools such as:</p>
<ul>
<li><strong>Pipeline orchestration: Kubeflow or MLflow</strong>, to standardize workflows (training, packaging, and model deployment).</li>
<li><strong>Training data management (Feature Store)</strong>: tools like <strong>Feast</strong>, ensuring data consistency between training (offline) and prediction (online).</li>
<li><strong>Model serving and MLOps core</strong>: using Kubernetes for deployment is just the beginning. Specialized solutions like <strong>KServe</strong> help manage scaling and model monitoring complexity.</li>
</ul>
<p>The key is not adopting all these tools, but ensuring that <strong>the AI Platform abstracts them</strong>, so Data Scientists only interact with the simplified Golden Path we define.</p>
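<p>To make that abstraction concrete, here is a minimal sketch in Python of the kind of facade a Golden Path could expose. All names here (<code>GoldenPath</code>, <code>deploy_model</code>) are hypothetical illustrations, not a real platform API; in a real platform each step would call out to Kubeflow/MLflow, Feast, or KServe, which the Data Scientist never touches directly:</p>

```python
from dataclasses import dataclass, field

@dataclass
class GoldenPath:
    """Hypothetical facade: one entry point that hides pipeline
    orchestration, feature store, and serving details."""
    project: str
    steps: list = field(default_factory=list)

    def _run(self, step: str) -> None:
        # In a real platform, each step would invoke the underlying tool's API.
        self.steps.append(step)

    def deploy_model(self, model_name: str, version: str) -> dict:
        self._run(f"pipeline:package:{model_name}:{version}")  # e.g. Kubeflow/MLflow
        self._run(f"features:materialize:{self.project}")      # e.g. Feast
        self._run(f"serving:rollout:{model_name}:{version}")   # e.g. KServe
        return {"model": model_name, "version": version, "status": "deployed"}

path = GoldenPath(project="recsys")
result = path.deploy_model("recommender", "v3")
print(result["status"])  # deployed
print(len(path.steps))   # 3 — three industrialized steps, zero tickets
```

<p>The design point is the single call: the Data Scientist states intent, and the platform owns the sequencing and the tooling behind it.</p>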
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Real use cases: tangible ROI</h2>
<p>CxOs love the word <strong>ROI</strong>. How do we evangelize the value of this platform? Simple: <strong>with facts</strong> — showing where self-service and traceability generate revenue or mitigate risk.</p>
<p>Let’s ground this with three examples where the <strong>AI Platform provides the strongest foundation</strong>:</p>
<ul>
<li><strong>Retail and personalization at scale</strong></li>
</ul>
<p>A platform that enables deploying a <strong>recommendation engine</strong> for each customer segment (or even each individual) in a matter of hours. The advantage: if Data Scientists can iterate on recommendations ten times faster thanks to an automated Golden Path, the company sells more. The business focus is on improving the algorithm’s Time-to-Market.</p>
<ul>
<li><strong>Dynamic fraud detection</strong></li>
</ul>
<p>Risk analysis. Here, the AI platform not only deploys the detection model but also <strong>ensures auditability</strong> and <strong>real-time monitoring of model drift</strong>. If the model starts to fail, the platform detects it and automatically rolls it back, protecting the company from multimillion losses or regulatory failures.</p>
<ul>
<li><strong>Internal IT and automation</strong></li>
</ul>
<p>We can use the AI platform to enable Data Scientists to build internal AI agents that automate support ticket management or optimize cloud resources. Moving from “we have an AI agent in testing” to “the agent resolves 20% of L1 incidents” is only possible with a platform that reliably manages its lifecycle and state.</p>
<p>There are countless use cases. As experts in your vertical and business domain, you will surely identify many more opportunities to capitalize on.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">The CxO roadmap: beyond code and technology</h2>
<p>This is where the CxO shifts into strategic consultant mode. Technology is the <strong>how</strong>, but strategy and people are the <strong>what</strong>. We cannot talk about an AI Platform without addressing <strong>change management</strong>.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Strategic alignment: connecting DORA to the balance sheet</h3>
<p>We need to translate <em>Feature Store</em> jargon into business language — the language of those who unlock investment.</p>
<p>Using principles from our <a href="https://en.paradigmadigital.com/techbiz/what-platform-engineering-is-what-it-is-not/" target="_blank">Platform Engineering series</a>:</p>
<ul>
<li><strong>Business KPIs and MLOps metrics</strong>: demonstrating that reducing Lead Time for Changes (a DORA metric we should use to measure the platform) directly translates into faster product launches and reduced risk. This is the only way to justify the investment with evidence.</li>
</ul>
<p>Additionally, for <strong>AI and models</strong>, we can introduce specific metrics:</p>
<ul>
<li><strong>PoC-to-Prod Ratio</strong>: the percentage of Proofs of Concept that reach production within 90 days. The platform should increase this ratio from the current 10–20% to over 70%.</li>
<li><strong>Model Lift</strong>: measuring the incremental improvement in business metrics (e.g., conversion increase, fraud reduction) before and after deploying the model through the platform.</li>
<li><strong>MLOps TCO Reduction</strong>: quantifying operational cost savings achieved through self-service and automation, eliminating dependency on TicketOps and manual infrastructure configuration.</li>
</ul>
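<p>These metrics are simple enough to compute directly. A minimal sketch, with hypothetical function names and illustrative numbers (not real benchmark data), of how the PoC-to-Prod Ratio and Model Lift could be calculated:</p>

```python
from datetime import date

def poc_to_prod_ratio(pocs, window_days=90):
    """Share of PoCs that reached production within `window_days`.
    Each PoC is a (start_date, prod_date_or_None) pair."""
    shipped = sum(
        1 for start, prod in pocs
        if prod is not None and (prod - start).days <= window_days
    )
    return shipped / len(pocs)

def model_lift(metric_before, metric_after):
    """Relative improvement in a business metric after deployment."""
    return (metric_after - metric_before) / metric_before

pocs = [
    (date(2026, 1, 1), date(2026, 2, 15)),  # shipped within 90 days
    (date(2026, 1, 1), None),               # never reached production
    (date(2026, 1, 1), date(2026, 6, 1)),   # shipped, but too late
]
print(poc_to_prod_ratio(pocs))       # ≈ 0.33 — one of three PoCs made it in time
print(model_lift(0.020, 0.023))      # ≈ 0.15 — 15% conversion lift
```

<p>The value is less in the arithmetic than in agreeing, up front, on what counts as "in production" and which business metric the model is accountable for.</p>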
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Change management: bridging Data Science and engineering</h3>
<p>This is probably the hardest part. We need to <strong>reconcile and empower both worlds</strong>:</p>
<ul>
<li><strong>Data Scientists are not DevOps — and they shouldn’t be</strong>. That’s why the platform is essential as a bridge. We must establish an <strong>internal product culture</strong>: the platform team provides services and capabilities, and the Data Science team is the customer.</li>
<li><strong>Training and new roles</strong>: invest in developing <strong>Machine Learning Engineers</strong>, the glue between Data Scientists and Platform Engineers.</li>
</ul>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/gestion_cambio_data_scientist_ingenieria_f0be4bd0fa.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/gestion_cambio_data_scientist_ingenieria_f0be4bd0fa.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/gestion_cambio_data_scientist_ingenieria_f0be4bd0fa.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/gestion_cambio_data_scientist_ingenieria_f0be4bd0fa.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/gestion_cambio_data_scientist_ingenieria_f0be4bd0fa.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="Change management, ROI use cases and the CxO roadmap: strategy and people" title="Change management"/></article>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">The first step is technological abstraction and addressing real business needs</h2>
<p>The industry is doing it. At Paradigma, we are already doing it. We see that <strong>the benefits are tangible</strong>, and above all, we see that <strong>the risk of falling behind is unacceptable</strong>. AI is the new competitive battlefield.</p>
<p>So, where do we start?</p>
<p><strong>My recommendation for all C-level executives, decision-makers, and business unit leaders responsible for technology decisions is simple</strong>: start with the <strong>highest-friction components</strong> — those that consume the most effort without directly impacting the business.</p>
<p>Focus on what is <strong>most valuable to optimize and industrialize</strong>, and what accelerates the journey from idea to production.</p>
<ul>
<li><strong>Do not try to build the entire IDP at once</strong>. That would be a mistake. Be pragmatic.</li>
<li>Start with capabilities like a <strong>Feature Store</strong> to solve data quality issues, or a <strong>Model Registry</strong> to address traceability and governance of model usage within your organization.</li>
</ul>
<p>These are just a couple of examples, but those familiar with the process and its pain points will be able to identify “small wins” that gradually build momentum and shape the AI Platform.</p>
<p>Any effort that reduces manual self-service and increases standardization is paving the way toward that <strong>inevitable AI Platform</strong>. The platform is not built in a day — it is a journey. The best way to evangelize and generate traction is to demonstrate value through small but solid Golden Paths.</p>
<p>I hope this series of posts has sparked your curiosity about why this topic is so important, provided strategic justification through real-world challenges, and most importantly, given you ideas on where to start building a practical roadmap that delivers value and helps lead the AI conversation in your organization.</p>
<p>If you’d like to revisit the previous two posts in the series, here they are:</p>
<ul>
<li><a href="https://en.paradigmadigital.com/techbiz/2026-the-year-ai-platform/" target="_blank">2026 will be the year of AI platforms</a></li>
<li><a href="https://en.paradigmadigital.com/techbiz/from-poc-to-production/" target="_blank">From PoC to production: many proofs of concept, but few in production</a></li>
</ul>

            ]]>
        </content:encoded>
    </item><item>
        <dc:creator>
            <![CDATA[ Santiago López ]]>
        </dc:creator>
        <title>What Is Green QA? Quality That Breathes</title>
        <link>https://en.paradigmadigital.com/dev/what-is-green-qa-quality-that-breathes/</link>
        <pubDate>Tue, 17 Mar 2026 07:00:00 GMT</pubDate>
        <guid isPermaLink="true">https://en.paradigmadigital.com/dev/what-is-green-qa-quality-that-breathes/</guid>
        <description>Green QA introduces a new way of measuring software quality. In addition to validating functionality and performance, it aims to analyze the energy consumption and carbon footprint of the process. This makes it possible to build more efficient digital products aligned with sustainability criteria.
</description>
        <content:encoded>
            <![CDATA[
                <p>Making software work correctly and ensuring that work processes follow a quality-oriented methodology are not the only goals that should be pursued within QA, QC &amp; Testing and in the software world in general. There is a new frontier: <strong>measuring quality in watts and CO₂</strong>.</p>
<p>Testing everything at all times may not be the most optimal or the most ecological approach. Improving <strong>testing cycles</strong> and adapting both <strong>process quality and product quality</strong> to reduce emissions becomes a new challenge for teams.</p>
<p>In this series of three posts about <strong>Green Quality Assurance</strong>, we will explore this new perspective to understand what it consists of, which frameworks may be most suitable to achieve our goals, and how to establish KPIs and metrics within an organization or project to guide our practices toward greater efficiency and sustainability.</p>
<p>In this first part, we will focus on explaining <strong>what it means to be environmentally conscious in the QA world and how this need arises</strong>, driven by European regulations such as the CSRD (Corporate Sustainability Reporting Directive) and concepts like ESG (Environmental, Social, and Governance).</p>
<h2 class="block block-header h--h30-15-400 left  ">What is Green QA?</h2>
<p>Imagine your quality process becoming an “athlete”: faster, stronger, and much more efficient. <strong>Green Quality Assurance (GQA)</strong> redefines QA, QC, and testing processes so that each one matters, <strong>reducing energy consumption and carbon footprint</strong> without losing the rigor required.</p>
<p>Integrating sustainability into the <strong>quality of the process</strong> (QA activities during software construction) and into the <strong>quality of the product</strong> (QC and testing, validating and verifying the built product) means ensuring that software is not only good in terms of quality, but also <strong>efficient in terms of energy consumption and sustainability</strong>, and aligned with <strong>ESG principles</strong>, which we will explore later.</p>
<p>Organizations that <strong>integrate social responsibility into their development lifecycle</strong>, while maintaining performance and controlling costs, offer a <strong>competitive advantage</strong> to potential clients. This allows those clients to improve their sustainability metrics without sacrificing other objectives.</p>
<p>Aligned with the concept of GQA is <strong>GreenCode</strong>, which focuses on coding practices that seek energy efficiency through techniques such as <strong>lazy loading</strong>, <strong>microservices instead of monoliths</strong>, and <strong>lightweight code</strong> that extends the lifespan of the devices on which it runs, among other aspects.</p>
<p><strong>Green IT</strong> goes hand in hand with the above concepts and serves as their foundation. It refers to the use of <strong>hardware with high energy efficiency certifications</strong> and the use of <strong>cloud infrastructure</strong>, since large providers such as Amazon and Google often rely partly on renewable energy sources like solar power to maintain their data centers.</p>
<p>Currently, from a legal perspective, there are <strong>regulations</strong> that require companies to present CSR reports (or Sustainability Reports). At the European level, this is known as the <strong>CSRD</strong>, which in Spain has been mainly integrated through the <strong>Corporate Sustainability Reporting Law (LIES)</strong>. It is important to remember that the CSRD requires the use of <strong>ESRS standards (European Sustainability Reporting Standards)</strong>. These standards require certain companies to <strong>break down their energy consumption and emissions</strong> depending on the size and type of company. As you can see, Green QA has legal coverage and, if its objectives are achieved, it helps companies comply with these requirements.</p>
<p>As an example of the use of these regulations and Green QA, a traditional CSR report might say: “we want to be green,” while a report under the <strong>CSRD</strong> would require something like: “our <strong>Green QA</strong> suite reduced CPU consumption by 12%, saving X tons of CO₂ this year.”</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">ESG as DNA</h2>
<p>Continuing with the concepts surrounding GQA, and since we will refer to it frequently, let’s briefly explain what ESG is.</p>
<p><strong>ESG (Environmental, Social, and Governance)</strong> is the set of criteria that investors, governments, and customers use to <strong>measure</strong> whether a company is responsible in terms of <strong>environmental impact, social responsibility, and internal governance</strong>.</p>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/esg_software_development_13ef1bb249.jpeg"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/esg_software_development_13ef1bb249.jpeg 1920w,https://www.paradigmadigital.com/assets/img/resize/big/esg_software_development_13ef1bb249.jpeg 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/esg_software_development_13ef1bb249.jpeg 910w,https://www.paradigmadigital.com/assets/img/resize/small/esg_software_development_13ef1bb249.jpeg 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="ESG in software development" title="ESG in software development"/></article>
<p>When we talk about <strong>Environmental</strong>, we refer to the environmental impact a company has on the planet. Green QA plays a significant role here, helping with aspects such as:</p>
<ul>
<li><strong>Key topics</strong>: carbon footprint, energy efficiency, waste management, and climate change.</li>
<li><strong>In software</strong>: how much energy do your servers consume? Is your code optimized to avoid unnecessarily heating processors?</li>
</ul>
<p>When we talk about <strong>Social</strong>, we refer to how the company manages its relationships with people and society.</p>
<ul>
<li><strong>Key topics</strong>: diversity, human rights, workplace safety, and data protection.</li>
<li><strong>In software</strong>: this includes accessibility (ensuring that people with disabilities can use your application) and ethical data practices (user privacy).</li>
</ul>
<p>Finally, when we talk about <strong>Governance</strong>, we refer to how the company is managed internally — the rules and transparency that guide its operations.</p>
<ul>
<li><strong>Key topics</strong>: business ethics, executive compensation transparency, anti-corruption measures, and legal compliance.</li>
<li><strong>In software</strong>: quality audits, compliance with software regulations, and transparency in development processes.</li>
</ul>
<p>Now that we understand ESG, we can see how the technical quality role can evolve — from guaranteeing the method and the product to also ensuring <strong>ESG compliance</strong>, thereby becoming a guardian of sustainability as well.</p>
<p>Understanding the major ESG pillars (environmental impact, social impact, and internal governance), GQA helps <strong>achieve objectives in the technological domain</strong>:</p>
<ul>
<li><strong>🍃 Environmental</strong>: optimizing automated test suites and improving cloud efficiency to directly reduce the software’s carbon footprint.</li>
<li><strong>🤝 Social</strong>: promoting digital inclusion. Efficient code consumes fewer resources and performs better on older devices, helping combat planned obsolescence.</li>
<li><strong>⚖️ Governance</strong>: generating technical metrics and transparent reports on energy consumption, facilitating compliance with green audits.</li>
</ul>
<p>Organizations that align their quality lifecycle management processes with ESG goals without compromising speed, cost, or performance will gain a <strong>decisive advantage in a market increasingly saturated with high-performance CSR requirements</strong>.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Main objectives</h2>
<p>The main objectives of this philosophy are <strong>resource optimization, operational efficiency, and profitability</strong>. Among the goals pursued are <strong>reducing environmental impact</strong> by lowering consumption, <strong>reducing waste and emissions</strong>, and <strong>optimizing the use of material and computational resources</strong>.</p>
<p>We know that every time a test suite runs (especially in the cloud or on large servers), electricity is consumed, which generates a carbon footprint. Reducing this impact through <strong>selective testing</strong> and <strong>efficient test code</strong> should be a key goal.</p>
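<p>Selective testing is easy to illustrate. A minimal sketch, under assumed inputs (the test-to-module dependency map here is hypothetical; real test-impact-analysis tooling would derive it from coverage or build metadata), of running only the tests affected by a change:</p>

```python
def select_tests(changed_modules, test_deps):
    """Return only the tests whose dependency set intersects the change set.
    `test_deps` maps test name -> modules it exercises."""
    changed = set(changed_modules)
    return sorted(t for t, deps in test_deps.items() if changed & set(deps))

# Hypothetical dependency map for a small suite.
test_deps = {
    "test_checkout": ["cart", "payments"],
    "test_search":   ["search"],
    "test_profile":  ["users"],
}
print(select_tests(["payments"], test_deps))  # ['test_checkout']
```

<p>Instead of burning CPU (and CO₂) on the full suite for every commit, only one of three tests runs for a payments-only change; the savings compound across hundreds of daily pipeline executions.</p>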
<p>Another objective of Green QA is <strong>alignment with ESG</strong>, providing environmental metrics:</p>
<ul>
<li><strong>Enabling auditable sustainability reports</strong> (test cycle consumption, etc.)</li>
<li><strong>Providing insights for sustainable data governance</strong> (reducing data duplication in TDM test environments), which lowers physical storage requirements in data centers.</li>
<li><strong>Providing audit trails</strong> proving that quality processes comply with the organization’s “Net Zero” policies.</li>
</ul>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Impact areas: where quality becomes green</h2>
<p>To achieve Green QA objectives, we must <strong>intervene in key digital assets</strong>. It’s not only about verifying that software works — it’s about <strong>ensuring it is sustainable</strong>. To do so, we must act in the following areas:</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Validation of digital products and processes</h3>
<ul>
<li><strong>Software lifecycle analysis (S-LCA)</strong>: QA no longer only validates the “Go-Live”; it audits environmental impact from development and testing through deployment and eventual software decommissioning. Its role is to ensure that energy consumption calculations at each stage are accurate and realistic.</li>
<li><strong>Code sustainability verification</strong>: implementing protocols to measure the “energy density” of functions. QA performs efficiency tests to prevent bloatware and ensure the product does not force user hardware obsolescence.</li>
</ul>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Sustainability audits in infrastructure</h3>
<ul>
<li><strong>Cloud provider validation</strong>: QA verifies that the infrastructure hosting the software meets energy-efficiency benchmarks such as PUE (Power Usage Effectiveness) and holds renewable energy certifications. We don’t just test in the cloud — we audit that the cloud is green.</li>
<li><strong>Optimization of the digital supply chain</strong>: reviewing third-party libraries and dependencies. Green QA detects inefficient dependencies that consume resources in the background without delivering value.</li>
</ul>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Validation of eco-design in software</h3>
<ul>
<li><strong>Repairability and modularity criteria</strong>: validating that code is structured so it can be easily maintained and updated without refactoring the entire system, saving unnecessary computation cycles.</li>
<li><strong>Data transfer efficiency</strong>: specific tests to reduce the weight of API requests and data traffic, directly lowering electricity consumption in data centers and telecom networks.</li>
<li><strong>Carbon-aware load testing</strong>: measuring not only how many users the system supports but also how much CO₂ the server emits under that load.</li>
</ul>
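<p>Carbon-aware load testing boils down to simple arithmetic once energy is measured. A minimal sketch, with hypothetical figures (the energy reading, grid intensity, and request count below are illustrative, not measured data):</p>

```python
def co2_grams(energy_kwh, grid_intensity_g_per_kwh):
    """CO2-equivalent emitted by a test run: energy consumed multiplied
    by the carbon intensity of the grid powering the servers."""
    return energy_kwh * grid_intensity_g_per_kwh

def co2_per_request(energy_kwh, grid_intensity_g_per_kwh, requests):
    """Carbon-aware load-test KPI: grams of CO2e per served request."""
    return co2_grams(energy_kwh, grid_intensity_g_per_kwh) / requests

# Hypothetical run: 2.5 kWh measured during the load test,
# 250 gCO2e/kWh grid intensity, 1,000,000 requests served.
print(co2_grams(2.5, 250))                   # 625.0 g for the whole run
print(co2_per_request(2.5, 250, 1_000_000))  # 0.000625 g per request
```

<p>The per-request figure is the one worth tracking over time: it lets two load-test runs be compared even when traffic volumes differ.</p>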
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Ensuring ESG data integrity</h3>
<p>For a sustainability report to be valid, the data must be highly <strong>accurate</strong>. Green QA becomes the <strong>technical auditor</strong> ensuring the integrity of every metric — a role that requires proper training in order to certify the information provided.</p>
<ol>
<li><strong>Validation of emissions KPIs</strong></li>
</ol>
<p>The QA profile must ensure that calculation algorithms and data sources reflect the real energy consumption of the digital ecosystem:</p>
<ul>
<li><strong>Direct emissions</strong>: first level. Validation of data from proprietary infrastructure and on-premise servers.</li>
<li><strong>Purchased energy</strong>: second level. Verification of electricity consumption reports from contracted data centers and their energy mix (renewable vs. fossil).</li>
<li><strong>Value chain</strong>: third level. The biggest challenge. QA audits the efficiency of third-party APIs and the energy consumption generated by the software on end-user devices.</li>
</ul>
<ol start="2">
<li><strong>Accuracy and reliability of reports</strong></li>
</ol>
<p>Having data is not enough — it must be correct. Techniques are applied to prevent bias in sustainability reports:</p>
<ul>
<li><strong>Stress testing</strong> of carbon footprint calculation models.</li>
<li><strong>Validation</strong> to ensure there are no duplicated emissions counted across departments.</li>
</ul>
<ol start="3">
<li><strong>Traceability and auditing (Data Lineage)</strong></li>
</ol>
<p>Implementation of traceability tests to ensure that every data point in the annual report can be traced back to its technical origin (server logs, CPU metrics, etc.). If an auditor asks where a number comes from, Green QA has the documented answer.</p>
<ol start="4">
<li><strong>Consistency in ESG disclosure</strong></li>
</ol>
<p>Ensuring that the data published on the website, in the app, and in the legal <strong>CSRD report</strong> are identical. Automated cross-validation processes should be implemented to avoid discrepancies that could result in legal penalties.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Application of standards and regulations</h2>
<p>Several frameworks and standards support the implementation of these “green” processes. Here we briefly review them; in future articles we will explain how they can be applied strategically and methodologically:</p>
<ul>
<li><strong>ISO frameworks (14001 and 50001)</strong>: ensuring QA processes are documented to pass external environmental and energy management audits.</li>
<li><strong>CSRD (Corporate Sustainability Reporting Directive)</strong>: the new European regulation ensuring legal sustainability reporting requirements are technically fulfilled.</li>
<li><strong>EU Taxonomy</strong>: validating that company activities are correctly classified as “green” according to EU technical criteria.</li>
<li><strong>GHG Protocol (Greenhouse Gas Protocol)</strong>: establishing the standard methodology for calculating carbon emissions derived from the software lifecycle. QA must validate energy consumption data collection in Scope 2 (server/cloud energy) and Scope 3 (third-party cloud services and end-user software usage), ensuring emission factors are accurate for carbon footprint reports.</li>
</ul>
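<p>The GHG Protocol scoping described above can be sketched as a simple aggregation. This is an illustrative calculation only — the emission factor and Scope 3 figures below are hypothetical, and a real report would follow the protocol's full methodology:</p>

```python
def total_emissions(scope2_kwh, scope2_factor, scope3_sources):
    """Aggregate Scope 2 (purchased energy) and Scope 3 (value chain)
    emissions in kg CO2e. `scope3_sources` maps a source name to its
    already-computed kg CO2e contribution."""
    scope2 = scope2_kwh * scope2_factor
    scope3 = sum(scope3_sources.values())
    return {"scope2_kg": scope2, "scope3_kg": scope3, "total_kg": scope2 + scope3}

report = total_emissions(
    scope2_kwh=12_000,   # annual server/cloud energy (hypothetical)
    scope2_factor=0.25,  # kg CO2e per kWh for the contracted energy mix
    scope3_sources={"third_party_apis": 800.0, "end_user_devices": 1_500.0},
)
print(report["total_kg"])  # 5300.0 kg CO2e
```

<p>What QA validates here is not the arithmetic but the inputs: that the kWh readings trace back to real meter or cloud-billing data, and that no source is counted in two scopes at once.</p>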
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Conclusion</h2>
<p>We have seen how, within the world of software quality, there is an aspect that is rarely considered yet has a significant impact.</p>
<p>Applying GQA not only <strong>improves sustainability</strong> but also <strong>enhances efficiency</strong>, which ultimately reduces costs in both processes and product development.</p>
<p>All these processes are supported by a set of European regulations and their transposition into Spanish law, making them even more relevant since non-compliance can result in financial penalties for companies.</p>
<p>Ultimately, <strong>Green QA is about doing our part to improve life through quality practices.</strong></p>

            ]]>
        </content:encoded>
    </item><item>
        <dc:creator>
            <![CDATA[ Andrés Macarrilla ]]>
        </dc:creator>
        <title>From PoC to Production: Many Proofs of Concept, but Few in Production</title>
        <link>https://en.paradigmadigital.com/techbiz/from-poc-to-production/</link>
        <pubDate>Tue, 17 Mar 2026 07:00:00 GMT</pubDate>
        <guid isPermaLink="true">https://en.paradigmadigital.com/techbiz/from-poc-to-production/</guid>
        <description>AI only generates value when it reaches production and becomes integrated into real products and processes. Without platforms that automate deployments, data governance, and security, models remain experiments. The difference between an interesting PoC and a competitive advantage lies in the ability to bring AI into production. The question is: how? We explain it in this post.
</description>
        <content:encoded>
            <![CDATA[
                <p>We have spent more than two very intense years with Artificial Intelligence everywhere and at every level. This has triggered and driven <strong>an enormous number of proof of concepts</strong>, applying <strong>different technologies to different use cases</strong> to demonstrate potential returns for businesses and customers.</p>
<p>All of this has worked relatively well. Beyond the constant change and evolution across the entire technological landscape related to AI, <strong>there are no major problems until the moment comes to make it production-ready</strong>.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">The reality of AI in production</h2>
<p>As we already mentioned in the first post of this three-part series, <a href="https://en.paradigmadigital.com/techbiz/2026-the-year-ai-platform/" target="_blank">the AI Platform is no longer an option but a necessity</a>. But <strong>why is this need so urgent?</strong> In my opinion, it’s because we are losing money, time, and opportunities every single day.</p>
<p>Most companies are trapped in a discouraging <strong>“Groundhog Day” loop</strong>: <strong>proofs of concept tend to succeed, but moving them into production is painful</strong>. The reality is that Data Scientists perform magic in their notebooks and the results are promising, but when trying to move those models into the real world, they hit a wall of bureaucracy, tickets, lack of standardized tools, missing automation, and security team bottlenecks.</p>
<p>It becomes a nightmare. And that nightmare translates into <strong>very concrete bottlenecks and problems</strong> that the AI Platform must eliminate.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Problem 1: self-service and the pain of TicketOps</h2>
<p>Let’s put ourselves in the shoes of the AI team. They have just trained a model that could save millions, and all that remains, surprisingly, is <strong>to deploy it</strong>.</p>
<p>What happens next? In 90% of cases, they need to contact an infrastructure team, open a ticket, wait three days to get a namespace assigned, request network permissions, and hope for access to a feature database. This is what we call <strong>&quot;TicketOps&quot;</strong>, and it is a major problem for Time-to-Market.</p>
<p><strong>⚠️ Important:</strong> this process is necessary. We should not underestimate the potential security or regulatory compliance issues that could arise.</p>
<p>However, the situation is becoming even worse. With the rise of LLMs, the problem has grown significantly. Deploying an LLM is very different from deploying a simple regression model. It may require GPUs, specialized storage, and APIs that manage the complexity of prompting and memory usage. Without a self-service tool that allows this to be deployed in minutes, it simply does not happen. The democratization of LLMs ends right there.</p>
<p>Previously, a Data Scientist could request a server and that solved everything. Today, it is necessary to deploy a microservice that consumes an LLM, uses a <strong>vector database</strong> for long-term memory, and also requires <strong>dedicated access to a GPU node</strong> in the cluster. If that process cannot be reduced to a single command in our IDP, we are creating a <strong>technical barrier</strong> for every new AI idea — directly impacting the business and, therefore, our customers.</p>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/autoservicio_ticketops_8e39da2e00.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/autoservicio_ticketops_8e39da2e00.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/autoservicio_ticketops_8e39da2e00.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/autoservicio_ticketops_8e39da2e00.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/autoservicio_ticketops_8e39da2e00.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="Self-service and TicketOps" title="Self-service and TicketOps"/></article>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Problem 2: compliance and regulatory requirements</h2>
<p>When we talk about AI, <strong>the risk is enormous</strong>. A model that makes biased decisions or operates with outdated (or private) data can cost a fortune in fines or even <strong>destroy a company’s reputation</strong>.</p>
<p>Here, the problem is twofold:</p>
<ol>
<li><strong>Quality</strong></li>
</ol>
<p>The AI Platform must guarantee that <strong>training data is consistent</strong>. If we do not standardize how data is ingested and versioned, the production model will inevitably drift. It is <strong>Garbage In, Garbage Out</strong> taken to the extreme.</p>
<p>The consistency problem stems from the absence of a <strong>Feature Store</strong>. Data Science teams often compute a feature during training (say, normalizing a column by its mean and standard deviation), but the code that computes that same feature in real time in production is written separately, so the two can silently diverge.</p>
<p>A <strong>Feature Store</strong>, managed by the platform, guarantees that the code used to compute a feature is <strong>the same during training and serving (production)</strong>. It is the only way to ensure mathematical consistency.</p>
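<p>To make this tangible, here is a minimal sketch of the idea in plain JavaScript (the function and statistic names are illustrative, not a real Feature Store API): the transformation is defined once, together with its stored statistics, and both the training pipeline and the serving path call that single definition.</p>
<pre><code class="language-javascript">// One canonical definition of the feature: z-score normalization.
// Illustrative sketch only; the names do not come from any real Feature Store.
function zScore(value, mean, stdDev) {
  return (value - mean) / stdDev;
}

// Statistics computed at training time and stored with the feature.
const stats = { mean: 50, stdDev: 10 };

// Training and serving call the SAME function with the SAME stored
// statistics, so the two results cannot diverge.
const trainingFeature = zScore(65, stats.mean, stats.stdDev); // 1.5
const servingFeature = zScore(65, stats.mean, stats.stdDev);  // 1.5

console.log(trainingFeature === servingFeature); // true
</code></pre>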
<ol start="2">
<li><strong>Security</strong></li>
</ol>
<p>Who has access to the model? How are changes in code and datasets tracked? Without <strong>traceability and security policies by design</strong> (integrated into the <a href="https://en.paradigmadigital.com/techbiz/understanding-golden-paths-practical-guide/" target="_blank">Golden Paths</a>), auditing becomes impossible.</p>
<p>And honestly, the idea of a Data Science team manually configuring firewall rules gives me chills. It’s a disaster waiting to happen.</p>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/cumplimiento_compilance_normativo_16cc14c0c1.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/cumplimiento_compilance_normativo_16cc14c0c1.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/cumplimiento_compilance_normativo_16cc14c0c1.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/cumplimiento_compilance_normativo_16cc14c0c1.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/cumplimiento_compilance_normativo_16cc14c0c1.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="compliance and regulatory requirements" title="Compliance / regulatory requirements"/></article>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Problem 3: Time-to-Market (TTM) pressure</h2>
<p>This is the point that matters most to the business. <strong>What is the value of having an innovative model if it takes six months to reach the customer?</strong> Competitors are not waiting, and Time-to-Market becomes the most critical business metric.</p>
<p>Today, platform teams face enormous pressure to <strong>operationalize AI</strong> (that is, to make it reliable and fast). They must move at the speed of innovation, not at the speed of infrastructure.</p>
<p>The solution, and this is where <a href="https://en.paradigmadigital.com/techbiz/how-to-release-platform-as-a-product/" target="_blank">Platform Engineering</a> starts to feel almost like science fiction, is moving from TicketOps to <strong>Intent-to-Infrastructure</strong>.</p>
<p>In other words, giving a Data Scientist the ability to tell the platform: <em>&quot;I want an environment to train a classification model with a specific dataset.&quot;</em></p>
<p>The platform automatically <strong>translates that intent into all the required infrastructure</strong> (Kubernetes, networking, secure storage) in a matter of minutes.</p>
<p>This removes the biggest bottleneck: <strong>friction and dependency between teams</strong>. It allows us to move from idea to production at the speed the business demands.</p>
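<p>As a thought experiment, the translation step could look like the following JavaScript sketch. Every field and resource name here is hypothetical; the point is the shape of the contract: a small declaration of intent goes in, a full infrastructure plan comes out.</p>
<pre><code class="language-javascript">// Hypothetical Intent-to-Infrastructure sketch; not a real platform API.
// The Data Scientist declares WHAT they need:
const intent = {
  goal: 'train',
  modelType: 'classification',
  dataset: 'customer-churn-v3',
};

// The platform derives HOW: Kubernetes, networking and secure storage.
function planFor(intent) {
  const plan = {
    kubernetes: { namespace: 'ml-' + intent.dataset },
    networking: ['default-deny', 'egress-to-feature-store'],
    storage: { bucket: 'secure-' + intent.dataset, encrypted: true },
  };
  if (intent.goal === 'train') {
    plan.kubernetes.job = 'train-' + intent.modelType;
  }
  return plan;
}

const plan = planFor(intent);
console.log(plan.kubernetes.job); // 'train-classification'
</code></pre>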
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/presion_time_to_market_237899e3e0.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/presion_time_to_market_237899e3e0.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/presion_time_to_market_237899e3e0.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/presion_time_to_market_237899e3e0.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/presion_time_to_market_237899e3e0.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="from TicketOps (bottleneck) to Intent-to-Infrastructure (fast, automated and self-service)" title="Intent-to-Infrastructure"/></article>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Conclusion</h2>
<p>If we add up bureaucracy and rigid, ticket-based processes, the data quality problems we have just seen, and relentless Time-to-Market pressure, <strong>we quickly understand that AI without a platform becomes an expensive and frustrating research project, not a competitive advantage</strong>.</p>
<p>An AI platform is the only tool that lets organizations maintain <strong>control</strong> without sacrificing <strong>speed</strong>.</p>
<p>We have seen the real problems and bottlenecks that slow down the ability of AI to deliver value to businesses and customers. In the final post of this series, we will stop talking about pain and start focusing on <strong>solutions</strong>.</p>
<p>We will explore <strong>what the industry is doing</strong>, with concrete examples and use cases, and define the <strong>first steps of a practical roadmap</strong> so your team can start building that model factory today.</p>

            ]]>
        </content:encoded>
    </item><item>
        <dc:creator>
            <![CDATA[ Iria Calvelo ]]>
        </dc:creator>
        <title>From Vision to Action: Build Your First Roadmap</title>
        <link>https://en.paradigmadigital.com/organizational-transformation-rev/from-vision-to-action-build-first-roadmap/</link>
        <pubDate>Tue, 10 Mar 2026 07:00:00 GMT</pubDate>
        <guid isPermaLink="true">https://en.paradigmadigital.com/organizational-transformation-rev/from-vision-to-action-build-first-roadmap/</guid>
        <description>Have you ever wondered what you should do next, what path to follow, or how to move forward while creating a digital product? If so, today we’ll tell you what you need: a roadmap! In this post, we’ll explain what a roadmap is, and what it isn’t, and you’ll also learn how to build one, always keeping the focus on achieving your product vision and delivering value to your users.
</description>
        <content:encoded>
            <![CDATA[
                <p><strong>Have you ever, while creating a digital product, wondered what you should do next, what path to follow, or how to move forward?</strong> If so, today we’ll tell you what you need: <strong>a roadmap!</strong> In this post, we’ll explain what a roadmap is — and what it isn’t — and you’ll also learn how to build one, always keeping the focus on <strong>achieving your product vision and delivering value</strong> to your users.</p>
<h2 class="block block-header h--h30-15-400 left  ">Before we begin…</h2>
<p>Throughout this post we will use some concepts that are very common when talking about digital product development, but don’t worry if you’re not familiar with all of them. Here are a few definitions so you can get as much value as possible out of what follows:</p>
<p><strong>Product vision</strong>: what we want to achieve with our product and how we see it in the future. It’s an inspiring statement that indicates where we want to go.</p>
<p><strong>Product strategy</strong>: the <strong>how</strong> we will achieve our vision. Strategy includes aspects such as differentiation from competitors, the type of users we are creating the product for, and the value proposition. Strategy aligns what we can do with what we should do.</p>
<p><strong>Product backlog</strong>: a prioritized list of the features we need to develop to create solutions for our users’ problems and needs — in other words, the product.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">What is a roadmap</h2>
<p>A roadmap is the <strong>plan to follow</strong> in the development of your product in order to <strong>achieve the vision and meet the objectives</strong>.</p>
<p>Contrary to what some may think, <strong>a roadmap and a project plan are two different tools</strong>, although both are used in software product management and can complement each other.</p>
<p>While a <strong>project plan</strong> is very <strong>operational</strong> and reflects aspects such as scope, budget, dependencies, and tasks in great detail, a roadmap is much more strategic and focuses on <strong>how to achieve the product vision</strong>.</p>
<p>Here are some of the <strong>most significant characteristics</strong> of roadmaps:</p>
<ul>
<li><strong>It is the plan to achieve the vision.</strong> It is not a list of tasks but an action plan that guides the path toward our vision.</li>
<li><strong>Flexible and adaptable tool.</strong> It is dynamic and evolves over time as we receive feedback from users, allowing us to adapt our strategy to achieve the vision.</li>
<li><strong>A communication tool.</strong> It helps align different stakeholders through transparency, ensuring that everyone shares a common vision.</li>
<li><strong>Connects business goals with the plan to achieve them.</strong> The initiatives included in the roadmap, individually or together, aim to achieve the objectives of the product and the company.</li>
</ul>
<p>We can say that a <strong>roadmap</strong> is how we present our <strong>strategy</strong> to achieve our <strong>vision</strong>.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">What a roadmap is NOT</h2>
<p>Now that we’ve discussed what a roadmap is, let’s also clarify what it is not:</p>
<ul>
<li><strong>It is not a list of tasks.</strong> Our roadmap will not include every task we need to develop, but rather a set of initiatives that will guide product or project development.</li>
<li><strong>It is not a timeline.</strong> Although a roadmap may include timeframes and dates, they are more flexible and can change if the strategy evolves based on feedback. A roadmap should always be agile, flexible, and adaptable.</li>
<li><strong>It is not an execution plan.</strong> It does not explain how to carry out each initiative; instead, it focuses on what needs to be achieved.</li>
</ul>
<p>For example, an <strong>initiative in the roadmap</strong> could be improving the onboarding process. The <strong>tasks required to achieve it</strong> might include adding an interactive demo or integrating an assistant to guide users through the process. These tasks might have specific dates, but those would not appear directly in the roadmap.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Steps to create a roadmap</h2>
<p>To <strong>start working on your roadmap</strong>, consider the following:</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Product vision</h3>
<p>The first thing you need to do is ensure that <strong>you clearly understand your product vision</strong>. If not, try asking yourself the following questions to help define it:</p>
<ul>
<li>Why does our product exist?</li>
<li>What problems does it solve or what needs does it cover?</li>
<li>Who will use it?</li>
<li>What would have to happen for us to consider it a success?</li>
</ul>
<p>For example, Slack’s vision is: <em>Make work life simpler, more pleasant and more productive.</em></p>
<p>Here is a <a href="https://www.romanpichler.com/tools/product-vision-board/" target="_blank">template to help you develop your product vision</a>.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Themes</h3>
<p>These are <strong>initiatives or specific projects that help achieve business goals</strong>. These themes will later be broken down into more specific tasks such as user stories.</p>
<p>Example: “Redesign the purchase funnel.”</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Choosing the type of roadmap</h3>
<p>There are several ways to design your roadmap, which will determine the next steps you need to follow. Here are two examples to help you get an idea — although many more exist.</p>
<p><strong>Now–Next–Later roadmap</strong></p>
<p>If we focus on the flexibility of roadmaps, we can talk about a well-known type: <strong>Now, Next, Later</strong>.</p>
<p>In this type of roadmap, no specific timeframes are defined. Instead, we focus on <strong>what we are currently working on</strong> (Now), <strong>what we will work on next</strong> (Next), and <strong>what we plan to do in the future</strong> (Later).</p>
<p>Here is a visual example of a Now–Next–Later roadmap 👇</p>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/roadmap_now_next_later_113cebf3dd.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/roadmap_now_next_later_113cebf3dd.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/roadmap_now_next_later_113cebf3dd.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/roadmap_now_next_later_113cebf3dd.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/roadmap_now_next_later_113cebf3dd.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="example of a now next later roadmap" title="Now–Next–Later"/></article>
<p>As you can see, each theme is represented, and within each column (Now / Next / Later), projects are shown in the color corresponding to their initiative.</p>
<p>Additionally, below the roadmap we display the product vision, since it is essential to keep it in mind when building the roadmap.</p>
<p><strong>Timeline-based roadmap</strong></p>
<p>These roadmaps use a <strong>time scale</strong> or timeframe, which can vary depending on the needs of your product or project: <strong>3 months</strong> (short term), <strong>6 months</strong> (mid term), <strong>12 months</strong> (long term), or even <strong>per sprint</strong> if you work with Scrum.</p>
<p>Our advice is <strong>not to plan your roadmap more than six months ahead</strong>, because the idea behind a roadmap is agility. You should be able to adapt it based on feedback and new needs that arise.</p>
<p>Here is a visual example of a <strong>timeline-based roadmap</strong> 👇</p>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img src="https://www.paradigmadigital.com/assets/img/defaults/lazy-load.svg"
          data-src="https://www.paradigmadigital.com/assets/img/resize/small/timeline_based_roadmap_765df01d25.png"
          data-srcset="https://www.paradigmadigital.com/assets/img/resize/huge/timeline_based_roadmap_765df01d25.png 1920w,https://www.paradigmadigital.com/assets/img/resize/big/timeline_based_roadmap_765df01d25.png 1280w,https://www.paradigmadigital.com/assets/img/resize/medium/timeline_based_roadmap_765df01d25.png 910w,https://www.paradigmadigital.com/assets/img/resize/small/timeline_based_roadmap_765df01d25.png 455w"
          class="lazy-img"  
                  sizes="(max-width: 767px) 80vw, 75vw"
                  alt="example of a timeline based roadmap" title="Timeline-based roadmap"/></article>
<p>In this example, we see a <strong>roadmap with four different initiatives</strong> that will be developed during Q2. Each initiative or theme has a different color indicating which project or epic it belongs to.</p>
<p>Just like in the previous type of roadmap, the vision is included to ensure it is always considered during roadmap development.</p>
<p>In this case, the <strong>months</strong> are also included, helping provide transparency and alignment with stakeholders and the team.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Key elements</h3>
<p>As you’ve seen, regardless of the type of roadmap that best fits your product or project, <strong>its core elements are the same</strong>:</p>
<ul>
<li><strong>Time scale</strong>, whether a timeline or a Now–Next–Later format.</li>
<li><strong>Themes or initiatives</strong> at a high level.</li>
<li><strong>Projects or epics</strong> for each initiative.</li>
</ul>
<p>And don’t forget to <strong>reflect your product vision</strong>!</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Next steps</h2>
<p>Although we haven’t mentioned it earlier in this post, there are <strong>other important aspects</strong> when creating a roadmap:</p>
<ul>
<li><strong>Prioritization.</strong> We cannot do everything, and certainly not all at once. That’s why we must make decisions and prioritize initiatives. There are different techniques and frameworks that can help with this, as we explained in this post 👉 <a href="https://www.paradigmadigital.com/transformacion-organizacional-rev/desbloqueando-potencial-priorizacion-frameworks-mas-importantes/" target="_blank">Unlocking the potential of prioritization: the most important frameworks</a>.</li>
<li><strong>Communication.</strong> The main value of building a roadmap is that it acts as a tool for alignment and transparency. It helps everyone understand the focus and why certain decisions are made.</li>
<li><strong>Connection with the business.</strong> Every initiative must be linked to a business objective; otherwise, we risk investing time in developments that generate no real value. To achieve this connection, it’s important to stop thinking about outputs and start focusing on outcomes — generating real impact by solving specific problems and needs.</li>
</ul>
<p>There is still much more to say about this last point, which we’ll cover in future posts 😉.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Conclusion</h2>
<p>A roadmap is a <strong>strategic tool</strong> that aligns product vision with the plan to achieve it.</p>
<p>Also keep in mind that <strong>no roadmap is better than another</strong>. You simply need to adapt it to the specific needs of each project and continue <strong>iterating</strong> until you find the roadmap that best fits your product.</p>
<p>A roadmap is a compass. <strong>Build yours!</strong></p>

            ]]>
        </content:encoded>
    </item><item>
        <dc:creator>
            <![CDATA[ Eider Ogueta ]]>
        </dc:creator>
        <title>Angular Signals: The Evolution of Reactivity and Change Detection in Angular</title>
        <link>https://en.paradigmadigital.com/dev/angular-signals-evolution-reactivity-change-detection/</link>
        <pubDate>Tue, 10 Mar 2026 07:00:00 GMT</pubDate>
        <guid isPermaLink="true">https://en.paradigmadigital.com/dev/angular-signals-evolution-reactivity-change-detection/</guid>
        <description>Angular Signals changes the reactivity model in Angular. Instead of traversing the entire component tree on every event, the framework updates only the bindings that depend on the state that has changed. This improves performance and simplifies state management, especially in large applications. Here’s how it works.
</description>
        <content:encoded>
            <![CDATA[
                <p><strong>Reactivity in Angular</strong> is the process by which the framework determines that the <strong>application state has changed</strong> and, if necessary, refreshes the view or the DOM. If you’ve read the article <a href="https://www.paradigmadigital.com/dev/estrategia-deteccion-cambios-la-magia-de-angular/" target="_blank">Change detection strategy, the magic of Angular</a>, you’ll know that Angular bases this detection on a <strong>component tree</strong> and mechanisms such as <strong>NgZone</strong> or <strong>ChangeDetectionStrategy</strong> to control when and how the UI is updated.</p>
<p>Today we’ll go one step further: we’ll see how <strong>Angular Signals</strong>, the new <strong>reactivity API in Angular</strong> introduced experimentally in <strong>Angular 16</strong>, makes it possible to improve UI efficiency, update only what is necessary, and <strong>simplify state management in Angular components</strong>, optimizing the performance of your applications.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">The problem with classic change detection in Angular</h2>
<p>In the <strong>traditional approach</strong>:</p>
<ul>
<li>Each component has its own change detector.</li>
<li>Angular traverses the component tree every time an asynchronous event occurs (click, timer, HTTP request).</li>
<li>In applications with many components or large lists, this creates <strong>unnecessary change detection cycles</strong>, affecting performance.</li>
</ul>
<p>For example, in the previous article we created a <strong>clock that updates every second and a table of users</strong>. The clock update triggered a full <strong>Angular change detection cycle</strong>, recalculating random values in each row, even though the data had not changed.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Angular Signals: how they improve reactivity</h2>
<p>With Angular Signals:</p>
<ul>
<li>Each signal represents a <strong>reactive value</strong>.</li>
<li>Angular updates <strong>only the bindings that depend on that signal</strong>.</li>
<li>Derived values (computed) are recalculated automatically.</li>
<li>There is no need for ChangeDetectionStrategy.OnPush or ChangeDetectorRef to optimize performance.</li>
</ul>
<p>This makes Signals a <strong>key tool for improving performance in Angular</strong>, especially in <strong>large applications</strong> with lists and complex components.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Signals, effects, and derived values</h2>
<p>To better understand it, let’s see how Signals work in Angular.</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Create a signal</h3>
<pre><code class="language-javascript">import { signal } from '@angular/core';

const counter = signal(0);
</code></pre>
<ul>
<li>A signal represents a reactive value.</li>
<li>To read it, it is invoked as a function: counter().</li>
<li>To write to it, .set() or .update() is used.</li>
</ul>
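<p>This is what reading and writing look like in practice (these are the actual Signals APIs from @angular/core):</p>
<pre><code class="language-javascript">import { signal } from '@angular/core';

const counter = signal(0);

counter.set(5);                     // replace the value directly
counter.update(value => value + 1); // derive it from the previous value

console.log(counter()); // 6
</code></pre>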
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Effects (effect)</h3>
<p>Effects are functions that run automatically when the signals they use <strong>change</strong>.</p>
<pre><code class="language-javascript">import { signal, effect } from '@angular/core';
const counter = signal(0);
effect(() =&gt; {
  console.log(`The counter value is ${counter()}`);
});
counter.set(1); // The effect runs and prints “The counter value is 1”
</code></pre>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Computed values</h3>
<p>You can compute a value from one or more signals without having to rewrite additional logic.</p>
<pre><code class="language-javascript">import { computed } from '@angular/core';

const double = computed(() =&gt; counter() * 2);
</code></pre>
<ul>
<li>When counter changes, Angular automatically recalculates double.</li>
<li>computed is <strong>read-only</strong> and cannot be modified manually.</li>
</ul>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Advanced features of Angular Signals</h2>
<p>In addition to what we have seen, Angular Signals includes several <strong>important features</strong>, according to the official documentation:</p>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Computed is lazy and memoized</h3>
<ul>
<li>It does not execute until someone reads it.</li>
<li>It memorizes its last value.</li>
<li>It only recalculates if one of its dependencies changes.</li>
</ul>
<pre><code class="language-javascript">const counter = signal(0);
const double = computed(() =&gt; {
  console.log('Recalculating...');
  return counter() * 2;
});
</code></pre>
<ul>
<li>If you never call double(), the function is not executed.</li>
</ul>
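<p>We can check both properties with a small self-contained example (re-declaring the signals and counting how many times the computation actually runs):</p>
<pre><code class="language-javascript">import { signal, computed } from '@angular/core';

const counter = signal(0);
let runs = 0;
const double = computed(() => {
  runs++; // counts how many times the computation actually executes
  return counter() * 2;
});

double();              // first read: computes (runs = 1)
double();              // second read: memoized, no recomputation
counter.set(2);        // dependency changes: marks double as stale
console.log(double()); // 4 (recomputed lazily on this read, runs = 2)
</code></pre>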
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Dynamic dependency tracking</h3>
<p>Angular only tracks signals that are actually read within a computed or effect.</p>
<pre><code class="language-javascript">const show = signal(false);
const counter = signal(0);

const conditional = computed(() =&gt; show() ? counter() : 0);
</code></pre>
<ul>
<li>As long as show() is false, conditional <strong>does not depend on counter</strong>.</li>
<li>This optimizes performance and avoids unnecessary recalculations.</li>
</ul>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">Signals in templates</h3>
<pre><code class="language-html">{{ counter() }}
</code></pre>
<ul>
<li>Angular automatically detects that the template depends on the signal.</li>
<li>The UI updates only when the value changes.</li>
<li>You don't need the async pipe, ChangeDetectorRef, or markForCheck.</li>
</ul>
<h3 class="block block-header h--h20-175-500 left  add-last-dot">untracked(): avoid reactive dependencies</h3>
<p>Allows you to read a signal without registering it as a dependency within an effect or computed.</p>
<pre><code class="language-javascript">effect(() =&gt; {
  console.log(`User set to ${currentUser()} and the counter is ${untracked(counter)}`);
});
</code></pre>
<ul>
<li>The effect runs only when <strong>currentUser</strong> changes.</li>
<li>Changes in <strong>counter</strong> do not trigger the effect, but we can still read its current value.</li>
<li>This is useful when you want to read data incidentally without turning it into a trigger for reactivity.</li>
</ul>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Live practical example: clock and user table with Angular Signals</h2>
<article class="block block-image  -inline-block -like-text-width -center lazy-true"><img data-src="https://www.paradigmadigital.com/assets/cms/angular_caso_real_a01d6014fb.gif" class="lazy-img" title="Practical example" alt="Real case Angular Signals"></article>
<p>Instead of showing all the code here, we’ve prepared an <strong>interactive example on StackBlitz</strong> so you can see <strong>Angular Signals</strong> working in real time.</p>
<p>In this example you will observe:</p>
<ul>
<li>A <strong>clock</strong> that updates every second.</li>
<li>A <strong>user table</strong> where each row depends on a <strong>signal</strong>, avoiding unnecessary updates.</li>
<li>How the UI updates <strong>only when the relevant state changes</strong>, improving performance compared to classic change detection.</li>
</ul>
<p>Try the live example on <a href="https://stackblitz.com/edit/stackblitz-starters-mpxjrrta?file=src%2Fuser%2Frow%2Frow.component.ts" target="_blank">StackBlitz</a>.</p>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Visual comparison</h2>
<table>
<thead>
<tr>
<th style="text-align:center">Before (Default CD)</th>
<th style="text-align:center">Now (Signals)</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align:center">Full change detection cycle every second</td>
<td style="text-align:center">Only what depends on a signal is updated</td>
</tr>
<tr>
<td style="text-align:center">randomId was recalculated unnecessarily</td>
<td style="text-align:center">randomId remains stable</td>
</tr>
<tr>
<td style="text-align:center">OnPush required for optimization</td>
<td style="text-align:center">No need for OnPush or ChangeDetectorRef</td>
</tr>
<tr>
<td style="text-align:center">Performance affected with many rows</td>
<td style="text-align:center">Performance remains stable even when the clock updates</td>
</tr>
</tbody>
</table>
<h2 class="block block-header h--h30-15-400 left  add-last-dot">Conclusion</h2>
<p><strong>Angular Signals</strong> represents the evolution of the change detection model explained in <a href="https://www.paradigmadigital.com/dev/estrategia-deteccion-cambios-la-magia-de-angular" target="_blank">our previous article</a>.</p>
<p>Previously, any asynchronous event could trigger a full change detection cycle. Now, Angular updates <strong>only the bindings that depend on the signals that change</strong>, avoiding unnecessary computations and improving efficiency.</p>
<p>With Signals and derived values (computed), we can build <strong>faster, more predictable, and easier-to-maintain applications</strong>, especially as the component tree grows.</p>

            ]]>
        </content:encoded>
    </item>
</channel>
</rss>
