If you’ve been following the news from AWS re:Invent 2025, you might feel like everything starts and ends with Generative AI.
But what about the “engine room”? Where are the improvements that actually make our apps run 364.999… days a year, cut our bills, and secure our infrastructure today?
At Paradigma, we dove into the wave of AI announcements to surface the real cloud engineering gems that will power your business this year: from the raw power of the new Graviton5 to the financial relief of Database Savings Plans, and the end of the Serverless-vs-EC2 dilemma.
Get ready — in this post, we bring you the pragmatic updates: the ones that optimize costs, reduce latency, and improve developer experience. To make it easier to focus on what matters to you, we’ve tagged each announcement based on the real business value it delivers:
- 🚀 Performance: more power and speed.
- 💸 Cost savings: directly reduces your bill.
- ✨ Operational simplicity: less maintenance and overhead.
- 🛡️ Security: stronger protection and compliance.
- 👁️ Observability: centralized control and visibility.

Compute
1 AWS Lambda Managed Instances
- Value: ✨ Flexibility (Serverless + EC2) | 💸 Up to 72% cost savings (goodbye cold starts).
- Hashtags: #Serverless #AWSLambda #EC2 #CloudArchitecture
One of the major announcements from AWS at this re:Invent was the launch of Lambda Managed Instances. With a single stroke, they’ve eliminated the dilemma of having to choose between Serverless and EC2. This new model allows us to run Lambda functions on EC2 instances, even on specific architectures like Graviton4, while keeping the operational simplicity of Serverless. AWS still manages the infrastructure: provisioning, OS patches, load balancing, and auto scaling.
By using managed instances, we can run multiple requests concurrently in the same environment, eliminating “cold starts” and maximizing vCPU utilization. It also optimizes costs for steady workloads. Since the execution runs on EC2, billing no longer depends on invocation duration—compute savings plans and reserved instances apply, enabling savings of up to 72%.
Both models keep the same flat per-request fee ($0.20 per million requests), but the compute side differs significantly: Standard bills duration and memory (GB-seconds), so every extra millisecond inflates the bill. Managed Instances drops duration-based billing in favor of instance rental (EC2 pricing plus a 15% management fee), enabling reservation discounts and letting long or concurrent processes run without duration driving up your costs.
With this service, we can move high-performance and high-demand workloads to a cost-efficient serverless model, combining the flexibility of EC2 with the operational and developer advantages of Lambda.
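To make the billing difference concrete, here's a back-of-the-envelope comparison, a minimal sketch assuming illustrative prices (the $0.20-per-million request fee and ~15% management premium come from the announcement; the GB-second and hourly instance rates are made-up examples, not official AWS rates):

```python
# Back-of-the-envelope comparison of the two Lambda billing models.
# All prices are illustrative assumptions, not official AWS rates.

REQUEST_FEE = 0.20 / 1_000_000   # flat per-request fee (USD), both models
GB_SECOND_PRICE = 0.0000166667   # assumed standard duration price (USD/GB-s)

def standard_cost(requests: int, avg_ms: float, memory_gb: float) -> float:
    """Standard Lambda: requests + GB-seconds of duration."""
    duration_cost = requests * (avg_ms / 1000) * memory_gb * GB_SECOND_PRICE
    return requests * REQUEST_FEE + duration_cost

def managed_instance_cost(requests: int, hourly_instance_price: float,
                          hours: float, mgmt_fee: float = 0.15) -> float:
    """Managed Instances: requests + instance rental plus the management
    fee; duration is no longer billed at all."""
    return requests * REQUEST_FEE + hourly_instance_price * hours * (1 + mgmt_fee)

# A steady workload: 50M requests/month, 200 ms each at 1 GB,
# versus one hypothetical $0.08/h instance rented the whole month.
std = standard_cost(50_000_000, 200, 1.0)
mi = managed_instance_cost(50_000_000, 0.08, 730)
print(f"standard: ${std:,.2f}  managed: ${mi:,.2f}")
```

The steadier and more concurrent the workload, the more the fixed instance rental wins over per-millisecond billing; reservation discounts would widen the gap further.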
📜 See full technical details on AWS News.

2 Lambda Durable Functions
- Value: ✨ Simplicity in complex workflows | 💸 No cost during waiting periods.
- Hashtags: #Serverless #Workflow #DevOps #Python #NodeJS
Another innovation in serverless development is Lambda Durable Functions. Until now, orchestrating multi-step workflows (like human approvals or multi-phase payments) required state machines or building your own expensive state management layer.
With Lambda Durable Functions, you can write your business logic sequentially inside your Lambda (in Python or Node.js), and the runtime handles durability, checkpoints, and automatic retries on failure. When waiting for an external event (a callback or an approval), the function can pause execution efficiently for up to a year, with no charges during idle waiting.
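The key idea is checkpoint-and-replay: completed steps are persisted, so after a pause or failure the function replays from saved results instead of redoing work. Here's a toy pure-Python sketch of that idea; to be clear, this is not the AWS SDK, and every name in it is hypothetical:

```python
# Toy illustration of the checkpoint-and-replay idea behind durable
# functions. This is NOT the AWS SDK: every name here is hypothetical.

completed: dict[str, object] = {}   # stands in for the runtime's durable store

def step(name: str, fn):
    """Run fn once; on replay, return the checkpointed result instead."""
    if name not in completed:
        completed[name] = fn()      # first execution: do the work, checkpoint it
    return completed[name]          # replay: skip straight to the saved result

def order_workflow():
    payment = step("charge", lambda: {"charged": 100})
    approval = step("approve", lambda: True)   # in reality this could pause for months
    return step("ship", lambda: f"shipped after {payment['charged']} and {approval}")

first = order_workflow()    # executes all three steps
replayed = order_workflow() # replays from checkpoints, re-running nothing
print(first == replayed)    # → True
```

In the real service the runtime owns the durable store and the pause/resume machinery, which is exactly what lets a function wait up to a year at no cost.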
📜 See full technical details on AWS News.
3 AWS Graviton5
- Value: 🚀 Maximum performance (+25%) | 💸 Better energy efficiency and cost savings.
- Hashtags: #AWSCompute #Graviton5 #EC2 #ARM #Sustainability
AWS released Graviton5, which is not just a more powerful and more efficient CPU—it’s a statement of intent: maximum performance at the lowest cost with best-in-class energy efficiency.
It offers 25% better performance over Graviton4. It features 192 cores, a 5× larger L3 cache, and improved compute density, reducing critical latencies and sending application performance soaring. It also introduces a new security standard for workload isolation.
The new Amazon EC2 M9g instances are already in preview, and early customers (Airbnb, SAP, Siemens) report performance gains between 20% and 60%.
📜 See full technical details on AWS News.
4 EC2 X8aedz Instances (AMD EPYC)
- Value: 🚀 Extreme performance (5 GHz) | 💸 License-friendly for memory-intensive workloads.
- Hashtags: #HPC #AMD #EC2 #EDA #HighPerformance
AWS also unveiled the Amazon EC2 X8aedz instances, designed for memory-intensive workloads requiring top-tier single-thread performance.
These instances use 5th-generation AMD EPYC processors reaching 5 GHz, the highest frequency in the cloud. This results in up to 2× more compute performance than the previous generation. They offer a 32:1 memory-to-vCPU ratio, making them ideal for EDA workloads and relational databases—especially those with vCPU-based licensing models. They come with up to 8 TB of local NVMe SSD storage and bandwidth up to 75 Gbps (with EFA support) and 60 Gbps for Amazon EBS.
📜 See full technical details on AWS News.
Containers
5 Amazon EKS Capabilities
- Value: ✨ Simplified infrastructure | 🛡️ Enterprise-grade reliability for Open Source tools.
- Hashtags: #Kubernetes #EKS #ArgoCD #GitOps #PlatformEngineering
Amid all the excitement around GenAI, one announcement deserves special recognition—particularly for the many engineering teams relying on EKS: the launch of EKS Capabilities. From now on, you can run Argo CD, AWS Controllers for Kubernetes (ACK), and Kube Resource Orchestrator (KRO) as fully managed capabilities, without dealing with the nightmare of installing, patching, and maintaining all that infrastructure yourself (or leaving it to your platform team).
The best part? These capabilities run inside AWS accounts, fully abstracted from your cluster, meaning no operational burden: you just focus on deploying apps with zero friction.
This announcement is especially impactful because these are tools widely adopted by the Kubernetes community (for example, GitOps with Argo CD is used by nearly 45% of K8s users according to CNCF), now paired with AWS-grade operational reliability. Everything is integrated, scalable, and—crucially—vendor lock-in free, since you're still using native Kubernetes tooling. It’s like having a 24/7 platform team working for you. Available in all commercial regions.
📜 See full technical details on AWS News.

Databases
6 Database Savings Plans
- Value: 💸 Direct 35% bill reduction | ✨ No architecture changes required.
- Hashtags: #FinOps #AWSCostOptimization #RDS #DynamoDB #Aurora
One of the most celebrated announcements (especially by FinOps teams) was the launch of Database Savings Plans. Projects with predictable Aurora, RDS, or DynamoDB needs can reduce compute costs by 35% with zero architecture changes.
You estimate and commit to a processing capacity for one year, but you pay monthly. AWS provides a tool within Cost Management to simplify this estimation. Crucially, you don’t lose flexibility: you can switch database engines or instance families without losing the discount. A huge relief for database-heavy workloads without the fear of modifying critical resources.
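The math is straightforward: whatever fraction of usage you commit to gets the discount, and anything above it falls back to on-demand. A minimal sketch, assuming the 35% figure from the announcement and a made-up $1/hour rate:

```python
# Illustrative math for a Database Savings Plan commitment. The 35%
# discount comes from the announcement; the hourly rate is a made-up example.

def monthly_cost(on_demand_hourly: float, hours: float = 730,
                 savings_plan_discount: float = 0.35,
                 committed_fraction: float = 1.0) -> float:
    """Blend committed (discounted) and overflow (on-demand) usage."""
    committed = on_demand_hourly * hours * committed_fraction * (1 - savings_plan_discount)
    overflow = on_demand_hourly * hours * (1 - committed_fraction)
    return committed + overflow

baseline = monthly_cost(1.0, committed_fraction=0.0)  # pure on-demand
full = monthly_cost(1.0, committed_fraction=1.0)      # fully committed
print(f"on-demand ${baseline:.2f} -> committed ${full:.2f}")
```

Since the discount survives engine or instance-family changes, the only real risk is over-committing above your steady baseline.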
📜 See full technical details on AWS News.

7 License Optimization
- Value: 💸 Savings on commercial licenses (Oracle/SQL) | 🚀 Powerful hardware without waste.
- Hashtags: #Licensing #Oracle #SQLServer #CostSavings
To understand the real impact of this AWS announcement, you have to look at the problem many organizations face with commercial licenses (Oracle/SQL Server). Historically, if your database grew and required more memory or IOPS, you had to move to a larger instance—bringing more cores than you actually need, dramatically increasing licensing costs due to per-vCPU billing.
AWS has now smashed the storage ceiling again, going from 64 TiB to an impressive 256 TiB. This allows large monolithic databases to continue scaling vertically without forcing complex manual sharding.
But the real game changer is the “Optimize CPUs” feature: you can now deploy powerful instances (like the new M7i/R7i), use all their RAM and bandwidth, but limit the active cores so you only pay for the vCPUs you need. This can lead to savings of up to 55%.
It’s the AWS philosophy we love: pay only for what you use. Not a CPU more. Not a license more.
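Because commercial licenses are billed per vCPU, capping active cores translates directly into license savings. A quick sketch with an assumed per-vCPU license price (the instance size matches an r7i.8xlarge-class machine, but the dollar figure is made up):

```python
# Sketch of the "Optimize CPUs" saving: keep a big instance's RAM and
# bandwidth but license only the active cores. Prices are assumptions.

def annual_license_cost(active_vcpus: int, price_per_vcpu: float) -> float:
    """Per-vCPU commercial licensing (the Oracle/SQL Server pain point)."""
    return active_vcpus * price_per_vcpu

full = annual_license_cost(32, 1_500.0)    # all vCPUs of a 32-vCPU instance
trimmed = annual_license_cost(8, 1_500.0)  # same RAM, cores capped to what's needed
print(f"saved {1 - trimmed / full:.0%}")   # → saved 75%
```

On the EC2 side, core capping is done at launch with the existing `CpuOptions` parameter (core count and threads per core), so no application changes are involved.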
📜 See full technical details on AWS News.
8 Amazon OpenSearch Improvements
- Value: 🚀 10× faster searches | 💸 Indexing costs down 75%.
- Hashtags: #VectorDB #OpenSearch #GenAI #RAG
If you work with generative AI and embeddings, you already know that scaling vector databases usually means painful bills and/or unacceptable latencies.
Good news: AWS has supercharged Amazon OpenSearch Service with GPU acceleration and new auto-optimization capabilities. What’s new?
- Serverless GPU acceleration: AWS handles GPU usage when needed. According to official figures, this enables building and querying indexes up to 10× faster, and cuts indexing costs by 75%.
- Intelligent auto-optimization: no more manual index tuning. The system now balances search quality, speed, and resource usage automatically based solely on your defined business goals (recall vs. latency).
A much-needed abstraction layer to strike the perfect balance between performance and budget in large-scale RAG systems.
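For context on what these indexes serve, here's a minimal helper that builds an OpenSearch k-NN query body for a RAG lookup. The DSL shape follows the OpenSearch k-NN plugin; the index and field names are invented examples:

```python
# Minimal helper building an OpenSearch k-NN query body for a vector
# lookup. Field and index names are examples, not from the announcement.

def knn_query(field: str, vector: list[float], k: int = 10) -> dict:
    """Return a k-NN search body: top-k nearest neighbors on one field."""
    return {
        "size": k,
        "query": {"knn": {field: {"vector": vector, "k": k}}},
    }

body = knn_query("doc_embedding", [0.1, 0.2, 0.3], k=5)
# With opensearch-py this would be sent as:
#   client.search(index="docs", body=body)
print(body["query"]["knn"]["doc_embedding"]["k"])   # → 5
```

The new auto-optimization means the recall/latency trade-off behind queries like this is tuned for you rather than via manual index parameters.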
📜 See full technical details on AWS News.
Management & Governance
9 Unified Observability in CloudWatch
- Value: 👁️ Full visibility (Logs + Metrics) | ✨ No more complex ETL pipelines.
- Hashtags: #Observability #CloudWatch #SecOps #S3Tables #Iceberg
Anyone who has managed a cloud project knows the avalanche of data it generates: gigabytes of VPC logs, application traces, security events, and infrastructure metrics.
We’ve always known true observability lies in blending these datasets—mixing a bit of network logs with identity events to uncover the full picture. But until now, achieving this required building and maintaining fragile, expensive ETL pipelines just to make different “worlds” of logs talk to each other.
AWS has taken a giant step forward. The new unified data management in Amazon CloudWatch enables native interoperability by supporting open standards like OCSF (Open Cybersecurity Schema Framework) and Apache Iceberg.
What does this mean? You can now ingest data from external sources (CrowdStrike, SentinelOne, Microsoft 365, etc.), and CloudWatch automatically normalizes it and stores it in the new S3 Tables.
Farewell to complex pipelines. You now get a centralized repository to investigate incidents or look for anomalies by correlating data from multiple sources with no friction. Finally, true unified observability inside AWS.
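To picture what the normalization step does, here's a toy sketch mapping a vendor-specific event onto a few OCSF-style fields. The field names below are a simplified illustration of the idea, not the full OCSF schema, and this is work CloudWatch now performs for you:

```python
# Toy sketch of OCSF-style normalization: projecting a vendor-specific
# event onto common fields so sources can be correlated. Simplified
# illustration only, not the real OCSF schema.

def normalize(vendor_event: dict, source: str) -> dict:
    return {
        "metadata": {"product": source},
        "time": vendor_event.get("timestamp"),
        "actor": {"user": vendor_event.get("user", "unknown")},
        "activity_name": vendor_event.get("action", "unknown"),
    }

crowdstrike = {"timestamp": 1733000000, "user": "alice", "action": "process_blocked"}
event = normalize(crowdstrike, "CrowdStrike")
print(event["activity_name"])   # → process_blocked
```

Once every source lands in this common shape inside S3 Tables, correlating network, identity, and endpoint data becomes a single SQL query instead of an ETL project.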
📜 See full technical details on AWS News.
Networking
10 AWS Interconnect - Multicloud (Preview)
- Value: 🚀 Direct private connectivity | ✨ Simplified multicloud.
- Hashtags: #MultiCloud #Intercloud #Azure #GoogleCloud #Networking
Our favorite networking-related announcement came the week before re:Invent with the preview bombshell: AWS Interconnect - Multicloud.
This solution enables true private communication between cloud providers, something that previously required complex and costly architectures to integrate AWS Direct Connect, Azure ExpressRoute, and Google Cloud Interconnect for private routing across providers.
It’s a bittersweet announcement since, being a preview, it’s clearly not yet production-ready, and Azure integration isn't expected until 2026. We’ll have to wait before a fully private network exists between the three major clouds.
📜 See full technical details on AWS News.

11 Route 53 Global Resolver
- Value: ✨ Simplified hybrid DNS management | 🛡️ Unified global security.
- Hashtags: #Networking #DNS #Route53 #HybridCloud
Amid all the GenAI buzz, a networking announcement that nearly went unnoticed was the preview of Route 53 Global Resolver: a genuinely useful service for resolving DNS securely at global scale, for both public internet domains and private domains, via globally distributed anycast IPs and proximity-based routing.
The problem it solves is significant: organizations with hybrid deployments often maintain complex and costly split-DNS setups, custom forwarding, Route 53 Resolver private rules, multi-Region failover… a true operational headache.
This service also includes multiple security features, such as those in Route 53 Resolver DNS Firewall: filtering through AWS Managed Domain Lists (malware, phishing, spam), protection against DGA patterns and DNS tunneling, DNSSEC validation, and authentication via IP allow lists or tokens for DoH/DoT.
📜 See full technical details on AWS News.
Partner Network
12 AWS Partner Central in the Management Console
- Value: ✨ Unified and automated management | 🛡️ Identity Center security.
- Hashtags: #AWSPartners #Marketplace #IAM #Automation
The most impactful announcement for partners isn’t just a UI change—it's an operational transformation. AWS Partner Central is now integrated into the unified AWS Management Console, simplifying access and improving security with IAM Identity Center.
But the real revolution lies in the API. AWS now exposes solution management, opportunities, and Marketplace listings through APIs. This means we can leave behind manual portal workflows. It’s now possible to connect your CRM directly to AWS to automate opportunity tracking and solution publishing. It’s applying “infrastructure as code” principles to business processes: programmable, frictionless, scalable.
Less admin work, more time selling.
📜 See full technical details on AWS News.

Security & Compliance
13 AWS Security Hub (Real-time)
- Value: 👁️ Real-time risk prioritization | 🛡️ Attack path visualization.
- Hashtags: #SecurityHub #CSPM #SecOps #CloudSecurity
Another key security announcement was the general availability of Security Hub’s new analytics capabilities, bringing major improvements to how security teams detect and respond to critical risks.
Security Hub now includes near real-time risk analytics, automatically correlating findings from AWS services (GuardDuty, Inspector, Macie, and Security Hub’s CSPM) to identify complex issues where multiple vulnerabilities and misconfigurations combine into a critical risk. It organizes findings by severity and enriches them with context, including a Potential Attack Path visualization showing how an attacker could exploit the environment. A new Summary Dashboard with a Trends feature enables tracking up to a year of historical data on threats, exposures, and resources—helping teams monitor whether security is improving or degrading over time.
It also adds integration support for incident management systems like Jira and ServiceNow and adopts the OCSF standard for smoother security data exchange.
📜 See full technical details on AWS News.

14 GuardDuty Extended Threat Detection
- Value: 🛡️ Detection of complex attacks | 👁️ Unified visibility (EC2 + ECS).
- Hashtags: #Cybersecurity #GuardDuty #ThreatDetection #InfoSec
AWS has expanded the capabilities of Amazon GuardDuty Extended Threat Detection to now cover Amazon EC2 instances and Amazon ECS tasks (in addition to EKS, IAM, and S3). GuardDuty not only detects isolated events—it correlates multiple signals (runtime activity, malware, network logs, and CloudTrail events) to unify them into a single critical attack sequence.
The system can group activity observed across multiple EC2 instances or ECS tasks (sharing configurations like Auto Scaling groups or AMIs), giving security teams a clear, consolidated view of the full scope of a multi-stage attack.
AWS is taking a major step forward by enabling full attack detection, not just individual alerts—reducing noise and improving security response effectiveness.
📜 See full technical details on AWS News.
Storage
15 FSx for NetApp ONTAP + S3
- Value: ✨ Unified access to legacy data | 🚀 Enabler for modern analytics without moving data.
- Hashtags: #Storage #NetApp #DataLake #S3 #DataAnalytics
For those who worked in systems departments before the public cloud era, this one will hit home: AWS just launched the integration of Amazon FSx for NetApp ONTAP with Amazon S3. This announcement tears down the wall between traditional enterprise file storage and the entire cloud-native data ecosystem.
You can now access FSx for ONTAP data using S3 Access Points, performing operations as if your FSx volumes were S3 objects. This means long-standing enterprise data living in NetApp systems can now be accessed directly by services like Amazon Bedrock for RAG, SageMaker for ML training, Athena for serverless SQL, AWS Glue for ETL, and QuickSight for BI with AI.
What we love most is that your data stays within your FSx filesystem—no movement, duplication, or exposure required. Pricing details aren’t fully clear yet, but in theory (as with most AWS services) it should rely mainly on S3 request costs on top of your normal FSx charges. We’ll be testing it soon!
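In practice, "access FSx data as S3 objects" means pointing standard S3 calls at an access point ARN. A hedged sketch: the ARN format below is the standard S3 Access Point shape, but the account, region, and access point name are made-up examples:

```python
# Hedged sketch: reading FSx for ONTAP data through an S3 Access Point.
# boto3's get_object accepts an access point ARN as the Bucket parameter;
# the account, region, and names below are made-up examples.

def access_point_arn(region: str, account_id: str, name: str) -> str:
    """Build a standard S3 Access Point ARN."""
    return f"arn:aws:s3:{region}:{account_id}:accesspoint/{name}"

arn = access_point_arn("eu-west-1", "123456789012", "fsx-ontap-ap")

# With real credentials and an access point attached to the FSx volume,
# the read would look like any other S3 call:
#   import boto3
#   obj = boto3.client("s3").get_object(Bucket=arn, Key="reports/q4.csv")
print(arn)
```

That uniformity is the whole point: Bedrock, Athena, Glue, and friends already speak S3, so they inherit access to the NetApp data for free.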
📜 See full technical details on AWS News.
16 S3 Tables (Tiering & Replication)
- Value: 💸 Automatic cost optimization | 🛡️ Simplified Replication & Disaster Recovery.
- Hashtags: #DataLakehouse #ApacheIceberg #S3 #DataEngineering
Last year’s re:Invent saw massive excitement around S3 Tables, enabling S3 data to be accessed like Apache Iceberg tables. This year is no different, with the addition of Intelligent-Tiering and automatic replication support—hugely impactful for Lakehouse architectures.
With Intelligent-Tiering, your Apache Iceberg tables automatically transition across three storage tiers (Frequent, Infrequent, Archive Instant Access) based on real usage. After 30 days without access, a table moves to Infrequent; after 90 days, to Archive. Maintenance tasks (compaction, snapshot expiration) only process Frequent tier data, reducing operational costs without sacrificing performance.
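The tiering rules above reduce to a simple decision on days since last access. A minimal model, with the 30- and 90-day thresholds taken from the announcement:

```python
# Simple model of the Intelligent-Tiering transitions described above:
# Frequent -> Infrequent after 30 days without access, then -> Archive
# Instant Access after 90. Thresholds are from the announcement.

def tier(days_since_last_access: int) -> str:
    if days_since_last_access >= 90:
        return "Archive Instant Access"
    if days_since_last_access >= 30:
        return "Infrequent"
    return "Frequent"

for days in (5, 45, 120):
    print(days, tier(days))
```

Since compaction and snapshot expiration only touch the Frequent tier, cold tables stop generating maintenance cost as soon as they transition.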
Cross-Region and cross-account replication allows read-only replicas of your Iceberg tables across regions or accounts without building complex metadata sync architectures. Replication preserves snapshot parent-child relationships and updates within minutes. Replicas can use independent encryption and retention policies. It’s perfect for globally distributed datasets, low-latency access, compliance, DR, backup…
📜 See full technical details on AWS News.
17 S3 Storage Lens Updates
- Value: 👁️ Deep performance visibility | ✨ Analytics at the scale of trillions of objects.
- Hashtags: #StorageLens #S3 #CloudOps #PerformanceTuning
AWS continues to invest in Storage Lens—and for good reason, considering how essential it is for large-scale storage operations.
First, AWS added eight new categories of performance metrics, enabling bottleneck detection and performance troubleshooting: request size distribution (read/write), object size distribution, 503 errors for concurrent PUTs, cross-Region transfer, unique objects accessed, and latency metrics (FirstByteLatency, TotalRequestLatency) previously available only in CloudWatch.
Additionally, AWS has removed previous prefix analysis limits, which restricted analysis to prefixes with at least 1% size and a maximum depth of 10 levels. You can now analyze literally trillions of prefixes per bucket regardless of size or depth with the new “Expanded prefixes metrics report”.
Finally, Storage Lens now supports automatic export of all metrics to S3 Tables, enabling SQL analytics using Athena, QuickSight, EMR, or Redshift without building separate pipelines. All features are included in the Storage Lens Advanced tier at no extra cost (you only pay for S3 Tables storage and queries if used).
📜 See full technical details on AWS News.
Conclusion
AWS re:Invent 2025 leaves us with one message: cloud maturity is here to stay. While AI steals the headlines, AWS has delivered surgical tools to solve chronic pain points: licensing costs, Kubernetes operational overhead, and the eternal tension between flexibility and simplicity in serverless architectures.
These updates aren't just technical features—they're direct levers to improve your company's profitability and agility in Madrid and worldwide. Whether migrating to Graviton5 to get more for less or implementing the new Database Savings Plans, the optimization opportunity is now.
At Paradigma, we're already testing these architectures in real environments. Want to explore how to apply these improvements to your infrastructure before the next bill arrives?
Tell us what you think.