It seems like only yesterday, but more than two months have passed since re:Invent. And, as everyone knows, all the announcements are made at re:Invent, right? Well, actually no. AWS launches services and features every month of the year. Although re:Invent gives us clues and anticipates launches, AWS is always releasing new things that are worth knowing about.

Keeping up with AWS releases is overwhelming: there are more than 100 in a typical month, which makes staying up to date very hard, and it is easy to get lost in the storm of news.

The idea behind this series is to make your life easier by analyzing the most important AWS news month by month, so let's review the most interesting things that took place in January 2023.

To make this post easier to follow, the news is divided into 3 blocks:

Top News

Default Encryption in Amazon S3

Let’s start with the announcement of the month: Amazon S3 will encrypt all new objects by default and will not allow any new non-encrypted objects.

Amazon S3 was born in 2006, at a time when encrypting all stored data was not feasible because there was not enough computational power to do it.

This legacy has hampered the service ever since: although it was possible to encrypt buckets and their contents, it was not mandatory, and objects were not encrypted by default on upload.

Just like a gift from the Three Wise Men (a Spanish Christmas tradition akin to Santa Claus), default encryption has been activated on all buckets, new and old, so every newly uploaded object is now encrypted.

By default, encryption in S3 is done with an SSE-S3 key, which is the key managed by AWS. Buckets that already had some type of encryption configured remain unchanged.

This brings greater security by disallowing unencrypted uploads. Objects that already exist unencrypted will stay that way, so it is recommended to identify those objects and upload them again with encryption.
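As a minimal sketch of that clean-up, assuming object metadata dicts shaped like S3 HeadObject responses (in practice you would fetch the metadata with boto3 and re-upload the flagged objects with encryption enabled):

```python
def unencrypted_keys(objects):
    """Return the keys of objects whose metadata lacks server-side encryption.

    `objects` maps object key -> metadata dict shaped like an S3 HeadObject
    response (a local stand-in here; real code would call boto3's head_object).
    """
    return [key for key, meta in objects.items()
            if "ServerSideEncryption" not in meta]

# Example: two objects, one stored before default encryption existed.
inventory = {
    "reports/2022.csv": {"ContentLength": 1024},            # legacy, unencrypted
    "reports/2023.csv": {"ContentLength": 2048,
                         "ServerSideEncryption": "AES256"},  # SSE-S3
}
print(unencrypted_keys(inventory))  # -> ['reports/2022.csv']
```

The flagged keys would then be re-uploaded (or copied over themselves) with server-side encryption specified.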

It is also worth noting that certain organizational policies and AWS Config rules that checked for unencrypted objects are no longer valid, since that possibility no longer exists, so it is advisable to review them as they have become obsolete.

This new feature is complemented by AWS’s announcement that, in April 2023, ACLs will be disabled by default (in favor of IAM policies) and public access will be blocked.

With this combo, S3 Buckets gain a lot of security, something which is very important considering that S3 is the most important AWS service (since many other services use it internally).

For more information, see the launch notice and the S3 documentation.

Amazon OpenSearch Serverless

One of the big announcements from re:Invent is now generally available.

Amazon OpenSearch is a great product, and being able to use it in a serverless model is amazing.

After a licensing dispute with Elastic, Amazon collaborated in the release of an open-source fork of Elasticsearch called OpenSearch.

Although at first it was basically an Elasticsearch fork missing some features from the latest versions, it has since evolved very well, adding very interesting features and becoming a solution that can compete with the original.

The idea of having a serverless version in which scaling is managed by AWS and deployment and management are fully delegated is a huge advantage.

It is important to understand the serverless model of this service and other similar services such as EMR or Redshift.

In this case, AWS measures size in a unit called the OpenSearch Compute Unit (OCU), which is equivalent to one virtual CPU and 6 GiB of memory.

The minimum deployment is 2 OCUs for indexing and 2 OCUs for search; from there, it grows automatically depending on the load. If there is no load, the cluster drops back to the minimum state of 4 OCUs in total.

It is not possible for the cluster to shut down completely, and this is something that we need to take into account if we are going to use this service.

Additionally, there is a cost for the storage used.
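To get a feel for the cost floor this always-on minimum implies, here is a back-of-the-envelope calculation; the OCU price below is an illustrative assumption, not an official figure, so check the current pricing page for your region:

```python
# Illustrative cost floor for OpenSearch Serverless.
OCU_PRICE_PER_HOUR = 0.24   # USD per OCU-hour (assumed, for illustration only)
HOURS_PER_MONTH = 730       # average hours in a month

min_ocus = 2 + 2            # 2 OCUs indexing + 2 OCUs search, always running
monthly_floor = min_ocus * OCU_PRICE_PER_HOUR * HOURS_PER_MONTH
print(f"Minimum compute cost: ~${monthly_floor:.2f}/month (plus storage)")
```

Even with zero traffic, the cluster never drops below this floor, which is the key difference from services that scale to zero.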

With this operating model in mind, we can take full advantage of Amazon OpenSearch Serverless.

For more information, see the launch notice and the OpenSearch documentation.

AWS Clean Rooms is available in Preview Mode

One of the big problems of working on data models with third parties is that you often cannot share the raw data, for different reasons: data confidentiality, legal requirements, the data being critical for the business, etc.

This new service makes it possible to expose masked data so that several parties can work together on a dataset without copying it to a third party.
We can create masked data zones without duplicating information and without granting access to the raw data.

This solution was presented at re:Invent. It is quite interesting to use AWS services to create these clean rooms and share data securely, thereby allowing this clean data to be directly consumed without leaving AWS services and without generating extra costs for duplication of storage or data transfer charges.

For more information, see the launch notice and the Clean Rooms documentation.

AWS Systems Manager Patch Policies

Patch Manager is a very interesting feature within AWS Systems Manager that keeps our systems up to date by applying patches automatically within windows that we define, but it had a small gap at the organizational level: it was impossible to manage the process directly from an organizational point of view.

With Patch Policies, we can deploy patches to different accounts and regions within an AWS Organization. They also come with the existing Patch Manager features, which allow you to define windows to apply patches, define patching levels, etc.

They also have an additional advantage: being able to see the compliance status of the entire organization from a single centralized point. This new functionality will solve many problems.

For more information, see the launch notice, the Patch Manager documentation and the AWS Blog.

Amazon Route 53 Application Recovery Controller Zonal Shift

This amazing service was presented at re:Invent and is now generally available.

One problem we may run into when managing an application with redundancy across multiple zones is a zone beginning to have intermittent failures that a simple health check cannot detect. A more complex monitor can detect them, but managing these types of errors is complicated.

This new functionality lets us temporarily remove an entire availability zone from a load balancer, at no additional cost.

Thus, if, for example, a synthetic monitor detects high latency in our requests in availability zone X, we can remove that entire zone from the load balancer and continue serving traffic from the other two zones until availability zone X recovers.
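The decision logic could be sketched like this, using hypothetical per-zone latency readings from a synthetic monitor (the zone names and threshold are made up for the example); the shift itself would then be started through the Zonal Shift API against the load balancer:

```python
def zone_to_shift_away(latencies_ms, threshold_ms=500):
    """Pick the availability zone to shift traffic away from.

    `latencies_ms` maps AZ name -> p99 latency observed by a synthetic
    monitor (hypothetical data source). Returns the worst zone if it
    breaches the threshold, else None. Only one zone is removed, so the
    remaining zones keep serving traffic.
    """
    worst_zone = max(latencies_ms, key=latencies_ms.get)
    if latencies_ms[worst_zone] > threshold_ms:
        return worst_zone
    return None

# Zone b is degraded; the other two are healthy.
p99 = {"eu-south-2a": 120, "eu-south-2b": 950, "eu-south-2c": 135}
print(zone_to_shift_away(p99))  # -> eu-south-2b
```

Zonal shifts expire automatically after the duration you request, which is a nice safety net: if you forget about the shift, traffic eventually returns to the zone on its own.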

It is a simple functionality, but it gives a lot of power when it comes to error recovery in an availability zone, something which, although unusual, can happen and be critical for certain applications with high SLAs.

For more information, see the launch notice, the Route 53 Application Recovery Controller documentation and the AWS Blog.

AWS Lambda Runtime Management Controls

One of the great advantages of Lambda is that runtime patch management is handled by AWS, so we don't need to do anything when patches are applied.
But this great advantage can be a problem in specific cases, e.g. when an applied patch has some incompatibility with our Lambda functions. Until now, it was not possible to manage these exceptions.

With this new functionality, we can decide how the runtime is updated: automatically, only when we upload new code to our function, or manually, choosing the exact runtime version we want. Thus, we can also roll back to a previous runtime version if our function stops working due to incompatibilities.

It is important to keep in mind that this new functionality gives us more flexibility, but if we always choose the runtime version manually, we will be missing out on one of the great advantages of Lambda. We should only use it in cases of incompatibility and try to return to the most up-to-date version as soon as possible. In the meantime, it buys us time to solve the problem while our Lambdas keep working correctly on an older version.
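As a sketch of how these modes translate into API parameters (the runtime version ARN below is a made-up placeholder; in practice the resulting dict would be passed to the Lambda runtime management API via boto3):

```python
def runtime_config(mode, runtime_version_arn=None):
    """Build the parameters for Lambda's runtime management setting.

    Modes: "Auto" (AWS rolls out runtime updates), "FunctionUpdate"
    (the runtime only changes when we redeploy our code), "Manual"
    (pin a specific runtime version, e.g. to roll back after an
    incompatibility).
    """
    params = {"UpdateRuntimeOn": mode}
    if mode == "Manual":
        if runtime_version_arn is None:
            raise ValueError("Manual mode requires a runtime version ARN")
        params["RuntimeVersionArn"] = runtime_version_arn
    return params

# Roll back to a pinned runtime version while we fix an incompatibility
# (placeholder ARN, not a real one).
print(runtime_config("Manual", "arn:aws:lambda:eu-west-1::runtime:abc123"))
```

The "Manual" branch makes the trade-off explicit: pinning requires naming a concrete runtime version, which is exactly the maintenance burden we normally delegate to AWS.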

For more information, see the launch notice, the Lambda documentation and the AWS Blog.

News in the Region of Spain

Our beloved Spain region continued to grow little by little, with 4 different releases during January.

The latest releases make it possible to migrate large workloads to AWS: EFS is now available for the workloads that need it (demand for which is still very high), as is Storage Gateway, a very interesting service for hybrid deployments.

Given that Spain is a new region, there are still many services that have not arrived yet, but in view of those currently available and those that will arrive in the coming months, we can predict a great future ahead of it.
This is why we are going to monitor the launches that take place in the coming months.

Have any of you tried this new region? Is there any service you currently use that you are missing?
If you want, you can tell us in the comments section. We have tried it, and we love it, especially at the cost level.

More news

Not all the news can be top news, but there was more interesting news during January:

Amazon CloudWatch Cross-Account Metric Streams

This new CloudWatch metrics functionality allows a single metric stream to send metrics from multiple accounts to observability tools, instead of requiring one endpoint per account. Multiple observability tools already integrate with it.

This is a functionality whose pricing must be reviewed carefully. On paper, the cost of a metric update is 3 times lower than that of a GetMetricData call (the method commonly used for integrations), but you would need to review your volume of metric updates to make a real estimate.
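A back-of-the-envelope comparison of the two integration models, using assumed illustrative prices rather than official ones (check the CloudWatch pricing page for real, per-region figures):

```python
# Assumed illustrative prices, NOT official figures.
PRICE_PER_1K_METRIC_UPDATES = 0.003    # metric streams (assumed)
PRICE_PER_1K_METRICS_REQUESTED = 0.01  # GetMetricData polling (assumed)

metrics = 1_000
polls_per_hour = 60    # polling every metric once a minute
updates_per_hour = 60  # streams push one datapoint per metric per minute

poll_cost = metrics * polls_per_hour * PRICE_PER_1K_METRICS_REQUESTED / 1_000
stream_cost = metrics * updates_per_hour * PRICE_PER_1K_METRIC_UPDATES / 1_000
print(f"hourly: polling ${poll_cost:.2f} vs streaming ${stream_cost:.2f}")
```

Under these assumptions streaming is cheaper per datapoint, but the totals depend entirely on how many metrics you have and how often they update, hence the need to estimate with your own volumes.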

Graph Explorer

This is an OpenSource tool that allows us to visualize and explore our data in Amazon Neptune in a simple, graphic way.

Amazon EC2 Auto Scaling

EC2's predictive scaling improves substantially, generating predictions 4 times a day instead of once a day, which allows more granular and efficient scaling.

Predictive scaling is very useful if we have demand peaks and want our auto scaling to provision capacity before an event that requires more compute occurs. With this announcement, the feature becomes more effective.

Amazon S3 Storage Lens

Amazon S3 Storage Lens is a service that provides us with data on S3 usage so that we can analyze it and optimize our use of S3. AWS has announced a price improvement (cheaper, obviously).

Amazon SageMaker Canvas

Amazon SageMaker Canvas, the AWS no-code service for Machine Learning, improves its efficiency by generating models in less time: up to 3 times faster for the simplest models and 2 times faster for standard models.

Permissions in AWS Billing, Cost Management and Account

A change in permissions for the AWS Billing, Cost Management, and Account consoles. This modification provides more granular access to billing services in order to limit the scope of permissions.

Currently, if you have read permissions on billing, you can see the costs of the account, but also all the tax and payment information.

The old permissions model will remain in force until July 6, 2023, in order to give time to migrate to the new model, which is already available.

EC2 Image Builder

If you don't know about EC2 Image Builder yet, it is high time you checked it out. This service lets us automatically generate customized EC2 images, keep them updated automatically, and make them available for use in our deployments.
Now it adds a functionality to harden our systems using the CIS Benchmarks at Level 1.
More security and more automation is a great combo that EC2 Image Builder allows us to implement.

Amazon OpenSearch Service

Another improvement in OpenSearch in addition to the launch of the service in the Serverless model.

With this functionality, we can validate configuration changes before applying them.
This was not possible until now, and in the event of a bad configuration, the cluster remained operational but was stuck in the "Processing" state until we fixed the configuration mistakes.

Now you can launch a dry run to validate the changes before applying them.
This avoids a lot of headaches by not leaving our cluster in an inconsistent state while we apply configuration changes.

Porting Advisor for Graviton

Migrating to Graviton (ARM) is a great idea: ARM processors offer better power/performance, provide very interesting cost savings, and are more energy efficient, thereby minimizing our carbon footprint.

But it can also be a headache to migrate from an x86 processor architecture to an ARM architecture.

Porting Advisor for Graviton is a command-line tool that analyzes our code and flags incompatibilities, missing or outdated libraries, and parts of our code that require refactoring, suggesting alternatives so that our code can run on Graviton.

Although this tool does not solve all the problems when migrating to Graviton, it does facilitate the migration process and will indeed speed up a big part of it.

Conclusion

What a full month! This is some of the main news that AWS presented to us in January. As you can see, we have many options to investigate and start working with. Do you have any preference? What do you think of these new features?

Tell us what you think.
