At a time when the adoption of cloud technology seems unstoppable, and many companies are taking the plunge from virtualisation to a private or public cloud, there are still doubts about whether it is really secure.
Here we will analyse why this fear is unfounded and why we can begin to value the cloud as the most secure option.
Physical security
The data centres of Google, Amazon, and other cloud providers are known for a physical security model that includes the following protections:
- Undisclosed exact locations, in areas not prone to natural disasters.
- Perimeter fence.
- Interior and exterior cameras.
- Barriers for vehicle access.
- Metal detectors.
- Biometric and card access.
- Laser intrusion detection systems.
- Security guard patrols.
- Access logs and activity records.
- Redundant electric and network systems.
- Diesel power generators.
- Cooling systems to maintain optimal temperature.
- Fire detectors.
- Fire-fighting equipment.
- Very strict protocols for the removal of hardware, especially hard disks.
Can your own facilities match that level?
Confidentiality and data security
Data in transit is encrypted by default with SSL/TLS, and there are usually numerous encryption options for data at rest in each storage service, whether on persistent disks or in databases. Generally, the same mechanisms, technologies, and services that the providers use internally are available to customers in production.
Ownership of the data is guaranteed to remain with the customer at all times, who can take it elsewhere whenever they want. Since it is possible to use your own encryption keys, you can even ensure that no third party accesses the data without authorisation, not even the cloud provider itself. In addition, there are services and tools that facilitate both the migration of data to the cloud and its extraction.
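To make this concrete, here is a minimal sketch of the customer-supplied key approach, using boto3 (the AWS SDK for Python) and S3's SSE-C option; the bucket name, object key, and data are hypothetical:

```python
# A minimal sketch of customer-supplied encryption keys (SSE-C) in S3.
# Bucket and object names below are hypothetical examples.
import os
import boto3

s3 = boto3.client("s3")

# A 256-bit key that only the customer holds; the provider receives it
# per request, uses it to encrypt/decrypt, and does not store it.
customer_key = os.urandom(32)

s3.put_object(
    Bucket="example-confidential-bucket",  # hypothetical bucket
    Key="reports/q1.csv",                  # hypothetical object
    Body=b"sensitive,data\n",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)

# Reading the object back requires presenting the same key; without it,
# not even the provider can decrypt the stored data.
obj = s3.get_object(
    Bucket="example-confidential-bucket",
    Key="reports/q1.csv",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)
print(obj["Body"].read())
```

With SSE-C the provider applies the key on each request but never persists it, so losing the key means losing access to the data: exactly the guarantee that keeps third parties, and the provider itself, out.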
Better access control
Access to services goes through secured global APIs, reachable only over channels encrypted with SSL/TLS and authenticated with time-limited tokens.
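As a concrete illustration of time-limited credentials, here is a small boto3 sketch that generates a presigned S3 URL; the bucket and key are hypothetical:

```python
# A time-limited access token in practice: an S3 presigned URL.
# The URL embeds a signature that expires after the given number
# of seconds. Bucket and key names are hypothetical.
import boto3

s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-bucket", "Key": "backups/db.dump"},
    ExpiresIn=3600,  # the link stops working after one hour
)
print(url)
```

Anyone holding the URL can download that one object until the signature expires; after that, the same URL is rejected.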
All requests to the platform's APIs, such as web requests, storage bucket access, and user account access, are logged. You can consult every operation performed and access the logs of the IaaS, the PaaS, and the rest of the managed database, network, and storage services.
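For example, here is a hedged sketch of querying such an audit log with boto3 and AWS CloudTrail, listing the last day of console sign-in events; the attribute values are illustrative:

```python
# Consulting the platform's audit log: recent console sign-in
# events from AWS CloudTrail.
from datetime import datetime, timedelta
import boto3

cloudtrail = boto3.client("cloudtrail")

events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"},
    ],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
)

for event in events["Events"]:
    print(event["EventTime"], event.get("Username"), event["EventName"])
```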
When managing users, you can establish a password policy and require two-factor authentication or hardware security keys in order to access the platform.
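As a sketch of what that looks like in practice, this boto3 call sets an account-wide password policy in AWS IAM; the specific thresholds are illustrative choices, and enrolling users in two-factor authentication or hardware keys is configured separately per user:

```python
# Enforcing a password policy for platform users with AWS IAM.
# The thresholds below are illustrative, not recommendations.
import boto3

iam = boto3.client("iam")

iam.update_account_password_policy(
    MinimumPasswordLength=14,
    RequireSymbols=True,
    RequireNumbers=True,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    MaxPasswordAge=90,          # force rotation every 90 days
    PasswordReusePrevention=5,  # disallow reusing the last 5 passwords
)
```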
Point-to-point connections and VPNs to connect with local networks
It is possible to enable private or dedicated hardware connections from the office or from on-premises environments, in order to implement a hybrid cloud. These offer greater availability and lower latency than ordinary Internet connections.
You can create VPNs, connect the cloud networks to those of your own infrastructure, create subnets, segment traffic with firewall rules, and route it in the most convenient way.
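Here is a minimal sketch of that kind of segmentation with boto3 and AWS EC2, creating a subnet inside an existing VPC and a firewall rule that only admits database traffic from the application subnet; all IDs and CIDR ranges are hypothetical:

```python
# Network segmentation: a dedicated subnet plus a security group
# (the cloud firewall) restricting who can reach it.
import boto3

ec2 = boto3.client("ec2")

# Carve out a subnet for, say, the database tier.
subnet = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",  # hypothetical existing VPC
    CidrBlock="10.0.2.0/24",
)
print("created", subnet["Subnet"]["SubnetId"])

# Firewall rule: only allow PostgreSQL traffic from the app subnet.
sg = ec2.create_security_group(
    GroupName="db-tier",
    Description="Database tier, reachable only from the app subnet",
    VpcId="vpc-0123456789abcdef0",
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "IpRanges": [{"CidrIp": "10.0.1.0/24"}],  # hypothetical app subnet
    }],
)
```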
High availability by default
All the components of cloud platforms are designed to be highly redundant, from the processors, network connections, and data storage to the software itself, so that if any individual component fails, the platform's services can continue working without interruption and without any loss of data. This allows clients to build highly available and resilient systems.
This contrasts with traditional solutions, where factors such as availability, redundancy, or flexibility are often sacrificed for cost reasons.
When part of the hardware fails, the service provider handles the replacement in a way that is fully transparent to the end user, including, if necessary, moving the application and/or the data to another zone or region.
To increase redundancy and fault tolerance, you can choose to replicate applications and data across several availability zones in the same region, or even across different regions.
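As one hedged example of cross-region redundancy, this boto3 sketch enables S3 bucket replication; it assumes both buckets already exist in their respective regions and that the IAM role (whose ARN here is hypothetical) grants S3 permission to replicate:

```python
# Cross-region redundancy: replicating one S3 bucket into another.
# Bucket names, regions, and the role ARN are hypothetical.
import boto3

s3_eu = boto3.client("s3", region_name="eu-west-1")
s3_us = boto3.client("s3", region_name="us-east-1")

# Versioning is a prerequisite for replication on both buckets.
s3_eu.put_bucket_versioning(
    Bucket="example-data-eu",
    VersioningConfiguration={"Status": "Enabled"},
)
s3_us.put_bucket_versioning(
    Bucket="example-data-us",
    VersioningConfiguration={"Status": "Enabled"},
)

s3_eu.put_bucket_replication(
    Bucket="example-data-eu",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication",
        "Rules": [{
            "ID": "replicate-everything",
            "Prefix": "",  # empty prefix = replicate all objects
            "Status": "Enabled",
            "Destination": {"Bucket": "arn:aws:s3:::example-data-us"},
        }],
    },
)
```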
Availability is a crucial factor, which is why cloud providers also incorporate protection against DDoS attacks. All incoming traffic is filtered to detect and stop malicious requests.
Besides the protection they provide by default, you can combine their services (such as auto-scaling, load balancing, CDN, and DNS) into a more effective strategy that shortens mitigation time and reduces the impact of these distributed denial-of-service attacks.
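For instance, one building block of such a strategy is an auto-scaling policy that adds capacity as load rises. Here is a hedged boto3 sketch using a target-tracking policy on a hypothetical auto scaling group:

```python
# One DDoS-mitigation building block: a target-tracking scaling
# policy that adds instances when average load rises. The group
# name and target value are hypothetical.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-frontend",  # hypothetical group
    PolicyName="keep-cpu-under-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 60.0,  # scale out when average CPU exceeds 60%
    },
)
```

Scaling out does not stop an attack by itself, but combined with traffic filtering, balancing, and a CDN, it helps keep legitimate traffic flowing while the malicious traffic is absorbed and mitigated.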
In the hands of the best security experts in the world
The machines are in the most capable hands: the security teams of cloud providers are very large and are made up of some of the best experts in network, application, and information security.
They are responsible for maintaining the company's defence systems, developing security review processes, designing infrastructure security, and implementing security policies.
They monitor their networks for suspicious activity, address information security threats, routinely conduct security assessments and audits, and bring in outside expertise for independent security assessments.
They are also involved in research and outreach activities, such as open-source projects, academic conferences, and publishing papers. Among their achievements is the discovery of some of the most serious vulnerabilities of recent years (such as Heartbleed or POODLE).
In general, a model of shared responsibility is established, in which the provider is responsible for the underlying global infrastructure and the cloud-based services.
Depending on the type of service, these responsibilities shift: in IaaS, for example, the provider at a minimum takes care of security patches and updates to the software and operating system of the host environment, and its share grows as you move towards managed services, PaaS, or SaaS.
Greater visibility and transparency
Security management is much more transparent in cloud services. There is full visibility of all cloud resources from a single dashboard, so everything is visible to those responsible for security, and it is much easier to keep a complete map and to resolve any kind of incident.
The biggest security problems in companies come from users or machines that are often unknown; in the cloud you have a global map of all your machines, which avoids this problem.
This clarity is reinforced by tools that offer automated recommendations to improve resource usage and troubleshoot problems.
Audits, controls, and certifications
There are also security scanners to help users improve the security and compliance of their applications. These tools automatically inspect applications for vulnerabilities such as cross-site scripting, mixed content, and deviations from best practices.
It is common to receive emails from providers with security recommendations, such as updating the version of a Kubernetes cluster or renewing security certificates.
Google Cloud Platform and Amazon Web Services guarantee compliance with a large number of security standards, controls, audits, and certifications. Some of the most important are:
SOC 1, SOC 2, SOC 3, ISO 27001, ISO 27017, ISO 27018, ISO 9001, FedRAMP, FIPS, DoD SRG, PCI DSS, HIPAA, CISPE, FERPA, HITECH, CSA STAR, EU Data Protection Directive, CIS, CJIS, CSA, Privacy Shield.
Conclusions
It is common to look to cloud computing to reduce infrastructure spending, increase agility, and give the business the best features. But the security of data and applications is a critical requirement, and, as we have seen, there are plenty of reasons to think that in the cloud we will undoubtedly have the best security.