In technology, speed has almost become a religion. Teams race to deploy faster, automate more, and shrink the endless cycle between idea and production to a minimum. And of course, in that race there’s always a toll that eventually gets paid: security. And as always… the scare comes late.
Today everything moves so fast that teams are measured by a single indicator: the speed at which they deliver. We automate deployments, shorten cycles, and rely on increasingly distributed architectures. Yet in this obsession with optimizing everything, security is still seen as an obstacle: something to be reviewed once the code is already live.
And then the predictable happens: vulnerabilities are detected when there’s no room left to react, and the old excuse hangs in the air: “that’s the Security team’s problem.” For many of you, this will sound painfully familiar.
DevSecOps is born precisely to break with this way of thinking.
It’s not about stuffing the pipeline with tools or turning more steps red, but about changing the mindset: making security a natural part of the project’s growth, as embedded as unit tests or version control.
From DevOps to DevSecOps: a necessary shift
DevOps was born with a clear idea: development and operations should stop working in silos. The goal, in theory, was simple: deliver software faster and with fewer scares. And for a while, it worked. Boy, did it work.
Until the cloud arrived and with it, my dear (and feared) microservices, external dependencies, and that constant feeling that everything could break with a simple terraform apply.
And then, of course, the guest who never fails showed up: Security, waiting at the door with that “I told you so” smile.
DevSecOps isn’t just a catchy name for yet another industry trend; it’s a reminder. It reminds us that security can’t keep being a final audit, but rather a shared task from the very first commit by everyone on the team.
That means developers, testers, and operators alike must keep the essentials in mind, have the right tools at hand, and catch issues while they’re still fresh.
Because it’s no longer enough for the security team to review the final result, curse under their breath, and take the blame. Every part of the process—every pipeline, container, or variable—has to be born under the principle of security by default.
But even with all that, the obstacle isn’t only technical: it’s also mental. And that mindset shift isn’t imposed through policies; it’s earned through examples. When the team sees that integrating security doesn’t slow anything down, but instead prevents headaches and saves money, the rest follows naturally.
Security by design: the forgotten principle
The idea of Security by Design isn’t new. It already appeared in OWASP standards and in European regulations like GDPR. But honestly, it’s almost never applied properly while actually coding.
Security by design isn’t just about patching things at the end. It means that every time you make a technical decision, you think about how it affects privacy, integrity, and system availability.
From how services verify who you are, to how you store passwords or manage secret rotation. A secure system isn’t fixed with patches: it’s built that way from the start.
So the key to DevSecOps isn’t the tools, but getting every technical profile used to asking: “Is this secure?” and “How can I be sure of it?”
Three pillars to integrate security into the software lifecycle
1 Automation and early detection
Automation is the best “tool” a development team has. When security checks are integrated into pipelines, they become part of the workflow.
- SAST (Static Application Security Testing): reviews code for vulnerabilities before deployment.
- DAST (Dynamic Application Security Testing): exercises the running application in staging or QA environments to uncover exploitable flaws.
- SCA (Software Composition Analysis): identifies vulnerable libraries and dependencies.
- Infrastructure as Code Scanning: analyzes Terraform, Kubernetes, or CloudFormation configurations to ensure policy compliance.
The earlier a flaw is found, the lower the cost to fix it—and the lower the risk of exposure.
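To make that concrete, here is a minimal sketch of how one of these checks can be wired into a pipeline; the job name, stage, and severity thresholds are illustrative choices, not prescriptions. Trivy performs SCA against the repository’s dependency manifests and fails the job on serious findings:

# Illustrative SCA job for GitLab CI: Trivy scans the repository's
# dependency manifests for known-vulnerable packages and fails the
# pipeline when HIGH or CRITICAL issues are found.
dependency_scan:
  stage: security  # assumes a "security" stage exists in the pipeline
  image:
    name: aquasec/trivy:latest
    entrypoint: [""]  # clear the image's trivy entrypoint for the CI shell
  script:
    - trivy fs --scanners vuln --severity HIGH,CRITICAL --exit-code 1 .
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'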
2 Shared responsibility
The old model of security as a “control gate” is outdated. Now, every role in the development lifecycle carries a share of responsibility.
- Developers must understand the security implications of the code they ship.
- Operations must ensure deployments, secrets, and permissions follow the principle of least privilege.
- Architecture teams must define patterns that embed security by default.
- Security teams should guide and educate, providing tools instead of blocking progress—while accepting that some healthy resistance will always exist.
The success of DevSecOps hinges on a simple message:
“Security is not a department (or a team, or a silo): it’s everyone’s practice.”
3 Culture and continuous learning
No tool replaces human judgment. Not even our beloved new best friend, AI—useful as it may be. The strongest defense is still a conscious team equipped with the right technology.
Security should be part of retrospectives, sprint reviews, and planning sessions. Adding short training sessions and incident-response drills builds both technical and human reflexes, and a shared language. Turning security into a habit takes consistency.
When technical leadership reinforces this message, the cultural shift sticks.
Tools and key practices (without losing the human focus)
There are hundreds of tools that can be part of a DevSecOps toolchain, but what matters is coherent integration, not duplication. Common examples in cloud-native environments include:
- GitLab CI, GitHub Actions, Jenkins with automated security jobs.
- Trivy, Checkov, or tfsec to scan images and IaC.
- Vault, Secret Manager, or AWS KMS for secure secret management.
- OPA, Conftest for automated compliance policies.
- SonarQube, OWASP ZAP, Dependabot for code and dependency analysis.
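As one small illustration of that “coherent integration”: enabling a tool like Dependabot takes only a few lines of YAML committed to the repository (the npm ecosystem below is an example choice, not a recommendation):

# .github/dependabot.yml: minimal sketch enabling weekly dependency updates
version: 2
updates:
  - package-ecosystem: "npm"  # illustrative; pick the ecosystems your project uses
    directory: "/"
    schedule:
      interval: "weekly"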
Still, the real differentiator lies in process and culture. Configuring alerts isn’t enough; they must be interpreted, prioritized, and resolved.
A secure pipeline isn’t one that never fails, but one that teaches the team to fail earlier and learn faster, applying those lessons to shorten response times over time.
Practical example: a secure pipeline with Terraform and GitLab CI
A real-world example of DevSecOps integration can be seen in the construction of an IaC pipeline.
Let’s assume a project that uses Terraform to deploy resources on GCP and orchestrates deployments through GitLab CI. This is a realistic scenario, given the high level of expertise many teams have in this area. The goal is to ensure that all merge requests pass security and compliance checks before being applied to the target environment.
To illustrate this, let me share an anecdote from my very first pipeline: tfsec was failing everywhere. I remember almost every commit breaking due to missing tags or publicly exposed buckets. But here’s the thing: although it was extremely frustrating at first, within a week the team was already writing code with those checks in mind. And the best part? No one had to be forced—it just became a habit.
1 Static scanning of IaC code (Terraform)
Using tools like tfsec, every commit is validated against a set of policies:
- No instances without tags.
- Avoid public resources (for example, buckets or databases with public_access = true).
- Verify that secrets are not hardcoded.
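Some of these rules ship with tfsec out of the box; others can be written as custom checks. As a sketch, assuming tfsec’s custom-check YAML schema and a made-up check code, a file in the project’s .tfsec/ directory could enforce the labeling rule:

# .tfsec/project_tfchecks.yaml: hypothetical custom check requiring labels
# on compute instances (check code and message are invented for this example)
---
checks:
  - code: CUS001
    description: Compute instances must define labels
    requiredTypes:
      - resource
    requiredLabels:
      - google_compute_instance
    severity: ERROR
    matchSpec:
      name: labels
      action: isPresent
    errorMessage: The instance has no labels defined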
Each commit triggers an automatic tfsec scan to detect insecure configurations:
stages:
- validate
- security
- deploy
validate:
stage: validate
script:
- terraform fmt -check
- terraform validate
rules:
- if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
security_scan:
  stage: security
  image:
    name: aquasec/tfsec:latest
    entrypoint: [""]  # the image's default entrypoint is tfsec itself, so it is cleared for GitLab's script runner
  script:
    - tfsec --format json --out tfsec-report.json .
  artifacts:
    paths:
      - tfsec-report.json
  allow_failure: false
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
This code block enforces that any change to .tf files must be validated before deployment, preventing misconfigurations such as public buckets or unencrypted resources.
2 Compliance validation (Policy as Code)
With Conftest or OPA (Open Policy Agent), you define rules the code must comply with before being approved. For example, in the policies/terraform.rego file:
package terraform.security

# These rules assume the input is the JSON plan produced by
# `terraform show -json plan.out`, which exposes resource_changes.

has_encryption(rc) {
    count(rc.change.after.disk_encryption_key) > 0
}

deny[msg] {
    rc := input.resource_changes[_]
    rc.type == "google_compute_disk"
    not has_encryption(rc)
    msg := sprintf("The resource %s does not have encryption enabled", [rc.address])
}

deny[msg] {
    rc := input.resource_changes[_]
    rc.type == "google_compute_instance"
    not startswith(rc.change.after.zone, "europe-")
    msg := sprintf("The instance %s is not in an allowed region", [rc.address])
}
And in the pipeline:
policy_check:
  stage: security
  image:
    name: openpolicyagent/conftest
    entrypoint: [""]  # cleared for the same reason as the tfsec image above
  script:
    # terraform-plan.json is assumed to be exported by an earlier job
    # (see the plan_export sketch below); --all-namespaces lets conftest
    # evaluate the terraform.security package instead of only "main"
    - conftest test terraform-plan.json --policy ./policies/ --all-namespaces
  needs:
    - job: security_scan
  allow_failure: false
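One practical detail the snippet above glosses over: conftest consumes the Terraform plan in JSON form, so an earlier job has to export it. Here is a minimal sketch of that step; the job name plan_export is mine, not part of the original pipeline, and policy_check would then also list it under needs:

plan_export:
  stage: security
  image:
    name: hashicorp/terraform:1.7
    entrypoint: [""]
  script:
    - terraform init
    - terraform plan -out=plan.out
    # Convert the binary plan into the JSON document conftest evaluates
    - terraform show -json plan.out > terraform-plan.json
  artifacts:
    paths:
      - terraform-plan.json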
3 Secrets management
Pipelines must never expose credentials in plain text; this is strictly forbidden, for our own safety and that of our customers. Instead, they integrate with Secret Manager or HashiCorp Vault, retrieving secrets through protected variables, with rotation handled automatically by the secret store. As a code example:
deploy:
  stage: deploy
  # The terraform image's default entrypoint is the terraform binary, so it is
  # cleared here; the job also assumes gcloud is available on the image (in
  # practice, a custom CI image bundling both tools).
  image:
    name: hashicorp/terraform:1.7
    entrypoint: [""]
  script:
    - export TF_VAR_db_password=$(gcloud secrets versions access latest --secret="db-password")
    - terraform init
    - terraform plan -out=plan.out
    - terraform apply -auto-approve plan.out
  environment:
    name: production
  when: manual
  needs:
    - job: policy_check
With when: manual, an explicit approval is required before applying changes, which guarantees traceability and naturally leads us to the next step.
4 Controlled deployment
The terraform apply job runs only if all previous stages complete successfully, and the manual gate means a responsible person must approve the change, ensuring traceability and double validation.
5 Auditing and reporting
Each execution generates a security report that is stored in the repository and can be used for future audits.
With this approach, the team gains speed without losing control, which is critical—because clients don’t measure value only in delivery speed, but also highly value governance and control.
Development teams can deploy autonomously, while policies ensure compliance across all environments.
reports:
  stage: security
  # Consumes the tfsec report produced by security_scan, hence the explicit
  # needs; the runner's default image is assumed to provide jq.
  needs:
    - job: security_scan
  script:
    - cat tfsec-report.json | jq '.results | length'
    - echo "Generating security report..."
  artifacts:
    when: always
    expire_in: 7 days
    paths:
      - tfsec-report.json
This report can be archived or integrated into a tool—for example, a SIEM—enabling periodic audits and compliance metrics.
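For instance, a follow-up job could push the report to a collector endpoint. Everything below ($SIEM_ENDPOINT, $SIEM_TOKEN, the API path) is a placeholder for whatever your SIEM actually exposes, not a real service:

siem_upload:
  stage: security
  needs:
    - job: reports
  script:
    # $SIEM_ENDPOINT and $SIEM_TOKEN would be protected CI/CD variables;
    # the runner's default image is assumed to provide curl
    - 'curl -sf -X POST -H "Authorization: Bearer $SIEM_TOKEN" -H "Content-Type: application/json" --data @tfsec-report.json "$SIEM_ENDPOINT/api/ingest"'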
Conclusion: from policy to habit
Adopting DevSecOps is not about writing a new corporate policy or considering the job done once the document is signed. It is about something much simpler—and at the same time much harder: changing habits.
Real progress becomes visible when every technical decision implicitly carries a question: “Is this secure?” When pipelines do not punish, but teach. And when teams stop seeing security as an obstacle and start seeing it as a quality accelerator. That is when change truly begins to take hold.
Because in the end, an organization’s maturity is not measured by how many tools it uses, but by how people understand and apply them to build reliable software.
I remember the environment from the previous example: the real sign of maturity was not having more scans, but when the team stopped asking “Do we have to run the scan?” and started asking “Why isn’t this control automated yet?” That shift in mindset was worth more than any new tool.
At that moment, I realized that DevSecOps was no longer a role or a process, but a shared way of thinking across the entire organization.