Convergence, DevOps as a Service, Security
As organizations increasingly adopt DevOps as part of the digital initiative, the very nature of DevOps is rapidly changing. Once called a movement that defies description, DevOps has evolved to encompass people, process and tools. Now as DevOps as a Service, security-first design patterns, containerization and microservices come into focus, it’s clear that DevOps is becoming the de facto means by which organizations build, run and secure their modern applications.
It also means the classic stereotypes of software developer and IT Ops are changing. In a software-centric world, DevOps methodologies and practices are adapting to an ever-changing landscape as new stakeholders become invested in the business outcomes their applications deliver. Following are the trends that have already changed the face of DevOps as we know it, and the process by which we build, run and secure modern applications.
The Convergence of DevOps
As common lore has it, Patrick Debois coined the term DevOps while showing off the tools he used to configure environments for the deployment of applications, the point at which developers handed applications off to IT Ops. Since then, DevOps has become a merging of several mutually aligned movements that include Lean, Agile, continuous delivery, Velocity, Toyota Kata and, more recently, Rugged DevOps. John Willis, co-author of the DevOps Handbook, called this the Convergence of DevOps.
As with the many flavors of Agile methodologies, DevOps practitioners have adopted some practices while leaving others behind. For example, Scrum has become common practice in managing development teams, but it is not ubiquitous. Thus, DevOps can look different from one organization to the next.
What has galvanized DevOps is the introduction of Continuous Delivery, an extension of the continuous integration patterns popularized by Martin Fowler. Jez Humble and David Farley presented a tangible methodology for automating the software delivery process. Teams that have adopted continuous delivery have seen remarkable gains, delivering 160% faster, according to the Puppet Labs 2016 State of DevOps Report.
So DevOps, as we now know it, is the culmination of decades of practice. What can be said about the future is that DevOps will continue to converge, taking on new technologies. Here’s a look at some of those trends.
Continuous Delivery and The Three Ways of DevOps
DevOps has evolved beyond the concept of Continuous Delivery. Gene Kim and the other authors of the DevOps Handbook describe the three principles underlying DevOps:
- Flow
- Feedback and Telemetry
- Continuous Learning and Experimentation
These are particular patterns for applying DevOps principles in a way that yields high performance outcomes.
The first we are most familiar with: the linear flow of work from Dev to Ops, where application functionality can be delivered continuously. By speeding up that flow, you can reduce the time to fulfill internal requests and deploy new functionality to production faster than your competition. This notion is embodied in Continuous Delivery.
Feedback and Telemetry
The second pattern emphasizes continuous feedback where information from production environments flows from Ops back to the development team. You can facilitate this by inserting what the authors call “telemetry” (logs, metrics and events) to create continuous real-time feedback mechanisms. This allows you to monitor problems as they occur and share events of interest with everyone in the build, run and secure stream.
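A minimal sketch of what such telemetry can look like in practice: logs, metrics and events emitted as structured JSON lines so downstream monitoring tools can aggregate and alert on them. The field names and record shape here are illustrative, not a standard.

```python
import json
import time

def emit(record_type, name, value=None, **attrs):
    """Emit one telemetry record (log, metric, or event) as a JSON line.

    Structured output like this is what lets downstream tools parse,
    aggregate, and alert on the data in real time.
    """
    record = {"ts": time.time(), "type": record_type, "name": name}
    if value is not None:
        record["value"] = value
    record.update(attrs)
    print(json.dumps(record))
    return record

# A metric, an event, and a log entry all share one pipeline-friendly shape:
emit("metric", "http.request.duration_ms", 42.7, route="/checkout")
emit("event", "deploy.completed", version="1.4.2", env="prod")
emit("log", "payment.retry", level="warn", order_id="A-1001")
```

Because all three record types share one shape, a single collection pipeline can serve Dev, Ops, and (as described below) business stakeholders.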
Former Netflix cloud architect Adrian Cockcroft said, “Monitoring telemetry is so important that our monitoring systems need to be more available and scalable than the systems being monitored.” The State of DevOps Report also found that high-performing teams resolved production incidents 168 times faster than their peers. One of the top practices that decreased MTTR was the proactive monitoring of logs, metrics and events (i.e. telemetry).
While monitoring metrics, logs and events in production apps is becoming mainstream, it’s just as important to collect telemetry from the deployment pipeline in order to catch problems before the release candidate ever reaches production. In the build stage you want to flag important events, like when a continuous integration test fails. Knowing that a build is taking longer than usual can also flag performance issues. Even basic statistics from your code and artifact repositories can inform you about the overall health of your release cycle.
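The “build is taking longer than usual” check above can be sketched as a simple statistical test over recent build durations. This is a toy z-score approach under assumed numbers; real pipelines might use rolling windows or percentile-based alerting instead.

```python
import statistics

def flag_slow_build(durations, latest, threshold=2.0):
    """Return True if the latest build ran more than `threshold` standard
    deviations above the mean of the historical build durations."""
    mean = statistics.mean(durations)
    stdev = statistics.stdev(durations)  # sample standard deviation
    if stdev == 0:
        return latest > mean
    return (latest - mean) / stdev > threshold

# Durations (seconds) of recent builds, then two new candidates:
history = [310, 295, 305, 300, 298, 312, 301]
print(flag_slow_build(history, 420))  # True: unusually slow, worth flagging
print(flag_slow_build(history, 308))  # False: within the normal range
```

Wiring a check like this into the build stage turns pipeline telemetry into an actionable signal rather than a log entry nobody reads.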
Continuous Learning and Experimentation
The third pattern encourages proactive learning from failures so that you can architect for failure. In architecting for failure, you inject failure into the system to test resiliency. A prime example of this is the Simian Army, a suite of tools created at Netflix for keeping its cloud operating. One tool, Chaos Monkey, helps ensure that your applications can tolerate random instance failures by identifying groups of systems and randomly terminating one system in each group. This third pattern also promotes a culture where local discovery can be turned into global improvements and tribal knowledge throughout the organization.
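Chaos Monkey’s core behavior, pick one random instance per group and kill it, can be sketched in a few lines. This is a toy illustration, not Netflix’s implementation; `terminate` stands in for what would be a cloud API call, and the group/instance names are invented.

```python
import random

def chaos_step(groups, terminate, rng=random):
    """Terminate one randomly chosen instance from each group.

    `groups` maps a group name to a list of instance IDs; `terminate`
    is a callback (in real life, a cloud provider API call).
    """
    killed = {}
    for group, instances in groups.items():
        if not instances:
            continue  # nothing to kill in an empty group
        victim = rng.choice(instances)
        terminate(victim)
        killed[group] = victim
    return killed

# Example: log which instance "died" in each auto-scaling group.
groups = {"web": ["web-1", "web-2", "web-3"], "api": ["api-1", "api-2"]}
killed = chaos_step(groups, terminate=lambda inst: print(f"terminating {inst}"))
```

The point of running this continuously in production hours is cultural as much as technical: teams learn to build services that survive the loss of any single instance.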
Beyond the traditional IT roles, we are now finding that business users can apply that same data to what we call “App Intelligence.” The same logs, metrics and events that allow you to figure out something is broken also tell you what your users are doing. For app intelligence, the focus is on user activity and visibility, where everyone becomes a stakeholder, including sales, marketing and product management. To summarize, you can:
- Use telemetry to better anticipate problems and achieve goals
- Integrate user research and feedback into the work of the product teams
- Enable feedback so Dev and Ops can safely perform deployments
- Enable feedback to increase the quality of work through peer reviews
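To make the app-intelligence idea concrete, here is a minimal sketch of deriving user-activity metrics from the same event stream Ops already collects. The event fields (`user`, `feature`) are illustrative assumptions, not a real schema.

```python
from collections import Counter

def user_activity(events):
    """Summarize user-facing activity from an application event stream.

    Counts actions per feature and distinct active users, the kind of
    view product, sales, and marketing stakeholders care about.
    """
    by_feature = Counter(e["feature"] for e in events)
    active_users = {e["user"] for e in events}
    return {"actions_by_feature": dict(by_feature),
            "active_users": len(active_users)}

events = [
    {"user": "u1", "feature": "search"},
    {"user": "u2", "feature": "search"},
    {"user": "u1", "feature": "checkout"},
]
print(user_activity(events))
# {'actions_by_feature': {'search': 2, 'checkout': 1}, 'active_users': 2}
```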
DevOps as a Service
With no infrastructure to manage, some have asked what it means to place DevOps in the cloud. TechCrunch ran the article “Managed services killed DevOps,” opining that “the age of DevOps is just about over.”
But new tooling is appearing even as developers are taking operational responsibility for the code they write. As early as 2006, Werner Vogels, CTO at Amazon, said, “You build it, you run it.” According to Vogels, there’s no need to distinguish between building and running an application.
“Giving developers operational responsibilities has greatly enhanced the quality of the services, both from a customer and a technology point of view… This brings developers into contact with the day-to-day operation of their software. It also brings them into day-to-day contact with the customer. This customer feedback loop is essential for improving the quality of the service.”
AWS has also taken the lead by enabling automation tools to manage continuous delivery, including the ability to deploy to on-premises systems. Three tools in particular enable this:
- AWS CodePipeline is a continuous integration and continuous delivery service for fast and reliable application and infrastructure updates. CodePipeline builds, tests, and deploys your code every time there is a code change, based on the release process models you define.
- AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. CodeBuild scales continuously and processes multiple builds concurrently.
- AWS CodeDeploy automates code deployments to any instance, including Amazon EC2 instances and on-premises servers. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications.
You’ll also find other DevOps tools in the AWS Marketplace including many flavors of Jenkins. Jenkins works well both on AWS and with AWS. Additionally, Jenkins plugins are available for a number of AWS services.
You’ll also find DevOps tooling is available on most cloud platforms including Google Compute Engine and Microsoft Azure. Learn more about tooling in the DevOps as a Service section of this site.
DevOps and Security: Rugged and DevSecOps
One common theme throughout DevOps is the concept of being proactive. As modern apps move to the cloud, security is becoming a top concern. There have been several efforts to make security concerns a first-class citizen in the continuous delivery process. Under a waterfall model, developers deliver code that may later have to pass through security review prior to deployment. Movements like Rugged and DevSecOps push security concerns to the left in the release cycle and squarely into the build process, making security engineers responsible through “Security as Code.”
The Rugged DevOps Manifesto was published in 2012. Rugged is associated with security but it is not just about making software secure — it’s partly about making your software defensible against vulnerabilities and threats. The Manifesto recognizes that your code will be used in ways you cannot anticipate, in ways it was not designed, and it will be attacked. As the author writes, “Secure is a possible state of affairs at a certain point in time. But rugged describes staying ahead of the threat over time.”
Since its publication, making your software “rugged” has come to mean you have proactively built in processes to ensure your software is available, scalable, maintainable, defensible, reliable and resilient in the face of failures. Processes like battle-testing your software against adverse conditions, much as Chaos Monkey tests for availability issues during outages, are a way of being rugged.
Within DevSecOps, security is everyone’s responsibility and the goal is to bring individuals of all abilities to a high level of security proficiency in a short period of time. The DevSecOps Manifesto published in 2015 speaks to the value that a security practitioner must supply and the changes they must make to enable security. In this way, the value that DevSecOps engineers supply to the system is the ability to continuously monitor, attack and determine defects before attackers discover them. DevSecOps stresses:
- Security test automation
- Configuration and patch management
- Continuous monitoring
- Identity management
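The first of those practices, security test automation, can be as simple as a pipeline stage that fails the build when code appears to contain hardcoded credentials. This is a toy scanner under assumed patterns; real CI stages would use a dedicated, tuned tool with a far larger rule set.

```python
import re

# Patterns that commonly indicate hardcoded credentials (illustrative only).
SECRET_PATTERNS = [
    re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
    re.compile(r"aws_secret_access_key", re.IGNORECASE),
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
]

def scan_for_secrets(text):
    """Return the 1-based line numbers that appear to contain a secret."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(lineno)
    return hits

sample = 'db_host = "10.0.0.5"\npassword = "hunter2"\ntimeout = 30\n'
print(scan_for_secrets(sample))  # [2]
```

Run as a build-stage gate (non-empty result fails the build), this is “Security as Code” in miniature: the check lives in the pipeline, not in a manual review at the end.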
DevSecOps teaches ruggedness. One’s code has to be able to withstand the criticism of others, because no code is flawless. DevSecOps strives to provide constructive feedback quickly in order to stay ahead of attackers. One’s infrastructure and code have to be able to be re-stacked quickly while ensuring data security and availability.
There are some common DevOps practices that inherently lend themselves to providing a development and delivery pipeline that can improve your overall security posture, particularly when those practices and their tooling are incorporated directly into your end-to-end continuous integration/continuous delivery (CI/CD) pipeline.
The Impact of Containerization and Microservices on DevOps
Users want reliable, cloud-hosted software applications they can trust to be fast, highly available, easy to use, and bug free. Architects, developers and operations managers look for emerging trends in DevOps and containerization to create these reliable and agile solutions.
To meet these challenges, companies are adopting container technologies like Docker as a way to localize and isolate software function, while running applications on hyper-lean OS environments. In a DevOps context, containers allow agile teams to templatize application execution environments that developers can use and operations teams can “bless” for production deployment.
Containerization is also an enabler for microservices. Microservices are small, independently deployable services, each running in its own process and communicating with lightweight mechanisms, often a REST API. Building applications from loosely coupled microservices distributes the responsibilities of the application, making it easier to change and add functions and features to the system at any time.
Microservices can be written in different programming languages and use different data storage technologies. Thus, there is a bare minimum of centralized management of these services. Microservices are typically built around business capabilities.
Traditional application development uses a project model where a team delivers some piece of software that is handed over to the maintenance organization upon completion. A microservice architecture, on the other hand, prefers the idea that a team should own the application over its full lifetime. This aligns closely with the “you build it, you run it” model described above, where the development team takes full responsibility for the software in production. This brings developers into day-to-day contact with how their software behaves in production and increases contact with their users, as they have to take on at least some of the support burden.
Since services can fail at any time, applications built on microservices place an emphasis on real-time monitoring of the application to detect failures quickly. Here again, telemetry from logs, metrics and events can be invaluable in checking both architectural elements (how many requests per second the database is getting) and business-relevant metrics (such as how many orders per minute are received).
Microservice teams typically employ machine data analytics to log and monitor each individual service, utilizing dashboards to check up/down status, other operational metrics and performance-related KPIs.
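As a sketch of the business-relevant side of that monitoring, the orders-per-minute metric mentioned above can be computed by bucketing order timestamps. The timestamps and bucket granularity are illustrative.

```python
from collections import Counter

def orders_per_minute(timestamps):
    """Bucket order timestamps (epoch seconds) into per-minute counts.

    The kind of business-relevant metric a microservice dashboard would
    plot alongside operational ones like requests per second.
    """
    return Counter(int(ts // 60) for ts in timestamps)

# Three orders in the first minute, one in the next:
stamps = [0, 12, 59, 61]
print(dict(orders_per_minute(stamps)))  # {0: 3, 1: 1}
```

A sudden drop in a metric like this often surfaces an outage faster than any infrastructure alarm, which is why microservice dashboards mix the two.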
What’s Next for DevOps
If we were to summarize DevOps in one word, it would be “proactive.” Traditional IT Ops typically waits until a problem occurs, then goes into incident-response mode while troubleshooting and fixing it. A common theme throughout DevOps, from continuous delivery to Rugged to DevOps as a Service, is being proactive: building out functionality incrementally, automating integration testing, using telemetry to see and solve problems as they happen, and pushing security review back to the design stage. Ultimately, the goal is to give your customers a better experience.