
Where Is the Deployment Space Heading?

With all the time I’ve spent lately assessing different deployment options, it seemed pertinent to stop for a moment and consider where the deployment space is heading over the next couple of years.

However, like any form of crystal ball-gazing, these are only predictions, even when they’re grounded in real-world evidence. But I’m not alone in doing this. People far better known than I am, most notably Bill Gates, do it on a regular basis.

Despite that, I’m still keen to share my thoughts on where I see the deployment space heading in the next few years. Have a read and tell me whether you agree or not. If you strongly disagree, please tell me why in the comments.

Continuous Delivery Will Be the Norm

If you’re not too familiar with the term, continuous delivery is:

…the ability to get changes of all types — including new features, configuration changes, bug fixes, and experiments — into production or into the hands of users safely and quickly in a sustainable way. Our goal is to make deployments — whether of a large-scale distributed system, a complex production environment, an embedded system, or an app — predictable, routine affairs that can be performed on demand.

Why use it? Stop and consider for a moment how complex, how sophisticated, and how demanding it can be to deploy software. Consider the multitude of components that underpin it.

Speaking from personal experience, when I first started (back in 1999), all it took to deploy a web application was to:

  1. Migrate any database schema changes I’d made locally to the production server;
  2. Rsync the code to the production server;
  3. Reload the app in the browser.

It couldn’t have been much simpler. The stack consisted of Apache with PHP and MySQL. There wasn’t much I had to do, and not much that could go wrong.

These days everything’s different. There are database, caching, queueing, and log servers to manage. On top of that, software is increasingly governed by complex and demanding legislation. So as well as the applications themselves becoming more sophisticated, the rules that govern them are growing in complexity too.

As a result, we need a process or architecture that allows for complex, sophisticated systems to be regularly built and deployed, one that both manages and reduces this complexity as much as possible.

Consider these benefits of continuous delivery from Martin Fowler, and I’m sure you’ll agree that CD is essential:

  • Your software is deployable throughout its lifecycle.
  • Your team prioritizes keeping the software deployable over working on new features.
  • Anybody can get fast, automated feedback on the production readiness of their systems anytime somebody makes a change to them.
  • You can perform push-button deployments of any version of the software to any environment on demand.

Who wouldn’t want these benefits?

Everything Will Be Automated

If there were one mantra for this space, it would be “automate all the things.” By automating everything, processes become both well defined and repeatable. And if processes are well defined, that means they’ve been thought about, in depth, for some time.

It follows that you know your software and its needs at an intimate level. Combine that with an array of tools that make automation easier than it’s ever been, and you have 1) a recipe for regular, successful deployments and 2) something that people will adopt with a passion.

That’s not to say people haven’t cared about this. But, I feel, up until now there were valid excuses for a lack of automation. The options available took a lot of effort to use, whether because of the learning curve, time, cost, or a combination of all three.

In more recent times, however, that’s decreasingly the case. So many tools are springing up and reaching a sufficient level of maturity that there’s no reasonable excuse not to use them.

These include open-source tools such as Rocketeer, Capistrano, Phing, and Deployer; commercial services such as Forge and Envoyer; CI services like Codeship; and solutions such as Docker, which provides a complete development-to-production workflow.

Regardless of the tool you choose, it’s hard to find one that’s immature or that lacks an intuitive interface. Given that, the time to automate everything is well and truly here. The excuses for not automating all the things are gone.
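As a rough illustration, the three manual steps from 1999 could be captured today in a short playbook like the following (a sketch assuming Ansible; the host group, paths, migration script, and service name are all hypothetical, and a real setup would add an inventory, variables, and error handling):

# deploy.yml: the old manual routine expressed as a repeatable, automated process.
- hosts: production
  tasks:
    - name: Apply any outstanding database schema changes
      command: /var/www/app/bin/migrate.sh        # placeholder migration script

    - name: Sync the application code to the server
      synchronize:
        src: ./
        dest: /var/www/app

    - name: Reload PHP-FPM so the new code is picked up   # modern stand-in for "reload the app in the browser"
      service:
        name: php-fpm
        state: reloaded

Once a process like this is written down, anyone can run it the same way, any number of times, which is precisely the point.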

Then there are the benefits that automation brings. Automation breeds reliability, predictability, and security.

When software can be built the same way each and every time, organizations can create it with confidence. That confidence leads to more predictable release cycles, greater visibility, and more feedback, which in turn makes for software with a higher level of quality and customer satisfaction.

Then there’s the fact that automated builds encourage other best practices, such as testing. You could say the tipping point has been reached. What’s more, the mindset in the developer communities I participate in, at least, is sufficiently behind it.

I’m encouraged in this thinking because I’ve seen what happens when tooling and mindset reach a critical mass, as they did previously with testing. Once that point was reached, adoption only accelerated until the practice became almost universally accepted.

Docker, Docker, Docker

It might be controversial to claim that Docker will come to dominate deployment, but it’s a claim I’m willing to put my money on. The reasons why are uniformity and simplicity.

Let’s explore those points in some further depth. A typical, even clichéd, response over the years when something went wrong in a deployed application was “it works on my machine.” While that may have been true, it never helped the user who was blocked from working or who’d lost an hour or more of work.

To that end, a variety of solutions were created that sought to minimize the difference between all environments involved in a software solution, whether that was development, testing, staging, or production. Of those, some of the most notable are Vagrant, Chef, Puppet, Salt, and Ansible.

However, even the simplest of these, Ansible, still requires effort, knowledge, and time to use effectively. Yes, they create idempotent environments, ones which can be versioned along with the code that they support. But they’re still complex setups, and builds are both resource- and time-intensive.

Contrast that with the solution — and model — that Docker provides. Here’s an example of a docker-compose.yml file for building a set of Docker containers to support a rudimentary application (you can see the complete configuration on GitHub).

version: '2'

services:
    # Web server: serves the site on host port 8080 and hands PHP requests
    # to the php container below.
    nginx:
        image: nginx
        ports:
            - 8080:80
        volumes:
            - ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
        volumes_from:
            - php

    # PHP-FPM: built from the local Dockerfile, with the project mounted in
    # so code changes are picked up without rebuilding the image.
    php:
        build: ./docker/php/
        expose:
            - 9000
        volumes:
            - .:/var/www/html
Contrast that with an equivalent, well-structured Ansible configuration. I think you can see where I’m going with this. The Docker configuration is significantly smaller and easier both to understand and to maintain.
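To make that contrast concrete, here is roughly what a pared-down fragment of such a playbook might look like, just enough to install and configure nginx and PHP-FPM (a sketch only; the package names, template path, and service names are assumptions, and a real setup would also need an inventory, variables, and the application deployment itself):

# provision.yml: hypothetical fragment provisioning the same nginx/PHP pair.
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx and PHP-FPM
      apt:
        name:
          - nginx
          - php-fpm      # assumed package name; varies by distribution
        state: present

    - name: Install the nginx virtual host configuration
      template:
        src: templates/default.conf.j2
        dest: /etc/nginx/conf.d/default.conf
      notify: reload nginx

    - name: Ensure both services are running and enabled
      service:
        name: "{{ item }}"
        state: started
        enabled: true
      loop:
        - nginx
        - php-fpm        # assumed service name; varies by distribution

  handlers:
    - name: reload nginx
      service:
        name: nginx
        state: reloaded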

What’s more, and as the presentation The Docker ecosystem and the future of application deployment points out, building environments, whether locally or remotely, with Docker is a lot faster than with virtual machines. It also avoids configurations that are (operating system) distribution- or language-specific.
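The same model also scales gracefully as a stack grows. Adding, say, the database and caching servers mentioned earlier is only a few more lines in the services section above (a sketch assuming the official mysql and redis images; the credentials and paths are placeholders):

    mysql:
        image: mysql:5.7
        environment:
            MYSQL_ROOT_PASSWORD: secret        # placeholder; use proper secrets handling in practice
            MYSQL_DATABASE: app
        volumes:
            - ./data/mysql:/var/lib/mysql      # keep the data outside the container

    redis:
        image: redis:alpine

Each additional concern stays a small, declarative block rather than another page of provisioning code.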

To be fair, in Docker’s earlier days there wasn’t a clear path from development to production. But in recent years, as the adoption of Docker has increased, the toolchain has matured, and the number of commercial vendors supporting it has grown, that lack of clarity has become all but a thing of the past.

Given that, I see Docker as a game changer for deployment in the years to come. It does have some detractors who have leveled valid criticism at it. But I see the foundation for these criticisms continuing to erode.

A Multitude of Vendors and Tools

If history is anything to go by, the market shouldn’t be able to sustain a multitude of vendors and tools. In most other industries, there’s always an initial explosion of interest, followed by an explosion of companies to support that interest.

But gradually some players consume others until only a few are left. This has held true in mass media, shipping, carmaking, hardware, and even in software’s earlier days. It wasn’t all that long ago that companies such as Microsoft and Oracle were consuming any number of smaller companies, even when the synergies weren’t there.

But in this particular software niche, I don’t see it holding true as it has previously. It’s my firm belief that it cannot, primarily because of the enormous amount of variation and specialization of software being created.

What’s more, as software is not bound by the hard realities and costs of other industries, an almost unthinkable array and variety of software can be created. That software will have competing demands and needs, and won’t fit into a one-size-fits-all model. Given that, I don’t see how any one vendor, or even a small group of vendors, could ever hope to cater to all of it. I’m not saying it can’t happen, but I highly doubt it will.

Having said that, I do see a certain level of commoditization happening. I see vendors coalescing around key areas and specialties. I see there being a few key players in the relevant spaces, such as storage, deployment, logging, caching, queueing, etc.

But I don’t see a future where only a handful of vendors cover everything. While some companies have tried (and continue to try) to set the terms and conditions for how software is created, they’ve never succeeded, not in the long term. Perhaps that’s because software development came out of the hacker mindset, or because software is so malleable.

But software is also a tool to solve a need, and people will always have needs to solve. Those needs, while sometimes similar, will never be the same. Given that, there will always be a need for variety and diversity of solutions on offer.

In Conclusion

To sum up, these are my four predictions for the future of the deployment space. The software we’re writing (specifically in the web space) isn’t made up of simple pages with dates and copyright notices anymore. It’s made up of complex, often sophisticated applications that perform all manner of tasks.

From tools such as Zoho Books and FreeAgent, which help us manage our clients and accounts, to mytaxi and Lyft for finding a ride, the software that we’re writing is growing ever more complex, composed of any number of components and backend services.

Given that, the complexity of deploying changes, bug fixes, and releases, and the demands on us to do so, are also growing. Yet the available tools have matured and improved over the years as well.

It’s for these reasons that I believe that the future of the deployment space is a combination of processes, techniques, and tooling to make our lives easier and less error prone.

Do you agree or disagree? What does your crystal ball (or, more importantly, your experience) tell you? Do you have some unique insights to share?

This article wouldn’t have been possible without the input from countless colleagues and friends across the wider software development community. So thank you to all those whom I bugged and hassled for their input.

Reference: Where Is the Deployment Space Heading? from our WCG partner Matthew Setter at the Codeship Blog.

Matthew Setter

Matthew Setter is a developer and technical writer. He creates web-based applications and technical content that engage developers with platforms, technologies, applications, and tools.