The IT-DevOps Life Cycle is Like a Pyramid That Keeps Growing

Don’t let anyone tell you how much easier IT life is these days. It is, in many ways, more fun and productive … but it is not easier. In fact, we’re doing more with less, cubed, at this point. My point is this: think of the life cycle as a pyramid. The base is the build++ chain, from VCS to public-facing active protection. The height is the new technologies and systems being bolted on daily: SaaS tools someone in accounting just signed up for, or the new architectures a dev team in a subsidiary just decided to adopt (but which will eventually have to be maintained by IT). And the depth is the completely new capabilities required by market forces or regulation; SBOM is the poster child for this group, since suddenly just about every vendor is handing you one as part of their product.

No matter which of these axes is growing, the workload generally grows commensurately. Think about it: If a new product is implemented, it impacts IT’s workload. If a new technology that solves a problem is implemented, it, too, impacts workload. And if a new capability in an existing product is dictated by the compliance team for any reason, that also increases workload. And that’s just growth along the existing axes; it doesn’t include the fact that most orgs are shipping newly developed applications on a pretty regular basis these days.

Automation has helped, but it hasn’t solved the problem yet. I believe it can, given time and focus, but for the most part, the demand for new technology to solve age-old problems (“We can’t effectively test” being met with “Here is automated test generation, execution and results filtering”) has produced a net increase in workload. The increase is far smaller than it might have been, but it is still an increase. This is true across IT. The test example above applies to app dev and security, which now have result sets that did not exist six years ago because producing them was too expensive in staff hours. It also applies to ops, because the processing power to get those tests done (and it is a lot with large test environments) is ops’ responsibility.

Painting a bleak picture? Nope. Making certain we’re realistic. Trying to ensure that we consider how much we are already doing, and take the chance to implement actual time savers. For example, applying that same test automation to testing that was already being done (for most orgs, I’m thinking security testing) reduces the cost of getting it done. Using SBOM for SCA to feed existing compliance work saves a lot of time generating that information. Using AI-assisted code generation shortens coding time and (arguably at this point, but inevitably, given the fullness of time) improves code quality, reducing rework, which is costly even in an agile environment.
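To make the SBOM-for-SCA point concrete, here is a minimal sketch of pulling the component list out of a vendor-supplied CycloneDX JSON SBOM and bucketing it against a compliance team’s flagged-component list. The file name, the flagged list and the report format are all hypothetical placeholders; the point is simply that data the vendor already hands you can feed existing compliance reporting with very little glue code.

```python
import json
from pathlib import Path

# Hypothetical inputs: an SBOM a vendor shipped with their product (CycloneDX
# JSON format) and a simple flag list maintained by the compliance team.
SBOM_PATH = Path("vendor-product.cdx.json")
FLAGGED_COMPONENTS = {
    # (name, version) pairs the compliance team has already flagged
    ("log4j-core", "2.14.1"),
}


def components_from_sbom(path: Path):
    """Yield (name, version) for every component listed in a CycloneDX SBOM."""
    doc = json.loads(path.read_text())
    for component in doc.get("components", []):
        yield component.get("name", "unknown"), component.get("version", "unknown")


def compliance_report(path: Path):
    """Split the SBOM's components into flagged and clear buckets."""
    flagged, clear = [], []
    for name, version in components_from_sbom(path):
        bucket = flagged if (name, version) in FLAGGED_COMPONENTS else clear
        bucket.append((name, version))
    return flagged, clear


if __name__ == "__main__":
    flagged, clear = compliance_report(SBOM_PATH)
    print(f"{len(clear)} components clear, {len(flagged)} need review")
    for name, version in flagged:
        print(f"  review: {name} {version}")
```

Nothing here is new work; it just reuses an artifact you are already receiving, which is exactly the kind of backward application of existing capability that saves time.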

We are growing our capabilities almost daily. I’m just suggesting that, in the process, we backward-apply the good stuff to reduce existing overhead. And as always, this big, complex beast exists and keeps clipping along because you’re there, keeping it fed and stable. Keep rocking it, and save yourself some time by using new technology smartly; there is plenty more to do beyond whatever you streamline.