Early adopter or laggard? Cutting edge or legacy? Fast IT or slow IT? It’s never been a binary choice, but with the pace of IT evolution continuing to accelerate, it’s never been more important to plan for the future while still extracting maximum benefit from past investments.
What characteristics might define ‘fast IT’? Here are just a few suggestions: new tools to enable rapid IT provisioning; simplified, automated management, so that more systems can be controlled by fewer admin staff; flexible resources to handle a broader set of workloads, including ‘cloud-native’ applications; ubiquitous access to data (both for data-intensive applications and to take advantage of operational analytics); and built-in security.
It almost goes without saying that virtualized infrastructure is the foundation and that cloud computing – private, public and hybrid in various proportions – is the destination. As for the physical infrastructure, there must be plenty of CPU power, tiered storage and (especially) high-speed system interconnects to link it all together. Fast IT also implies the coexistence of its opposite: ‘slow IT’ – legacy, traditional or, for the polite, ‘heritage.’ Much of this existing technology is likely to remain in place for many years to come and will have to be managed efficiently alongside the new.
All of this is having a big influence on system design and product packages. We’re seeing stripped-down
servers used as standard building blocks for hyperscale architectures, inspired by high-performance computing and web-scale workloads. And at the opposite end of the scale are converged infrastructure and hyperconverged systems, where multiple elements are factory-integrated to make procurement and lifecycle management easier for the customer. New software-defined datacenter layers span the compute, storage and networking domains, extending system capabilities at the virtualization, data and management layers.
So where to begin? This paper provides concise overviews of four still-emerging areas where datacenter modernization projects can have the most impact – not just on costs, but on agility, resilience, security,
efficiency and long-term continuity. Should you move to public cloud, stick with private cloud or adopt a hybrid model? How can you evolve your existing physical infrastructure to support a more cloud-like approach? What can DevOps or the adoption of containers do to transform the IT department with an IT-as-a-service mind-set? And is the relational database really the right answer to every question?