Ten trends redefining enterprise IT infrastructure
1. ‘As-a-service’ consumption for everything from software to hardware. As they move from in-house infrastructure to the cloud, enterprise buyers increasingly prefer consumption-based pricing models. This shift from capital expenditures to operational expenditures reduces risk, frees up capital, and provides greater flexibility.
From 2015 through 2016, revenues for infrastructure as a service (IaaS) and platform as a service (PaaS) rose by 53 percent, making them the highest-growth segments in cloud and infrastructure services.1 Considering that a unit of compute or storage in the cloud can be 40 to 50 percent cheaper in total cost of ownership than a unit on premises, the shift to as-a-service models is striking. In addition to moving from on premises to the cloud, IT providers and customers are experimenting with annuity-based payments for traditional hardware.
2. The public cloud goes mainstream. While companies have been moving their workloads to the public cloud for years, there has recently been a sea change at large enterprises. Capital One, GE, Netflix, Time Inc., and many others have drastically reduced or even eliminated their private data centers, moving their operations to the cloud.2 In fact, cloud providers are expected to account for about 80 percent of shipped server and storage capacity by 2018.
Amazon is the leader in IaaS, with about 40 percent market share.3 Microsoft is a clear second, followed by Google and IBM. Together these players account for approximately 65 percent of the IaaS market today.4 With the decline of on-premises data centers, they could account for almost half of all IT infrastructure provisioning by 2020. If that is the case, only companies with significant capital-investment capabilities could compete with them. One potential candidate would be Alibaba, which has recently experienced triple-digit year-over-year cloud-related revenue growth, driven largely by cloud adoption in China.5
3. Increased use of open-source offerings, up and down the stack. Approximately 65 percent of companies increased their use of open-source software from 2015 to 2016, according to the 2016 Future of Open Source Survey conducted by Black Duck and North Bridge. Major IT providers now rely on programs such as Apache Spark, Kubernetes, and OpenShift. Moreover, Airbnb, Airbus, eBay, Intel, and Qualcomm are among the many large companies using TensorFlow, Google’s open-source library of machine-learning code.6 Facebook’s Open Compute Project, which aims to make hardware more efficient, flexible, and scalable, has helped extend the open-source movement into the data centers of companies that are participating members, such as AT&T, Deutsche Telekom, and Goldman Sachs.7
4. Cybersecurity remains a major concern. Cybersecurity continues to be a top C-suite and board-level priority. Across all industries, attacks are growing in number and complexity, with 80 percent of technology executives reporting that their organizations are struggling to mount a solid defense. Many companies cannot recruit the internal talent they need because of a shortage of cybersecurity experts, leading them to invest in managed security services. Cloud-based security offerings are also becoming more attractive to companies, with McKinsey estimating that they will account for 60 percent of security products by 2020, up from 10 percent in 2015.
5. Mainstream comfort with ‘white box’ hardware. Traditionally, IT infrastructure providers have relied on assembling branded systems for their server, storage, and networking offerings. To do so, they outsourced hardware manufacturing to original-design manufacturers (ODMs). However, this model is becoming obsolete because customers are increasingly unwilling to pay for assembly. Instead, customers go directly to ODMs, using designs for servers obtained from sources such as Facebook’s Open Compute Project to customize their data-center configurations. Open Compute Project member companies that have taken this route include IBM, Fidelity Investments, and Verizon.8 As discussed later in this article, many of these ODMs are located in Asia, which is driving more hardware business to that region. By 2020, IDC estimates that “self-built” servers will comprise half the hyperscale-server market.
6. Internet of Things business applications are ready for adoption. McKinsey estimates that business-to-business applications will account for nearly 70 percent of the value that will flow from the Internet of Things (IoT) in the next ten years. According to our 2017 Enterprise IoT Executive Survey, 96 percent of companies expect to increase their IoT spending over the next three years, with some planning to devote as much as a quarter of their IT expenditures to IoT-related capabilities. The most popular use cases for enterprise IoT involve increasing visibility into operations, optimizing operational tasks, or assisting with the development of new business models. The uptick in adoption is even occurring in industries that have traditionally been slow to adopt new technologies, such as oil and gas. The growth of enterprise IoT will vastly increase demand for compute and storage infrastructure, augmenting demand for hyperscale resources and IoT-specific PaaS solutions.
BI Intelligence predicts that more than five billion IoT devices, such as inventory-control and safety-monitoring tools, will require edge solutions by 2020 because they must collect and process data in real time.9 Edge solutions allow information processing at the device or gateway level, rather than within the cloud or a data center, reducing both latency and connectivity dependencies. Of the $500 billion in growth expected for IoT through 2020, McKinsey estimates that about 25 percent will be directly related to edge technology. Edge computing will help improve data compression and transfer in the connectivity layer of the technology stack, reducing network bandwidth and making a wider range of IoT applications possible.
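The device-level processing described above can be illustrated with a minimal sketch. This is a hypothetical example, not a reference to any specific IoT platform: a sensor gateway aggregates a window of raw readings locally and forwards only a compact summary, so raw samples never cross the network.

```python
# Minimal sketch of edge-side aggregation (hypothetical device and threshold).
# The gateway processes raw sensor readings locally and sends only a small
# summary upstream, instead of streaming every sample to the cloud.

def summarize_window(readings, alert_threshold=90.0):
    """Aggregate a window of raw readings into a compact payload."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
        # Only anomalous readings are forwarded individually.
        "alerts": [r for r in readings if r > alert_threshold],
    }

# One window of raw samples stays on the device; only the summary goes upstream.
window = [71.2, 70.8, 95.3, 70.9, 71.0]
payload = summarize_window(window)
```

The design choice this illustrates is the bandwidth-latency trade: the gateway reacts to the out-of-range reading immediately, while the cloud receives a payload of a few fields rather than the full sample stream.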
New trends to watch
In addition to the acceleration of familiar trends, several new developments are altering the IT infrastructure landscape for both providers and customers. These include the shift to Asia in hardware, the use of DevOps for software and hardware, container-first architectures, and the growth of artificial intelligence and machine-learning-optimized stacks.
7. The shift of the hardware infrastructure market to Asia. Asian original-equipment manufacturers (OEMs) have been making inroads in the IT infrastructure market dominated by US-based providers. Consider two examples in the server market:
- Huawei plans to shore up its position in the server market by spending about $1 billion of its annual $9 billion R&D budget on equipment for data centers.10
- Lenovo acquired IBM’s x86 server business in 2014, helping to expand its footprint in large enterprises globally.11
An equally important shift involves Asian ODMs, which have also increased their share of the hardware market as white-box systems become more popular. Taiwan-based Quanta Computer’s cloud-computing revenue from server, storage, switch, and IoT devices has been strong. Several Asian ODMs now provide servers to some of the top global hyperscale cloud providers, including Amazon, Facebook, and Google, all of which are investing heavily in expanding their data-center infrastructure.12 As noted earlier, initiatives such as Facebook’s Open Compute Project are accelerating this shift, since they allow members to obtain plans and designs for servers, storage, and networking. Some Asian ODMs are also offering off-the-shelf products based on open-source designs. If current trends continue, Asian ODMs may increase their revenue share of the hardware market two- or threefold by 2020.13
8. DevOps for software and hardware. IT departments must deliver new features faster than ever. Meanwhile, companies now expect greater availability from them: 24-hour coverage, every day of the week. DevOps can help achieve both goals by fostering a high degree of collaboration along the entire IT value chain.
The new DevOps business model extends beyond application development to encompass application operations and IT infrastructure. Within DevOps, all three groups work as one. Many organizations understand the benefits of this model and are moving in this direction. In McKinsey’s 2017 IT-as-a-Service Survey, 80 percent of respondents stated that they had implemented DevOps practices in some part of their organization. In addition, 53 percent of respondents stated that they would apply these practices across their entire organization by 2020, up from 37 percent today.
In keeping with these trends, demand for DevOps talent will surge over the next few years. Companies may have trouble finding staff to fill all roles, since 40 percent of survey respondents stated that a lack of internal talent and skills was the primary factor preventing DevOps from becoming mainstream.
9. Container-first architectures. No longer confined to niche development environments, containers are on the path to overtake virtual machines and become the primary unit of deployment in the cloud. Atlassian’s 2016 report, Software development trends and benchmarks, revealed that 34 percent of software professionals have adopted containerization in their development teams.
What is most remarkable about containerization is the speed of its growth. In RightScale’s 2016 State of the cloud report, only 18 percent of respondents reported deploying containers in production environments. In the 2017 survey, by contrast, respondents stated that Docker was their most frequently used DevOps tool. The growth of containerization has been occurring in tandem with the proliferation of microservice architecture—the development of software applications in small, independent units. As developers refine microservices, they are also addressing many of the challenges that slowed containerization’s growth, including inadequate security, problems with management and orchestration, and limited scalability.
In parallel with these trends, the next logical step in application atomization is emerging. It involves the abstraction of compute resources, in which functions become a unit of deployment, or function as a service. This will eliminate the need to provision infrastructure or manage compute resources for these functions.
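The function-as-a-service idea above can be sketched in a few lines. This is a hedged illustration, using a hypothetical event shape and handler signature loosely modeled on common FaaS platforms, not any specific provider's API: the unit of deployment is a single stateless function that the platform invokes once per event, with no servers for the developer to provision.

```python
# Sketch of a function as the unit of deployment (hypothetical event shape
# and handler signature, loosely modeled on common FaaS platforms).
# The platform, not the developer, provisions compute and invokes the
# handler once per incoming event.

def handler(event, context=None):
    """Stateless function: receives an event dict, returns a response."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}"}

# Locally, an invocation is just a function call; in production the platform
# scales handler instances up and down with event volume.
response = handler({"name": "enterprise IT"})
```

Because the handler holds no state between invocations, the platform can run zero, one, or thousands of copies concurrently, which is what removes infrastructure provisioning from the developer's concerns.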
10. Artificial intelligence and machine-learning-optimized stacks. After many years of refinements, artificial intelligence (AI) is delivering benefits to companies across industries.14 Consider, for instance, how AI helps utilities forecast electricity demand, or how it allows automakers to create self-driving cars. Various developments are encouraging this new wave of AI, including increased computation power and the availability of more sophisticated algorithms and models. Perhaps most important, data volume is exploding, with network devices collecting billions of gigabytes every day.
McKinsey Global Institute estimates that the entrepreneurial activity unleashed by AI drew between $26 billion and $39 billion in investment in 2016—three times the amount attracted in 2013. Most AI investment comes from large digital natives, such as Amazon, Baidu, and Google, which are exploring innovations in semiconductors, infrastructure software, and systems. Some companies are building new computing paradigms that incorporate tensor processing units from Google, graphics processing units from Nvidia, and field-programmable gate arrays from Xilinx. The large hyperscale providers are also offering AI and machine-learning capabilities to enterprises through the cloud.
As enterprises gain increased access to leading-edge AI and machine-learning technologies, automation will increase. According to MGI, about half of all the activities people are paid to do in the world’s workforce could be automated, accounting for almost $15 trillion in wages.
The scale of disruption in the technology infrastructure landscape is unprecedented, creating huge opportunities and risks for industry players and their customers. Executives at technology infrastructure companies must drive growth by transforming their portfolios and rethinking their go-to-market strategies. They should also build the fundamental capabilities needed for long-term success, including those related to digitization, analytics, and agile development. All of these ambitious steps will require more capital and capacity, but customers in the new IT infrastructure landscape will reward their efforts.
Posted on November 9, 2017