Sea Change Series: Orchestration Software in the Mega Data Center

  • ID: 4226677
  • Report
  • Region: Global
  • 115 Pages
  • Wintergreen Research, Inc.
Mega Data Center Depends on Orchestration Software to Control Operations, Implement Microservices

FEATURED COMPANIES

  • Amazon
  • Cisco
  • Facebook
  • Google
  • IBM
  • Mesosphere
  • Microsoft
  • Red Hat

The 2017 module has 115 pages and 74 tables and figures. Orchestration software in the mega data center ties together a fabric architecture that fills an entire building with 100,000 processors and 100,001 switches. The mega data center described in the study is effective because it leverages economies of scale. This orchestration software infrastructure module is part of a longer study that addresses the business issues connected with data center modernization. The larger study comprises 26 modules, each a detailed analysis of how new infrastructure layers will work to support the management of vast quantities of data.

Business growth depends on technology spending. Intelligent, automated processes, not manual labor, are what speed business growth. We have had a situation in the data center where 93% of spending goes just to keeping current systems running, many of them plagued by manual input. Mega data centers change that pattern of manual IT process.

The Internet has grown by a factor of 100 over the past 10 years. To accommodate that growth, mega data centers have evolved to provide processing at scale. Facebook, for one, has increased its corporate data center compute capacity by a factor of 1,000, virtually eliminating manual process. Orchestration software is a key aspect of that change. To meet the demands on the Internet over the next 10 years, companies with that capacity need to increase it by the same amount again, while other companies struggle to catch up. Nobody yet knows how to achieve another thousandfold increase in compute capacity.
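
The growth arithmetic can be checked directly. The short calculation below is an illustrative sketch, not taken from the study; it shows the compound annual growth rates implied by a 100x increase over 10 years and by a further 1,000x increase over the next 10.

    # Illustrative arithmetic only: implied compound annual growth factors.
    def implied_annual_growth(total_factor, years):
        """Return the annual growth factor implied by a total increase over N years."""
        return total_factor ** (1.0 / years)

    print(implied_annual_growth(100, 10))    # ~1.58, about 58% growth per year
    print(implied_annual_growth(1000, 10))   # ~2.00, roughly doubling every year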

Realigning data center cost structures is a core job of orchestration software. Enterprise data centers and many cloud infrastructure operations share the problem of being mired in administrative expense. Containers address that issue by enabling vastly more efficient operation of data center infrastructure.

According to the lead author of the team that prepared the study, "The only way to realign cost structure is to automate infrastructure management and orchestration. Mega data centers automate server and connectivity management using orchestration software to manage multiple application containers. Other systems automate switching and storage, along with hypervisor, operating system, and virtual machine provisioning."
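
As a rough illustration of what such orchestration automates, the sketch below shows the kind of desired-state reconciliation loop orchestration software runs to keep application containers at their target counts. The service names and instance counts are hypothetical, and this is a generic sketch rather than the specific software described in the study.

    # Minimal sketch of an orchestration reconciliation loop (hypothetical data).
    # Desired state: how many container instances each service should run.
    desired_state = {"web": 4, "api": 6, "cache": 2}

    # Actual state: a hypothetical snapshot of what is currently running.
    actual_state = {"web": 3, "api": 6, "cache": 0}

    def reconcile(desired, actual):
        """Compute the container starts and stops needed to reach the desired state."""
        actions = []
        for service, want in desired.items():
            have = actual.get(service, 0)
            if have < want:
                actions.append(("start", service, want - have))
            elif have > want:
                actions.append(("stop", service, have - want))
        return actions

    for verb, service, count in reconcile(desired_state, actual_state):
        print(f"{verb} {count} container(s) for {service}")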

As IT relies more on virtualization and cloud mega data center computing, the physical infrastructure must be flexible and agile enough to support the virtual infrastructure. Comprehensive infrastructure management and orchestration is essential.

The enterprise data center has become a bottleneck and needs to be replaced. Category 5 and Category 6 Ethernet cable runs throughout existing enterprise data centers, and servers transporting data over that cable cannot keep pace with the data flowing through the data center the way optical cable and optical transceivers can. For context, Cat 5e copper tops out at 1 Gb/s Ethernet and Cat 6 reaches 10 Gb/s only over short runs of roughly 55 meters, while the optical links used in mega data center fabrics commonly run at 40 Gb/s or 100 Gb/s per port. Because both the cabling and the equipment that operates at those copper speeds are too slow for the data volumes of the new digital age, the entire installed base of enterprise data centers is a bottleneck.

Mobile data traffic is set to increase by a factor of eight between 2015 and 2020. Growth is anticipated at 53 percent per year, faster than systems revenue or industry revenue.
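
A quick consistency check on those two figures, as illustrative arithmetic only:

    # 53% annual growth compounded over the five years from 2015 to 2020.
    print(1.53 ** 5)   # ~8.4, consistent with roughly an eightfold increase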

The theme of this study is that the pace of data expansion creates the need for more modern means of managing data. Some companies are doing a better job than others of adapting their IT infrastructure to the influx of data.

The four superstar companies that leverage IT to achieve growth, Microsoft, Google, Facebook, and the leader AWS, all use Clos architecture. What is significant is that systems have to hit a certain scale before Clos networks work. Clos networks are what work now for flexibility and for supporting innovation in an affordable manner. There is no dipping a toe in to try the system and see if it will work; it will not, and then IT says, “We tried that, we failed.” What the executive needs to understand is that scale matters. A little mega data center does not exist. Only scale works.
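
A rough sense of why scale matters in a Clos design can be taken from textbook fabric arithmetic. The sketch below uses the standard non-blocking leaf-spine and three-tier fat-tree formulas, not the specific Microsoft, Google, Facebook, or AWS designs, and the switch radix values are illustrative assumptions.

    # Generic Clos/fat-tree capacity arithmetic (textbook formulas, illustrative only).
    def leaf_spine_hosts(radix):
        """Non-blocking two-tier leaf-spine: half of each leaf's ports face servers,
        half face spines, with up to `radix` leaves, so hosts = radix * radix / 2."""
        return radix * radix // 2

    def fat_tree_hosts(radix):
        """Non-blocking three-tier fat-tree built from identical switches supports
        radix**3 / 4 hosts (the classic k-ary fat-tree result)."""
        return radix ** 3 // 4

    for radix in (32, 48, 64):
        print(radix, leaf_spine_hosts(radix), fat_tree_hosts(radix))
    # At radix 48: 1,152 hosts in two tiers versus 27,648 in three tiers --
    # the payoff of the Clos approach only appears at building scale.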

Many companies are using digital technology to create market disruption. Amazon, Uber, Google, IBM, and Microsoft represent companies using effective strategic positioning that protects the security of their data. As entire industries shift to the digital world, once-buoyant companies are threatened with disappearing. Digital transformation is an approach that enables organizations to drive changes in their business models and ecosystems by leveraging cloud computing, not just hyperscale systems but mega data centers. Just as robots make work more automated, so cloud-based communications systems implement the IoT digital connectivity transformation.


Sea Change Series: Scale in the Mega Data Center Executive Summary

  • Amazon, Google, Microsoft, Facebook
  • IT Is Better When New Sets Of Servers Can Be Spun Up With The Push Of A Button
  • Aim to Realign IT Cost Structure
  • Scale Matters
  • Table of Contents

1. Effect of Scale in the Mega Data Center
1.1 Facebook Mega Datacenter Physical Infrastructure
1.1.1 Facebook Automation of Mega Data Center Process
1.1.2 Facebook Altoona Data Center Networking Fabric
1.1.3 Facebook Altoona Cloud Mega Data Center
1.1.4 Facebook Altoona Data Center Innovative Networking Fabric Depends on Scale
1.1.5 Facebook Fabric Operates Inside the Data Center
1.1.6 Facebook Fabric
1.1.7 Exchange Of Data Between Servers Represents A Complex Automation Of Process

2. Applications Customized For Each User
2.1 Customized Back-End Service Tiers And Applications
2.2 Machine-To-Machine Management of Traffic Growth
2.2.1 Facebook Data Center Fabric Network Topology
2.2.2 Building-Wide Connectivity
2.3 Highly Modular Design Allows Users To Quickly Scale Capacity In Any Dimension
2.3.1 Back-End Service Tiers And Applications
2.3.2 Scaling Up As a Basic Function Of The Mega Data Center Network
2.3.3 Facebook Fabric Next-Generation Data Center Network Design: Pod Unit of Network

3. Mega Data Center Server Pods
3.1 Server Pods Permit An Architecture To Implement High-Performance Connectivity
3.1.1 Facebook Sample Pod: Unit of Network
3.1.2 Non-Blocking Network Architecture
3.2 Data Center Auto Discovery
3.3 Facebook Large-Scale Network
3.3.1 Rapid Deployment Architecture
3.3.2 Facebook Expedites Provisioning And Changes

4. Google Mega Data Center Scale
4.1 Google Douglas County Mega Data Center
4.1.1 Google Data Center Efficiency Measurements
4.1.2 Google Programmable Access To Network Stack
4.1.3 Google Software Defined Networking (SDN) Supports Scale and Automation
4.1.4 Google Compute Engine Load Balancing
4.1.5 Google Compute Engine Load Balanced Requests Architecture
4.1.6 Google Compute Engine Load Balancing Scaling
4.2 Google Switches Provide Scale-Out: Server And Storage Expansion
4.2.1 Google Uses Switches Deployed in Fabrics
4.2.2 Google Mega Data Center Multi-pathing
4.2.3 Google Mega Data Center Multipathing: Routing Destinations
4.2.4 Google Clos Topology Network Capacity Scalability
4.2.5 Google Aggregation Switches Are Lashed Together Through a Set Of Non-Blocking Spine Switches
4.3 Google Network Called Jupiter

5. Microsoft Mega Data Center Scale
5.1 Microsoft Cloud Data Center Multi-Tenant Containers
5.2 Microsoft Azure Running Docker Containers
5.2.1 Microsoft Data Center, Dublin, 550,000 Sf
5.2.2 Microsoft Builds Intelligent Cloud Platform
5.3 Microsoft Server Products And Cloud Services
5.3.1 Microsoft Crafts Homegrown Linux For Azure Switches
5.3.2 Microsoft Azure Has Scale
5.4 Microsoft Azure Stack Hardware Foundation
5.5 Microsoft Azure Stack Key Systems Partners: Cisco Systems, Lenovo, Fujitsu, and NEC
5.6 Microsoft Gradual Transformation From A Platform Cloud To A Broader Offering Leveraging Economies of Scale
5.7 Microsoft Contributing to Open Systems
5.8 Microsoft Mega Data Center Supply Chain
5.9 Microsoft Leverages Open Compute Project to Bring Benefit to Enterprise Customers
5.9.1 Microsoft Assists Open Compute to Close The Loop On The Hardware Side
5.10 Microsoft Project Olympus Modular And Flexible
5.11 Microsoft Azure
5.11.1 Microsoft Azure Active Directory Has Synchronization
5.12 Microsoft Azure Has Scale

6. Mega Data Center Different from the Hyperscale Cloud
6.1 Hyperscale Cloud Computing Addresses The Issues Of Economies Of Scale
6.1.1 Mega Data Center Scaling
6.1.2 Mega Data Center Automatic Rules and Push-Button Actions
6.1.3 Keep It Simple Principle

7. Amazon Capex for Cloud 2.0 Mega Data Centers
7.1 Amazon Capex Dedicated To Support Datacenter
7.2 AWS Server Scale
7.3 Amazon North America
7.4 Amazon North America List of Locations
7.5 Innovation a Core Effort for Amazon
7.6 Amazon Offers the Richest Services Set
7.6.1 AWS Server Scale
7.7 On AWS, Customers Architect Their Applications
7.8 AWS Scale to Address Network Bottleneck
7.9 Networking A Concern for AWS Solved by Scale
7.10 AWS Regions and Network Scale
7.11 AWS Datacenter Bandwidth
7.12 Amazon (AWS) Regional Data Center
7.12.1 Map of Amazon Web Service Global Infrastructure
7.12.2 Rows of Servers Inside an Amazon (AWS) Data Center
7.12.3 Amazon Capex for Mega Data Centers
7.13 Amazon Addresses Enterprise Cloud Market, Partnering With VMware
7.13.1 Making Individual Circuits And Devices Unimportant Is A Primary Aim Of Fabric Architecture

8. Clos Network Architecture Topology
8.1 Google Clos Network Architecture Topology Allows Building a Non-Blocking Network Using Small Switches: You Have To Hit A Certain Scale Before Clos Networks Work
8.1.1 Clos Network
8.2 Digital Data Expanding Exponentially, Global IP Traffic Passes Zettabyte (1000 Exabytes) Threshold

9. Summary: Economies of Scale

List of Figures:
Figure 1. Slow Growth Companies Do Not Have Data Center Scale
Figure 2. Mega Data Center Fabric Implementation
Figure 3. Facebook Schematic Fabric-Optimized Datacenter Physical Topology
Figure 4. Facebook Automation of Mega Data Center Process
Figure 5. Facebook Altoona Positioning Of Global Infrastructure
Figure 6. Facebook Equal Performance Paths Between Servers
Figure 7. Facebook Data Center Fabric Depends on Scale
Figure 8. Facebook Fabric Operates Inside the Data Center, Fabric Is The Whole Data Center
Figure 9. Fabric Switches and Top of Rack Switches, Facebook Took a Disaggregated Approach
Figure 10. Exchange Of Data Between Servers Represents A Complex Automation Of Process
Figure 11. Samsung Galaxy J3
Figure 12. Facebook Back-End Service Tiers And Applications Account for Machine-To-Machine Traffic Growth
Figure 1. Facebook Data Center Fabric Network Topology
Figure 13. Implementing building-wide connectivity
Figure 14. Modular Design Allows Users To Quickly Scale Capacity In Any Dimension
Figure 15. Facebook Back-End Service Tiers And Applications Functions
Figure 16. Using Fabric to Scale Capacity
Figure 17. Facebook Fabric: Pod Unit of Network
Figure 18. Server Pods Permit An Architecture Able To Implement Uniform High-Performance Connectivity
Figure 19. Non-Blocking Network Architecture
Figure 20. Facebook Automation of Cloud 2.0 Mega Data Center Process
Figure 21. Facebook Creating a Modular Cloud 2.0 Mega Data Center Solution
Figure 22. Facebook Cloud 2.0 Mega Data Center Fabric High-Level Settings Components
Figure 23. Facebook Mega Data Center Fabric Unattended Mode
Figure 24. Facebook Data Center Auto Discovery Functions
Figure 25. Facebook Automated Process Rapid Deployment Architecture
Figure 26. Google Douglas County Cloud 2.0 Mega Data Center
Figure 27. Google Data Center Efficiency Measurements
Figure 28. Google Andromeda Cloud High-Level Architecture
Figure 29. Google Andromeda Software Defined Networking (SDN)-Based Substrate Functions
Figure 30. Google Compute Engine Load Balancing Functions
Figure 31. Google Compute Engine Load Balanced Requests Architecture
Figure 32. Google Compute Engine Load Balancing Scaling
Figure 33. Google Traffic Generated by Data Center Servers
Figure 34. Google Mega Data Center Multipathing: Implementing Lots And Lots Of Paths Between Each Source And Destination
Figure 35. Google Mega Data Center Multipathing: Routing Destinations
Figure 36. Google Builds Own Network Switches And Software
Figure 37. Google Clos Topology Network Capacity Scalability
Figure 38. Schematic fabric-optimized Facebook datacenter physical topology
Figure 39. Google Jupiter Network Delivers 1.3 Pb/Sec Of Aggregate Bisection Bandwidth Across A Datacenter
Figure 40. Microsoft Azure Cloud Software Stack Hyper-V hypervisor
Figure 41. Microsoft Azure Running Docker Containers
Figure 42. Microsoft Data Center, Dublin, 550,000 Sf
Figure 43. Microsoft Azure Stack Block Diagram
Figure 44. Microsoft Azure Stack Architecture
Figure 45. Microsoft Data Centers
Figure 46. Microsoft Open Hardware Design: Project Olympus
Figure 47. Microsoft Open Compute Closes That Loop On The Hardware Side
Figure 48. Microsoft Olympus Product
Figure 49. Microsoft Azure Has Scale
Figure 50. Mega Data Center Cloud vs. Hyperscale Cloud
Figure 51. Amazon Web Services
Figure 52. Amazon North America Map
Figure 53. Amazon North America List of Locations
Figure 54. Woods Hole Bottleneck: Google Addresses Network Bottleneck with Scale
Figure 55. Example of AWS Region
Figure 56. Example of AWS Availability Zone
Figure 57. Example of AWS Data Center
Figure 58. AWS Network Latency and Variability
Figure 59. Amazon (AWS) Regional Data Center
Figure 60. A Map of Amazon Web Service Global Infrastructure
Figure 61. Rows of Servers Inside an Amazon (AWS) Data Center
Figure 62. Clos Network
Figure 63. Data Center Technology Shifting
Figure 64. Data Center Technology Shift

The methodology is available for download below.