Welcome to Kenny's Blog

Hyperscale Computing

Hyperscale computing is a distributed infrastructure approach that can quickly accommodate increased demand for internet-facing and back-end computing resources without requiring additional physical space, cooling or electrical power. Hyperscale computing is characterized by standardization, automation, redundancy, high-performance computing (HPC) and high availability (HA). The term is often associated with cloud computing and the very large data centers operated by companies such as Facebook, Google, Amazon and Netflix.
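The elasticity described above, absorbing demand spikes without adding physical capacity, is typically driven by a simple proportional scaling rule. Here is a minimal sketch of such a rule; the function name, target utilization and thresholds are illustrative assumptions, not any particular provider's API:

```python
import math

def desired_replicas(current_replicas, cpu_utilization, target=0.6):
    """Proportional autoscaling rule (illustrative, hypothetical target):
    resize the fleet so average CPU utilization moves toward the target."""
    if current_replicas <= 0:
        raise ValueError("need at least one replica")
    # e.g. 10 replicas at 90% CPU with a 60% target -> ceil(10 * 0.9 / 0.6) = 15
    return math.ceil(current_replicas * cpu_utilization / target)
```

For example, a service running hot at 90% CPU on 10 instances would be scaled out to 15, while one idling at 30% on 15 instances would be scaled in to 8. Scaling on a ratio rather than fixed steps lets the same rule work at both corporate and hyperscale fleet sizes.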

While a corporate data center might support hundreds of physical servers and thousands of virtual machines (VMs), a hyperscale data center needs to support thousands of physical servers and millions of virtual machines. To accommodate such demand, cloud providers like Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP) have developed new infrastructures that maximize hardware density while minimizing the cost of cooling and administrative overhead.
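The scale gap in the paragraph above can be made concrete with a back-of-envelope consolidation ratio (VMs per physical server). The specific counts below are illustrative numbers picked to match the rough "hundreds/thousands/millions" figures in the text, not measurements from any real data center:

```python
# Illustrative counts matching the rough figures in the text
# (not measurements from any real data center).
corporate_servers, corporate_vms = 500, 5_000          # "hundreds" / "thousands"
hyperscale_servers, hyperscale_vms = 5_000, 2_000_000  # "thousands" / "millions"

# Consolidation ratio: virtual machines per physical server
corporate_density = corporate_vms / corporate_servers      # 10.0
hyperscale_density = hyperscale_vms / hyperscale_servers   # 400.0

print(corporate_density, hyperscale_density)
```

Even with these rough numbers, the density per physical server differs by more than an order of magnitude, which is why hyperscale operators invest so heavily in hardware density, cooling efficiency and automation.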

There is a lot of interest in hyperscale computing right now because the open source software and architectural changes created for hyperscale data centers are expected to trickle down to smaller data centers, helping them use physical space more efficiently, consume less power and respond more quickly to users' needs. Hyperscale innovations currently being adopted by smaller organizations include software-defined networking (SDN), converged infrastructure and microsegmentation.
