Cloud Computing: a $200 Billion Market
According to a 2022 Statista report, spending on cloud computing infrastructure services reached $55 billion in the second quarter of 2022 alone. In what is now a $200 billion market (measured over the trailing twelve months), the cloud service providers Amazon and Microsoft have captured over 50% of the market share¹ with their Amazon Web Services and Azure offerings, respectively.
It’s clear that professionals across nearly every industry - and certainly those in technology, DevOps, and data leadership - would benefit from a solid understanding of cloud computing, the benefits it offers, and the changes it introduces to existing systems and processes. Whether you’re managing infrastructure or communicating strategic value to stakeholders, foundational knowledge of cloud models, capabilities, and trade-offs is now essential.
Let’s dive in.
Fundamental Definitions
The National Institute of Standards and Technology (NIST) defines cloud computing as a model enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (such as networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This definition highlights key aspects like shared resources, broad network access, self-service provisioning, rapid elasticity, and measured service.
Almost all modern computing centers on a basic client-server model. The client is the customer's side: a web browser or desktop application that a person interacts with to make requests to servers. The server side belongs to the cloud provider; in the case of Amazon Web Services (AWS), for example, the compute server would be an Amazon Elastic Compute Cloud (EC2) instance. The client makes a request, the server validates that the request is legitimate, and the server returns a response.
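The request/response cycle above can be sketched end-to-end with nothing but the Python standard library. This is a toy illustration of the client-server model, not any cloud provider's API: the handler class, the `/status` path, and the use of a local port are all invented for the demo.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The server validates the request, then returns a response.
        if self.path == "/status":
            body, code = b"ok", 200
        else:
            body, code = b"not found", 404
        self.send_response(code)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo's output quiet
        pass

# The "server" side: listen on a free local port in a background thread.
server = HTTPServer(("127.0.0.1", 0), StatusHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "client" side: make a request and read the server's response.
url = f"http://127.0.0.1:{server.server_port}/status"
with urllib.request.urlopen(url) as resp:
    status, text = resp.status, resp.read().decode()
print(status, text)  # 200 ok
server.shutdown()
```

In a real cloud deployment the server would be a managed compute instance reachable over the internet, but the request-validate-respond cycle is exactly the same.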
Cloud computing offers a massive range of services for every type of business:
- Basic elements like compute, storage, and network security tools.
- Complex solutions like blockchain, machine learning, and artificial intelligence.
- Highly specialized toolsets, from robot development platforms and video production management systems through to satellite ground stations you can rent by the minute.
Architectural Approaches and the Cloud
Existing Architectural Approaches
The industry has generally moved through three fundamental architectural approaches - an evolution that began before the advent of cloud computing: the Big Ball of Mud, the Modular Monolith, and Distributed Architecture.
- A Big Ball of Mud (BBoM) is often considered a lack-of-architecture, characterised by a lack of discernible modularity or structure, where components and data are tightly coupled with many dependencies.
- A Modular Monolith improves upon a BBoM by structuring the application as a single executable composed of modules with high cohesion and low coupling, making it easier to maintain and evolve.
- A Distributed Architecture takes modularity further by composing the application from code modules that are separately deployable workloads, running in separate processes, possibly on different computers, and communicating over a network. This allows parts to be developed, deployed, and run independently.
Cloud computing is not a completely new technology, but is an evolution from earlier computing technologies that has made resources more widely accessible.
However, designing applications for the cloud is significantly different from designing them for traditional IT due to inherent characteristics of the cloud like universal access, shared resources, distributed and elastic computing, multitenancy, and self-service. Many applications running in the cloud today weren’t designed for it, meaning they don’t run as well as they could.
Cloud-Native Applications and Microservice Architectures
This leads to the concept of a Cloud-Native Application, one specifically written or modernized for the cloud to take full advantage of the cloud computing model. A cloud-native architecture fundamentally consists of the application’s custom domain logic separated from reusable services provided by the cloud platform.
This approach is less about where the application is physically located and more about how it is structured and deployed. An application built with cloud-native principles is intended to run well on the cloud and will also generally perform better in traditional IT environments.
Key characteristics of cloud-native applications include being deployed as an Application Package, exposing functionality via a Service API, running as a Stateless Application, being a Replicable Application, using External Configuration, and leveraging Backend Services.
These characteristics align with cloud practices like reliability through redundancy, eventual consistency, generic hardware, application mobility, multitenancy, horizontal scaling, and self-provisioning.
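The External Configuration characteristic is easy to illustrate. In the sketch below (the variable names are illustrative, not a standard), the application reads its settings from an environment mapping rather than hard-coding them, so the same application package runs unchanged in any environment:

```python
import os

def load_config(environ=os.environ):
    # Fall back to safe local defaults when a variable is absent.
    return {
        "database_url": environ.get("DATABASE_URL", "sqlite:///local.db"),
        "max_workers": int(environ.get("MAX_WORKERS", "4")),
    }

# Locally, nothing is set and the defaults apply:
local = load_config({})
# In production, the platform injects real values - no code change needed:
prod = load_config({"DATABASE_URL": "postgres://db.internal/app",
                    "MAX_WORKERS": "16"})
print(local["max_workers"], prod["max_workers"])  # 4 16
```

Keeping configuration outside the application package is also what makes an application replicable: every copy is identical, and only the injected environment differs.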
A Microservices Architecture is a refinement of both Cloud-Native Architecture and Distributed Architecture.
It structures an application as a collection of small independent services that communicate over well-defined APIs and are owned by small, self-contained teams. Each microservice is a cloud-native service, designed to perform a capability in the application’s business domain.
Deployment Models
Across the data and computing industry there are four well-established deployment models that describe how cloud computing can be introduced to, and used by, a business: “Public”, “Private”, “Hybrid”, and “Community”. Each has distinct implications for security, cost, and control.
Public
With a public cloud deployment, cloud-based systems, processes, and services are offered to the public over the internet - making them available to anyone who wishes to purchase them. Those cloud resources are owned and operated by a third-party cloud service provider, with Amazon Web Services, Microsoft Azure, and Google Cloud being the dominant service providers. These services are available to anyone on a pay-as-you-go basis with no up-front capital expenditure, and their delivery over the internet means applications can be quickly provisioned and de-provisioned.
This OpEx (Operational Expenditure) model, where you only pay for the computing resources that you use, offers flexibility and rapid provisioning - resources can be scaled up or down as needed, and costs adjust accordingly. So if you choose to reduce your compute load you’ll watch as your costs drop off - while adding more services to your deployment pushes your operational costs higher.
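A toy calculation makes the point concrete. The hourly rate below is invented, not any provider's real pricing; the shape of the relationship is what matters - cost tracks usage directly:

```python
HOURLY_RATE = 0.10  # assumed $/instance-hour, for illustration only

def monthly_cost(instances, hours=730):
    # 730 is the average number of hours in a month.
    return instances * hours * HOURLY_RATE

print(monthly_cost(10))  # 730.0 - ten instances running all month
print(monthly_cost(3))   # 219.0 - scale down, and the bill falls with you
```

Scaling from ten instances down to three cuts the monthly bill in the same proportion - there is no sunk hardware cost to recover.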
Private
At the other end of the scale to a public cloud deployment is the private cloud. Here, computing resources are used exclusively by users from the purchasing business or organization. The infrastructure may be hosted on-premise or by a third party, but it is not shared with other customers. This model typically follows a CapEx (Capital Expenditure) approach, involving upfront investment in hardware and infrastructure.
While more expensive, this means organizations gain complete control over the resources, security, and compliance of the systems they use. Those systems can be physically located on-site (e.g. in the organization's data center), or they can be hosted by a third-party service provider. You can imagine industries such as banking, insurance, government, and defence making use of private cloud deployments - where the secure handling of highly sensitive data is of the utmost importance.
Hybrid
Hybrid cloud environments combine elements of both public and private clouds into a single computing environment. With the hybrid deployment model, computation, data, and applications can be shared between the public and private clouds. This offers businesses the greatest flexibility to determine where they run their applications.
Organizations can move workloads between environments as needs evolve - for example, using public cloud resources for burst capacity while keeping sensitive workloads in a private cloud. The organization can decide how this plays out in the long term, with the hybrid model offering a path to gradual, planned cloud adoption. And importantly, the business or organization maintains control over security, compliance, and legal requirements.
Community
A community cloud is a deployment model where the cloud infrastructure is shared between several organizations that have common concerns, such as regulatory requirements, security standards, compliance obligations, or shared operational goals.
These organizations form a community and collectively use, manage, and sometimes fund the infrastructure, which may be managed by one or more of the participating organizations, a third party, or a combination of both.
This model offers a balance between the cost efficiency of public cloud services and the control and compliance features of private cloud environments. By pooling resources, community cloud participants can reduce individual investment while ensuring that their shared infrastructure is tailored to their specific needs.
Common use cases include collaborations between government agencies, healthcare institutions, or research organizations, where strict data handling and privacy standards must be observed but full-scale private cloud deployment may be impractical or redundant.
While community clouds are less common than public or private clouds in commercial settings, they offer an effective solution for groups of organizations with aligned missions and trust-based relationships.
Aside: Hybrid versus Community Models
While the community cloud and hybrid cloud models both involve multiple environments or parties, they serve different strategic purposes.
A community cloud is designed for use by several organizations that share common objectives or compliance needs, such as universities, hospitals, or government departments.
In contrast, a hybrid cloud is deployed by a single organization that integrates both public and private cloud environments to optimize flexibility, cost, and control.
The key distinction lies in who is using the cloud (multiple trusted entities vs. a single organization) and why it is being structured that way (shared mission vs. workload distribution).
| Feature | Community Cloud | Hybrid Cloud |
|---|---|---|
| Primary Users | Multiple organizations with shared goals or concerns | A single organization |
| Purpose | Collaboration under common policies or compliance requirements | Combining public and private clouds for operational flexibility |
| Ownership & Governance | Jointly managed by participants or a trusted third party | Managed by a single organization |
| Deployment Style | One shared environment among multiple parties | Integrated environments (e.g., public + private cloud) |
| Common Use Cases | Healthcare consortia, research networks, government agencies | Financial services, retail, enterprises with mixed workloads |
Benefits of Cloud Computing
The main benefits of cloud computing can be summarized by the following six elements:
(1) High availability
Depending on the service-level agreement (SLA) that you choose (as per the deployment model), the principle of high availability means you can expect your cloud-based applications to always be available, providing a continuous, high-quality user experience with no apparent downtime. Even if individual components fail, traffic is automatically rerouted to alternate servers or data centers to maintain uptime - often targeting "five nines" (99.999%) availability. Many cloud service providers therefore offer SLAs that guarantee downtime of no more than a few hours per year.
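It is worth translating availability percentages into concrete downtime budgets - the difference between "two nines" and "five nines" is striking:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes(availability_pct):
    # The fraction of the year an SLA at this level permits to be down.
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% -> {downtime_minutes(pct):.1f} min/year")
```

At 99% availability the budget is over three and a half days of downtime per year; at "five nines" it shrinks to roughly five minutes.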
(2) Scalability
Through scalability, cloud service providers ensure that applications can be scaled both vertically and horizontally. Here, “vertical scaling” means, for example, to increase compute capacity by adding RAM or CPUs to a virtual machine as requirements change. Meanwhile, “horizontal scaling” means, for example, to increase compute capacity by adding instances of resources (more virtual machines) to the computing configuration as demand changes.
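The two directions can be sketched in a few lines of Python (the machine sizes here are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class VirtualMachine:
    vcpus: int = 2
    ram_gb: int = 8

fleet = [VirtualMachine()]

# Vertical scaling: make the existing machine bigger.
fleet[0].vcpus *= 2
fleet[0].ram_gb *= 2

# Horizontal scaling: add more machines of the same size.
fleet.extend(VirtualMachine() for _ in range(3))

print(len(fleet), fleet[0].vcpus)  # 4 4 - four machines; first has 4 vCPUs
```

Vertical scaling eventually hits the limits of a single machine; horizontal scaling is what lets cloud applications grow (and shrink) essentially without bound.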
(3) Elasticity
Most cloud service providers allow you to configure your cloud-based applications to take advantage of autoscaling, so that your applications always have the resources they need. Importantly, this works in both the up-scale and down-scale directions. With greater demand, the number of instances can be configured to increase alongside that demand (e.g. during Black Friday, or the Christmas shopping peak); when demand falls off, the same configuration reduces the instances in use (e.g. in the middle of the night when fewer people are shopping). Hence, as briefly mentioned already, you only pay for what you use - as an operational expense within a consumption model - and you don't need to manually invoke a greater or smaller number of instances to meet changes in demand: it's all pre-configured (under your guidance) and invoked automatically.
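A common shape for the autoscaling rule - sketched here with invented capacity figures, since the exact policy varies by provider - is to pick the smallest fleet that can absorb the observed demand, clamped to configured bounds:

```python
import math

def desired_instances(requests_per_sec, capacity_per_instance=250,
                      lo=1, hi=100):
    # Smallest fleet that covers demand, kept within the min/max bounds
    # you configure (both the capacity and the bounds are assumptions).
    want = math.ceil(requests_per_sec / capacity_per_instance)
    return max(lo, min(hi, want))

print(desired_instances(2300))  # 10 - shopping-peak traffic: scale out
print(desired_instances(120))   # 1  - overnight lull: scale back in
```

The `lo` and `hi` bounds are the "under your guidance" part: you set the guard rails once, and the scaler moves freely within them.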
(4) Agility
Cloud platforms enable rapid provisioning of infrastructure via web interfaces, APIs, or command-line tools. Resources can be deployed globally in minutes, whenever your requirements change - accelerating innovation and reducing time to market. And this extends to quickly being able to replicate your services in other geographic locations - to ensure a secure and efficient compute experience regardless of customer physical location.
(5) Geographic distribution
Cloud providers operate data centers in multiple regions worldwide. This allows organizations to deploy applications closer to end-users, improving performance and meeting data residency requirements. It also underpins global high availability and disaster recovery strategies, especially where a particular geographic region contains multiple data centers.
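A rough back-of-the-envelope calculation shows why proximity matters. Light in optical fibre travels at roughly two-thirds the speed of light in a vacuum, so distance alone puts a floor under round-trip latency (the distances below are approximate examples):

```python
FIBRE_KM_PER_MS = 200  # ~ (2/3) * 300,000 km/s, expressed per millisecond

def min_rtt_ms(distance_km):
    # Best-case round trip: out and back at fibre speed, ignoring
    # routing, switching, and processing overheads (which only add more).
    return 2 * distance_km / FIBRE_KM_PER_MS

print(f"{min_rtt_ms(300):.0f} ms")   # nearby region, ~300 km away
print(f"{min_rtt_ms(5600):.0f} ms")  # transatlantic, ~5,600 km
```

No amount of engineering removes that physical floor, which is why providers keep adding regions closer to where users actually are.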
(6) Disaster recovery
Cloud architectures support resilient backup and replication strategies. By taking advantage of cloud-based backup services, data replication, and geographic distribution, you can deploy your applications and data knowing that your data is safe in the event of a disaster. Should one server, data center, or even an entire region fail (with the probability of those events decreasing in the order listed), your data remains safe and your applications continue running, since the cloud service provider automatically replicates your work across several data centers and regions. Thus, regardless of what happens - anything from a localized power outage through to an event affecting a much larger geographic region - your cloud-based business operations remain safe.
The Consumption-based Model
In the above I alluded to the capital and operational expense considerations of cloud computing offerings. With Capital Expenditure (CapEx), the up-front spending goes on the computing hardware and infrastructure necessary to implement the cloud. That expense is then amortized or depreciated from the accounts over time to account for the assets' limited useful lifespan. This leads you naturally into a Private or Hybrid cloud deployment.
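As a sketch of how that works, straight-line depreciation spreads the hardware cost evenly over its useful life. The figures below are invented for illustration:

```python
def straight_line_depreciation(cost, salvage_value, useful_life_years):
    # Annual expense: the depreciable amount spread evenly over the
    # asset's useful life.
    return (cost - salvage_value) / useful_life_years

# $500,000 of servers, worth $50,000 at end-of-life, over 5 years:
annual_expense = straight_line_depreciation(500_000, 50_000, 5)
print(annual_expense)  # 90000.0 written off each year
```

The key contrast with OpEx is that this $90,000 hits the accounts every year whether the servers run at 5% utilization or 95%.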
On the other hand, with Operational Expenditure (OpEx), the spending incurred is only on those services and computations happening right now. Thus, above, when we discussed Elasticity and Agility, you only pay for what you use - meaning additional demand incurs a higher cost, but that cost then drops as demand falls off.
It’s no surprise that the OpEx form of expenditure, where you only pay for what you use, is one of the strongest benefits of cloud computing. Most cloud service providers operate in this model, also referred to as the consumption-based model, meaning you are only paying if you are actually using a service or performing computations. There are no up-front costs, no need to purchase and manage hardware or infrastructure (so no CapEx), and no worries that hardware capacity will either go unused or, conversely, not be sufficient.
And if your business needs change, you simply remove your applications and services from operation and your payments immediately drop to zero.
Cloud Service Models
In addition to the deployment models, benefits, and expenditure styles mentioned above, cloud computing services can also be grouped according to the service model being implemented. There are three levels of model to consider:
- IaaS - Infrastructure as a Service.
- PaaS - Platform as a Service.
- SaaS - Software as a Service.
Infrastructure as a Service
With IaaS, most of the management of cloud services and infrastructure lies with you, the cloud "tenant". Here, the cloud provider ensures the hardware is kept up-to-date and secure (e.g. in terms of physical security and operational concerns). However, responsibility for maintenance of the operating system and configuration of any networks remains with the cloud tenant. Virtual Machines (VMs) operating in data centers are a typical example of the IaaS service model. The advantages come from your ability, as cloud tenant, to rapidly deploy new compute devices instead of having to procure, install, and configure the physical servers yourself. IaaS is the most flexible of the cloud service models, as it gives you complete control over the hardware you use to run your applications, alongside agility and flexibility over time.
Platform as a Service
PaaS is best described as a managed hosting environment - that is, the cloud provider manages the computation devices (virtual machines) and networking resources, and it is the cloud tenant's responsibility to deploy their applications into this managed hosting environment. If you are a developer you may have come across platforms such as Heroku - a good example of a Platform as a Service (from Heroku's point of view, in its relationship with its cloud service provider). Here, developers (you, for example) can upload their web applications without having to worry about the physical hardware and software requirements.
Software as a Service
At the opposite end of the scale from Infrastructure as a Service sits Software as a Service, where most of the management responsibility lies with the cloud service provider. It is the cloud provider, therefore, who manages all aspects of the application environment, such as virtual machines, networking resources, data storage, and applications. The cloud tenant only needs to provide their data to the application managed by the cloud provider. If you've ever used Google Sheets, Gmail, Microsoft Outlook, or Microsoft Office browser-based tools, then you've experienced SaaS.
Conclusions
Cloud computing is the delivery of computing services over the internet. It represents a transformative shift in how technology resources are delivered, scaled, and consumed.
Whether you’re managing infrastructure, building modern applications, or planning long-term IT strategy, understanding deployment models, service models, and operational implications is critical.
You have a range of options for deployment and service level - providing you a route into an even larger grouping of computation and data services that cover all aspects of modern application development - right the way through to state of the art machine learning tools.
Servers, data storage, networking, and data analytics and intelligence make up the broad strokes of what businesses gain through cloud computing. And of course this is layered into a domain that innovates constantly and rapidly, that is flexible to your computing needs, and that provides economies of scale you can take advantage of as your business grows and your needs change.
While each provider offers unique services and capabilities, the core principles outlined here form the foundation of any cloud ecosystem.
Footnotes

1. Including both Platform as a Service (PaaS) and Infrastructure as a Service (IaaS).
