Is Your Application Really Cloud Native?
It's no secret that cloud-native applications are more resilient, enable faster time to market, and can make your development team more engaged and efficient. However, I still encounter confusion around what it really means for an application to be "cloud native." Taking an existing application and replicating it in the cloud isn't cloud native. Sure, you're in the cloud following the "letter of the law," but you're not taking advantage of the "spirit" or the benefits cloud native can offer. Arguably, I see cloud native boiling down to four main traits.
Composable Architecture
A composable architecture decouples and isolates the logical components of the system. For example, an e-commerce site is, at minimum, composed of a product catalog, shopping cart, checkout experience, and order processing system. A traditional monolithic application would contain all these components as a single executable unit with no isolation and a high degree of coupling, thus leaving the door open for one component to adversely affect another. The inherent nature of a cloud-native application ensures an increased level of resiliency because, with the proper planning and facilities, those adverse effects are significantly reduced or outright eliminated.
When the components of the system are properly isolated, you can scale the application with surgical precision. As an example of how this comes in handy, let's say you are running an e-commerce site and your marketing team decides to run a promotion that you expect will create a large spike in traffic. If you had one monolithic application, you would have to scale out and duplicate it in its entirety, leading to superfluous resource consumption and costs. With the decoupled components of a cloud-native application, you can scale out just the components that are dedicated to processing orders to handle the excess load.
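As one hedged sketch of what that looks like in practice, assuming the components each run as their own Kubernetes Deployment (the names, image tags, and replica counts below are hypothetical), only the order-processing Deployment gets scaled up for the promotion:

```yaml
# Hypothetical Deployment for just the order-processing component.
# Raising replicas here touches nothing else -- the catalog, cart,
# and checkout Deployments keep their normal footprint.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-processing
spec:
  replicas: 10          # scaled up from, say, 2 for the promotion
  selector:
    matchLabels:
      app: order-processing
  template:
    metadata:
      labels:
        app: order-processing
    spec:
      containers:
        - name: order-processing
          image: shop/order-processing:1.4.2   # hypothetical image
```

The monolithic alternative would mean duplicating the entire site ten times over just to absorb load on one component.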
Externalizing configuration from the runnable application itself provides a "single source of truth" for the different components of the architecture, which helps to ensure the independent scalability of those individual components. More than likely, the Checkout component of your e-commerce site relies on an external service to calculate taxes based on the destination address. What happens when you've found a better vendor to perform that job and you need to change the service connection information to point to the new vendor? Keep in mind you are currently in season, so your Checkout component has a significant number of replicas running because it had to scale up to meet demand. Do you risk downtime by rebuilding and redeploying that component, or do you just update the external configuration and let the replicas read in the new service information with no downtime? I'm willing to bet the latter.
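As a minimal sketch of that idea, assuming the tax vendor's endpoint is supplied through an environment variable (the variable name and URLs here are hypothetical), the Checkout component resolves the vendor at call time rather than baking it into the build:

```python
import os

def get_tax_service_url() -> str:
    """Resolve the tax vendor's endpoint from external configuration.

    Because the URL is read at call time rather than compiled in, an
    operator can repoint every running replica at a new vendor by
    updating a single configuration entry -- no rebuild, no redeploy.
    """
    return os.environ.get("TAX_SERVICE_URL", "https://tax.old-vendor.example/v1")

# Switching vendors becomes a configuration change, not a code change:
os.environ["TAX_SERVICE_URL"] = "https://tax.new-vendor.example/v2"
print(get_tax_service_url())  # replicas now call the new vendor
```

In a real deployment the environment entry would come from whatever external configuration store you use, so all replicas pick up the change together.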
Developing a cloud-native application is not about virtualizing or lifting and shifting traditional servers into a cloud provider. In this process, you need to take a step back, look at your existing services, and decide what you need to build yourself versus what services are available that you can use.
On-Demand Infrastructure
The two main characteristics of an on-demand infrastructure are that it is immutable and observable. When those concepts are properly combined, they dramatically increase the resiliency of your application.
Immutable infrastructure refers to servers, virtual machines, or containers that are never modified after deployment. Rather than updating servers in place, you ensure that a deployed server remains intact, with no changes made. So, no more worrying about "Patch Tuesdays" or deploying changes into an incompatible environment. You can be a lot more deliberate about what needs to change and when. Every change to the application or environment is exact, versioned, timestamped, and redeployed.
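One hedged illustration, assuming the components ship as container images (the names, versions, and dates below are illustrative): every change produces a new, versioned, timestamped image that replaces the old one wholesale, rather than a patch applied to a running server.

```dockerfile
# Each release is built into a fresh, immutable image; the running
# containers are replaced, never edited in place.
FROM python:3.12-slim
LABEL version="1.4.2" build-date="2024-03-05"
COPY ./checkout /app
CMD ["python", "/app/main.py"]
```

Rolling back is just as exact: redeploy the previous image tag rather than trying to un-patch a live server.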
Observability comes from the control systems engineering discipline, where it is defined as a measure of how well a system's internal states can be inferred from its external outputs. Having an observable infrastructure allows your application to keep itself in check, so to speak. Both the software and hardware components of your system should expose facilities to indicate their current state when requested. If a part of the application is not performing as expected, it can report that back to a delegated controller, such as a load balancer, which can then work to fix the issue without human intervention.
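A minimal sketch of such a facility, assuming each component reports a simple status document when asked (the field names and threshold are hypothetical, not any particular standard):

```python
import json
import time

START_TIME = time.time()
CPU_BUSY_THRESHOLD = 0.9  # hypothetical cutoff for "degraded"

def health_report(cpu_utilization: float) -> dict:
    """Summarize this instance's internal state for an external controller.

    A delegated controller, such as a load balancer, polls this report and
    infers health purely from the outputs -- the essence of observability.
    """
    return {
        "status": "degraded" if cpu_utilization > CPU_BUSY_THRESHOLD else "ok",
        "cpu_utilization": cpu_utilization,
        "uptime_seconds": round(time.time() - START_TIME, 1),
    }

print(json.dumps(health_report(cpu_utilization=0.35)))
```

In practice this would back an HTTP health endpoint that the controller probes on a schedule.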
Imagine a scenario where a specific user path through your product triggers runaway code that eats up all the CPU resources and brings the server to a crawl, but does not completely crash it. If you have numerous instances running behind a load balancer, this is not that big of a deal, since the other servers will pick up the slack. However, with the proper amount of immutability and observability in place, the load balancer would be able to detect the degradation of the rogue server, automatically decommission it, and spin up a new server to take its place.
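To make that concrete, here is a hedged sketch of the controller's decision loop; the status values and naming are hypothetical stand-ins for whatever your platform actually reports:

```python
def reconcile(instances: dict[str, str]) -> dict[str, str]:
    """Decommission degraded instances and spin up fresh replacements.

    `instances` maps an instance name to its last reported status
    ("ok" or "degraded"). Because the infrastructure is immutable,
    the fix is always replacement, never in-place repair.
    """
    healthy = {}
    replacements = 0
    for name, status in instances.items():
        if status == "ok":
            healthy[name] = "ok"
        else:
            # The rogue server is dropped; a brand-new one takes its place.
            replacements += 1
            healthy[f"replacement-{replacements}"] = "ok"
    return healthy

fleet = {"web-1": "ok", "web-2": "degraded", "web-3": "ok"}
print(reconcile(fleet))  # web-2 is gone, replaced by a fresh instance
```

No human has to notice the crawl: the observability outputs drive the decision, and immutability makes replacement cheap and exact.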
Hyper-Responsive Delivery Model
When you have an application that is composable rather than monolithic, as you can imagine, that impacts how your development and operations teams work together to release the product. No longer is it necessary to schedule massive releases with all hands on deck. Smaller releases mean a smaller change surface and a smaller risk of error.
A best practice is to organize your teams by features or domains. To revisit the e-commerce example, the product catalog, shopping cart, checkout, and order processing are four different domains within the site - each with its own team. Each domain should have a cross-functional team composed of developers, QA experts, product owners, and operations. Each team should have its own code base and release schedule. This gives teams a significant amount of independence and autonomy, allowing them to be more responsive and less encumbered by the overhead required for orchestrating releases with other teams.
A significant
benefit of this model is that having smaller teams working on smaller products producing
smaller changes will increase velocity and reduce time to market.
Greater Cost Controls
Whether you are re-platforming or developing a cloud-native application, organizations typically see a significant reduction in capital expenses by moving from their own data center servers to a service-based cloud provider. Instead of paying for servers whose capacity you may only partially use at any given time, you pay only for the amount of the service you consume - no more, no less - just like a utility. Developing a cloud-native application, however, ensures resources are used more efficiently than in a straight lift and shift into the cloud.
Autoscaling, and other demand-based concepts, also contribute directly to greater cost controls. If the elements of your application and infrastructure are properly composed, this turns into a dashboard of knobs and levers that can be used to fine-tune resource consumption against spend. While that may sound more complicated, it beats the alternative of just "throwing more hardware at the problem," which will rapidly bloat your costs.
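As one hedged example of those knobs and levers, assuming Kubernetes, a HorizontalPodAutoscaler puts both a floor and a ceiling on one component's footprint (all names and numbers below are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: order-processing
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: order-processing
  minReplicas: 2       # baseline spend floor during quiet periods
  maxReplicas: 20      # hard cost ceiling during traffic spikes
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas only under real load
```

Each component gets its own dial, so spend tracks actual demand instead of worst-case capacity.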
Additionally, many IT leaders face the "build versus buy" conversation. Cloud-native apps can also help provide a third option - renting. Taking advantage of a service that a cloud provider already has can shorten your time to market while allowing you to defer certain decisions until they are necessary.
While the four traits I outlined highlight the core tenets of a cloud-native application, they are not an exhaustive list that touches every corner of this paradigm. The intent of this article was to give you a broad understanding and get you pointed in the right direction.
So, is your
application really "cloud-native"?
Do these traits fit with your experience?
If not, I challenge you to revisit any existing applications through the lens of these four traits and really think about the costs and benefits of making the migration. If you are starting any greenfield initiatives, I urge you to consider the pros and cons of adopting these paradigms from the start.
As the demands from both internal and external stakeholders continue to increase, organizations not adopting these traits in some manner will likely be left behind.