
Benefits of Serverless

MJSon 2019. 4. 3. 22:59

There are five benefits of infrastructural outsourcing:

Reduced labor cost

Reduced risk

Reduced resource cost

Increased flexibility of scaling

Shorter lead time

Reduced Labor Cost

Serverless is fundamentally about no longer needing to look after your own server processes.

You care about your application's business logic and state, and you let someone else look after whatever else is necessary for those to work.

You’re no longer managing operating systems, patch levels, database version upgrades, etc. If you’re using a BaaS database, message bus, or object store, then congratulations—that’s another piece of infrastructure you’re not operating anymore.

You also have less logic to develop yourself.

For example, all of the HTTP-level request and response processing is done for us by the API Gateway.
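That division of labor can be sketched with a minimal function handler. The event shape below is a simplified stand-in for the real API Gateway proxy format, and the handler name is an illustrative assumption:

```python
import json

def handler(event, context=None):
    # No sockets, no HTTP framing: just fields the gateway already extracted.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"message": f"Hello, {name}"})}

response = handler({"queryStringParameters": {"name": "Serverless"}})
print(response["statusCode"])  # 200
```

The function never touches a request socket or parses headers; it receives plain data and returns plain data, and the gateway does the rest.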

Deployment with FaaS is easier because we're just uploading basic code units: zip files of source code in the case of JavaScript or Python, and plain JAR files in the case of JVM-based languages. There are no Puppet, Chef, Ansible, or Docker configurations to manage. Other types of operational activity become simpler too, beyond just those we mentioned earlier in this section. For example, since we're no longer looking after an "always on" server process, we can limit our monitoring to more application-oriented metrics: statistics such as execution duration and customer-oriented metrics, rather than free disk space or CPU usage.
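To show how small that deployable unit is, here is a sketch of a hypothetical Python handler packaged into the kind of zip archive a FaaS platform expects. The file and function names are illustrative assumptions, not a fixed convention:

```python
import io
import zipfile

# The entire "application" to deploy: business logic only.
HANDLER_SOURCE = '''\
def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
'''

def build_deployment_package() -> bytes:
    """Return the zip archive that would be uploaded to the platform."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("handler.py", HANDLER_SOURCE)
    return buf.getvalue()

package = build_deployment_package()
print(len(package) > 0)  # the whole deployable artifact is just this archive
```

There is no machine image, no configuration-management recipe, and no container build: the archive above is the complete unit of deployment.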

Not NoOps

Support, monitoring, deployment, security, and networking are still considerations when building a Serverless app, and while they may require less and/or different work, they do still need to be approached carefully, and with expertise. Serverless is not “NoOps.”

Reduced Risk

A specific example here is managing a distributed NoSQL database. Once such a component is set up, it might be relatively rare that a failure in a node occurs, but when it does, what happens? Does your team have the expertise to quickly and efficiently diagnose, fix, and recover from the problem? Maybe, but oftentimes not. Instead, a team can opt to use a Serverless NoSQL database service, such as Amazon DynamoDB. While outages in DynamoDB do occasionally happen, they are both relatively rare and managed effectively since Amazon has entire teams dedicated to this specific service.

Reduced Resource Cost

Once we've figured out what hosts or resources we need, we can then work on allocation: mapping out which parts of our application are going to run on which resources. And finally, once we're ready to deploy our application, we need to actually obtain the hosts we wanted; this is provisioning.

This whole process is complicated, and it’s far from an exact science. We very rarely know ahead of time precisely what our resource requirements are, and so we overestimate our plan. This is known as over-provisioning. This is actually the right thing to do—it’s much better to have spare capacity and keep our application operating than for it to fall over under load. And for certain types of components, like databases, it may be hard to scale up later, so we might want to over-provision in anticipation of future load.

Over-provisioning means we’re always paying for the capacity necessary to handle our peak expected load, even when our application isn’t experiencing that load. The extreme case is when our application is sitting idle—at that point in time we’re paying for our servers to be running when in fact they aren’t doing anything useful. But even when our applications are active we don’t want our hosts to be fully utilized. Instead, we want to leave some headroom in order to cope with unexpected spikes in load.
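The cost of that headroom can be illustrated with some back-of-the-envelope arithmetic; all the numbers below are made up for illustration:

```python
# Illustrative arithmetic for over-provisioning: capacity is paid for
# at peak (plus headroom), while average load is far lower.
peak_load = 1000      # requests/sec we must be able to survive
average_load = 150    # requests/sec actually observed on average
headroom = 1.5        # extra margin for unexpected spikes

provisioned_capacity = peak_load * headroom
utilization = average_load / provisioned_capacity
print(f"{utilization:.0%}")  # 10% utilized: 90% of paid-for capacity sits idle
```

Even with generous traffic, most of what we pay for in this model is insurance against a peak that is rarely happening.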

Serverless services provide precisely the amount of capacity we need at any point in time.

For instance, if our application is only running for 5 minutes of every hour, we only pay for 5 minutes of every hour, not the whole 60 minutes. Further, a good Serverless product will have very precise increments of use; for example, AWS Lambda is billed in increments of 100 milliseconds, 36,000 times more precise than the hourly billing of EC2.
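A rough sketch of that billing arithmetic, using placeholder prices rather than real AWS rates:

```python
import math

def metered_cost(run_seconds, increment_seconds, price_per_increment):
    """Bill for whole increments of use, as metered platforms do."""
    increments = math.ceil(run_seconds / increment_seconds)
    return increments * price_per_increment

hourly_price = 0.036                        # assumed price for one host-hour
lambda_price_100ms = hourly_price / 36_000  # same rate, 36,000x finer increments

# An application that runs for 5 minutes (300 s) out of every hour:
serverless_cost = metered_cost(300, 0.1, lambda_price_100ms)
hourly_cost = metered_cost(300, 3600, hourly_price)

print(round(serverless_cost, 6))  # pay only for the 5 minutes actually used
print(round(hourly_cost, 6))      # pay for the full hour regardless
```

With the same underlying rate, the fine-grained meter charges one twelfth of what the hourly meter does for this workload, exactly the 5-of-60-minutes ratio from the text.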

In modern non-Serverless apps, we do see some of these benefits through techniques like auto-scaling; however, these approaches are often not nearly as precise as a Serverless product (see our point above about EC2 charging by the hour), and it's still typically not possible to auto-scale a non-Serverless database.

Increased Flexibility of Scaling

All of these resource cost benefits come from the fact that a Serverless service precisely scales to our need. So how do we actually make that scaling happen? Do we need to set up auto-scaling groups? Monitoring processes? No! In fact, scaling happens automatically, with no effort.

Let’s take AWS Lambda as an example. When the platform receives the first event to trigger a function, it will spin up a container to run your code. If this event is still being processed when another event is received, the platform will spin up a second instance of your code to process the second event. This automatic, zero management, horizontal scaling will continue until Lambda has enough instances of your code to handle the load.
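That scale-out behavior can be modeled with a toy simulation. This illustrates the policy described above (start a new container only when every existing one is busy), not the platform's actual implementation:

```python
def containers_needed(event_timeline):
    """event_timeline: list of (start, end) times for incoming events.
    Returns how many containers the spin-up-on-demand policy creates."""
    containers = []  # each entry is the time that container becomes free
    for start, end in sorted(event_timeline):
        for i, busy_until in enumerate(containers):
            if busy_until <= start:      # an idle container exists: reuse it
                containers[i] = end
                break
        else:
            containers.append(end)       # all busy: spin up another container
    return len(containers)

# Three overlapping events force three containers; spaced-out events reuse one.
print(containers_needed([(0, 5), (1, 6), (2, 7)]))   # 3
print(containers_needed([(0, 1), (2, 3), (4, 5)]))   # 1
```

The number of containers is purely a function of how much load overlaps in time; nothing needs to be configured in advance.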

A particularly nice aspect to this is that Amazon will still only charge you for how long your code is executing, no matter how many containers it has to launch. For instance, it costs precisely the same to invoke a Lambda 100 separate times in one container sequentially as it does to invoke a Lambda 100 times concurrently in 100 different containers, assuming the total execution time across all the events is the same.

WHAT STOPS LAMBDA SCALING INFINITELY?

What happens if someone performs a distributed denial of service (DDoS) attack on your Lambda application: will it scale to tens of thousands of containers? No, fortunately not! Apart from anything else, this would get mighty expensive very quickly, and it would also negatively impact other users of the AWS platform.

Instead, Amazon places a concurrent execution limit on your account; in other words, it will only spin up Lambda containers up to a maximum number across your whole AWS account. The default (at the time of writing) is 1,000 concurrently executing Lambda functions, but you can request to have this increased depending on your needs.
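A toy model of that account-level limit, with throttling standing in for the platform's real behavior; the limit value mirrors the default quoted above:

```python
CONCURRENCY_LIMIT = 1000  # account-wide cap, as in the default quoted above

def dispatch(pending_events, in_flight):
    """Return (started, throttled) for a batch of incoming events,
    given how many executions are already in flight."""
    capacity = max(0, CONCURRENCY_LIMIT - in_flight)
    started = min(pending_events, capacity)
    return started, pending_events - started

started, throttled = dispatch(pending_events=1500, in_flight=800)
print(started, throttled)  # 200 start, 1300 are throttled
```

Beyond the cap, events are throttled rather than spinning up ever more containers, which is what bounds both your bill and the blast radius of an attack.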

Shorter Lead Time

The first four high-level benefits we've covered are all excellent reasons to consider Serverless; depending on what your application does, you are likely to see significant (double-digit percentage) cost savings by embracing Serverless technologies.

However, we’d like to show you a quote from Sree Kotay, the CTO of Comcast Cable, from an AWS Summit in August 2016. Full disclosure: he wasn’t talking about Serverless, but he was talking about how Comcast had gained significantly from various other infrastructural outsourcing, moving from “on prem” to the cloud. He said the following:

After going through this journey [of cloud and Agile] for the last five years we’ve realized benefits, and these benefits are around cost and scale. And they’re critical and important, but interestingly they’re not the compelling bit…The key part is this really changes your innovation cycle, it fundamentally shifts how you think about product development.

Sree Kotay

The point we want to make is that the CTO of a major corporation is saying that costs and scale aren’t the most important thing to him—innovation is. So how does Serverless help in this regard?

Here are some more quotes, this time from Adrian Cockcroft (VP, Cloud Architecture Strategy at AWS, and formerly Cloud Architect at Netflix), talking about Serverless:

We’re starting to see applications be built in ridiculously short time periods.

Adrian Cockcroft

Small teams of developers are building production-ready applications from scratch in just a few days. They are using short, simple functions and events to glue together robust API-driven data stores and services. The finished applications are already highly available and scalable, high utilization, low cost, and fast to deploy.

Adrian Cockcroft

Over the last few years we’ve seen great advances in improving the incremental cycle time of development through practices such as continuous delivery and automated testing, and technologies like Docker. These techniques are great, but only once they are set up and stabilized. For innovation to truly flourish it’s not enough to have short cycle time, you also need short lead time—the time from conceptualization of a new product or feature to having it deployed in a minimum viable way to a production environment.

Because Serverless removes so much of the incidental complexity of building, deploying, and operating applications in production, and at scale, it gives us a huge amount of leverage, so much so that our ways of delivering software can be turned upside down. With the right organizational support, innovation and "Lean Startup"-style experimentation can become the default way of working for all businesses, not just something reserved for startups or "hack days."

This is not just a theory. Beyond Adrian’s quotes above, we’ve seen comparatively inexperienced engineers take on projects that would normally have taken months and required significant help from more senior folks. Instead, using a Serverless approach, they were able to implement the project largely unaided in a couple of days.

And this is why we are so excited about Serverless: beyond all of the cost savings, it's a democratizing approach that unlocks the abilities of our fellow engineers, letting them focus on making their customers awesome.
