- My presentation at the Faculty of Computers and Artificial Intelligence, Cairo University, on the serverless computing model
- And how to migrate your application to the serverless computing model and to a cloud service, such as AWS, that supports it
2. Outline
1. What is serverless model
2. Benefits of serverless model
3. Steps to migrate to serverless model
4. Review
5. Drawbacks of serverless model
6. Summary
3. What is serverless model
• Serverless computing is a cloud-computing execution
model in which the cloud provider runs the server, and
dynamically manages the allocation of machine
resources.
4. Benefits of serverless computing
• Cost
• Serverless can be more cost-effective than renting or purchasing a fixed
quantity of servers, which generally involves significant periods of
underutilization or idle time.
• This can be described as pay-as-you-go computing: you are charged solely for the
time and memory allocated to run your code, with no associated fees for idle time.
• Elasticity versus scalability
• In addition, a serverless architecture means that developers and operators do
not need to spend time setting up and tuning auto scaling policies or systems;
the cloud provider is responsible for scaling the capacity to the demand.
• Productivity
• With function as a service, the units of code exposed to the outside world are
simple event driven functions.
• This means that typically, the programmer does not have to worry
about multithreading or directly handling HTTP requests in their code,
simplifying the task of back-end software development.
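As a minimal sketch of what such an event-driven function looks like: the platform handles HTTP parsing, threading, and scaling, and the code receives a plain event. The event shape below follows AWS Lambda's API Gateway proxy integration; the greeting logic is only illustrative.

```python
import json

def handler(event, context):
    # Lambda-style entry point: no server, no thread pool, no HTTP
    # parsing in our code; we only receive an event dictionary.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"message": f"hello, {name}"})}
```

The cloud provider invokes `handler` once per request and scales the number of concurrent instances automatically.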
5. Steps to migrate to serverless model
1. Review your existing application
2. Break apart your application
3. Refactoring code (If needed)
4. What doesn’t need to be refactored
5. Deploy a prototype
6. 1. Review your existing application
• We want to start off by reviewing how your application is
configured and currently running.
• This is to determine the degree of ephemerality and see what
needs to be refactored and what doesn't.
7. 1.1 Ephemerality
• When working with serverless, we are operating at a level where all transactions through our
system should be ephemeral.
• That means we keep no state; for example, we do not save files onto the server running our code.
• Instead, we are relying on other services to handle those operations.
• Let’s look at some options.
• Storing files
• If your application code handles files by writing to or reading from the server directly, this will need to change.
• With cloud functions, unlike a traditional virtual machine or container, the server hosting your function and your
application code will at some point shut down.
• Therefore, we want a system that stores files outside the server running our application code.
• This ensures that nothing is ever lost and sets us up to make the move to serverless.
• Every cloud provider offers some type of cloud storage, which will be sufficient for storing the images, documents, etc.
that your application generates.
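The change usually looks like this sketch: replace local `open(...).write()` calls with writes to object storage (S3 on AWS). The bucket name and key layout below are hypothetical placeholders.

```python
BUCKET = "my-app-uploads"  # hypothetical bucket name; use your own

def object_key(user_id, filename):
    # Deterministic key layout: group each user's files under one prefix.
    return f"uploads/{user_id}/{filename}"

def save_file(user_id, filename, data):
    # Instead of writing to the local disk (which disappears when the
    # host running the function is recycled), persist the bytes in S3.
    import boto3  # imported lazily so the sketch loads without boto3 installed
    s3 = boto3.client("s3")
    s3.put_object(Bucket=BUCKET, Key=object_key(user_id, filename), Body=data)
```

Reads go through `s3.get_object` the same way, so no code path ever depends on the function's local filesystem.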
• Storing data
• If you were relying on a local cache on your virtual machine or container, then you will need to find an alternate
data store.
• The data store will work with your cloud function and ensure that across invocations you keep track of
whatever you need to.
• Here are some options if you're on AWS.
• We talk about fully managed services in a bit more detail later on as well.
• AWS ElastiCache — Fully managed Redis or Memcached
• AWS DynamoDB — Fully managed NoSQL Database
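As a sketch of the DynamoDB option: state lives in a table rather than in process memory, so every invocation of every function instance sees the same data. The table name, key layout, and "visit counter" use case are assumptions for illustration.

```python
def counter_key(user_id):
    # Partition-key layout for the item tracking this user's state.
    return {"pk": f"user#{user_id}"}

def record_visit(user_id, table_name="app-state"):
    # Keep state in DynamoDB instead of a local cache, so it survives
    # across invocations and across function instances.
    import boto3  # imported lazily so the sketch loads without boto3 installed
    table = boto3.resource("dynamodb").Table(table_name)
    table.update_item(
        Key=counter_key(user_id),
        UpdateExpression="ADD visits :one",
        ExpressionAttributeValues={":one": 1},
    )
```

`update_item` with an `ADD` expression increments atomically, so concurrent invocations do not lose updates.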
8. 2. Break apart your application
• Once you've identified the areas of your application which need to change, we can take a
look at how the application is architected.
• If you're already using a microservice architecture, then things become a lot easier, because you
have already moved away from a monolithic architecture.
• However, if you're not working with microservices, then we need to start from the top.
• First, we need to look at how to effectively break apart your application code. For simplicity, we are
going to use pizza 🍕 as our running analogy.
• Our goal is to go from one large unsliced pizza to a large pizza with multiple slices.
• Each slice of pizza represents one cloud function. Each pizza represents one logical grouping of
functionality.
9. 2.1 The CRUD example
• For example, let’s imagine that your application handles CRUD
(Create, Read, Update, Delete) functionality for /users.
• Create a user — POST /users
• Get a user — GET /users/:id
• Get all users — GET /users
• Update a user — PUT /users/:id
• Delete a user — DELETE /users/:id
10. 2.1 The CRUD example
• What is our logical grouping?
• The logical grouping would be the entire /users functionality. We now have identified our
pizza 🍕.
• How many cloud functions are in the grouping?
• Each CRUD endpoint will route to a different cloud function which handles that specific piece
of functionality.
• Resulting in five slices of highly modular pizza 🍕
11. 2.1 The CRUD example
• Five separate cloud functions, is that a good idea?
• When you break things apart to that degree the organizational overhead increases
dramatically.
• You also are more susceptible to “cold starts” which happen when your cloud function first
receives a request and hasn’t had any requests for an extended period of time (e.g. 3–10
minutes, it varies).
• Since you will have five cloud functions, each time you receive that first request you will have
to deal with another “cold start” which will result in a slower response.
• If you're making an enterprise shift to serverless from a monolithic architecture, you should
be more hesitant about breaking things apart to this degree, as you will quickly go from one
server hosting your entire application to hundreds of cloud functions, depending on the size
and level of functionality.
12. 2.1 The CRUD example
• What's an alternative? How should I handle this?
• If you're looking for an alternative that makes the migration smoother, with less overhead
and fewer "cold start" issues, you can think about the logical groupings differently.
• For example, imagine the entire application as the pizza 🍕 and make each slice of
pizza 🍕 a subset of functionality (e.g. /users CRUD).
• The pizza — Entire application
• Slice of pizza — Logical grouping of similar functionality (e.g. /users CRUD)
• Now if we take the example above and define it under our new classification we
are left with a single cloud function.
• Only need to worry about “cold start” on a single cloud function
• Only need to manage a single cloud function
• Less bloated automation
• Easier to understand
• If we then add more functionality in the future, like new API endpoints for
/schedules with full CRUD functionality similar to /users, we will have two cloud functions.
• This leaves us with a nicely broken-apart and optimized microservice architecture
running as cloud functions.
• This level of modularity will keep us focused on the migration and avoid the worst
outcome: premature optimization.
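The single-function grouping above can be sketched as a small route table inside one handler. The event fields follow AWS API Gateway's proxy format; the placeholder handlers are only illustrative.

```python
def get_users(event):
    # Placeholder "get all users" handler.
    return {"statusCode": 200, "body": "[]"}

def create_user(event):
    # Placeholder "create a user" handler.
    return {"statusCode": 201, "body": "created"}

# One route table covers the whole /users grouping (one pizza slice).
ROUTES = {
    ("GET", "/users"): get_users,
    ("POST", "/users"): create_user,
}

def handler(event, context):
    # A single cloud function serves every /users endpoint, so only one
    # function pays the cold-start and management cost instead of five.
    fn = ROUTES.get((event["httpMethod"], event["resource"]))
    if fn is None:
        return {"statusCode": 404, "body": "not found"}
    return fn(event)
```

Adding /schedules later means a second route table in a second function, keeping each grouping small but whole.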
13. 3. Refactoring code
• When working within this new environment, you're going to need to adjust the top
layer of your API code, because your cloud function will be hosted and fully managed by a cloud
provider.
• Your cloud function can be invoked in numerous different ways, but the request will always
contain the same core arguments.
• Points to consider when refactoring your code
• Scaling, avoiding bottlenecks
• Reduce execution time
• Leverage fully managed services
14. 3.1 Scaling, avoiding bottlenecks
• When moving to serverless, you want to make sure that all layers of your new architecture can
scale to the same degree as your cloud functions.
• What can easily happen and does happen when companies migrate to serverless is they run into
scaling issues as their supporting services end up becoming bottlenecks.
• If you are leveraging cloud functions for your compute layer, but still have dependencies on other
services which are not able to match that horizontal scaling, you can quickly run into issues.
• For example, if your company hand-rolled its own queuing system, you could now make the move
to AWS SQS.
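The swap to SQS can be sketched like this: jobs are serialized as JSON message bodies, and the queue scales with the functions consuming it. The queue URL and job shape are hypothetical.

```python
import json

def encode_job(job):
    # SQS message bodies are plain strings, so serialize jobs as JSON.
    return json.dumps(job)

def decode_job(body):
    # Symmetric decode on the consuming side.
    return json.loads(body)

def enqueue(job, queue_url):
    # Replace the hand-rolled queue with SQS, which scales horizontally
    # along with the cloud functions that consume it.
    import boto3  # imported lazily so the sketch loads without boto3 installed
    boto3.client("sqs").send_message(QueueUrl=queue_url, MessageBody=encode_job(job))
```

On AWS, the consuming side can simply be another Lambda function triggered by the queue, so neither producer nor consumer becomes a scaling bottleneck.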
15. 3.2 Reduce execution time
• If you’re already familiar with cloud functions then you most likely know that “long-running”
processes are not the best fit. At least on the surface.
• For example, AWS Lambda has a max timeout of 15 minutes which can be restrictive and make
companies hesitate to move to serverless because they have APIs which run for 30+ minutes.
• However, it's possible to make even a 30+ minute process work with cloud functions.
• For example, you could break a long-running process into smaller chunks and, by using a
queue service like AWS SQS, reduce the execution time of each piece to a minute or less,
spread across multiple invocations.
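The chunking idea above can be sketched as a fan-out: split the work, enqueue one message per chunk, and let each message trigger its own short invocation. The chunk size and queue URL are assumptions.

```python
import json

def split_into_chunks(items, chunk_size):
    # Break a long job into pieces, each small enough to finish well
    # inside a cloud function's timeout (15 minutes on AWS Lambda).
    return [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]

def fan_out(items, queue_url, chunk_size=100):
    # Enqueue one SQS message per chunk; each message triggers its own
    # short invocation instead of one 30+ minute run.
    import boto3  # imported lazily so the sketch loads without boto3 installed
    sqs = boto3.client("sqs")
    for chunk in split_into_chunks(items, chunk_size):
        sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps(chunk))
```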
16. 3.3 Leverage fully managed services
• An area which provides immediate benefit when migrating to serverless is the ability to
now tap into fully managed services and keep the focus on the product you’re building.
• If you have already been running on the cloud, then this may be less relevant, however, if
you are coming from on-premises and making the leap straight to serverless then you now
have the option to throw away some fragile code.
• This is in contrast to what a lot of companies do when they migrate from on-premises to the
cloud: they carry over fragile code or internally created services into the new environment and
do not invest enough time in alternatives which would ultimately save money and time over the
long run.
• In my experience, most of the use-cases we've come across align with some kind of fully
managed service from one of the major cloud providers.
• Examples
• AWS DocumentDB.
• AWS SQS
17. 4. What doesn’t need to be refactored
• Although there are many moving parts which need to be thought about from a different
perspective, you don’t have to throw everything away.
• The logic and functions which have been driving your application thus far can mostly be ported.
• There may be code you want to refactor now that you can hook into cloud services
directly from your cloud functions, but it is not a requirement.
18. Deploy a prototype of the app
• This strategy usually plays out like this:
• Select a small application running on-premises
• Select your desired cloud provider
• Manually start the cloud VM
• Upload your code, install dependencies, configure networking
and security
• Test the application
19. Review
• So you revamped your project and everything is working fine?
• Do you actually think that the serverless model is the magical solution
for your business needs?
20. Drawbacks of serverless model
• Performance
• Infrequently-used serverless code may suffer from greater response latency than code
that is continuously running on a dedicated server, virtual machine, or container.
• This is because, unlike with autoscaling, the cloud provider typically "spins down" the
serverless code completely when not in use.
• This means that if the runtime (for example, the Java runtime) requires a significant
amount of time to start up, it will create additional latency.
• Resource limits
• Serverless computing is not suited to some computing workloads, such as high-
performance computing, because of the resource limits imposed by cloud providers,
and also because it would likely be cheaper to bulk-provision the number of servers
believed to be required at any given point in time.
• Monitoring and debugging
• Diagnosing performance or excessive resource usage problems with serverless code
may be more difficult than with traditional server code, because although entire
functions can be timed, there is typically no ability to dig into more detail by
attaching profilers, debuggers or APM tools.
• Furthermore, the environment in which the code runs is typically not open source, so
its performance characteristics cannot be precisely replicated in a local environment.
21. Drawbacks of serverless model
• Security
• Serverless is sometimes mistakenly considered as more secure than traditional architectures.
• While this is true to some extent because OS vulnerabilities are taken care of by the cloud provider, the total attack
surface is significantly larger as there are many more components to the application compared to traditional
architectures and each component is an entry point to the serverless application.
• Moreover, the security solutions customers used to have to protect their cloud workloads become irrelevant as
customers cannot control and install anything on the endpoint and network level such as an intrusion
detection/prevention system (IDS/IPS).
• Privacy
• Many serverless function environments are based on proprietary public cloud environments.
• Here, some privacy implications have to be considered, such as shared resources and access by external employees.
• However, serverless computing can also be done on private cloud environment or even on-premises, using for example
the Kubernetes platform.
• This gives companies full control over privacy mechanisms, just as with hosting in traditional server setups.
• Standards
• Serverless computing is covered by the International Data Center Authority (IDCA) in their Framework AE360. However,
the part related to portability can be an issue when moving business logic from one public cloud to another,
which is the problem the Docker solution was created to address.
• Vendor lock-in
• Serverless computing is provided as a third-party service.
• Applications and software that run in the serverless environment are by default locked to a specific cloud vendor.
• Therefore, serverless can cause multiple issues during migration.
22. Summary
• Before rushing to the serverless model, review your business requirements and your application
documentation, and in light of all that, decide:
1. Does moving to the serverless model add any value to the project in terms of functionality,
portability, and efficiency?
2. Should you use a cloud service like AWS to host your serverless model, or stay on
premises using Kubernetes?
3. Which cloud service is the best fit for the app and affordable according to your budget?
• AWS, Azure, or Google Cloud Platform
4. Can your application architecture, such as microservices, easily work with the serverless
model, or will you need to modify it or break it apart further if it follows a monolithic
architecture?
5. Can you afford to refactor your code if needed?
6. Can you work with fully managed services, or will you still need your own specific managed
service?