PaaS vs FaaS: Which one should I choose to run my microservices on?
We all know that microservices are distributed processes that must be independently releasable, deployable and scalable. At first glance, Platform-as-a-Service (PaaS) and Function-as-a-Service (FaaS), a.k.a. Serverless, both seem capable of living up to that description. Both cloud computing models can also support remarkably short lead times during software development, which promotes innovation and continuous experimentation. However, when we delve deeper into their technical details, we soon realise they are not always suitable for the same use cases.
PaaS
Platform-as-a-Service is a cloud model where you provide your source code and the platform will package, release, provision, deploy, run, monitor and scale out/in your microservices. The best examples I can think of are Cloud Foundry, Heroku and Google App Engine.
Your app will always have at least one instance running on PaaS. This comes in handy when persistent connections are required, for example to implement push notifications via Server-Sent Events (SSE), WebSockets or RSocket. There are many other benefits as well, such as promptly processing incoming requests, keeping reference data in memory (a.k.a. an in-process data cache), implementing the circuit breaker pattern to handle partial failures, or leveraging connection pooling to throttle workload and reduce response time.
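Long-lived processes make it easy to keep stateful patterns like the circuit breaker in memory across requests. Here is a minimal sketch in Python; the class name, thresholds and timings are illustrative, not taken from any particular library:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker sketch: opens after `max_failures`
    consecutive errors, then fails fast until `reset_timeout`
    seconds have passed, at which point one trial call is allowed."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                # Circuit is open: reject without touching the backend.
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success closes the circuit again
        return result
```

Because the instance (and its failure counters) lives for as long as the process does, this pattern fits naturally on PaaS, whereas a freshly started function instance starts with no such memory.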
FaaS
Function-as-a-Service implies a computing model where your code is packaged by the platform and run on demand in response to some configurable event (e.g. an HTTP request, message arrival, or file upload) for a limited amount of time, and may be disposed of at any time afterwards. Good representatives here are AWS Lambda, Azure Functions and Google Cloud Functions.
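The event-driven model above can be sketched as an AWS Lambda-style handler for an HTTP trigger. The payload shape assumed below follows the API Gateway proxy integration; the greeting logic is purely illustrative:

```python
import json

def handler(event, context):
    """Sketch of a FaaS handler: the platform invokes this function
    once per event, and no state is guaranteed to survive between
    invocations. `event` carries the trigger payload; for an HTTP
    trigger this includes query parameters, headers and body."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Note that everything outside the handler (connections, caches, counters) may or may not survive to the next invocation, which is why the in-process patterns listed in the PaaS section do not translate directly.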
We can assemble our apps out of a multitude of functions but each one needs to be configured and deployed individually. That’s why FaaS is sometimes called Nanoservices. This computing model has some interesting consequences:
- As mentioned above, a function's life cycle starts when it is activated by an event and finishes when it returns. The processing time is usually limited by the platform, for example 15 minutes on AWS as of the time of writing.
- FaaS auto scales almost indefinitely, which might be seen as a blessing because it allows us to seamlessly handle workload spikes. On the other hand, a large number of function instances may put so much pressure on our architecture that something eventually breaks, and we end up DDoSing our own backends, namely databases, internal services, or third-party APIs.
- Functions tend to be more granular than services, therefore we end up with a larger number of deployment units to coordinate, integrate and manage. The more features we have to implement, the more complex a full-blown FaaS application will be. As Twin Tech Innovations pointed out:
Consider the chart below comparing lines of code between projects implemented using the Serverless framework (Lambda + API Gateway) against a project implemented in pure Node.js. For each non-trivial route (piece of functionality) added to a software system, the number of lines of configuration code needed to maintain the project grows at a steep linear rate when using a serverless architecture. Someone has to maintain all of that… In short, for a short term gain, serverless architectures mortgage the future.
https://twintechinnovations.wordpress.com/author/twintechinnovations
- There will be no function running if there are no user requests to process. This property is famously known as scale-to-zero and it’s the hallmark of FaaS computing. It allows for savings in infrastructure costs when the workload is low.
- On the flip side, running functions might be more expensive under steady workloads. The graphics below compare the prices of AWS Lambda vs EC2 VMs per workload.
https://www.bbva.com/en/economics-of-serverless/
N.B. Please note the above graphics compare Lambdas with heavyweight VMs, not PaaS, which had been using containers long before Kubernetes came into existence. Taking Cloud Foundry as a reference, I have seen customers running 20+ different microservices per VM in highly regulated and very stringent production environments. That means the previous breakeven would happen at 5 requests per second on average for platforms underpinned by m4.4xlarge VMs.
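The breakeven reasoning above boils down to simple arithmetic: the flat hourly cost of a VM, shared across many co-located microservices, against a per-invocation FaaS price. The sketch below uses hypothetical placeholder prices, not current list prices from any provider:

```python
# Back-of-the-envelope breakeven between per-request FaaS pricing and a
# flat-rate VM shared by many microservices. All prices below are
# illustrative placeholders, not real AWS rates.
VM_COST_PER_HOUR = 0.80        # hypothetical large-VM hourly rate
APPS_PER_VM = 20               # microservices packed per VM, as discussed above
FAAS_COST_PER_MILLION = 10.0   # hypothetical all-in cost per 1M invocations

def breakeven_requests_per_second():
    # Hourly VM cost attributed to one app, divided by the per-request
    # FaaS cost, gives the hourly request rate at which both models
    # cost the same; divide by 3600 for requests per second.
    vm_cost_per_app_per_hour = VM_COST_PER_HOUR / APPS_PER_VM
    faas_cost_per_request = FAAS_COST_PER_MILLION / 1_000_000
    requests_per_hour = vm_cost_per_app_per_hour / faas_cost_per_request
    return requests_per_hour / 3600
```

With these placeholder numbers the breakeven lands at roughly one request per second per app; the key takeaway is that packing many apps onto one VM (as PaaS does) drags the breakeven point down sharply.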
Developer experience
I have seen colleagues advocate for FaaS, and some companies move wholesale into it, as a means to avoid the painful process of building and maintaining a large number of container images, as well as orchestrating them across various environments.
I couldn't agree more with the much-needed idea of abstracting away from developers the burden of managing infrastructure. However, we have seen that both PaaS and FaaS are able to handle the undifferentiated heavy lifting on behalf of developers, including packaging, deploying, and auto scaling applications, as well as managing security, routing and log aggregation.
There is no need to adopt FaaS just to avoid the complexity caused by running containers at scale.
If your goal is to solely improve developer experience, you might well find that PaaS fits your needs with lower complexity and in a less intrusive way when compared to FaaS. I believe that this idea is behind the growing adoption of the Digital Platform model.
A digital platform is a foundation of self-service APIs, tools, services, knowledge and support which are arranged as a compelling internal product. Autonomous delivery teams can make use of the platform to deliver product features at a higher pace, with reduced coordination.
Wrapping up
To wrap up, it’s evident that Platform-as-a-Service (PaaS) and Function-as-a-Service (FaaS) both hold valuable positions in cloud computing. Rather than viewing these models as competitors, it’s more productive to see them as complementary tools.
At Armakuni, we specialise in guiding organisations through the complexities of both PaaS and FaaS. Our end-to-end cloud-native solutions and expertise in process optimisation can help you harness the strengths of each approach to enhance your development and deployment processes. Evaluate your project’s needs, experiment, and partner with us to find the best-fit solution.