Understanding Serverless Architecture: advantages and limitations

Even before the hype around Microservices Architecture has died down, another term is looming in the technology forums. Although it is not an entirely new concept, it has become a talking point recently. This hot topic is Serverless Architecture, or Serverless Computing. As already mentioned, the idea has been around for some time with the advent of Backend as a Service (BaaS or MBaaS), but it operated at a different scale before AWS Lambda, Azure Functions and Google Cloud Functions came into the picture with their own serverless solutions.


In layman’s terms, Serverless Architecture or Serverless Computing means that your backend logic runs on some third-party vendor’s server infrastructure which you do not need to worry about. It does not mean that there is no server running your backend logic; rather, you do not need to maintain that server. That is the business of third-party vendors like AWS, Azure and Google. Serverless computing comes in two variants.

  • Backend as a Service (BaaS or MBaaS — M for Mobile)
  • Function as a Service (FaaS)

With BaaS or MBaaS, the backend logic runs on a third-party vendor’s infrastructure. Application developers do not need to provision or maintain the servers or the infrastructure that runs these backend services; in most cases, the services run continuously once they are started. Instead, developers pay a subscription to the hosting vendor, typically on a weekly, monthly or yearly basis. Another important aspect of BaaS is that it runs on shared infrastructure, with the same backend service used by multiple different applications.
The second variant, Function as a Service (FaaS), is the more popular one these days. Most of the well-known technologies, such as AWS Lambda, Microsoft Azure Functions and Google Cloud Functions, fall into this category. With FaaS platforms, application developers implement their own backend logic and run it within the serverless framework; running that functionality on a server is handled by the framework, which also takes over the scalability, reliability and security aspects. Different vendors provide different options for implementing these functions with popular programming languages like Java and C#.
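To make that concrete, the sketch below shows roughly what such a function looks like on AWS Lambda in Java, using the RequestHandler interface from the aws-lambda-java-core library. The class, package and input type here are illustrative placeholders; the exact project setup depends on the vendor and runtime version.

package example;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// A minimal FaaS function: the platform instantiates this class and calls
// handleRequest for every incoming event. The developer writes no code for
// provisioning, scaling or keeping a server process alive.
public class GreetingHandler implements RequestHandler<String, String> {

    @Override
    public String handleRequest(String name, Context context) {
        context.getLogger().log("Invoked with input: " + name);
        return "Hello, " + name;
    }
}

The handler is then packaged (for example as a jar) and uploaded to the platform, which maps incoming events to handleRequest.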
Once these functions are implemented and deployed on the FaaS framework, the services they offer can be triggered via events from vendor-specific utilities or via HTTP requests. If we consider the most popular FaaS framework, AWS Lambda, it allows users to trigger functions through HTTP requests by fronting the Lambda functions with AWS API Gateway. There are a few main differences between FaaS and BaaS.
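For instance, with API Gateway’s proxy integration, the HTTP request is handed to the function as an event object and the returned object is mapped back to an HTTP response. The sketch below assumes the event classes from the aws-lambda-java-events library; the handler name and response body are illustrative only.

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;

// An HTTP-triggered function: API Gateway converts the incoming HTTP request
// into an event and turns the returned object back into an HTTP response.
public class HttpTriggeredHandler
        implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    @Override
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent request, Context context) {
        String path = request.getPath(); // details of the HTTP request are available on the event
        return new APIGatewayProxyResponseEvent()
                .withStatusCode(200)
                .withBody("{\"message\":\"handled " + path + "\"}");
    }
}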

  • A FaaS function runs only for a short period of time (at the time of writing, a maximum of 5 minutes for a Lambda function)
  • Cost is incurred only for the resources actually used (billing is per invocation and for execution duration, rather than a flat subscription)
  • Ideal for hugely fluctuating traffic as well as typical user traffic

All the points mentioned above suggest that Serverless Computing, or Serverless Architecture, is worth giving a shot. It simplifies the maintenance of backend systems while giving cost benefits for handling all sorts of different user behaviors. But, as Newton’s third law reminds us, every action has a reaction, meaning there are some things we need to be careful about when dealing with serverless architectures.

  • Lack of monitoring and debugging capabilities for the production system and its behavior; users have to trust whatever monitoring options the vendor provides
  • Vendor lock-in can cause problems such as frequent mandatory API changes, pricing structure changes and technology changes
  • Latencies that occur on initial requests (cold starts) can make it challenging to provide good SLAs across many concurrent users
  • Since server instances come and go, maintaining the state of an application is really challenging with these frameworks (see the sketch after this list)
  • Not suitable for long-running business processes, since function (service) instances are destroyed after a fixed time duration
  • There are other limits as well, such as the maximum TPS a given function can handle within a given user account (on AWS), which is fixed by the vendor
  • End-to-end testing and integration testing are not easy when functions come and go
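
To make the cold-start and state points more concrete, here is a small sketch of a Lambda-style handler in Java. Anything stored in a field survives only across warm invocations of the same container and is lost whenever the container is recycled, which is exactly why initial requests are slower and why durable state has to live outside the function. The class and field names are illustrative.

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.time.Duration;
import java.time.Instant;

// Work done in the constructor runs only when a new container is started
// (a cold start); warm invocations reuse the same instance. The field below
// is not durable: the container can be discarded at any time, so real
// application state must be kept in an external store.
public class ColdStartAwareHandler implements RequestHandler<String, String> {

    private final Instant initializedAt;

    public ColdStartAwareHandler() {
        // Expensive setup (loading config, creating clients) belongs here so it
        // is paid once per container rather than on every request.
        this.initializedAt = Instant.now();
    }

    @Override
    public String handleRequest(String input, Context context) {
        Duration age = Duration.between(initializedAt, Instant.now());
        // An age close to zero means this invocation paid the cold-start cost.
        return "container age: " + age.toMillis() + " ms";
    }
}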

Having said that, these limitations are likely to change in the future as more and more vendors come along with improved versions of their platforms.
Another important thing to understand is the difference between Platform as a Service (PaaS) and FaaS (or BaaS/MBaaS). With PaaS platforms, users can implement their business logic in a variety of programming languages and other well-known technologies, but there are always servers running continuously in the backend specifically for the user’s applications. Because of this behavior, PaaS pricing is based on large chunks of time such as weekly, monthly or yearly, and automatic scaling of resources is not that easy on these platforms. If fine-grained billing and automatic scaling are what you need, a Serverless Computing platform (FaaS) would be a better choice.
Finally, given the popularity achieved by Microservices Architecture (MSA), Serverless Architecture is well suited for adopting an MSA without the hassle of maintaining servers or the scalability and availability headaches. Even though serverless computing has the capabilities to extend the popularity of MSA, it has some limitations when it comes to practical implementations due to the vendor-specific nature of the concepts involved. Hopefully these issues will eventually go away with tools and concepts like the Serverless Framework.
