
Understanding WSO2 API Manager Deployment Patterns

WSO2 API Manager comes with a modularized architecture so that users can scale its components based on their needs. When scaling WSO2 API Manager deployments, it is essential to understand the interconnections between the different components. As depicted in the figure below, WSO2 API Manager has 6 main components.

[Figure: apim-compo.png — the six main components of WSO2 API Manager]

  • Publisher - Used to create and publish APIs and manage the API lifecycle
  • Developer Portal (Store) - Used to discover, subscribe to, and test APIs
  • Gateway - Processes API requests; the traffic-handling component
  • Key Manager - Validates the authenticity of the requests (traffic) coming into the Gateway
  • Analytics - Provides analytics on API usage
  • Traffic Manager - Provides throttling and rate limiting of API access

These components are interconnected by sharing databases and a user store (LDAP). The interconnections are depicted in the figure below.

[Figure: APIMDeployment.png — component interconnections through shared databases and the user store]
As depicted in the above figure, once an API is created and published from the API Publisher, its metadata is pushed into the registry database. The Developer Portal reads that metadata and displays the API in the portal. In the meantime, the API definition is deployed to the API Gateway through a web service (admin service) call from the Publisher to the Gateway. If there are any throttling policies attached to a given API, that information is pushed to the Traffic Manager. User information is shared through the user management database as well as the user store (LDAP). When an API consumer generates a token for their application, that information is stored in the API management database, which is shared across the Publisher, Developer Portal, and Key Manager. When the application sends an API call to the Gateway, the Gateway calls the Key Manager, which validates the token against the API management database.
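The consumer-side part of this flow can be illustrated with a small sketch. This is a minimal example, assuming the default Gateway HTTPS port (8243), the standard /token endpoint, and a hypothetical /pizzashack/1.0.0/menu API context; the consumer key and secret would come from an application created in the Developer Portal.

```python
# Minimal sketch of the token generation and API invocation flow described above.
# Assumptions: default Gateway HTTPS port 8243, the standard /token endpoint,
# and a hypothetical API context /pizzashack/1.0.0/menu.
import base64
import requests

GATEWAY = "https://localhost:8243"
CONSUMER_KEY = "<consumer-key-from-dev-portal>"
CONSUMER_SECRET = "<consumer-secret-from-dev-portal>"

# 1. The application requests an access token (client credentials grant).
#    The Key Manager issues the token and records it in the API management database.
credentials = base64.b64encode(f"{CONSUMER_KEY}:{CONSUMER_SECRET}".encode()).decode()
token_response = requests.post(
    f"{GATEWAY}/token",
    headers={"Authorization": f"Basic {credentials}"},
    data={"grant_type": "client_credentials"},
    verify=False,  # self-signed certificate in a default installation
)
access_token = token_response.json()["access_token"]

# 2. The application invokes the API through the Gateway; the Gateway asks the
#    Key Manager to validate the token against the shared API management database.
api_response = requests.get(
    f"{GATEWAY}/pizzashack/1.0.0/menu",
    headers={"Authorization": f"Bearer {access_token}"},
    verify=False,
)
print(api_response.status_code, api_response.text)
```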

That is the basic information about the componentized architecture and how the components interact with each other during API creation, publishing, subscription, and invocation. When these components are distributed for scalability, these interconnections need to be maintained using the relevant database sharing options mentioned above.

WSO2 API Manager All-in-one deployment

This is the simplest deployment pattern, which can be set up using the WSO2 API Manager and APIM Analytics components. It is suitable if you have considerably low traffic requirements (< 1000 TPS) and do not need to worry much about scalability. The deployment consists of 2 all-in-one nodes in active/active mode with one APIM Analytics node.

[Figure: api-dep-1.png — all-in-one deployment with two active/active nodes behind a load balancer]
If you are going with this approach, it is essential to deploy the load balancer in the DMZ and keep the APIM deployment within the LAN, as depicted above.
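For the active/active pair, the load balancer needs a health check against each node. The sketch below shows one way to probe both nodes; the hostnames are hypothetical, and /services/Version is a commonly used Carbon health-check endpoint, but verify the probe your load balancer should use against your APIM version's documentation.

```python
# Minimal sketch of a load-balancer-style health check for the two active/active
# all-in-one nodes. Hostnames are hypothetical.
import requests

NODES = [
    "https://apim-node-1.internal:9443",
    "https://apim-node-2.internal:9443",
]

for node in NODES:
    try:
        response = requests.get(f"{node}/services/Version", verify=False, timeout=5)
        healthy = response.status_code == 200
    except requests.RequestException:
        healthy = False
    print(f"{node}: {'UP' if healthy else 'DOWN'}")
```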

WSO2 API Manager partially distributed deployment

The next level of deployment pattern is to partially distribute the API Manager components so that scaling and non-scaling profiles are separated. In WSO2 APIM, there are 3 major scaling components:
  • Gateway
  • Key Manager
  • Traffic Manager
Compared to the Gateway and Key Manager, the Traffic Manager scales at a low rate (roughly 10:1, i.e., around one Traffic Manager node for every ten Gateway nodes), and hence we can consider it a non-scaling component. For that reason, if you need some degree of scalability while keeping the setup less complex and less costly, you can use the partially distributed deployment shown in the figure below.

In this setup, the Key Manager and Gateway components are installed in separate JVMs, while the Publisher, Store, and Traffic Manager components are installed in the same JVM. If you need to scale this solution down further, you can consider combining the Gateway and Key Manager nodes into one JVM so that you reduce the number of nodes by 2, but that is optional.

WSO2 API Manager fully distributed deployment

If users need full control of the WSO2 API Manager deployment along with scalability, they can use the fully distributed deployment depicted in the figure below. In this setup, all the components are deployed in separate JVM instances, and based on the requirements, each profile can be scaled up and down independently.

[Figure: full-distributed-apim-dep.png — fully distributed deployment with each component in its own JVM]
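In a fully distributed setup, each node is started with its corresponding product profile. The sketch below shows one way to script this, assuming a Linux installation and APIM 2.x profile names; verify the profile names and startup script against your version's documentation.

```python
# Minimal sketch of starting each component with its product profile.
# Assumptions: a Linux installation, APIM_HOME pointing at the unpacked
# distribution on each node, and APIM 2.x profile names (verify against
# your version's documentation).
import subprocess

APIM_HOME = "/opt/wso2am"  # hypothetical install location

PROFILES = [
    "api-publisher",
    "api-store",
    "api-key-manager",
    "gateway-worker",
    "traffic-manager",
]

def start_profile(profile: str) -> None:
    """Start a WSO2 APIM node with the given product profile."""
    subprocess.Popen(
        [f"{APIM_HOME}/bin/wso2server.sh", f"-Dprofile={profile}", "start"],
    )

# In practice each profile runs on its own node/JVM; this loop is only illustrative.
for profile in PROFILES:
    start_profile(profile)
```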

WSO2 API Manager Internal/External deployment

If your organization needs to separate the internal and external API gateways, that can be achieved using this deployment pattern. In this setup, there is only one API Publisher, which publishes the APIs, and there are separate gateways for handling internal and external traffic.


When publishing an API, users can select which gateways the API should be published to, and the Store will display only the APIs that are relevant to specific tenants and accessible from outside.
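The effect of publishing to selected gateways can be seen by invoking the same resource through both gateways, as in the sketch below; the hostnames and API context are hypothetical.

```python
# Minimal sketch of invoking the same API through the internal and external
# gateways. Hostnames and the API context are hypothetical; each gateway only
# serves the APIs that were published to it.
import requests

INTERNAL_GATEWAY = "https://gw.internal.example.com:8243"
EXTERNAL_GATEWAY = "https://api.example.com:8243"
ACCESS_TOKEN = "<access-token>"

for gateway in (INTERNAL_GATEWAY, EXTERNAL_GATEWAY):
    response = requests.get(
        f"{gateway}/orders/1.0.0/status",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        verify=False,
    )
    # An API published only to the internal gateway is not deployed on the
    # external gateway, so the external gateway returns a "not found" response.
    print(gateway, response.status_code)
```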
