
Performance Tuning WSO2 ESB with a practical example

WSO2 ESB is arguably the highest performing open source ESB available in the industry. With the default settings, it provides really good performance out of the box. You can find the official performance tuning guide at the link below.

https://docs.wso2.com/display/ESB481/Performance+Tuning

The above document lists a lot of parameters and explains how to change them for the best performance. Even though it provides some recommended values, tuning is not straightforward: the values you put in the configuration files are highly dependent on your use case. The idea of this blog post is to provide a practical example of tuning these parameters with a sample use case.

For the performance test, I am using a simple proxy service which iterates through a set of XML elements, sends a request for each element to a sample back-end server, and aggregates the responses back to the client.

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="SplitAggregateProxy"
       startOnLoad="true">
   <target>
      <inSequence>
         <iterate xmlns:m0="http://services.samples"
                  preservePayload="true"
                  attachPath="//m0:getQuote"
                  expression="//m0:getQuote/m0:request">
            <target>
               <sequence>
                  <send>
                     <endpoint>
                        <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
                     </endpoint>
                  </send>
               </sequence>
            </target>
         </iterate>
      </inSequence>
      <outSequence>
         <aggregate>
            <completeCondition>
               <messageCount/>
            </completeCondition>
            <onComplete xmlns:m0="http://services.samples" expression="//m0:getQuoteResponse">
               <send/>
            </onComplete>
         </aggregate>
      </outSequence>
   </target>
</proxy>

The reason for selecting this type of proxy is that it allows us to exercise the different thread pools inside the WSO2 ESB. The request used for this proxy is given below.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ser="http://services.samples" xmlns:xsd="http://services.samples/xsd">
   <soapenv:Header/>
   <soapenv:Body>
      <ser:getQuote>
         <ser:request>
            <xsd:symbol>IBM</xsd:symbol>
         </ser:request>
         <ser:request>
            <xsd:symbol>IBM</xsd:symbol>
         </ser:request>
         <ser:request>
            <xsd:symbol>IBM</xsd:symbol>
         </ser:request>
         <ser:request>
            <xsd:symbol>IBM</xsd:symbol>
         </ser:request>
         <ser:request>
            <xsd:symbol>IBM</xsd:symbol>
         </ser:request>
         <ser:request>
            <xsd:symbol>IBM</xsd:symbol>
         </ser:request>
         <ser:request>
            <xsd:symbol>IBM</xsd:symbol>
         </ser:request>
         <ser:request>
            <xsd:symbol>IBM</xsd:symbol>
         </ser:request>
         <ser:request>
            <xsd:symbol>IBM</xsd:symbol>
         </ser:request>
         <ser:request>
            <xsd:symbol>IBM</xsd:symbol>
         </ser:request>
         <ser:request>
            <xsd:symbol>IBM</xsd:symbol>
         </ser:request>
      </ser:getQuote>
   </soapenv:Body>
</soapenv:Envelope>

We have used Apache JMeter as the test client and created a thread group with the following configuration.

Number of Threads (Users) - 50
Ramp-up period (seconds) - 10
Loop count - 200
Total requests - 10000
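
For reference, a test plan like this can be run in JMeter's non-GUI mode roughly as shown below; the test plan and result file names here are just placeholders, not files from this post. The -n flag runs JMeter without the GUI, -t points to the test plan and -l writes the raw results that can later be used to calculate TPS and latency.

jmeter -n -t SplitAggregateTest.jmx -l results.jtl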

With this information in hand, we ran WSO2 ESB 4.8.1 on an Ubuntu Linux machine with 8 GB of memory and a 4-core CPU.

First, let's go through the parameters we have tuned while carrying out this performance test.

(ESB_HOME/bin/wso2server.sh)
Memory (Xmx) - The maximum heap memory allocated to the JVM. We kept both Xmx and Xms at the same value.
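
As a rough illustration (the exact script contents differ between ESB versions, and the values below are examples rather than recommendations), the heap settings appear among the JVM arguments in wso2server.sh and can be changed like this:

# ESB_HOME/bin/wso2server.sh (excerpt; values are illustrative)
# Keeping -Xms and -Xmx equal avoids heap resizing under load.
    -Xms2g \
    -Xmx2g \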

(ESB_HOME/repository/conf/synapse.properties)
synapse.threads.core - The core number of threads in the ThreadPoolExecutor used to execute Iterate (and Clone) mediator targets

synapse.threads.qlen - The length of the task queue of that same ThreadPoolExecutor
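
These properties live in synapse.properties alongside the pool's maximum size. A minimal sketch is shown below, using the commonly shipped defaults as a starting point; verify the actual defaults against your own distribution.

# ESB_HOME/repository/conf/synapse.properties (illustrative values)
# Core threads for the Synapse worker pool used by Iterate/Clone
synapse.threads.core=20
# Maximum threads; only used when the task queue is bounded and becomes full
synapse.threads.max=100
# Task queue length; -1 means an unbounded queue
synapse.threads.qlen=10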

(ESB_HOME/repository/conf/passthru-http.properties)
worker_pool_size_core - The core number of threads in the ThreadPoolExecutor used for processing incoming requests to the ESB

io_buffer_size - The size of the memory buffer used for reading/writing data from/to the underlying NIO channels
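
A minimal sketch of how these look in passthru-http.properties is given below; the numbers are illustrative starting points, not recommendations, so check the defaults shipped with your ESB version.

# ESB_HOME/repository/conf/passthru-http.properties (illustrative values)
# Core and maximum sizes of the Server/Client Worker thread pool
worker_pool_size_core=400
worker_pool_size_max=500
# Size (in bytes) of the buffer used to read from / write to the NIO channels
io_buffer_size=16384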


Performance tests were carried out while monitoring the server with the Java Mission Control tool. The server load was kept at a healthy level of around 60-70% CPU usage with a load average of around 3-4 (on 4 cores). No GC-related issues were observed during the tests.
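
The post does not describe exactly how the recordings were taken; as one possible setup on an Oracle JDK that ships Java Flight Recorder (7u40 or later), you could start the ESB with the flags below, trigger a recording with jcmd, and then open the resulting file in Java Mission Control.

# JVM flags to enable Flight Recorder (add to the ESB start-up options)
-XX:+UnlockCommercialFeatures -XX:+FlightRecorder

# Capture a recording from a running ESB process (replace <pid>)
jcmd <pid> JFR.start duration=120s filename=esb-tuning.jfr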

You can read the following article to get an understanding of how to reason about server performance and the general methods for tuning servers.

http://www.infoq.com/articles/Tuning-Java-Servers

We captured both the latency and the TPS (transactions per second) values to measure the performance of the server.


Performance variation with Memory allocation

Theoretically, the performance of the server should improve as the allocated heap memory increases. By performance, we consider both the latency and the TPS of the server. According to the results below, TPS increases with the allocated memory while latency decreases (i.e., performance improves).

[Graphs: TPS and latency variation with the allocated heap memory]

Performance variation with number of Server Worker/Client Worker Threads

WSO2 ESB uses a ThreadPoolExecutor to create worker threads (Server Worker/Client Worker threads) when there is data from client requests to be processed. The worker_pool_size_core parameter controls the core number of threads in this pool. By increasing the pool size, we would expect a performance improvement. According to the graphs below, latency is reduced and TPS is slightly improved as this parameter increases (performance increases with the number of threads).

[Graphs: TPS and latency variation with worker_pool_size_core]

Performance variation with Synapse Worker Threads count

When the Iterate or Clone mediator is used within the ESB, a separate thread pool is used to create new threads for processing the split messages. The size of this thread pool is configured with the synapse.threads.core parameter. By increasing this value, we would expect better performance when the Iterate mediator is used. According to the test results, performance increases when the value is changed from 20 to 100. Beyond that, we see some performance degradation: as the number of threads in the system grows, the extra load on the operating system's thread scheduler starts to hurt performance.

[Graphs: TPS and latency variation with synapse.threads.core]

Performance variation with Synapse Worker Queue length

When the Iterate or Clone mediator is used within the ESB, a separate thread pool is used to create new threads for processing the split messages. We can configure the length of this thread pool's task queue with the synapse.threads.qlen parameter. With a finite queue length, the pool creates new threads (up to the configured maximum) only when all the core threads are busy and the task queue is full; that is the only time the pool's max size comes into play. If the queue length is unbounded (-1), the max value is never used and there will only ever be the core number of threads. According to the results, we see better performance with a finite queue length. One thing to note is that with a bounded queue, requests can be rejected when the task queue is full and all threads are occupied, so you need to make this decision based on your actual load. Another point is that with an unbounded queue, if threads become blocked, tasks keep piling up in the queue and the server can eventually run into an out-of-memory (OOM) situation.
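
As a sketch of the two modes described above (the bounded queue length of 1000 is purely illustrative):

# Unbounded task queue: only the core number of threads is ever created
synapse.threads.qlen=-1

# Bounded task queue: once the core threads are busy and the queue is full,
# the pool grows up to synapse.threads.max; further tasks may be rejected
synapse.threads.qlen=1000
synapse.threads.max=100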

[Graphs: TPS and latency variation with synapse.threads.qlen]

Performance variation with IO Buffer size

The IO buffer size parameter (io_buffer_size) decides the size of the memory buffer allocated for reading data into memory from the underlying socket/file channels. It can be configured according to the average size of the payloads passing through the ESB. From the results we observed, we cannot come to a clear conclusion for this scenario, since the request/response sizes were below 4 KB during this test.

[Graphs: TPS and latency variation with io_buffer_size]

According to the above results, we can see that tuning the WSO2 ESB is not straightforward and you need a proper understanding of your own use cases before changing these parameters.



