
Monitoring Garbage Collection of WSO2 ESB

Garbage collection is one of the most important features of the Java programming language and a big part of why it became the default choice for building enterprise applications. WSO2 ESB is written entirely in Java, and garbage collection is closely tied to the performance of any Java program. Since WSO2 ESB needs to deliver maximum performance for enterprise integrations, monitoring its GC behavior is essential. In this blog post, I will discuss different tools we can use to monitor the GC performance of WSO2 ESB.

1) Monitoring GC activity using jstat command

We can use the jstat command line tool, which comes with the JDK, to monitor the GC activity of a Java program. Let's start the WSO2 ESB server by executing the wso2server.sh file located in the ESB_HOME/bin directory.

sh wso2server.sh start

Then we need to find the process ID of this Java process using the following command.

ps -ef | grep wso2esb | grep java


501 13352 13345   0  7:25PM ttys000    1:18.41
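
Alternatively, the jps tool that ships with the JDK lists running Java processes together with their main class, which makes it easier to spot the ESB process; the first column of its output is the process ID we need.

jps -l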

We can now execute the jstat command with this process ID.

jstat -gc 13352 1000

In the above command, the last argument is the interval (in milliseconds) at which statistics are printed. With the above command, jstat prints statistics every second.

 S0C       S1C      S0U   S1U      EC          EU            OC            OU        PC          PU     YGC   YGCT   FGC  FGCT    GCT   
49664.0 50688.0  0.0    0.0   246272.0 135276.4  175104.0   91437.3   114688.0 61223.4     24    0.954   1      0.864    1.818
49664.0 50688.0  0.0    0.0   246272.0 135276.7  175104.0   91437.3   114688.0 61223.4     24    0.954   1      0.864    1.818
49664.0 50688.0  0.0    0.0   246272.0 135281.1  175104.0   91437.3   114688.0 61223.4     24    0.954   1      0.864    1.818
49664.0 50688.0  0.0    0.0   246272.0 135281.1  175104.0   91437.3   114688.0 61223.4     24    0.954   1      0.864    1.818
49664.0 50688.0  0.0    0.0   246272.0 135281.1  175104.0   91437.3   114688.0 61223.4     24    0.954   1      0.864    1.818
49664.0 50688.0  0.0    0.0   246272.0 135281.2  175104.0   91437.3   114688.0 61223.4     24    0.954   1      0.864    1.818
49664.0 50688.0  0.0    0.0   246272.0 135285.7  175104.0   91437.3   114688.0 61223.4     24    0.954   1      0.864    1.818

The above output provides detailed information on the GC activity of the Java process.

  • S0C and S1C: These columns show the current sizes of the Survivor0 and Survivor1 areas in KB.
  • S0U and S1U: These columns show the current usage of the Survivor0 and Survivor1 areas in KB. Notice that one of the survivor areas is always empty.
  • EC and EU: These columns show the current size and usage of the Eden space in KB. Note that EU keeps growing, and as soon as Eden fills up a minor GC is triggered and the EU value drops.
  • OC and OU: These columns show the current size and current usage of the old generation in KB.
  • PC and PU: These columns show the current size and current usage of the Perm Gen in KB.
  • YGC and YGCT: The YGC column displays the number of GC events that have occurred in the young generation, and the YGCT column displays the accumulated time spent on young-generation GC. Notice that both increase in the same row where the EU value drops because of a minor GC.
  • FGC and FGCT: The FGC column displays the number of Full GC events that have occurred, and the FGCT column displays the accumulated time spent on Full GC. Notice that a single Full GC takes much longer than a young-generation GC.
  • GCT: This column displays the total accumulated time for GC operations. Notice that it is the sum of the YGCT and FGCT values (0.954 + 0.864 = 1.818).
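
If you prefer percentages over raw sizes in KB, jstat also provides the -gcutil option, which reports the utilization of each space as a percentage of its capacity together with the same event counters and timings. For example, the following samples the same process every second and stops after ten samples:

jstat -gcutil 13352 1000 10
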
2) Monitoring GC activity using JVisualVM and Visual GC

If you want to monitor GC activity graphically, you can use the jvisualvm tool, which comes with the JDK, together with the Visual GC plugin.

Just run the jvisualvm command in the terminal to launch the Java VisualVM application.

Once it is launched, install the Visual GC plugin from Tools -> Plugins -> Available Plugins (tab).

After installing Visual GC, open the application (by double-clicking it) in the left-hand column and head over to the Visual GC section.

[Screenshot: the Visual GC tab in Java VisualVM showing the GC activity of the WSO2 ESB process]

As depicted in the above diagram, you can visually monitor the GC activities of WSO2 ESB using the jvisualvm tool.
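
Visual GC works out of the box for a locally running ESB. If the ESB runs on a remote host, the Visual GC plugin relies on jvmstat data, so you would typically run the JDK's jstatd daemon on that host and then add the remote host under the Remote node in VisualVM's Applications view. The following is only a minimal sketch; the policy file path and contents are illustrative and grant broad permissions, so tighten them before using this on a production server.

# on the remote ESB host
cat > /tmp/jstatd.policy <<'EOF'
grant codebase "file:${java.home}/../lib/tools.jar" {
    permission java.security.AllPermission;
};
EOF
jstatd -J-Djava.security.policy=/tmp/jstatd.policy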

3) Monitoring GC activity using GC log file

In most production environments, we don't want to attach separate monitoring programs to the running process. Instead, we would like GC logging to be an integral part of the application itself. We can enable GC logging on the JVM so that it records all GC activity in a separate log file, which monitoring tools can then analyze without interacting with the application directly.

You can enable GC logging to an external log file by adding the following flags to the startup script of the WSO2 ESB (wso2server.sh):

    -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps \
    -Xloggc:/Users/chanaka-mac/Documents/wso2/wso2esb-4.8.1/repository/logs/gc.log \
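
These two lines are only the GC-logging portion of the JVM options; in wso2server.sh they sit among the other -X/-XX flags passed to the java command. On HotSpot JVMs (Java 7/8) you can also rotate the log so that it does not grow without bound. The following is a sketch with illustrative values, using the CARBON_HOME variable that the startup script already defines:

    -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps \
    -Xloggc:"$CARBON_HOME/repository/logs/gc.log" \
    -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=10M \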

When you start the server with these flags included in the wso2server.sh file, you can observe that the gc.log file is populated with the relevant GC activity, as depicted below.

Chanakas-MacBook-Air:logs chanaka-mac$ tail -f gc.log 
2015-06-20T20:04:22.222-0530: 20.347: [GC [PSYoungGen: 253355K->31555K(285184K)] 354302K->132534K(460288K), 0.0179690 secs] [Times: user=0.02 sys=0.00, real=0.02 secs] 
2015-06-20T20:04:23.031-0530: 21.156: [GC [PSYoungGen: 262979K->33422K(288256K)] 363958K->134431K(463360K), 0.0183790 secs] [Times: user=0.02 sys=0.00, real=0.01 secs] 
2015-06-20T20:04:23.797-0530: 21.922: [GC [PSYoungGen: 264846K->35384K(292352K)] 365855K->136426K(467456K), 0.0192760 secs] [Times: user=0.02 sys=0.00, real=0.02 secs] 
2015-06-20T20:04:24.468-0530: 22.593: [GC [PSYoungGen: 271928K->33834K(292864K)] 372970K->134994K(467968K), 0.0195170 secs] [Times: user=0.03 sys=0.00, real=0.02 secs] 
2015-06-20T20:04:25.162-0530: 23.287: [GC [PSYoungGen: 270378K->29641K(288768K)] 371538K->130840K(463872K), 0.0186680 secs] [Times: user=0.03 sys=0.00, real=0.02 secs] 
2015-06-20T20:04:26.547-0530: 24.672: [GC [PSYoungGen: 267721K->2845K(290816K)] 368920K->104320K(465920K), 0.0069150 secs] [Times: user=0.02 sys=0.00, real=0.01 secs] 
2015-06-20T20:04:29.429-0530: 27.554: [GC [PSYoungGen: 240925K->9509K(294400K)] 342400K->111496K(469504K), 0.0123910 secs] [Times: user=0.04 sys=0.00, real=0.02 secs] 
2015-06-20T20:04:32.290-0530: 30.415: [GC [PSYoungGen: 248613K->28794K(268288K)] 350600K->134734K(443392K), 0.0373390 secs] [Times: user=0.13 sys=0.01, real=0.03 secs] 
2015-06-20T20:04:37.493-0530: 35.618: [GC [PSYoungGen: 249742K->21673K(287744K)] 355682K->152515K(462848K), 0.0903050 secs] [Times: user=0.16 sys=0.02, real=0.09 secs] 
2015-06-20T20:04:37.584-0530: 35.709: [Full GC [PSYoungGen: 21673K->0K(287744K)] [ParOldGen: 130841K->80598K(175104K)] 152515K->80598K(462848K) [PSPermGen: 57507K->57484K(115200K)], 0.8345630 secs] [Times: user=1.68 sys=0.14, real=0.83 secs] 

From the above log, we can extract information about the GC activity going on within the WSO2 ESB server. Taking the first line as an example:

  1. 2015-06-20T20:04:22.222-0530: – The time when the GC event started.
  2. 20.347 – The time when the GC event started, relative to JVM startup, measured in seconds.
  3. GC – Flag to distinguish between minor and Full GC. Here it indicates a minor GC.
  4. PSYoungGen – The collection type; the Parallel Scavenge collector working on the young generation.
  5. 253355K->31555K – Usage of the young generation before and after the collection.
  6. (285184K) – Total size of the young generation.
  7. 354302K->132534K – Total used heap before and after the collection.
  8. (460288K) – Total available heap.
  9. 0.0179690 secs – Duration of the GC event in seconds.
  10. [Times: user=0.02 sys=0.00, real=0.02 secs] – Duration of the GC event, broken down into categories:
    • user – Total CPU time consumed by garbage collector threads during this collection.
    • sys – Time spent in OS calls or waiting for system events.
    • real – Wall-clock time for which the application was stopped. With the serial collector, real time equals the sum of user and sys; with a parallel collector such as the one here, user time can exceed real time because several GC threads run in parallel.
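
Because gc.log is a plain text file, a few shell commands are often enough for a first-pass summary. As a rough sketch that assumes the log format shown above, the following counts the Full GC events and sums the real (wall-clock) pause time recorded so far:

# number of Full GC events logged
grep -c "Full GC" gc.log

# total real pause time across all logged collections
awk -F'real=' '/real=/ {split($2, t, " "); total += t[1]} END {print total, "secs"}' gc.log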

I hope this blog post has provided you with a comprehensive guide to monitoring GC activity in WSO2 ESB for performance tuning. I will discuss the GC performance of different algorithms with WSO2 ESB in a future blog post.

Happy GC Monitoring !!!

Comments

  1. Great post Chanaka. Thank you. Also, I noticed that in the last line of the GC log output the real time is lower than user+sys. For anyone interested, this is apparently due to multi-core processing: http://unix.stackexchange.com/questions/40694/why-real-time-can-be-lower-than-user-time

    According to that thread, a good rule of thumb is:
    real < user: The process is CPU bound and takes advantage of parallel execution on multiple cores/CPUs.
    real ≈ user: The process is CPU bound and takes no advantage of parallel execution.
    real > user: The process is I/O bound. Execution on multiple cores would be of little to no advantage.


