Why do you need to handle peak loads in your system?
Enterprises rely heavily on their various systems, and they particularly want those systems to run smoothly under peak load. Take an online shopping website such as eBay, Amazon, or Walmart: these businesses want their systems to work at their best when they face their maximum loads, for example during the summer season and New Year festivals. If the system crashes in such a scenario, the entire business goes down with it, and the company may never recover. In other words, your systems need to run without any issues under peak load.
Why WSO2 ESB?
You can find plenty of systems that are capable of handling peak load scenarios. WSO2 ESB is a lean, enterprise-ready ESB that comes with a rich set of features that every enterprise system requires.
How to handle peak loads of your system?
Almost every system has a limited capacity for handling information. If your peak load exceeds that capacity, your system will crash. There are three common approaches to handling peak load scenarios.
1. Build your system with such a high capacity that its peak loads will never reach it.
- If your company has plenty of money, you can take this approach.
2. Build your system with alternate resources that become active if one resource goes down due to peak load (fail-over resources).
- The system is handled by a single resource until you hit a peak, and switches over if something goes wrong. Cost effective and resource effective.
3. Build your system with parallel resources that handle the peak loads by dividing the load equally among them (load-balancing resources), as sketched below.
- The system load is handled by several resources at the same time, and peak loads are divided among them. Cost effective.
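As a preview of how the last two approaches map onto WSO2 ESB configuration, here is a minimal sketch of the two endpoint types used in the samples later in this post. The service URLs are placeholders; the full, working configurations follow below.

<!-- Approach 2: fail-over resources -->
<endpoint>
    <failover>
        <endpoint><address uri="http://primary.example.com/services/MyService"/></endpoint>
        <endpoint><address uri="http://backup.example.com/services/MyService"/></endpoint>
    </failover>
</endpoint>

<!-- Approach 3: load-balancing resources -->
<endpoint>
    <loadbalance>
        <endpoint><address uri="http://node1.example.com/services/MyService"/></endpoint>
        <endpoint><address uri="http://node2.example.com/services/MyService"/></endpoint>
    </loadbalance>
</endpoint>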
How WSO2 ESB handles peak loads?
First, you need to download and install WSO2 ESB on your system. You can follow this link to achieve that.
Then set up your system to run the samples as mentioned in this link.
Now you are good to go. WSO2 ESB can handle your peak loads with either fail-over or load-balancing endpoints.
1. Fail-over endpoint
WSO2 ESB comes with a rich set of samples that describe how to handle your system with the fail-over endpoint. Here is a sample configuration which can be used to achieve this.
<definitions xmlns="http://ws.apache.org/ns/synapse">
    <sequence name="main" onError="errorHandler">
        <in>
            <send>
                <endpoint>
                    <failover>
                        <endpoint>
                            <address uri="http://localhost:9001/services/LBService1">
                                <enableAddressing/>
                                <suspendDurationOnFailure>60</suspendDurationOnFailure>
                            </address>
                        </endpoint>
                        <endpoint>
                            <address uri="http://localhost:9002/services/LBService1">
                                <enableAddressing/>
                                <suspendDurationOnFailure>60</suspendDurationOnFailure>
                            </address>
                        </endpoint>
                        <endpoint>
                            <address uri="http://localhost:9003/services/LBService1">
                                <enableAddressing/>
                                <suspendDurationOnFailure>60</suspendDurationOnFailure>
                            </address>
                        </endpoint>
                    </failover>
                </endpoint>
            </send>
            <drop/>
        </in>
        <out>
            <!-- Send the messages where they have been sent (i.e. implicit To EPR) -->
            <send/>
        </out>
    </sequence>
    <sequence name="errorHandler">
        <makefault>
            <reason value="COULDN'T SEND THE MESSAGE TO THE SERVER."/>
        </makefault>
        <header name="To" action="remove"/>
        <property name="RESPONSE" value="true"/>
        <send/>
    </sequence>
</definitions>
Here you have three endpoints that can handle the requests coming into your system. You can start these endpoints by starting three sample axis2Server instances that come with WSO2 ESB.
Deploy the LoadbalanceFailoverService by switching to the <ESB_HOME>/samples/axis2Server/src/LoadbalanceFailoverService directory and running ant.
Start three instances of sample Axis2 server on HTTP ports 9001, 9002 and 9003 and give some unique names to each server.
Example commands to run the sample Axis2 servers from the <ESB_HOME>/samples/axis2Server directory on Linux are listed below:
./axis2server.sh -http 9001 -https 9005 -name MyServer1
./axis2server.sh -http 9002 -https 9006 -name MyServer2
./axis2server.sh -http 9003 -https 9007 -name MyServer3
The above configuration sends messages with fail-over behavior. Initially, the server on port 9001 is treated as the primary and the other two are treated as backups. Messages are always directed only to the primary server. If the primary server fails, the next listed server is selected as the primary. Thus, messages are sent successfully as long as there is at least one active server. To test this, run the loadbalancefailover client to send an infinite stream of requests as follows:
ant loadbalancefailover
You can see that all requests are processed by MyServer1. Now shut down MyServer1 and inspect the console output of the client. You will observe that all subsequent requests are processed by MyServer2.
The console output with MyServer1 shut down after request 127 is listed below:
...
[java] Request: 125 ==> Response from server: MyServer1
[java] Request: 126 ==> Response from server: MyServer1
[java] Request: 127 ==> Response from server: MyServer1
[java] Request: 128 ==> Response from server: MyServer2
[java] Request: 129 ==> Response from server: MyServer2
[java] Request: 130 ==> Response from server: MyServer2
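In the configuration above, suspendDurationOnFailure keeps a failed endpoint out of rotation for 60 seconds. Newer WSO2 ESB releases usually express the same idea with a suspendOnFailure block, which also supports a progression factor so that a repeatedly failing endpoint is suspended for longer and longer periods. Here is a minimal sketch; the element names follow the standard endpoint syntax, but treat the values as illustrative only.

<endpoint>
    <address uri="http://localhost:9001/services/LBService1">
        <!-- suspend the endpoint when a failure is detected -->
        <suspendOnFailure>
            <initialDuration>60000</initialDuration>    <!-- first suspension: 60 seconds -->
            <progressionFactor>2.0</progressionFactor>  <!-- double the suspension on each repeated failure -->
            <maximumDuration>600000</maximumDuration>   <!-- never suspend for longer than 10 minutes -->
        </suspendOnFailure>
    </address>
</endpoint>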
2. Load balanced endpoint
WSO2 ESB comes with a rich set of samples that describe how to handle your system with the load-balanced endpoint. Here is a sample configuration which can be used to achieve this.
<definitions xmlns="http://ws.apache.org/ns/synapse">
    <sequence name="main" onError="errorHandler">
        <in>
            <send>
                <endpoint>
                    <loadbalance>
                        <endpoint>
                            <address uri="http://localhost:9001/services/LBService1">
                                <enableAddressing/>
                                <suspendDurationOnFailure>60</suspendDurationOnFailure>
                            </address>
                        </endpoint>
                        <endpoint>
                            <address uri="http://localhost:9002/services/LBService1">
                                <enableAddressing/>
                                <suspendDurationOnFailure>60</suspendDurationOnFailure>
                            </address>
                        </endpoint>
                        <endpoint>
                            <address uri="http://localhost:9003/services/LBService1">
                                <enableAddressing/>
                                <suspendDurationOnFailure>60</suspendDurationOnFailure>
                            </address>
                        </endpoint>
                    </loadbalance>
                </endpoint>
            </send>
            <drop/>
        </in>
        <out>
            <!-- Send the messages where they have been sent (i.e. implicit To EPR) -->
            <send/>
        </out>
    </sequence>
    <sequence name="errorHandler">
        <makefault response="true">
            <reason value="COULDN'T SEND THE MESSAGE TO THE SERVER."/>
        </makefault>
        <send/>
    </sequence>
</definitions>
Deploy the LoadbalanceFailoverService by switching to the <ESB_HOME>/samples/axis2Server/src/LoadbalanceFailoverService directory and running ant.
Start three instances of sample Axis2 server on HTTP ports 9001, 9002 and 9003 and give some unique names to each server.
Example commands to run the sample Axis2 servers from the <ESB_HOME>/samples/axis2Server directory on Linux are listed below:
./axis2server.sh -http 9001 -https 9005 -name MyServer1
./axis2server.sh -http 9002 -https 9006 -name MyServer2
./axis2server.sh -http 9003 -https 9007 -name MyServer3
Now we are done with setting up the environment for load balancing. Start the load balance and failover client using the following command:
ant loadbalancefailover -Di=100
This client sends 100 requests to the LoadbalanceFailoverService through the ESB, which distributes the load among the three endpoints mentioned in the configuration in a round-robin manner. The LoadbalanceFailoverService appends the name of the server to the response, so that the client can determine which server has processed the message. If you examine the console output of the client, you can see that the requests are processed by the three servers as follows:
[java] Request: 1 ==> Response from server: MyServer1
[java] Request: 2 ==> Response from server: MyServer2
[java] Request: 3 ==> Response from server: MyServer3
[java] Request: 4 ==> Response from server: MyServer1
[java] Request: 5 ==> Response from server: MyServer2
[java] Request: 6 ==> Response from server: MyServer3
[java] Request: 7 ==> Response from server: MyServer1
...
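Round-robin is the default dispatch algorithm for the loadbalance endpoint. If you need a different policy, the algorithm attribute lets you plug in another implementation. The sketch below simply makes the default explicit; the class name is the standard Synapse round-robin algorithm, shown here for illustration.

<!-- equivalent to the default behavior: dispatch requests in round-robin order -->
<loadbalance algorithm="org.apache.synapse.endpoints.algorithms.RoundRobin">
    <endpoint>
        <address uri="http://localhost:9001/services/LBService1"/>
    </endpoint>
    <endpoint>
        <address uri="http://localhost:9002/services/LBService1"/>
    </endpoint>
    <endpoint>
        <address uri="http://localhost:9003/services/LBService1"/>
    </endpoint>
</loadbalance>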
These were only two samples of handling peak loads with WSO2 ESB. You can find many more samples in the documentation link below.