Friday, 5 December 2014

Timer Nodes in Message Broker


Scenario 1:- Usage of the TimeoutNotification node in a flow, which fires automatically when the flow is deployed onto the Execution Group.

Step 1:- Create a message flow as shown below:-


Step 2:- Configure the TimeoutNotification node as follows:-





Step 3:- Copy the following ESQL code onto the Compute node:-



The above flow shows how to use the first trigger of the TimeoutNotification node, which happens immediately after the flow is deployed into the Execution Group (or at flow startup). The Compute node 'LoadRefData_&_TransformInput' checks whether the reference data is already loaded into a shared variable. If not, it retrieves the reference data with a database SELECT and stores it in the shared variable. The same Compute node then transforms the message payload using the reference data and generates the output payload. This way the reference data is always loaded at message flow startup, before any input message is processed by the MQInput node. The shared variable makes the reference data available to all subsequent messages processed by the message flow.
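The ESQL behind this node is not reproduced here, so the following is only a minimal sketch of the pattern just described. It assumes a reference table named REF_DATA with KEY and DESCRIPTION columns and a simple Request/Response payload; the table, column, and element names are illustrative assumptions, not the original code.

-- Hypothetical sketch of the 'LoadRefData_&_TransformInput' Compute node.
-- REF_DATA, KEY, DESCRIPTION and the XML element names are assumptions.
CREATE COMPUTE MODULE LoadRefData_TransformInput
    -- SHARED ROW variable: keeps the reference data across messages
    DECLARE refCache SHARED ROW;

    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
        -- Load the reference data only if the cache is still empty
        -- (production code would guard this with an ATOMIC block)
        IF NOT EXISTS(refCache.Item[]) THEN
            SET refCache.Item[] = SELECT R.* FROM Database.REF_DATA AS R;
        END IF;

        -- The TimeoutNotification trigger carries no business payload,
        -- so there is nothing to transform and nothing is propagated
        IF NOT EXISTS(InputRoot.XMLNSC.Request[]) THEN
            RETURN FALSE;
        END IF;

        -- Carry the headers across for the MQOutput node
        SET OutputRoot.Properties = InputRoot.Properties;
        SET OutputRoot.MQMD = InputRoot.MQMD;

        -- Transform the MQInput payload using the cached reference data
        SET OutputRoot.XMLNSC.Response.Key = InputRoot.XMLNSC.Request.Key;
        SET OutputRoot.XMLNSC.Response.Description =
            THE(SELECT ITEM R.DESCRIPTION FROM refCache.Item[] AS R
                WHERE R.KEY = InputRoot.XMLNSC.Request.Key);
        RETURN TRUE;
    END;
END MODULE;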

A ResetContentDescriptor (RCD) node after the Catch node prevents an exception being thrown by the Trace node when a message generated by the TimeoutNotification node or the MQInput node cannot be parsed by the Trace node. The RCD node resets the message domain to BLOB.

*********************************************************************************

Scenario 2:- Usage of TimeoutNotification Node & TimeoutControl node in a flow

*********************************************************************************

Step 1:- Create a flow as shown below:-

Triggering a message flow using configurable trigger parameters (start time, count, interval)


Possible scenarios:-

This scenario occurs when the cached data in the execution group needs to be refreshed at a certain time interval, and that interval is not fixed or predefined. The operations/support team also wants to trigger the cache refresh themselves after they update the reference database. This should be handled with minimal process overhead and impact.
Programming pattern: Given these requirements, the first option would be a separate message flow starting with an MQInput node. The operations/support person would put a predefined XML message onto the MQ queue, and the message flow would have the logic to refresh the cached data in the execution group's JVM. But this approach introduces one more component (a flow, and possibly a Java program to put the predefined message onto the input queue); it adds manual steps and another point of failure for the support team. So let us evaluate a pattern that handles this scenario automatically, without any additional components. The trigger parameters are made configurable in an existing database table that the operations team can update. We then use a combination of TimeoutNotification and TimeoutControl nodes, as shown in the diagram below.

The sample code below shows how to trigger the flow with the 'Interval' value configured in the database table. The same logic can be used for the other parameters (start date/time, count).
Flow processing: With the TimeoutNotification node (Timer1) in automatic mode, the flow retrieves the interval value from the database table every 30 minutes, say (a value assumed here to limit database interaction). The 'getInterval_&_generateReq' Compute node compares this value with the previously stored value in a shared variable (the shared variable is set with the interval value retrieved from the database on the first trigger of Timer1). If the newly retrieved interval differs from the previous value (meaning the operations team changed it), the Compute node constructs a new TimeoutRequest message containing the new interval and sends it on to trigger the flow through the TimeoutControl node and the TimeoutNotification node (Timer2). From then on, the new interval takes effect within at most 30 minutes (the Timer1 interval) of the change in the database configuration. This gives the operations team complete flexibility and control to trigger the flow whenever they want. The database table (APP_PARAM) has the following configuration for the trigger parameters.


The following ESQL code is written in the ‘getInterval_&_generateReq’ node above.
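That ESQL is not reproduced here either, so the sketch below only illustrates the idea. The APP_PARAM column names, the parameter name, the timer identifier, and the chosen Count and AllowOverwrite values are assumptions made for the sketch, not the original code.

-- Hypothetical sketch of the 'getInterval_&_generateReq' Compute node.
CREATE COMPUTE MODULE GetInterval_GenerateReq
    -- Interval seen on the previous Timer1 trigger (0 = not yet read)
    DECLARE lastInterval SHARED INTEGER 0;

    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
        DECLARE newInterval INTEGER;
        -- Read the configurable interval from the database table
        SET newInterval = CAST(THE(SELECT ITEM P.PARAM_VALUE
                                   FROM Database.APP_PARAM AS P
                                   WHERE P.PARAM_NAME = 'CACHE_REFRESH_INTERVAL')
                               AS INTEGER);

        -- Only build a TimeoutRequest when the operations team changed the value
        IF newInterval = lastInterval THEN
            RETURN FALSE;   -- nothing changed, stop here
        END IF;
        SET lastInterval = newInterval;

        -- TimeoutRequest message consumed by the TimeoutControl node
        SET OutputRoot.XMLNSC.TimeoutRequest.Action         = 'SET';
        SET OutputRoot.XMLNSC.TimeoutRequest.Identifier     = 'CacheRefreshTimer'; -- name for this request (assumption)
        SET OutputRoot.XMLNSC.TimeoutRequest.Interval       = newInterval;
        SET OutputRoot.XMLNSC.TimeoutRequest.Count          = -1;     -- number of timeouts; -1 repeats indefinitely
        SET OutputRoot.XMLNSC.TimeoutRequest.AllowOverwrite = 'TRUE'; -- let a new request replace the old one
        RETURN TRUE;
    END;
END MODULE;

For the request to drive Timer2, the TimeoutControl node and the Timer2 TimeoutNotification node must be configured with the same Unique identifier.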

Installing IBM MQ On Windows

Installation On Windows
  1. Log in as Administrator to the destination Windows server using a Remote Desktop Connection tool.
  2. Check the hostname of the Windows server (command: hostname). The hostname should not contain any spaces.
We must check the hardware and software requirements before proceeding with the MQ installation:
  • How much disk space is free and how much is required for our configuration (command: dir | find "free"). About 900 MB is required for a full installation.
  • Speed of the processor.
  • Hardware type, 32-bit or 64-bit (command: systeminfo).
  • Copy the MQ installation image to the destination server.
  • Check whether the MQ installation supports the destination platform by invoking the MQ Launchpad.
  • Check whether an Eclipse installation is required for the chosen MQ features. If it is, install Eclipse first.
How To Install Eclipse & MQSeries

  • Invoke setup.exe from the following path: MQV7.X/prereqs/IES/setup.exe.
  • Find the MQ Launchpad at the following path: MQV7.X/MQLaunch.exe.
  • Click on Software Requirements in the MQ installation wizard. The following screen shows whether the prerequisite software is installed.
  • Click on WebSphere MQ Installation and then click on Launch IBM WebSphere MQ Installer.
  • This opens the following window; accept the license agreement.
  • Choose the setup type.

  • Select the folder for program files and then click Next.

  • Select the data-files folder and then click Next.
  • Select the Global Security Kit (GSKit) files folder and then click Next.
  • Select the log-files folder and then click Next.
  • Select the features to install in the following window.
  • The following window summarises the features selected for installation; click Install.

  • After installation, check the installed MQ version with the dspmqver command (open a command prompt and run the command).

In the upcoming posts, we will see how to install Message Broker on a Windows machine as well as on a Linux box.

Basic Linux Commands

$ -- Normal User Prompt
# -- System Admin Prompt

Commands
************

$ logname  (or) whoami  --------- To Check Present Working User
$ who ---------------------------- To Know Present Working Users
$ hostname --------To Know HostName(Machine Name) Of A Server
$ ifconfig ---------- To Find an IP
$ lscpu ------------- To check CPU architecture
$ su [super user name] ---- Switch to Super User
$ clear---------- To Clear Screen
$ exit ------------To exit From user Session
$ man [cmd]----------- Help Facility
$ date ----------- To Display date
$ cal [month] [year] ----- To Display Calendar
$ pwd ---------- To Know Present Working Directory
$ ls [flags]--------------To Display Directory Content
$ mkdir [dir name]---------- To Create New Directory
$ cd  [dir name] -------------- To Change Directory
$ mv [old name] [new name] -------- To Rename Directory
$ rmdir [Directory Name] ---------- To delete directory
$ mv [source path] [target path] ---- To Move directory with sub directories
$ cp [source path] [target path] ---- To Copy directory with sub directories
$ cat > [file name] ----------------- To Create  File
$ cp [source file] [target file] ------- To Copy A File
$ mv [Old File] [New File] --------- To Rename A File
$ rm [file name] --------------------- To Remove A File
$ locate [file name] ------------------- To Search A file In Whole File System
$ find [path] -name [file name] ---------------- To Search A file in Specified Path
$ gzip [file name] ---------------------- To Compress A File
$ gunzip [file name] ------------------- To Uncompress A File
$ chmod [777] [file name] ------------- To Change File Permissions
$ chown  [new owner name] [file name] --- To Change Owner
$ chgrp [new group name] [file name] ----- Change Group To A File
$ useradd -u [uid] -g [group name] -d [user home dir] -s [shell] [user name] --- Creating New User
$ groupadd -g [group id] [group name] ------ To Create New Group
$ userdel [user name] ------ To Delete  User
$ shutdown [time] -------- To Shutdown System At Particular Time
$ shutdown -r now ------------- Shutdown and reboot immediately
$ shutdown -h now -------------- Shutdown immediately and halt
$ df / df -k / df -g / df -h ---------- To Check Free Disk Space
$ du / du -s [dir name] ------- To Check Used Space
$ vi [file name] -------------- To Open a File with VI Editor
$ head -n [no of lines] [file name] ---- To Display First N Lines In A File
$ tail -n [no of line] [file name] --------  To Display Last N lines In A File
ODATA
Open Data Protocol (OData) is a RESTful data access protocol initially defined by Microsoft. Versions 1.0, 2.0, and 3.0 were released under the Microsoft Open Specification Promise. Version 4.0 was standardized at OASIS[1] and released in March 2014.[2]
The protocol enables the creation and consumption of REST APIs, which allow resources, identified using URLs and defined in a data model, to be published and edited by Web clients using simple HTTP messages. It shares some similarity with JDBC and ODBC but OData is not limited to relational databases.
OData is built on the AtomPub protocol and XML where the Atom structure is the envelope that contains the data returned from each OData request. An OData request uses the REST model for all requests. Each REST command is a POST, GET, PUT, PATCH, or DELETE HTTP request (mapping to CRUD) where the specifics of the command are in the URL.
  • GET: Get a collection of entities (as a feed document) or a single entity (as an entry document).
  • POST: Create a new entity from an entry document.
  • PUT: Update an existing entity with an entry document.
  • PATCH: Update an existing entity with a partial entry document.
  • DELETE: Remove an entity.

Any platform that provides support for HTTP and XML is enough to form HTTP requests to interact with AtomPub. The OData specification defines how AtomPub is used to standardize a typed, resource-oriented CRUD interface for manipulating data sources.

Implementing ODATA in IIB

Step 1:- Create a new Integration Project with the name ODATA1



Step 2:- Create a message flow with the name ODATA_TEST


Step 3:- Design a flow as shown below:-


Step 4:- Configure the nodes as follows:-

        MQInput Node Configuration :-
  1. Message Domain : JSON



          HTTP REQUEST NODE Configuration:
  1. Web Service URL :- http://services.odata.org/Experimental/OData/OData.svc/Products(1)?$format=json
  2. Message Domain :- JSON
  3. HTTP Method :- GET

Step 5:- Now we need to Test this flow:-
          Input:- {"Product":"0"} 
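The flow screenshots are not reproduced here. If a Compute node sits between the MQInput and HTTPRequest nodes, a sketch like the one below could build the request URL from the incoming JSON instead of hard-coding the product id. The module name, and the assumption that such a Compute node exists, are mine; LocalEnvironment.Destination.HTTP.RequestURL is the standard override honoured by the HTTPRequest node, and the Compute node's Compute mode must include LocalEnvironment for it to take effect.

-- Hypothetical Compute node placed before the HTTPRequest node.
CREATE COMPUTE MODULE ODATA_TEST_BuildRequest
    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
        -- Product id taken from the incoming JSON message, e.g. {"Product":"0"} as above
        DECLARE productId CHARACTER CAST(InputRoot.JSON.Data.Product AS CHARACTER);

        -- Override the URL that the HTTPRequest node will call
        SET OutputLocalEnvironment.Destination.HTTP.RequestURL =
            'http://services.odata.org/Experimental/OData/OData.svc/Products('
            || productId || ')?$format=json';

        -- Pass the original message through unchanged
        SET OutputRoot = InputRoot;
        RETURN TRUE;
    END;
END MODULE;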





Simple flow in Message Broker using JavaComputeNode

*************************************************************************

Step 1:- Create a flow using MQInput, JavaCompute and MQOutput nodes.
Provide a queue name for the MQInput node and set its Message Domain to XMLNSC.

Similarly, provide a queue name for the MQOutput node as well.

Step 2:- Double-click the JavaCompute node, choose the Filter Java class template, and replace the generated code with the following:

import java.util.List;

import com.ibm.broker.javacompute.MbJavaComputeNode;
import com.ibm.broker.plugin.MbElement;
import com.ibm.broker.plugin.MbException;
import com.ibm.broker.plugin.MbMessage;
import com.ibm.broker.plugin.MbMessageAssembly;
import com.ibm.broker.plugin.MbOutputTerminal;

public class SIMPLE_JCN_JavaCompute extends MbJavaComputeNode {

    public void evaluate(MbMessageAssembly inAssembly) throws MbException {
        MbOutputTerminal out = getOutputTerminal("out");

        MbMessage inMessage = inAssembly.getMessage();
        // Create the output message as a modifiable copy of the input message
        MbMessage outMessage = new MbMessage(inMessage);
        MbMessageAssembly outAssembly = new MbMessageAssembly(inAssembly, outMessage);

        // -------- INPUT PARSING --------
        MbElement inRoot    = inMessage.getRootElement();    // MESSAGE
        MbElement inXmlNsc  = inRoot.getLastChild();          // XMLNSC
        MbElement inDetails = inXmlNsc.getLastChild();        // <DETAILS></DETAILS>

        // All <EMP> elements under <DETAILS>
        List empList = inDetails.getAllElementsByPath("*");

        // -------- CREATING THE OUTPUT STRUCTURE --------
        MbElement outRoot   = outMessage.getRootElement();    // MESSAGE
        MbElement outXmlNsc = outRoot.getLastChild();          // XMLNSC
        // The copied <DETAILS> tree is not wanted in the output, so detach it
        outXmlNsc.getLastChild().detach();
        MbElement outDetails = outXmlNsc.createElementAsLastChild(
                MbElement.TYPE_NAME, "EMP_DETAILS", null);      // <EMP_DETAILS></EMP_DETAILS>

        // Copy and rename the child elements of every <EMP>
        for (int i = 0; i < empList.size(); i++) {
            MbElement emp = (MbElement) empList.get(i);

            String ename    = emp.getFirstChild().getValueAsString();
            String location = emp.getFirstChild().getNextSibling().getValueAsString();
            String batch    = emp.getFirstChild().getNextSibling().getNextSibling().getValueAsString();
            String company  = emp.getLastChild().getValueAsString();

            MbElement outEmp = outDetails.createElementAsLastChild(
                    MbElement.TYPE_NAME, "EMP", null);          // <EMP></EMP>
            outEmp.createElementAsLastChild(MbElement.TYPE_NAME_VALUE, "EMPNAME", ename);
            outEmp.createElementAsLastChild(MbElement.TYPE_NAME_VALUE, "MSS_LOCATION", location);
            outEmp.createElementAsLastChild(MbElement.TYPE_NAME_VALUE, "ITG_BATCH", batch);
            outEmp.createElementAsLastChild(MbElement.TYPE_NAME_VALUE, "PAR_COMPANY", company);
        }

        // Propagate the transformed message to the 'out' terminal
        out.propagate(outAssembly);
    }
}

Step 3:- Test the flow using the test cases below:-

************************************************************************
Input:-
<?xml version="1.0"?>
<DETAILS>
    <EMP>
        <ENAME>RAJESH</ENAME>
        <LOCATION>MHEIGHTS</LOCATION>
        <BATCH>35</BATCH>
        <COMPANY>CTS</COMPANY>
    </EMP>
</DETAILS>
************************************************************************

Expected Output:-
<?xml version="1.0"?>
<EMP_DETAILS>
    <EMP>
        <EMPNAME>RAJESH</EMPNAME>
        <MSS_LOCATION>MHEIGHTS</MSS_LOCATION>
        <ITG_BATCH>35</ITG_BATCH>
        <PAR_COMPANY>CTS</PAR_COMPANY>
    </EMP>
</EMP_DETAILS>

 ************************************************************************

Thursday, 4 December 2014

Using TryCatch Nodes in WMB


Also, here are a few details of the ExceptionList tree. This tree has the correlation name ExceptionList; it is initially empty and is only populated if an exception occurs during message flow processing.
We can access the ExceptionList tree in Compute, Database, and Filter nodes, and we can update it in a Compute node. We must use the appropriate correlation name: ExceptionList for a Database or Filter node, and InputExceptionList for a Compute node.

The example below shows the usage of the TryCatch and Throw nodes, and also how to work with the ExceptionList in a Compute node using the InputExceptionList correlation name.

Message Flow elaborating Try Catch & ExceptionList processing 
Note : "The Form Error Details" compute node should have following property:
Basic Tab -> Compute mode -> Exception and Message.
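The ESQL behind "Form Error Details" is not shown in the post, so the following is only a sketch of one common way to build such a message: walk down the nested ExceptionList and keep the innermost Number, Text and Label. Depending on how the exception was raised, the interesting values may instead sit in the Insert children, so treat the field choices and element names as assumptions.

-- Hypothetical sketch of the 'Form Error Details' Compute node
-- (Compute mode 'Exception and Message' makes InputExceptionList available).
CREATE COMPUTE MODULE Sample_FormErrorDetails
    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
        DECLARE errNumber INTEGER;
        DECLARE errText   CHARACTER;
        DECLARE errLabel  CHARACTER;

        -- Walk down the nested exceptions, remembering the innermost details
        DECLARE ptrEx REFERENCE TO InputExceptionList.*[1];
        WHILE LASTMOVE(ptrEx) DO
            IF ptrEx.Number IS NOT NULL THEN
                SET errNumber = ptrEx.Number;
                SET errText   = ptrEx.Text;
                SET errLabel  = ptrEx.Label;
            END IF;
            MOVE ptrEx LASTCHILD;
        END WHILE;

        -- Carry the headers across for the MQOutput node
        SET OutputRoot.Properties = InputRoot.Properties;
        SET OutputRoot.MQMD = InputRoot.MQMD;

        -- Shape the ErrorDetails output shown in the examples below
        SET OutputRoot.XMLNSC.ErrorDetails.Text = errText;
        SET OutputRoot.XMLNSC.ErrorDetails.Number = errNumber;
        SET OutputRoot.XMLNSC.ErrorDetails."FlowName.NodeName" = errLabel;
        RETURN TRUE;
    END;
END MODULE;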

If input for IN (MQInput) Node is :
<Request><No>0</No></Request>
Output in Catch (MQOutput) Node is :
<ErrorDetails>
 <Text>Throwing exception from throw node.</Text>
 <Number>1001</Number>
 <FlowName.NodeName>Sample.Throw</FlowName.NodeName>
</ErrorDetails>


If input for IN (MQInput) Node is :
<Request><No>10</No></Request>
Output in Catch (MQOutput) Node is :
<ErrorDetails>
 <Number>500</Number>
 <FlowName.NodeName>Sample.Validation</FlowName.NodeName>
 <Text>Request.No is Invalid</Text>
</ErrorDetails>

 MQGet in WMB

The MQGet node can be used anywhere in a message flow to retrieve a message stored in an intermediate state; afterwards, in another leg of the flow, we can combine this temporary result into the final output (getting the message by correlation/message id).

The following example illustrates how an intermediate message can be combined with an incoming message to form the final message.


  • First Flow : IN (MQInput) , TEMP (MQOutput)
  • Second Flow :  IN2 (MQInput), TEMP (MQGet), OUT (MQOutput)

Here the Compute node copies the message id of the incoming message to the correlation id of the output message, so that we can later retrieve this intermediate message by correlation id.
SET OutputRoot.MQMD.CorrelId  =  InputRoot.MQMD.MsgId;
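For completeness, a minimal sketch of the whole Compute node in the first flow could look like this (assuming the message body is simply passed through unchanged; the module name is illustrative):

-- First flow: copy the message and make it retrievable by correlation id
CREATE COMPUTE MODULE FirstFlow_SetCorrelId
    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
        -- Copy the whole incoming message (headers and body)
        SET OutputRoot = InputRoot;
        -- The intermediate message can later be fetched by this CorrelId
        SET OutputRoot.MQMD.CorrelId = InputRoot.MQMD.MsgId;
        RETURN TRUE;
    END;
END MODULE;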



Three different inputs for the first flow (put on the IN queue):


<Project>PD1</Project> Say MsgId is (ABCXX)
<Project>PD2</Project> Say MsgId is (ABCYY)
<Project>PD3</Project> Say MsgId is (ABCZZ)


The same data is put on the TEMP (output) queue after its MsgId is copied to the CorrelId.
Input to the second flow (put on the IN2 queue):

<Employee>
<NAME>HARISH</NAME>
<ProjectDetails>PD</ProjectDetails>
</Employee>
Note: the CorrelId passed is ABCXX (the MsgId of PD1)

Now look at the properties of the MQGet (TEMP) node: Get by Correlation ID is checked, which means that whatever message we put on the TEMP queue can be retrieved by its CorrelId.



Next, the output tree is formed from the input and result trees. The content of Employee.ProjectDetails is replaced by the content of Project (placed on the TEMP queue by the first flow), as sketched below.
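Depending on the MQGet node's 'Output data location' and 'Result data location' properties, this combination can be done by the node itself. If a Compute node after the MQGet node does the work instead, a sketch could look like the following; it assumes the MQGet node copies the incoming Employee message into its output tree and places the retrieved message under XMLNSC.Result, both of which are configuration assumptions made only for this sketch.

-- Hypothetical Compute node wired after the MQGet (TEMP) node.
CREATE COMPUTE MODULE SecondFlow_FormOutput
    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
        -- Carry the headers across for the MQOutput node
        SET OutputRoot.Properties = InputRoot.Properties;
        SET OutputRoot.MQMD = InputRoot.MQMD;

        -- Keep the employee name from the incoming message
        SET OutputRoot.XMLNSC.Employee.NAME = InputRoot.XMLNSC.Employee.NAME;
        -- Replace ProjectDetails with the Project value retrieved from TEMP
        SET OutputRoot.XMLNSC.Employee.ProjectDetails = InputRoot.XMLNSC.Result.Project;
        RETURN TRUE;
    END;
END MODULE;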



The final output will be:
<Employee><NAME>HARISH</NAME><ProjectDetails>PD1</ProjectDetails></Employee>

Error Handling in WebSphere Message Broker

When designing a message flow in Message Broker, we often concentrate on the main path rather than on error handling. In my experience, error-handling techniques and design principles are at least as important as designing the best path.


Design Consideration

  • Connect the Failure terminal of any node to a sequence of nodes that processes the node's internal exception (the Failure flow).
  • Connect the Catch terminal of the input node or a Try Catch node to a sequence of nodes that processes exceptions that are generated beyond it (the Catch flow).
  • Insert one or more Try Catch nodes at specific points in the message flow to catch and process exceptions that are generated by the flow that is connected to the Try terminal.
  • Ensure that all messages received by an MQInput node are either all processed within a transaction or all processed outside a transaction.

Understanding the Flow Sequence


  • When an exception is detected within a node, the message and the exception information are propagated to the node's Failure terminal (diagnostic information is available in the ExceptionList).
  • If the node does not have a Failure terminal, or if it is not connected, the broker throws an exception and returns control to the closest previous node that can process it. This node can be a TryCatch node (Root and LocalEnvironment are reset to the values they had before) or the MQInput node.
  • If the Catch terminal of the MQInput node is connected, the message is propagated there (ExceptionList entries are available; Root and LocalEnvironment are reset to the values they had before). Otherwise, if it is not connected, the transactionality of the message is considered.
  • If the message is not transactional, it is discarded. If it is transactional, the message is returned to the input queue and read again, whereupon the backout count is checked.
  • If the backout count has not exceeded its threshold, the message is propagated to the output terminal of the MQInput node for reprocessing. Otherwise, if the threshold is exceeded and the Failure terminal of the MQInput node is connected, the message is propagated along that path (Root is available but the ExceptionList is empty).
  • If the Failure terminal of the MQInput node is not connected, the message is put on an available queue, in order of preference: the backout queue, if one is defined; otherwise the dead-letter queue, if one is defined. If the message cannot be put on either of these queues, it remains on the input queue in a retry loop until the target queue clears. (The broker also records the error situation by writing it to the local error log.)

Conclusion:- If the flow hits an error in the Compute node, control is routed back to the MQInput node, which checks whether its Catch terminal is connected.
  1. If the Catch terminal is connected, the message is routed to that terminal and lands on the corresponding queue.
  2. If it is not connected, the Failure terminal is checked.
  3. If the Failure terminal is connected, the message is routed to that terminal and lands on the corresponding queue.
  4. If it is not connected, the transactionality property is checked:
  • If the message is not transactional, it is discarded.
  • If the message is transactional, it is returned to the input queue and read again, whereupon the backout count is checked.