Tuesday, 26 September 2017

Best Practices to be followed in Message Broker Development Perspective

1. MESSAGE FLOW DEVELOPMENT STANDARDS

           - Below are best practices for the message flow development stage, including how to avoid message flow implementations that can cause performance problems.
  Separate configuration information from business logic by externalizing it to a file or database. Done properly, this need not hurt performance: reading a configuration or parameter file should be a one-time activity, performed when the first instance of a node is created or when the first message is processed, rather than a lookup repeated for each message. Because Message Broker is more CPU-oriented than I/O-oriented, it is usually best to avoid repeated I/O operations against files or databases wherever possible.
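A minimal sketch of the one-time read pattern (the module, table, and column names are illustrative assumptions, not from the source): configuration is cached in SHARED variables, so only the first message pays the database cost.

DECLARE configLoaded SHARED BOOLEAN FALSE;
DECLARE configCache  SHARED ROW;

CREATE COMPUTE MODULE LoadConfigOnce
    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
        IF NOT configLoaded THEN
            -- ATOMIC serializes flow instances while the cache is built
            BEGIN ATOMIC
                IF NOT configLoaded THEN
                    SET configCache.Entry[] =
                        (SELECT C.NAME, C.VALUE FROM Database.APP_CONFIG AS C);
                    SET configLoaded = TRUE;
                END IF;
            END;
        END IF;
        -- ... use configCache.Entry[] in per-message processing ...
        RETURN TRUE;
    END;
END MODULE;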

  Avoid overuse of Compute and Java Compute nodes, because the tree copying they perform is processor-heavy; instead, put reusable logic into subflows. Using subflows inserts no additional nodes into the message flow.

  For efficient code reuse, consider using ESQL modules and schemas rather than subflows. Adding extra Compute nodes to perform initialization and finalization for the processing done in a subflow results in extra message tree copying, which is relatively expensive because it copies a structured object.
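As a minimal sketch of schema-level reuse (the schema and procedure names are illustrative assumptions): logic placed at broker-schema level can be called from any Compute node whose ESQL file declares a PATH to the schema, with none of the extra tree copies a subflow would add.

BROKER SCHEMA com.example.common

-- Reusable procedure, callable from any module via PATH com.example.common;
CREATE PROCEDURE auditStamp(IN flowName CHARACTER, OUT stamp CHARACTER)
BEGIN
    SET stamp = flowName || '-' ||
                CAST(CURRENT_TIMESTAMP AS CHARACTER FORMAT 'yyyyMMddHHmmss');
END;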

  Avoid consecutive short message flows, in which the output of one message flow is immediately processed by another message flow rather than being read by an external application. Consecutive short flows force additional parsing and serialization of messages, which is likely to be expensive. The only exception to this is the use of the Aggregation nodes.

  It is important to think about the structure of your message flows and how they will process incoming data. If a unique message flow is produced for each different type of message, it is referred to as a specific flow; if several message flows each process a different group of messages, they are called generic flows.

There are advantages to both the specific and generic approaches. From a message throughput point of view it is better to implement specific flows. From a management and operation point of view it is better to use generic flows. Which approach you choose will depend on what is important in your own situation.

  Maximize the use of the built-in parsers. It is better to attach more than one wire format to a single logical message set model and let the Message Broker writers convert the data when it is written to the wire than to use many lines of ESQL or Java to copy field values from one logical message model to another. This often requires more time and effort when constructing the model, but it saves coding effort in return and provides a smaller, longer-lasting runtime memory footprint.

  Avoid parsing costs when routing: a message flow that must look at a field in the body of an incoming message (possibly several megabytes in size) simply to make a routing decision pays the full cost of parsing that body.

A technique to reduce this cost is to have the application that creates the message copy the field needed for routing into a header within the message, say an MQRFH2 header for an MQ message or a JMS property for a JMS message. It would then no longer be necessary to parse the message body, potentially saving a large amount of processing effort. The MQRFH2 header or JMS Properties folder still has to be parsed, but this is a much smaller amount of data, and the parsers for these headers are more efficient than a general message body parser because the header structure is known.
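A minimal sketch of the receiving side (the usr-folder field name and module name are illustrative assumptions): a Filter node routes on the header value alone, so the large body is never parsed in this flow.

CREATE FILTER MODULE RouteOnHeader
    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
        -- Only the MQRFH2 header is parsed; the body stays untouched
        RETURN Root.MQRFH2.usr.routeKey = 'PRIORITY';
    END;
END MODULE;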

A second approach to not parsing data is to not send it in the first place. Where two applications communicate consider sending only the changed data rather than sending the full message. This requires additional complexity in the receiving application but could potentially save significant parsing processing dependent on the situation. This technique also has the benefit of reducing the amount of data to be transmitted across the network.

  Use opaque parsing (XMLNS and XMLNSC domains only) where you do not need to access the elements of the sub tree, for example you need to copy a portion of the input tree to the output message but may not care about the contents in this particular message flow. You accept the content in the sub folder and have no need to validate or process it in any way.

- Opaque parsing is a technique that allows a whole XML sub tree to be placed in the message tree as a single element. The entry in the message tree is the bit stream of the original input message. This technique has two benefits:
  - It reduces the size of the message tree, since the XML sub tree is not expanded into individual elements.
  - It reduces the cost of parsing, since less of the input message is expanded as individual elements and added to the message tree.

        For example: the element <p56:requestAssessorAvailability> in the ESQL snippet below is large, with many child elements. In this case the cost of populating the message tree would be high. Because no part of <p56:requestAssessorAvailability> is needed in the message flow, we can parse this element opaquely.

CREATE LASTCHILD OF OutputRoot
    DOMAIN('XMLNS')
    PARSE(BitStream
        ENCODING InputRoot.Properties.Encoding
        CCSID InputRoot.Properties.CodedCharSetId
        FORMAT 'XMLNS_OPAQUE'
        TYPE 'p56:requestAssessorAvailability');
             
                 Note: It is not currently possible to use the CREATE statement to opaquely parse a message in the XMLNSC domain.
  Use the compact parsers (XMLNSC, MRM XML, and RFH2C). The compact parsers discard comments and white space in the input message; depending on the contents of your messages, this may or may not have an effect. By comparison, the other parsers include all the data from the original message, so white space and comments are inserted into the message tree.


  Avoid using ResetContentDescriptor (RCD) nodes. An RCD node changes the message domain by re-parsing the complete message tree, which is both memory- and CPU-intensive. A combination of IF statements, CREATE ... PARSE statements, and the ESQL ASBITSTREAM function can be used to eliminate RCD nodes and multiple Compute/Filter nodes.
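A minimal sketch of the ASBITSTREAM alternative (domains chosen for illustration): serialize the current body, then parse the bit stream into the target domain with CREATE ... PARSE, instead of routing through an RCD node.

-- Serialize the current XMLNS body back to a bit stream
DECLARE bodyBlob BLOB ASBITSTREAM(InputRoot.XMLNS
        ENCODING InputRoot.Properties.Encoding
        CCSID InputRoot.Properties.CodedCharSetId);
SET OutputRoot.Properties = InputRoot.Properties;
SET OutputRoot.MQMD = InputRoot.MQMD;
-- Re-parse the bit stream in the XMLNSC domain
CREATE LASTCHILD OF OutputRoot DOMAIN('XMLNSC')
        PARSE(bodyBlob
        ENCODING InputRoot.Properties.Encoding
        CCSID InputRoot.Properties.CodedCharSetId);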

  Do not use Trace nodes in production environments. The ${Root} expression is expensive because it forces parsing of the complete message tree, and this happens even if the trace destination is not active.

  Wherever possible, use user exits and redirect audit/logging information appropriately. The user exit feature gives you the flexibility to activate and deactivate them dynamically during message processing.

  When a message has to be written to multiple destinations, prefer a destination list over adding more output nodes.

  If XMLT nodes are used, take advantage of stylesheet caching wherever possible.

  Ensure the transaction mode is set to 'No' for input nodes and 'Automatic' for output nodes; change these settings only when processing must be done under a transaction.

  Always provide an exception-handling mechanism in your message flows rather than relying on the default broker exception handler. The default handler can block message consumption when processing of a single poisoned message fails.

  If a flow contains database-manipulating nodes, promote the data source name property, as it might not be the same across environments (development, test, production, and so on). Promoting the property lets you change it at flow level rather than at each node during deployment to the various environments. The same applies to other node properties, such as the stylesheet name on an XMLT node.

  Revisit all Java nodes and ensure that clearMessage() is called on every MbMessage object, especially in the finally block. MbMessage objects are used to create the output message tree, environment tree, local environment tree, exception list tree, and so on; wherever message trees are created, clear them in a try-finally block.

  Each message flow instance runs on a thread. For processing integrity it is not good to spawn additional threads from message flow nodes. If a business requirement demands it, all threads should be maintained by the node itself and released when the node is deleted, ensuring that no thread blocks during message processing.
       



2. PUBLISH/SUBSCRIBE BEST PRACTICES

  When using pub/sub, the number of subscribers per topic affects message throughput, because it determines how many output messages must be written per publication on a topic. The messages are written in a single unit of work, so the broker queue manager log needs to be tuned when using persistent messages. You should also consider message batching, which is achieved through the Commit Count property; its value is specified on the message flow configuration panel in the BAR file editor.

  Publication nodes can increase use of the broker database, especially for retained publications, because the broker stores retained publications in its database. Be judicious about whether publications really need to be retained.

  Details of each subscription registration and deregistration are stored in a broker database table. If the level of dynamic subscribing and unsubscribing by applications is high, there will be a correspondingly high level of broker database operations. All I/O and database operations are expensive, so design the solution to minimize these operations, or tune the database for high performance.

  When designing a publish/subscribe model, consider content-based routing over topic-based routing. With content-based routing, an SQL expression is evaluated against the contents of a message to decide whether a subscriber really needs to receive it, which reduces the number of messages sent from the broker to subscribers. With topic-based routing, a subscriber receives every message on the registered topic, and may discard many of them based on content. Content-based routing therefore helps subscribers receive only the messages they actually need.

  Where the number of subscribers matching on a topic is high (in the hundreds or thousands) this may result in a message rate and message volume which is beyond the capabilities of a single queue manager. This will also depend on the publication rate. In this case consider using a collective of brokers in order to distribute load. The subscribers can then be allocated across the members of the collective rather than them all trying to use the same broker.

  Use of collective brokers also improves availability of the Publish/Subscribe service. In the event of failure on any one broker, subscribers would need to connect to another broker and re-subscribe. A publisher may also need to reconnect to a broker in the collective.






3. DATABASE BEST PRACTICES FROM IIB PERSPECTIVE 


  I/O operations and database operations are expensive. Wherever possible, minimize the number of such operations in the solution, and build caches where appropriate. The decision is purely driven by the business scenario; excessive cache building is also not recommended.

  Tune the application heap size and the application control heap size. It is not possible to recommend a fixed value, as it depends on the business conditions and the solution implementation. To determine a value, issue the largest expected message transaction against the database (as per the business requirement) and monitor the heap usage.

  Tune the bufferpool size if the application has the ability to work with large objects such as BLOBs, CLOBs and VarChars (as these are accessed using the memory area of the database).

  Ensure that locklist and maxlocks are large enough, or else reduce the unit of work by issuing commit statements more often.

  Use indexes wherever possible to reduce the contention between message flow instances and applications.

  Where a message flow only reads data from a table, consider using a read only view of that table. This reduces the amount of locking within the database manager and reduces the processing cost of the read.

  If database operations are unavoidable then at least reduce them by:
Making the database local to the system where message broker resides.
Having high buffer sizes.
Using fast disks for data and logs.

  When using the SELECT statement, make the WHERE clauses efficient to minimize the amount of data retrieved from a database.

  When possible, use stored procedures, as they are already compiled and stored in the database. This increases the speed of data retrieval.

  When possible, avoid complex joins as they are expensive due to the processing time consumption.





4. LARGE FILE HANDLING BEST PRACTICES

  Manipulation of a large message tree can demand a great deal of storage. If you design a message flow that handles large messages made up of repeating structures, you can code specific ESQL statements that reduce the storage load on the broker. These statements cause the broker to perform limited parsing of the message and to keep in storage only the part of the message tree that reflects a single record at a time.

  Copy the body of the input message as a bit stream to a special folder in the output message. This creates a modifiable copy of the input message that is not parsed and which therefore uses a minimum amount of memory.

  Avoid any inspection of the input message; this avoids the need to parse the message.
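The record-at-a-time technique described above can be sketched as follows (the Orders/Order structure, terminal name, and folder names are illustrative assumptions): the body is copied to a modifiable Environment folder, each record is propagated as its own output message, and processed records are deleted so that roughly one record's tree is held in storage at a time.

-- Keep a modifiable copy of the body outside OutputRoot
SET Environment.Variables.SourceMessage = InputRoot.XMLNSC;
WHILE EXISTS(Environment.Variables.SourceMessage.Orders.Order[]) DO
    SET OutputRoot.Properties = InputRoot.Properties;
    SET OutputRoot.MQMD = InputRoot.MQMD;
    -- Emit one record per output message
    SET OutputRoot.XMLNSC.Order = Environment.Variables.SourceMessage.Orders.Order[1];
    PROPAGATE TO TERMINAL 'out';
    -- Drop the record just processed to release its storage
    DELETE FIELD Environment.Variables.SourceMessage.Orders.Order[1];
END WHILE;
RETURN FALSE; -- everything has already been propagated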

Refer to the following IBM Knowledge Center link for more information on manipulating a large message tree:
http://www-01.ibm.com/support/knowledgecenter/api/content/SSMKHH_9.0.0/com.ibm.etools.mft.doc/ac20702_.htm

5. ESQL CODING STANDARDS AND BEST PRACTICES

        - Below are best programming practices for ESQL development in message flows, including developing reusable code and writing optimized ESQL, purely from a performance-improvement perspective.
  Array subscripts [ ] are expensive in terms of performance, because the subscript is evaluated dynamically at run time. By avoiding array subscripts wherever possible, you can improve the performance of your ESQL code. Use reference variables instead, which maintain a pointer into the array and can be reused; for example:
DECLARE myref REFERENCE TO InputRoot.XML.Invoice.Purchases.Item[1];
-- Continue processing for each item in the array
WHILE LASTMOVE(myref)=TRUE DO
   -- Add 1 to each item in the array
   SET myref = myref + 1;
   -- Do some processing
   -- Move the dynamic reference to the next item in the array
   MOVE myref NEXTSIBLING;
END WHILE;


  Avoid the use of CARDINALITY in a loop; for example:

WHILE ( I < CARDINALITY(InputRoot.MRM.A.B.C[]) ) DO

The CARDINALITY function must be evaluated each time the loop is traversed, which is costly in performance terms. This is particularly true with large arrays because the loop is repeated more frequently. It is more efficient to determine the size of the array before the WHILE loop (unless it changes in the loop) so that it is evaluated only once; for example:

SET ARRAY_SIZE = CARDINALITY(InputRoot.MRM.A.B.C[]);
WHILE ( I < ARRAY_SIZE ) DO

  Reduce the number of DECLARE statements (and therefore the performance cost) by declaring a variable and setting its initial value within a single statement. Alternatively, you can declare multiple variables of the same data type within a single ESQL statement rather than in multiple statements. This technique also helps to reduce memory usage.
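A short illustration of both techniques (variable names are illustrative): declaration combined with initialization, and several variables of one type declared in a single statement.

-- One statement instead of DECLARE followed by SET:
DECLARE i, j INTEGER 0;
DECLARE status CHARACTER 'NEW';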

  The EVAL statement is sometimes used when there is a requirement to dynamically determine correlation names. However, it is expensive in terms of CPU use, because it involves the statement being run twice. The first time it runs, the component parts are determined, in order to construct the statement that will be run; then the statement that has been constructed is run.


  Avoid the use of the PASSTHRU statement with a CALL statement to invoke a stored procedure. As an alternative, you can use the CREATE PROCEDURE ... EXTERNAL ... and CALL ... commands.

  When using the PASSTHRU statement use host variables (parameter markers) for data values rather than coding literal values. This allows the dynamic SQL statement to be reused by the dynamic SQL statement processor within database. An SQL PREPARE on a dynamic statement is an expensive operation in performance terms, so it is more efficient to run this only once and then EXECUTE the statement repeatedly, rather than to PREPARE and EXECUTE every time.
For example, the following statement contains two literal data values, 100 and 'IBM':
PASSTHRU('UPDATE SHAREPRICES AS SP SET Price = 100 WHERE SP.COMPANY = ''IBM''');

This statement is effective when the price is 100 and the company is IBM. When either the Price or Company changes, another statement is required, with another SQL PREPARE statement, which impacts performance.

However, by using the following statement, Price and Company can change without requiring another statement or another PREPARE:

PASSTHRU('UPDATE SHAREPRICES AS SP SET Price = ? WHERE SP.COMPANY = ?',
InputRoot.XML.Message.Price, InputRoot.XML.Message.Company);


  Use reference variables to avoid repeating long correlation names such as InputRoot.XMLNSC.A.B.C.D.E. Declare a reference pointer, as shown in the following example:

DECLARE refPtr REFERENCE TO InputRoot.XMLNSC.A.B.C.D;

To access element E of the message tree, use the correlation name refPtr.E.

You can use REFERENCE and MOVE statements to help reduce the amount of navigation within the message tree, which improves performance. This technique can be useful when you are constructing a large number of SET or CREATE statements; rather than navigating to the same branch in the tree, you can use a REFERENCE variable to establish a pointer to the branch and then use the MOVE statement to process one field at a time.

  String manipulation functions used within ESQL can be CPU-intensive; functions such as LENGTH, SUBSTRING, and RTRIM must access individual bytes in the message tree. These functions are expensive in performance terms, so minimizing their use helps to improve performance. Use the REPLACE function in preference to a complete re-parse. Where possible, also avoid executing the same concatenations repeatedly by storing intermediate results in variables.
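A small sketch of storing an intermediate result (field names are illustrative): the concatenation is computed once and reused, instead of being rebuilt in every expression that needs it.

DECLARE fullName CHARACTER
    TRIM(InputRoot.XMLNSC.Msg.First) || ' ' || TRIM(InputRoot.XMLNSC.Msg.Last);
-- Reuse the stored result rather than repeating TRIM and ||
SET OutputRoot.XMLNSC.Out.Greeting = 'Hello ' || fullName;
SET OutputRoot.XMLNSC.Out.Audit.Name = fullName;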

  Avoid nested IF statements; use ELSEIF or CASE WHEN clauses instead to get a quicker drop-out.

  Use the FORMAT clause of the CAST function where possible to perform date and time formatting.
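For example, a timestamp can be formatted, and a string parsed back into a DATE, directly with the FORMAT clause instead of assembling the string by hand with SUBSTRING and concatenation:

DECLARE ts CHARACTER CAST(CURRENT_TIMESTAMP AS CHARACTER FORMAT 'yyyy-MM-dd HH:mm:ss');
DECLARE d  DATE      CAST('2017-09-26' AS DATE FORMAT 'yyyy-MM-dd');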

  Performance is affected when the SET statement is used to create many fields, because navigating over all the fields that precede the specified field is costly, as shown in the following example:

SET OutputRoot.XMLNSC.TestCase.StructureA.ParentA.field1 = '1';
SET OutputRoot.XMLNSC.TestCase.StructureA.ParentA.field2 = '2';
SET OutputRoot.XMLNSC.TestCase.StructureA.ParentA.field3 = '3';
SET OutputRoot.XMLNSC.TestCase.StructureA.ParentA.field4 = '4';

If you are accessing or creating consecutive fields or records, you can solve this problem by using reference variables for example:

SET OutputRoot.XMLNSC.TestCase.StructureA.ParentA.field1 = '1';
DECLARE outRef REFERENCE TO OutputRoot.XMLNSC.TestCase.StructureA.ParentA;
SET outRef.field2 = '2';
SET outRef.field3 = '3';
SET outRef.field4 = '4';
SET outRef.field5 = '5';
When referencing repeating input message tree fields, you can use the following ESQL:
DECLARE myChar CHAR;
DECLARE inputRef REFERENCE TO InputRoot.MRM.myParent.myRepeatingRecord[1];
WHILE LASTMOVE(inputRef) DO
SET myChar = inputRef;
MOVE inputRef NEXTSIBLING NAME 'myRepeatingRecord';
END WHILE;

  Wherever possible, avoid using the LocalEnvironment tree and use the Environment tree to store information while the message flow processes the message. Only one copy of the Environment tree exists across all the nodes of a message flow instance, whereas the LocalEnvironment tree is copied at every node to which the message is propagated.

  If multiple output messages must be sent for the same input message, use the PROPAGATE statement in ESQL. This reclaims the storage of the output message tree after each propagation, reducing the memory utilization of the message flow.
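A minimal sketch (the Notice structure is an illustrative assumption) producing two output messages from one input; the default PROPAGATE behavior reclaims the output tree storage after each propagation:

SET OutputRoot.Properties = InputRoot.Properties;
SET OutputRoot.MQMD = InputRoot.MQMD;
SET OutputRoot.XMLNSC.Notice.Type = 'FIRST';
PROPAGATE DELETE NONE;  -- keep the output tree so it can be reused
SET OutputRoot.XMLNSC.Notice.Type = 'SECOND';
PROPAGATE;              -- default behavior reclaims the output tree storage
RETURN FALSE;           -- nothing further to propagate from this node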

  Use ROW and LIST constructors to create lists of fields, and initialize variables at the point of declaration. Wherever possible, reduce the number of ESQL statements; this improves performance and reduces the number of internal memory objects created and parsed.
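For example (field names are illustrative), a ROW constructor builds several sibling fields in one statement, and a LIST constructor populates a repeating field in one statement:

SET OutputRoot.XMLNSC.Data.Customer = ROW('Jane' AS FirstName, 'Doe' AS LastName);
SET OutputRoot.XMLNSC.Data.Colour[] = LIST{'yellow', 'green', 'blue'};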

  Limit shared variables to a small number of entries (tens rather than hundreds or thousands) when using an array of ROW variables, or order the entries by probability of use; the current implementation is not indexed, so performance can degrade with larger numbers of entries.

  Write ESQL in as few statements as reasonably possible. Fewer statements generally mean less memory and CPU usage at run time.

  Message processing throughput in ESQL is higher than in Java (Java is at least 10%-20% slower), because Java transformations typically navigate the message using XPath, and each XPath expression searches the XML document from the root, which slows the message flow. ESQL, by contrast, uses field references to navigate directly to fields in the XML.

Monday, 25 September 2017

Error Handling in WebSphere Message Broker


When we design a message flow, we often do not give enough emphasis to error handling. In my experience, these error-handling techniques and design principles are more crucial than designing the happy path.
So here are a few details on how to handle the unhappy path in WMB V6.0, with information about message flow error behavior.


Design Consideration

  • Connect the Failure terminal of any node to a sequence of nodes that processes the node's internal exception (the Failure flow).
  • Connect the Catch terminal of the input node or a TryCatch node to a sequence of nodes that processes exceptions that are generated beyond it (the Catch flow).
  • Insert one or more TryCatch nodes at specific points in the message flow to catch and process exceptions that are generated by the flow that is connected to the Try terminal.
  • Ensure that the messages received by an MQInput node are either all processed within a transaction or all processed outside of one.


Understanding the Flow Sequence

  • When an exception is detected within a node, the message and the exception information are propagated to the node's Failure terminal (diagnostic information is available in the ExceptionList).
  • If the node does not have a Failure terminal, or it is not connected, the broker throws an exception and returns control to the closest previous node that can process it. This node can be a TryCatch node (Root and LocalEnvironment are reset to the values they had before) or the MQInput node.
  • If the Catch terminal of the MQInput node is connected, the message is propagated there (ExceptionList entries are available; Root and LocalEnvironment are reset to the values they had before). Otherwise, if it is not connected, the transactionality of the message is considered.
  • If the message is not transactional, it is discarded. If it is transactional, it is returned to the input queue and read again, whereupon the backout count is checked.
  • If the backout count has not exceeded its threshold, the message is propagated to the output terminal of the MQInput node for reprocessing. If the threshold is exceeded and the Failure terminal of the MQInput node is connected, the message is propagated along that path (Root is available, but the ExceptionList is empty).
  • If the Failure terminal of the MQInput node is not connected, the message is put on an available queue, in order of preference: the backout queue, if one is defined; otherwise the dead-letter queue, if one is defined. If the message cannot be put on either of these queues, it remains on the input queue in a retry loop until the target queue clears. (The broker also records the error situation by writing errors to the local error log.)


Event Monitoring using MQSI Commands


Step 1 :- Create a flow like this


Do not specify anything in the Monitoring tab for the nodes, because here we are using commands:

mqsicreateconfigurableservice MB -c MonitoringProfiles -o Sampleprofile

mqsichangeproperties MB -c MonitoringProfiles -o Sampleprofile -p 


Step 2:- In the Compute node, the code should be like this:

CREATE COMPUTE MODULE Event_Monitoring_Compute
    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
        -- CALL CopyMessageHeaders();
        CALL CopyEntireMessage();
        RETURN TRUE;
    END;
END MODULE;



Step 3:- Create an XML file named Sample.xml on the Desktop and copy this data into it


Sample.xml file:

  <profile:monitoringProfile xmlns:profile="http://www.ibm.com/xmlns/prod/websphere/messagebroker/6.1.0.3/monitoring/profile" profile:version="2.0">
  <profile:eventSource profile:eventSourceAddress="MQ Input.transaction.Start" profile:enabled="true">
  <profile:eventPointDataQuery>
  <profile:eventIdentity>
  <profile:eventName profile:literal="MQ Input.TransactionStart" /> 
  </profile:eventIdentity>
  <profile:eventCorrelation>
  <profile:localTransactionId profile:sourceOfId="automatic" /> 
  <profile:parentTransactionId profile:sourceOfId="automatic" /> 
  <profile:globalTransactionId profile:sourceOfId="automatic" /> 
  </profile:eventCorrelation>
  <profile:eventSequence profile:name="creationTime" /> 
  </profile:eventPointDataQuery>
  <profile:applicationDataQuery>
<profile:complexContent>
  <profile:payloadQuery profile:queryText="$Body/emp/eno"> 
  </profile:payloadQuery>
  </profile:complexContent>
  </profile:applicationDataQuery>
  <profile:bitstreamDataQuery profile:bitstreamContent="body" profile:encoding="base64Binary" /> 
  </profile:eventSource>
  <profile:eventSource profile:eventSourceAddress="MQ Input.transaction.End" profile:enabled="true">
  <profile:eventPointDataQuery>
  <profile:eventIdentity>
  <profile:eventName profile:literal="MQ Input.TransactionEnd" /> 
  </profile:eventIdentity>
  <profile:eventCorrelation>
  <profile:localTransactionId profile:sourceOfId="automatic" /> 
  <profile:parentTransactionId profile:sourceOfId="automatic" /> 
  <profile:globalTransactionId profile:sourceOfId="automatic" /> 
  </profile:eventCorrelation>
  <profile:eventSequence profile:name="creationTime" />
  </profile:eventPointDataQuery>
  <profile:applicationDataQuery>
  <profile:complexContent>
  <profile:payloadQuery profile:queryText="$Body/emp/ename"> 
  </profile:payloadQuery>
  </profile:complexContent>
  </profile:applicationDataQuery> 
  <profile:bitstreamDataQuery profile:bitstreamContent="body" profile:encoding="base64Binary" /> 
  </profile:eventSource>
</profile:monitoringProfile>


Now open the IBM Command Console and issue these commands




Now deploy the flow to the Execution Group and issue this command



Now create a subscription point in MQ Explorer



Look for the result in the SUB queue; there are two messages, one for Transaction.Start and one for Transaction.End.

Result:-
Transaction.Start Result:




Transaction.End  Result:



Input:

<emp><eno>1000</eno><ename>Ajay</ename></emp>

Wednesday, 31 August 2016


XSL String Functions that can be used in Datapower XI 50








Publish-Subscribe in WMQ


Step 1:- Open MQ Explorer

New-->Topic with name 'Topic1'




Topic string-->COMPANY



Step 2:- Create New-->Subscription with the name 'sub'


Topic Name--> select topic1
Destination QMGR -->mq
Destination Queue Name --> q1

Finish


Then go to Topics, select topic1,
and right-click Test Publication


Put a message in the message data field--> To view the message, open q1


This tests whether the message lands on queue q1 or not.


Tuesday, 12 July 2016

ESQL Best Practices In Websphere Message Broker

 This article describes coding standards for Extended Structured Query Language (ESQL), emphasizing the use of ESQL in the development of IBM® WebSphere® Message Broker message flow applications. Topics include file naming and organization, file layout, comments, line wrapping, alignment, white space, naming conventions, frequently used statements, and useful programming practices.


The following guidelines should be used when constructing the ESQL files that implement a WebSphere Message Broker application:
  • ESQL source files should always have names that end with the extension .esql. For example: NotificationTimeout.esql.
  • ESQL filenames should consist of mixed-case alphabetic characters, with the first letter of each word and all acronyms in uppercase. For example: IBMExample.esql.
In general, ESQL files longer than 2000 lines are cumbersome to deal with and should be avoided, by ensuring that a single ESQL file implements the message flows that relate to each other, and by abstracting reusable code into separate ESQL files.
ESQL files can be grouped in broker schemas, which are a hierarchical way of organizing ESQL files. They also serve to create local name spaces so that procedures and functions can be reused, and yet be distinguished by the schema they are in. In short, broker schemas are organizational units of related code that address a specific business or logical problem. Therefore, related ESQL files should be placed in their own schema.
3. FILE LAYOUT
The content of each ESQL file should conform to the following standards:
  • The file must start with a descriptive header comment, as described in Section 5 below.
  • The header comment should be followed by a broker schema declaration and the PATH clauses that specify a list of additional schemas to be searched when matching function and procedure calls to their implementations. Do not use the default broker schema.
    BROKER SCHEMA com.ibm.convention.sample
    PATH com.ibm.convention.common;
    PATH com.ibm.convention.detail;
The remainder of the file should be divided into three sections:
  1. Broker schema level variables and constants
    They are not globally reusable and can only be used within the same broker schema.
    EXTERNAL variables are also known as user-defined properties. They are implicitly constant. When you use the NAMESPACE and NAME clauses, their values are implicitly constant and of type CHARACTER.
    DECLARE DAY1 EXTERNAL CHARACTER 'monday';
    DECLARE XMLSCHEMA_INSTANCE NAMESPACE 'http://www.w3.org/2001/XMLSchema-instance';
    DECLARE HIGH_PRIORITY CONSTANT INTEGER 7;
    DECLARE processIdCounter SHARED INTEGER 0;
  2. Broker schema level procedures and functions
    They are globally reusable and can be called by other functions or procedures in ESQL files within any schema defined in the same or another project.
    CREATE PROCEDURE getProcessId(OUT processId CHARACTER) 
    BEGIN
        BEGIN ATOMIC        
            SET processId = CAST(CURRENT_TIMESTAMP AS CHARACTER FORMAT 'ddHHmmss') 
                            || CAST(processIdCounter as CHAR);
            SET processIdCounter = processIdCounter + 1;
        END;
    END;
    
    CREATE FUNCTION encodeInBASE64(IN data BLOB)
    RETURNS CHARACTER
    LANGUAGE JAVA
    EXTERNAL NAME "com.ibm.broker.util.base64.encode";
  3. Modules
    A module defines a specific behavior for a message flow node. It must begin with the CREATE node_type MODULE statement and end with an END MODULE statement. The node_type must be either COMPUTE, DATABASE, or FILTER. A module must contain the function Main(), which is the entry point for the module. The constants, variables, functions, and procedures declared within the module can be used only within the module.
    CREATE FILTER MODULE FilterData
        CREATE FUNCTION Main() RETURNS BOOLEAN
        BEGIN
            DECLARE messageType REFERENCE TO Root.XMLNSC.Msg.Type;
            IF messageType = 'P' THEN
                RETURN TRUE;
            ELSEIF messageType = 'D' THEN
                RETURN FALSE;
            ELSE
                RETURN UNKNOWN;
            END IF;
        END;
    END MODULE;

4. NAMING CONVENTIONS

In general, the names assigned to various ESQL constructs should be mnemonic and descriptive, using whole words and avoiding confusing acronyms and abbreviations. Of course, you need to balance descriptiveness with brevity, because overly long names are hard to handle and make code harder to understand. The following list provides naming conventions for ESQL broker schemas, modules, keywords, correlation names, procedures, functions, variables, and constants.
  • Schema: A schema name reflects the file system path leading to the location of files in that schema; each level in a schema name is mapped to a directory name. To be supported by any file system, schema names should consist of lowercase alphanumeric characters. The names should be prefixed with the reverse of the company URL. Example: com.ibm.convention.sample
  • Module: A module name should consist of more than one alphanumeric character, start with an uppercase letter, and have mixed case, with the first letter of each internal word and all letters of acronyms in uppercase. It must match the label assigned to a compute, database, or filter node in a message flow that uses the module. If more than one such node in a message flow uses the same module, add additional characters to the node labels in order to differentiate between them. Examples: ConstructInvoice1, ConstructInvoice2, IsReconnect, RetrieveIBMData
  • ESQL keyword: ESQL keywords should be uppercase, although they are not case sensitive. Examples: SET, CREATE PROCEDURE, TRUE
  • Field reference or correlation name: A field reference or correlation name should start with an uppercase letter and have mixed case, with the first letter of each internal word and all letters of acronyms in uppercase. Example: Environment.Variables.Invoice.CustomerNumber
  • Variable: A variable name should start with a lowercase letter and have mixed case, with the first letter of each internal word and all letters of acronyms in uppercase. Since a variable name should start with a lowercase letter, it should not start with an acronym. Trivial variable names such as i or x can be assigned to temporary variables of limited scope and importance at your discretion. Examples: i, invoiceItem, currentHL7Section, controlReference
  • Procedure or function: A procedure or function name should consist of more than one alphanumeric character, start with a lowercase letter, and have mixed case, with the first letter of each internal word and all letters of acronyms in uppercase. The first word of the name should be a verb. Examples: setEnvironment, computeIBMValue
  • Constant: A constant name should start with a letter, use all uppercase letters, and use the underscore ( _ ) to separate words. Examples: MIN_VOLUME, MAX_RETRIES

5. COMMENTS

The discussion below classifies ESQL comments into one of two classes:
  • Header comments, used to summarize and demarcate a section of an ESQL file
  • Implementation comments, used to clarify the meaning of a piece of ESQL logic.

5.1. Header comments

5.1.1. File header

An ESQL file should always begin with a file-header comment that provides the name of the file, a brief synopsis of the purpose of the file, the copyright, and author information. The example below illustrates one possible format for such a header, but any suitable alternative that clearly conveys the same information is acceptable. Header lines should not exceed 80 characters in length. The description text should consist of complete sentences, wrapped as needed without using hyphenation. List each author on a separate line.
/*
 *
 * File name: Workfile.esql
 *
 * Purpose:   Sample ESQL file with proper prologue.
 *
 * Authors:   Rachel Shen
 *            Ankur Upadhyaya
 * Date:      21 March 2008
 * Version:   1.0
 *
 * @copyright  IBM Canada Ltd. 2008.  All rights reserved.
 *
 */

5.1.2. Module header

Every module definition must be preceded by a module header comment. A module header contains descriptive text that consists of complete sentences, wrapped as needed without using hyphenation:
/*
 * Module description goes here
 *
 */

5.1.3. Procedure header

Every procedure definition must be preceded by a procedure header comment. A procedure header contains descriptive text that consists of complete sentences, wrapped as needed without using hyphenation. This header should also name and describe each of the parameters handled by the procedure, classifying them as type IN, OUT, or INOUT. The parameter descriptions need not consist of complete sentences -- brief, descriptive phrases should suffice. However, they should be preceded by a hyphen and properly aligned with one another:
/*
 * Procedure description goes here.
 *
 * Parameters:
 *
 * IN:    REFERENCE parameter1 - Description goes here.
 * INOUT: INTEGER   parameter2 - Description goes here.
 * OUT:   TIMESTAMP result     - Description goes here.
 *
 */

5.1.4. Function header

Function headers are essentially the same as procedure headers:
/*
 * Function description goes here.
 *
 * Parameters:
 *
 * IN:    REFERENCE parameter1 - Description goes here.
 * INOUT: INTEGER   parameter2 - Description goes here.
 * OUT:   TIMESTAMP result     - Description goes here.
 *
 * RETURNS: BOOLEAN - Description goes here.
 *
 */

5.2. Implementation comments

Add comments to ESQL source code to clarify program logic and convey information that is not immediately obvious from inspecting the code. Do not add too many comments -- they can become redundant, complicate code maintenance, and get out of date as the software evolves. In general, too many comments indicate poorly written code, because well written code tends to be self explanatory. Implementation comments can be written in single-line, block, or trailing forms, as described below.

5.2.1. Single-line comments

A single-line comment is a short comment that explains and aligns with the code that follows it. It should be preceded by a single blank line and immediately followed by the code that it describes:
-- Check for the condition
IF condition THEN
    SET z = x + y;
END IF;

5.2.2. Block comments

Block comments are used to provide descriptions of ESQL files, modules, procedures, and functions. Use block comments at the beginning of each file and before each module, procedure, and function. You can also use them anywhere in an ESQL file. Block comments inside a function or procedure should be indented at the same level as the code they describe. Precede a block comment with a single blank line and immediately follow it with the code it describes. Because shorter comments with expressive code are always preferable to lengthy block comments, block comments should rarely be used within a procedure or function. Example of a block comment:
/*
 * Here is a block comment to show an example of comments
 * within a procedure or function. 
 *
 */
IF condition THEN
    SET z = x + y;
END IF;

5.2.3. Trailing comments

Trailing comments are brief remarks on the same line as the code they refer to. Indent these comments to clearly separate them from the relevant code. If several trailing comments relate to the same segment of code, align the comments with one another. Trailing comments are usually brief phrases and need not be complete sentences:
IF condition THEN
    SET z = x + y;           -- trailing comment 1
ELSE
    SET z = (x - y) * k;    -- comment 2, aligned with comment 1
END IF;

6. STYLE GUIDELINES

6.1. Line wrapping and alignment

Lines of ESQL source code should be wrapped and aligned according to the following guidelines:
  • Every statement should be placed on a separate line.
  • Lines longer than 80 characters should be avoided as they exceed the default width of many terminals and development tools.
  • The unit of indentation used to align ESQL source code should be four characters which is the default setting in the WebSphere Message Broker Toolkit. The specific construction of this indentation, using spaces or tabs, is left to the discretion of the programmer.
  • Line lengths should be limited by breaking lengthy expressions according to the following rules:
  • Lines should be as long as possible, without exceeding 80 characters;
  • Break after a comma;
  • Break before an operator;
  • Break at the highest level possible in the expression;
  • Align a new line with the beginning of the expression at the same level on the preceding line. Should this alignment require deep indentations that produce awkward code, an indentation of eight spaces may be used instead.
The following ESQL code samples illustrate the above rules.
CREATE FUNCTION function1(IN longExpression1 REFERENCE, 
                          IN longExpression2 CHAR, 
                          OUT longExpression3 NUMBER, 
                          INOUT longExpression4 CHAR)
CALL procedure1(myLongVariable1, myLongVariable2, myLongVariable3,
                myLongVariable4);
SET returnValue = function1(argument1, argument2, argument3,
                            function2(argument4, argument5,
                                      argument6));
SET finalResult = ((x / variable1) * (variable2 - variable3))
               + (y * variable5);
The ESQL sample below illustrates a case where an indentation of eight spaces should be used instead of the usual alignment to avoid deeper indentations that would result in confusing code.
-- INDENT 8 SPACES TO AVOID VERY DEEP INDENT
CREATE PROCEDURE showAVeryLongProcedureName(IN argument1, 
        INOUT argument2,
        OUT argument3)
Finally, the ESQL samples below illustrate line wrapping and alignment practices. The first is preferable because the line break is inserted at the highest level possible.
SET longName1 = longName2 * (longName3 + longName4 - longName5)
                   + 4 * longname6;                         -- PREFER
SET longName1 = longName2 * (longName3 + longName4
                   - longName5) + 4 * longname6;          -- AVOID

6.2. White space

White space should be used to improve code readability.
  • Insert two blank lines between sections of an ESQL file;
  • Insert one blank line between functions and procedures;
  • Insert one blank line between the variable declarations in a function/procedure and its first statement;
  • Insert one blank line before a block or single-line comment;
  • A blank space should follow each comma in any ESQL statement that makes use of commas outside of a string literal;
  • All binary operators should be separated from their operands by spaces.
         SET a = c + d;
         SET a = (a + b) / (c * d);

7. STATEMENTS

Each line should contain at most one statement. Here are some sample statements.

7.1. DECLARE

Put declarations immediately after the broker schema declaration, or at the beginning of modules, functions, and procedures. One declaration per line is recommended:
        -- EXTERNAL variable
        DECLARE DAY1 EXTERNAL CHARACTER 'monday';
        -- NAMESPACE variable
        DECLARE XMLSCHEMA_INSTANCE NAMESPACE 'http://www.w3.org/2001/XMLSchema-instance';
        -- CONSTANT
        DECLARE HIGH_PRIORITY CONSTANT INTEGER 7;
        -- SHARED variable
        DECLARE processIdCounter SHARED INTEGER 0;
        -- REFERENCE
        DECLARE messageType REFERENCE TO Root.XMLNSC.Msg.Type;

7.2. FOR

        DECLARE i INTEGER 1;
        FOR source AS Environment.SourceData.Folder[] DO
            ...
            SET i = i + 1;
        END FOR;

7.3. IF

        -- IF statement
        IF InputBody.Msg.Report = 'PDF' THEN
            SET OutputRoot.XMLNSC.Msg.Type = 'P';
        END IF;
        -- IF-ELSE statement
        IF InputBody.Msg.Report = 'PDF' THEN
            SET OutputRoot.XMLNSC.Msg.Type = 'P';
        ELSE
            SET OutputRoot.XMLNSC.Msg.Type = 'X';
        END IF;
        -- IF-ELSEIF-ELSE statement
        IF InputBody.Msg.Report = 'PDF' THEN
            SET OutputRoot.XMLNSC.Msg.Type = 'P';
        ELSEIF InputBody.Msg.Report = 'DOC' THEN
            SET OutputRoot.XMLNSC.Msg.Type = 'D';
        ELSE
            SET OutputRoot.XMLNSC.Msg.Type = 'X';
        END IF;

7.4. LOOP

        DECLARE i INTEGER;
        SET i = 1;
        x : LOOP
            ...
            IF i >= 4 THEN
                LEAVE x;
            END IF;
            SET i = i + 1;
        END LOOP x;

7.5. RETURN

        RETURN;
        RETURN TRUE;
        RETURN FALSE;
        RETURN UNKNOWN;
        RETURN ((priceTotal / numItems) > 42);

7.6. THROW

        THROW USER EXCEPTION; 
        THROW USER EXCEPTION CATALOG 'BIPv600' MESSAGE 2951 
                VALUES('The SQL State: ', SQLSTATE, 'The SQL Code: ', 
                       SQLCODE, 'The SQLNATIVEERROR: ', SQLNATIVEERROR, 
                       'The SQL Error Text: ', SQLERRORTEXT);

7.7. WHILE

        -- WHILE statement        
        DECLARE i INTEGER 1;
        WHILE i <= 10 DO
            SET OutputRoot.XMLNSC.Msg.Count[i] = i;
            SET i = i + 1;
        END WHILE;

7.8. CASE

        -- CASE function
        -- Like ?: Expression in C        
        SET OutputRoot.XMLNSC.Msg.Type = CASE InputBody.Msg.Report
                WHEN 'PDF' THEN 'P'
                WHEN 'DOC' THEN 'D'
                ELSE 'X'
                END;       
        -- Like SWITCH Expression in C        
        CASE InputBody.Msg.Report
        WHEN 'PDF' THEN 
            SET OutputRoot.XMLNSC.Msg.Type = 'P';
        WHEN 'DOC' THEN 
            CALL handleDocument();
        ELSE
            CALL handleUnknown();
        END CASE;

7.9. SELECT

        SELECT ITEM segment.No 
                FROM ControlRef.Segments.Segment[] AS segment 
                WHERE segment.No = currentSegment;

7.10. UPDATE

        UPDATE Database.TELEMETRY AS telemetry 
                SET bitmap = refEnvTeleSeg.NewBitmap 
                WHERE telemetry.TelemetryId = refEnvTeleSeg.Results.TelemetryId;

7.11. INSERT

        INSERT INTO Database.TELEMETRY_SEGMENT (TelemetryId, BlockNum, FileSegment)
                VALUES (refEnvTeleSeg.Results.TelemetryId, refEnvTeleSeg.SegmentNum, 
                        ASBITSTREAM(Body));

8. PROGRAMMING PRACTICES

  • Put variables and constants at the broker schema level only when they need to be reused by multiple modules.
  • Initialize variables within DECLARE statements, especially EXTERNAL variables.
  • Declare REFERENCEs to avoid excess navigation of the Message Tree.
  • Although the direction indicator (IN, OUT, INOUT) is optional for FUNCTION parameters, specify it for every parameter of all new routines of any type, for documentation purposes.
  • Make code as concise as possible to restrict the number of statements. This will cut parsing overhead.
  • Use the LASTMOVE or CARDINALITY functions to check the existence of fields in the message tree; this avoids mistakes.
  • Avoid calling CARDINALITY inside loop conditions, because it is re-evaluated on every iteration.
  • Avoid overuse of Compute nodes because tree copying is processor heavy: put reusable nodes into sub-flows.
  • Avoid nested IF statements: use ELSEIF or CASE WHEN clauses to get quicker drop-out.
  • Avoid overuse of string manipulation because it is processor heavy: use the REPLACE function in preference to a complete re-parsing.
  • Use parentheses to make the meaning clear. The order of precedence in the arithmetic expressions is:
  • Parentheses
  • Unary operators including unary - and NOT
  • Multiplication and division
  • Concatenation
  • Addition and subtraction
  • Operations at the same level are evaluated from left to right.
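Several of the practices above can be shown together in one short sketch. The field names (Invoice, Item) and the price variables (unitPrice, quantity, taxAmount) are illustrative only:

```esql
-- Declare a REFERENCE once instead of re-navigating the tree for each field
DECLARE refItem REFERENCE TO InputBody.Invoice.Item[1];

-- Evaluate CARDINALITY once, outside the loop condition
DECLARE itemCount INTEGER CARDINALITY(InputBody.Invoice.Item[]);
DECLARE i INTEGER 1;
WHILE i <= itemCount DO
    -- LASTMOVE confirms that the reference points at an existing field
    IF LASTMOVE(refItem) THEN
        SET OutputRoot.XMLNSC.Invoice.Item[i] = FIELDVALUE(refItem);
        MOVE refItem NEXTSIBLING REPEAT TYPE NAME;
    END IF;
    SET i = i + 1;
END WHILE;

-- Parentheses make the intended evaluation order explicit
SET OutputRoot.XMLNSC.Invoice.Total = (unitPrice * quantity) + taxAmount;
```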

9. CODE SAMPLE

/*
 *
 * File Name: ESQLCodeConvention.esql
 *
 * Purpose:   Sample ESQL file with proper prologue.
 * 
 * Authors:   Rachel Shen
 *            Ankur Upadhyaya
 * Date:      21 March 2008
 * Version:   1.0
 *
 * @copyright  IBM Canada Ltd. 2008.  All rights reserved.
 *
 */
BROKER SCHEMA com.ibm.convention.sample
PATH com.ibm.convention.common;
-- First day of the week
DECLARE DAY1 EXTERNAL CHARACTER 'monday';
-- XML schema instance namespace
DECLARE XMLSCHEMA_INSTANCE NAMESPACE 'http://www.w3.org/2001/XMLSchema-instance';
-- High priority message's priority on MQ
DECLARE HIGH_PRIORITY CONSTANT INTEGER 7;
-- A shared counter to generate process id
DECLARE processIdCounter SHARED INTEGER 0;
/*
 * This procedure generates the process id.
 *
 * Parameters:
 *
 * OUT:   CHARACTER processId     - The generated process id.
 *
 */
CREATE PROCEDURE getProcessId(OUT processId CHARACTER) 
BEGIN
    BEGIN ATOMIC        
        SET processId = CAST(CURRENT_TIMESTAMP AS CHARACTER FORMAT 'ddHHmmss') 
                        || CAST(processIdCounter AS CHARACTER);
        SET processIdCounter = processIdCounter + 1;
    END;
END;
/*
 * This function encodes the input in BASE64 format.
 *
 * Parameters:
 * IN:    BLOB data - Input data for encoding.
 *
 * RETURNS: CHARACTER - BASE64-encoded string.
 *
 */
CREATE FUNCTION encodeInBASE64 (IN data BLOB)
RETURNS CHARACTER
LANGUAGE JAVA
EXTERNAL NAME "com.ibm.broker.util.base64.encode";
/*
 * This module has the sample code for the article, ESQL code convention.
 */
CREATE COMPUTE MODULE CreateESQLCodeConvention
    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
        DECLARE processId CHARACTER;       -- Unique identifier of the working process
        DECLARE encodedMessage CHARACTER;  -- BASE64-encoded copy of the input message
        CALL getProcessId(processId);
        SET encodedMessage = encodeInBASE64(ASBITSTREAM(InputBody.Msg));
        RETURN TRUE;
    END;
END MODULE;
/*
 * This module filters data based on the message type.
 * It has the sample code of the second module in the ESQL file.
 */
CREATE FILTER MODULE FilterData
    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
        DECLARE messageType REFERENCE TO Root.XMLNSC.Msg.Type;
        IF messageType = 'P' THEN
            RETURN TRUE;
        ELSEIF messageType = 'D' THEN
            RETURN FALSE;
        ELSE
            RETURN UNKNOWN;
        END IF;    
    END;
END MODULE;

Monday, 27 June 2016

UDP (User Defined Properties in Message Broker)



1. If you specify values such as DSN or schema in the message flow's User Defined Properties panel, declare them in the flow's ESQL using the EXTERNAL keyword, giving a default value (which may be empty) within single quotes. The value set in the message flow's User Defined Properties panel then takes precedence over the value given in the ESQL code.

Ex:- DECLARE cDATABASE EXTERNAL CHARACTER '';

2. You need not create the UDP in the flow first; declaring an EXTERNAL variable with a default value in ESQL is sufficient:

Ex:-DECLARE cDATABASE EXTERNAL CHARACTER 'EAI';
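Once declared as EXTERNAL, the variable is used like any other ESQL variable. A minimal sketch (the module and output field names are illustrative only):

```esql
DECLARE cDATABASE EXTERNAL CHARACTER 'EAI';

CREATE COMPUTE MODULE UseUDPSample
    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
        -- If the flow or BAR file supplies a value for cDATABASE,
        -- that value replaces the 'EAI' default declared above
        SET OutputRoot.XMLNSC.Msg.Database = cDATABASE;
        RETURN TRUE;
    END;
END MODULE;
```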

How to delete a broker that refuses to be deleted


1. C:\ProgramData\IBM\MQSI


       Remove the folders named after the broker in the common, config, and registry directories there.

2. Queue Manager Deletion in Win 7
     C:\Program Files (x86)\IBM\WebSphere MQ\Qmgrs

Paths used when deleting brokers in Windows 7:

C:\Program Files (x86)\IBM\WebSphere MQ\Qmgrs
C:\ProgramData\IBM\MQSI\components


Another case: when the system password is changed, the broker does not start and cannot connect to the queue manager. In that case, go to Run > services.msc and make sure that all three related services are in the Started state, or else delete the broker and queue manager using the steps above and create a new broker using the Default Configuration wizard.




Then, in the service's Log On properties, change the password here as well, to the newly changed password.



Sunday, 20 March 2016



MS Outlook 2013 Issue which has Unread messages 


Question:- I have an annoying problem in Outlook 2013. My inbox is flagged with "1" unread message, but this is not the case. Even when I empty my inbox folder, this bold "1" stays next to my folder, as if it contains a new message. I've tried to "empty" it, to "clean" it, and to "mark all as read" it. Nothing works.


Solution:-

In the “Search Current Mailbox (Ctrl+E)” box, type: read:no and hit Enter.
When it shows “Find More on Server” link, click it. Then the unread email(s) should appear.
EDIT: Works with Outlook 2016 as well.

Friday, 19 February 2016


           Using the Exception plug-in node in IIB V9.0



Introduction

The Exception plug-in node requires IBM® Integration Bus V9 or later, and runs on both Microsoft® Windows® and Linux®. The plug-in node consists of two parts, a run-time JAR file (ExceptionRuntime.jar) and a design-time Toolkit plug-in (ExceptionJavaPlugin.jar), which provide the node for use in message flows.

Installing

To install the run-time component:
1. Download ExceptionPlugin.zip at the bottom of the article.
2. Unzip ExceptionPlugin.zip.
3. Copy runtime/ExceptionRuntime.jar to all of the machines running brokers that are required to run the node. Place the JAR file in <IBM Integration Bus Runtime Install Directory>/jplugin:
  • Windows: C:\Program Files (x86)\IBM\MQSI\9.0.0.0\jplugin
  • Linux: /opt/ibm/mqsi/9.0.0.1/jplugin

To install the design-time component:
1. Unzip the plug-in zip file.
2. Copy toolkit/ExceptionJavaPlugin.jar and place it in <IBM Integration Bus Toolkit Install Directory>/plugins. For example: C:\Program Files (x86)\IBM\IntegrationToolkit90\plugins.
developerWorks® ibm.com/developerWorks/
Using the Exception plug-in node in IBM Integration Bus Page 2 of 7

Uninstalling

1. Stop IBM Integration Bus and close the Toolkit.
2. Remove the runtime JAR file ExceptionRuntime.jar from the <IBM Integration Bus Install Directory>/jplugin directory.
3. Remove the toolkit JAR file ExceptionJavaPlugin.jar from the <IBM Integration Bus Toolkit Directory>/plugins directory.
4. Start IBM Integration Bus and open the Toolkit.

How the node works

The Exception node parses the ExceptionTree generated during message flow execution, and retrieves details such as the Exception Code, Text, and Details. The following example shows how you can use the node in a message flow:

In this message flow, HTTPRequest throws a socket exception, and the Exception node retrieves the generated exception details.

Here is the flow execution in Debug mode. Before the Exception node, no exception details are present; after the Exception node, the exception details are captured in Environment.
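After the Exception node runs, a downstream Compute node can read the captured details from the Environment tree. The exact field names under Environment depend on the node's implementation, so the ones below are assumptions for illustration; check the actual tree in the debugger:

```esql
-- Hypothetical Environment field names set by the Exception node
SET OutputRoot.XMLNSC.Error.Code    = Environment.Exception.Code;
SET OutputRoot.XMLNSC.Error.Text    = Environment.Exception.Text;
SET OutputRoot.XMLNSC.Error.Details = Environment.Exception.Details;
```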


Conclusion
The IBM Integration Bus Exception node makes it much easier to capture exception details in a message flow, and avoids redundant code that you would otherwise have to reconfigure for every project.

                          Sequence Node in Message Broker


Step 1:- Create a flow with an MQInput node, a Sequence node, and an MQOutput node, as follows


Step 2:- Sequence Node configuration


Step 3:- Provide the input like this :-
<doc><grp>AJAY</grp></doc>

Step 4:- On the first run, the result would be
<doc><grp>AJAY</grp><seq>0</seq></doc>

The next time you put the same input, the result would be

<doc><grp>AJAY</grp><seq>1</seq></doc>

The sequence number keeps increasing in this way.