MQSeries…Ensuring Message Delivery from Queue to Target

Using MQSeries in DataStage as a source or target is very easy…but ensuring delivery from queue to queue is a bit trickier. Even more difficult is ensuring delivery from queue to database without dropping any messages…

The best way to do this is with an XA transaction, using a formal transaction coordinator, such as MQSeries itself. This is typically done with the Distributed Transaction Stage, which works with MQ to perform transactions across resources…deleting a message from the source queue, INSERTing a row into the target, and then committing the entire operation. This requires the most recent release of DataStage, and the right environment, releases, and configuration of MQSeries and a database that it supports for such XA activity…

So what happens if you don’t have the right release of any of these things, or are using an RDBMS that is not supported for XA with MQ?

You can come “real close” and accomplish what you’ll need in most scenarios with the attached Job and text file with DDL. This .dsx defines a technique where messages are read from a source queue, then written to a target RDBMS…if the INSERT works, messages are immediately removed from the source queue…but if it fails, the removal is not performed and the messages remain in the source queue. Careful thought, testing, and review of your recovery strategy are necessary, but this technique may be useful in a lot of situations.
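Outside of DataStage, the get-then-insert-then-remove pattern can be sketched in a few lines. Below is a minimal Python illustration, with sqlite3 standing in for the target RDBMS and a plain list standing in for the source queue; in real MQ, “leaving the message on the queue” would be a destructive GET under syncpoint that is never committed. All names here are illustrative, not taken from the .dsx:

```python
import sqlite3

# Stand-in for the source queue. In MQ this would be a destructive GET
# under syncpoint (MQGMO_SYNCPOINT): until the get is committed, the
# message remains safely on the queue.
source_queue = [("msg-001", "alpha"), ("msg-001", "dup"), ("msg-002", "beta")]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (msg_id TEXT PRIMARY KEY, payload TEXT)")

delivered, left_on_queue = [], []
for msg_id, payload in list(source_queue):
    try:
        # INSERT the row for this message into the target...
        conn.execute("INSERT INTO target VALUES (?, ?)", (msg_id, payload))
        conn.commit()
    except sqlite3.Error:
        # ...if the INSERT fails, do NOT remove the message: it stays
        # on the source queue for a later retry.
        conn.rollback()
        left_on_queue.append(msg_id)
        continue
    # Only after a successful INSERT is the message removed.
    source_queue.remove((msg_id, payload))
    delivered.append(msg_id)
```

The key property is that removal from the queue happens only after the INSERT commits; a failure at any earlier point leaves the message where it was, so the worst case is a duplicate delivery on retry, never a lost message.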

Once again, I haven’t mastered the uploading on this site, and had to rename MQCommitTestv8.dsx and TestDDL.txt with pdf suffixes. Just rename them and you will be fine. They are both just simple ASCII files.

Ernie

targetddl1 mqcommittestv8


21 Responses to “MQSeries…Ensuring Message Delivery from Queue to Target”

  1. Vincent McBurney Says:

    Nice, keep the MQSeries posts coming! If DataStage cannot deliver a message do you flag it in DataStage or can you flag it back in MQSeries? Do you still need DataStage reject links off the transformer and database target in this type of job?

  2. dsrealtime Says:

    That’s a good question, Vincent….and from the perspective of the technique described here, the message merely “stays” in MQ in the Source Queue and doesn’t get lost (vs getting cleanly removed from the Source Queue). …but there is no special flagging of the message. As to the use of rejection links, it’s worth noting that this particular technique can ONLY work in DS Server edition because of the reliance on two links to a single Stage being part of the same unit of work….

  3. Todd Robinson Says:

Syncpoint control is not possible with MQ and EE? Good to know, and makes sense. We’ve used this Server method with Oracle for the past 6 years without incident. The interface between Oracle and DataStage is well defined. I would be leery of other DBMSs, and your thorough-testing comment is dead on. Getting the RDBMS to accurately report the status of an operation back to DataStage 100% of the time is the problem, as is preventing a commit (enabling a rollback) when there is a problem.
A “cross your fingers and hope” approach we’ve also used is to save the MsgIds to a file in the DBMS write job, and then do the syncpoint reads in a downstream job after the first has finished successfully. This would fall under the heading of coming “real close”.
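For illustration, that two-job MsgId approach might be modeled as follows; again this is a toy Python sketch, with sqlite3 standing in for the DBMS and a list for the queue, and “Job 1” / “Job 2” simply sections of one script (in real life a sequencer would gate Job 2 on Job 1’s exit status):

```python
import sqlite3

# Job 1 reads the queue non-destructively (a browse in MQ terms),
# writes the rows, and records each MsgId; nothing is removed yet.
queue = [("id-1", "row one"), ("id-2", "row two")]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (payload TEXT)")
conn.execute("CREATE TABLE msg_ids (msg_id TEXT)")

# --- Job 1: DBMS write job, saving MsgIds alongside the data ---
for msg_id, payload in queue:
    conn.execute("INSERT INTO target VALUES (?)", (payload,))
    conn.execute("INSERT INTO msg_ids VALUES (?)", (msg_id,))
conn.commit()
job1_ok = True  # the sequencer would check the real job's status here

# --- Job 2: runs only after Job 1 finishes successfully, and does ---
# --- destructive (syncpoint) reads of exactly the saved MsgIds.   ---
if job1_ok:
    saved = {row[0] for row in conn.execute("SELECT msg_id FROM msg_ids")}
    queue = [(m, p) for m, p in queue if m not in saved]
```

Since the destructive reads happen only after the write job has committed and finished cleanly, a failure in Job 1 leaves every message on the queue; the remaining risk window sits inside Job 2 itself, hence “cross your fingers and hope”.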

  4. dsrealtime Says:

    Hi Todd…I like the Server implementation also because it doesn’t care what you do between the MQ Stages….in fact, you could end the job and start up another, and perform the final delete/Put in another job much later in the overall process. That being said, while I haven’t explicitly tested it yet, we should be able to accomplish this with EE and an MQ Connector as the source and methodology for the majority of the job, and then a Server side Shared Container for the final target.

  5. Vincent McBurney Says:

Is the Distributed Transaction Stage new for DataStage 8.1? I can’t find it in any 8.0.1 documentation. Can it be used on its own without a queue service like MQ? For example, can you read a complex flat file, flatten it, and deliver different rows to the DTS stage as a single unit of work?

  6. dsrealtime Says:

    I spoke too soon…thought it was ready, but it’s still “in the oven.” I had the opportunity to do some work with it early on, and it’s close, from what I hear, although I can’t tell you exactly when we’ll see it……. by design it will let me read from a queue and target one or more relational tables, all in a “true” single unit of work…. it exploits MQSeries’ strengths as a formal transaction coordinator. Not sure if it will be able to work independently of MQ, because it exploits MQ under the covers. Suppose you could put the data into a queue for the sake of driving the uow at the other end……

  7. Surajit Says:

    Hi,
I am not able to open the pdf attachments for MQ Series. I would appreciate it if you could post the pdfs again.

    Thanks,
    Suraj

    • dsrealtime Says:

…they are just text files. Named PDF, but the mqXXX one is a .dsx and the other is just a .txt file… just “save” them and rename!

      Ernie

  8. Tim Smith Says:

Sorry – I don’t get it. Why can’t we use EE? What is the point of EE if it cannot provide transaction consistency?

    • dsrealtime Says:

      Hi Tim.

      It depends on what you are trying to do, and what you need transaction consistency for…. if it’s for targeting DB2 or Oracle, you’ll soon have that. There is a Distributed Transaction Stage in the oven (I don’t know the exact date of delivery, but I played with it earlier and it looks very good for certain things) that takes advantage of MQ’s ability to have a unit of work between a queue and a database….and from the spec I reviewed, and comments I made while testing at that time, two queues. I’ll be sure to put a report here once things are closer to production and/or generally available and I can give it a good exercise.

On the other hand, Server is still a finely honed tool for doing transactional things, and quite often such transactional things don’t need the massive parallelism that batch does…and Server is no slouch in performance. It can’t touch EE at the high end, but it can stand on its own in most cases, and if all you need is ensured queue-to-queue delivery, Server has done that since release 4.

      Ultimately, there are highly parallel activities that justify EE and need detailed unit of work control. I’ve had the privilege of working with some of the advanced consultants on our Center of Excellence team who have written custom operators (and in one scenario, a complex JMS example using JavaPack) when specific requirements are needed (multiple queues, multiple databases and fairly sophisticated failure processing).

      Ernie

  9. Rob Says:

    Hi,

    Can you please let me know where I can find detailed information about using the DataStage MQ connector?

    Cheers,
    Rob

    • dsrealtime Says:

      Hi Rob….

      The doc for the MQ Connector is pretty good….the properties are in alphabetical order (as opposed to functional order that you would fill them in), but that doesn’t impact the detail. It’s fairly self explanatory once you get used to the GUI (on some of the properties, pull down and indicate “yes” for a main property, and then the sub-properties will become un-greyed). For 90% of requirements you will only need about 10% of the properties. The MQ Connector can deal with a lot of special situations (types of messages, transactions, etc.) which is why there are so many. http://www.dsXChange.com (a public forum on DataStage) always has some good discussions on MQ, and there is some additional detail on the new Connector in this redbook… http://www.redbooks.ibm.com/Redbooks.nsf/RedbookAbstracts/sg247576.html?OpenDocument

      Lastly, MQ has been supported in DataStage for over 10 years, and you will find that the MQ Plugin is still on the canvas. For those 10% requirements, it will fill most of your needs also, although I would encourage you to use the MQ Connector if you can…it has a few more goodies.

      Ernie

  10. Krishna Says:

    Hi Ernie,

I think you need to post those files again.
I am not able to save the files; only a blank page is getting opened.

One more thing: do we need to set any environment variables after installing the MQ client on the DataStage machine?

    Please guide me.

    Thanks
    Krishna

    • dsrealtime Says:

      Try them again…I just did a “save as” and they downloaded fine. Be sure to rename them first…they are just text files, not pdfs. There are a variety of pre-reqs….if you are using the MQ Connector, then you can use the MQ Client or the MQ Server. If MQ Plugin, you will have had to choose that you want client or server for MQ usage when you did the initial install. Be sure that MQ is working fine “as is” outside of DataStage before trying to get it to work with DS.

      • dayrunner12001 Says:

I am trying to commit to a mainframe DB2 database and 6 other tables in a different DB2 database. Do I need to use 6 links in a single DTS stage, or 6 DTS stages?
Please respond.
Thanks
Dayrunner

  11. dayrunner12001 Says:

    I am using parallel jobs.
Can I use the MQ Connector stage in Job1, output to several datasets, and then in Job2 take these datasets and load the database tables using the DTS stage to ensure commits on the 6 tables? Is it possible to do it that way, or do both the MQ Connector and the DTS stage need to be in a single parallel job?
    Thanks for your response

    • dsrealtime Says:

      I haven’t done it with the new DTS Stage, but the methodology is designed to handle exactly your scenario…..the MQ Connector provides for a work queue, where the message goes under transaction control…..then later (another job), you pass the MQID of this work queue message into DTS and a transaction is created with your database, again using MQ’s cross resource transaction control.

      Ernie

  12. Vekat Says:

    Hi Ernie,
I love your website. I have a message with an MQRFH2 header. Do you know how to read this message using DataStage?

    • dsrealtime Says:

      Hi Vekat….sorry I missed this one. If you haven’t solved this already, what release are you using? In version 8, the MQ Connector provides the ability to skip past MQRFH2 header details……

      Ernie

  13. Steely Says:

How do I configure the DataStage v8.1 MQ Connector to put a message with an MQRFH2 header onto a queue?
The documentation says a column with type WSMQ.FORMATHEADERS must be present.
Is there an example somewhere of how this is done?

    • dsrealtime Says:

I haven’t done much with the MQRFH2 support in a while, but if memory serves correctly, the WSMQ.—- options are part of the Data Element list… You might have a column called XXX and have WSMQ. in its Data Element. Check the pull-down for data element in the properties of each column, and also look at the data elements as well as the various included table definitions. That should at least get you pointed in the right direction.

      Ernie

