DataStage and Minecraft (just for fun…)!

Hi Everyone!

Do your kids play Minecraft? Do you? Here is a “just for fun” recording of DataStage in a Minecraft world. If you use DataStage and you and/or your family members play Minecraft, we hope you’ll enjoy this little adventure into the “world of Transformation”. ; )

http://youtu.be/YFbLxbPuScA

Ernie

Methodology for Building Extensions

Hi Everyone…

In the last post I talked about “why” Metadata Extensions are useful and important (Building Metadata Extensions….”Why?”). Today I want to review the basic steps of a high-level “methodology” that you can apply when making decisions about extensions and their construction, one that will help you meet the objectives of your governance initiatives.

Step 1. Decide what you want to “see” in lineage and/or accomplish from a governance perspective. Do you have custom Excel spreadsheets that you would like to illustrate as little “pie charts” in a lineage diagram, as the ultimate targets? Do you have mainframe legacy “green screens” that business users would like to see represented as named “icons” in a business lineage report? Are there home-grown ETL processes that you need to identify, at least by “name”, when they move data between a flat file and your operational data store? Lineage helps boost users’ confidence, whether they are report and ETL developers, DBAs tracking actual processes, or reporting users who need some level of validation of a data source. Which objects are “missing” from today’s lineage picture? Which ones would add clarity to the users’ “big picture”? Each of the use cases above represents a scenario where lineage from the “known” sources (such as DataStage) wasn’t enough. There are no industry “bridges” for custom SQL, personalized spreadsheets, or home-grown JavaScript. And in the green-screen case, the natural lineage that illustrated the fields from a COBOL FD left business users confused and in the dark.

The “…accomplish from a governance perspective” phrase in that first sentence takes this idea further. The value of your solution is not just lineage: it will also be valuable to assign Stewards or “owners” to those custom reports, or expiration dates to the green screens. Perhaps those resources are also influenced by formal Information Governance Rules or Terms in the business glossary. The need to manage those resources, beyond their function in lineage, is also something to measure.

Step 2. How will you model it inside of Information Server? Once you know which objects and “things” you want to manage or include in lineage, which objects should you use inside of Information Server to represent them? The answer to this is a bit trickier. It requires some knowledge of Information Server and its metadata artifacts: how they are displayed, which ones exist in a parent-child hierarchy (if that is desirable), which ones are dependent upon others, and what their icons look like in data lineage reports. There aren’t any “wrong” answers here, although some methods will have advantages over others. There are many kinds of relationships within Information Server’s metadata, and nearly anything can be illustrated. Generally speaking, if the “thing” you are representing is closest in concept to a “table” or a “file”, then use those formal objects (Database Tables and Data Files). If it is conceptual, consider a formal logical modeling object. If it looks and tastes like a report, then a BI object (pie chart) might be preferred. If it is something entirely odd or abstract (the green screen above, or perhaps a proprietary message queue), then consider an Extended Data Source. I’ll go into more detail on each of these in later posts, but for now, from a methodology perspective, consider this your planning step. It often requires some experimentation to determine how best to illustrate your desired “thing”.

Step 3. How much detail do you need? This question is a bit more difficult to answer, but consider the time-to-value needed for your governance solution and what your ultimate objectives are. If you have a home-grown ETL process, do you need to illustrate every single column mapping expression and its syntax? Or do you just need to be able to find “that” piece of code within a haystack of hundreds of other processes? Both are desirable, but of course there is a cost attached to capturing explicit detail. More detail requires more mappings, and potentially more parsing (see further steps below). A case in point is a site looking at the lineage desired for tens of thousands of legacy COBOL programs. They have the details in a spreadsheet that will provide significant lineage: module name, source dataset, and target dataset. Would they benefit by having individual MOVE statements illustrated in lineage and searchable in their governance archive? Perhaps, but if they can locate the exact module in a chain in several minutes (something that often takes hours or even days today), the detail of that code can easily be examined by pulling the source code from available libraries. Loading the spreadsheet into Information Server is child’s play; parsing the details of the COBOL code, while interesting and potentially useful, is a far larger endeavor. On a lesser note, “how much detail you need” is also answered by reviewing Information Server technology and determining things like “Will someone with a Basic BG User role be able to see this ‘thing’?”, which leads to “Do I want every user to see this ‘thing’?”. Also important is whether the metadata detail you are considering is surfaced directly in the lineage display, or whether you have to drill down to view it. How important is that? It depends on your users, their experience with Information Server, how much training they will be getting, and so on.
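To make the “coarse detail” option concrete, here is a minimal sketch, in plain Python rather than anything product-specific, of turning a spreadsheet like the one described above into simple source-to-target rows that could later be loaded as extension mappings. The file and column names are hypothetical; adapt them to whatever your inventory actually contains.

    # Minimal sketch: turn a coarse inventory of COBOL modules (module name,
    # source dataset, target dataset) into simple source-to-target rows for
    # later loading as extension mappings. File and column names are hypothetical.
    import csv

    with open("cobol_inventory.csv", newline="") as src, \
         open("coarse_lineage.csv", "w", newline="") as out:
        reader = csv.DictReader(src)   # expects columns: module,source_dataset,target_dataset
        writer = csv.writer(out)
        writer.writerow(["source", "process", "target"])
        for row in reader:
            writer.writerow([row["source_dataset"], row["module"], row["target_dataset"]])

A dozen lines like this will get you the “find that module quickly” benefit; the full MOVE-statement parsing can always come later if the value justifies it.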

Step 4. Where can you get the metadata that you need? Is it available from another tool or process via extract? In xml? In .csv? Something else? Do you need to use a Java or C++ API to get it? Do you have those skills? Will you obtain the information (descriptions, purposes) by interviewing the end users who built their own spreadsheets? Is it written in comments in Excel? Some of the metadata may be floating around in the heads of your enterprise users and other employees; structured interviews may be the best way to capture that metadata and expertise for the future. Other times it is in a popular tool that provides push-button exports, or that has an open-enough model to go directly after its metadata via SQL. ASCII-based exports and extracts have proven to be one of the simplest methods. Governance teams are usually technical, but they often lack resources with lower-level API skills. Character-based exports, whether xml, .csv, or something else, are readable by many ETL tools and popular scripting languages like Perl, or can even be manipulated by hand with an editor like Notepad. I use DataStage because it’s there and I am comfortable with it, but the key is that you can easily gather the metadata you decided you need in the previous steps.
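As an illustration of how little code a character-based export can require, here is a small sketch in Python that flattens a hypothetical XML extract of report definitions into .csv rows. The element and attribute names (“report”, “name”, “owner”, “source”) are invented for illustration; substitute whatever your tool actually produces.

    # Sketch only: flatten a hypothetical XML export of report definitions
    # into .csv rows. Tag and attribute names are placeholders.
    import csv
    import xml.etree.ElementTree as ET

    tree = ET.parse("report_export.xml")
    with open("report_metadata.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["report_name", "owner", "data_source"])
        for report in tree.getroot().iter("report"):
            writer.writerow([report.get("name"),
                             report.get("owner"),
                             report.findtext("source", default="")])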

Step 5. Start small! This could easily be one of the earlier steps; the message here is “don’t try to capture everything at once”. Start with a selected set of metadata, perhaps related to one report or one project. Experiment with each of the steps here on that smaller subset, giving yourself the flexibility to change the approach, get the metadata from somewhere else, model it differently, or change your level of detail as you trial the solution with a selected set of users. Consider the artifacts that will have the most impact, especially for your sponsors. This will immediately focus your attention on a smaller set of artifacts that need to be illustrated for lineage and governance, and allow you to more quickly show a return on the governance investment that you are making.

Step 6. Build it! [and they will come :) ] Start doing your parsing and construct Extensions per your earlier design. Extension Mapping Documents are simple .csv files; there is no need for Java or .NET or other API calls. Adding objects and connecting them for lineage is easy. Extended Data Sources, Data Files, Terms, and BI objects are each created using simple .csv files (or, in the case of Terms, xml). I suggest that you do your initial prototypes entirely by hand; learn how Extensions and other such objects are structured, imported, and stored. As noted earlier, I will go into each of these in more detail in future posts, but all of them are well documented and easily accessible via the Information Server user interfaces. Once you have crafted a few, test the objects for lineage. Assign Terms to them. Experiment with their organization and management. Assign Stewards, and play with adding Notes. Work with Labels and Collections to experience the full breadth of governance features that Information Server offers. Then don’t wait: get this small number of objects into the hands of users, all kinds of users. Have a “test group” that includes selected executives, business reporting users, and decision makers in addition to your technical teams. Get their feedback and adjust the hand-crafted Extensions as necessary. Then you can move on and investigate how you would create those in automated fashion, loading them via the command line instead of the user interfaces.
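To give a feel for how small these hand-crafted files can be, here is a sketch (in Python, just so the explanatory comments can sit alongside the data) that writes a first, tiny mapping file. The column headings and object names are illustrative only; take the exact header row for an Extension Mapping Document from the product documentation or from a document exported out of Metadata Workbench.

    # Hand-crafting a first, tiny mapping file. The column headings and object
    # names below are illustrative only; use the exact Extension Mapping
    # Document header row from the product documentation.
    import csv

    rows = [
        # (source object, target object, description of the movement)
        ("NightlyFeed.csv", "ODS.CUSTOMER", "Home-grown ftp and load script"),
        ("ODS.CUSTOMER", "SalesSummary.xls", "Analyst-maintained spreadsheet"),
    ]

    with open("my_first_extension_mapping.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["Source", "Target", "Description"])   # hypothetical headers
        writer.writerows(rows)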

Keep track of your time while doing these things so that you can measure the effectiveness of the solution vis-a-vis the effort required. For some of your extensions, you may decide that you only need a limited number of objects and that they almost never change, so no future automation will be necessary. For others, you may decide that it is worth investing in your own enterprise’s development of a more robust parser or extract-and-create-extension mechanism that can be re-run as metadata stores change over time. This also makes it simpler to determine when it makes sense to invest in an IBM partner solution with existing “metadata converters” that already load the repository. These are trusted partners who work closely with all of us at IBM and whose solutions have largely answered the methodology questions above through their work at other sites. IBM Lab Services can also help you build such interfaces. When appropriate and market forces prevail, IBM evaluates such interfaces for regular inclusion in our offerings.

Ultimately, this methodology provides you with a road map towards enhancing your governance solution and meeting your short and longer term objectives for better decision making and streamlined operations via Information Governance.

-Ernie

Business Glossary and Cognos — Integrated together…

Hi Everyone…

Just wanted to share a video I completed today that illustrates the integration of Cognos reporting with InfoSphere Business Glossary. It shows a user inside of Cognos using the right-click integration that Cognos provides to do a context-based search into Business Glossary, display a term, and then navigate further through the metadata to find details about a value and concept in the report.

This is very much like the Business Glossary Anywhere, except that it is a capability built directly into the Cognos Report Studio and Cognos Report Viewer. Enjoy!

Ernie

Creating Data File objects from inside of DataStage

A seldom-used object in Metadata Workbench is the “Data File”. It is less common because it has to be created manually. Database Tables are created whenever you use a Connector or other bridge to import relational tables from a database; Data Files, however, can only be created manually, using the istool workbench generate feature or from inside the DataStage/QualityStage Designer.

Why create Data Files?

A Data File is the object available in the Metadata Workbench that represents flat files, .csv files, or DataSets. It is able to connect to the Sequential Stage or Dataset Stage for data lineage purposes. A Data File object might also be used for pure governance reasons: a special transaction file might be defined by a particular Business Term, or you might want to assign a Steward (the subject matter expert on one particular file) to the Data File object. Of course, if you are a DataStage user, you probably use regular sequential Table Definitions all the time. Data Files are similar but more “fixed”: they are designed to represent a specific flat file, on a given machine, in a particular sub-directory, as opposed to being a general metadata mapping with proper column offsets for any file that matches the selected schema.

The simplest way to create a formal Data File is to start with a DataStage Table Definition. You may already have one that was created when you imported a sequential file, or you can easily create one using the “Save” button on any column list within most Stages. Once you have the Table Definition, double-click on it and review all of the tabs across the top. Pay special attention to the “Locator” tab; click on it and look at its detail properties. The values on the Locator tab control the creation of Data Files or Database Tables.

Set the pull-down option at the top to “Sequential”. If that value is not already in your pull-down list, type it in. Towards the bottom you will see an entry for the Data Collection; put in the name you want for your file. Then close the Table Definition.

Now put your cursor on that Table Definition in the “tree”. Right-click and select “Shared Table Creation Wizard”. When that dialog opens, click Next. Then open the pull-down dialog, select “create new”, and click Next. Notice the properties on this new page: you have the Filename, the Host (pick a machine or enter a new one), and the Path. Make the Filename the SAME as what you have hard-coded in your Sequential or Dataset Stage, or the filename of any fully expanded Job Parameter default values that you are passing into it. Then set the “Path” value to the fully qualified path of the expanded Job Parameters or of what you have in that same filename property. For example, if your filename in the Stage looks like this:

/tmp/myfile/#myfilename# …and #myfilename# has a default value of mySequentialFile.txt

Then use mySequentialFile.txt as the Filename and /tmp/myfile (without the final slash) for the Path. Now you will have a Data File inside of Metadata Workbench that you can govern with Steward and Term assignments, and it will also stitch to the Stages that use its name, whether hard-coded or via expanded Job Parameters, for design-time or operational lineage.
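If you have many of these to set up, it can help to work out the Filename and Path values ahead of time. Here is a small convenience sketch, plain Python and entirely outside of Information Server, that expands a job parameter default and splits the result the same way as the example above. The property string and parameter names are just examples.

    # Convenience sketch: expand a job parameter default and split the result
    # into the Path and Filename values used by the Shared Table Creation Wizard.
    # The stage property and parameter default are examples only.
    import posixpath

    stage_property = "/tmp/myfile/#myfilename#"
    parameter_defaults = {"myfilename": "mySequentialFile.txt"}

    expanded = stage_property
    for name, value in parameter_defaults.items():
        expanded = expanded.replace("#" + name + "#", value)

    path, filename = posixpath.split(expanded)
    print("Path:     " + path)       # /tmp/myfile  (no trailing slash)
    print("Filename: " + filename)   # mySequentialFile.txt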

Ernie

Always use an Input Link AND an Output Link with the XML Stage

Another way to say this is: “avoid using the XML Stage to perform i/o.” The XML Stage is capable of reading an xml document directly from disk (a feature of the Parser Step) and is also able to write a new document to disk in the Composer Step. However, while it may seem simpler to do that initially, it makes your Jobs and Stage designs less flexible and less reusable. You should have an Input link that feeds XML to the XML Stage when you are “reading” or parsing xml (and of course you will have output links that send the results downstream), and you should have an Output link that sends your completed XML document(s) downstream when you are writing XML (and of course you will have input links that feed in the source data).

Let’s see why.

When you are first learning the XML Stage, it seems convenient to just “put in the name of the xml document” and keep going. The Parser Step allows you to specify the filename directly (or it can be parameterized), and then you continue with the assignment of the Document Root. Similarly, when creating a new XML document, the Composer Step allows you to specify the actual document to be written to disk.

Then someone comes along and says “Our application is changing. The xml documents we currently read from disk will now be coming from MQ Series”, or maybe “from a relational table”, or “from Hadoop”. Well, you can’t just “change the Stage type at the end of the link” in that case. You have to “add” the link, and then make what could potentially be extensive changes to your Assembly. While that is not especially difficult once you are familiar with the Stage, if you have moved on to other projects, or have been promoted and are no longer supporting the Job, a less experienced DataStage developer will be challenged.

So, when using the Parser Step, use one of the options that describes your incoming content as either coming in directly as content (from a column in an upstream Stage) or as a set of filenames (the best use case when reading xml documents from disk, especially when you have a whole lot of them in a single sub-directory; see also Reading XML Content as a Source).

[Screenshot: XML Parser Step options]
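If you go the “set of filenames” route, the list itself can come from anywhere upstream. As a simple illustration (not a DataStage feature, just a helper script with example paths), here is one way to build that list for a directory full of xml documents so that a Sequential File stage or any other source can feed it to the Parser Step.

    # Illustration only: build a list of xml document paths that an upstream
    # stage can feed to the Parser Step's file-set option. Directory and
    # output file names are examples.
    import glob
    import os

    xml_dir = "/data/incoming/xml"
    with open("/tmp/xml_file_list.txt", "w") as out:
        for path in sorted(glob.glob(os.path.join(xml_dir, "*.xml"))):
            out.write(path + "\n")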

The same thing is true for writing XML. Send your xml content downstream: whether you write it to a sequential file, to DB2, to MQ Series, or to some other target, the logic and coding of your XML Stage remains the same! In the Composer Step, choose the “Pass as String” option, and then in the Output Step, map the “composer result” to a single large column (I like to call mine “xmlContent”) that has a longvarchar datatype and some arbitrary long length like 99999. While there may be times when this can’t be easily done, or when you need to use the option for long binary strings (Pass as Large Object), for many if not most use cases this will work great.

[Screenshot: XML Composer Step options]

Get in the habit of always using Input and Output Links with the XML Stage. Your future maintenance, changes, and adaptations will be cleaner, and you can take better advantage of features such as Shared Containers for your xml transformation logic.

Ernie

XML Stage: Establish Meaningful Link Names

…and then stick with them! Decide early what you want your Link names to be, before you even open up the Stage and begin your work on the Assembly, and then lock them in. Make a conscious decision not to change or alter them. Why? Unlike other Stages and Connectors on the DataStage canvas, the XML Stage is not immune to Link name changes.

How many of you are perfectly happy with DSLink2 and DSLink35 or other automatically generated Link names? I know I don’t spend time on every Job, running around putting on fancy Link names, especially when I’m first building it. It’s nice for documentation, and I know that I should always create meaningful names, but how many of us do?

And how often do we “go back” and edit the Link names later? That’s actually a good thing — for most Stages and Connectors. But for the XML Stage, it is something you want to avoid. Changing Link names will break your Assembly and require that you edit the stage and make changes.

Here is an example of the XML Stage reading xml documents from a sub-directory and performing validation. Valid xml will be sent down the “goodXML” Link, and rejected, invalid xml content will be sent down the “badXML” Link.

[Screenshot: Job design showing the goodXML and badXML link names]

Notice how, inside the Assembly, these link names are used. Here in the Assembly Parser step, you see the toXML linkname used for the specification of the xml Source:

[Screenshot: Parser Step referencing the toXML link name]

…and here, in the Assembly Output Step, you can see how the Link names are used in the Mapping:

[Screenshot: Output Step mapping using the link names]

Those screen shots illustrate how the link name becomes critical to the internals of the Assembly. If you change the link names outside the Stage, the Assembly will end up with errors (various red marks throughout the Assembly, depending on how complex it is):

[Screenshot: Assembly errors after a link name change]

Are you able to correct the Assembly when this happens? Of course, and for most scenarios it’s not difficult; you might just need to change a setting or re-map a couple of columns. But save yourself the trouble. Decide on your Link names, set them up early (preferably before you ever enter the Stage), and then don’t touch them!

—Ernie
