Monday, September 26, 2022

Reading my Journal

All,

Ever had a situation where data was being erroneously written, or simply disappeared, and you had no idea why?  For anyone with a journaled database on the IBM i this will be an all too familiar issue.  A commitment control boundary may undo your updates, or the data simply doesn't look right because it is being updated elsewhere in your call stack, far away from the code area you are presently maintaining.

When this happens to me, I usually like to review the entries in the journal and analyse the database I/O to get a feel for what is happening.

So how do I go about doing this.....

Firstly I would identify the journal that is being used.  This may change from time to time so it is best to check each and every time you perform these steps.  Do a WRKOBJ for the library and main file you are trying to track entries for.

WRKOBJ OBJ(DATALIB/TARGETFILE)

Take option 8=Display Description and page down three times, or do a DSPFD for the given file and search for the journal name in the output.
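
If you are on a release with the QSYS2 SQL services, the OBJECT_STATISTICS table function offers an alternative route to the same answer.  I believe the journal columns below are available on current releases, but do verify the column names on yours:

SELECT OBJNAME, JOURNAL_LIBRARY, JOURNAL_NAME
FROM TABLE(QSYS2.OBJECT_STATISTICS('DATALIB', '*FILE')) AS T
WHERE OBJNAME = 'TARGETFILE'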



Now you know the journal.  Perform the action(s) you wish to monitor and note the times (start and finish).

You can then extract this data from the journal using the DSPJRN command.  Below is an example of isolating the entries in a particular environment.  I have gone for all files in library DATALIB in this example, but there are plenty of filtering options.  I then chose to output the resultant data into a file so that I could query it more easily.

DSPJRN JRN(TB_JRN/JRN0103000) FILE((DATALIB/*ALL)) FROMTIME(280222 074200) TOTIME(280222 074300) OUTPUT(*OUTFILE) OUTFILE(LDARE/LD_JOURNAL)

The OUTFILE parameter is optional but will help if you want to perform some queries over a saved dataset.

Please note that the date and time parameters are particularly important, as the extract can take a while without them.  Just this small sample took a few seconds interactively.  If querying for more, you might want to consider submitting the task to batch.

The resultant data file is formatted as follows:-

[Screenshot: record layout of the DSPJRN outfile]
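
Once the outfile exists, plain SQL is the easiest way to slice it up.  Below is a minimal sketch, assuming the default *TYPE1 outfile layout (field names such as JOSEQN, JOCODE and JOENTT come from that format; run a DSPFFD over your own outfile to confirm):

SELECT JOSEQN, JOENTT, JODATE, JOTIME, JOJOB, JOUSER, JONBR, JOPGM
FROM LDARE.LD_JOURNAL
WHERE JOCODE = 'R'
ORDER BY JOSEQN

-- or summarise the activity by entry type
SELECT JOENTT, COUNT(*) AS ENTRIES
FROM LDARE.LD_JOURNAL
GROUP BY JOENTT
ORDER BY ENTRIES DESC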
The main item to look for is the 'Entry Type'.  These codes identify when data is inserted into, updated in or deleted from the database table, as well as whether the change was rolled back.

The table below lists the main ones that I refer to.  All of these are for Journal Code 'R' (operation on specific record).  For a full list of the items covered by journaling, see this link:

https://www.ibm.com/docs/en/i/7.2?topic=information-journal-code-descriptions

 

Entry Code   Description
BR           Before-image of record updated for rollback operation
DR           Record deleted for rollback operation
PT           Record added to a physical file member. If the file is set up to reuse deleted records, then you may receive either a PT or PX journal entry for the change
PX           Record added directly by RRN (relative record number) to a physical file member. If the file is set up to reuse deleted records, then you may receive either a PT or PX journal entry for the change
UB           Before-image of a record that is updated in the physical file member (this entry is present only if IMAGES(*BOTH) is specified on the STRJRNPF command)
UP           After-image of a record that is updated in the physical file member
UR           After-image of a record that is updated for rollback information

For a full list of journaling codes you can use this link within the IBM i documentation:

https://www.ibm.com/docs/en/i/7.2?topic=information-all-journal-entries-by-code-type
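
If you suspect a commitment control boundary is quietly undoing your updates, the rollback images are the smoking gun.  Again assuming the *TYPE1 field names, a query like this isolates them:

SELECT JOSEQN, JOENTT, JOLIB, JOOBJ, JOMBR, JOJOB, JOUSER, JOPGM
FROM LDARE.LD_JOURNAL
WHERE JOCODE = 'R' AND JOENTT IN ('BR', 'DR', 'UR')
ORDER BY JOSEQN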


Sunday, February 13, 2022

*MOVE and *MOVE ARRAY with 2E date and time fields

Every 2E model has functions that are no longer required, yet they persist and continually get reused.  A good example of this common issue is the continued usage of legacy date conversion routines that exist in most 2E data models.

How many of you have the following functions (or very similar) in your model?

[Screenshot: typical legacy date conversion functions in a 2E model]
The reason why is quite simple!

They were probably written long before 2E supported the date fields indicated.  After all, the DT8 (Date 8) and DT# (Date ISO) types were added relatively late in the tool's evolution.

I recall working at a company in London (pre 2000) that had solved the Y2K problem by transitioning their date fields from DTE to a user-defined field type CDT (Century Date).  This was basically a DT8, i.e. YYYYMMDD, but implemented way before 2E supported it.  I believe 2E was quite late to the party, implementing DT8 support around 1999.  Synon Inc started supporting the DT8 field specifically for people transitioning their models from DTE to DT8 and to help overcome any potential Y2K date rollover issues within their code.  There was even a model analysis tool that helped people identify these issues.

The problem is..... many sites had already solved their issues (as above), whilst others stuck with the DTE format and its limitations, perhaps only targeting certain fields like DOB.  I reckon there are dozens of sites around the world that continue to define dates as DTE out of habit!  I would be guilty of doing so for sure.

Anyhow, for those that have moved on and are defining DT8 or DT# or TS# (ISO Timestamp), I still believe that many programmers will be using self-coded legacy functions to perform date conversion.  I know I too have been guilty of this crime from time to time in the heat of coding.
 
The thinking goes something like this:

  1. Navigate to your system functions or date functions (scoping file DFN).
  2. Filter on Date or Convert (cnv) and pick the function that suits your needs.
  3. Test the function and everything works fine.
  4. Great, job done!

However, there is one small flaw with this: you didn't need to use the legacy date conversion functions at all.  Remember, these were likely written before 2E supported the source or target date format, or were written more recently by a developer who didn't realise that 2E already handles automatic date conversion between its (shipped) data types.
 
As long as you are moving data from a supported field type, and it contains 'date-like' data, 2E will automatically handle the conversion for you.

The table below (directly from the 2E online documentation) showcases all the automatic conversions that are handled by 2E, and below that is a table highlighting the limitations and rules.

[Screenshot: 2E automatic conversion matrix for date and time data types]

[Screenshot: 2E conversion limitations and rules]

* Conversions for the shipped D8# and the user-defined DT8 (8-digit internal representation) data types are identical.

What does this look like in the code?  Well, let's take a look at some generated code to find out.

The following 'mock up' function is trying to convert the DTE (*JOB DATE) to a DT# (ISO) and DT8 format.

[Screenshot: action diagram for the mock-up conversion function]
The source (RP4) for this is generated as follows:-

[Screenshot: generated RP4 source for the conversions]
The ISO conversion is a little more complicated and is generated into a subroutine so that it can be called for any date conversion from DTE to DT#.  The DTE to DT8 conversion is something I am sure you have done many times in code using *ADD etc.; these lines are generated inline rather than in a subroutine due to the small number of lines of code involved.

The subroutine code for ISO is below.

[Screenshot: generated RP4 subroutine for the DTE to DT# conversion]

A final point to note is that if you have a number field of 7.0 length masquerading as a date, you can move it into a DTE date field.  The same applies for an 8.0 field to DT8, which will be more common if interfacing with data derived from more modern databases that never had a history of supporting date formats like DTE or Julian.

You can then apply 2E date functions as usual.
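
As an aside, since the DTE internal format is CYYMMDD (7,0) and DT8 is YYYYMMDD (8,0), the conversion between the two numeric forms is plain arithmetic, which you can also do outside 2E.  A minimal SQL sketch (MYLIB, MYFILE and DTENUM are hypothetical names):

-- DTE-style CYYMMDD (7,0) to DT8-style YYYYMMDD (8,0)
SELECT DTENUM + 19000000 AS DT8NUM
FROM MYLIB.MYFILE

-- and on to a true date type if needed (TO_DATE is a synonym for TIMESTAMP_FORMAT on Db2 for i)
SELECT DATE(TO_DATE(CHAR(DTENUM + 19000000), 'YYYYMMDD')) AS ISO_DATE
FROM MYLIB.MYFILE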


Finally, we are not limited to dates; times and their myriad of formats are also interchangeable.

Thanks for reading. 
Lee.

Thursday, January 20, 2022

SQL is your friend

Today I had a minor task to implement.  The task was to initialise some new data in a file.

Typically, I would write a program as an EXCEXTFUN and do my processing via standard 2E code.  The problem with little data update programs is that unless you manage them well (i.e. remove them, label them so you can ignore them etc), they start to clutter the model.  On the other hand they are handy to have around in case someone needs to do something similar in the future.

However, unless you are a software house, you probably don't care too much that this 'product field' was initialised 8 years ago by xyz user.

That all said and done, my task was to insert a data record.  There were some fields that needed to be initialised with the business data, and there was also an AUDIT STAMP for the INSERT.  If you don't mind me saying, it was quite a convoluted one at that, with numerous fields.  This AUDIT STAMP required a User, Date, Time, Job Number and Job Name among other fields.  With 2E all of these are simple to MAP and are available to us via the JOB context.

My task was to implement the data without adding to the setup/fix program maintenance burden and on this occasion we have a few choices.

  1. Write a document explaining to the implementer how I want them to add a record, either by a YWRKF hack or perhaps a user screen.
  2. Give them a small data file and ask them to CPYF the records into the target file as *ADD.
  3. Write that 2E program I spoke of above and get them to call it.

Or, as I did in this instance.  

    4. Simply give them a line of SQL I wanted them to execute.

Sweet, job done.  But how do I initialise the 2E formatted date, or get the current system date and time from the machine?  And where do I get the JOB details that are so easily available within 2E code via the JOB context?

The answer is as follows: SQL has what it calls global variables and functions, and it just turns out that some of these are exactly what we need to update the fields in the AUDIT STAMP.
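
Before trusting these in an INSERT, you can echo them back with a simple SELECT against SYSIBM.SYSDUMMY1 (depending on your naming convention and SQL path you may need to qualify JOB_NAME as QSYS2.JOB_NAME):

SELECT USER, DEC(CURDATE()) - 19000000, DEC(CURTIME()), JOB_NAME
FROM SYSIBM.SYSDUMMY1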

In order to initialise my fields I just substitute my insert data with the following:-

User Profile = USER
Date = DEC(CURDATE())-19000000
Time = DEC(CURTIME())
Job Number = LEFT(JOB_NAME,6)
Job Name = SUBSTRING(JOB_NAME, LOCATE('/',JOB_NAME,8)+1,10)

You will note that to get the 2E DTE format we simply subtract 19000000.  For the standard TIME and USER fields we just use the variables without any manipulation.

For Job Number we need to extract the data from the JOB_NAME variable.  Usually this is formatted as 123456/LDARE/QPADEV0001, for example.  However, for people with longer names it could be 654321/PLONGNAME/EOD, and the job running the task may not be an interactive screen if it was submitted to batch, for example.

This means that to get the job name data from the JOB_NAME SQL global variable we need a little function that locates and substrings all the data after the 2nd '/'.  Given that the job number is always 6 characters long, I've just arbitrarily started from the 8th character.  I could have embedded more LOCATE calls to replace this hardcoded value, but as always, time was against me.
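
Putting it all together, the one-liner handed to the implementer looks something like this sketch (DATALIB, TARGETFILE and all of the column names are hypothetical; map them to whatever your model generates):

INSERT INTO DATALIB.TARGETFILE
       (PRDVAL, AUDUSR, AUDDTE, AUDTIM, AUDJOBNBR, AUDJOBNAM)
VALUES ('NEWVALUE',
        USER,
        DEC(CURDATE()) - 19000000,
        DEC(CURTIME()),
        LEFT(JOB_NAME, 6),
        SUBSTRING(JOB_NAME, LOCATE('/', JOB_NAME, 8) + 1, 10))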

Hopefully, in the future, if you need to update a table via SQL but were put off by not knowing how to get at some of these global variables, the above will stick in your mind.

Thanks for reading. 

Lee.