Wednesday, May 9, 2018

It's not depressing so get suppressing!



Would you do work you didn’t need to do?  NO would be the obvious answer, right?!  That said, there are probably thousands of 2E programs out there doing unnecessary database updates via a CHGOBJ.

Many of these programs have been working seamlessly for years and years; however, they are like a volcanic field.  Stable for years, centuries or even millennia, but then one day… BOOM!!!!

This isn’t because they are badly coded.  After all, we should only be updating one record at a time, and most CHGOBJs are probably inline (i.e. a screen or a single-instance business transaction).
Mass bulk updates are the lesser-spotted usage for a CHGOBJ, but they do happen!

Recently we had an issue where a long-standing bulk update processing program (EOD) went from executing in under 1 minute (so it didn’t get much love in the maintenance department) to over 30 minutes (overnight).

Upon first inspection, the program hadn’t changed, which pointed to an environmental cause.  The dataset and data volume hadn’t significantly changed either, the subsystem configs hadn’t changed, and there were no system resourcing issues or spikes.

The simple reason for the increase was that a new (2E) trigger had been added to the file being updated.  This trigger contained a little business logic that was required processing, and there was limited tuning to be done in the trigger program itself.

However, I did notice that the data was summary/statistical in style (reporting categories for data like current balance etc.).  It was being recalculated each night and updated on an account-by-account basis.

On closer inspection of the rules around the categorisation it was obvious that the vast majority of accounts stayed in their categories for years and years, only switching after major events in an account's lifecycle.  This meant that the process was effectively calculating the same values each night and then updating the fields in the file with those same values.  Every one of those updates NOW fired additional functionality via the real-time trigger.
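Purely as a hedged illustration of the shape of the problem (the names, the categorisation rule and the trigger hook are all made up for this sketch; it is not the actual EOD program), the nightly job was effectively doing something like this:

def nightly_eod(accounts, fire_trigger):
    # Recalculate the reporting category for every account and rewrite the
    # record unconditionally; each write now also fires the real-time trigger.
    for account in accounts:
        new_category = "PREMIUM" if account["balance"] >= 10000 else "STANDARD"
        account["category"] = new_category   # unconditional CHGOBJ-style update
        fire_trigger(account)                # trigger cost paid even when nothing changed

trigger_calls = []
accounts = [{"id": 1, "balance": 500.0, "category": "STANDARD"},
            {"id": 2, "balance": 25000.0, "category": "PREMIUM"}]
nightly_eod(accounts, trigger_calls.append)
print(len(trigger_calls))   # 2 - the trigger fired for both accounts despite no real change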

Option 1.

It was quite obvious by now that we needed to stop the unnecessary execution of the trigger.  We didn’t have the option of removing the triggers and re-adding them after the process.  The simplest method was to not perform the database update in the first place.  This can be done by comparing the newly calculated values with those on the database record and NOT calling the CHGOBJ when they match.

This method works: it is relatively easy for a developer reading the action diagram to ascertain what is happening, and on the surface it seems like a good option.  I have seen this done in many functions.

However, the developer must (potentially) do an extra read to compare against the database, that data may itself be stale (retrieved much earlier in the cycle), and the comparison has to be repeated everywhere the CHGOBJ is used.
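To make the shape of Option 1 concrete, here is a minimal Python sketch of the pattern (the function and field names are mine and a plain dictionary stands in for the file; it is not 2E action-diagram or generated RPG code):

def update_if_changed(db, account_id, new_values):
    current = db[account_id]                      # the extra read, purely to compare
    if all(current.get(k) == v for k, v in new_values.items()):
        return False                              # identical values: skip the CHGOBJ
    current.update(new_values)                    # a genuine change: perform the update
    return True                                   # (and any trigger now fires)

db = {42: {"category": "STANDARD", "balance": 100.0}}
print(update_if_changed(db, 42, {"category": "STANDARD", "balance": 100.0}))  # False
print(update_if_changed(db, 42, {"category": "PREMIUM", "balance": 100.0}))   # True

Note that the comparison (and the extra read) lives in the caller, which is exactly why it has to be repeated everywhere the CHGOBJ is used.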

Option 2.

Code could be added inside the CHGOBJ to exit if DB1 and PAR are the same.  I’ve seen this approach too.  It is a bit cleaner, but for any function created since release 6.1 of 2E (last century) it is also the wrong approach.
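As a hedged sketch only, here is the same idea with the comparison moved inside the update routine itself, roughly the equivalent of hand-coding an early exit in the CHGOBJ when the incoming PAR values match DB1 (again, illustrative Python, not what 2E generates):

def chgobj_like_update(db, key, new_values):
    current_row = db[key]              # "DB1": the record as it currently exists on file
    if current_row == new_values:      # incoming "PAR" values match DB1: exit early
        return False
    db[key] = dict(new_values)         # a genuine change: perform the update
    return True

db = {42: {"category": "STANDARD", "balance": 100.0}}
print(chgobj_like_update(db, 42, {"category": "STANDARD", "balance": 100.0}))  # False
print(chgobj_like_update(db, 42, {"category": "PREMIUM", "balance": 100.0}))   # True

Callers no longer need to compare anything, but for anything created since release 6.1 the built-in function option described next is the better route.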

Option 3.

The correct approach in this instance is to switch on a function option on the CHGOBJ and utilise the built-in Null Update Suppression code.  (See the highlighted options below.)





The options are quite simple.
M - use the model value (the default).  In this model that value is ‘N’, so no suppression will occur.
Y - DB1 will be checked against PAR twice: once upon the initial read of the data from the file, and again once the record lock is in place and the data is about to be written.
A - (After read) only performs the first of those two checks.
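To illustrate the difference between ‘Y’ and ‘A’, here is a conceptual Python sketch (a dict and a threading.Lock stand in for the file and the record lock, the names are mine, and this is not the RPG that 2E actually generates):

import threading

def suppressed_update(db, lock, key, new_values, option):
    # First comparison: straight after the initial read of the record.
    if db[key] == new_values:
        return False                          # both 'Y' and 'A' suppress here
    with lock:                                # record lock acquired for the update
        if option == "Y" and db[key] == new_values:
            return False                      # 'Y' re-checks once the lock is held,
                                              # in case another job already wrote
                                              # these same values after the first read
        db[key] = dict(new_values)            # the write proceeds (and triggers fire)
        return True

db = {42: {"category": "STANDARD"}}
lock = threading.Lock()
print(suppressed_update(db, lock, 42, {"category": "STANDARD"}, "Y"))  # False - suppressed
print(suppressed_update(db, lock, 42, {"category": "PREMIUM"}, "A"))   # True - real change written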

The generated code

The diagram below gives a visual of the code that is generated for each of the options.



NULL Update Suppression works regardless of how you define your CHGOBJs.


Benefits of the suppress option for a CHGOBJ:
  1. Record-level audit stamps won’t get overwritten by unnecessary updates
  2. Performance: the redundant update is never performed
  3. Triggers won’t get fired needlessly
  4. Encapsulated: the check sits inside the CHGOBJ rather than in every caller


When to use?


  Thanks for reading. 
Lee.

Wednesday, March 28, 2018

Look back... The good old days

Some very fond memories involving the early adoption of Plex and some great friends and colleagues at one of the UK's Plex evangelists.



Thanks for reading.
Lee.

Thursday, March 15, 2018

Trigger Performance

I have spent a month or two going back and forth with CA support about the poor performance overhead of YTRIGGER vs a TRGFUN being declared on the physical file.  You should know that I provided lots of feedback and upfront documentation of the issue, with detailed analysis of exactly where it manifests itself.

CA came back with this:

"I have discussed this issue with SE and they confirm that there is nothing further they can currently do to improve Trigger performance beyond that which has already been achieved. 

Further performance improvements would require a redesign of processes within CA 2E and are therefore a product enhancement. So the next step would be if you could raise this enhancement request to Trigger processing on the CA 2E online community. 

This will allow other CA 2E users to comment and gives Product Management customer feedback on the requirement. "

So, reading between the lines, I concluded as follows:

Lee, we are happy that the code is working as designed, and we are not really sorry that our design means your process runs 700% slower using YTRIGGER than if you declared the 2E trigger individually.

Since you said in your response that you'd implement your own workaround if the performance of YTRIGGER cannot be improved (or rather, won't be improved by us), we decided to make you jump through more hoops and ask you to spend more time posting it on a 2E forum (the enhancements page that hardly anyone reads) in the hope that people will +1 it and give it some priority, even though we already have all the information we need to raise it ourselves.

This gives us several years to ignore this improvement and we can continue to charge you $30k per annum for mediocre support and channel this profitability into other CA products that you do not use.



Thanks for nothing CA.
I'll implement the workaround.
Lee.