Wednesday, May 9, 2018

It's not depressing so get suppressing!



Would you do work you didn’t need to do?  NO would be the obvious answer, right?!  That said, there are probably thousands of 2E programs out there doing unnecessary updates to a database using a CHGOBJ.

Many of these programs have been working seamlessly for years and years.  However, they are like a volcanic field: stable for years, centuries or even millennia, but then one day……BOOM!!!!

This isn’t because they are really badly coded.  After all, we should only be updating one record at a time.  Most CHGOBJs are probably inline (i.e. a screen or a single-instance business transaction), with mass bulk updates being the lesser-spotted usage for a CHGOBJ.  But it does happen!

Recently we had an issue where a long-standing bulk update processing program (EOD) went from executing in under a minute (so it didn’t get too much love in the maintenance department) to taking 30+ minutes (overnight).

Upon first inspection, the program hadn’t changed, which points to an environmental cause.  The dataset and data volume hadn’t significantly changed either, the subsystem configurations hadn’t changed, and there were no system resourcing issues or spikes.

The simple reason for the increase was that a new (2E) trigger had been added to the file being updated.  This trigger contained a little business logic, and that processing was required, so there was limited tuning to be done in the trigger program itself.

However, I did notice that the data was summary/statistical in style (reporting categories for data such as current balance).  This was being recalculated each night and updated on an account-by-account basis.

On closer inspection of the rules around the categorisation, it was obvious that the vast majority of accounts stayed in their categories for years and years, and only major events in an account’s lifecycle made them switch.  This meant that the process was effectively calculating the same values each night and then updating the fields in the file with those same values every night.  And that update NOW fired additional functionality via a real-time trigger.

Option 1.

It was quite obvious by now that we needed to stop the execution of the trigger.  We didn’t have the option of removing and re-adding the triggers around the process.  The simplest method was to not perform the database update in the first place.  This can be done by simply comparing the newly calculated values with those on the database record and NOT calling the CHGOBJ if they match.
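To make the pattern concrete, here is a rough free-format RPGLE sketch of the idea.  It is hand-written with hypothetical names (UPDACC stands in for the CHGOBJ) and is not actual 2E-generated source:

**free
// Sketch only: hypothetical names, not 2E-generated source.
dcl-pr UpdAccount extpgm('UPDACC');     // stands in for the CHGOBJ
  acctNo char(10) const;
  newBal packed(15:2) const;
  newCat char(3) const;
end-pr;

dcl-s acctNo char(10);
dcl-s dbBal  packed(15:2);              // values currently on the record
dcl-s dbCat  char(3);
dcl-s newBal packed(15:2);              // freshly recalculated values
dcl-s newCat char(3);

// Only call the CHGOBJ when something actually changed; otherwise
// skip it entirely - no update, no lock, no trigger.
if (newBal <> dbBal) or (newCat <> dbCat);
  UpdAccount(acctNo : newBal : newCat);
endif;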

This method works, it is relatively easy for a developer to read the action diagram and ascertain what is happening, and on the surface it seems like a good option.  I have seen this done in many functions.

However, the developer must (potentially) do a read to compare against the database, that data may itself be stale (retrieved much earlier in the cycle), and the developer needs to repeat the check everywhere the CHGOBJ is used.

Option 2.

Code could be added inside the CHGOBJ to exit if DB1 and PAR are the same.  I’ve seen this approach too.  This is a bit cleaner, but for any function created since release 6.1 of 2E (last century) it is also the incorrect approach.
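For clarity, this is the pattern being described, again as a hand-written sketch with hypothetical names rather than generated source:

**free
// Sketch of option 2: an early exit inside the CHGOBJ itself.
dcl-s db1Bal packed(15:2);   // DB1 context: value already on the record
dcl-s parBal packed(15:2);   // PAR context: value passed in

if db1Bal = parBal;
  return;                    // nothing changed: quit before the UPDATE
endif;
// ...otherwise fall through to the normal update processing...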

Option 3.

The correct approach in this instance is to switch on a function option on the CHGOBJ and utilise the built-in suppression code relating to Null Update Suppression (see the highlighted options below).





The options are quite simple.
  • M - use the model default value.  In this model the default is ‘N’, so NO suppression will occur.
  • Y - DB1 will be checked against PAR twice: once upon the initial read of the data from the file, and again once the record lock is in place and the data is about to be written.
  • A - (After read) performs only the first of those two checks.

The generated code

The diagram below gives a visual of the code that is generated for each of the options.
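In case the diagram doesn’t render, the ‘Y’ option broadly boils down to the following.  This is a hand-written approximation with hypothetical field names; the real generated source differs in detail:

**free
// Approximation of the null update suppression checks.
dcl-s db1Bal packed(15:2);   // DB1 context: value on the record
dcl-s db1Cat char(3);
dcl-s parBal packed(15:2);   // PAR context: value passed in
dcl-s parCat char(3);

// Check 1 (options 'Y' and 'A'): straight after the initial read.
if parBal = db1Bal and parCat = db1Cat;
  return;                    // identical: suppress the update entirely
endif;

// Check 2 (option 'Y' only): after re-reading the record with a lock,
// compare again in case another job changed it in the meantime.
// ...acquire the record lock and refresh the DB1 values here...
if parBal = db1Bal and parCat = db1Cat;
  return;                    // still identical: release, no update
endif;
// ...otherwise perform the UPDATE as normal...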



NULL Update Suppression works regardless of how you define your CHGOBJs.


Benefits of the suppress option for CHGOBJ:
  1. Record-level audit stamps won’t get corrupted by unnecessary updates
  2. Performance: the physical update (and its associated I/O) is avoided
  3. Triggers won’t get fired
  4. Encapsulated: the check lives inside the CHGOBJ, so every caller benefits automatically


When to use?


  Thanks for reading. 
Lee.

Wednesday, March 28, 2018

Look back... The good old days

Some very fond memories involving the early adoption of Plex, and some great friends and colleagues at one of the UK's Plex evangelist companies.



Thanks for reading.
Lee.

Thursday, March 15, 2018

Trigger Performance

After a month or two of back and forth with CA support about the poor performance of YTRIGGER versus a TRGFUN declared directly on the physical file, here is where things landed.  You should know that I provided lots of feedback and upfront documentation of the issue, with detailed analysis of exactly where it manifests itself.

CA came back with this.

"I have discussed this issue with SE and they confirm that there is nothing further they can currently do to improve Trigger performance beyond that which has already been achieved. 

Further performance improvements would require a redesign of processes within CA 2E and are therefore a product enhancement. So the next step would be if you could raise this enhancement request to Trigger processing on the CA 2E online community. 

This will allow other CA 2E users to comment and gives Product Management customer feedback on the requirement. "

So, reading between the lines, I concluded as follows:

Lee, we are happy that the code is working as designed, and we are not really sorry that our design means your process runs 700% slower using YTRIGGER than if you declared the 2E trigger individually. 

Since you said in your response that you’d implement your own workaround if the performance of YTRIGGER cannot be improved (or rather, if we won’t improve it), we have decided to make you jump through more hoops and ask that you spend more time putting it on a 2E forum (the enhancements page, which hardly anyone reads) in the hope that people will +1 it and give it some priority.  Even though we already have all the information we need to do so ourselves.

This gives us several years to ignore this improvement and we can continue to charge you $30k per annum for mediocre support and channel this profitability into other CA products that you do not use.



Thanks for nothing CA.
I'll implement the workaround.
Lee.

Thursday, July 13, 2017

2E Code Review - pet hates - part II


Back again...

My intention was that the first post would cover my major pet hates.  However, being the perfectionist that I am when it comes to coding, I quickly realised that I see many more coding mistakes/habits/bad practices that irritate me.

So here are another 3 things (there are still others on the list) that "get on my goat".  I laugh inwardly because I can picture the face of each and every developer I associate with some of these bad habits.  I also frustratingly recall the countless times I've explained why 'xyz' is bad without achieving a behavioural change.  Stubbornness is a very special human trait.

I go back to a previous post, way way back in 2008/2009 in fact, where I stated that without a declared mandate/role to reject bad code, things never change; performing code reviews by democracy simply doesn't work.  This is why I am such a fan of 'automated development standards' review technology, as it takes away the conflict.

Back to the list of bad habits that are consuming me today :-)

4. Externalising a single function to get over file limits.

Ever hit the RPG 50-file limit?  Often you will see people externalising a single RTVOBJ (for example) in order to get back to 50 files or less.  This is the most short-sighted piece of coding I ever see and will only damage your code base.  In this instance RPGLE wasn't an option, but as you can tell, the restructuring option wasn't taken either.

Here is a rather extreme example of a (now deprecated) function from my model at work.  I have altered some of the images for privacy reasons.









Associated hates: Not only do the multiple external functions increase the call stack, reduce performance and introduce errors as people forget about passing the *Return Code back (a common issue when externalising), it is only ever a short-term option as applications expand significantly over time.

There are numerous other options, which I have covered before, like RPG to RPGLE conversion, externalising a major block of associated logic (i.e. Load, Validate or Update processing), or full function refactoring.

Tip: My preference is to aim for a mixture of RPGLE migration and some refactoring for better function isolation.  This depends on how much user source you have, etc.


5. BOTH parameters that are initialised from within.

Often a developer will need to total up some values across multiple records.  In 2E we would typically use a RTVOBJ to work through a set of records and increment a value.

A quick and dirty way to do this is to declare the value that you want to sum up as a BOTH parameter.  Then, within the AD (Action Diagram), the code to increment the value is PAR = PAR + DB1, and an average developer would argue that the passing of the value to the output variable via the PAR context is automatic.

Their logic is sound if you are trying to keep your action diagram code to a minimum.  Look at the two examples below.

The lazy way
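(In case the screenshot doesn’t come through, here is an RPGLE-flavoured approximation; the real thing is 2E action diagram code inside a RTVOBJ, and the names are mine.)

**free
ctl-opt dftactgrp(*no);
// The lazy way: accumulate straight into the by-reference (BOTH)
// parameter, which is never reset inside the function.
dcl-s amounts packed(15:2) dim(3) inz;   // stands in for the records read

dcl-proc SumLazy;
  dcl-pi *n;
    total packed(15:2);                  // the BOTH parameter
  end-pi;
  dcl-s i int(10);
  // Whatever value the caller passed in is still sitting in 'total',
  // so it is silently included in the sum.
  for i = 1 to %elem(amounts);
    total += amounts(i);                 // PAR = PAR + DB1
  endfor;
end-proc;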





The correct way
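(Again as an RPGLE-flavoured approximation with my own names:)

**free
ctl-opt dftactgrp(*no);
// The correct way: accumulate into a local (WRK context) field that
// is initialised to zero, then move it to the OUTPUT parameter.
dcl-s amounts packed(15:2) dim(3) inz;   // stands in for the records read

dcl-proc SumCorrect;
  dcl-pi *n;
    total packed(15:2);                  // OUTPUT only
  end-pi;
  dcl-s wrkTotal packed(15:2) inz(0);
  dcl-s i int(10);
  for i = 1 to %elem(amounts);
    wrkTotal += amounts(i);              // WRK = WRK + DB1
  endfor;
  total = wrkTotal;                      // caller's input can never leak in
end-proc;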







On the surface it looks like the lazy way is okay.  It ticks the boxes for:

  • Clear function naming
  • Minimal AD code (x lines vs y lines)

Then it fails miserably. 

Interface




The BOTH method’s parameter interface is not as easy to decipher as the correct method’s.  You wouldn’t know from the function interface that the field(s) being totalled are initialised back to zero from within.


What should happen if a value is passed in?  You would expect that the value passed in, say 100.00, would be included in the total.  Herein lies the confusion: the function name and interface should clearly depict what is going on inside. 

6. Leaving the MAP on.

MAP parameters are designed to take a value passed into a function (via its parameters) and place it on a device design, automatically populating the data, i.e. they MAP the parameter value to the screen or report.  There is NO other use for the MAP role within 2E.


The bad habit is leaving the MAP role switched on for internal functions like a CRTOBJ or RTVOBJ. 


Whilst it doesn't impact the code, it is a sign that the developer doesn't truly understand the role of MAP, and if they do, then by leaving it on they are implying that it is satisfactory to be tardy.  To me it is like not dotting an i or crossing a t.

I am going to give the developer a little out here.  CA could make it easier by NOT setting MAP by default, making it an 'opt in' option rather than something we have to constantly switch off.


Thanks for reading.
Lee.

Thursday, February 9, 2017

2E Code Review - pet hates - part I


Today I thought I'd get a few things off my chest. 

Whenever I review code, look at old code or have the job of maintaining code (not my favourite part of the job by a long shot), I am often left dumbfounded.  I see the same old mistakes being made regardless of where I have worked.  You will often hear me saying, "Whoever wrote this should be shot!!".

The most annoying part is that in many places the developers agree about the best way to code but quickly reach for the excuse of "I don't have enough time" or "Well, it's done now, so don't worry" to avoid coding properly.  My experience tells me it is often the most experienced developer(s) who are the hardest to convince of solid development standards, with the terms "old dog" and "new tricks" preeminent in my mind.

IMHO the difference between a run-of-the-mill programmer (I've worked with plenty of these) and a good one (fewer, but they are out there) is adherence to the little details that make long-term maintainability of your code as easy as it should be.  This is especially true considering the inevitable change and enhancement that is required during an application's growth and evolution.  I have espoused the standard development quote that 90% of an application's life cycle is maintenance numerous times before.

Anyhow, my post today is a list of pet hates that I see when people are "unprofessional", "selfish", "crude" or "lazy" with their 2E coding.

1. Legacy commented out code.

I recently worked on some maintenance and had to work through some action diagram code.  Using the Find Services option within 2E, I quickly got frustrated that the majority of the usages were commented out.

What was worse still is that there were developer comments (from 2002) stating that the code was commented out, and I know that this area has been enhanced many times since.  There is simply no reason to keep such old code.  The developer concerned is normally quite good but has a couple of really bad habits like this one.

Here is a snippet of code from an old function (screen print kindly shown with permission).  Note, I have redacted any client or developer or brand comments (hence some empty white space).

 

I am all for commenting out code as part of debugging, rapid prototyping, unit testing and quick wins (hot fixes) where you are not quite sure of the change you are making, but please annotate when and why.  In this case, however, I am aware that some of this code was moved to its own program(s), so IMHO it should just have been removed.

Associated hates:
  • Having to weed through the chaff to get to the code to change.
  • Any old code will not have been maintained, so it will often not be fit for purpose if you decide to uncomment it; so why leave it there?
  • Extra impact analysis (especially for internal functions). 
  • Confusion with the impact analysis usage.  A future post is planned for this.
  • It is better to take a version instead.
  • Commenting out code doesn't change the timestamp for the AD line, so we don't know when it was commented out.
Tip: Comment out code sparingly, and preferably not at all.  Be confident in your code and solution.  After you have completed the work (and tested it), if you are 100% happy with the results, revisit the action diagram and remove the commented-out code.  Commented-out code is a maintenance burden and others will not understand why it is there.

Here is the same code with the bad commented out code and legacy comments removed.

Still not perfectly structured, but a whole lot more readable.  Perhaps CA could add a hide/show option for commented-out code in the action diagrammer.

2. "GET All fields" RTVOBJ not doing as described.

Every 2E developer has written the standard RTVOBJ that returns the full record.  Yes, we have all forgotten the *MOVE ALL, but my biggest annoyance is when people don't revisit these functions when the underlying file structure changes.

Imagine the "*GET", "GET Record", "Get All" or "RTV All Fields" function without the new fields as parameters.


The next developer comes along and, lo and behold, there are a few fields missing.  Then they create a new one called Get All (New) and probably flag the other one with some kind of encoding system, like DNU for Do Not Use.

Associated hates:
  • Numerous versions of the same type of function.  Too many choices.
  • Description vs reality can be misleading and frustrating.

Tip: If you have to change the file, you may as well change the "Full Record" type function at the same time, as you have to regenerate everything anyhow.

3. Using all 9 parameter definition lines for functions parameters.

In the old days, when we only had *FIELD, access path(s) or structure files for defining parameters, it could sometimes get a little busy and we'd fill the 9 lines, leaving limited options for the next developer.  We shouldn't create structure files just for the sake of it; after all, their primary purpose was consistency of repeating data groups like audit stamps, not easier passing of parameters.

Since version 4.0 (over 20 years ago) we've been able to define parameter arrays.  There really is no excuse now for taking up the last line of a function's parameter block, unless "I don't have time" or "I'll do it when we get to 10" is a valid reason.

Tip: When maintaining a function where, say, 6 or more parameter block lines are used, consider refactoring with a parameter array.

I still think that using parameter arrays is a bastardised solution to this problem and that 2E should just have had more than 9 lines... but it is the lesser of two evils.

This is the tip of the iceberg.  Plenty more to follow I am sure.

Thanks for reading.
Lee.

Tuesday, January 24, 2017

Don’t forget the *return code



When we are developing in 2E we often reach the 50-file limit if we are generating for RPG.  RPG ILE (RP4) is more resilient as it allows more files to be opened.  I’ve preached before regarding proper function construction. 

Do you really need 50 or more files open for any given process? 
Can your function be better constructed or, to be accurate… deconstructed?


What are the options?

1. The easiest method to get over these limits is to change to RP4.
2. The best is probably to re-architect your function properly and switch to RP4.
3. The next best approach is probably to hive off a chunk of the processing into a new function to reduce the open files.
4. The worst thing to do is to externalise a single RTV in order to get under the limit, as this can cause all sorts of issues:
  • The 50-file limit will (most likely) be breached again on the next maintenance, leaving the potential for a string of EXT/RTV-type functions.
  • If called in a loop or a RTVOBJ, an external (if not set to Close Down ‘No’) may impact performance.

Another item to add to the list of don’ts (above) is the preservation of the *Return Code.  An Execute External Function does not automatically return a *Return Code; by default, standard processing is to pass back *Normal, i.e. an empty *Return Code.

If you are relying on a *Return Code from your now-externalised RTVOBJ, it will always come back as empty (*Normal)


unless you explicitly pass the return code back with the *EXIT PROGRAM built-in function.
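To make the behaviour concrete, here is a free-format RPGLE sketch of the idea.  The names and status values are illustrative only; 2E's *Return Code uses its own internal condition values:

**free
// Sketch: an externalised retrieval that reports back through its
// *Return Code parameter instead of silently returning *Normal.
dcl-pi *n;
  acctNo  char(10) const;
  retCode char(7);                 // the *Return Code parameter
end-pi;

dcl-s found ind inz(*off);

// ...chain to the record and set 'found' accordingly...

if found;
  retCode = '*NORMAL';             // illustrative values only
else;
  retCode = '*RECNF';              // e.g. "record does not exist"
endif;
return;                            // the 2E equivalent is *EXIT PROGRAM
                                   // with the *Return Code passed back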

 
Another trap for young players, but something that catches even seasoned developers out.  Had the RP4, re-architect or block-of-code-to-EEF option been taken, the issue would likely never have manifested.  

Good programming practice is to always be conscious of the *Return Code and, more importantly (where required), to test for it.
 
Thanks for reading.
Lee.

Tuesday, January 17, 2017

Condition of the timestamp field type

 Hi,

Recently we had a small issue in our office with a program not providing the expected results when a case statement was created to check whether a timestamp had been entered or not.

Here is a sample program that I mocked up in another model to demonstrate the point.

The example below is a simple EEF (Execute External Function) that compares a couple of timestamp fields.  The first, LCL.Lee's Timestamp, is initialised with a value of JOB.*System timestamp.  The second field, LCL.Lee's Timestamp 2, is left uninitialised, so it will be NULL, which in Synon/2E terms means '0001-01-01-00.00.00.000000'.
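In intent, the test amounts to this hand-rolled free-format RPGLE equivalent (field names are mine, and this is not the 2E-generated code):

**free
dcl-s leeTs1 timestamp inz(*sys);  // initialised from the system clock
dcl-s leeTs2 timestamp;            // untouched: 0001-01-01-00.00.00.000000

if leeTs1 <> z'0001-01-01-00.00.00.000000';
  dsply 'Timestamp is entered';
endif;
if leeTs2 = z'0001-01-01-00.00.00.000000';
  dsply 'Timestamp not entered';
endif;

Written against proper 26-long timestamp values like this, the comparison behaves as you would expect.  The generated code, as we will see, does not.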


As one field is initialised and the other isn't, you would expect the code to generate two messages in the joblog: 'Timestamp is entered' and 'Timestamp not entered'.

However, as you can see below, the messages are not as expected.



Let us take a little look at the potential reasons why this has happened.  First of all, let's take a look at the generated code (RPG) for the comparison conditions.



As you can see, these are pretty standard apart from the fact that they are comparing a CONSTANT value rather than a field, or a hardcoded value like 'A', which you often see for status fields.


In our case we created a condition called 'Entered' for the timestamp field.  We left its value blank (upon creation) and 2E conveniently defaulted it to the NULL value '0001-01-01-00.00.00.00000'.

Now, we know that 2E generates an array for the constants used by a program and references them in the code.  The array is 25 long (see above) and the value(s) have been correctly placed in the program source as below.


The condition value is stored as a reference via file YCNDDTARFP.  You will see that this condition references another surrogate within 2E.


The clue is given in the source code snippet above, where it refers to long constants.  Taking a look at the file YCONDTARFP for 1003038, we see the value that is used by the source.



You can also see that this is 25 long in the file, as per the array declaration above.


Herein lies the issue…  The code is looking to compare the timestamp value against a constant value, but the timestamp field is 26 long, with 6 decimal places of precision for the fractional-second (microsecond) element.  If we place the program in debug and interrogate the value in LCL.Lee's Timestamp 2, we see our field as below.



But this is being compared to the value below.  These two are NOT the same…
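You can reproduce the mismatch in isolation.  In effect, the generated code is comparing a 26-long value against a 25-long constant (a sketch with my own field names):

**free
// The essence of the bug: the 26-long timestamp representation can
// never equal the 25-long condition constant.
dcl-s tsField   char(26) inz('0001-01-01-00.00.00.000000');
dcl-s condConst char(25) inz('0001-01-01-00.00.00.00000');

// The shorter operand is blank-padded to 26 for the comparison, so
// the final characters are '0' vs ' ' and the test is never true.
if tsField = condConst;
  dsply 'Timestamp not entered';   // never reached
else;
  dsply 'Values do NOT match';     // this is what actually happens
endif;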


Now that we understand the problem, we need to consider how to fix it.  Fortunately there are several workarounds…

  1. Move the value (LCL.Lee's Timestamp 2) into a shorter field, 25 long (truncating the final 0).  We can then compare a 25-long field with a 25-long constant.  As we are really only ever interested in the EQ, NE, GT or LT comparison operators, this will be okay.
  2. We can modify the source (highlighted above) to set the constant array to 26 and add a 0 to the CN value.  But we use a code generator, so this is only recommended if you use source modifier programs and the pre-processor.
  3. Our chosen option was to NOT compare against a condition for 'Entered' or 'Not Entered' and instead compare against an empty field.  We created a NULL Timestamp field and referred to its LCL context for the comparison (LCL so that it is always initialised within its function bounds and not compromised like the WRK context).
The code below is a simple example of how to implement option 3.
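(In case the screenshot doesn't come through, the shape of it in a hand-written RPGLE sketch, with my own field names, is:)

**free
// Option 3: compare against an empty timestamp field rather than a
// 25-long condition constant. Both sides are true 26-long timestamps.
dcl-s leeTs2 timestamp;            // the field being tested
dcl-s nullTs timestamp;            // the NULL Timestamp field: defaults
                                   // to 0001-01-01-00.00.00.000000
if leeTs2 = nullTs;
  dsply 'Timestamp not entered';
else;
  dsply 'Timestamp is entered';
endif;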



 The results are now as expected.


I've raised the issue with CA and expect a response soon.  Perhaps it is fixed, or on a list.  But considering it will mean a generator change and a file change for YCONDTARFP the workarounds above may be a better option so I hope that you all find this useful.

Thanks for reading.
Lee.

EDIT - I have heard back from CA and the workaround is the currently recommended method.  I agree with them that the scope and size of the change is quite high.  Pick whichever of the three options above works for your shop.  Happy to hear if there are other options.  Lee. 23/01/2017.