Showing posts with label CHGOBJ. Show all posts

Wednesday, May 9, 2018

It's not depressing so get suppressing!



Would you do work you didn't need to do?  NO would be the obvious answer, right?  That said, there are probably thousands of 2E programs out there doing unnecessary updates to a database via a CHGOBJ.

Many of these programs have been working seamlessly for years and years, however, they are like a volcanic field: stable for years, centuries or even millennia, but then one day... BOOM!

This isn't because they are badly coded.  After all, we should only be updating one record at a time.  Most CHGOBJ's are probably inline (i.e. a screen or a single-instance business transaction), with mass bulk updates being the lesser-spotted usage for a CHGOBJ.  But it does happen!

Recently we had an issue where a long-standing bulk update program (EOD) went from executing in under a minute (so it didn't get much love from the maintenance department) to 30+ minutes overnight.

Upon first inspection, the program hadn't changed, which pointed to an environmental cause.  The dataset and data volume hadn't significantly changed either, the subsystem configurations hadn't changed, and there were no system resourcing issues or spikes.

The simple reason for the increase was that a new (2E) trigger had been added to the file being updated.  This trigger contained a little business logic, and that processing was required; there was limited tuning to be done in the trigger program itself.

However, I did notice that the data was summary/statistical in style (reporting categories for data such as current balance).  It was being recalculated each night and updated on an account-by-account basis.

On closer inspection of the rules around the categorisation, it was obvious that the vast majority of accounts stayed in their categories for years and years, and only major events in an account's lifecycle caused them to switch.  This meant that the activity was effectively calculating the same values each night and then updating the fields in the file with those same values, which NOW fired additional functionality via a real-time trigger.

Option 1.

It was quite obvious by now that we needed to stop the execution of the trigger.  We didn't have the option of removing the triggers and re-adding them after the process.  The simplest method was to not perform the database update in the first place.  This can be done by simply comparing the newly calculated values with those on the database record and NOT calling the CHGOBJ when they match.

This method works and is relatively easy for a developer to read the action diagram and ascertain what is happening and on the surface seems like a good option.  I have seen this done in many functions.

However, the developer must (potentially) do a read to compare against the database, and that data may itself be stale (retrieved much earlier in the cycle).  The developer also needs to repeat this check everywhere the CHGOBJ is used.
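In high-level terms, the caller-side check of Option 1 looks something like the sketch below.  This is a minimal Python illustration with hypothetical field names (2E generates RPG or COBOL, and a dict stands in for the database file); it only shows the control flow, not real 2E code.

```python
def update_category_if_changed(db, account_id, new_category, new_balance_band):
    """Option 1: the caller re-reads the record and only calls the
    change routine (the CHGOBJ equivalent) when a value really differs."""
    current = db[account_id]  # read the existing record
    if (current["category"] == new_category
            and current["balance_band"] == new_balance_band):
        return False  # identical image: skip the update, triggers never fire
    current["category"] = new_category
    current["balance_band"] = new_balance_band
    db[account_id] = current  # this write is what would fire the trigger
    return True
```

The weakness is visible in the sketch itself: every caller has to carry this compare-and-skip logic, and the read it compares against may be stale.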

Option 2.

Code could be added inside the CHGOBJ to exit if DB1 and PAR are the same.  I've seen this approach too.  It is a bit cleaner, but for any function created since release 6.1 of 2E (last century) it is also the incorrect approach.

Option 3.

The correct approach in this instance is to switch on a function option on the CHGOBJ and utilise the built-in Null Update Suppression code.  (See highlighted options below.)





The options are quite simple.
M - use the default model value.  In this model it is 'N', which implies NO suppression will occur.
Y - DB1 will be checked against PAR twice: once upon the initial read of the data from the file, and again once the record lock is in place and the data is about to be written.
A - (After read) performs only the first of those two checks.
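The three settings can be sketched in pseudo-logic as below.  This is a Python illustration of the image checks, not the generated RPG/COBOL: a dict stands in for the file, and the string return values are invented labels just to show which path was taken.

```python
def chgobj(db, key, new_values, suppress="Y"):
    """Sketch of 2E null-update suppression inside a CHGOBJ.
    'N': always update; 'A': image check after the initial read only;
    'Y': check after the read AND again once the record is locked."""
    record = dict(db[key])            # initial read (USER: After DBF Read)
    if suppress in ("Y", "A") and all(record.get(f) == v
                                      for f, v in new_values.items()):
        return "suppressed-after-read"
    locked = dict(db[key])            # re-read under lock; another job may have updated it
    if suppress == "Y" and all(locked.get(f) == v
                               for f, v in new_values.items()):
        return "suppressed-before-update"
    locked.update(new_values)
    db[key] = locked                  # USER: Before DBF Update, then the write
    return "updated"
```

Note how 'Y' protects against the record having been changed by another job between the read and the lock, which 'A' does not.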

The generated code

The diagram below gives a visual of the code that is generated for each of the options.



NULL Update Suppression works regardless of how you define CHGOBJ’s


Benefits of the suppress option for CHGOBJ.  
  1. Record-level audit stamps won't get corrupted with unnecessary updates
  2. Performance
  3. Triggers won’t get fired
  4. Encapsulated


When to use?


  Thanks for reading. 
Lee.

Monday, September 5, 2016

CHGOBJ and easy file maintenance

Today I want to write about a neat little method of setting up CHGOBJ DBF internal functions to assist with long-term file maintenance.  I have touched on this before in my standards posts, but decided that, as I keep seeing people do this incorrectly (wherever I work), it deserved a blog post of its own.

Imagine a nice simple file being added to your data model.  I have added one below.


As you can see above it has one key and a few neatly named attributes.


The file was a reference file but for cleanliness I have removed the default edit file and select record functions.  Also in our shop we have a standard of preceding the primitive DBF functions with an asterisk so I have done this also.

The parameter structure for the CHGOBJ is as expected for a full record CHGOBJ.  Nothing spectacular here.  This thankfully is default 2E behaviour.



In order to show the proposed methods I want you to use, we need to imagine how we are going to modify the data in this file.  Typically we won't change the entire record; the exceptions are an EDTFIL/EDTTRN, which by default uses the full record update, or a W/W + PMTRCD maintenance suite, which is quite common.

In the real world we update a total amount, a status, or more typically a subset of relevant data.  To do this we often create individual CHGOBJ's named something like 'Change Account Status' or 'Update Address Only'. 

Below I have created two separate CHGOBJ's to update the status field on this file.  I have imaginatively named as below.


Method 1 is (as defaulted) passing the entire file structure as RCD.  The parameters at the detail level are set up as follows.  Note that I have set the parameters we are not updating to NEITHER and turned off the MAP role.  Nothing gets me more irritated in Synon coding than leftover MAP roles...


Method 2 has the data passed differently.  I am using two parameter blocks: the first for the keys (or key, in our case), and the second line for the data attributes, where I have set the status field we wish to update as input.  Again I switched off that darn MAP role.




Both these CHGOBJ's (Method 1 and Method 2) now have the same interface as far as calling programs are concerned.  i.e. two fields.  The key 'MSF My Key' and the fields we want to update i.e. the status field 'MSF Attribute 03 (STS)'.

There is one caveat though.  Method 2 won't work. 

Not yet anyway.  Let me explain why...

There is action diagram code inside all CHGOBJ's that moves the data passed in via the parameters (PAR) to the database record (DB1) just prior to the update.  You can see this in the picture below.


However, the Synon generator has been written (by design/bug/undocumented feature) to only move fields passed in the first parameter block.

Yes.  Shortsighted I know but it is a known limitation.  Go ahead try it for yourself.

This means that in this instance it will only move the values passed in the highlighted row below.  In our example for Method 2 this would be the key only.


The way we get around this is to do the move ourselves in the user point immediately after the badly generated code.


This now moves the attributes into the DB1 context from PAR.
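The mechanics of the Method 2 fix can be sketched as follows.  This is a hypothetical Python illustration (a dict stands in for the file; `key_par` and `rcd_par` represent the two parameter blocks), showing why the hand-coded *MOVE ALL is needed when only the first block is auto-moved by the generator.

```python
def chgobj_key_method(db, key_par, rcd_par):
    """Sketch of a Method 2 CHGOBJ: the generator only auto-moves the
    first parameter block (the key), so the *MOVE ALL PAR to DB1 added
    in USER: After DBF Read must copy the second block's fields itself."""
    db1 = dict(db[key_par])   # record read by key (first block handled by the generator)
    db1.update(rcd_par)       # hand-coded *MOVE ALL PAR to DB1 for the data block
    db[key_par] = db1         # database update
```

Only the fields actually passed in the second block are touched; everything else on the record is left exactly as it was read.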

Job done.  This function will perform valiantly and won't let you down.  However, at this stage there is no advantage to doing the CHGOBJ this way.  Why would you separate the parameters and add extra complexity, forcing developers to add the *MOVE ALL, if the two functions are now (functionally) identical?

If your shop has standards then it's likely you've learned the hard way.  Remember the average application spends 90% of its functioning life in maintenance, and therefore, it is these activities that cost the real pounds/dollars and take the time to implement.

Change is inevitable; most files will require some form of change during their lives, and the most common type of change for a file is adding new fields.  This is where Method 2 outshines Method 1.


Let's make some basic adjustments to the file.  In the example below we are adding three extra fields which have been appropriately named.


So how has this impacted our functions that we have created in the blog post?  The standard CHGOBJ (the *CHG) for the entire record has the three additional parameters automatically added.  We just need to visit its usages and set the values accordingly.



Our two examples (Method 1 and Method 2) fare quite differently.  Let's discuss these below.

Firstly, I will quickly remind you that the two methods for the CHGOBJ's were only updating a subset of fields of this file rather than the entire record; in our case, the status field 'MSF Attribute 03 (STS)'.

Method 1 has had the extra fields added automatically, which is not ideal or what we want.  We now need to set these to NEITHER.  Forget this at your peril.  Note: I also removed the MAP!


Method 2, however, keeps the existing input parameter structure.  There are no changes to be made other than a regeneration due to the changed file.


So on the surface it would appear that method 2 is best for CHGOBJ's where a subset of data is changed.

At this point I would recommend you utilise *Templates if you haven't already looked at them.  I even have templates for a standard RTVOBJ so I don't forget the *MOVE ALL for CON and DB1.

Below is an example of how I implemented method 2.

Function name.


Parameter block.


Detail for parameter block 1.


Detail for parameter block 2.


Add the AD code for the move all.


That's the template completed.   Use Shift F9 to create the function from the EDIT FUNCTIONS screen.  Select the template type and then name it appropriately.


Set the parameters you want to include in the CHGOBJ.


Switch off the map and you are up and running. 


The Action Diagram code has automatically been added to your new function as long as you put the *MOVE ALL in the *Template.

In Summary.

It hasn't got to be a one size fits all approach.  There is no harm in choosing a hybrid approach, so.
  • Use Method 1 for full record updates.
  • Use Method 2 for subset or single field updates.
  • Consider using templates to enforce your standards and to reduce any mistakes or omissions.

Thanks for reading.
Lee.

Wednesday, November 12, 2008

2e Development Standards - (Composite Functions)

Today's topic is composite functions.

I have said before that there are many ways to skin a cat, and development, regardless of the tools and languages used, is no different.

To date I have concentrated on the generic principles of development and also on the CA 2E tool from Computer Associates. I have put quite a few posts in place around 2E, with many more to go. To be honest, I am less than 20% through what I intend to post from the technical perspective, and I have barely touched the Plex product. Still, good things come to those who wait.

http://www.dictionary.com/ has the definition of 'composite' as "made up of disparate or separate parts or elements". In 2E terms it means the linking of two or more functions to serve a business purpose. For example, to clear a file using 2E function types you may call an EXCUSRPGM and use the operating system, or you may choose to use a RTVOBJ with no restrictor/positioner keys that calls a DLTOBJ for each record in the file. You could also call a SQL routine, embed the OS400 command in a message, etc. You get my point. The composite in this example is the RTVOBJ/DLTOBJ combination.

There are other composite types that are more often encountered. Especially around creating or changing records in a database file (table for the SQLites amongst you).

I have created functions named CRT/CHG or CHG/CRT to solve the common problem of what to do if the record does or does not already exist in the database.

This led me to consider whether there is a preferred default method and whether there are any variations. Once again, a big thanks to Ray for his contributions here.

CHG/CRT v CRT/CHG

There are times when a CRTOBJ fails because the record already exists, or a CHGOBJ fails because the record does not exist. To solve these issues we generally create combination functions named either CRT/CHG or CHG/CRT; or, if you follow my recommended standards and these are default functions, they would be named *CRT/CHG or *CHG/CRT.

In general you should select the combination most likely to succeed on the first attempt; so, depending on your knowledge of the environment and the data being processed, if the record is likely not to be there then use the CRT/CHG combo.

There are some performance considerations over and above the availability or otherwise of the underlying record.

A CHGOBJ that contains a CRTOBJ if a record does not exist is inefficient as it generates the following code. This is particularly true for SQL generation.

Pseudo SQL Code

DECLARE CURSOR FOR UPDATE
FETCH
UPDATE if record found
INSERT if no record found
CLOSE CURSOR

Pseudo RPG Code

CHAIN
UPDATE if record found
SETLL & INSERT if no record found

An alternative coding style with a CRTOBJ calling a CHGOBJ if record already exists will generate the following code. The CHGOBJ must be an 'update style' that does not use a cursor.

Pseudo SQL Code

SELECT by key
INSERT if record not found
UPDATE if record exists

Pseudo RPG Code

SETLL
WRITE if record not found
CHAIN & UPDATE if record exists
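The two composite orders above can be demonstrated with real code.  The sketch below uses Python's sqlite3 standing in for DB2, with an invented `account` table; the function names mirror the 2E composites, but this is an illustration of the ordering trade-off, not generated 2E code.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, sts TEXT)")

def crt_chg(acct_id, sts):
    """CRT/CHG: try the INSERT first; fall back to UPDATE on a
    duplicate key.  Cheapest when the record is usually absent."""
    try:
        con.execute("INSERT INTO account (id, sts) VALUES (?, ?)",
                    (acct_id, sts))
    except sqlite3.IntegrityError:
        con.execute("UPDATE account SET sts = ? WHERE id = ?",
                    (sts, acct_id))

def chg_crt(acct_id, sts):
    """CHG/CRT: try the UPDATE first; INSERT if no row was touched.
    Cheapest when the record usually exists."""
    cur = con.execute("UPDATE account SET sts = ? WHERE id = ?",
                      (sts, acct_id))
    if cur.rowcount == 0:
        con.execute("INSERT INTO account (id, sts) VALUES (?, ?)",
                    (acct_id, sts))
```

Both end with the same row in the table; the difference is purely which statement runs (and succeeds) first, which is why you pick the order matching the likelier case.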

A CHGOBJ with a little bit of grunt.

To create an 'Update style' CHGOBJ:-

For performance reasons we need to omit the prior SELECT generated for SQL CHGOBJs. This mainly benefits CHGOBJs called repeatedly in batch-type processing, but it can only be done if there is no requirement to read existing DB1 context fields. To do an immediate update we need to create a special version of a CHGOBJ called an UPDATE function. This will have the following characteristics:-

1 - Ensure that there is no coding inside the CHGOBJ. Commonly we must transfer the timestamp coding from inside the CHGOBJ to input parameters, setting the timestamp fields directly on the CHGOBJ call.

2 - Ensure that the CHGOBJ parameters are defined in the default way using the UPD ACP structure. Fields that do not need to be changed should be made NEITHER parameters.

3 - The CHGOBJ function option for Null Update Suppression should be = No. This ensures that there is no attempt to perform an image check.

4 - UPD style CHGOBJs should ideally only have the attributes that are being changed. This is particularly important when calling a CHGOBJ from inside a RTVOBJ over the same file. Passing in DB1 context for those fields not being changed is not conducive to performance, since the optimiser cannot differentiate between changed and non-changed attributes.

Thoughts, opinions, concerns or fanmail gratefully accepted. Cash donations to charity please.

Thanks for reading.
Lee.

Friday, October 31, 2008

2e Development Standards - (Hints and Tips CHGOBJ)

Another post about some of the finer points for the internals functions available in the 2e tool. I have covered off the RTVOBJ and CRTOBJ in some detail. I have left the obvious stuff for the user manuals and am really covering off best practices and gotchas in these sections.

Any observations and comments are gratefully received and a big shout out to Ray Weekes who passed on many of these in these sections from his experience.

The default CHGOBJ will have all parameters open except for Time Stamp and any other derived attributes which will all be made NEITHER. Action diagram coding added will only be for Time Stamp and derived data i.e. Audit records or calculated values.

Further CHGOBJ functions may be created where only subsets of attributes are to be changed or where special processing is applicable. All CHGOBJs should replicate the standard support for Time Stamp and derived data where appropriate.

Structuring your parameters. There are 2 methods for defining a CHGOBJ where only subsets of attributes are to be changed. The default method uses the UPD ACP RCD defined over the first parameter block line as the input parameter list. Database attributes not to be automatically changed from input parameter fields are set to NEITHER.

The second method uses the UPD ACP KEY as the first input parameter list line. The UPD ACP RCD is then optionally used on the second parameter line. This second approach suppresses any automatic update to the DB1 context and therefore requires a *MOVE ALL PAR to DB1 to be added to USER:After DBF Read. This KEY method has the advantage that no NEITHER parameters need be defined and no unnecessary coding is generated for NEITHER parameters.

It also protects the CHGOBJ from the effects of future attributes being added to the file.

The default RCD method of a defining CHGOBJ is sufficient for most small files or where most attributes are being replaced. The KEY method is preferred for large files where only a few attributes are being changed. It is also the preferred method on any file where existing database attributes are being incremented rather than replaced with new parameter input values since it makes the action diagram coding simpler. E.g. A CHGOBJ designed to get the next surrogate# to be used.

General pointers.

It is possible to suppress the default error message if the record does not exist by setting PGM.*Return Code = *Normal in USER:Record Not Found. Alternatively you may add a call to a CRTOBJ over the same file. But if the CRTOBJ is conditional then still set PGM.Return Code = *Normal where CRTOBJ is not called.

Any UPD ACP RCD parameter field defined as OUTPUT or BOTH will be automatically moved from DB1 context to PAR context after the database update. This allows a CHGOBJ to also function like a RTVOBJ GET and return current database values.

When using a CHGOBJ to increment values on the database, remember to move the current DB1 values to a holding field, as the PAR to DB1 move will overwrite the original values. Then, in 'Process before Update', do the *ADD arithmetic. A good standard could be to prefix the function name like UPD nnnnnnnnnnnnnn.
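The holding-field pattern can be sketched like this.  It is a hypothetical Python illustration (a dict stands in for the file; the `balance` field is invented) of why the current DB1 value must be saved before the PAR to DB1 move clobbers it.

```python
def upd_increment_balance(db, key, par_amount):
    """Sketch of an incrementing UPD-style CHGOBJ: save the current DB1
    value in a holding field BEFORE the PAR to DB1 move overwrites it,
    then do the *ADD in 'Process before Update'."""
    db1 = dict(db[key])
    holding_balance = db1["balance"]        # save the current DB1 value first
    db1["balance"] = par_amount             # the PAR to DB1 move overwrites it
    db1["balance"] = holding_balance + par_amount  # *ADD in Process before Update
    db[key] = db1                           # database update
```

Without the holding field, the *ADD would simply add the parameter to itself, because the original database value was lost in the move.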

Null update suppression is the preferred choice for all CHGOBJs. Model value YNLLUPD=*AFTREAD. This means that an image check takes place after USER:After DBF Read. The USER: Before DBF Update is only executed if the record image has been changed. Time Stamp logic and other derived processing must therefore be in USER: Before DBF Update.

In general do not use a CHGOBJ to change the primary key. Although this technique works in RPG it causes problems in COBOL. However, it may be used in RPG & SQL. SQL CHGOBJ requires the key fields to be set to MAP to force them into the SET clause.

Gotcha's

You must never *QUIT from a CHGOBJ since this may leave a lock on the record. Use PGM.*Record Changed = *Yes if any conditional processing is required inside the CHGOBJ.

Do not use a CHGOBJ or DLTOBJ over a PHY ACP because of a bug in the 2E SQL generator.
Next time the DLTOBJ.

Thanks for reading.
Lee.