
Monday, September 5, 2016

CHGOBJ and easy file maintenance

Today I want to write about a neat little method of setting up CHGOBJ DBF internal functions to assist with long-term file maintenance. I have touched on this before in my standards posts, but as I keep seeing people do this incorrectly (wherever I work), I decided it deserved a blog post of its own.

Imagine a nice simple file being added to your data model.  I have added one below.


As you can see above it has one key and a few neatly named attributes.


The file is a reference file, but for cleanliness I have removed the default edit file and select record functions. In our shop we also have a standard of preceding the primitive DBF functions with an asterisk, so I have done that here too.

The parameter structure for the CHGOBJ is as expected for a full-record CHGOBJ. Nothing spectacular here; thankfully, this is default 2E behaviour.



In order to show the proposed methods I want you to use, we need to imagine how we are going to modify the data in this file. Typically we won't change the entire record; the exceptions are an EDTFIL/EDTTRN type function, which by default uses the full-record update, or a W/W + PMTRCD maintenance suite, which is quite common.

In the real world we update a total amount, a status, or more typically a subset of relevant data.  To do this we often create individual CHGOBJ's named something like 'Change Account Status' or 'Update Address Only'. 
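
For illustration only, here is a minimal sketch of that idea in plain Python (the record layout, field names and function names are all invented); the point is simply that a caller of a subset update supplies only the key and the field that changes:

def change_full_record(db, key, record):
    # Full-record update: the caller must supply every attribute.
    db[key] = dict(record)

def change_account_status(db, key, status):
    # Subset update: the caller supplies the key and just the field that changes.
    db[key]["status"] = status

accounts = {1001: {"status": "OPEN", "name": "Smith", "balance": 250.00}}
change_account_status(accounts, 1001, "CLOSED")   # nothing else on the record is touched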

Below I have created two separate CHGOBJs to update the status field on this file. I have imaginatively named them as below.


Method 1 is (as defaulted) passing the entire file structure as RCD. The parameters at the detail level are set up as follows. Note that I have set the parameters we are not updating to NEITHER and turned off the MAP role. Nothing gets me more irritated in Synon coding than leftover MAP roles...


Method 2 has the data passed differently. I am using two parameter blocks: the first for the keys (or key, in our case), and the second for the data attributes, where I have set the status field we wish to update as input. Again, I switched off that darn MAP role.




Both these CHGOBJs (Method 1 and Method 2) now have the same interface as far as calling programs are concerned, i.e. two fields: the key 'MSF My Key' and the field we want to update, the status field 'MSF Attribute 03 (STS)'.

There is one caveat though.  Method 2 won't work. 

Not yet anyway.  Let me explain why...

There is action diagram code inside all CHGOBJ's that implies the function moves the data passed in via the parameters (PAR) to the database record (DB1) just prior to update.  You can see this in the picture below.


However, the Synon generator has been written (by design/bug/undocumented feature) to only move fields passed in the first parameter block.

Yes, shortsighted I know, but it is a known limitation. Go ahead and try it for yourself.

This means that in this instance it will only move the values passed in the highlighted row below.  In our example for Method 2 this would be the key only.


The way we get around this is to do the move ourselves in the user point immediately after the badly generated code.


This now moves the attributes into the DB1 context from PAR.
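
To make the behaviour concrete, here is a rough model of it in plain Python; this is not 2E code and the field names are invented, but it shows why the extra move matters:

# The generated move only copies fields from the FIRST parameter block (the key),
# so the user-point *MOVE ALL is what actually carries the data fields across.
def generated_move(par, db1, first_block_fields):
    for field in first_block_fields:
        db1[field] = par[field]

def user_point_move_all(par, db1):
    # Our manual *MOVE ALL PAR -> DB1, added immediately after the generated code.
    for field, value in par.items():
        db1[field] = value

db1 = {"my_key": 42, "attribute_03_sts": "OLD"}
par = {"my_key": 42, "attribute_03_sts": "NEW"}

generated_move(par, db1, first_block_fields=["my_key"])
assert db1["attribute_03_sts"] == "OLD"     # the status has not been moved yet

user_point_move_all(par, db1)
assert db1["attribute_03_sts"] == "NEW"     # now the update will stick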

Job done. This function will perform valiantly and won't let you down. However, at this stage there is no advantage in doing the CHGOBJ this way. Why would you separate the parameters and add the extra complexity of forcing developers to add the *MOVE ALL if the two functions are now (functionally) identical?

If your shop has standards then it's likely you've learned the hard way. Remember, the average application spends 90% of its life in maintenance, and it is these maintenance activities that cost the real pounds/dollars and take the time to implement.

Change is inevitable: most files will require some form of change during their lives, and the most common change for a file is adding new fields. This is where Method 2 outshines Method 1.


Let's make some basic adjustments to the file.  In the example below we are adding three extra fields which have been appropriately named.


So how has this impacted the functions we have created in this blog post? The standard CHGOBJ (the *CHG) for the entire record has the three additional parameters automatically added. We just need to visit its usages and set the values accordingly.



Our two examples (Method 1 and Method 2) fare quite differently. Let's discuss these below.

Firstly, I will quickly remind you that the two methods for the CHGOBJs were only updating a subset of fields on this file rather than the entire record, in our case the status field 'MSF Attribute 03 (STS)'.

Method 1 has had the extra fields added automatically, which is not ideal or what we want. We now need to set these to NEITHER. Forget this at your peril. Note: I also removed the MAP!


Method 2, however, keeps the existing input parameter structure. There are no changes to be made other than a regeneration due to the changed file.


So on the surface it would appear that method 2 is best for CHGOBJ's where a subset of data is changed.

At this point I would recommend you utilise *Templates if you haven't already looked at them. I even have a template for a standard RTVOBJ so I don't forget the *MOVE ALL for CON and DB1.

Below is an example of how I implemented method 2.

Function name.


Parameter block.


Detail for parameter block 1.


Detail for parameter block 2.


Add the AD code for the move all.


That's the template completed.   Use Shift F9 to create the function from the EDIT FUNCTIONS screen.  Select the template type and then name it appropriately.


Set the parameters you want to include in the CHGOBJ.


Switch off the map and you are up and running. 


The Action Diagram code has automatically been added to your new function as long as you put the *MOVE ALL in the *Template.

In Summary.

It doesn't have to be a one-size-fits-all approach. There is no harm in choosing a hybrid approach:
  • Use Method 1 for full record updates.
  • Use Method 2 for subset or single field updates.
  • Consider using templates to enforce your standards and to reduce any mistakes or omissions.

Thanks for reading.
Lee.

Sunday, November 17, 2013

Why the notepad is such a cool feature in 2E

The notepad feature in 2E is used by most of us during our development.

More often than not it's a case of loading the notepad via the NR or NA commands, then navigating to the target function and inserting the logic using the NI command. We generally use it as a copy function across function boundaries.

This is pretty neat, but there is a particular flaw.

The notepad in its default configuration is session bound, i.e. log in tomorrow and it'll be empty. In fact it's slightly worse than that: once you have exited all open programs the notepad is cleared. This means you may often find yourself reloading the notepad with the relevant code.

Imagine a maintenance scenario where you are visiting 10, 20, 50 or 250 programs to apply the same change...

Thankfully, our friends at Synon/CA have had a little hidden gem for years and years, and it amazes me that at nearly every site I go to, no one has ever heard of this little feature.

Firstly, let's understand the science. The notepad is basically an execute internal function that isn't saved. What the developers did was effectively allow us to configure a function that will be saved.

Here's how to go about setting this up at your site.
  • Create a structure called #Notepads or any other name that suits your site standards. I prefer to have a structure file so all developers' notepads are in one place rather than added to a business entity.
  • Create a function similar to those in the screen print below. The choice is yours how you name it and whether it's an EIF or EEF.
  • Now go to the services menu (the F17 one) and take option 11, Edit model profile (YEDTMDLPRF). Towards the bottom of the screen you will see a section for configuring your notepad.

And that is pretty much it. You will now get asked if you wish to save the notepad when you use it, and it remains over many sessions. You will also have guessed that you can have many of these 'code snippets' waiting for you.

About the only fly in the ointment is that the notepad function can get locked with multiple sessions, but if you are using it for cut and paste I believe the benefits outweigh this.

Thanks for reading.
Lee.

Wednesday, January 20, 2010

Implementing a 'Generic' Data Driver File + Printing/Displaying Arrays in Subfiles (Part III)

Firstly, sorry for the delay in finishing off this series. I have been away on holiday for six weeks and on my return I was in hospital having my knee operation. I am now in recovery (bed rest) and finally have a little spare time to finish off the blog.

So let's do a little recap. 

In part one, I discussed the merits of a generic data driver file. This is a different approach to normal 2E data-driven programming but, as indicated, is particularly useful for implementing *Arrays behind a DSPFIL or PRTFIL, or for merging header/footer details into one DSPFIL/PRTFIL.

http://leedare-plex2e.blogspot.com/2009/11/implementing-generic-data-driver-file.html

In part two, I was merely trying to walk you through the solution I was required to provide for one of my customers.  These screenshots have been modified from their original form for confidentiality reasons but are posted with permission from my employer http://www.sasit.co.nz/.

http://leedare-plex2e.blogspot.com/2009/11/implementing-generic-data-driver-file_20.html

Today I am just going to do a quick walk-through of what was required to create the solution. With a little bit of effort (and luck) you should be in a position to work this out for yourself. Of course, I am always happy to take questions and assist if necessary.

Step 1.  Implement a Data Driver file.

A very simple file with one key.  I just made mine a simple numeric field.



Here are the field details




I just went with the default sizes for a NBR field.

Remember, once you have created this file you will need to populate it. I simply populated it with two records, 1 and 2, as the keys. I have seen other implementations where people have put all 99,999 records in the file to make their programming a little easier. I prefer a slightly different method of key jumping, explained a little later.

Step 2 - Some AD coding for the Super 14 table.

I have already computed my table placings based on the scores that have been entered into the system, and the array is keyed in position order. In the AD below you will see that I set a counter; this was initialised to 0 in the initialise program section. The counter is there so I get the correct record for the correct position in the table, as my data driver file only has two records. If I wanted to, I could have populated more and had the count already available in the DB1 context as I read each record. Personal preference here, I guess. I'd be interested in your viewpoints.

Next, I simply retrieve the data from the array and populate the fields in the subfile. Because I am reading down the data driver file and there are only two records, I simply reset the cursor by re-reading the first record if I think there are more records to be displayed. If not, the second record is read, not a lot happens, and the subfile loading has done its job. The loop is sketched out just below.
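
If it helps, the loop can be modelled loosely in plain Python as below; the names and data are invented and this is obviously not the generated code, it just mirrors the counter and the re-read of driver record 1:

table_array = ["Bulls", "Chiefs", "Hurricanes", "Stormers"]   # already sorted into position order

driver_records = [1, 2]    # the two keys held in the data driver file
counter = 0                # initialised to 0, as in the initialise program section
subfile = []

cursor = 0
while cursor < len(driver_records):
    driver_key = driver_records[cursor]       # the driver record we are "reading"
    counter += 1
    if counter <= len(table_array):
        subfile.append((counter, table_array[counter - 1]))   # populate the subfile line
    if counter < len(table_array):
        cursor = 0         # re-read the first driver record: more positions to show
    else:
        cursor += 1        # let the read fall through to record 2 and finish

print(subfile)   # [(1, 'Bulls'), (2, 'Chiefs'), (3, 'Hurricanes'), (4, 'Stormers')]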



This is a screenshot of the device design for the Super14 table.



And the final table with 2009 results in place is as follows:-




That's it. Simple. If there are other areas in 2E you'd like me to cover, drop me a line.
Thanks for reading.
Lee.

Friday, November 20, 2009

Implementing a 'Generic' Data Driver File + Printing/Displaying Arrays in Subfiles (Part II)

The greatest rugby competition on the planet. Alright, I live in the Southern Hemisphere now and as a direct result have begun to believe the hype. That said, the Super 14 (Super 15 from 2011) competition is recognised as one of the strongest leagues in Rugby Union and has teams from Australia, South Africa and my current place of residence, New Zealand. Sorry to all those who think I have sold out by not creating a Football (Soccer) or NFL example.

System Overview

The requirement was to build a system that allowed the user to make simple sports results/margins predictions on a group of games on a weekly basis. The fixtures would be published and the predictions made. Once the results were known they would be entered into the system and participants' points (awarded for correct or near correct predictions) would be calculated.

Requirement

Not everyone had the time to trawl the internet looking for a league table that might assist them with making their predictions (hopefully they have enough time to read this article, though). The requirement was to show a real-time league table as fixture results were entered. It was decided to record the points the actual teams achieved for each fixture and then simply build the league table on the fly.

Additional information

One could have built a simple file and recreated it each and every time the results changed. However, due to the limited number of teams and fixtures in a period, it was decided to build the table on the fly in an array. This also meant there was no physical file to maintain and promote, and the user could easily view any of the previous years.

Solution

Use a generic data driver file. Build an array that computes and sorts the team table into the correct order, then read the teams from the array and show them in the order they sit in the table. A rough sketch of the table-building idea follows below.
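
As an illustration of that idea only (plain Python, made-up fixtures and a simplified points rule, not the actual 2E implementation):

fixtures = [   # (home, away, home_score, away_score) for the results entered so far
    ("Crusaders", "Blues", 30, 12),
    ("Blues", "Hurricanes", 18, 18),
    ("Hurricanes", "Crusaders", 25, 27),
]

points = {}
for home, away, home_score, away_score in fixtures:
    points.setdefault(home, 0)
    points.setdefault(away, 0)
    if home_score > away_score:
        points[home] += 4          # simple win/draw points; the real rules add bonus points
    elif away_score > home_score:
        points[away] += 4
    else:
        points[home] += 2
        points[away] += 2

# Sort into table order and "display" in that order.
table = sorted(points.items(), key=lambda entry: entry[1], reverse=True)
for position, (team, pts) in enumerate(table, start=1):
    print(position, team, pts)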

Next week I will show you how this was achieved. There will be other ways to achieve the same result, and all notes are intended as a guide. Your individual circumstances and requirements may vary, but feel free to emulate and utilise.

As an appetiser the screen below is a DSPFIL based over an Array.



If you require any further assistance you can always email me at (leedare at talk21 dot com)

Thanks for reading.
Lee.

Tuesday, November 17, 2009

Implementing a 'Generic' Data Driver File + Printing/Displaying Arrays in Subfiles (Part I)

This is a three part story.

I can think of quite a few occasions in 2E where I have needed to display or print information from a non-standard source, i.e. a non-2E defined file, an array or even a data queue.

I have also had the need to build PRTFILs and DSPFILs which needed to aggregate data in a master/detail arrangement. The example below is from a change management application I worked on years ago. It shows a diary note (Header) and the detailed comments (Detail) in one screen and uses a toggle button to determine the entries shown in either summary mode or detail mode.




To implement these solutions I have used the 'Generic Data Driver' file concept.  I have introduced this at the last 3 2E sites I have worked at.

A worked example of how to do this, with screenshots and sample code, will be in Part III. I have also included some notes to help you set up your own generic data driver file and one example of how to utilise it. This example also has the added bonus of showing you how you can show arrays on a DSPFIL. Whooarah....Yippee....Get on with it..... I can hear you all say........

This might save Rory and Simon some hassle anyway!!!! At least with fending off this often-requested enhancement to the base 2E tool.

Until then.....(Next Week).

Thanks for reading.
Lee.