Tuesday, December 30, 2008

Merry Christmas CA

Well it is that time of year again when we all take a well-deserved break (this is subjective, I know), send those work clothes to the dry cleaners for a much needed overhaul, eat too much, drink even more and come back to work refreshed in the new year with at least one broken New Year's resolution by the 5th of January.

Christmas is often a busy time of year for many of us and we sometimes also spend the time talking about those that are no longer with us at those family reunions. I presently have some of my relatives visiting us and have also had a long-time family friend popping in to say "Hi" even though we live 12,000 miles apart.

Anyhow, Christmas is a time of giving (well it is in my world) and CA thankfully think no differently.

If you haven't already seen the news feeds then you may not know that our friends at CA have gone GA on CA Plex 6.1. There are quite a few neat features in this release that are worthy of a mention here, highlights include:-

Model-based SOA support
- WCF Service Generation
More Extensible Development Environment
- Plex API Enhancements and Add-Ins
- Code Library Service Generator Plug-Ins
Windows Vista Support for development
Group Model Update History
Runtime Backwards Compatibility - Easy Upgrade from 6.0
- No re-gen or re-build from r6 to r6.1
Platform Compatibility Updates
- IPv6
- Java SE 6.0, ANT 1.7.0
- SQL Server 2008, Oracle 11g
- Windows Server 2008
- 64-bit Windows

For full details follow the links below (note: I am not responsible for external site content):-

http://www.ca.com/us/press/release.aspx?cid=194906

or the product brief

http://www.ca.com/files/ProductBriefs/ca_plex_product_brief.pdf

I am particularly keen on the additions and improvements to the Model API and what this will allow third party vendors to create to support the CA Plex ecosystem.

But then there is more.

CA have also announced that the user voting polls used successfully in 2007 will be re-run in early 2009 to help provide CA product management with user-driven requirements for the enhancement of the toolsets (both 2E and Plex). But remember, you are voting on enhancement requests already raised. To get your own requests into the list you need to contact CA support and raise a ticket.

But then there is more.



2e 8.5 is already in Alpha and going Beta in the new year. Highlights for this release are:-

ILE Service Program support
- New object type to combine modules into service programs
- Supports CA 2E-generated and external modules
Web Services support
- Deploy ILE Service Programs as web services
- Based on IBM i Integrated Web Services server for ILE
Improved Impact Analysis
- Take account of commented-out code
- Voted No. 1 in 2008 Enhancement Request Survey
Better search/positioning facilities in the model
IPv6 compatibility
CA 2E Web Option Environments
- Separates user data from the product libraries

I believe GA is planned for July 2009.

But then there is more.

There are webcasts to be heard in the new year. There are events to attend. There are conferences to consider. There are PowerPoints from CA World to be downloaded.

How do I know all this? Well, I am a member of the PLC (Product Line Community) and as such I get regular email updates from the product management team at CA.

To join this ever-growing list, simply click the link below.



Merry Christmas and a Prosperous (Credit Crunch Recovery) New Year.


Thanks for reading.
Lee.

Sunday, November 30, 2008

2e and your support network

Updated - 12/06/2009
I recently received a communication via the blog feedback feature. It was from one of our colleagues in France who was a little concerned about the support he receives from CA and also the usage of the tools in general.

There are quite a few interweaving threads to work through in answering his question below.

"In France,we have CA/2E 8.1SP1fr RPG.Here,we have no news from this product,it seems dead! We have to contact US support to obtain some news/upgrades,it's very difficult to have a schedule of upgrades.We use 2E since 95 and nothing has changed in the way of using it. Please reply to discuss. Thanks. "

So I guess I need to answer each of these and share some opinion along the way. After all, isn't this the point of a blog in the first instance?

Is it Dead?

Well the short answer to that is "NO". A quite emphatic one actually. CA have a roadmap for the 2e product with a host of community voted enhancements being included in 8.5 (GA July 2009, I understand). I am sure these were discussed at CA World. There was also a call from Daniel Leigh recently to the community about utilising Web Services directly in 2e as well as catering for the creation of ILE service programs from within the toolset.
I have been part of the Alpha and Beta testing and there is certainly some good potential in these areas.

Keeping updated about the product.

CA is a big company and I would guess that the local offices are often the last to know about product centric announcements. The guys responsible for the 2E and Plex products are particularly vocal online and via a series of user groups. The US is the biggest installed base for these products so I guess it is only right that the focus starts stateside. You can sign up to the forums, my blog and other fan sites, as well as the Plex2E PLC (Product Line Community), not to mention the local user groups in some regions, the US and UK among them.

There are also Alpha and Beta testing programmes which I encourage you to consider.

Other useful links are below, which I have also added as dedicated links on the blog home page.

The CA Forums

http://caforums.ca.com/ca/?category.id=caplex

Communities Product Line Community

http://causergroups.ca.com/usergroups/UserGroupHome.aspx?ID=391

Also check out the other links I have on the blog home page. I link to the Plex and 2e Wikis which are resources we can all contribute to as well as specialist sites of key CA product partners.

Once you are a member of the PLC you will receive very regular updates from Bill Hunt and the team as well as invitations to webcasts.

And then you always have the other forums around the platforms the tools generate for, i.e. SystemiNetwork, although I guess this will be changing its name again soon once the Power Systems branding gains momentum.

Upgrades.

CA have explained on numerous occasions in the past, and the release schedule would confirm this: they issue major upgrades for the products every two years, with service packs in between which add functionality as well as rolling up fixes. See the CA website for more details about each of the product roadmaps along with release and support cycles.

On a side note SP2 has been available for quite a while (November 2007) and as I referred to earlier 8.5 is GA for July 2009.

Features introduced into 2e in recent years.

The following link (on the 2e wiki) highlights the changes that have been added to 2e in recent years. If you are considering or using Plex then you should check out its wiki also. See the links at the top of the blog.

http://wiki.2einfo.net/index.php?title=Versions

How you use the tool, or how you have approached extending it, will dictate which of the features you are using. Often, as the tool has been around a while, we just use it as we always have. Not too different to those of us who use Word as if we were still using Word for Office 4.3 instead of 2007.
If you are into componentisation of your 2e systems then you will be utilising many of the other features. However, there is no right or wrong way to use the product, simply what works for you. But do take a look at what is there and see if the new features can be applied to your environment.
I was quite shocked to see in a recent PLC 2E user survey that many shops are still on 7.0 and some on earlier versions.
My strong and hence bolded recommendation here is to get current.

Certainly 15 years ago the focus was that 2e generated every aspect of your system. Nowadays it forms part of a heterogeneous platform of servers and tools.

The base tool has remained largely the same with the focus on value add but 8.5 has quite a few base tool enhancements. I like the filtering now. Very neat. And recent releases have had quite a lot of neat shortcut function keys and subfile options that can increase developer productivity.

Lastly, I guess there is also the advent of the internet. With so much information available, if we are not sourcing our information via the net then we could be missing out on lots of it. And if you look closely, or join the PLC, you would know about the 4th Annual conference taking place in Ft Lauderdale, Florida, USA in September 2009.

Thanks for reading.
Lee.

Tuesday, November 18, 2008

What next?

To my regular followers, of which there are many: I am pleased to say that I have over 100 unique visitors per month visiting the blog and the number is increasing. I have dabbled with a mixture of real life stories and content, technical 2e content, management styles, observations about software development and a couple of other stories related to me and my experiences, as long as there is a loose IT twist.

I have spent much of the last two months writing more about 2E. I plan to finish off my posts in this area over the coming months and years. I certainly have plenty of material left.

Finish 2e Development Standards
Model Management
Database Administration Guides
Advanced Development Techniques
Model Tidy
Modernisation Options for 2E generated systems
Plex Development

........ are all blog series candidates in their own right.

I thought that I'd take a week off this week and ask you, the readership, what you would like to see. The hits to date make it a worthwhile exercise for me to put this content on the web. If you have any content you wish to share, or requests for additional coverage, then place a comment on the blog and I'd be happy to start doing some of this for you.

So.

Let me know. I reckon I have a year or two left in me looking at my current roadmap. A little more now won't hurt.

I look forward to hearing from you.

Thanks for reading.
Lee.

Wednesday, November 12, 2008

2e Development Standards - (Composite Functions)

Today's topic is composite functions.

I have said before that there are many ways to skin a cat, and development, regardless of the tools and languages used, is no different.

To date I have concentrated on the generic principles of development and also on the CA 2E tool. I have put quite a few posts in place around 2E with many, many more to go. To be honest I am less than 20% through what I intend to post from the technical perspective and I have barely touched the Plex product. Still, good things come to those who wait.

http://www.dictionary.com/ has the definition of 'composite' as "made up of disparate or separate parts or elements". In 2E terms it means the linking of two or more functions to serve a business purpose. For example, to clear a file using 2e function types you may either call an EXCUSRPGM and use the operating system, or you may choose to use a RTVOBJ with no restrictor/positioner keys and call a DLTOBJ for each record in the file. You could also call a SQL routine, embed the OS400 command in a message etc etc etc. You get my point. The composite in this example is the RTVOBJ/DLTOBJ combination.
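As a concrete (and hedged) sketch in the style of the pseudo code used later in this post, the clear-file composite might look like this. The function and file names ('CLR Order Audit', 'Order Audit') are hypothetical, invented purely for illustration:

Pseudo Action Diagram - CLR Order Audit (RTVOBJ over Order Audit, no RST/POS keys)

USER: Process Data Record
  Delete Order Audit - *DLT   (keys mapped from the DB1 context of the record just read)

The RTVOBJ supplies the read loop; the DLTOBJ inside the user point does the work, one record at a time.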

There are other composite types that are more often encountered. Especially around creating or changing records in a database file (table for the SQLites amongst you).

I have created functions named CRT/CHG or CHG/CRT to solve a common problem of what to do if the record does or does not already exist in the database.

This led me to consider whether there is a preferred default method and whether there are any variations. Once again a big thanks to Ray for his contributions here.

CHG/CRT v CRT/CHG

There are times when a CRTOBJ fails due to the record already being in existence, or a CHGOBJ fails because the record does not exist. To solve these issues we generally create combination functions named either CRT/CHG or CHG/CRT. Or, if you follow my recommended standards and these are default functions, then they would be named *CRT/CHG or *CHG/CRT.

In general you should select the one that is most likely to succeed. So, depending on your knowledge of the environment and the data being processed, if the record is likely not to be there then you should use the CRT/CHG combo.

There are some performance considerations over and above the availability or otherwise of the underlying record.

A CHGOBJ that contains a CRTOBJ if a record does not exist is inefficient as it generates the following code. This is particularly true for SQL generation.

Pseudo SQL Code

DECLARE CURSOR FOR UPDATE
FETCH
UPDATE if record found
INSERT if no record found
CLOSE CURSOR

Pseudo RPG Code

CHAIN
UPDATE if record found
SETLL & INSERT if no record found

An alternative coding style with a CRTOBJ calling a CHGOBJ if record already exists will generate the following code. The CHGOBJ must be an 'update style' that does not use a cursor.

Pseudo SQL Code

SELECT by key
INSERT if record not found
UPDATE if record exists

Pseudo RPG Code

SETLL
WRITE if record not found
CHAIN & UPDATE if record exists
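For completeness, here is a hedged action diagram sketch of the CRT/CHG composite itself (the file name 'Customer' is hypothetical):

Pseudo Action Diagram - CRT/CHG Customer (CRTOBJ over Customer)

USER: Record Already Exists
  Change Customer - 'update style' CHGOBJ   (pass PAR context, not DB1)
  PGM.*Return Code = CND.*Normal            (the duplicate is no longer treated as an error)

The matching CHG/CRT is simply the mirror image: a CHGOBJ whose USER:Record Not Found calls the CRTOBJ.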

A CHGOBJ with a little bit of grunt.

To create an 'Update style' CHGOBJ:-

For performance reasons we need to omit the prior SELECT generated for SQL CHGOBJs. This is mainly for CHGOBJs called repeatedly in batch type processing. But it can only be done if there is no requirement to read existing DB1 context fields. To do an immediate update we need to create a special version of a CHGOBJ called an UPDATE function. This will have the following characteristics (a sketch of the resulting code follows the list):-

1 - Ensure that there is no coding inside the CHGOBJ. Commonly we must transfer the timestamp coding from inside the CHGOBJ to input parameters, setting the timestamp fields directly on the CHGOBJ call.

2 - Ensure that the CHGOBJ parameters are defined in the default way using the UPD ACP structure. Fields that do not need to be changed should be made NEITHER parameters.

3 - The CHGOBJ function option for Null Update Suppression should be = No. This ensures that there is no attempt to perform an image check.

4 - UPD style CHGOBJs should ideally only have the attributes that are being changed. This is particularly important when calling a CHGOBJ from inside a RTVOBJ over the same file. Passing in DB1 context for those fields not being changed is not conducive to performance since the optimiser cannot differentiate between changed and non-changed attributes.
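If all four characteristics are met, the generated code collapses to a single direct update along these lines (a hedged sketch; the table and column names are invented):

Pseudo SQL Code

UPDATE CUSTOMER
SET NAME = :Name, STATUS = :Status
WHERE CUSTNO = :Customer#

No cursor, no prior FETCH and no image check - just the one statement per call.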

Thoughts, opinions, concerns or fanmail gratefully accepted. Cash donations to charity please.

Thanks for reading.
Lee.

Thursday, November 6, 2008

2e Development Standards - (Hints and Tips DLTOBJ)

The humble DLTOBJ is the topic of this week's post. This finishes off the series on the main 2e internal function types. I have covered the RTVOBJ, CRTOBJ and CHGOBJ so far. Next week's post will be about the merits of using a CRT/CHG or CHG/CRT combination function.

Following that I will be blogging about standards for EXCUSRSRC, EXCUSRPGM and the EXCINTFUN/EXCEXTFUN function types before moving on to a series on the screen function types.

So plenty to get through over the next few weeks. For those of you following the series, bear with me as I get these completed and published.

Back to today's topic. The delete object (DLTOBJ).

General considerations.

Only one DLTOBJ, the default (*DLT), should be present on any file. It will have no action diagram coding added.

If other associated files must be deleted or updated at the same time as the DLTOBJ, an EXCINTFUN or RTVOBJ should be created as a standard wrapper and named *DLT (Cascade).

You can use a DLTOBJ to clear an array. Simply create the DLTOBJ function over the array and remove all keys. Note this technique is not applicable to files. You should prefix the array-clearing delete with CLR, i.e. "CLR Standards Sample", dropping the ARY prefix from the actual array. Note: if using the Cobol generator then this is inefficient and you are better off using the traditional RTVOBJ/DLTOBJ approach.

Auditing at record level. Remember that you do not need to add auditing records for any entry you are deleting.

Gotcha's

It is not possible to suppress the error message if the record to be deleted does not exist. Therefore, if the DLTOBJ is conditional it should be called from a RTVOBJ based over the same file. Note: if called directly then the *Return Code must be set to *Normal, as deleting a record that is not there is generally not considered and trapped as an error.
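A hedged sketch of that wrapper (the file name is hypothetical):

Pseudo Action Diagram - DLT Order Line (RTVOBJ over Order Line, fully RST key)

USER: Process Data Record
  Delete Order Line - *DLT          (the record is known to exist at this point)
USER: Record Not Found
  PGM.*Return Code = CND.*Normal    (nothing to delete is not an error)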

Thanks for reading.
Lee.

Friday, October 31, 2008

2e Development Standards - (Hints and Tips CHGOBJ)

Another post about some of the finer points of the internal functions available in the 2e tool. I have covered off the RTVOBJ and CRTOBJ in some detail. I have left the obvious stuff to the user manuals and am really covering off best practices and gotchas in these sections.

Any observations and comments are gratefully received, and a big shout out to Ray Weekes who passed on many of these from his experience.

The default CHGOBJ will have all parameters open except for Time Stamp and any other derived attributes which will all be made NEITHER. Action diagram coding added will only be for Time Stamp and derived data i.e. Audit records or calculated values.

Further CHGOBJ functions may be created where only subsets of attributes are to be changed or where special processing is applicable. All CHGOBJs should replicate the standard support for Time Stamp and derived data where appropriate.

Structuring your parameters. There are 2 methods for defining a CHGOBJ where only subsets of attributes are to be changed. The default method uses the UPD ACP RCD defined over the first parameter block line as the input parameter list. Database attributes not to be automatically changed from input parameter fields are set to NEITHER.

The second method uses the UPD ACP KEY as the first input parameter list line. The UPD ACP RCD is then optionally used on the second parameter line. This second approach suppresses any automatic update to the DB1 context and therefore requires a *MOVE ALL PAR to DB1 to be added to USER:After DBF Read. This KEY method has the advantage that no NEITHER parameters need be defined and no unnecessary coding is generated for NEITHER parameters.

It also protects the CHGOBJ from the effects of future attributes being added to the file.

The default RCD method of a defining CHGOBJ is sufficient for most small files or where most attributes are being replaced. The KEY method is preferred for large files where only a few attributes are being changed. It is also the preferred method on any file where existing database attributes are being incremented rather than replaced with new parameter input values since it makes the action diagram coding simpler. E.g. A CHGOBJ designed to get the next surrogate# to be used.

General pointers.

It is possible to suppress the default error message if the record does not exist by setting PGM.*Return Code = *Normal in USER:Record Not Found. Alternatively you may add a call to a CRTOBJ over the same file. But if the CRTOBJ is conditional then still set PGM.*Return Code = *Normal where the CRTOBJ is not called.

Any UPD ACP RCD parameter field defined as OUTPUT or BOTH will be automatically moved from DB1 context to PAR context after the database update. This allows a CHGOBJ to also function like a RTVOBJ GET and to return current database values.

When using a CHGOBJ to increment values on a database, remember to move the current DB1 values to a holding field, as the PAR to DB1 move will overwrite the original values. Then in the Process Before Update you do the *ADD arithmetic. A good standard could be to prefix the function UPD, e.g. UPD nnnnnnnnnnnnnn.
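A hedged sketch of such an incrementing function, here assuming the KEY parameter method described above (file and field names hypothetical):

Pseudo Action Diagram - UPD Account Balance (CHGOBJ over Account, KEY method)

USER: After DBF Read
  WRK.Hold Balance = DB1.Balance                  (save the current value first)
  *MOVE ALL PAR to DB1                            (the move that would otherwise overwrite it)
USER: Before DBF Update
  DB1.Balance = DB1.Balance + WRK.Hold Balance    (the *ADD - increment rather than replace)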

Null update suppression is the preferred choice for all CHGOBJs. Model value YNLLUPD=*AFTREAD. This means that an image check takes place after USER:After DBF Read. The USER: Before DBF Update is only executed if the record image has been changed. Time Stamp logic and other derived processing must therefore be in USER: Before DBF Update.

In general do not use a CHGOBJ to change the primary key. Although this technique works in RPG it causes problems in COBOL. However, it may be used in RPG & SQL. SQL CHGOBJ requires the key fields to be set to MAP to force them into the SET clause.

Gotcha's

You must never *QUIT from a CHGOBJ since this may leave a lock on the record. Use PGM.*Record Changed = *Yes if any conditional processing is required inside the CHGOBJ.

Do not use a CHGOBJ or DLTOBJ over a PHY ACP because of a bug in the 2E SQL generator.
Next time the DLTOBJ.

Thanks for reading.
Lee.

Saturday, October 25, 2008

2e Development Standards - (Hints and Tips CRTOBJ)

Here we go again. Another in the series around 2e development standards. Today I thought I'd cover the CRTOBJ function type and share what I have gathered on the road over the years.

Cutting straight to the point.

Keep numbers to a minimum. In general there should only be one CRTOBJ, i.e. The default (*CRT), per file. This will have all or most parameters open and will not contain any complex processing. This makes the function reusable. The exception being surrogates that are generated inside the function and control fields like audit stamps.

These will generally be set to NEITHER and primed inside the function.

The exception to one per file would be for files that are archived or image copied. But this would not in general usage be the default *CRT. I.e. CRT Archive Image.

Shielding developers from the parameters. For simple files where records are only created under standard application control then time stamp fields may be set as NEITHER parameters and the DB1 context primed inside the CRTOBJ. Similarly, where an internal surrogate number is generated as part of the primary key then an appropriate function call over another file may be inserted. The surrogate number will normally be NEITHER, but it may be OUTPUT or BOTH if required for lower level files.

Wrappering the CRTOBJ with an internal. In some cases there may be a hierarchy of EXCINTFUN calls. Each call then performing some specific function at one level before calling the next level. This may be necessary, for example, because associated files have to be updated when a new record is created. Each main line request to create a new record will call the EXCINTFUN appropriate to the level of complexity required. If requiring just an image copy then it will call the CRTOBJ direct.

An EXCINTFUN designed to participate in the create process should have its parameters defined using the UPD ACP RCD, similar to a CRTOBJ.

Be aware of the probability of a record's existence. Where it is not clear whether a record exists for a given primary key then it may be necessary to define a special CRTOBJ, i.e. CRT/CHG, which suppresses the error condition and optionally changes the record. I will post a CHG/CRT v CRT/CHG v RTV/CHG v RTV/CRT debate once I have covered off the intricacies of each of the four main internal function types.

Gotchas

To suppress the error message & error condition if a record already exists then a CRTOBJ may be created where the PGM.*Return Code is set to *Normal in USER:Record Already Exists.

Alternatively if a record already exists and it is appropriate to change attributes then a call to a CHGOBJ may be inserted in USER:Record Already Exists. In this case it is important to ensure that the parameters passed to the CHGOBJ are PAR and not DB1 context.

It is very important in any CRTOBJ to ensure that the DB1 context for any fields not directly input by parameters are initialised or primed with data inside the CRTOBJ. (They will not be input if they are set to NEITHER or if the parameter definition is changed to KEY). There is no automatic move from PAR to DB1 for NEITHER parameters and the DB1 context may contain old data from previous database functions.

By default a SQL CRTOBJ will not check for duplicate records if there is no coding in any action diagram user point other than ‘Processing Before Data Update’. However, there are times when you must check, or when it would be better if you did. This is accomplished by inserting code, such as comments, in ‘Processing if Record Exists’. This also suppresses the error message if the row already exists.
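Pulling those last two gotchas together, a hedged sketch of a defensive CRTOBJ (file and field names hypothetical):

Pseudo Action Diagram - *CRT Customer (CRTOBJ over Customer)

USER: Processing Before Data Update
  DB1.Created Timestamp = WRK.Current Timestamp   (prime every NEITHER/derived field - there is no automatic PAR to DB1 move for them)
USER: Record Already Exists ('Processing if Record Exists' for SQL)
  (even a comment here forces the SQL generator to check for duplicates and suppresses the error message)
  PGM.*Return Code = CND.*Normal                  (optional - treat the duplicate as non-fatal)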
Next up. The CHGOBJ.

Thanks for reading.
Lee.

Monday, October 20, 2008

2e Development Standards - (Hints and Tips RTVOBJ)

I thought I'd have a break from the 2e naming conventions thread that has occupied most of the last month and launch into a series of smaller posts that talk about some of the hints and tips associated with the 2e function types.

Today's topic is the RTVOBJ. One of the most useful function types in 2e. One which is often misused or misunderstood.

The following are some hints and tips I have learnt, taught, embellished and/or stolen over the years.

RTVOBJ Tips

Naming. Most RTVOBJ functions can be divided into one of two types. Single record GETs to bring back attributes for a given key value. Multiple reads to perform some sort of process logic or test. Different naming conventions, 'GET By ...' or 'RTV Last four transactions' will identify these two types. See previous post re:naming standards.

Getting Records. A GET will be based upon an ACP with a unique key, usually the RTV, and will pass in a fully RST key value. The function will only return database attributes plus the xxx Record Found Y/N flag (optional for your site). The return code will not typically be explicitly modified. If the record is not found then output parameters must be cleared. Parameters, both INPUT key fields and OUTPUT attributes, should be defined using the same access path (ACP) that the function is based upon. Avoid using the individual field definitions unless they are not scoped by an access path.

On certain files with fixed key values the RST key may be set to NEITHER and primed inside the RTVOBJ at initialisation. Examples here are system tables that have one record.

GETs will return ALL attributes including spare fields. (A future post will talk about the benefits or otherwise of using spare fields). If the standard *GET will do, then use it!!! There may be other ‘Get xxxxxxxx’ functions currently in your models which may have a subset of the fields. You can replace these with a single *GET if preferred.
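As a hedged sketch, the default *GET then needs almost no code of its own (names hypothetical):

Pseudo Action Diagram - *GET Customer (RTVOBJ over the RTV ACP, fully RST key)

USER: Process Data Record
  PAR.Customer Found Y/N = CND.Yes     (the OUTPUT attributes come back from DB1 automatically)
USER: Record Not Found
  Clear each OUTPUT parameter          (blanks/zeros - callers must never see stale data)
  PAR.Customer Found Y/N = CND.No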

Other types of RTVOBJs will perform various processing requirements over multiple records; these should be prefixed RTV.

General Hints. When testing for certain conditions you should *QUIT the RTVOBJ for performance reasons once the condition has been found, thus minimising I/O and decreasing function runtime. There are exceptions here for *Arrays (future post).

In general try to avoid changing the default PGM.*Return Code setting, although there may be circumstances where this is justified. If you do need to identify a specific condition other than normal, the condition CND.*User QUIT Requested appears to be a standard I have seen at many companies.

If the RTVOBJ is built over a RSQ ACP and uses RST/POS key fields then explicitly define the key fields using a KEY structure. This will allow the developer to zoom into the key structure and understand what the key fields are.

Do not leave the fields as MAP. There is no relevance for a RTVOBJ and it only confuses new developers. CA please change this. I know it doesn't generate anything!!!!!!

Default RTVOBJs, i.e. *GET, should have parameters passed as RCD. Get By and RTV functions over alternative access paths should use the KEY/RCD method.

If using a RTVOBJ to retrieve reference data, it is best not to make the call if the value is optional. This saves an unnecessary I/O.

Gotcha's

Best Practice

There is a well-known bug (feature) in a 2E generated RTVOBJ in that USER:Exit Processing is executed if no records are found and there is no coding inside USER:Record Not Found. Therefore if any action diagram coding is added to USER:Exit Processing then you must also code something (comment or *QUIT) inside USER:Record Not Found. This ensures that the function always behaves in a predictable manner.
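In sketch form (the function name is hypothetical):

Pseudo Action Diagram - RTV Order Total (RTVOBJ, accumulates over many records)

USER: Record Not Found
  (deliberate comment or *QUIT - ensures USER:Exit Processing does not also run on the not-found path)
USER: Exit Processing
  PAR.Order Total = WRK.Accumulated Total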

If the data passed in as a RST or POS parameter is of a different domain and is alphabetic with a shorter length, then unpredictable results may occur. This is because the generated code fails to clear fields when building the RPG key list. Note: I could not replicate this in the latest release.

Best Practice

If you change the ACP that the RTVOBJ is based upon then also remember to change the parameter definition. It is best to keep these consistent for developer readability.

Version 7.0 of 2E allows RTVOBJs to be written over physical files. This reads the file in relative record number order, which is quicker than using a logical. This technique is useful when you need to read an entire file; a fix program would be a good example.

I will cover CRTOBJ in the next blog posting.

Thanks for reading.
Lee.

Sunday, October 5, 2008

2e Development Standards - (Further Naming Conventions)

So here we go. Another part in the CA 2E development standards series. Here I discuss/publish some naming conventions I have found useful with common functions in a 2E model.

I have worked at a few sites over the years that have varying levels of standards applied to the model and different approaches aiming to solve the same problem.

I have worked at sites that like to use an incremental prefix, i.e. RTV56 etc, some that leave the defaults and others that follow naming convention prefixes. As with any standard it is not the actual standard that is important. The area of concern for any development shop is the continued adherence to the standards.

What I have found works best for me is to use a common set of naming standards for the core functions in a model. Today we will discuss core functions and their naming conventions.

I will finish with a couple of examples of why I settled on my approach for 2E naming standards and as always I expect these to cause some debate in the community.

Default Create Object

I prefer that the default create object (CRTOBJ) on a file is named as '*CRT'. This function will have all the relevant processing that is appropriate to the most common use of the function. Some shops will have auditing and others use surrogate systems. The default will be named as above but other variations may be named 'CRT with Audit' or 'CRT (No Audit)'. However, if most of the time you only need to perform your sites default processing when creating a record, the '*CRT' is much easier for everyone to remember.

The important point here is that the * in *CRT means the default behaviour. The others are variations and are therefore not classed as default functions and will be named slightly differently, i.e. without the *. The CRT prefix should remain.

As only one function of any name can appear on any file, for *Arrays the standard is extended slightly to be '*CRT Array name' as all arrays share the same file. They simply use different definitions for the access path (sorry, array structure).

Default Change Object

Follows the same rules as the create object but will use *CHG and CHG prefixes as appropriate.

Default Delete Object

Follows the same rules as the create object but will use the *DLT and DLT prefixes as appropriate.

Also a *DLT (Cascade) can be created. This will actually be based on a RTVOBJ and will delete all child records before deleting the parent key, thus reducing orphaned records.

Default Retrieve Objects

As a retrieve object (RTVOBJ) can be used in many ways, I will refer to the two most common usages: the fetch of a record and the checking for a record's existence. Also remember that a RTVOBJ is used for processing files. The next two default functions are based on a fully restricted key.

I have used the *GET naming convention for a full record being fetched. This function will be based on the default retrieval access path. Variations that bring back fewer fields or use an alternative access path will begin with GET. If using another access path use the standard of 'GET By Access Path Name'. If getting a subset of fields use the 'GET subset' - replace subset with a meaningful description. Once again I am highlighting the default usage of this function type.

I have worked at one site that prefers to bring back a Record Found Y/N flag rather than use the return code. This is site specific and I'll leave you to decide on the correct usage of the flag versus the *Return Code. See my post on the *Return Code debate, which provides further details.

Another usage of the RTVOBJ is checking for the existence of a record. I have seen this implemented as either a *CHK or an *XST. Either is fine as long as you are consistent.

Again, note that for both of these function types the array version will need to supply the array name as a qualifier.

There are a further two usages of the *GET and *CHK/*XST function types. These relate only to files that use the 'Qualified By' file relation to describe the key. You will know that 'Qualified By' is used for file structures that have a term or a date to identify a value in the key, i.e. a product price file might be qualified by a field like 'Product Effect Date'. This saves you having to have a record for each day that a price is valid.

You will also know that this relation will automatically read backwards or forwards to find the correct record. Note this only works for the default retrieval access path. Therefore a *GET or *CHK/*XST will generally find a record, but your business logic is trying to determine if a record exists for the actual date. There are several ways to do this. You can either create an alternate RTV access path and write a 'GET By RTV 2' type function. This is perfectly reasonable.

However, and there always is one, I would suggest a function named '*GET (Exact)', used only for files with a 'Qualified By' relationship, which does additional processing to determine if the KEY is equal to DB1. This helps to identify to a developer, when they are programming, that the file has a 'Qualified By' key, and also negates the need for an unnecessary access path to be created.
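A hedged sketch of that additional processing, using the product price example from above:

Pseudo Action Diagram - *GET (Exact) Product Price (RTVOBJ over the default RTV ACP)

USER: Process Data Record
  .Case KEY.Product Effect Date is equal to DB1.Product Effect Date
     PAR.Record Found Y/N = CND.Yes   (an exact hit on the qualified key)
  .Otherwise
     PAR.Record Found Y/N = CND.No    (the 'Qualified By' read-back only found a nearby record)
  .Endcase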

This pretty much summarises my recommended standards for internal default functions. There are a couple of regularly used external function types that might also be useful and form part of my DBA recommendations of functions that should be created for every file.

The rule of thumb for creating default functions should be:

All Files - *GET, *CRT, *CHG, *DLT, *CHK/*XST and *GETEXT.
Files with a 'Qualified By' - *GET (Exact) and *CHK (Exact)/*XST (Exact)
Reference Files - '*SLT filename'
Parent/Child - *DLT (Cascade)

Default External Functions

The *SLT function is named this way as it will be used by the generator when building implicit field prompting code. The routine is also used to validate a record's existence in relation checking. As such, a default select must be first on the file and must never have its parameter interface changed unless the keys to the file change.

The *GETEXT (an EXCEXTFUN with a parameter interface the same as the *GET) is often used for several reasons.

RPG Limits. Used to limit the number of subroutines or files in a compiled program; not a fully recommended reason. There are other specialist reasons why you would want to use this routine.
Cursor awareness. Wanting to re-read an access path whilst already reading the same access path, therefore not corrupting the cursor.
Externalisation or Program API. Externalising a RTVOBJ function for use within a non-Synon environment, i.e. a CL program.

If externalising based on another access path, call the function 'GETEXT By access path'.

Best Practice

If creating these functions ask yourself this simple question: am I creating this because of a file limit, and if so, is this really the best section of code to externalise? Remember, reducing a program from 51 files to 50 (so it compiles) only leaves a maintenance issue for the next developer, and it might be you. After all, there is nothing worse than seeing a string of externalised RTVOBJ functions.

Summary

I have generally preferred a prefix and/or suffix when naming functions. The prefix has been used to identify the format of the function and the suffix a sub group. This will become clearer as the examples build up on this blog.

You will notice that some are prefixed with an asterisk (*). This is usually reserved to indicate internal Synon objects in a model. I have chosen to indicate my core functions using this method because the * for default function types means they are all grouped nicely on pick lists. This ensures that they appear at the top of the list of functions. I am aware of other sites using numeric or other acceptable characters to attain the same result.

Note: if you choose to use this approach, a known issue is that searching for a *CRT in the open functions screen doesn’t work. This is the only known limitation of this approach from a technical perspective. Others may well say that this encroaches on 2E objects. I fully agree that files should not start with *, but for the odd function that is core to your development model, no worries.

My next 2e naming conventions post will focus on ideas for other non-default functions.

Thanks for reading.
Lee.

Wednesday, September 24, 2008

2e Development Standards - (Extra Naming and General Standards Part 1)

Continuing with the series on 2E development standards.
A model-naming standard is very important, particularly for large models, to aid understanding and navigation. There are several types of naming standards to consider.

Model Names

A CA:2E data model has 25-character object text names associated with files, access paths, fields, field conditions, functions and messages etc. The guidelines for each object type are detailed below. As a general rule:

Apart from files, try and only use 24 characters for the field, function, access path etc. This makes it a lot easier to navigate around in the model and to use the ‘?’ prompting facility.

Capitalise all words except articles, prepositions, conjunctions, and the 'to' in infinitives. E.g. 'Currency of Invoice'

Avoid punctuation.

Try to establish a vocabulary of preferred names. Where abbreviations are necessary then they should conform to a common abbreviations list. It is recommended this list be generic across all of your development models and other languages and environments.

OS/400 Object Names

OS/400 object names and DDS names are required for actual implementation. CA:2E will assign names for all object types according to its own naming standard. Messages, DDS format names and fields will default to CA:2E assigned names.

If using a Multi-Version Multi-Model setup, extra care needs to be taken to ensure that any new model objects are created equally if coding into multiple models, i.e. bug fixing a current and a development release at the same time.

Queries can be written to keep tabs on objects and their names that do not cross-reference satisfactorily, i.e. Model Name to Model Name as well as Implementation Name to Implementation Name. This forms part of a wider blog on model management techniques. Something for the future?

Some development shops have chosen to replace the 2e naming conventions for some objects in a development model. The Pro’s and Con’s of this approach are as follows:-

Pro's

Nicely named objects. Great for when calling programs from a menu or command line.

Encourages a standard form and reinforces the principle of the naming conventions.

Easy to detect if not renamed, as the 2e prefix is still associated with the object.

Con's

After a while it is difficult to get a unique and meaningful object name, therefore, some names can seem ambiguous.

Extra work for a developer. Often forgotten.

Will generally require a mnemonic standard, i.e. XXYYYYnnZ, so you may as well have the defaults.

Model Files

All new files and file changes should be managed by the Database Administrator (DBA) or a central owner for that model. This, of course, depends on the size of your development shop, but it is good practice to separate the roles. This will allow good control of the model.

Implement a TLA? I have implemented a Three-Letter Acronym (TLA) following the name at some sites I have worked at, e.g. ‘My Sample File MSF’. The TLA will occupy the last 3 bytes of the available space for the file name. Note this is the only model object type where I have deliberately used the last byte.

These 3 letters should precede all fields for that file.

File descriptions should represent a single instance of an entity, e.g. 'Customer' not 'Customers'. However, if the name describes many attributes of a single record then a plural may be ok, e.g. 'Account Parameters'.

Avoid using descriptive names like 'Details' or 'File'. E.g. 'Customer' rather than 'Customer Details File'. However, sometimes it may be necessary to differentiate between different levels of detail, e.g. 'Order Header' and 'Order Line'.

Short names will avoid truncation when assigned by CA:2E to default DBF functions or foreign key fields using 'For' text, although the DBA processes of replacing or renaming fields should negate the need for the developer to be too concerned.

Try to group Parent/Child file relationships by using common name prefixes.

Future blogs will expand on the area of model management and database administration guides.

Access Paths

Unlike files, access paths can be created by a developer with *PGMR access. It is recommended that you query the 2E model files and keep an eye on any new access paths that are created and their purpose. These should be reviewed from time to time to ensure that best practice was considered when the access path was created.

Do NOT create new access paths without thinking if there is an alternative e.g. Using one that is similar and/or doing selection within the functions. If in doubt, talk it through with your DBA or a senior developer.

Virtuals should, in general, not be used, and should NEVER be on standard access paths. There may be times when they are the best solution e.g. For resequencing a query access path rather than creating a work file.

The description should try and explain the purpose of the access path.

In general there should only be 1 UPD (Update) style access path per file.

In general access path relations should NOT be dropped. If this is used then the default access paths should be clean and an alternative RTV or RSQ created.

No Virtual Virtuals. Avoid multiple nested levels of virtual fields if you are going to use them.

It is highly recommended that default access paths should never contain Virtuals or have Select/Omit. Use an alternative RSQ or RTV if virtual fields are deemed necessary.

If creating an access path over an assimilated file from another model then care must be taken with the default naming of the access path.

Next time we will talk about functions and I will provide some recommendations for naming standards as well as some standards for default functions.

Thanks for reading.
Lee.

Thursday, September 18, 2008

2E - Development Standards (That damn *Return Code ;-))

Yet another post debating some of the finer points of application development using CA 2E (Synon).

Over the years, apart from justifying my use of 4GLs and model-based development tools for rapid application development (CA 2E and CA Plex) to the non-believers, I have had my share of debates. Oh, the arguments this has caused have been blogged about before. See my series on the 3GL v 4GL debate.

The biggest single point of contention from within the 2e community itself is relating to the correct usage and trust factor of the simply named field, "PGM.*Return Code". It may as well have been named Devil's spawn with the amount of hot air I have seen it generate.

For those that don't know, the return code is, roughly speaking, a floating variable within a program that indicates the current state of the program at the time it is queried.

When programming a RTVOBJ as a full record fetch type function, or as a check existence style function, the developer generally has two options. They can either use the return code that is set implicitly by 2e (this can be overridden by the developer), or declare an explicit field like Record Found Y/N and pass back a value indicating success or otherwise depending on whether the record was found.
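To make the two options concrete, here is a hedged sketch of a caller testing each style (the function and field names are hypothetical):

Pseudo Action Diagram - calling function

Option 1 - implicit return code
  Get Customer - *GET
  .Case PGM.*Return Code is *Normal
     (record was found)
  .Endcase

Option 2 - explicit flag
  Get Customer - *GET (returns Customer Found Y/N)
  .Case LCL.Customer Found Y/N is Yes
     (record was found)
  .Endcase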

Below I indicate the Pro's and Con's as I see them.

PGM.*Return Code Method

Pro's

A return code is the default way to test for success or otherwise when calling a program.

Requires no additional developer intervention as nearly all the 2e function types automatically set the return code to the appropriate value.

Con's

A Return code is a global variable within the function and is only as accurate as the last line of code that set it. If additional code is added between the called program and the testing point then the original context of checking the return code is broken. Therefore developer beware.

Explicit Field Method

Pro's

Means you can save the value in a LCL field.

If other code is inserted into the action diagram, as long as the value is not overridden you can check this value later in the action diagram or pass the value into another function.

Con's

Adds extra work to default functions that previously required no extra coding. i.e. a record existence check.

If you have two or more calls, the result of the original call may need to be stored. If this is required then the flag can be set by querying the return code anyhow.

This approach doesn’t work for execute messages or user programs, which would generally use a return code, so the practice doesn’t fit all scenarios.

But what do I think?

Personally, I always preach the KISS principle and using the tools as they were designed.
Therefore, you will see me using the return code out of choice. That said, I would, as always, follow any incumbent standards even if they are wrong, though I would definitely have a go at explaining why they are wrong.......

Thanks for reading.
Lee.

Friday, September 12, 2008

Caring and conscience

As I write this, some time before it is published, I am less-than-fondly recalling my day in the office. It has been another of those days where head has been in hands, where walks were taken away from the desk to avoid sending later-embarrassing emails and where close colleagues were asked the question "Where does the queue start for strangling ?"

I work in a large multi-national company and regularly wonder whether all departments, accounts and countries within this behemoth operate in the same way as I have experienced now for almost four years. I have a pretty good feeling they're all similar in nature with differing degrees of madness, and I am almost positive that Scott Adams used to work here.

I've mulled the issue over with several colleagues over the years and one clear pattern has emerged. Those people whose work ethic I have any respect for have either left the company or changed to substantially different roles. Conversely, those who seem to have been around a long time are, with a few exceptions, the worst offenders when it comes to dodgy work practices.

What it boils down to is having a conscience.

Putting this firmly in the development context, I am going to ask you, the reader, a question by which I risk offending you deeply. As a developer, can you produce software which fits any or all of the below criteria?
  • You know it will fail under certain circumstances.
  • You know some aspects of design have not been considered and should be.
  • You know that it is doing something wrong.
  • You know that you haven't been as careful or thorough as you could.
Now, before anyone jumps on me I will qualify my question with this. Can you produce software meeting any of those criteria without bringing it to someone's attention to cover your own butt?

It's one thing to go to your architect and say "you didn't consider this" and get the response "don't worry about it, just build it without" or to go to your supervisor and say "you didn't give me enough time to test this" and get the response "don't worry about it, just deliver it" - especially if you can get it in writing. It's another thing to think either of these thoughts to yourself and then think "To hell with it, it's not my problem anyway".

Personally, I cannot bring myself to write any software that has any of the above traits without making an honest attempt to remedy the problem and, finally, getting someone in a position of responsibility to sign off the shortcomings if necessary.

Most weeks now I come across multiple people who don't fit this mould. Many times I find myself being their conscience and I'm usually not thanked for that. But there is one individual with whom it is often necessary to communicate and for all their shortcomings that have been discussed amongst my team we have finally come to a simple conclusion.

They just don't care. And that's sad.

Ignorance, misguidedness and inexperience can all be overcome in a positive way. But just how do you make someone care?

Thanks for reading.
Allister.

Monday, September 8, 2008

2E - Development Standards (Ad-Hoc Tips)

One of my new colleagues has been reading these posts whilst he is learning to program in 2E.

This part is a collection of some general tips in 2E around Format Relations, Function Options, Function Wrapping and Sharing Subroutines.

As always, any comments good and bad are welcomed as are requests for subject matter.

Format Relations Considerations

For certain function types relations can be dropped (PMTRCD and PRTFIL). This is useful to make your functions as efficient as possible.

For other functions you will generally have the option to influence the amount of default code and functionality that will be generated by the 2E code generators by detailing the level of referential integrity you wish the function to have.

Careful consideration should be given to setting the format relations to the desired level. These are MANDATORY, OPTIONAL, USER, NO ERROR and of course, DROPPED. I fully recommend the best practice of RTFM.

Function Options

Most function options are self explanatory and most of us use the industry standard settings, which don't deviate too far from the model defaults. Some function options in particular are important to understand in terms of how they are used.

Close down program, when set to ‘Y’, will close the program. If the program is likely to be called repeatedly in an iteration or a data loop (i.e. from a RTVOBJ) then consideration should be given to setting this to ‘N’.

Reclaim resources is generally set for functions that are on menus.

Share Subroutine. See the 'Share and share alike' section below on setting this value.

Function Wrapping

Function wrapping is the process of converting snippets of action diagram logic into a standalone function. This can be either an EXCEXTFUN or an EXCINTFUN. By copying the action diagram code into the developer's notepad you have the option to convert the code into a standalone function.

At this point 2e will create parameter interfaces for each of the function contexts that are used in the code snippet and reference these as duplicate parameters using the PR1 to PR9 special contexts.

Whilst this works perfectly well, you will notice that the resulting parameter interface is quite unwieldy. If this is the case, sometimes you might find it easier to build the function yourself. You choose; flip a coin.

To minimise this, the developer must manually make alterations.

Review the fields in the WRK and LCL contexts and determine if they are local to the code snippet only or need to be passed into the new function. If local (a before/after sketch follows these steps):-

Replace all action diagram logic referring to the PRx context of the fields with WRK or LCL.
Remove the now unused fields from the parameter interface.
If it turns out that no fields for a context are needed, then remove the parameter line entry for that context.
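For example (hedged; the field name is invented), a wrapped snippet might arrive as:

Pseudo Action Diagram - straight after wrapping
  PR1.Line Count = PR1.Line Count + 1    (a WRK field surfaced as a duplicate parameter)

and be tidied to:

Pseudo Action Diagram - after the clean-up
  LCL.Line Count = LCL.Line Count + 1    (local to the snippet, so the parameter entry is removed)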

Note the restrictions of the EXCEXTFUN and EXCINTFUN function types when wrapping code. This chapter will be coming soon.....

Share and share alike – Subroutines that is

Any internal database function can be generated as a shared or reusable subroutine. The system option (YSHRSBR) will be set to NO, therefore the individual function option must be used to identify a shared subroutine. Additionally an EXCINTFUN can be implemented as a subroutine rather than inline code and thus subsequently also shared.

The decision to make an internal function shared is subjective. In many cases there will be no particular advantage because the generated code is relatively small or because the number of times the function is called is small. Sharing a function will also in itself add extra lines of code for parameter passing.

You should only consider using shared subroutines where there is an obvious benefit in reducing the number of lines of generated code.

The answer to a large function with many lines of code may be to redesign the call structure rather than just using the easy option of shared subroutines. Note that a function called many times may not be the best one to make shared. It may be a higher level function in the call structure which provides the most benefit.

A shared subroutine cannot cope with specific indicators. Therefore, a shared subroutine function should not send error messages to a screen. It will be unable to identify the correct screen field error indicator so that the field cannot be highlighted and the cursor cannot be positioned.

It is good programming style that any function, which has output parameters, should always perform a logic path that initialises or sets output fields. E.g. A RTVOBJ to GET attributes should initialise or set all output parameters whether a record is found or not. A shared subroutine will always cause the output fields to be set with some value but this is not the case if the subroutine is not shared. Thus if a subroutine is shared and contains output parameters that are not set they may contain unpredictable data.

An EXCINTFUN may be made into a subroutine but not necessarily shared. This enables a *QUIT to be added; the *QUIT will jump to the end of the subroutine.

As a matter of best practice a model file's default CHGOBJ, CRTOBJ or DLTOBJ should remain unshared.

Next time the great PGM.*Return code debate.........

Thanks for reading.
Lee.

Sunday, August 31, 2008

2E - Development Standards (AD & Contexts)

Continuing the series of development standards for 2E (Synon). Today's topic is Action diagramming and contexts.
Once again in no particular order.

Beware of a negative list. Do not use a 'negative' LST on a STS field. E.g. if a STS field has VALs A, B, C and D then do not create a LST 'Not B' which contains A, C and D. If a new VAL is added to the STS field then the negative LST immediately becomes out of date and also needs amending.
Note: As with Access Paths and File changes, status conditions should be considered a DBA level change and appropriate care and attention taken.

When not to call the *RTVCND. Do not use *RTVCND to get a condition name if the STS field is blank. This is not required, as the result should be blank on the screen. You can use a derived field instead, although these should be discouraged if you are considering thin screen clients and business services for easy migration.

Do not use a *RTVCND in programs that may be converted to other platforms, as the feature is not supported on them. Not sure of the impact on ADCMS though.

Know how to quit. Never use *QUIT unless you understand explicitly where you are jumping to. Only a few recognised action diagram user points should ever contain a *QUIT. Note, you can quit out of a user defined subroutine. This is a useful technique if you wish to cut down on conditional code.

Pro's
Allows you to strategically exit a subroutine for efficiency i.e. RTVOBJ.
Allows you to suppress PRTOBJ.
Allows you to exit a user defined subroutine. Especially useful for functions with lots of validation logic.

Con's
If used in wrong subroutine can seriously impact the flow of a function.
Care should be taken if code is being copied into another subroutine, as the desired effect of the *QUIT may not be the same.
Not directly supported in CA Plex if you are considering a migration.

Do not use *Exit Program in internal functions.

When trying to determine the success or otherwise of a called function you should always check PGM.*Return Code.

Be extra wary of arithmetic operations that might overflow. In particular, DO NOT use any tricks to truncate numeric data, such as multiplying a 7-digit date by 1 into a 2-byte result field in order to extract the day number. This causes a numeric overflow and may give rise to invalid data. This is especially true (pre 8.1) for RPGIV generated programs, which will also abend with a runtime error that is not a good look to the end user.

Keeping it all in context.

The following section is about the usage of contexts in 2E.

NLL Context

NLL context can be used to discard any unwanted output parameters on both internal and external functions.

It will be used predominantly on GETs, to avoid having to define numerous RTVOBJ functions each with different parameters. Be aware, though, that a NLL context parameter still creates a usage for that field within the model, so you may create lots of spurious field usages. In versions of 2E before 8.1, the NLL context also created NLL fields in the generated source; this has been resolved and NLL fields are no longer generated.

Pros
Cleaner-looking action diagram.
In 8.1+ the NLL fields are no longer generated, so programs are smaller.
Easier and quicker to default whilst action diagramming.

Cons
Dumper fields make it easier to determine usage for impact analysis, as NLL fields are still shown as used.

WRK/LCL Context

Try to use LCL context in preference to WRK context. Unlike WRK context, LCL context is scoped only to the function it is used in; WRK context is scoped to the whole external function that the internal function is generated into.

WRK context should never be used to pass data between function calls, bypassing any parameter declaration.

Be careful about introducing LCL context into existing functions that use WRK context. Mixing the two contexts could be dangerous.

Be EXTRA careful about replacing WRK context. You need to follow through all usages in the internal functions that use it to ensure its integrity.

Be aware that LCL context fields are not the same as NEITHER parameters. NEITHER parameters are automatically initialised upon each entry to the function, whereas LCL context fields are initialised only upon program entry and can carry data over between invocations of the internal function. That is, LCL context can be used to cache function-level data.
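
In general-purpose terms (a hypothetical Java sketch), a NEITHER parameter is like a local variable initialised on every call, while a LCL context field is more like a static field that keeps its value between calls and can therefore act as a cache:

    public class LclCacheDemo {
        // Like a LCL context field: initialised once, retains its value between calls.
        private static int cachedCount = -1;

        static int getCount() {
            if (cachedCount < 0) {
                cachedCount = expensiveLookup();  // performed on the first call only
            }
            return cachedCount;                   // later calls reuse the cached value
        }

        static int expensiveLookup() {
            System.out.println("expensive lookup performed");
            return 42;
        }

        public static void main(String[] args) {
            System.out.println(getCount());  // lookup performed, prints 42
            System.out.println(getCount());  // cached, prints 42 with no lookup
        }
    }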

When reading, writing or processing lots of fields from a given record or model file, PAR context arising from NEITHER-parameter access path definitions is preferable to LCL context. PAR context requires a specific parameter declaration that can be queried, and it can participate in the action diagram's logical parameter defaulting mechanism.

Do not use WRK or LCL context as the target of a *MOVE ALL. It doesn't work: you can enter these in the action diagram and 2E will generate a comment, but no code will be created.

JOB Context

Care should be taken with the JOB context *System Timestamp field. It is refreshed only when it is moved into a field; a subsequent reference to the JOB context field therefore returns only the last value set, which can be zero if it has never been set.
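
The behaviour is analogous to caching a clock reading (a hypothetical Java sketch): the stored value changes only when you explicitly capture it, so reading it before any capture yields zero.

    public class StaleTimestampDemo {
        // Like JOB.*System Timestamp: only refreshed when explicitly moved into a field.
        private static long lastCaptured = 0L;    // zero until the first capture

        static void capture() {
            lastCaptured = System.currentTimeMillis();
        }

        public static void main(String[] args) throws InterruptedException {
            System.out.println(lastCaptured);           // 0 - never captured yet
            capture();
            long first = lastCaptured;
            Thread.sleep(50);
            System.out.println(lastCaptured == first);  // true - unchanged until the next capture()
        }
    }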

CON/CND Context

Best Practice - Always use conditions (CND) rather than constants (CON) for non-trivial values. That is, values other than blank or zero. This allows the exact usage of special values to be obtained from the model, and allows translation of condition names. Exceptions are simple values – see below.

CON *BLANK and CON *ZERO are fine for input parameters.

CON is also acceptable for trivial arithmetic operations such as incrementing by 1, sign reversal by multiplying by -1, percentages by dividing by 100, special characters used in *CONCAT such as periods or commas, and character position numbers used in *SUBST. National language impact should be considered, i.e. when to use a comma or not; this is especially true for German, which uses a comma as the decimal character.
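
The decimal-separator point is easy to demonstrate with standard locale-aware formatting (a Java sketch using only the standard library):

    import java.text.NumberFormat;
    import java.util.Locale;

    public class DecimalSeparatorDemo {
        public static void main(String[] args) {
            double amount = 1234.56;
            // UK/US style: 1,234.56
            System.out.println(NumberFormat.getNumberInstance(Locale.UK).format(amount));
            // German style: 1.234,56 - the comma is the decimal character
            System.out.println(NumberFormat.getNumberInstance(Locale.GERMANY).format(amount));
        }
    }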

CON.xxxxxxxxxx can also be used to initialise descriptions and text fields in one-off initialisation programs.

Note the CON context is not supported in CA Plex and must be converted before any migration can occur.

CON values do not appear in impact analysis.

CON values cannot be exposed for use via the NLS product.

If you have any questions or enquiries about anything published on this blog, please do not hesitate to contact me. I welcome all comments and opinions.

Thanks for reading.
Lee.

Thursday, August 28, 2008

2E - Development Standards (Parameters)

This is Part 4 in a series of articles.

Once again I have decided to focus on development standards and best practices for 2E (Synon) development. You can search this blog for the other posts around performance, defensive programming and general coding considerations.

Today's topic is parameters and how they are used and defined in 2E.

In no particular order, my tips and techniques in this area are as follows:-

Use files, access paths or *Arrays as pick lists. In general, try to use access path (ACP) record (RCD) structures as picking lists for parameter definition, rather than individual *FIELD entries. This makes it easier to identify the impact arising from future database changes.

Never use the last parameter line. If you find yourself using the last parameter line, it is time to consider re-architecting the function's parameter interface using an array structure or access path picking lists. It is quite selfish to use that last line when another developer will need to maintain the function. Likewise, if you are creating a function from scratch and already using 6 or 7 parameter lines, it would be prudent to consider a parameter array.

Choosing the correct passing method. In general RCD is preferable to field (FLD) for normal parameter definition, since it has a performance benefit: only one actual parameter address is passed rather than many. This is the preferred method for program-to-program calls.

However, if an external function is likely to be called from a CL program, menu, message or command line, then it is easier to use FLD, because passing values (especially numeric and date fields) via a command line or CL can get complicated. A general-purpose sketch of the trade-off follows the pros and cons below.

Pros
RCD uses fewer PAGs and therefore less system memory.
Easier to determine the impact of file changes using the in-built analysis tools.

Cons
Harder to call programs directly when required.
Complicated CL programming for parameter passing.
Incorrectly declares NEITHERs as parameters, which can cause confusion.
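
Here is that sketch (hypothetical Java; the CustomerRecord type and its fields are invented): one record reference is cheap to pass and easy to impact-analyse, while individual fields are simpler to supply from a command line or script.

    public class ParameterPassingDemo {
        // RCD-style: one reference to a structured record.
        static class CustomerRecord {
            String code;
            String name;
            int creditLimit;
        }

        // One address passed; a file change means changing only the record type.
        static void printCustomer(CustomerRecord rec) {
            System.out.println(rec.code + " " + rec.name + " " + rec.creditLimit);
        }

        // FLD-style: every field is its own parameter - easy to call ad hoc,
        // but each file change ripples through every caller's signature.
        static void printCustomer(String code, String name, int creditLimit) {
            System.out.println(code + " " + name + " " + creditLimit);
        }

        public static void main(String[] args) {
            CustomerRecord rec = new CustomerRecord();
            rec.code = "C001";
            rec.name = "ACME LTD";
            rec.creditLimit = 5000;
            printCustomer(rec);
            printCustomer("C001", "ACME LTD", 5000);
        }
    }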

Understand the impact of a NEITHER parameter on the generated code. If an ACP definition contains only NEITHER parameters, always use FLD rather than RCD. This ensures that the parameter definitions are not implemented as actual parameters, and calling functions do not need to be regenerated as the definition changes.

This is because NEITHER parameters passed as RCD will generate a parameter entry in the RPG. It is therefore strongly recommended that NEITHER parameters are always passed as FLD.

No harm in using *Arrays to define your parameters. If a suitable ACP is not available, or if more than 9 parameter block lines are needed, then an array may be used for parameter definition. Typically an array is a good choice for bringing together various NEITHER MAP fields on a device design.

Never use *NONE as the ACP definition. It is not possible to track the usage of *NONE for any model file. (This has been resolved, and *FUNPAR usages are now shown in 8.1 and above, but I have included it as you may be on an earlier version of the tool.)

Tip - MAP = device design only. Only use MAP where it is being used specifically to map a field to a screen design. In all other cases turn the default MAP off. This makes it clearer where MAP is actually being used; it serves no purpose on most other functions other than to confuse a beginner.

Trick - getting at the CTL context for a SLTRCD. Use NEITHER MAP parameters to place values on the CTL format of a Select Record function. It is not possible to influence these CTL control fields via the action diagram, as there is no control user point; instead, set the parameters to NEITHER MAP and populate them during initialisation.

You can also use MAP when changing a primary key via a CHGOBJ.

Be explicit with a parameter's type and role, and also ensure that the parameter field name is meaningful.

Best Practice - Never use BOTH parameters for convenience, e.g. totalling inside a RTVOBJ. I have witnessed far too many functions with a BOTH parameter for a total which then gets initialised to zero in the initialise user point. Any developer choosing to use such a function will immediately believe that they need to supply a valid value.

This is bad practice. The parameter interface is more important than saving a developer a little extra effort inside the function.

Best Practice - Parms, not WRK context. Parameters should always be passed if values are going to be used between functions. DO NOT rely on the fact that WRK fields are global.

Use sequencing to indicate parameter importance. NEITHER parameters can be sequenced as 999 on the EDIT FUNCTION PARAMETERS screen; this helps identify the actual parameters. If you require both NEITHER and normal parameters (i.e. input, output or both), declare the file/access path twice, placing the NEITHER parameters on the second declaration and remembering to sequence it as 999.

The only exception here is if you are using duplicate parameters; there is a 2E limitation that these must be sequenced 1 to 9 only.

Thanks for reading.
Lee.

Monday, August 25, 2008

2E - Development Standards (General Coding Considerations)

My regular readers will be aware that I have begun adding articles related to 2E (Synon) development. See links on Defensive programming and Programming for performance.

Future chapters planned for the coming weeks include:-

Parameters
Action Diagramming & Action Diagram Contexts
Messages and Message Handling
Subroutines
Wrapping, Relations and Function Options
DBA Best Practices
Function Type Best Practices
Gotcha's Guide
Suggested Naming Conventions
+ Much more.

When I have finally worked out how to use the Wiki, I will consolidate all my 2E postings and update them there.

I welcome all comments regarding these standards and guides and will endeavour to update the blog postings with your additional feedback and comments. At this stage I would also like to publicly thank Darryl Millington, Ray Weekes, Kim Motteram, Rene Belien, Rudy Moser and Mark Schroder for providing materials for review. As well as my now ex colleagues Kay, Kate, Nilesh and Sumit for their input and validation.

Today's theme is general programming considerations.

Functions should be as readable as possible. Try to fit the whole action diagram onto a single page by hiding case blocks and/or using sequences, and give these constructs meaningful names. Code should be easy to follow; where it is not intuitive, add sufficient comments.

Always group and tidy up your code. Add comments where functionality is not obvious, and always add comments for actions, the *COMPUTE built-in function, iterations, case blocks and sequence blocks.

Be careful when adding sequence blocks to tidy code. Ensure that there are no usages of the *QUIT built-in function that can be negated by nesting into a sequence block. Remember, *QUIT only exits the sequence it is in.
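
The scoping rule is similar to a labelled break in other languages (a hypothetical Java sketch): the break exits only the block it names, so wrapping the code in another block changes where control resumes.

    public class QuitScopeDemo {
        public static void main(String[] args) {
            boolean errorFound = true;
            validate:
            {
                System.out.println("checking field 1");
                if (errorFound) {
                    break validate;  // like *QUIT: jump straight to the end of the named block
                }
                System.out.println("checking field 2");  // skipped when we quit early
            }
            // If the break were nested inside a further, inner block and targeted
            // that instead, control would resume inside 'validate' - the hazard above.
            System.out.println("resumes after the block");
        }
    }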

Remove commented out code. Commented out code should only be used to temporarily change code and for assisting with debugging. Once changes are considered permanent all commented out code should be removed. This helps with both action diagram readability and impact analysis.

Keep ‘parameter passing’ to a minimum. Functions are more reusable if they do any required retrieving of database values internally. However, consider the performance implications of this for large applications or performance-critical programs, where the opposite will be more practical.

Define all user programs to the model (CL, COBOL, RPG or RPG ILE). These should be declared in the model; if there is no obvious file to associate them with, add them to a generic scoping file. There must be a special case to justify writing in native COBOL, RPG or RPGLE; give it careful consideration and, where practical, discuss it with your peers.

Modularise. When copying large chunks of code from one function to another, put the code into an EXCINTFUN or EXCEXTFUN function. This avoids ending up with the same code in lots of programs.

Centralise common business calculations. Common calculations such as GST, interest or withholding tax (finance examples) should be defined in one function and reused.
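
A minimal sketch of the idea (hypothetical Java; the rate and rounding are illustrative only):

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class TaxCalc {
        private static final BigDecimal GST_RATE = new BigDecimal("0.125");  // illustrative rate

        // The one and only place GST is calculated - every caller reuses this.
        public static BigDecimal addGst(BigDecimal exclusiveAmount) {
            return exclusiveAmount.multiply(GST_RATE.add(BigDecimal.ONE))
                                  .setScale(2, RoundingMode.HALF_UP);
        }

        public static void main(String[] args) {
            System.out.println(addGst(new BigDecimal("100.00")));  // 112.50
        }
    }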

Ensure correct setting of critical function options.

Close Down Program flag. Set to 'N' if the program is called repeatedly in a loop or from a processing function like a RTVOBJ. This means that any files that are open will remain open, improving the performance of the function dramatically. Set to 'Y' if the program is typically a top-level program.
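
The same keep-it-open idea in general-purpose terms (a hypothetical Java sketch; the file name is invented):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class KeepOpenDemo {
        // Stays open between calls - the close-down = 'N' idea.
        private static BufferedReader reader;

        static BufferedReader getReader(String path) throws IOException {
            if (reader == null) {
                reader = new BufferedReader(new FileReader(path));  // open cost paid once
            }
            return reader;  // later calls reuse the already-open file
        }

        // The 'Y' style: every call pays the full open/read/close cost.
        static String readFirstLine(String path) throws IOException {
            try (BufferedReader r = new BufferedReader(new FileReader(path))) {
                return r.readLine();  // fine for a one-off top-level call, slow inside a loop
            }
        }

        public static void main(String[] args) throws IOException {
            // "customers.txt" is a hypothetical file name for this sketch.
            System.out.println(getReader("customers.txt").readLine());
        }
    }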

Reclaim Resource flag. Set to 'Y' on W/W (work-with) functions, e.g. W/W Clients, W/W Funding Account or other top-level functions. Otherwise lots of files stay open all day and will eventually consume more system resources than necessary.

Best Practice - updating balances or other value fields. Never use a RTVOBJ to get values from a record on the file, calculate the new values and then do a CHGOBJ with the new values. Instead, pass the values to be added as separate fields into the CHGOBJ and add them onto the DB1 context fields inside the CHGOBJ before the update. This takes care of the automatic AS/400 record locking on the update, as well as ensuring that you are incrementing the latest value available.
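
The same rule applies outside 2E. A hypothetical Java/JDBC sketch (table and column names are invented): add the delta inside the update itself rather than reading, adding in the program and writing back, which loses updates under concurrency.

    import java.math.BigDecimal;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class BalanceUpdateDemo {
        // Good: the increment happens atomically inside the UPDATE, under the row lock.
        static void addToBalance(Connection con, String account, BigDecimal delta)
                throws SQLException {
            String sql = "UPDATE ACCOUNTS SET BALANCE = BALANCE + ? WHERE ACCOUNT_ID = ?";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setBigDecimal(1, delta);
                ps.setString(2, account);
                ps.executeUpdate();
            }
        }

        // Bad (the RTVOBJ-then-CHGOBJ shape): SELECT the balance, add in the program,
        // UPDATE with the computed value. Two jobs doing this concurrently can both
        // read the same starting balance, and one increment is silently lost.
    }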

Use appropriately named fields. When using a field as a work field or a screen/report field (i.e. a user field), create one with a meaningful name if one does not already exist. Using a field just because it has the correct domain attributes can be very confusing for the people maintaining the function after you. Avoid generically named fields like Count 1 or Text (25).

Consider correct use of action diagram constructs. Using *OTHERWISE blocks for grouping code is misleading; always create a sequence block instead. Note that with RPG4 the limit on the number of subroutines is unlikely to be reached. The action diagram colouring of pink, green, white and blue has inherent meaning, and this fundamental principle should remain in your action diagramming. Put simply: pink equates to a decision, green to an action, white to a collection of statements and blue to an iteration.

The *OTHERWISE is still valid if deliberately trying to force a compilation error.

Avoid use of derived fields in screen designs. Not all of the CALC function points work correctly, and derived fields also hinder your ability to externalise code for thin-client development.

Thanks for reading.
Lee.

Sunday, August 17, 2008

Version 1.0

I thought that today I would blog a little about the concept of version 1.0.

There is a saying in IT that you never buy version 1.0 of a product. Obviously this can't be universally true, as no new products would ever sell or achieve market penetration. And there must be those out there who enjoy working with an early version of a new tool, or perhaps even an alpha or beta.

I guess these people sit on a scale, from the conservative at one end to those riding the crest of a technology wave (rogue developer? entrepreneur?) at the other. The latter are clearly hoping to find the next super skill or application that they can become expert in, and to be recognised as an early adopter and advocate of the technology. Along with the kudos and recognition also come the higher rates available.......Money, Money, Money..... "It's a rich man's world......". Guess who saw Mamma Mia the movie recently.

It was brought to my attention recently that an editor for a new game called Spore is available on the Internet. The editor allows you, the player, to create your game characters.

Nothing too special at this stage I hear you say.

You are quite right, but consider that the actual game is not yet available. At the time of writing it has 'gone gold', which means it is nearing release. I understand that millions of characters have already been created, which is a pretty realistic barometer of the game's potential success when it is finally released.

It is this model of releasing something early and benchmarking the idea that has probably allowed the developers to adjust the budget as required. How many people spend years building a product, or a website for that matter, and then struggle to get the sales or break-even user numbers to make the project viable?

Have we just witnessed version 0.0 as the new baseline, and a shift in mindset towards an internet full of early adopters? Has the balance changed?

I will certainly think differently about the version 1.0 debate in future. After all, how do you strike the balance between releasing enough information to generate interest in your product or service, but not so much that you give away your plans and secrets to your competitors?

I will watch this unfold with a great deal of personal interest as I begin to build my online businesses and portals that I have been promising the world for so long.

Thanks for reading.
Lee.

Thursday, August 7, 2008

Is honesty always the best policy?

At various times in my life I have had this dilemma and I am sure many of you reading this have too.

I have always believed that honesty is the best policy.

Why?

This was drummed into me as a child. You would all remember being younger and hearing one of your parents or elders in your family/whanau say, “Just tell the truth, you won’t get into trouble as long as you tell me what happened.”, “Nobody likes liars.”, “You must be honest otherwise the truth will come out one day and come back and bite you on the bottom.”.

In various articles in this blog I have made reference to my experiences in general and I have been 100% honest in delivering balanced articles. IMHO.

Just recently I found myself in a situation where I was beginning to doubt my values/beliefs in this area. This was quite upsetting and worrying after 38 years of preaching and living by the aforementioned ideals.

So here are a few examples of where we all probably choose to ignore this policy.

Consider your answers to these questions.

“Honey, does my bum look big in this dress?”
“Who farted?”
“Do you love me?”


After looking at a baby picture “Isn’t he/she a stunner?”

Or, have you ever heard yourself saying that “The cheque is in the post” knowing full well that the cheque hasn’t even been written, let alone cashable.
Actually I have thought of a few extra ones but I will leave those to your imagination…………

So back to that dilemma. I truthfully explained why I was resigning from the company I worked for at the time. I explained the major reason quite clearly, and to be fair there were a few more, including my desire for a fresh challenge so I could rediscover that "Woo factor!".

But.
The truth, however, backfired I guess. Why?
I was ignored for the remainder of my notice period and I struggled to give the business the proper handover it deserved. I guess, as with all leavers, my name will be mentioned whenever things go wrong - at least until the next person leaves. Actually, I found out today that my name was mud over some implementation that I was working on, despite my willingness to ensure it was completed.........

In summary, I believe that on this occasion it was because the truth hurts. Sometimes you find yourself trying to be open and honest, but if the listener doesn't like it, they don't hear it.
I guess this has taught me that there is no predicting people's reactions.

So I guess it all depends on the context in which the honesty is received.

In my case, I believe on reflection that through the combination of honesty (on my part), the truth hurting (the receiver's reaction), and arrogance and politics (the receiver's modus operandi and expertise), I may have chosen the wrong option. Or did I?
I doubt this situation will happen again, but if I do find myself here again in the next 25 years of my working life, I will then have a real dilemma.

Do I stick to the values that I was brought up with?
or,
  • Stoop to the lowest common denominator and play dead like so many others do, stagnating in their jobs.
  • Add or have no opinion or worth to an organisation.
  • Destroy that drive and energy within me.


I don’t know the answer to this question at this stage but it will either be:-

I’ll just say nothing, stay patient and look forward to the time when karma resolves my issue for me.

Or.

Tell it as it is and hope for a more professional response, keeping one of my core personal life values in place.

What would you choose? I'm leaning to the latter.

Thanks for reading.
Lee.

Monday, July 28, 2008

The context of programming

Recent events in my place of work have led me to ponder the concept of programming context once again. I suspect it is a pervasive concept, as I seem to come across it on a regular basis in quite different circumstances. Let me explain.

If I am asked to write a program that accepts two numbers and returns a third number, being the product of the two, then there is not a lot more I need to know. Perhaps knowing the possible range of input numbers would be useful, but really this is a pure mathematical problem and has no context.

If I am asked to write a program that accepts two numbers and returns a third number - the number of residential addresses in a database that fall between those two numbers - then there is quite a bit more I need to know. I need to know whether just street numbers alone should be checked, or whether street names should be included (5th Avenue, for example). Even within street numbers alone, what about flat numbers? It's a bit more complex than the first example, as there is now a context: what are we actually trying to achieve here?

Now in a third example, I am asked to write a program that accepts two numbers (x and y) and returns a third number: the number of active users who have been logged in between x hours and y hours. Again, the context is complex. How do I define a "logged in user"? Do I count one interactive session as one user, or do I need to reduce this to unique users, because some may be logged in more than once? What about "special" users such as system-supplied IDs? Should all of them be counted, none, or only some?

But the third example is even more complex than I have shown so far. Consider that this function needs to work in a function test environment, in an integrated test environment, and in production. There are some processes that occur only in production, some only in test and some on both. Will this affect the outcome? Is testing on the test system going to be good enough to know it works in production?

Hang on a minute - aren't we talking about system programming? Well, maybe yes and maybe no. If this program is needed to manage software licensing, then it's a system program. But, if it is needed to manage the number of customer service representatives assigned to different parts of the call centre, then no it is not system programming. If it is being used to achieve load balancing for application service jobs then it could go one way or the other.

Now that was a somewhat contrived example, but it helps me to illustrate my point. In all three cases, take two numbers and return a third. The first example I would expect absolutely any programmer to be able to achieve. The second example I would expect any programmer to be able to achieve if complete requirements are provided. If the problem is only defined as I described, then you would need an analyst programmer. For the third example, who would you give the job to, generically speaking?

This is where I see a massive gap. I myself have been fortunate to have been involved in both application and systems programming fairly extensively, and even if I say so myself, I think I'm pretty good at covering off the sorts of issues described above. It also means I frequently see other programmers failing to account for the "system" level factors.

In a specific recent case, a developer insisted that my team (who are a development & test support team) replace one version of a program with another so that it 'behaved like production'. That should have been the first red flag. (I was not involved at this stage so I don't know whether I would have caught this at the start.) Why was the test system behaving differently to production?

Well, the developer got his wish and proceeded to make his related code work. Meanwhile, large numbers of other people were tripping over the problems introduced. After several days of analysing the problems, we concluded we had to put things back the way they were. To quote Spock: "The needs of the many far outweigh the needs of the few." This programmer was looking at far too narrow a context in defining what needed to be done. He had no concept of the roles this particular program was playing, nor of the large number of dependencies it had. For instance, an automated regression testing suite failed completely because of the change.

But perhaps the most spectacular case of lack of context that I have ever encountered was in a previous role.

The product in question was enterprise software used all around the world, and it was incredibly complex. Customers had requested the ability to use off-the-shelf reporting tools (such as Crystal Reports) to create their own reports. The development organisation realised this meant less work on such things for us and considered it a good idea - but a dangerous one. Great, they can write their own reports; but how do you let them into a massive, complex database without (a) massive confusion and (b) the opportunity to corrupt it?

So a plan was hatched to deliver a new library (for self containment) of logical files (views) which would collate the data into meaningful constructs and, importantly, be read-only. My team (again in development & test support) figured out how to deal with this new library for the purposes of the testing done on it. For the most part we just manually created and destroyed these libraries as required and used some of our own toolset which, importantly, is not delivered to customers.

At some point I got to thinking: how are we going to deliver this? The initial response I got from the designer was "on a tape/CD with the rest of it". To cut a long story short, I soon proved that it is impossible to ship a library full of logical files. Period. Can't be done. I took this information back to the designer, along with a rough sketch design of a simple tool which could alleviate the problem and also be useful within the development shop.

The response? "We didn't budget for that." * Sigh *.

In the end, I wrote a quick (hack) version of that tool on the day we packaged the software. Some months later someone contacted me saying that there was a bug in my code. I sent them to the designer to have it sorted out.

Thanks for reading.
Allister.

Monday, July 21, 2008

2E - Development Standards (Defensive Programming)

Part two in the series takes a look at defensive programming techniques and how they help you to create reliable programs.

The following are guidelines for creating robust code.

Always check for a divide-by-zero (runtime) error by checking the divisor field for a zero value prior to performing the *DIV operation.
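
A minimal sketch of the guard (hypothetical Java; the field names are invented):

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class DivideGuardDemo {
        static BigDecimal average(BigDecimal total, BigDecimal count) {
            // Check the divisor before dividing, mirroring the *DIV guard.
            if (count.signum() == 0) {
                return BigDecimal.ZERO;  // or raise an application error message
            }
            return total.divide(count, 2, RoundingMode.HALF_UP);
        }

        public static void main(String[] args) {
            System.out.println(average(new BigDecimal("10.00"), BigDecimal.ZERO));       // 0
            System.out.println(average(new BigDecimal("10.00"), new BigDecimal("4")));   // 2.50
        }
    }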

Never move numeric fields into a field with a smaller domain.
With RPG this can cause truncation of the value, and with RPG ILE (pre 8.0) it will cause a runtime error.
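
In general-purpose terms (a Java sketch): narrowing into a smaller type silently truncates unless you check the range or use an operation that fails loudly.

    public class NarrowingDemo {
        public static void main(String[] args) {
            long big = 4_000_000_000L;             // does not fit in a 32-bit int

            int truncated = (int) big;             // silent truncation: -294967296
            System.out.println(truncated);

            // Fails loudly instead of corrupting the value:
            try {
                int safe = Math.toIntExact(big);   // throws ArithmeticException here
                System.out.println(safe);
            } catch (ArithmeticException e) {
                System.out.println("overflow detected: " + e.getMessage());
            }
        }
    }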

Ensure that your iteration values and counters are large enough to cater for your anticipated maximum.

Ensure that your field sizes for database attributes are sized sufficiently to cater for the number of records anticipated.

Ensure that your arrays are sized to cater for the maximum number of array records anticipated, thus avoiding array index out-of-bounds issues.
Remember to balance this against not over-sizing the array and causing performance degradation.

Always ensure that any substring operations utilising position and length parameters are within the range of the target field, thus avoiding substring out-of-bounds errors.
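
A minimal sketch of the bounds check (hypothetical Java; the helper is invented):

    public class SafeSubstringDemo {
        // Clamp position and length to the target value before substringing.
        static String safeSubstr(String value, int position, int length) {
            int start = Math.max(0, Math.min(position, value.length()));
            int end = Math.max(start, Math.min(start + length, value.length()));
            return value.substring(start, end);
        }

        public static void main(String[] args) {
            System.out.println(safeSubstr("CUSTOMER", 0, 4));   // CUST
            System.out.println(safeSubstr("CUSTOMER", 6, 10));  // ER - clamped, no runtime error
        }
    }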

Remember to check the function options for your function to ensure appropriate behaviour, especially close down program and reclaim resources.

Never use the WRK context in new programs; use LCL and HLL instead.
If you choose to fix up old WRK fields, remember to check all other internal functions within your object and ensure that the fields aren't used elsewhere. This used to be a trick in the old days to bypass the parameter-passing limits, back before arrays and when structure files were a pain.

Avoid HLL user source and programs. If you do write user source for RPG, first convert the program to RPG ILE and write one piece of user source. The sign of a well-managed and maintained 2E model is the percentage of HLL code versus generated code; if your model is more than 5% HLL then you have issues, and a history of developers who have misunderstood the purpose and philosophies of model-based development. IMHO.

Always pass parameters to user source. Do not rely on the generated field names.

Avoid use of CON context as these values are not available for impact analysis and localisation.

Avoid manual source modification. Use a program and the pre-processor directives to amend code automatically.


Source Modification – Special Notes

Manual source modification must be avoided at all costs. If source must be overridden, a source-modifying program should be written to perform the override automatically, after generation and before compilation, using the pre-processor.

In Summary:-

- Do NOT consider source modification unless absolutely necessary.

- Avoid the use of fields that use incremental counters for naming, i.e. LCL context YLnnnn.

- Avoid adding parameters above a field declared as a modified field; always try to ensure that parameter fields which are modified are at the top of the parameter declaration list.

- Try to avoid using fields larger than 32k. If 64k is required, consider looping in 32k blocks, as the 64k limit may one day be exceeded.

- Consider a naming standard to help you easily identify a modified program and its modifying program.

- Consider centralised methods to verify that source modification programs have been successful, rather than depending on a developer manually checking the modified source.

Thanks for reading.
Lee.

Monday, July 14, 2008

Knowledge capture & use in technical support communities - Part 3

In Part 2 I looked at how to capture expert knowledge. In this final instalment, I will offer suggestions on where to store that knowledge, some differing uses of these concepts and offer some final insights into why I wrote the original version of this article.

Once again, I lead off with a small overlap to set the scene.

Electronic storage for fast access

Following the structuring process described in Part 2 introduces one significant disadvantage in a paper-based documentation repository: frequent referencing of other documents forces the reader to flip pages, or to have multiple documents arranged on the desk, in order to complete a single process.

Simply storing these documents electronically, such as Microsoft Word files in a LAN folder, is not enough to deal with this problem; it merely shifts the emphasis to constant mouse-clicking and still does not help with comprehension of the process as a whole.

The answer is hyper-linking. Every single reference to another document should be turned into a hyper-link to that document. This guarantees simple, fast, unfettered access to the linked information and, in most cases, an easy return path to the original document.

It is important to choose the document delivery technology with this method of usage in mind. Although MS Word provides inter-document hyper-linking capabilities and is an excellent document editor, it leaves a lot to be desired as a document reader. Better choices are Lotus Notes, or HTML. Lotus Notes serves as both editor and fast viewer. HTML requires a separate editor (and there are many to choose from), but has a ubiquitous interface if you are considering a large audience for the documentation.

Although I would recommend Lotus Notes as an excellent delivery mechanism, it is important to avoid the use of Notes 'Views'. Rather a default, or 'home', document should be launched easily from a bookmark and all navigation from that point should be via hyper-linking. This gives immediacy to the navigation by making it all point-and-click. This typically avoids the need to work with Notes' view characteristics like 'twisties' and scrolling. (Note that both of these can be used within a Notes document if desired.)

Tech-centric versus customer-centric

At the beginning of this series, I referred to the workings of a technical support team. For such a team, there are two key audiences for a documentation repository following the guidelines described in this document.

Perhaps the most important audience, in terms of covering exposure, is the team itself. Sharing the expert knowledge around the team ensures that individual team members do not become indispensable.

However, often the single biggest gain to be made from good documentation is providing the team's customers with 'self help' information. The audience level for such documentation will necessarily be much lower than for internal documentation, and it will therefore take more time to prepare, but the ability to point customers to a self-help document for a common problem can save the team enormous amounts of time - time which can then be used to improve the internal documentation, so you reap all of the benefits.

The author's experience

In writing these articles, I am speaking from experience. I have built a customer-centric and a tech-centric database along these lines and they have been very successful. Both were built as Lotus Notes databases.

The first one built was the customer-centric repository, which was deployed to up to 150 developers and testers. This originally came out of a FAQ document introduced during a pilot implementation of a source configuration management product. The documents were listed in a Notes view in broad, simple categories. Each document title was phrased as a question, the answer to which lay inside. Many of the documents referred to others for pre-requisite information.

My team used this database to help keep the load down in our helpdesk-like operation. I estimate that once the database had matured, over 90% of customers who were referred to one of the documents did not need to make further contact with us for that issue. It is clear from the numbers and the nature of calls we received that many customers went straight to this repository and never needed to call us.

The second database was tech-centric and used all of the principles described above. Although the closure of the business meant this repository never quite made it to 'complete' status, the documents that were created (over 60) proved the points I have made above. New members of the team who were technically competent (that's why we hired them) but who had scant knowledge of our processes were able to perform fairly complex tasks purely from the information contained in these documents. Although it sometimes took them orders of magnitude longer to perform a task than one of the experts, the fact remains that they successfully completed the task with little or no input from the experts in person.

Reflection

On the matter of conversational language being more effective than rote instructions, I can recall being given two articles to read in preparation for a 'knowledge workshop' (which unfortunately never happened).

After many years now, I can only remember what one of those documents said. The one I remember spoke about how conversational language was much more effective than brief and often brusque language that is found in much technical documentation. It made the link to the over-the-shoulder technique and postulated some of the thinking that I have expounded above.

Some time well after I had created the customer-centric database I came across these two documents and re-read them. I immediately tied the conversational language ideas to what had happened in our database and saw that it was true. After re-reading the documents, I also realised why I had remembered this document and not the other. This one was very conversational, in contrast to the much more formal (and fact-and-figure quoting) style in the other document.

To this day, I do not remember what the other document said. Perhaps sometime I will come across the pair again and the cycle will repeat.

Thanks for reading.
Allister.