
Thursday, April 4, 2024

Did you know Synon 2E has a Full Screen Mode?

Long-time IBM i users typically work with an expanded WRKACTJOB screen soon after learning the command keys that matter.  As a junior developer (back in the early nineties), I was always in awe of the seniors around me who didn't need to toggle through the command key options whilst managing the jobs running on the system.




As always though, once you had learnt the system, you could press F21 to remove the subfile options and command key text, freeing up more room for the critical data on these somewhat limited 5250 displays.

If I remember rightly, the command key used to bring up a window where you could choose between Basic (it might have been Intermediate) and Advanced mode.  This has long since been removed from the OS and the command key merely acts as a toggle nowadays.  The reward, of course, was the ability to see more of your processes and interactive sessions.

Here is my current 'Pro' setup lol.





Synon, aka 2E with its various prefixes over the years (COOL:, AllFusion, Advantage, CA, etc.), also has the ability for a user to see more on selected screens.  Here are a couple of examples that might be of interest.

Action Diagram Expanded vs Normal view.





Most developers could survive with this, as virtually anyone worth their salt will know every option and command key off the top of their head anyhow.

A lesser-known setting, and one where it is a bit harder to remember all the options, is the ability to extend the number of records shown when viewing a model list.




Personally, I don't have this one switched on but kudos to anyone out there that does!

So how do you set these values?

From the main Services Menu take option 11 for Edit Model Profile (YEDTMDLPRF) or, as the menu implies, execute the command directly.  Then set the options accordingly.




You're welcome.
Thanks for reading.
Lee.

Thursday, May 25, 2023

Object Usage via SQL

Continuing the series on using IBM i SQL for some basic work management tasks.  Today I had a large list of objects for which I wanted the 'last used' date and the 'days used' count, in order to:

1. Determine whether the objects could be removed or flagged as obsolete.
2. Prioritise development/testing based on high-volume activity.

Usually, I would use the Work with Objects (WRKOBJ) command and work through them one by one, capturing the data in Excel or Notepad manually.  Another option is to build a list of objects on the system using the OUTFILE keyword of the DSPOBJD command and then query these via SQL, Query/400, or simply by using YWRKF, our trusted friend in the 2E world.

Today however, I had a list of around 50 objects, and as we have separate development and production machines (the latter of which I have no access to, due to segregation of duties policies that I support, BTW), I felt it was unfair to ask a colleague to send me 50 screen prints; it would also be prone to user error with so many commands to execute.

I could have documented the steps for the traditional method above, but that isn't really repeatable for my colleagues, nor is it enjoyable.  Therefore, I decided to write an SQL (or four, as it turns out) to get me the data and leave us with a template for the future.  There is a bonus fifth one for us 2E-ers.

There is a bump in the road though.  Isn't there always aye!

Even though IBM have made huge strides with the availability of data via SQL in recent releases, a simple view for OBJECT_STATISTICS does not exist.  There is a table function which will get you the data, but it is predicated on obtaining data for an entire library or a subset based on object type.

Here is an example based on the one in the IBM documentation.
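In its simplest form it is just a SELECT over the table function.  This is a representative sketch rather than IBM's exact text; note the QSYS2 qualification, and that MYLIB is a placeholder library name.

SELECT *
FROM TABLE (QSYS2.OBJECT_STATISTICS('MYLIB', '*ALL')) AS X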


When I applied this to my test library, I didn't want the full timestamp, just the date, and I only wanted a subset of the available fields.

This is quite a simple modification to the IBM example: we just substring the first 10 characters and give the field a nice name.  Note also the replacement of the asterisk (*) in the SELECT with the exact fields.

SELECT OBJNAME, DAYS_00001,
       SUBSTR(VARCHAR(LAST_00001), 1, 10) AS LAST_USED
FROM TABLE (OBJECT_STATISTICS('MYLIB', '*ALL'))


Unfortunately, I hit a hurdle when trying to view the resulting data via YWRKF.  The 2E programs raised an error condition and threw a hissy fit as I had some null dates represented as '-' in the returned data and not the ISO format (0001-01-01) it was expecting.

A quick google later and a minor adjustment to the SQL to present null as an empty ISO date, and I was able to view the data.  FYI, Query/400 was fine with the data, so this is an extra step for loyal 2E users/shops like me/us.  There are also other date conversion routines readily available; I just chose this method for now, though one alternative is sketched after the query below....

SELECT OBJNAME, DAYS_00001,
       IFNULL(SUBSTR(VARCHAR(LAST_00001), 1, 10), '0001-01-01') AS LAST_USED
FROM TABLE (OBJECT_STATISTICS('MYLIB', '*ALL'))
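As an aside, one of those alternative conversions is to return a true date column instead of character data.  This is a sketch I haven't pushed through YWRKF, so check it against your model's field domains before adopting it.

SELECT OBJNAME, DAYS_00001,
       COALESCE(DATE(LAST_00001), DATE('0001-01-01')) AS LAST_USED
FROM TABLE (OBJECT_STATISTICS('MYLIB', '*ALL'))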


This is all well and good, but I also wanted to restrict the result to the objects that were of interest.  I achieved this with a WHERE clause filtering on the object names.

SELECT OBJNAME, DAYS_00001,
       IFNULL(SUBSTR(VARCHAR(LAST_00001), 1, 10), '0001-01-01') AS LAST_USED
FROM TABLE (OBJECT_STATISTICS('MYLIB', '*ALL')) A
WHERE A.OBJNAME IN
      ('MYOBJ01', 'MYOBJ02', 'MYOBJ03', 'MYOBJ04', 'MYOBJ05',
       'MYOBJ06', 'MYOBJ07', 'MYOBJ08', 'MYOBJ09', 'MYOBJ10')


Whilst this is great, it is only a view on the screen.  I didn't want my colleague to have to take screen prints or scrape the screen with tedious cut and paste to build an Excel file for me, so I wrapped the query in a CREATE TABLE to materialise the results.

CREATE TABLE QTEMP/OBJ_USAGE AS (
SELECT OBJNAME, DAYS_00001,
       IFNULL(SUBSTR(VARCHAR(LAST_00001), 1, 10), '0001-01-01') AS LAST_USED
FROM TABLE (OBJECT_STATISTICS('MYLIB', '*ALL')) A
WHERE A.OBJNAME IN
      ('MYOBJ01', 'MYOBJ02', 'MYOBJ03', 'MYOBJ04', 'MYOBJ05',
       'MYOBJ06', 'MYOBJ07', 'MYOBJ08', 'MYOBJ09', 'MYOBJ10')
) WITH DATA


This was great for my purposes: I sent the SQL to a colleague, they were able to run it and send me a file back.  Job done.

Hold Tight!! 

This is a 2E blog (mainly) and I have only mentioned YWRKF.  "You can do better than that!" I hear you cry.

The above list of objects can easily be expanded, or substituted with a list from a file which you could build somehow.  What if I linked this to a Model List?  That would be cool, right?!

Whilst this isn't an option for my site due to having separate machines, it might work for you.  If you are like us, you may have to send over some data periodically from production and work on the queries from a different angle.  Anyhow, assuming you have the model lists and the objects you wish to query on the same machine, you can embed your SQL in RPG or CL, or place it in an SQL source member and run it with RUNSQLSTM etc.

You just need to link the Model list with your library of objects.  See below for one method.

We create an alias of the member we wish to use as our list.

CREATE OR REPLACE ALIAS QTEMP/OBJ_USAGE FOR MYMODEL/YMDLLSTRFP(MYLIST)

Execute the query.  Optionally as above, you can output to a file if you wish.

SELECT OBJNAME, DAYS_00001, IFNULL(SUBSTR(VARCHAR(LAST_00001), 1, 10), '0001-01-01') AS LAST_USED
FROM TABLE (OBJECT_STATISTICS('MYLIB', '*ALL')) A
LEFT JOIN QTEMP/OBJ_USAGE B ON A.OBJNAME = B.IMPNME
WHERE B.OBJTYP = 'FUN'

Lastly, tidy up after ourselves and drop the temporary ALIAS.

DROP ALIAS QTEMP/OBJ_USAGE


As always, I am sure that there are other ways of solving this problem and I would love to hear about them in the comments.  This is my current 'new method' and will likely change as I notice more and more flaws or need to expand the scope.

Thanks for reading.
Lee.

Wednesday, July 3, 2019

Enhancement for 2E (Come on Broadcom.....)

I've been having a think about a few enhancements for 2E in the hope that the new owners of the product will invest in them.  Leaving aside the roadmap ideas and the upvoted ideas on the idea wall (which need acting upon, especially REST API support), here are a few other ideas for consideration.

Enhancement One

Add an option in Display All Functions to get to the model object details screen for a given function.  Currently you have to do a Usage (wait…) and then take option 8 next to the 000-level function.


I use this screen to see what list an object is associated with etc.


Enhancement Two

The Open Functions screen is really handy, but it would be more useful if we could search for a function via its CPF name.

Enhancement Three

Action Diagram templates to display (fully) what code they are generating.  A good example of this is the RTVOBJ, where the return code is set to *Normal at initialisation and automatically set to *Record does not exist in the ‘Record Not Found’ user point.  It would be good if this was shown.

There are other examples where the templates could be expanded to show the exact processing to be generated.


I've seen so much bad coding where people are initialising these values, not realising what the template generates.  As 2E is supposed to insulate you from the generated code, it would make sense to show in the AD what actually occurs.... wouldn't it....

Perhaps an expert mode like F21 (WRKACTJOB) could determine how much detail is shown.  I do honestly believe there is a lot of bad code out there due to a lack of template knowledge.

Enhancement Four

SQL Statement Support.  Whilst this is already possible via embedded EXCUSRSRC etc., I was thinking more of an interface to do generic SQL processing, but using the 2E method of declaring files and fields, with the added bonus of Impact Analysis.

Ability to define an SQL interface using 2E files/fields and to execute the SQL statement.  This would show up in usages for the files and fields as *SQLSTM.


Enhancement Five

License information.  Sure, there is a command to show licensing of the product on the box (YDSPLICPRD), but I use it so infrequently that I have to look it up almost every time I need it.  Perhaps the model details screens could be extended with a command key to show the model licensing details.

So come on Broadcom.  Shape your product, prove that you still care.

Thanks for reading.
Lee.

Saturday, June 1, 2019

Slow running batch tasks - A simple method to get you started

Edit - 27/04/2023 - Tidied up SQL formatting and added a note around multi-member files.

"The EOD job is taking too long!", says every system administrator ever!!!!


Tip: First of all, if you are reading this, please benchmark the program(s) and make one adjustment at a time to determine what is making the difference for your situation.  Don't be tempted into making too many changes at once, as you'll never learn the value of each approach.  Over time you will learn which ones make the most difference for your system workload.  Above all, remember to re-baseline after each set of changes; not all changes are equal, and some may actually slow down your programs.
There are numerous strategies for improving overall batch performance.  Typically these would include:

  1. Reducing unnecessary random IO - Use Arrays (or memory) instead of disk IO.
  2. Ensure files are opened and closed once per job execution (if possible).
  3. Try to eliminate multiple passes of the same data set.
  4. Breaking the jobs up to perform threaded processing. Comes with a warning!
  5. Remove journaling overheads (if possible).
  6. Reduce record lock contention.
  7. Place independent jobs in parallel in one subsystem. 
  8. Hardware upgrade (CPU, Disk, Memory) etc.
  9. Distributing processing to after a high-intensity window (deferred processing).
This is all well and good, but how can you get at the information to help you identify where your programmatic problems are?  Many batch processes can nest dozens of layers deep (both functions and physical program objects).

Obviously there are tools on the IBM i that assist greatly; some are licensed and others are provided by third parties.  I am going to assume you are not yet ready for performance monitoring or job tracing, but just have some general batch performance issues and require some quick wins.

Some developers have the skills to just look at a program's architecture and make some compelling changes, but most require some hard evidence.  Even if you are one of those programmers with a great deal of insight into your system, I'd suggest you do the baseline below to measure your improvements.

To get started, I tend to query the database member statistics pre- and post-execution for the program(s) to determine what occurred.  Note: it is best to capture this data on site (production) and wrap the following commands around the batch program(s).  If you want to reduce noise (data interference), end as many jobs and subsystems as practical so that only the IO of the job at hand is being captured.

Please note setup commands and queries are highlighted in pink and queries to analyse the results are highlighted in blue.

Capture before details of file(s) IO

SBMJOB CMD(DSPFD FILE(LIBRARY/*ALL) TYPE(*MBR) OUTPUT(*OUTFILE) FILEATR(*PF) OUTFILE(LEED/BF_DTA)) JOB(BF_DTA) 

Execute the batch tasks/programs in question and then capture the after details of the file(s) IO.

SBMJOB CMD(DSPFD FILE(LIBRARY/*ALL) TYPE(*MBR) OUTPUT(*OUTFILE) FILEATR(*PF) OUTFILE(LEED/AF_DTA)) JOB(AF_DTA)                                       

After this you will have two files with which you can compare the before and after scenarios.  This can give you insight into what database activity occurred whilst your program/job (or set of jobs) was running.  In the example below the SQL refers to the files as BF_DTA and AF_DTA (both in library LEED).  You will need to change these accordingly.

SELECT T01.MBLIB, T01.MBFILE, T02.MBNRCD, T02.MBOPOP, T02.MBCLOP, T02.MBWROP, T02.MBUPOP, T02.MBDLOP, T02.MBLRDS,
       T02.MBOPOP-T01.MBOPOP AS DIFF_OPEN,
       T02.MBCLOP-T01.MBCLOP AS DIFF_CLOSE,
       T02.MBWROP-T01.MBWROP AS DIFF_CRT,
       T02.MBUPOP-T01.MBUPOP AS DIFF_UPD,
       T02.MBDLOP-T01.MBDLOP AS DIFF_DLT,
       T02.MBLRDS-T01.MBLRDS AS DIFF_READ,
       (T02.MBWROP-T01.MBWROP)+(T02.MBUPOP-T01.MBUPOP)+(T02.MBDLOP-T01.MBDLOP) AS DIFF_CUD,
       (T02.MBUPOP-T01.MBUPOP)/NULLIF(T01.MBNRCD,0) AS IOINTENSE
FROM LEED/BF_DTA T01
INNER JOIN LEED/AF_DTA T02 ON T01.MBFILE = T02.MBFILE AND T01.MBLIB = T02.MBLIB

Note: if you have multiple members for a given file, then the query above should be extended with a WHERE clause focusing on the main physical file member only.  This helps avoid a many-to-many join scenario.  In my latest environment I append the following.  You can also tune your query to omit certain files.

WHERE T01.MBFILE = T01.MBNAME

The output is a comparison by file showing the IO differences i.e. Reads, Updates, Opens, Closes etc.

To output this to a file wrap the SQL statement above with the following...

CREATE TABLE LEED/DIFF AS (
 
INSERT SQL STATEMENT ABOVE HERE!!!
 
) WITH DATA

Again, replace LEED with a library of your choice.

Please note that the target file shouldn't already exist, and that the library and file name are your choice and will impact the queries below.
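Putting the pieces together (including the multi-member filter; drop the WHERE clause if your files are all single-member), the finished statement ends up looking like this:

CREATE TABLE LEED/DIFF AS (
SELECT T01.MBLIB, T01.MBFILE, T02.MBNRCD, T02.MBOPOP, T02.MBCLOP, T02.MBWROP, T02.MBUPOP, T02.MBDLOP, T02.MBLRDS,
       T02.MBOPOP-T01.MBOPOP AS DIFF_OPEN,
       T02.MBCLOP-T01.MBCLOP AS DIFF_CLOSE,
       T02.MBWROP-T01.MBWROP AS DIFF_CRT,
       T02.MBUPOP-T01.MBUPOP AS DIFF_UPD,
       T02.MBDLOP-T01.MBDLOP AS DIFF_DLT,
       T02.MBLRDS-T01.MBLRDS AS DIFF_READ,
       (T02.MBWROP-T01.MBWROP)+(T02.MBUPOP-T01.MBUPOP)+(T02.MBDLOP-T01.MBDLOP) AS DIFF_CUD,
       (T02.MBUPOP-T01.MBUPOP)/NULLIF(T01.MBNRCD,0) AS IOINTENSE
FROM LEED/BF_DTA T01
INNER JOIN LEED/AF_DTA T02 ON T01.MBFILE = T02.MBFILE AND T01.MBLIB = T02.MBLIB
WHERE T01.MBFILE = T01.MBNAME
) WITH DATA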

This raw data should be enough for you to highlight any performance bottlenecks.  

Additional Queries

As each environment is different, here are a few SQLs to execute over the differences file to provide some pointers.


The queries below will help to identify certain database performance scenarios.  I have highlighted the recommended editable values.

High IO Count

SELECT MBLIB, MBFILE, DIFF_CUD FROM LEED/DIFF WHERE diff_cud > 100 ORDER BY diff_cud desc     

Review and see if the IO is commensurate with the number of accounts or clients (records) being processed etc.  If not, you may have duplication, and refactoring could help.

High IO and Triggers

SELECT t01.MBLIB, t01.mBFILE, DIFF_upd FROM LEED/DIFF t01 inner join ytrgctlp t02 on T01.MBFILE = T02.TRGFIL WHERE diff_upd > 100 AND T02.TRGEVT = 'U' and T02.CMTLVL = 1 ORDER BY diff_upd desc       

This will highlight any files with high IO that also have Synon triggers.  Excessive volume may lead to increases in runtime.  Perhaps you have changed objects that are updating records which haven't changed....  Null Update suppression may work here. 

Excessive Reads (Arrays, *QUIT required, Join Logicals)

SELECT MBLIB, MBFILE, DIFF_read, MBNRCD FROM LEED/DIFF WHERE diff_read > 1000 ORDER BY diff_read desc                     

The 1000 (example) figure is very low.  Typically I would be looking for numbers in the millions for a good-sized client.

Excessive Reads for low record count files (Possibility to move to arrays)

SELECT MBLIB, MBFILE, DIFF_read, MBNRCD FROM LEED/DIFF WHERE diff_read > 1000 and mbnrcd < 100 ORDER BY diff_read desc  

Again, review these numbers based on the client database.  If you are constantly reading from the same small file, then its contents could be committed to memory (an array), moved to SSD, etc.

High UPDATES for low volume files (indicates potential contention i.e. a surrogate etc)

SELECT MBLIB, MBFILE, DIFF_upd, MBNRCD FROM LEED/DIFF WHERE diff_upd > 1000 and mbnrcd < 100 ORDER BY diff_upd  desc  

Note: often people have a surrogate file to get the next value for a key.  Especially if you are running parallel processing (either multiple jobs or parallel jobs over one dataset), the jobs can cause record lock contention.

High File Open/Close

SELECT MBLIB, MBFILE, DIFF_open FROM LEED/DIFF WHERE diff_open > 10 ORDER BY diff_open desc     
          
This is used to identify whether the task/program has excessive file opens and closes.  Perhaps a routine is set to Close down = 'Y'.  It is inefficient to keep opening and closing files; check the Synon function options within your call stack.

I hope that this information is useful and motivates you to finally have the confidence to look at that long-running job.  Using some of the techniques above I have made significant performance improvements.  It is the true IO data that reflects your code, and for that you need to use tools or mine the data for yourself.

This, I promise, is a good starting point and as always, I'm happy to help.

Thanks for reading.
Lee.

Wednesday, May 15, 2019

Comment on commenting.

Hello,

Comments are an essential part of any coding practice, whether you are using traditional languages that are quite verbose in their syntax and vocabulary (e.g. Java, C#, RPG or COBOL).  Even code generation environments like 2E and Plex benefit hugely from appropriate commenting.

Modern low-code platforms like Appian, Mendix and OutSystems (to name a few), which shield you from code as much as possible, still benefit from correctly named functions and comments/annotations within them.

Without comments, what was a relatively simple coding process for the creator is now a moderate pain in the butt for the developer maintaining your code.  Multiply that by a complicated piece of technical and/or business logic, and it becomes practically impossible for a maintenance developer to pick up and be successful.

Chances are you will NOT be maintaining your code. Get this into your heads.....

To avoid this, structure your comments professionally and ensure that the comment adds value.

Commenting out old code for safety reasons in the modern world is simply unacceptable.  With repositories like GitHub etc. you can be brave and make changes.  Sure, comment some stuff out locally whilst trialling a few ideas....I get it.  But to commit that code to the main branch, or to the model if programming in Plex/2E, is just unforgivable.

If you have got to the point where you have unit tested your code and are 1000% happy, remove the commented out code....NOW.

I'd also go as far as to say that you should remove all legacy commented-out code at the time you check out the function...I mean, where others have failed before you.

There are no excuses for leaving commented out code in a production object/branch.

Thanks for reading.
Lee.

Monday, January 7, 2019

A little trick with *SET CURSOR

A quick little post to kick off 2019....

A colleague of mine had an issue today where he was trying to stop a user having to page down dozens of pages when inserting data via a DSPFIL/PMTRCD W/W (Work with) suite.

He asked me how he could reload the subfile (to show the new data) but keep the page positioned where the user was, rather than refreshing and defaulting back to the first page again....


The solution is quite simple and as the title of the blog says.......

You use the *SET CURSOR function and set it to a field on the subfile record you wish to remain in focus.  In our case we chose the *SFLSEL field.



Thanks for reading.
Lee.

Saturday, December 1, 2018

What is a reasonable time to resolve an issue?


Today, I experienced a bug whilst building a wide screen definition in 2E.  The build was okay but when I went to use it (in the 2E device design editor), I was getting some low-level errors.

Upon googling the error, I came across this ticket which gave a few workarounds, one of which I implemented.

The given workarounds were:-


  1. Set the screen footer to 25 as per the screen print. 
  2. Set the subfile page size for consuming functions explicitly.  I am assuming they mean…
  3. Override the YSFLEND model value to *PLUS rather than *TEXT, or override it in the consuming function via F7=Function Options.



All of these appear to be perfectly good reasons to postpone fixing this issue within the product, as I am guessing...


  • many use the + for the subfile end indicator, and
  • many also forget to move the command line down to row 26 when creating and generating wide screens (they leave it at 23) and then wonder why they have a 3-row gap at the bottom of their screens 😊

Is it okay though, that 15 years after it was raised, it is still an issue…..?



Thanks for reading.
Lee.

Sunday, June 10, 2018

You have 'Function Options' you know....


PODA PODA PODA PODA PODA PODA

One of the first things discussed when you did (if you did) the Action Diagramming course for 2E is PODA.

PODA is an approach to effective function design.

  • P is for parameters and the interface.
  • O is for options (Function Options).
  • D is for device design (Screen/Print).
  • A is for action diagram.

The concept being that these all influence the function and getting them correct will mean you’ll write less code and won’t be wrestling with the template (prototypes/patterns).

Bear this in mind for the rest of the blog post.

I was at work the other day maintaining some code where once again I could be heard saying, “Whoever wrote this should be shot!”.  It’s my preferred (go-to) phrase when I see badly written/designed/architected code.

Anyhow in this instance the code was quite simple and generic so I can share it here.


The reason for my comment above was: why is this code inside a subroutine called Subroutine?  The actual function ‘Perform Substitution’ was itself an EIF (Execute Internal Function).

I thought to myself, it’s okay, someone probably wanted to be able to *QUIT from one of the case blocks below…..  NO!!!

Hmmmm.......Perhaps someone was being a dunce!

Anyhow, depending on your model default, an EIF can be generated either as inline code or as a subroutine.  I am thinking that this code might be quite old or that someone simply doesn’t understand how the code is generated in 2E.  (Probably the latter.)

Most of you know that you can share subroutines and reduce code bloat using the ‘Share subroutine’ option.  An EIF also has an additional option called ‘Generate as subroutine’.


In the instance above we could have achieved the same result by omitting the sequence block and simply setting the value.

Let’s explore the generated code for a much simpler example.  I have an EEF (Execute External Function) calling an EIF.  The EEF sets the local context field LCL.*Job Date to JOB.*Job Date and then calls the EIF, which in turn sets WRK.*Job Date to JOB.*Job Date.



With ‘Generate as subroutine’ set to No we get inline code. (See below).


Taking the original example (see top), if I put the internal code inside a Sequence block I’d get a subroutine. 


See code mock up below.


So although this code looks a little neater, it still isn’t perfect.

Setting the option ‘Generate as subroutine’ to Yes generates slightly different code.


Overall, in this instance it didn’t matter too much, as there were NO *QUITs to worry about and the routines weren’t (or couldn’t be) shared etc.  But it does highlight that following PODA can make your programming life easier, not to mention mine..... as I mop up after you....


Thanks for reading. 
Lee.