Wednesday, December 12, 2018
A little funny post as it's Christmas.
There is a reason why cumulative is never abbreviated....
Thanks for reading.
Lee.
Saturday, December 1, 2018
What is a reasonable time to resolve an issue?
Today, I experienced a bug whilst building a wide screen definition in 2E. The build was okay, but when I went to use it (in the 2E device design editor) I was getting some low-level errors.
Upon googling the error, I came across this ticket, which gave a few workarounds, one of which I implemented.
The given workarounds were:-
- Set the screen footer to 25 as per the screen print.
- Set the subfile page size for consuming functions explicitly. I am assuming they mean…
- Override the YSFLEND model value to *PLUS rather than *TEXT, or override it in the consuming function via F7=Function Options (see the sketch below).
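For the model value route, the change is a one-liner from a 2E command line. A minimal sketch (YCHGMDLVAL is the standard change-model-value command, but check the knock-on effects on your own model before flipping a default):

     YCHGMDLVAL MDLVAL(YSFLEND) VALUE(*PLUS)  /* '+' subfile end rather than 'More...' text */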
All of these appear to be perfectly good reasons to postpone fixing this issue within the product, as I am guessing...
- many use the + for subfile continuation,
- many also forget to move the command line down to row 26 when creating and generating wide screens (they leave it at 23) and then wonder why they have a 3-row gap at the bottom of their screens 😊
Is it okay, though, that 15 years after it was raised, it is still an issue...?
Lee.
Tuesday, November 27, 2018
Multi-Line Edit Oddities
Hiya,
Today I helped a colleague (a very talented one at that) with a small issue around a PMTRCD and the usage of multi-line edit for oversized fields. Something he hadn't seen before.
He was confused as to why his data entry field was showing on the device design but not showing when executed.
Figure 1 shows a mock-up of his device design, looking pretty standard.
Figure 2 shows the same screen at run-time. Note the cursor has positioned to the field, which is acting strangely, and the underline is not showing.
If you want to recreate this issue, I created a file as follows (see Figures 3 and 4):-
Additionally, in the device design we set the long field to multi-line Y and set it to 4 rows and 50 wide.
Resolving the issue was a case of applying a (clearly little-known) trick of moving the multi-line field over by one byte. (Figure 5) I also recommend aligning the rest of the fields on the screen.
This then allows the run-time screen to paint properly. (Figure 6)
My assumption is that the screen attributes are being corrupted by the multi-line field, although comparing the DDS source didn't highlight anything untoward, so I am guessing it is a 5250 issue.
Any takers on providing the technical explanation? A compare of the source doesn't seem to highlight a clear and present danger. (Figure 7)
I've nudged these over by an extra byte for years and years, but I guess some tips and tricks get lost.
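For the curious, the underlying DDS for a multi-line field looks something like the mock-up below (hand-written, not actual 2E-generated source; the field names are invented and the column alignment is approximate). CNTFLD is the display file keyword behind continued-entry fields, and the fix amounts to starting that field one byte further right:

     A          R PANEL01
     A            SHTDSC        25A  B  8 18
     A* continued-entry field: 200 bytes shown as 4 rows of 50,
     A* nudged one byte right (column 19, not 18) to avoid the clash
     A            LNGDSC       200A  B 10 19CNTFLD(50)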
Thanks for reading.
Lee.
Tuesday, July 24, 2018
The magic roundabout....
“Computer Associates (CA), where products go to die!”
If you were around in the late 90's and early noughties, the statement above was practically an industry standard. After a brief renaming to the COOL range under Sterling Software, prior to the CA acquisition in 2000, the tools known as Synon (now CA 2E) and Obsydian/Plex (now CA Plex) have been maintained and supported by CA.
A correction to the above.... CA did in fact (in the early years) innovate with the tools quite frequently, and with good features and enhancements. CA was responsible for the introduction of the Web Option, Triggers, the RPGILE generator, numerous SQL updates and Web Services for 2E, as well as the .NET generator for CA Plex (no small feat), Web Services publication and consumption, and keeping up with the myriad of technology platform refreshes Plex required.
All in all, a reasonable job.
Perhaps a 6 out of 10.
Okay, 5.
The point being that these products didn't go to CA to die. However, in recent years, with development budgets reduced and key personnel leaving, the rate of change has stalled significantly. So much so that nowadays the release highlights are items that would have been reserved for minor features or even bug fixes in years gone by.
Whilst the tools haven't died, they are clearly in maintenance mode. CA moved this group of products to sustaining engineering. This has a negative connotation whilst a product is in decline, and I feel that other low-code options with better target platform coverage have emerged into a space once dominated by CASE and code generation tooling.
Last week Broadcom announced a cash buyout of CA Technologies for over 18 billion dollars. Broadcom doesn't do software.... they are a semiconductor business, so what does CA provide them?:-
- They may be diversifying their offerings and product range. Perhaps there are some key products in the CA range that assist in their growth, or CA has strong alliances with certain business verticals, or a client base the parent organisation may wish to gain access to.
- Or this is purely a financial decision. They may have too much cash to burn and need to spend it quickly. They buy a solid company with a long and attractive maintenance trailing revenue stream and secure long term (almost guaranteed) recurring revenue. Most likely this means they won’t need to pay any corporation tax for the next year or two as they assimilate this monster of a business.
Perhaps a mix of both, but my money is on the second option and that this is merely a financially driven strategic purchase.
There certainly isn't any institutional importance for the CA development tools business, i.e. CA 2E, CA Plex and CA Gen. Although these areas are likely to show very high ROI (i.e. cost vs revenue) on the reporting charts, I very much doubt they'll get any more focus than they are currently getting.
Now it would appear that the final resting place for these (once wonderful and genius) tools is going to be Broadcom. The new statement being: “Broadcom, a place where CA Technologies development tools go to die!”
STOP THE PRESS!!!!!!!
Hopefully not. I hope that, with the residual value and with opportunities in a safe pair of hands, i.e. a company with a low-code focus, it is possible to recapture the essence of CASE and reinvigorate these tools.
Probability? < 10%, if Broadcom don't want to relinquish these tools.
Lee’s take out!
Sadly, it's probably time to work out what the next big thing is... These tools are now compliance/maintenance focused (at best) and will be stabilised (cease to be supported) as soon as the revenue trail drops below x, whatever x is. The x for CA or Broadcom is far higher than the x for a passionate low-code-only vendor. I beg Broadcom to review the business units at CA and seek a buyer (at a fair price) so this technology has a chance to thrive once more. These tools practically invented low-code. In my eyes they are 20 years ahead of the rest.
Thanks for reading.
p.s. I wonder what the new name will be.... Broadcom Plex doesn't have that good a ring to it.....
Sunday, June 10, 2018
You have 'Function Options' you know....
PODA PODA PODA PODA PODA PODA
One of the first things discussed when you did (if you did) the Action Diagramming course for 2E is PODA. PODA is an approach to effective function design.
- P is for parameters and the interface.
- O is for options (Function Options).
- D is for device design (screen/print).
- A is for action diagram.
The concept being that these all influence the function, and getting them correct will mean you'll write less code and won't be wrestling with the template (prototypes/patterns).
Bear this in mind for the rest of the blog post.
I was at work the other day maintaining some code where once again I could be heard saying, “Whoever wrote this should be shot!”. It's my preferred (go-to) phrase when I see badly written/designed/architected code.
Anyhow, in this instance the code was quite simple and generic, so I can share it here.
The reason for my comment above was: why is this code inside a subroutine called Subroutine? The actual function 'Perform Substitution' was itself an EIF (Execute Internal Function).
I thought to myself, it is okay, someone probably wanted to be able to *QUIT from one of the case blocks below..... NO!!!
Hmmmm....... Perhaps someone was being a dunce!
Anyhow, depending on your model default, an EIF can be generated either as inline code or as a subroutine. I am thinking that this code might be quite old, or that someone simply doesn't understand how the code is generated in 2E. (Probably the latter.)
Most of you know that you can share subroutines and reduce code bloat using the 'Share subroutine' option. An EIF also has an additional option called 'Generate as subroutine'.
In the instance above we could have achieved the same result by omitting the sequence block and simply setting the value.
Let's explore the generated code for a much simpler example. I have an EEF (Execute External Function) calling an EIF. The EEF sets the local context LCL.*Job Date to JOB.*Job Date and then calls the EIF, which in turn sets WRK.*Job Date to JOB.*Job Date.
With 'Generate as subroutine' set to No we get inline code. (See below.)
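Mocked up, the inline flavour amounts to the EIF's single move landing directly in the calling function's source (a sketch, not real 2E output; the field names are invented and the column alignment is approximate):

     C* EIF generated inline ('Generate as subroutine' = N)
     C                   Z-ADD     JOBDAT        WRKDAT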
Taking the original example (see top), if I put the internal code inside a sequence block I'd get a subroutine. See the code mock-up below.
So although this code looks a little neater, it still isn't perfect.
Setting the option 'Generate as subroutine' to Yes generates slightly different code.
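Again as a mock-up (routine and field names invented), the 'Generate as subroutine' flavour wraps the same move in a subroutine and reaches it via EXSR:

     C* call the EIF's generated subroutine
     C                   EXSR      SR0001
     ...
     C     SR0001        BEGSR
     C                   Z-ADD     JOBDAT        WRKDAT
     C                   ENDSR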
Overall, in this instance it didn't matter too much, as there were no *QUITs to worry about and the routines weren't (or couldn't be) shared, etc. But it does highlight that following PODA can make your programming life easier, not to mention mine..... as I mop up after you....
Thanks for reading.
Lee.
Wednesday, May 16, 2018
2E Code Review - pet hates - part III
7. Bad field names.
I've banged on about this many times, but fields called 'Current Balance' or 'Count' simply do not cut it.
Call it as it is, and don't be afraid of adding more fields, especially as we can now search much more easily than before.....
8. Copies of functions (just in case).
RTV All (Copy).
WHY! WHY! WHY! would you do this? Just take a version, or install a version control system. It is even worse when these are external functions that eventually get generated and promoted (but never used).
This is a very amateur mistake and you'll be shocked at how prevalent this is.
9. WIP
Happy to take ideas but be quick as this blog is likely to be closing soon.....
Thanks for reading.
Lee.
Wednesday, May 9, 2018
It's not depressing so get suppressing!
Would you do work you didn't need to do? NO would be the obvious answer, right!!! That said, there are probably thousands of 2E programs out there doing unnecessary updates to a database using a CHGOBJ.
Many of these programs have been working seamlessly for years and years; however, they are like a volcanic field. Stable for years, centuries or even millennia, but then one day......BOOM!!!!
This isn't because they are really badly coded. After all, we should only be updating one record at a time, etc. Most CHGOBJs are probably inline (i.e. a screen or a single-instance business transaction).
Mass bulk updates are the lesser-spotted usage for a CHGOBJ. But it does happen!
Recently we had an issue where a long-standing bulk update processing program (EOD) went from executing in under a minute (so it didn't get too much love in the maintenance department) to almost 30+ minutes (overnight).
Upon first inspection, the program hadn't changed. This points to an environmental cause. The dataset and data volume hadn't significantly changed either.... the subsystem configs hadn't changed, and there were no system resourcing issues or spikes.....
The simple reason for the increase was that a new (2E) trigger had been added to the file being updated; this trigger had a little business logic, and this was required processing.
There was limited tuning to be done in the trigger program.
However, I did notice that the data was summary statistical in style (reporting categories for data like current balance, etc.). This was being recalculated each night and updated on an account-by-account basis.
On closer inspection of the rules around the categorisation, it was obvious that the vast majority of accounts stayed in their categories for years and years, and only with major events in an account's lifecycle did they switch. This meant that the activity was effectively calculating the same values each night and then updating the fields in the file every night with the same values. This in turn NOW triggered additional functionality via a real-time trigger.
Option 1.
It was quite obvious by now that we needed to stop the execution of the trigger. We didn't have the option of removing and re-adding the triggers around the process. The simplest method was to not perform the database update in the first instance. This can be done by simply comparing the newly calculated values with those on the database record and NOT calling the CHGOBJ.
This method works and is relatively easy for a developer reading the action diagram to ascertain what is happening, and on the surface it seems like a good option. I have seen this done in many functions.
However, the developer must (potentially) do a read to compare to the database. This data may itself be old (retrieved much earlier in the cycle), and the developer needs to do this everywhere the CHGOBJ is used.
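Sketched in the style of the generated code (program and field names invented, column alignment approximate), Option 1 boils down to wrapping the call to the CHGOBJ in a compare at every call site:

     C* Option 1 mock-up: skip the CHGOBJ when nothing has changed
     C                   IF        WRKCAT <> DBFCAT
     C                   CALL      'CHGACC'
     C                   ENDIF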
Option 2.
Code could be added inside the CHGOBJ to exit if DB1 and PAR are the same. I've seen this approach too. This is a bit cleaner, but for any function created since release 6.1 of 2E (last century) this is also the incorrect approach.
Option 3.
The correct approach in this instance is to switch on a function option on the CHGOBJ and utilise the built-in suppression code relating to Null Update Suppression. (See the highlighted options below.)
The options are quite simple.
- M is the default model value. In this model it is 'N', so this implies NO suppression will occur.
- Y means that DB1 will be checked against PAR twice: once upon initial read of the data from the file, and then once the record lock is in place and the data is about to be written.
- A (after read) only does the first part (see above).
The generated code
The diagram below gives a visual of the code that is generated for each of the options.
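As the diagram won't reproduce here, a rough mock-up of the 'Y' flavour (names invented, alignment approximate): one compare straight after the initial read, and a second under the record lock just before the write:

     C* first check, straight after the initial read
     C                   IF        PARVAL <> DBFVAL
     C* re-read the record with a lock, then check again before writing
     C     KEY           CHAIN     RECFMT
     C                   IF        PARVAL <> DBFVAL
     C                   UPDATE    RECFMT
     C                   ENDIF
     C                   ENDIF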
Null Update Suppression works regardless of how you define your CHGOBJs.
Benefits of the suppress option for CHGOBJ.
- Record-level audit stamps won't get corrupted with unnecessary updates
- Performance
- Triggers won’t get fired
- Encapsulated
When to use?
Lee.