
Wednesday, January 20, 2010

Implementing a 'Generic' Data Driver File + Printing/Displaying Arrays in Subfiles (Part III)

Firstly, sorry for the delay in finishing off this series. I was away on holiday for six weeks, and when I returned I went into hospital for my knee operation. I am now in recovery (bed rest) and finally have a little spare time to finish off the blog.

So let's do a little recap. 

In part one, I discussed the merits of a generic data driver file. This is a different approach to normal 2E data-driven programming, but as indicated it is particularly useful for displaying *Arrays in a DSPFIL or PRTFIL, or for merging header/footer details into one DSPFIL/PRTFIL.

http://leedare-plex2e.blogspot.com/2009/11/implementing-generic-data-driver-file.html

In part two, I was merely trying to walk you through the solution I was required to provide for one of my customers.  These screenshots have been modified from their original form for confidentiality reasons but are posted with permission from my employer http://www.sasit.co.nz/.

http://leedare-plex2e.blogspot.com/2009/11/implementing-generic-data-driver-file_20.html

Today I am just going to do a quick walk-through of what was required to create the solution. With a little bit of effort (and luck) you should be in a position to work this out for yourself. Of course, I am always happy to take questions and assist if necessary.

Step 1.  Implement a Data Driver file.

A very simple file with one key.  I just made mine a simple numeric field.



Here are the field details:




I just went with the default sizes for a NBR field.

Remember, once you have created this file you will need to populate it. I simply populated mine with two records, with 1 and 2 as the keys. I have seen other implementations where people have put all 99,999 records in the file to make their programming a little easier. I prefer a slightly different method of key jumping, explained a little later.

Step 2 - Some AD coding for the Super 14 table.

I have already computed my table placings based on the scores that have been entered into the system, and the Array is keyed in position order. In the AD below you will see that I set a counter. This was initialised to 0 in the initialise program section. The counter is there so I get the correct record for the correct position in the table, as my data driver file only has two records. If I wanted to, I could have populated more records and had the count already available in the DB1 context as I read each record. Personal preference here, I guess. I'd be interested in your viewpoints.

Next, I simply retrieve the data from the Array and populate the fields in the subfile. Because I am reading down the data driver file and there are only two records, I simply reset the cursor by re-reading the first record if there are more entries to be displayed. If not, the second record is read, not a lot happens, and the subfile loading has done its job.
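If it helps to see the shape of it, here is that load logic sketched in Python. This is purely illustrative (hypothetical names; the real thing is 2E action diagram code reading the data driver file):

    # The driver file holds just two records, keyed 1 and 2. While the array
    # still has entries to show, we 're-read' key 1 to reset the cursor; once
    # the array is exhausted we fall through to key 2 and the load ends.
    table_array = ["Bulls", "Chiefs", "Hurricanes", "Blues", "Crusaders"]  # position order

    def load_subfile(array):
        counter = 0                  # initialised to 0, as in the AD
        subfile = []
        key = 1                      # first read of the driver file
        while key == 1:
            counter += 1
            if counter <= len(array):
                subfile.append((counter, array[counter - 1]))
                key = 1              # more to show: re-read record 1 (reset the cursor)
            else:
                key = 2              # nothing left: record 2 is read and the load ends
        return subfile

    for position, team in load_subfile(table_array):
        print(position, team)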



This is a screenshot of the device design for the Super14 table.



And the final table with 2009 results in place is as follows:-




That's it.   Simple.  If there are other areas in 2e you'd like me to cover, drop me a line.
Thanks for reading.
Lee.

Friday, November 20, 2009

Implementing a 'Generic' Data Driver File + Printing/Displaying Arrays in Subfiles (Part II)

The greatest rugby competition on the planet. Alright, I live in the Southern Hemisphere now and as a direct result have begun to believe the hype. That said, the Super 14 (Super 15 from 2011) competition is recognised as one of the strongest leagues in Rugby Union and has teams from Australia, South Africa and my current place of residence, New Zealand. Sorry to all those who think I have sold out by not creating a Football (Soccer) or NFL example.

System Overview

The requirement was to build a system that allowed the user to make simple sports results/margins predictions on a group of games on a weekly basis. The fixtures would be published and the predictions made. Once the results were known they would be entered into the system and participants' points (awarded for correct or near-correct predictions) would be calculated.

Requirement

Not everyone had the time to trawl the internet looking for a league table that might assist them with making their predictions (hopefully they have enough time to read this article though). The requirement was to show a real-time league table as fixture results were entered. It was decided to record the points the actual teams achieved for each fixture and then simply build the league table on the fly.

Additional information

One could have built a simple file and recreated it each and every time the results changed. However, due to the limited number of teams and fixtures in a period, it was decided to build the table on the fly in an array. This also meant there was no physical file to maintain and promote, and the user could easily view any of the previous years.

Solution

Using a generic data driver file, build an array that computes and sorts the team table into the correct order, then read the teams back from the array and display them in table order.
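For the curious, here is the shape of that computation sketched in Python. The team names, scores and points values are illustrative only; the real implementation is a 2E *Array loaded and read by the subfile program:

    # Accumulate points and points difference from the entered results,
    # then sort the 'array' into table order for the subfile load.
    fixtures = [
        ("Crusaders", 30, "Blues", 20),   # home, home score, away, away score
        ("Chiefs", 25, "Crusaders", 25),
    ]

    table = {}  # team -> [points, points difference]

    def add_result(home, home_score, away, away_score):
        for team in (home, away):
            table.setdefault(team, [0, 0])
        if home_score > away_score:
            table[home][0] += 4            # illustrative points for a win
        elif away_score > home_score:
            table[away][0] += 4
        else:
            table[home][0] += 2            # and for a draw
            table[away][0] += 2
        table[home][1] += home_score - away_score
        table[away][1] += away_score - home_score

    for fixture in fixtures:
        add_result(*fixture)

    # Descending on points, then difference: read back in table order.
    for team, (points, diff) in sorted(table.items(), key=lambda t: (-t[1][0], -t[1][1])):
        print(team, points, diff)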

Next week I will show you how this was achieved. There will be other ways to achieve the same result and all notes are intended as a guide. Your individual circumstances and requirements may vary, but feel free to emulate and utilise.

As an appetiser, the screen below is a DSPFIL built over an Array.



If you require any further assistance you can always email me at (leedare at talk21 dot com)

Thanks for reading.
Lee.

Tuesday, November 17, 2009

Implementing a 'Generic' Data Driver File + Printing/Displaying Arrays in Subfiles (Part I)

This is a three part story.

I can think of quite a few occasions in 2E where I have needed to display or print information from a non-standard source, i.e. a non-2E defined file, an array or even a data queue.

I have also had the need to build PRTFILs and DSPFILs which needed to aggregate data in a master/detail arrangement. The example below is from a change management application I worked on years ago. It shows a diary note (header) and the detailed comments (detail) in one screen, and uses a toggle button to determine the entries shown for either summary mode or detail mode.




To implement these solutions I have used the 'Generic Data Driver' file concept. I have introduced this at the last three 2E sites I have worked at.

A worked example of how to do this, with screenshots and sample code, will be in part III. I have also included some notes to help you set up your own generic data driver file and one example of how to utilise it. This example also has the added bonus of showing you how you can show arrays on a DSPFIL. Whooarah....Yippee....Get on with it..... I can hear you all say........

This might save Rory and Simon some hassle anyway!!!! At least when fending off this often-requested enhancement to the base 2E tool.

Until then.....(Next Week).

Thanks for reading.
Lee.

Sunday, March 15, 2009

Why bad design sucks! - Part II

Hmmmm,

As I suspected a couple of weeks ago, once you open your eyes to a subject and commence blogging about it, you have it constantly in the back of your mind. My post on bad systems design and planning has had this effect on me.

I was minding my own business at my local quiz night when a question was read out by the quiz master, or perhaps mistress as she is a lady: "What day of the week was Valentines Day in the year 2000?" Now, as I am not the romantic type, I certainly couldn't recall this answer based on an event, although a friend 'H' was adamant he knew the day.

I decided the simple solution was the mobile phone calendar, so I plucked out the phone. It is a reasonably new Motorola Razor phone. So you know the routine: Menu, Organiser & Tools, Calendar. Voila.... The calendar is showing February 2009 (the current month when this event occurred).

"Excellent", I said (I remember the excitement), I can now work it out by counting back. Then I thought what about the leap years? So at this stage I decided to press the previous month button again and again and again working my way back in time faster than Marty McFly in his time machine and I didn't need a flux capacitor.

You get the idea, November 2008 came and went. July 2006 soon appeared but a few more key presses and my phone stopped working. It got as far as January 2005 and it stopped. No message, no nothing.

Bang. Another blog appears out of the blue. And a lot of questions.

Why did it stop at January 2005? How was I going to answer the question? Anyhow, another member of the team had an older phone that could go back further and we got the answer right in the end. FYI, it was a Monday.

This got me thinking a little. What is the upper limit for this phone? After hundreds and hundreds of key presses (1007 to be precise) I got my answer: NOVEMBER 2088. December 2099 I could understand, as it would have covered a century, but NOVEMBER 2088? What a strange limitation.

That works out at 83 years and 11 months, or 30,650 days, and I can't for the life of me think of a reason why my phone wouldn't be calculating this on the fly. It is not showing holidays. Leap years can be identified by a simple formula. I certainly can't see a constraint in computing terms that would cause this to occur.
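To show just how cheap the on-the-fly calculation is, here is the whole thing in Python (purely an illustration, nothing to do with the phone's firmware):

    # No stored calendar needed: the standard library computes any day of
    # the week on the fly, for any year from 1 to 9999.
    from datetime import MAXYEAR, MINYEAR, date

    print(date(2000, 2, 14).strftime("%A"))  # Monday - the quiz answer
    print(MINYEAR, MAXYEAR)                  # 1 9999 - no November 2088 in sight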

If someone can tell me why the programmer decided to restrict the dates in this way, I'd be keen to hear your comments. It might be because they assumed that the phone was made after 2005 and therefore didn't need to go back further for storing calendar entries. Fair assumption. But why put the limit in? Another unnecessary computer programmer/designer limitation, I think.

Other friends on the night had different limitations on their phones, and a colleague at work with a cool new BlackBerry had no issues at all: his date range extended further in both directions than he was prepared to sit around trying to find.

I don't think you have heard the last of the 'Why bad design sucks!' series....... It does worry me that people still impose design limitations.

Thanks for reading.
Lee.

Sunday, March 1, 2009

Why bad design sucks! - Part I

There are many areas of computing that have been let down by poor design.

Actually that may be a little harsh.

However, poor assumptions have definitely led to numerous designs that in hindsight could be considered questionable at best. These decisions have in turn led to systems that have suffered due to higher than anticipated volumes or longer than expected life spans.

A few examples that come to mind are:-

The millennium bug. Everyone knows this story. Developers designed systems in the 60s, 70s and 80s with 6-character dates, i.e. DDMMYY or MMDDYY. The assumption was that storage was expensive and the system wouldn't be around in 20 or 30 years. Well, we all know how much effort was involved in ensuring that airplanes never fell out of the sky in the late 90's, not to mention the contract rates for COBOL programmers that went with it. So I guess the lesson learned here is short-sightedness.
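For anyone who missed the fun, the standard remediation was a sliding century window. A minimal sketch in Python, assuming a pivot year of 50 (the pivot itself was a per-system judgment call):

    # Expand a two-digit year: values at or above the pivot are 19xx,
    # values below it are 20xx.
    def expand_year(yy, pivot=50):
        return 1900 + yy if yy >= pivot else 2000 + yy

    print(expand_year(99))  # 1999
    print(expand_year(10))  # 2010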

http://www.trademe.co.nz/. Another example I heard about concerns a very popular auction trading site here in New Zealand. The founders, at a Microsoft TechEd conference a few years ago, talked candidly about the early days of the business, describing its growth and the unexpected challenges for the fledgling company.

The site was adding customers at a nice steady rate when one day the system stopped working. No more new customers could be added.

They were a little concerned at first. After all, they had just added their 32,767th customer.

The issue was that in the early versions of their database they hadn't considered the domain of this particular field too carefully, i.e. the size. They had reached the limit of the integer field type on their platform. Now that the site is the most successful in New Zealand and has made the creator a multi-millionaire, I am sure they have used at a minimum a long integer. The lesson learned here would be to factor your wildest assumptions into your database designs.
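For the record, 32,767 is the tell-tale ceiling of a signed 16-bit integer. A quick Python demonstration of the wrap-around:

    import ctypes

    print(ctypes.c_int16(32767).value)  # 32767 - the last customer that fits
    print(ctypes.c_int16(32768).value)  # -32768 - one more and it wraps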

www.Virtualrugby.co.nz. A sports predictions site here in NZ that has just rebranded again with another sponsor, but suffered the ignominy of poor application performance and a whole host of players unable to access the site. On one occasion I was advised that the server was unable to make a connection because the limit of 111 connections had been reached. This is for a site that had over 100,000 active participants the previous year. I will go as far as suggesting the infrastructure supplier's arrangement may have changed as a result of the sponsorship change. Hopefully they'll get it sorted.

The lesson learned here. Never underestimate your audience and your processing peaks. Most of all ensure that your test systems have a reasonable subset of data to stress a mass participant application. Three guys in a room pressing the submit button as often as possible is not load/stress testing.

IPv4 (and the move to IPv6). Another in a long line of bad designs, or is it? It is alleged that we will run out of IPv4 addresses in the next few years. Some claim that this is the millennium bug of the 2000s. If it is, and given the age of computing in general, we might have a few more yet. Sounds not too different to the 100-year storm scenario that happens every 4 years in these days of global warming.

I guess this one could have also been avoided, but the caveat once again was that when IP addressing got going the internet was quite young and connected devices were far fewer in number than they are today. Lesson learned! What do you think?

......

With all these though, IMHO it was human expectations that were at fault, or shortcuts being taken to save a few bytes here or there. My advice to all application developers is to ensure that your applications have database fields capable of supporting data sets beyond your wildest dreams, and to ensure that your application architecture is fit for purpose.

Bad design really does suck. Ask your end user(s).

Thanks for reading.
Lee.

p.s. I titled this Part I as I guess that now I have finally got this off my chest there will be other scenarios out there that will jump out shouting - BLOGGERTUNITY. Actually, one has just emerged but I will save that for another day.

Tuesday, February 24, 2009

*Arrays can be quirky in 2e

Hiya,

I have just become aware that *Arrays do not support the correct ordering sequence for negative key values. This has been referred to CA Support (2nd Level) for investigation.

My scenario is an array that is ordered based on the difference between two values. For the purposes of a meaningful example, let's pretend that our array is a league table for the English football Premier League (Soccer to some). If your game is rugby or another sport then you can draw your own comparisons.

The scenario is that after 2 games of the season I have 5 teams on 4 points. These teams are placed 1 to 5 on the table. Let's further embellish this example and assume that my team, Tottenham Hotspur (Spurs), are at the top. :-)

Team         Points   GD (Goal Difference)
Tottenham    4pts     +78
Liverpool    4pts     +5
Everton      4pts     0
Wigan        4pts     -4
Chelsea      4pts     -8
......
Arsenal      0pts     -78


Apart from the obvious good start by Wigan and the strange GD for two games, I believe the example table to be a fair reflection of the real world, with Tottenham at the top. COYS. Blue and White army. Stand up if you hate Arsenal.

If I were to create an *Array in DESCENDING order with the keys of Points and GD, my array would sort itself as follows:-

Team         Points   GD (Goal Difference)
Tottenham    4pts     +78
Liverpool    4pts     +5
Everton      4pts     0
Chelsea      4pts     -8
Wigan        4pts     -4
......
Arsenal      0pts     -78


The array doesn't handle the negative sign correctly: although it preserves the sign, it is unable to sort by it. Note the order of Chelsea and Wigan.

Until this is fixed, a simple workaround I have used is to *ADD an arbitrary figure to the GD to ensure it is a positive value. In order not to blow a limit (as over a season a team's GD can reach -100), I need to cater for a higher number, so I chose 10,000 as the offset for the *array.

At the point of display, which happens to be a DSPFIL, I simply deduct 10,000. Simple workaround, and hopefully a simple solution that will be fixed some time in the future. Another option, which I attribute to my colleague Chris Koloszar, is to apply the 10,000 offset to the key and also keep the original value as an attribute of the array.
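Here is the workaround sketched in Python, with Python's sort standing in for the 2E *Array key sequencing:

    # Key the array on GD plus a 10,000 offset so the stored key is never
    # negative, then deduct the offset again when loading the DSPFIL.
    OFFSET = 10000

    rows = [("Tottenham", 78), ("Liverpool", 5), ("Everton", 0),
            ("Wigan", -4), ("Chelsea", -8)]

    array = sorted(((gd + OFFSET, team) for team, gd in rows), reverse=True)

    for key, team in array:
        print(team, key - OFFSET)   # Wigan now correctly sorts above Chelsea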

My main concern is for those of you that have negative values but have yet to discover them.

I will post updates as I hear back from CA.

Thanks for reading.
Lee.

UPDATE HISTORY
==============

2:54pm (Same Day). I have had some quick responses from CA (Very impressed - Thanks Lynn). CA claim this to be working as designed. I am countering that it is a bug and was designed incorrectly. I hope that this will be fixed and I will keep you all updated.

What do you think?

Next Day - Referred to development. Not a trivial fix, but I am confident it will get a good look over. Thanks.

Monday, July 28, 2008

The context of programming

Recent events in my place of work have led me to ponder the concept of programming context once again. I suspect it is a pervasive concept, as I seem to come across it on a regular basis in quite different circumstances. Let me explain.

If I am asked to write a program that accepts two numbers and returns a third number, being the product of the two, then there is not a lot more I need to know. Perhaps knowing the possible range of input numbers would be useful, but really this is a pure mathematical problem and has no context.

If I am asked to write a program that accepts two numbers and returns a third number - the number of residential addresses in a database that fall between those two numbers - then there is quite a bit more I need to know. I need to know whether just street numbers alone should be checked, or whether street names should be included (5th Avenue, for example). Even within street numbers alone, what about flat numbers? It's a bit more complex than the first example as there is a context, i.e. what are we actually trying to achieve here?

Now in a third example, I am asked to write a program that accepts two numbers (x and y) and returns a third number, which is the number of active users who have been logged in between x hours and y hours. Again, the context is now complex. How do I define a "logged in user"? Do I define one interactive session as one user, or do I need to reduce this to unique users because some may be logged in more than once? What about "special" users such as system-supplied IDs? Should they all be counted, none, or only some?

But the third example is even more complex than I have shown so far. Consider that this function needs to work in a function test environment, in an integrated test environment, and in production. There are some processes that occur only in production, some only in test and some on both. Will this affect the outcome? Is testing on the test system going to be good enough to know it works in production?

Hang on a minute - aren't we talking about system programming? Well, maybe yes and maybe no. If this program is needed to manage software licensing, then it's a system program. But, if it is needed to manage the number of customer service representatives assigned to different parts of the call centre, then no it is not system programming. If it is being used to achieve load balancing for application service jobs then it could go one way or the other.

Now that was a somewhat contrived example, but it helps me to illustrate my point. In all three cases, take two numbers and return a third. The first example I would expect absolutely any programmer to be able to achieve. The second example I would expect any programmer to be able to achieve if complete requirements are provided. If the problem is only defined as I described, then you would need an analyst programmer. For the third example, who would you give the job to, generically speaking?

This is where I see a massive gap. I, myself, have been fortunate to have been involved in both application and systems programming fairly extensively and even if I say so myself I think I'm pretty good at covering off the sorts of issues described above. It also means I am frequently seeing other programmers who are failing to account for the "system" level factors.

In a specific recent case, a developer insisted that my team (who are a development & test support team) replace one version of a program with another so that it 'behaved like production'. That should have been the first red flag. (I was not involved at this stage so I don't know whether I would have caught this at the start.) Why was the test system behaving differently to production?

Well, the developer got his wish and proceeded to make his related code work. Meanwhile, large numbers of other people were tripping over the problems introduced. After several days of analysing the problems we concluded we had to put things back the way they were. To quote Spock - "The needs of the many far outweigh the needs of the few." This programmer was looking at far too narrow a context in defining what needed to be done. He had no concept of the roles this particular program was playing, nor the large number of dependencies on it. For instance, an automated regression testing suite completely failed because of the change.

But perhaps the most spectacular case of lack of context that I have ever encountered was in a previous role.

The product in question was enterprise software used all around the world, and it was incredibly complex. Customers had requested the ability to use off-the-shelf reporting tools (such as Crystal Reports) to create their own reports. The development organisation realised this meant less work on such things for us and considered it a good idea - but dangerous. Great, they can write their own reports, but how do you let them into a massive, complex database without (a) massive confusion and (b) the opportunity to corrupt it?

So a plan was hatched to deliver a new library (for self containment) of logical files (views) which would collate the data into meaningful constructs and, importantly, be read-only. My team (again in development & test support) figured out how to deal with this new library for the purposes of the testing done on it. For the most part we just manually created and destroyed these libraries as required and used some of our own toolset which, importantly, is not delivered to customers.

At some point I got to thinking...How are we going to deliver this? The initial response I got from the designer was "on a tape/CD with the rest of it." To cut a long story short, I soon proved that it is impossible to ship a library full of logical files. Period. Can't be done. I took this information back to the designer, along with a rough sketch design of a simple tool which could alleviate the problem, and also be useful within the development shop.

The response? "We didn't budget for that." * Sigh *.

In the end, I wrote a quick (hack) version of that tool on the day we packaged the software. Some months later someone contacted me saying that there was a bug in my code. I sent them to the designer to have it sorted out.

Thanks for reading.
Allister.

Monday, July 21, 2008

2E - Development Standards (Defensive Programming)

This is part two in the series, taking a look at defensive programming techniques and how they help you create reliable programs.

The following are guidelines for creating robust code.

Always check for a divide-by-zero (runtime) error by checking the divisor field for a zero value prior to performing the *DIV operation.

Never move numeric fields into a field with a smaller domain.
With RPG this can cause truncation of the value, and with RPG ILE pre-8.0 it will cause a runtime error.

Ensure that your iteration values and counters are large enough to cater for your anticipated maximum.

Ensure that your field sizes for database attributes are sized sufficiently to cater for the number of records anticipated.

Ensure that your arrays are sized to cater for the maximum number of array records anticipated, thus avoiding array index out of bounds issues.
Remember to balance this against not over-sizing the array and thus causing a performance degradation.

Always ensure that any substring operations utilising position and length parameters are within the range of the target field, thus avoiding substring out of bounds errors.
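To make a couple of the guards above concrete, here is a quick sketch (hypothetical helper names, with Python standing in for the generated RPG):

    def safe_divide(numerator, divisor, default=0):
        # Check the divisor before the divide, never after the failure.
        return default if divisor == 0 else numerator / divisor

    def safe_substring(value, position, length):
        # Keep position and length inside the range of the target field.
        if position < 1 or length < 0 or position - 1 + length > len(value):
            return ""                          # out of bounds: fail safely
        return value[position - 1:position - 1 + length]

    print(safe_divide(10, 0))                  # 0, not a runtime error
    print(safe_substring("ABCDEF", 2, 3))      # BCD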

Remember to check the function options for your function to ensure appropriate behaviour, especially close down program and reclaim resources.

Never use the WRK context in new programs. Use LCL and HLL.
If you choose to fix up old WRK fields, remember to check all other internals within your object and ensure that the fields aren't used elsewhere. This used to be used as a trick in the old days to bypass the parameter passing limits, pre-arrays and when structure files were a pain.

Avoid HLL user source and programs. If you do write user source for RPG, first convert the program to RPG ILE and write one user source. The sign of a well managed and maintained 2E model is the percentage of HLL code versus generated code. If your model is more than 5% HLL then you have issues and a history of developers who have misunderstood the purpose and philosophies of model-based development. IMHO.

Always pass parameters to user source. Do not rely on the generated field names.

Avoid use of CON context as these values are not available for impact analysis and localisation.

Avoid manual source modification. Use a program and the pre-processor directives to amend code automatically.


Source Modification – Special Notes.

Manual source modification must be avoided at all costs. If source is required to be overridden then a source modifying program should be written to automatically perform this function after generation and before compilation using the pre-processor.

In Summary:-

- Do NOT consider source modification unless absolutely necessary.

- Avoid the use of fields that use incremental counters for naming i.e. LCL Context YLnnnn.

- Avoid adding parameters above a field declared as a modified field. Therefore, always try to ensure that parameters that are modified are at the top of the parameter declaration list.

- Try to avoid usage of fields larger than 32k. If 64k is required, consider looping in 32k blocks, as the 64k limit would one day be exceeded.

- Consider a naming standard to help you to easily identify a modified program and its modifying program.

- Consider centralised methods to ensure source modification programs have been successful, rather than depend on a developer having to manually check the modified source.

Thanks for reading.
Lee.

Sunday, July 6, 2008

2E - Development Standards (Performance)

This is the first part in a complete series of articles I intend to post regarding development best practices and standards for the CA 2e (Synon) development tool. The aim of publishing the guides is to educate, collaborate and enhance the standards by receiving community feedback. After all, no one person can know everything but the wider community can contribute.

Many of these tips I have learnt over the years and quite a lot have been sent to me by interested parties around the world. A big thank you to you all.

I will publish the complete documents on the 2E wiki (soon) with full acknowledgements. (See my links section below).

In the meantime I will publish some selected extracts on this blog just to get your thought processes flowing.

Performance

There are many considerations when programming for performance in CA 2e. A few are highlighted here. This is by no means an exhaustive list. My next technical post will relate to Defensive Programming techniques......

I'd be interested to hear of others from the community in general and would be happy to include them on this blog and the final wiki document.

Drop unused relations where possible and set others to the appropriate level, i.e. OPTIONAL or USER etc. This cuts down unnecessary code and processing, as well as making your action diagrams more easily navigable.

Avoid the FLD context for passing parameters in non command-line type programs. This will use fewer PAGs.

Tactically use *QUIT to reduce I/O, especially when programs have lots of nested validation logic. Use *QUIT inside subroutines to halt further processing. This provides cleaner message feedback to the end user and reduces response times.

Avoid Dynamic Selection Access Paths.

Avoid Virtuals, especially Virtuals with relations to files that themselves have Virtuals.
Virtuals have their place, e.g. query access paths or scenarios where they are always used. Best practice in this area is to avoid virtuals and to get reference data as appropriate.

Ensure programs do not close down if called iteratively, e.g. in a loop or inside USER: Process Record for a RTVOBJ or PRTFIL etc. This typically applies to externalised RTVOBJs.

Consider sharing ODPs (Open Data Paths).

Consider usage of shared subroutines.
This minimises the amount of code, reduces object size, and makes debugging easier.

Consider usage of Null Update Suppression within your CHGOBJs. Very useful for batch programs.

Avoid unnecessary selector/position fields on subfile selectors.

Avoid contains (CT) selection on control panels.

Ensure arrays are appropriately sized.
Too large and they will consume more memory.

Reduce file I/O by loading small, regularly read reference files into arrays upon opening the program. Good examples here would be files like TRANSACTION TYPE or XYZ RULES.
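The pattern, sketched in Python (read_all_transaction_types is a hypothetical stand-in for reading the reference file):

    def read_all_transaction_types():
        # Stand-in for the one-off read of the TRANSACTION TYPE file.
        return [("INV", "Invoice"), ("CRN", "Credit Note"), ("PAY", "Payment")]

    TRANSACTION_TYPES = dict(read_all_transaction_types())  # loaded at program open

    def transaction_description(code):
        return TRANSACTION_TYPES.get(code, "Unknown")       # memory hit, no file I/O

    print(transaction_description("CRN"))  # Credit Note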

Reduce I/O by getting reference data only on key change. This will depend on the chosen access path, of course.

When writing to the IFS, write fewer, larger chunks of data rather than multiple small chunks. The overhead is in opening, positioning and closing the IFS file.

Pass reference data down through the call stack rather than re-retrieving it in the lower-level function.

Consider physical file access paths for fix programs (version 7.0+) or write SQL to perform the basic updates.

Use OS/400 default values to initialise fields on a database file rather than write a program.

CHG/CRT v CRT/CHG. Use the appropriate one depending on the likelihood of the record's existence.

Avoid *CONCAT and *SUBSTRING native in 2e for long string manipulation. If concatenating long strings, it is possible to keep a counter of your current position in the string, saving the concatenation operation the time it takes to find the current end of the string.
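The counter idea, sketched in Python (a stand-in only; in 2E you would hold the position in a local work field):

    # Keep a running position into a pre-allocated buffer so each append
    # goes straight to the known end of the string, rather than re-scanning
    # the whole string to find it on every concatenation.
    buffer = bytearray(1000)
    pos = 0

    for chunk in (b"ORDER 1;", b"ORDER 2;", b"ORDER 3;"):
        buffer[pos:pos + len(chunk)] = chunk   # write at the known position
        pos += len(chunk)                      # advance the counter

    print(buffer[:pos].decode())               # ORDER 1;ORDER 2;ORDER 3;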

Avoid using retrieve message to build strings in high-usage code.

Consider DSPFIL instead of DSPTRN, especially if de-normalisation is designed into the database with any totals duplicated into the header record.

Do not perform a *RTVCND for blanks.
Check for blank first in the action diagram.

Consider a database file field for *RTVCND if appropriate.

Be aware of the effect of a scan limit with strict selection criteria, as the screen will not pause load processing until the subfile is full or EOF is reached. This is particularly noticeable on large files.

Consider the naming conventions of your access paths to ensure that underlying indexes can be shared when key subsets are apparent, and ensure that they are built and implemented in the correct order to reduce the number of indexes.

Ensure access paths have the correct maintenance option, i.e. *IMMED, *DLY or *REBLD.


Thanks for reading.
Lee.

Thursday, March 20, 2008

D.I.Y and Project Management fusion

Whilst most people I know are off on holiday this weekend (Easter), I have the unenviable pleasure of decorating my house. Like most people, I have been doing this for what seems like an eternity. I wouldn't say I am a DIY addict, but I have completed my fair share of decorating rooms over the years.

So this weekend, over the 4 days, I have to decorate our hall, landing and stairs, covering the ceiling, walls, woodwork and doors; fit new door handles; hang pictures; then prepare a bedroom and decorate it ready for the new carpet that is being laid on Friday week.

Now, I actually quite enjoy decorating, and once this sprint is complete I will have conquered the majority of the house. The people before us clearly never bothered with general house maintenance and as such we have had a few issues, but I am pleased to say that it will soon look stunning and be a joy to live in.

The reason for the rush is that we have guests coming from overseas. I say overseas; I should say our homeland. We emigrated a few years ago and are lucky enough to have regular visitors from home. The only real trouble is that, due to the regularity of visits, people don’t notice progress. Especially those unpainted walls or the lack of carpet in such and such an area, etc.

I call it progress as I know the amount of effort that is required to make a room look great. I could easily have overpainted the old walls and had a reasonable finish. But I am an IT guy and I notice these holes in the walls, the creases in the wallpaper above the door and window corners. I notice the way the light casts shadows if the plastering is uneven and a light is on in the other room. I notice those blemishes on the wall that will be covered by a picture. Even though these blemishes are covered, I know that underneath they are still going to be there.

Perhaps, just a little, I am too much of a perfectionist when it comes to decorating, but I justify that by my software development background. I can't craft code or applications with a bad user interface. Sometimes I need to get under the covers of the code and reorganise and repair previous faults and issues. I wish that the previous owners of the house had invested a little time in their maintenance strategy!!!!!

As I find myself re-engineering virtually every aspect of every room I can't help but wonder why those lazy sods did nothing.


Money could have been a factor, as could apathy, but just like with computer systems, a little bit of routine maintenance is much better than a re-architecting or re-building project.

Of the houses I have owned and renovated over the years two have stood out as being maintenance nightmares. After analysing the small amount of data I have available my only logical conclusion is to never buy a house from a couple whose surname starts with ‘T’.

The Tibbetts and the Tankards. You know who you are!!!!!!!!!!!!!!!

I have to plan to do some things in the most efficient order. I need to do detailed preparation for some areas, demonstrate my good time management skills, ensure key items are performed as per the critical path and, most importantly, escalate any slippage in the project to the project manager ASAP. In this case, my wifelet.

There is also the added pressure in that some of the tasks need to be performed out of standard business hours. This is to avoid kiddies' fingers touching freshly painted surfaces and to minimise the odour of the paint fumes permeating throughout the house. So Saturday night's glossing will commence from 7pm until the small hours. If it is anything like before (another house) then I will see daylight before I see the bottom of the paint can.

Actually that reminds me. I do need to remember to check the paint levels, application tools (Brushes), removal and cleaning tools (Sandpaper and Turpentine) before I start.

This is a pre-commencement artefacts scan. Nothing worse than getting dressed up (old clothes) ready for the painting effort, only to realise that there is a fraction of the paint required to do the job. Then you have a decision to make. Do I drive to the DIY store wearing these old paint-ridden clothes, or do I change into something more practical for the purpose?

I should be OK with resources, i.e. me. Anyhow, adding additional resources to a project at this late stage tends to make it later. And with the dependencies between some of the tasks, adding additional resources now won’t help. Some things just need to be done in a linear fashion.

I remember an ex-colleague of mine from years gone by called Yuriy. He was a wonderfully intelligent software technician; he had his quirks and an abundance of quality phrases. One that stood out in particular was “Lee, it takes nine months to make a baby, you cannot add nine women to the project to get it done in a month”.

Now Yuriy is quite right with this statement, although I guess if you do add nine women to the project then you have a higher probability of creating that baby and much more fun during the project initiation phase.

So touch wood, I should be OK this weekend. The resultant smile from the wifelet, the sense of personal satisfaction and the thought of those visitors saying “Wow! Well done Lee, this looks nice………” should make it all worthwhile.

This most certainly seems like project management to me, and apart from the deliverables (decorating) and a lack of written ‘signed off’ requirements ("Just get it painted"), this could be one of a hundred projects I have completed over the years.


So, always plan your projects, do your analysis and seek approval before you commence. My background in software development and management should come in handy even if it does feel like a busman’s holiday.

Happy Easter.

Thanks for reading.
Lee.

Tuesday, March 18, 2008

The new millennium Bug?

There are only 17,576 combinations available when allocating a TLA (Three Letter Acronym) for airport codes. Part of the challenge is that the code should also be meaningful and identifiable; for instance, everyone knows that London Heathrow is LHR and that Berlin in Germany is BER.

If you don't believe me take a look at this site http://www.world-airport-codes.com/.

After a while some of the codes appear confusing. Hwanga in Zimbabwe has the seemingly obvious code of WKI. I assume this is pronounced Wiki.

This may be of interest to some of the IT geeks reading this, assuming of course that the introduction of Google’s Knol has obliterated (or will obliterate) the Wiki concept. I can never work out why open source stuff like this "Wiki" is so damn difficult to maintain. I guarantee that Google or Microsoft will make this easy for the Joe Bloggs general public to use. I can personally hear the death knell for Wiki already, largely IMHO its own fault for keeping it geeky and for the myriad of different syntax styles that are available.

Anyhow, back to airports. With over 9,000 airports registered in the database to date and our insatiable appetite for travelling around the world, it is likely that more and more airports are going to be built, each requiring yet another unique, meaningful code.

Presently, these codes do not include numeric characters, so the basic maths tells me that there are 26x26x26 = 17,576 combinations available. This is stated with the assumption that, unlike car licence plates, we do use every letter available in the alphabet.

So what is going to happen come the day when we have used up all these codes? We could begin to use numeric characters; however, the numbers 0, 1, 2, 3, 5 and 7 are unavailable due to their similarities with O, I, Z, M (sideways), S and L. Also, unless we have taken a big step into the future, a code like KN9 really sounds like it should remain in a novel by Arthur C Clarke rather than belong to a domestic airport in deepest Taiwan.
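The basic maths, for those playing along at home (a quick sketch using the four digits the rules above leave us):

    print(26 ** 3)        # 17576  - today's three-letter codes
    print((26 + 4) ** 3)  # 27000  - adding the usable digits 4, 6, 8 and 9
    print(26 ** 4)        # 456976 - a fourth character, discussed below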

That said, there is more than one way to skin this cat.

We could be tempted to extend the size of the code from, say, 3 characters to 4, or perhaps more. However, this would require a huge amount of effort to synchronise all the airline ticketing systems around the world, not to mention:-
  • Online and published guides.
  • Signage (i.e. Welcome to LAX).
  • All those travel agents who for years have remembered these codes.
  • All those flight anoraks who have travelled to every airport known to humankind.
  • The humble fan website and all those pub quiz questions that have been written and are now negated.
All this hassle because someone decided to save a byte or two when naming the airports in order to save, at the time, valuable disk space. The irony being that this is the same disk space that the likes of Google and Yahoo now give you gigabytes of just for signing up for an online email account.

It doesn't stop there though; what about the issued tickets that are already in the public domain? The transition period for the changeover would be huge (up to a year). So now we have to include in the debate all those check-in staff and baggage handlers who would now have to remember two codes for every airport.

I would suggest that the majority of those 9,000 airports have been created in the last 50 years. I find it quite daunting that we might experience the aviation equivalent of the millennium bug. This may not be that far off, and once the developing nations reach full steam ahead with their exponential economic growth, you may well find yourself employed in the future to sort out the code written by those legacy developers.

Those same developers who didn't have the foresight to cater for tomorrow’s usage.

When we think about it, this has happened before. It was 20 years or so ago that it was concluded that 640kb of RAM was more than enough for any computing requirements in the home PC.

And those guys from the 70's that designed these airline systems have a lot to answer for. Not only did they earn good money back then with job security (outsourcing wasn't invented or trendy then). They now get rewarded for coming back in and fixing up their issues many years later.

So get travelling now. There might be some downtime in this industry and remember, someone has to pay for all this development. I pray to god (actually I don't, as I am an atheist) that you are using a 4GL like 2e or Plex to maintain this code. If you are using a 3GL you might have quite a lot of impact analysis to perform first.

Remember, you need to be extra cautious with your design and field domain management and regardless of what people tell you they want, look into the future and get it right first time.

Watch this space. You heard it here first.

Thanks for reading.
Lee.