Sunday, July 6, 2008

2E - Development Standards (Performance)

This is the first part in a series of articles I intend to post covering development best practices and standards for the CA 2E (Synon) development tool. The aim of publishing these guides is to educate, collaborate and improve the standards through community feedback. After all, no one person can know everything, but the wider community can contribute.

Many of these tips I have learnt over the years and quite a lot have been sent to me by interested parties around the world. A big thank you to you all.

I will publish the complete documents on the 2E wiki (soon) with full acknowledgements. (See my links section below).

In the meantime I will publish some selected extracts on this blog just to get your thought processes flowing.

Performance

There are many considerations when programming for performance in CA 2E; a few are highlighted here. This is by no means an exhaustive list. My next technical post will cover defensive programming techniques.

I'd be interested to hear of others from the community in general and would be happy to include them on this blog and the final wiki document.

Drop unused relations where possible and set others to the appropriate level, i.e. OPTIONAL or USER etc. This cuts down unnecessary code and processing, as well as making your action diagrams easier to navigate.

Avoid FLD for passing parameters in non command-line programs. This will use fewer PAGs.

Tactically use *QUIT to reduce I/O, especially when programs have lots of nested validation logic. Use *QUIT inside subroutines to halt further processing once an error is found. This provides cleaner message feedback to the end user and reduces response times.
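2E action diagram code cannot be shown directly here, but the underlying idea translates to any language: stop validating as soon as a check fails, so later checks (and any file reads they would trigger) are skipped. A minimal Python sketch, with invented field names and checks purely for illustration:

```python
KNOWN_CUSTOMERS = {"C001", "C002"}  # stands in for a reference file read

def validate_order(order):
    """Validate an order, stopping at the first failure (the *QUIT idea):
    once an error is found, later checks and their I/O are skipped."""
    if not order.get("customer_id"):
        return ["Customer is required"]       # quit early
    if order.get("quantity", 0) <= 0:
        return ["Quantity must be positive"]  # quit early again
    # Only now pay for the expensive lookup (a database read in 2E terms)
    if order["customer_id"] not in KNOWN_CUSTOMERS:
        return ["Unknown customer"]
    return []
```

The user also gets one clear message per attempt rather than a screen full of cascading errors.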

Avoid Dynamic Selection Access Paths.

Avoid Virtuals, especially Virtuals with relations to files that themselves have Virtuals.
Virtuals have their place, for example on query access paths or in scenarios where they are always used, but best practice in this area is to avoid virtuals and retrieve data as appropriate.

Ensure programs do not close down if called iteratively, e.g. in a loop or inside USER: Process Record for a RTVOBJ or PRTFIL. This typically applies to externalised RTVOBJs.

Consider sharing ODPs (Open Data Paths).

Consider usage of shared subroutines.
This minimises the amount of code, reduces object size and makes debugging easier.

Consider usage of Null Update Suppression within your CHGOBJs. Very useful for batch programs.

Avoid unnecessary selector/position fields on subfile selectors.

Avoid contains (CT) selection on control panels.

Ensure arrays are appropriately sized.
Too large and they will consume more memory.

Reduce file I/O by loading small, regularly read reference files into arrays when the program opens. Good examples would be files like TRANSACTION TYPE or XYZ RULES.
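The equivalent pattern in a general-purpose language is a small in-memory cache populated once at start-up. A hypothetical Python sketch (the file name and fields are invented):

```python
TRANSACTION_TYPES = {}  # loaded once at 'program open', read many times

def load_transaction_types(rows):
    """One pass over the reference file when the program starts."""
    for code, description in rows:
        TRANSACTION_TYPES[code] = description

def describe(code):
    """Every subsequent lookup is a memory access, not a file read."""
    return TRANSACTION_TYPES.get(code, "** unknown **")

# Simulate the initial load from the TRANSACTION TYPE file
load_transaction_types([("INV", "Invoice"), ("CRN", "Credit note")])
```

For a file of a few dozen records read on every detail line, this turns thousands of reads into one.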

Reduce I/O by getting reference data only on a key change. This will depend on the chosen access path, of course.
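When records arrive ordered by the reference key, the reference record only needs re-reading when that key changes. A rough Python sketch of the idea (the field names are invented):

```python
def enrich(transactions, fetch_customer):
    """Attach customer data, re-reading the reference record only when
    the key changes between rows (assumes rows arrive in key order)."""
    last_key = None
    name = None
    out = []
    for txn in transactions:
        if txn["customer"] != last_key:
            name = fetch_customer(txn["customer"])  # one read per key group
            last_key = txn["customer"]
        out.append({**txn, "name": name})
    return out
```

With ten thousand transactions spread across a hundred customers, this is a hundred reference reads instead of ten thousand.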

When writing to an IFS file, write fewer, larger chunks of data rather than many small chunks. The overhead is in opening, positioning and closing the IFS file.
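The same chunking idea, sketched in Python rather than 2E (the function and parameter names are my own): accumulate lines and write them in large batches, so the open/position/close cost is paid once rather than per line.

```python
def write_buffered(path, lines, chunk_lines=1000):
    """Open once, write a few large chunks, close once, instead of
    paying the open/position/close overhead per line."""
    with open(path, "w", encoding="utf-8") as f:
        buffer = []
        for line in lines:
            buffer.append(line)
            if len(buffer) >= chunk_lines:
                f.write("\n".join(buffer) + "\n")
                buffer.clear()
        if buffer:  # flush the final partial chunk
            f.write("\n".join(buffer) + "\n")
```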

Pass reference data down through the call stack rather than re-retrieving it in the lower-level function.

Consider physical file access paths for fix programs (version 7.0+) or write SQL to perform the basic updates.

Use OS/400 default values to initialise fields on a database file rather than write a program.

CHG/CRT v CRT/CHG. Use the appropriate one depending on the likelihood of the record's existence.
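The point is which path is the common one: attempt the operation that usually succeeds first, and fall back to the other only on the rare error. A simplified Python sketch using a dict in place of a keyed file:

```python
def chg_crt(store, key, values):
    """CHG/CRT: try the change first; create only on 'not found'.
    Cheapest when the record usually already exists."""
    if key in store:
        store[key].update(values)   # common path: one update
    else:
        store[key] = dict(values)   # rare path: create

def crt_chg(store, key, values):
    """CRT/CHG: try the create first; change only on 'duplicate key'.
    Cheapest when the record usually does not exist."""
    if key not in store:
        store[key] = dict(values)   # common path: create
    else:
        store[key].update(values)   # rare path: update
```

Picking the wrong order means almost every call takes the failure path before doing the real work.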

Avoid *CONCAT and *SUBSTRING native in 2E for long string manipulation. When concatenating long strings, keep a counter of the current position in the string; this saves the concatenation operation the time it spends finding the end of the string.
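The counter technique, sketched in Python: each append is placed directly at the tracked position instead of scanning for the end of the (mostly blank) fixed-length string each time. The function and sizes here are illustrative only.

```python
def build_string(parts, size=200):
    """Append parts into a fixed-length buffer while tracking the current
    position, so each append is a direct placement rather than a scan
    for the end of the string (the hidden cost of repeated *CONCAT)."""
    buffer = [" "] * size
    pos = 0                          # the counter: where the next part goes
    for part in parts:
        buffer[pos:pos + len(part)] = list(part)
        pos += len(part)             # no end-of-string scan next time
    return "".join(buffer).rstrip()
```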

Avoid retrieving messages (RTV message) to build strings that are used heavily.

Consider DSPFIL instead of DSPTRN, especially if de-normalisation is designed into the database with any totals duplicated into the header record.

Do not perform a *RTVCND for blanks.
Check for blank first in the action diagram.

Consider a database file field for *RTVCND if appropriate.

Be aware of the effect of a scan limit with strict selection criteria, as the screen will not pause load processing until the subfile is full or EOF is reached. This is particularly important for large files.

Consider the naming conventions of your access paths to ensure that underlying indexes can be shared where key subsets are apparent, and ensure they are built and implemented in the correct order to reduce the number of indexes.

Ensure access paths have the correct maintenance option, i.e. *IMMED, *DLY or *REBLD.


Thanks for reading.
Lee.

Monday, June 30, 2008

Knowledge capture & use in technical support communities - Part 2

In Part 1 I discussed the problems facing the technical support team with overworked experts and a need to transfer their knowledge as efficiently as possible.

In Part 2 I will discuss how to successfully capture and store this knowledge in an efficient and, above all, useful way. I'll lead off with a brief overlap from last time as a reminder of where we got to.

The 'Virtual Expert'

From what has been discussed so far, it is clear that expert knowledge is required, but that tying up the expert in this process is seen as unproductive in the current climate. We cannot get away from requiring time from the expert, but we can minimise this time and capitalise on it by recording the knowledge in the right way.

The answer lies in recording the expert knowledge (on paper or, more usefully, electronically - see later) in such a way that it is as close as possible to the over-the-shoulder commentary.

There is often still a need to use numbered steps when accomplishing a task. Such steps provide structure and sequence and help with mental tracking when performing the task. There is no reason, however, why each step cannot contain more than simple 'input, output' or 'action, reaction' type information.

For maximum benefit, each step should be written in conversational language and explain what the user is doing, why they are doing it, what the expected outcome should be and at least make reference to any unusual, but known, variations.

Furthermore, before any of the steps, there should be an introductory section which describes why the user would perform the task, what pre-requisites there may be and definitions of terms, systems and the like. After the final step, make mention of any further tasks that may be a logical progression from the task described, but which do not form part of this process.

Don't take anything for granted

Whilst we are talking about capturing expert knowledge, it is important not to lose sight of the basics. Any documentation is devalued if it makes too many assumptions. In creating a documentation repository, an audience level should be decided - such as 'technically competent', or 'beginner' - and all documents should be written for that lowest common denominator. It is easier for a more expert user to skim over known material than it can be for a new person to work out the undocumented basics.

It is important to include examples in the documentation. Where possible, have the example show the most common scenario, as it is most likely that staff new to the task will use the examples. It is also worth giving additional examples if there are significant variations in a step. Providing examples helps the user to get closer to the over-the-shoulder situation.

The ultimate test for the documentation is to give the process to a person who is at this 'lowest level' and have them perform the task. You will be surprised by some of the information you have taken for granted in your early drafts. I know I was.

Structuring the documentation

For ease of maintenance, it is important to only ever store a piece of information in one place. To help achieve this structure, it is useful to allow for two document types in the repository - reference documents and process documents.

Process documents contain steps describing how to perform a task. Reference documents contain (mostly) static information that supports one or more processes.

It is often necessary to refer to tables of information (such as a list of files, describing their usage) from more than one process document. By separating this type of information into a reference document, it can be referred to by multiple process documents without increasing the maintenance burden through multiple copies. Additionally, when the table requires maintenance, it is easier to locate (residing under its own title) and the maintenance can be performed without danger of corrupting the process documents. When properly structured, maintenance of the reference document can be accomplished without knowledge of the referring process documents.

Whilst reference documents tend to represent pure data, it is still important to keep the conversational language in mind. There may be naming conventions or other conventions which are being followed for the data and it is important to note this in the reference document to complete the picture for the user of the information, and equally importantly for the maintainer.

It is also beneficial to factor out sub-processes into separate process documents and refer to them from the major process documents. This is of value where a sub-process is part of more than one major process.

The major benefit of having information in only one place is realised when errors are amended or updates are applied. These have to be done in only one location and all related processes are automatically catered for, as they simply reference to this single occurrence.

Capturing is understanding

The process of capturing information is time consuming and is best not left to the individual experts. Remember that they don't have that much time. Also, too many authors can devalue the repository by differing styles and levels of language.

The best solution to this is to have a single person (or possibly two or three) to build the documentation repository. This co-ordinator is then responsible for collation, setting style and keeping the language consistent. This person should not be expected to author all of the documents, but must be able to understand at least the broad concepts involved in order to ensure that appropriate structure is followed.

Each expert should be expected to provide a draft of the process or reference data, in a form approaching the final requirement. In some cases, where the co-ordinator's knowledge is good enough, they may author the document, but it should always be checked by the relevant expert.

Electronic storage for fast access

Following the structuring process above introduces one significant disadvantage in a paper-based documentation repository. Frequent referencing to other documents causes the reader to flip pages or have multiple documents arranged on the desk in order to complete a single process.

In Part 3, I will discuss some real world solutions to electronically storing, maintaining and delivering the captured knowledge.

Thanks for reading.
Allister.

Sunday, June 22, 2008

The "Wooo! moment" Factor

This is another one of my general life blogs and follows up from my recent article about what makes a good programmer where I refer to a ‘Rocky Balboa’ moment that encompasses everything about being a great computer programmer.

I would like to clarify that this post has nothing to do with the incessant number of reality talent shows and is not in any way linked to or endorsed by those in the entertainment industry.

However, I recently went to the Smackdown/ECW world tour event here in Auckland, New Zealand.

Like most of the other hardcore WWE fans, I registered for the internet presale, logging in a minute or two before midday and continually pressing refresh (F5) in Internet Explorer until the ticket selection page popped up. A quick combo-box click later, an anxious wait ensued whilst the system allocated my tickets, and then the “Wooo! moment” came.

I knew it was a full on “Wooo! moment” as it did turn the heads of a few people in my office.

The reason being that I was lucky enough to get front row tickets, seat numbers 1 and 2, which was not only the closest you can get to the ring but also the closest you can get to the entrance ramps where the wrestlers strut their stuff as part of the pre-bout entertainment. The arena held thousands and thousands of people, and to get the two best tickets in the house was reason enough for me to celebrate. Actually, to celebrate for my daughter, as she is the wrestling fan. I just have transient knowledge.

For anybody who follows WWE wrestling, you will probably know a guy called Ric Flair. He is a 16-time world champion as well as being regarded as one of the all-time greats. He is also famous in WWE circles for his tag line “Wooo!”. So much so that at the last three shows we have been to, everyone was shouting “Wooo!” as the excitement began to build in the arena, even though Ric Flair has never been there, having retired in March 2008.

Now for me, life is about experiences and the memories thereafter. We have both good and bad times that shape us all individually in some form or another, and one hopes that over the balance of our lifetime there are many more good times than bad. It is these good times that I affectionately refer to as the “Wooo! moments”.

Now, if we spend on average a minimum of between 50 and 70 hours travelling and working each week, I often ask myself why people put up with a job or a career that doesn’t provide them with “Wooo! moments”. I have been pretty lucky in this regard over the years. I am an analytical person and love computers. As a child at my local school jobs fair in 1983, I expressed that I wanted to be a computer programmer. After a few minutes searching the database (a list of jobs on paper in those days), I was asked if I wanted to do commerce: a funny choice at first, until you realise that it was the closest alphabetically to computing.

How times have changed.

Now everyone wants to do computing, and whilst there are now more areas in which to become expert, I also believe that computing remains at risk of being dumbed down. I say this because many people are getting into computing because they see a higher than average salary trend, because they only see the exciting parts of the job glamorised by Hollywood films, or because they see it as easy.

For me I got into computing because of the “Wooo! moments” and I continue to adore this line of work. But also as I get older I also find myself enjoying the fact that others around me are having their own “Wooo! moments”. I'm a little like grandparents who enjoy watching those cute little bundles known as grandchildren.

But of course it doesn’t stop there.

You may have a job that has only one or two “Wooo! moments” in an entire career span. A recent example was the mission to detect life on Mars. Some of those guys had been working on that mission for 10 years or more. But what a “Wooo! moment” when that million-dollar craft landed on Mars and started to do its stuff.

Wow!!!!

The screams of joy and relief were there for the world to see; I could hear them just by watching the footage on a 21” TV.

The “Wooo! moments” are what drive me to get up each day, so if you find yourself having fewer and fewer of these moments whilst at work...

Ask yourself why.

Work dominates and validates many of our lives, so you might as well enjoy what you are doing. But please don't moan to me about your job; do something about it.

Please, ensure that you do. Life is too short.
Thanks for reading.
Lee.

Saturday, June 14, 2008

The lunchtime effect and an insane piece of Job’s Worth.

The other day I had to visit the immigration department to renew my daughters’ residency visas.

This is a process that you have to repeat every time you renew your main passport, because if you want to re-enter the country you will definitely find it useful to have this little slip indicating that you are a legal alien tucked up nicely in there somewhere.

The process is quite simple. You bring your old and new passports, fill out a form and pay the fee.

Simple!!!

A short time later (minutes or hours) you leave feeling robbed but also happy in the knowledge that you’re able to travel to and from your adopted homeland.

The key, as you all know, is to avoid the queue, or at least pick a day when the most counters are open. We were going to the hospital, as my wife had an appointment for one of her ailments, so the time we had available between the school runs and the appointment was basically lunchtime.

Or to put it another way: rush hour. Usually when you have to go somewhere on a time constraint, you will pick the bad day. Well, for some reason it was empty on this occasion. We were through to the application triage officers within minutes. These officers check the forms and provide assistance before you get to a case officer.

This is obviously to avoid you waiting for ages only to find out that you have completed the wrong form, or worse still, used the wrong colour of pen.

At this stage I leant over the counter and enquired about the lack of visitors. You see, I have been to the immigration department before and joined a queue that left the building. To give you another indication: at the old building there was a portable café outside to serve food and drinks.

Apparently it was just a slow day. I wondered whether this was a result of what I refer to as the lunchtime effect. I am sure I am not the only one out there who thinks this way, although it could just be because I am an IT guy. Why was this room empty? Was it because others had assumed that lunchtime would be busy, thus avoiding the ‘so called’ busy period when there are more people and fewer staff?

Actually, who cares? I got the visas sorted in record time, but I am grateful to all those who were considerate enough to think of the lunchtime effect.

But the jobsworth moment is certainly worth writing about, for one reason alone: I am adamant that the person who came up with this rule was not an IT guy, as there is no binary representation of what I witnessed. No IT guy in the world could have come up with an answer other than 0 or 1 (on or off). And this, to me, is 5.66645645 and fifteen sixteenths.

As we were applying for three visas, the cost was $100.00 per application. This makes sense, I guess. Until you hear the triage officer ask, “Are either of you two (wife and I) applying for the visa also?”

Our answer this time around was “No”, because our passports renew at 10-year intervals and the kids' at 5-year intervals. “Shame,” was the response. She continued, “Because if one of you, i.e. the principal applicants, were applying as well, we could do this as one application with three dependents and you would only be charged a one-off fee of $100.00.”

So the logic is: if we add the extra two applications, they will produce 5 visas instead of 3, key in details for 5 people instead of 3, print, remove and secure 5 visas in the passports instead of 3, and they will do all of that for one price.

But as a principal applicant isn't applying, we need to treat it as 3 separate applications!

If someone can shed any light on this I would be grateful. Until then, I am proud to call myself a Software Development Professional. I certainly wouldn’t want to explain the aforementioned rule for a living or associate my name with inventing this process.

Thanks for reading.
Lee.

Tuesday, June 10, 2008

Knowledge capture & use in technical support communities - Part 1

This three-part article is adapted from one I wrote almost 5 years ago when much of what you will read about was fresh in my mind. This adaptation addresses only the passage of time and some points of style and meaning for a wide audience.

Whilst software development is the subject of this blog, let us not forget those who (typically in large organisations) support the developers and others.

The nature of technical support communities

Technical communities come in many forms, be they design teams, development teams or support teams.

Whilst design and development teams are largely about the creation process, they still have many day-to-day activities which are defined and repeatable. Support teams, although fulfilling an entirely different role, often have to create on a very short-term basis. So it can be seen that the different types of teams have similar requirements.

However, the support team seems, most often, to be the one to get out of control. The difference is that the support team is always working on a short time frame. In addition, support teams often become involved in project work and this adds to the complexity of the day-to-day activities, as the time frames are shortened still more.

Most often, you will find that staff in a support team are very good at what they do - they have to be to survive. Unfortunately, the higher the skill of the staff, the more reliant you are on those staff to keep the systems running. It is a difficult and time-consuming option to bring 'green' members into the team.

How many support managers have not recognised that documentation is a key part to the support process? I would wager very few. Fewer still, I propose, have succeeded in completing the documentation requirements within their team and reaped the kinds of benefits they were expecting.

Documentation, to the 'tech', is a four-letter word. I, myself, recall asking the question "Do you want me to document it, or do it?" Simple economies prevent the techs from having enough time to complete the documentation task and many welcome this excuse not to do it.

Another trait of support teams is the presence of experts. In virtually any support team, there will be experts in various disciplines. Most often, however, these experts are relied upon to provide most of the resource for fixing problems in their area of expertise, when they should, in fact, be called upon to share their knowledge.

Shared knowledge is a powerful tool. Experts will always be needed when particularly difficult or unusual situations occur, but the team as a whole should be able to leverage the experience to improve task turnaround times through a more even spread of the load.

Knowledge transfer

It has been documented in studies that the best way to learn something is to have an expert stand over your shoulder while you go 'hands on'. The reality of the situation in front of the learner, coupled with specific and pertinent comments or instructions from the expert gives the learner an experience often indistinguishable from the real thing. The learner also has the opportunity to ask direct questions in the context of what they are doing. Book learning, on the other hand, can only go so far with static examples and predetermined situations.

Perhaps the most important aspect of 'over-the-shoulder' learning, however, is that the expert is unlikely to simply recite steps by rote. There will be an accompanying commentary and usually a significant amount of reasoning on why things are done that way. This is very important in equipping the learner for when things do not go to plan.

Learning the steps of a process by heart is well and good when the process works. Most often, however, processes do not cover all possibilities and the rote-learner of the steps is going to come unstuck when an unforeseen, or simply undocumented situation arises. Unless the learner understands why they are taking the steps and what they should be achieving, they are almost as much 'in the dark' as prior to learning the steps.

Having knowledge about the nature of the process and the goings on under the covers helps get through many small deviations from the norm and also helps in issue resolution, as the learner is able to return to the expert with an hypothesis, or at least having done some basic checks suggested by the nature of the operation.

The key issue with this type of knowledge transfer is that, in the majority of cases, the expert is already overworked and has no time to spend standing over shoulders.

A secondary issue is that the expert may have to impart their knowledge, over time, to a number of different people, and this is inefficient.

The 'Virtual Expert'

From what has been discussed so far, it is clear that expert knowledge is required, but that tying up the expert in this process is seen as unproductive in most situations. We cannot get away from requiring time from the expert, but we can minimise this time and capitalise on it by recording the knowledge in the right way.

In part 2 of this article I will go into methods for capturing this knowledge in the most effective way.

Thanks for reading.
Allister.

Monday, June 9, 2008

By way of introduction...

Greetings fellow software developers, this is not Lee speaking! My name is Allister Jenks and I am sure some of you who know Lee will know me as well. Those who don't know me may yet have read my comments on Lee's posts - under the identity of "zkarj".

Lee has graciously allowed me to contribute to his blog and I hope I can bring you the same levels of insight and analysis that Lee has led off with. I look forward to your feedback too.

Sunday, June 8, 2008

My first blog alliance

It gives me great pleasure to introduce an ex-colleague of mine called 'Zkarj'. He has worked with IBM Power Systems (System i, i5, AS/400 etc.) for many, many years and is a true advocate of the platform in general.

He has written several articles/rants over the years and has been published online.

Zkarj has asked if I would like to accept posts from him on this blog. It is my first offer of co-authorship since the blog began, but one I certainly won't be turning down, as he has lots of very interesting things to say about many of the topics that I blog about. You can see that by the number of comments I get when one of my new posts hits his RSS feed.

I hope you enjoy reading his material as much as I enjoyed working with him.

Thanks for reading.
Lee.

Saturday, June 7, 2008

What makes a good software developer?

I have decided to move on from my current role after more than four years at my present company. My reasons are varied and plentiful, but as always the lure of a fresh new challenge commands the majority of my thoughts.

I have started once more on the interview merry-go-round, first with agents and then, in the coming weeks, with potential employers. This is an interesting time in my career and certainly a change I am looking forward to, albeit a little nervously, as I have only ever had three IT-related job interviews in my life.

During my early stages of interview with one particular agent, I was asked a really good open question: “What makes a good software developer?”. I waited no more than two seconds before I began rattling off my opinion. Normally in these situations you take time to consider what you want to say and then lead up to the answer.

This felt different.

I guess this is because although I have never answered this question before (personally or via my blog), I have hired enough developers and non-developers over the years to understand what I believe a good developer to be. After all, one of my own interview questions to potential new hires is “Why software development for a career?”

I ask this question because I want to know what motivated them to get into software development and what maintains that desire to be a software developer. At my last firm, a new project manager joined and we got talking about stuff. You know, the technical stuff. It was quite obvious to me that this guy didn’t want to be a project manager and that he still harboured that technical development desire. I knew this because, as a project manager, he would say things like “Worst case, I can write that program” or “Couldn’t we do this in x language or y language?”. It was pretty obvious to me that this guy couldn’t let go, and this is what I look for.

For me the number one thing is the passion. I want to see this in the eyes of the candidate as they express to me their achievements and technical prowess. I look for the body language that backs up these passionate views.

I have been part of and built software development teams. I have written in other posts that you do need a mixture of people at varying stages in their careers with a good balance of personal motivating factors. Passion is certainly the one I look for when I am considering the lead roles within a team. The reason being that I believe as a lead developer you must bring others on by example.

Other factors to look for, especially for a permanent employee, are:

* Longevity in the industry and loyalty to an employer or two.
* Proof of learning multiple languages and having the desire to adapt to development trends.
* Good understanding of general development concepts and practices.

These are pretty generic but with passion, loyalty, desire, adaptability and a good all round understanding of development I believe I can teach any developer the technology of the month.

Without these attributes I guess you could be selling your business short. If I had to choose one then passion is the one I would go for.

If you see a developer struggling with some code all day, but eventually they let out an enormous scream of relief as they finally solve their issue, jump up, and start punching the air in delight in the style of Rocky Balboa...

I’ll have that person in my team any day.

Thanks for reading.
Lee.

Saturday, May 31, 2008

The Great 3GL v 4GL debate - Part III

This is part III of a trilogy of articles regarding the usage and evolution of software development languages. Part I can be found here and part II here.

All of these technologies have issues to address. Twenty years ago we were all happy with green screens for business applications on centralised platforms; then came client server with Windows, and the distributed computing model became mainstream. Then along came the Internet and the return to HTML thin clients, and now the evolution once more leans towards Rich/Smart clients.

The irony for me is that I have witnessed many people move on from the 4GL world of the nineties to emerging 3GL (albeit object-based) technologies, i.e. J2EE (Java) and .NET-compatible languages.

With the extra layers of complication (some call it abstraction) added by business usage of the internet, I am seeing more and more tools coming onto the market that claim ‘code generation’ capabilities. You only have to look at the OMG’s ever-growing list to see that, once again, people are looking for the holy grail of application creation as projects overrun and costs escalate.

I do see a trend towards total code generation once more. IBM has launched a 4GL called EGL. This looks quite promising and might be worth a look, but to me it is not yet as mature as the others.

The difference between tools like Plex/2e and this new breed of tools is that the ‘so called’ newer tools generally only cater for the singular environment and often really only create the initial code that requires manual intervention and coding in the generated language. In my mind, these tools have yet to evolve as far down the road as Plex/2e.

Plex and 2e both have their unique selling points.

2E is pretty easy to use and probably has a 3-6 month learning curve for a developer to become very proficient; quicker with excellent training and in-house support. Software development room 101, item 3: always spend decent money getting a guru to help you set up your environment and train the developers. Too often mistakes are made in the early stages of application development. This is especially true when using new tools.

Plex will take longer (12 to 18 months) as it supports inheritance, shipped and custom business patterns, meta coding and many more target development platforms. It really is the Daddy of ARAD (Architected Rapid Application Development), hence the learning curve, but the payback after this is judged in weeks, months or even years off a development project's timeline. And with the great pricing of the tool and generators nowadays, it really is an option to help protect you against the constant upskilling costs associated with other technologies.

Also consider that localisation and application version partitioning are built into the tool, and that from a single-skill-set perspective your developers will always remain current. That said, you will always create the optimum patterns and platform-level code if some of your developers have the lower-level skills.

I have been programming computer systems in Plex and 2e for 16 years; these systems have used the best aspects of the tools and have always been database-focused applications.

These have been in Finance and Banking, Debt Management, Mortgage Application and Processing, MIS, Project Management, Time Recording and Environment Management. They were deployed on System i (now IBM Power Systems, with ‘i’ as the operating system) using RPG and RPG ILE, Java, or C++ server code, all with either C++ or Java (Swing) clients.

With .NET C# clients planned and C# server code already available in 6.0, plus the recent announcement of the WebClient partnership between ADC Austin and Websydian, the future looks really bright.

Time will tell what will happen; these battles are often won or lost not by the technologies but by the marketing budgets.

However, I know what playground I want to play in. And if you need a guru to help you, you should contact me.

Thanks for reading.
Lee.

Thursday, May 22, 2008

Where's the dishcloth?

Bugs!!!! Love them or loathe them, realistic developers understand that bugs are part of our everyday life. We have technical bugs, environment bugs, business logic bugs, integration bugs, somebody else’s bugs and, god forbid, stomach bugs.

Now, apart from the stomach bugs, who is responsible for clearing up this mess?

There are numerous approaches depending on the product(s) you have developed, your organisational structure and your focus on bugs in general. I prefer the ‘zero tolerance’ approach to bugs; however, others are quite happy to have a level of bugs in their code and apply risk and cost ROI calculations to determine whether the bug is rectified, and if so, when. I feel there is a whole post on that subject alone and I’ll save it for a slow news day.

Moving back to the tactics around who should be responsible for clearing up this shoddy code. If you work as part of a small team of developers, or as a lone wolf, it is likely you have little choice other than to get the developer who wrote the code to fix it up (look in the mirror). You are unlikely to have development support teams who act as dedicated bug fixers, or access to a stream of developers on the graduate recruitment programme who fix up the bugs as part of their development induction process. The latter two are certainly perfectly valid approaches, although a little old-fashioned in my view; after all, who trains up new recruits by only showing them how not to write good code?

Personally, I believe that the developer who created the code should be the developer who fixes the bug. Obviously this won’t happen if they have left or are away on annual leave or a significant amount of time has passed, but in general it would be good practice to follow this process through. There are many fine reasons for either approach and no doubt I will conclude with some views around this a wee bit later.

For now, I prefer to use the analogy of those everlasting work-surface ‘tea rings’ when referring to bug-clearing methodologies.

“Tea Rings!!!”.

Yes, you heard me correctly. Consider the communal kitchen in your office. You probably visit this vicinity between 4 and 10 times per day to make that cup of espresso stimulus or the relaxing afternoon chai tea.

The process is quite simple. You will carefully choose the serving vessel and may even warm it through first. You will likely compliment your brew with milk or cream and sweeten to taste, unless of course you actually listen to the advice of your dental hygienist and drink water only. Whilst queueing patiently for the kettle to boil like the quintessential englishman you will definitely have pondered your preferred order for mixing these ingredients. Water or milk first probably being the most important choice and certainly the one that has polarised the tea drinking world for generations.

More often than not this process is repeated throughout the day and with the exception of having to raid the dishwasher for a preloved teaspoon it generally goes without a hitch time after time after time. Software development generally pans out this way too. Once a developer becomes productive and uses your best practices they will be able to make a good brew (code) with no mishaps (bugs).

After all the effort of analysing, prototyping, designing, creating and ensuring adherence to your quality control processes, you are finally ready to move your code (brew) to production or systems testing. From time to time, though, there is that unsightly spillage around the base of the cup as you pick it up. These are the tea rings that are etched on every spare post-it note pad on your desk, or that coat the surface of that old CD-R you are using as your cup coaster, the same coaster that once contained the backups of your company’s servers.

So who is the best person to clear up this mess? As the creator, it should be a small matter of picking up the nearest dishcloth and wiping the work surface clean. But wait. When you look at the mess you notice that there are other tea rings there, some sugar mounds and a spattering of breadcrumbs from that cheese toasty you could smell from the other side of the office earlier. At this stage, do you clean this lot up as well?

You may elect to wipe clean your own mess only, expend a little more elbow grease and time and clean all of it, or choose to ignore the tea ring because, in the whole scheme of things, it is hardly noticeable amongst the remainder of the mess. For me there is only one satisfactory approach, and that is to deal with the issue as soon as it arises.

It only takes seconds to analyse the problem and take effective corrective action. If you choose to mop up all the mess then you must be aware of the dependencies of fixing up all the issues. What appears quite simple may take longer and if the mess is particularly ingrained you could actually damage the efforts of others.

Doing nothing, though, really isn’t an option either, as this creates an environment in which bugs are acceptable. Housekeeping is just as important in the office kitchen as it is in keeping your code and products bug-free. If you do favour separate teams or graduate programmes for doing the team’s dirty work, imagine for one moment how they feel knowing that they are merely cleaning up other people’s mess.

Lastly, how are your developers ever going to get better and improve your product if there are no consequences for producing shoddy code in the first instance?

Thanks for reading.
Lee.

Monday, May 12, 2008

The Great 3GL v 4GL debate - Part II

This is part II of a trilogy of articles regarding the usage and evolution of software development languages. Part I can be found here.

So what are the benefits, or otherwise, of using a 3GL over a 4GL, and vice versa? For me it certainly depends on all the usual factors that drive any technology decision: cost of product, support, flexibility, the human factor, tool lifecycle, vendor direction and target platforms being a few that come to mind instantaneously.

The Pros of a 3GL

Embedded or mission-critical applications like Air Traffic Control systems are generally handcrafted and more suited to a 3GL environment, as are operating systems, 4GL tools themselves (debatable), communications, hardware drivers and generally non-database applications. As the developers have access to all the APIs and are that step closer to the CPU, they generally have wider usage opportunities.

Access to a wider developer pool. Whilst there are probably thousands of developers for your chosen 4GL, possibly even tens of thousands, these tools simply do not have the numbers associated with mainstream development languages and IDEs. There are an estimated 4 to 5 million developers following the evolution of Java, and no doubt Microsoft can boast even more for its most popular products. That said, this also means that it is harder to find a guru within that skills ocean, not to mention filtering out those who have spent 15 minutes in the IDE and now claim some form of exposure on their curriculum vitae.

3GLs are quicker to react to emerging markets and development trends. Generally, the suppliers of these 3GL tools are inventing the future. They don’t often agree with each other, but they certainly have the advantage over the 4GL creator, who has to wait and see which technology actually matures beyond the marketing hype into mainstream best practice before committing to provide code generation for that area.

Flexibility. Languages at the 3GL level, depending on the targeted platform, have virtually no restrictions on the type of application that can be written or how it is written. This means that for applications where speed of performance is the critical measure of success, a 4GL will most likely fall short of hand-written, targeted code.

The Pros of a 4GL

Business-rules-focused development. Once you have learnt the code generator’s quirks, you mainly tackle your development from the business domain and allow the code generator to handle the technical implementation. With this comes a significant reduction in the amount of time required to build an application. Many will say that there are standards and frameworks that help with 3GL development. This is actually quite true; but also be aware that the code generator vendor will be skilled in the major best practices and will write more consistent code. Some may argue that the code is not as neat as code written by a good developer, and in that regard I quite agree. I will say that the underlying code will always be written in the same way and style; therefore, after a while all the developers will become conversant in how the code is generated, that is, if they want or need to understand. (See below.)

Complexity avoidance. A 4GL will protect the majority of developers using the tools from the underlying complexities of the generated language. Couple this with the ability to influence how the code is generated using patterns, and the ability to take the design model from the 4GL and transform it into other language code, and your business logic can truly be ported from platform to platform as trends become reality and your technical needs change.

Impact Analysis. For me this is one of the key features of using a 4GL tool. Generally, these tools use a database to store the design and program artefacts that are then transformed into language code. Every reference to every Field, File/Table, Access Path/Index/View and Function/Object/Program is stored in the repository, and a developer can track each and every item through to where and how it is used. This is a powerful feature that cannot be overlooked versus manual reviewing of language source files.

Trusting the generator. When I train people to use CA 2E or CA Plex, the defining moment for gauging a developer’s progress and understanding is the day they learn to trust the generator. As with any tool, a badly constructed function in 2E, for example, can create badly generated and non-compilable code. Once the developer realises that it is generally their fault if a generation of code fails, they’re ready to move forward. I have seen far too many 3GL programmers migrate to the 4GL paradigm only to get bogged down in the details of the code produced, yet they will trust the compiler without hesitation. The ability to change a shared function or the domain of a field, apply detailed automated impact analysis to identify all affected programs, and then press a button to regenerate and compile every affected program and database file is a very powerful feature.

The Cons of a 3GL

Slower, more expensive development. The very nature and size of modern 3GL languages, and their flexibility, is also their Achilles’ heel, as there are so many ways to resolve a programming issue, with literally thousands of opinions and many directions. In a nutshell, for certain types of applications, particularly those involving extensive usage of a database, the ROI for using a 3GL versus a 4GL is very poor indeed. To counter some of the cost debate, 4GL tools are generally more expensive to purchase, but the most expensive item in any development team is the human, even if it has been outsourced to an emerging development powerhouse.

You will spend more time debugging the application. A very good ex-colleague of mine once said, “If the art of debugging is the removal of bugs from programs, then programming must be the art of putting them there in the first place.” Because we rely on the developer to code all aspects of the application, issues are likely along the way. It is generally the developer’s responsibility to deal with memory leaks and memory usage in languages like Java or C++, whereas with a 4GL it would be the code generator’s responsibility.
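To make the point concrete, here is a minimal C++ sketch, illustrative only: the `CustomerRecord` type and function names are invented for the example. It contrasts the manual memory management a 3GL developer must get right by hand with the idiomatic alternative a consistent generator (or disciplined developer) would emit every time.

```cpp
#include <memory>
#include <string>

// Hypothetical record type, invented purely for illustration.
struct CustomerRecord {
    std::string name;
};

// Manual ownership: the developer must remember the delete. Omitting it
// (e.g. on an early return or an exception) is the classic 3GL-era leak.
std::string readNameManual() {
    CustomerRecord* rec = new CustomerRecord{"Smith"};
    std::string name = rec->name;
    delete rec;  // forget this and the record leaks
    return name;
}

// RAII alternative: ownership is handled automatically, so there is
// simply no cleanup step to forget.
std::string readNameRaii() {
    auto rec = std::make_unique<CustomerRecord>();
    rec->name = "Smith";
    return rec->name;  // freed when rec goes out of scope
}
```

Both functions behave identically; the difference is how much of the housekeeping rests on the developer’s shoulders, which is precisely the burden a generator takes away by emitting the cleanup consistently.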

Complexity. Once again, due to the size of these languages and their broad reach, it is unlikely that you will find developers who know all the aspects required to complete an application. Your staffing needs are generally much higher, and the learning curve for a 3GL is very significant indeed, since the developers must understand many technical as well as business problems.

The Cons of a 4GL

Vendor lock-in. Depending on the vendor, this can be quite a significant issue. If the vendor is too slow to react to emerging technologies, you will find yourself with a heterogeneous development environment and you will lose many of the advantages referred to above with regard to complexity protection and highly detailed impact analysis. Worse still, your vendor may well decide to stop production of the 4GL or choose other directions as the options for technology deployment balloon. These tools are often criticised as proprietary.

Flexibility. There will be limitations on the scope of applications that can be created by a single 4GL; there are of course others that target different platforms and purposes. Their flexibility is often measured by the lowest common denominator of the languages they have to support and generate code for. For example, different target languages may have differing maximum field lengths, meaning that for generic code construction in the 4GL, platforms x and y can only size fields to the limits of platform z.
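As a toy illustration of that lowest-common-denominator effect (the function name and the limits used below are invented, not taken from any real generator), a multi-platform generator effectively clamps a field’s size to the smallest limit among its targets:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical per-target maximum field lengths; a generic field defined
// in the 4GL model can be no larger than the most restrictive target.
int portableFieldLimit(const std::vector<int>& platformLimits) {
    return *std::min_element(platformLimits.begin(), platformLimits.end());
}
```

So if platforms x and y allowed, say, 32,766 and 65,535 characters but platform z only 4,000, every generically defined field would be capped at 4,000.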

Source Code. Many 3GL developers will argue that the code is not user friendly, bloated and often too generic in comparison to hand-written code. This can be true of some code generators and is certainly something that needs to be considered when choosing an approach for your development.

None of the above is by any stretch of the imagination a definitive list. Given time, I believe I could have produced a list of 20+ pros and cons for each approach.

Part III will discuss trends, fads and conclude the 3GL and 4GL debate with my own personal viewpoint.

Thanks for reading.
Lee.

Wednesday, April 30, 2008

The Great 3GL v 4GL debate - Part I

Ever since development languages were invented, we have sought ways of making the development of software easier. We have attempted to do this by raising the level of abstraction at which the developer creates code, building languages and tools that are more 'natural English' in terms of human interaction. On the other hand, we have also added extra levels of complexity with changing hardware, communications protocols, multi-tier server deployment, runtimes, middleware, messaging technology and language politics, and I haven’t even bothered to discuss the internet.

Regarding language politics, read anywhere on the internet about the great .NET versus J2EE debate, or perhaps commercial languages versus open source, and you will quickly realise how deep the divisions run among IT vendors around the world. You will see an IT community that is split pretty much down the middle, although if you want my humble opinion as it currently stands, I believe that we will once again see a shift towards packaged and guaranteed software over open source, and that Microsoft will eventually win the development language tools war.

This three-part article aims to discuss the evolution (not revolution) of software development languages, with particular focus on third- and fourth-generation languages, debate the pros and cons of these approaches, and then conclude with a few comments regarding some of the repeating fads as I see them today.

It wasn’t that long ago that the typical software developer would have been aged between 35 and 60, male, probably balding (So that’s me covered), university educated and employed within those same hallowed institutional walls since passing his exams, quite ironically with his non IT related degree. He would have been wearing white coats in the office, have bottle bottomed glasses, a pocket full of pens and answered to the name of geek or dork.

Well this is how Hollywood and the urban stereotype would have it.

A bit harsh if you ask me, but to be fair, they would have been fascinated by punch cards, seen value in paper tape with holes in it, and probably missed any fads of the times with regard to musical revolution. There certainly would have been very few ordinary people among them, and the number of women specialising in the field was countable on one hand.

Now, time has moved on, as has technology, and you can no longer tell an IT guy apart from your ordinary office worker. It actually amazes me that although we are making the art of software development easier, the extra layers of complexity should in theory have amounted to an increase in the numbers of geeky-looking guys, so much so that, if lined up ten abreast, a communist regime would have been proud to show off its IT military might with these millions marching in city squares across the world. But this hasn’t happened; IT in general is now a mainstream activity and the working environments are certainly more aligned to a typical office environment. With this mass adoption of IT skills in the workplace, I also believe that IT guys are now considered a corporate commodity, whereas 15 years ago the pay would have been relatively higher. How times are changing.

So we have worked hard to improve the scope and productivity of the average software developer. We have migrated from the punch card era to having keyboards, mice, laser pens and voice recognition input devices. We have languages that have evolved to make them more readable and understandable by a human. The days of everyone programming in assembler or other low-level machine/processor code began to change with the introduction of the 3GL languages of the day. COBOL, Fortran, RPG and Basic would be good examples here. I am sure that at the time some people embraced the new paradigm as much as developers have embraced Java, or are now embracing Flex/ActionScript, Ruby on Rails or C#, as the perfect way forward. There would also have been the doubters, and I guess the split would have been no different to many of the impasses that we see reported online and in periodicals.

Still, software engineering took time.

We are improving and continue to improve 3GL languages to this very day. We now have a whole hard drive full of productivity features embedded within our integrated development environments (IDE). Features like wizards, auto code completion, and syntax auto-correction were non-existent back then, let alone globally accepted standards and minimum requirements.
I would say that any developer working 20 years ago would never have thought that free/open-source (delete as appropriate) products like OpenOffice or Eclipse would be a reality. They could have conceived of software being given away as a loss leader for professional services, but a massive corporation like IBM giving away a product that it spent, and to this day still spends, millions of dollars on would have been considered insane. But this is the state of play today.

So when many thought that we had gone as far as we could with the evolution of the 3GL language, we once again raised the bar with the next great technology advancement. This time we evolved to 4GL languages, otherwise known as code generators, CASE (Computer Aided System Engineering) tools or ARAD (Architected Rapid Application Development). This was hailed as the end of the expensive IT developer; the marketing claimed that the typical end user could now get involved in the development of IT systems, returning the ownership and power of your systems to the business and, more importantly, driving it out of the hands of that lowly IT department.

The same IT department that through these times was still considered a cost overhead rather than a business opportunity enabler. Many of you may remember the days when the IT function reported to the financial controller. I believe that most IT people are artists who can’t draw and we use the creative parts of our brain to build beautiful code and systems. To think that you’d stifle (some may still continue to do) this creativity with the frigidity of accountant mentality still frightens me. Imagine the marketing or sales director reporting to that same accountant? Actually I can, ouch!!!!!!!

With the marketing hype, 3GL project overruns and increasingly tight deliverables the 4GL era was born and in my view this has created some of the more interesting debates in IT circles. The simple reason being that I would anticipate that for each platform/system available there would be numerous languages that are either compatible (Java and the JVM) or targeted (Compiled) that are considered the language of choice, each with their own hardcore developer following. There will also, more than likely, be a 4GL that targets that platform and I bet my left one that a maximum of 10% of the users of the platform use a 4GL over that of the 3GL.

Are these 10% the visionaries?

Well I guess that depends on the tools of choice, but no one denounces the 10% of personal computer users that use the Apple Mac and all its gizmos.

You also have to consider that many of these 4GL languages evolved during a time of single-platform computing, i.e. there would be a 4GL that would target the complete application development cycle. The tools were capable of constructing everything from the database, screens and reports through to the application’s menus. I have had experience developing in both 3GL and 4GL languages and I believe I am well placed to comment accurately on both approaches. So as IT has evolved, so have many of these 4GL tools.

The question is do you choose a 3GL or a 4GL?

This is still as fiercely debated online and at technology conferences as the merits of client/server technology versus thin client, or Betamax versus VHS (lol). With the emergence of more and more technologies and Web 2.0, we are again beginning to witness the thin/rich client gloves come off. This for me is quite ironic, as the web thin client was the reason for killing off the high deployment cost of client/server systems, which themselves were created to offset the performance issues of software systems and distribute the processing load.

That said, cost is now measured in bandwidth and reach rather than hardware and employees required to support the system.

I personally believe that these architecture choices should be down to the type of application you’re creating and its accessibility and user requirements. Also, this is the same thinking behind why you would choose a given development tool and at which level of abstraction you wish to develop the application. Another interesting topic involved with the 3GL v 4GL debate is that many of these tools are capable of producing code for multiple platforms i.e. IBM Power System (RPG), Windows (C of one variant or another) as well as Java which is capable of being deployed on multiple platforms.

Java claims a write-it-once, deploy-it-many-times approach. I would say that it should be rephrased as write it once and then tune it for each platform, JVM or application server of your choice. Now, I make no bones that I am an advocate of the 4GL (especially CA Plex or CA 2E) over the 3GL for the applications I have written over the years. Most 4GLs cater for RDBMS systems and are best suited to these types of environments, i.e. banking systems etc. Other 4GLs exist, including tools for writing computer games, and once again these are designed to protect the developer from the underlying complexities of the code. With these engines you do not need to understand the ins and outs of the DirectX or DirectDraw APIs or the language that is generated. But your decision to use one of these tools must be twofold.

1. It must be appropriate for the type of application you are creating.
2. Once you have chosen the 4GL you must stick to it and use it properly.

There are many tools out there that claim they can generate code into multiple languages, and these tools in my opinion are great for ISVs that need to have an offering across multiple platforms to negate the hard sell of one technology over another. After all, shouldn’t your marketing and sales teams be selling the values and merits of your software’s function and feature set rather than justifying your company’s technology decisions?

Part II will discuss the many pro’s and con’s of the 3GL and 4GL languages and tools.

Thursday, April 17, 2008

Always wipe your bum!

“What comes around goes around” is a phrase commonly used when preaching to others about ethical behaviour or by those that believe that there is a levelling force out there that cares enough to ensure that things work out evenly in the end. Other phrases like “You are what you eat”, “You will reap what you sow” or “Do unto others as you would have them do unto you” are also symbolic of phrases embracing karma.

I am a keen believer that as a role model (Manager/Leader) in team management you need to practise what you preach. During the team management phases of my career, my style has generally been a hands-on approach. This enables me to utilise my technical and leadership qualities on a daily basis from within the bosom of the team. I have never sought my own luxury office or other status symbol as an indication of my position. My positioning amongst the team means I am always available to talk through ideas or issues, and I most certainly will be there to encourage, assist and develop the team’s skills. If I ever needed privacy I could always track down a meeting room, shelter in the local café or work from home for an afternoon.

The purpose of this article is not to discuss the merits of positioning yourself as a manager or a leader within your development team, nor is it to debate the benefits of the hands-on versus hands-off management and leadership philosophies. Each of these items is environment-specific and so in-depth that it is best served by full discussion in a future article.

I want to discuss the aspects of being a role model for your team and how your behaviour affects others around you.

I have worked with many different people over the years all with interesting quirks and features and every single one of them has in some way or another left their mark on me, not physically but by influencing my views as a ‘software development professional’ and helping me cast my expectations of the working community in general.

Some I speak of as visionaries and ahead of the curve, I value many others as trusted colleagues whose integrity has shaped my beliefs of honesty and transparency, there are the characters who make you laugh/cry or cringe, even before they speak. Then there are the odd four or five that if I were to write down my true opinion would land me in court fighting a defamation hearing, the blog would be censored as the article degenerates with unprintable language that even Kevin ‘Bloody’ Wilson would find objectionable.

Many managers fail to understand that you are judged on more than just your innovations or effectiveness, and you can guarantee, more than death and taxes, that your staff and colleagues will eloquently appraise you behind your back, and if you are lucky, to your face also. Managing upwards or sideways is only half the issue, and this is where your political skills shine if you are that way inclined. Having a team that is 100% focused behind you is the harder half of the equation to implement successfully, and it is this half that is often overlooked by a manager on the path to change glory.

To put this into context I once read a couple of short quotes that I believe summarises the management challenge quite succinctly.

“Bulls**it can get you to the top of the corporate ladder, but it’s not good enough to keep you there”.

“When a monkey at the top of the tree looks down they see smiling faces. When you are below and look up, you only see a**eholes.”


As a manager you will be remembered for what you do wrong or badly as much as for what you do well. Actually, a sack full of positive memories can often be overshadowed by one or two bad decisions, whether misjudged or deliberately and deviously thought out. Whether this perception is fair is open for debate.

On a personal level I owe as much to the four or five people I’d rather not mention (All ex colleagues) as I do to those that have provided the motivation and broadened my thinking.

Quite often I have seen people behave in a manner that inspires me to make that mental note of “I wouldn’t do it that way” or “When I am in that position I wouldn’t do that”.
  • Ever had a manager who bullies staff or chastises staff in front of others?
  • Ever had a manager that values process and technology over the people aspects of running a team?
  • Ever had a manager who seeks opinion but never listens and ignores all input?
  • Ever had a manager who promises a review and then waited months or years for it to materialise?
  • Ever had a manager breeze through a company wreaking change havoc, only to move on without seeing the job through?

Many of these are management lessons on page one of the manual and combine communication and basic human needs. Anyone who has ever taken the time to read material related to Maslow's triangle will understand my point here. I have seen all of these incidents above over the years with varying results, and once again the negative memories override any goodwill previously earned.

Last week I witnessed another of those moments (albeit small) when a direct line manager at my firm failed to stand up and be counted during a leaving speech of a long serving colleague who now reported to them. I was aware of a few differences in opinion between the two people that led to the resignation in the first place, but I felt that this could have been a time of reconciliation.

So whilst most were expecting the usual speech from the line manager I was shocked to see the manager hiding in the wings, quite literally, and instead it was left for other managers to make the ‘Sorry to see you leave speech’. The employee did their speech afterwards and kept it civil and in my view edged the overall contest on points.

I will try and look for the positives out of all this even though I was disappointed enough to blog this today. I am not saying that as a manager or leader you have to be whiter than white. There are occasions when you have to make decisions people won't like. In other situations in sports management I might suggest that pleasantries are not high on the agenda. But I am saying that it is important to consider every factor of your role and day. It is often the little things that undo a manager.

So, another mark has been etched into my mind and I have learnt that it is more important to front up rather than avoid those awkward moments. After all, the negatives may build up and may invoke a re-greasing of the corporate ladder.

So, if you believe in karma please remember, you never know who is sitting on the rung just above or below you or whether they harbour plans to move ahead, so always wipe your bum.

Thanks for reading.
Lee.

Thursday, April 10, 2008

"It's a funny old game."

This is a phrase that was immortalised many years ago in the educated soccer commentator's punditry. To this day it is associated with the footballing legend Jimmy Greaves. He, however, denies ever having muttered these hallowed words, but I clearly recollect him talking to Ian St John on the Saint and Greavsie Show on numerous occasions. But if the man himself denies it then I guess I must be mistaken.

The ‘Saint and Greavsie show’ was a Saturday morning football preview show with a combination of video highlights from the previous week's games, interviews, opinion and a review of the upcoming games on the Saturday. With the advent of Sky and the commercialisation of the beautiful game, this show would now be a review of the upcoming games on the Saturday, Sunday and Monday.

I have been working in the software development business for far too long. My roles have ranged from day to day software developer, those with project and team management responsibilities to my current position of technology advocate for the CA 2E and CA Plex toolsets, specialising in enablement and best user practices in enterprise sized software development environments.

I often use analogies referring back to football to simplify describing issues or ideas to members of my teams, in fact, I find it rather amusing to compare football management to software development practices and the careful balancing act of creating high productivity software development teams.

My thoughts thus far are that the art of football management and that of software development team management have numerous parallels. I would historically refer to the construction industry or the car manufacturing industry to derive a comparison for the creation of the software product itself. But, when shaping your team, it is quite clear that you require a myriad of skills, approaches, characters, opinions, egos and attitudes, to name just a few of the attributes required to form a modern-day software development team.

You will need to give careful consideration to your preferred team formation, management and coaching staff, youth development plans, team captaincy selection, picking the players for a particular project, substitutions, dealing with injuries to the squad, career mentoring as well as dabbling in the transfer market. The prospective number of parallels appears almost infinite.

So where do you possibly start?

I guess you have to look at yourself (The Manager) and decide on your style and approach. Are you going to be a hands-on tree hugger or a hard-nosed disciplinarian? i.e. a Steve McClaren or a Fabio Capello!

You then need to employ your trusted backroom staff (coaches and medical team). It is highly possible that as a manager you were also a former player and you probably still retain most of the skills required to perform many of the roles within your team, but be warned, if you do find yourself doing rather than managing then this is a sign that you have the incorrect balance in your team.

A manager that believes he can do every role within the team, and often gets sucked into the detailed coding on the team's projects, is guaranteed to be holding back the team come match day. He needs to empower his team with clear instructions and tactics in order to navigate the perils of developing quality software systems. His role should be to conduct the performance of the team from the broader viewpoint on the sidelines.

Your coaches are your technology evangelists. Their role is to ensure that the team fully understands industry best practices for your technology implementation and they are responsible for the day-to-day training and fitness of the players. These guys educate the players and control items like development standards and peer review processes. They play a pivotal role within the team to provide feedback about a player’s progress and readiness. The medical team are your DBAs: they ensure that your players are in peak physical condition and provide ideas for improving the performance and integrity of the team and products.

With the manager and backroom staff in situ along with the assumption that you have finalised team tactics and on field communication strategies, it is now time to concentrate on the squad.

The types of players that you have are critically important. It is imperative that the right mixture of roles and personalities is employed. It is no good having an entire squad made up of day coders, or similarly stocking the squad entirely with super-coders and architecture astronauts. (Thanks Joel for that gem)

Great football teams have a mixture of leaders, defenders and all-rounders, as well as specialist roles like striker or winger. These roles will have varying objectives and performance targets and are likely to be rewarded with differing pay levels. Generally, a striker will command a higher salary than a goalkeeper or a defender; they will also be motivated with bonuses linked to the number of goals that they score during a season.

Age is also important. Consider a team of 17 year old apprentices playing against a group of wily old professionals with all the life experience caps that they have attained. Balance is a key element of this article, and age, along with the relevant fitness, naivety, passion, rawness and nerve, is equally as important as any other attribute that makes a strong team. Historically, and with the only exception in soccer being the Busby Babes, successful sporting teams in general have had a mixture of ages.

All members of the team will have their objectives set at the start of the season and reiterated before each game. In fact, the Post Implementation Review after the game will significantly affect the manager's decision making for future games.

The yearly budget you have available will determine the number of star players that you can afford to employ. Just like when creating a fantasy football team online you will need to ensure that your team performs as a whole and doesn’t rely on a few heroes to score you those all important fantasy points. It is the same for a software development team. You can’t rely on a couple of heroes to do all the hard graft whilst the rest of the team sits back and watches. It is proven that heroes do not scale.

The type of project is also an important factor. The race for the league title could be considered a release of the software. Meticulous planning is paramount and this generally represents the highest team priority for the season. A cup competition could be described as a PTF or service pack, and generally has less lead time; the items of work are more random in scope and type. Emergency patches, I guess, would be extra time in a knockout match that has been tied after the first ninety minutes, or the all-important penalty shootout that the English always seem to lose.

Now that you have assembled your squad and understand the scope of your requirements you need to consider the team formation. Do you line up for a project with an attacking formation or a defensive approach? You can certainly draw from experience with similar projects and previous games against the same opposition. Do you go for an industry standard 4-4-2 formation or an attacking 4-3-3, with more focus on scoring goals at the risk that you may concede more?

So what roles do your players perform for your development team? Do you have a team of permanent players, or do you have some on loan (contractors)? You then have formation and player positional issues to consider.

Your goalkeeper is your gatekeeper. Their sole focus is to ensure that no errors make it into the production software. They perform the system regression testing.

Your defenders are your process converts and quality conscious developers; their stalwart approach ensures your projects have fewer bugs. These players traditionally lack the flair, innovation and technical ability to complete the highly intricate activities, so should be avoided for high pressure or groundbreaking assignments. They are, however, tenacious and determined and just as important to the overall performance of the team as any other team member. The defenders love solving configuration issues and enjoy debugging other developers' code. They are advocates of unit testing processes and even talk to the testing team. There are of course exceptions to this stereotype, especially in the modern game where the technical ability of players has improved from the days of Chopper Harris.

Midfielders are much harder to quantify. They are generally the fitter members of your team and have the ability to perform many roles throughout the team. Some are specialist defensive-minded players who protect the defenders with an extra level of security. They enjoy performing peer reviews.

Every team needs a playmaker. This is the person who enjoys having extra time on the ball and loves playing that killer pass to open up the projects defence. They are dead ball specialists and keen reference book readers.

Your wide players have speedy boots and code at a frenetic pace. They can sometimes trip up and get caught out of position but the times when they do get beyond the defence to create opportunities for the strikers can be crucial for a project that is running behind schedule. The amount of running they do during a project often ensures that they need to be substituted during a game.

The striker’s job is to produce the goals. They like to code all the sexy aspects of the deliverable. They tend to prefer GUI development to batch processing and they definitely pay lip service to the art of unit testing. They are generally calmer under pressure and have sublime belief in their own abilities which can lead to a sense of laziness as they tackle most projects with aplomb.

You also need to consider the preferred kicking foot, or development skill. There are positions like wing back or winger where delivery is paramount. Having a right-footer playing on the left or a left-footer on the right has both good and not so good points. Again, in the modern game this is being phased out by two-footed players who have been nurtured since birth. But if you do have a one-trick pony in your team then you need to consider their involvement carefully.

Getting the balance of your team right is important. Too many strikers and you will fail with every project as no-one wants to do the grunt work. Too many defenders and your project timelines will consistently slip. Every team needs to be carefully balanced, coached and briefed on the preferred ways of working.

It is a shame that so many managers just don’t understand the dynamics of a highly productive software development team. Perhaps we need to ensure that software development managers obtain their coaching badges and have performed at a professional level before progressing into the management arena, after all........

"It’s a funny old game."

Thanks for reading.
Lee.

Thursday, April 3, 2008

Damn Marketing Rebranding Machines!!!!

UPDATED to cater for the latest rename of Plex from CA Plex to CA Plex PRIME

https://communities.ca.com/thread/241697286


So IBM has announced another change to the name for one of my favourite computing platforms.

The new name ‘IBM Power System’ replaces the name of ‘System i’. I must admit I hadn’t really come to terms with the last rename and more often than not used the term ‘iSeries’ or ‘AS/400’. If I am being totally honest, I actually interchange all of these terms so frequently in both written and oral formats that I have to constantly remember my audience as well as remind myself.

I grew up knowing the platform as the ‘AS/400’. An extremely powerful, reliable and scalable midrange system. It wasn’t known as a server in those days, more an integrated bespoke environment and all the applications ran natively.

Now things have moved on quite a bit. The announcement for the rename is actually more than just a re-branding exercise. It is not a shallow attempt from a ‘change hungry’ marketing team to try and impress a new boss or make an impact in a global IT organisation.

The technology has moved on significantly as well.

Two hardware platforms have been consolidated which must be good for me, the consumer. The ‘System i’ and the ‘System p’ now both ship as the ‘IBM Power System’. You then have the choice of installing one or more operating systems on system partitions. So this announcement for the industry is quite significant for the midrange marketplace.

My main moan point about this change is why companies constantly consider re-branding. In my mind it doesn’t make sense. I doubt they actually consider the effects of their airhead moments after 3 zillion triple espressos. Especially the impact on those outside of their organisational walls.

In my opinion, this is change for change's sake and I have seen plenty of that over the years.

This is particularly true when people join an organisation and immediately set about changing it. They do it without considering why it is architected that way. Very rarely do organisations or products require a revolution rather than applied evolution.

Yet, I have witnessed the revolutionists hitting the same problems the evolutionists had already resolved. If only these revolutionists had engaged the incumbents long enough to determine what needed fixing then value could have been added somewhere along the merry path of so called, 'change glory'.

Let's take a look at the soccer scenario where clubs change their managers too frequently whilst chasing success. Clubs that frequently change their managers, approach and tactics generally underperform, over a period of time, those with established managers and an evolutionary mindset. Consider a Manchester United or an Arsenal approach for further proof. The exceptions are the one-season wonders and rich clubs like Chelski. How many IT companies out there can afford that level of investment before seeing a return?

So I ask, did these marketing executives ponder the impact of the change?

I guess they would be aware of the cost internally. After all, this is at least the fifth change that I am aware of, so the reprinting of the user guides and help text, and the updating of other applications to reference the new name (I hope this was soft-coded somehow), are a more or less constant exercise. I am assuming that each group within IBM was advised of the change so that all other aspects of the business, i.e. services, pre-sales, technical support, training, internal systems and accounts etc., are fully conversant with the new brand.

I am also assuming that IBM's strategic and local partners are aware of the change and that they have change plans in place to ensure that their own literature, staff and services are realigned to the IBM 'espresso executives' vision.

But, of course, it doesn’t stop there!!!

What about all those companies with ‘System i’ etc in their company names? What about all those now outdated links on websites? What about all those cyber squatters and phishing sites that need to seek reinvestment capital? Those poor recruitment consultants who have another buzzword to look out for.

One thing is for sure. Google/Yahoo/Microsoft and other web search engine robots won’t know or care about the platform evolution of the ‘IBM Power System’. So I now have to remember to search under many name banners to get the correct information.

How many millions of business cards, job descriptions, organisation charts and email signatures need to be updated around the world? What about all those periodicals that target the platform? All those outdated and now devalued books on http://www.amazon.com/ that plug the power of the 'System i'. Ooops, 'IBM Power System'.

This list is likely to be significant if I had time to ponder for longer. But, there is also and most importantly of course, the impact on me me me me me. Call it selfish, self-centered or paranoid, but...........

I used to say that I specialised in 'AS/400', 'iSeries', 'i5' and 'System i' software development. I am going to have to append 'IBM Power System' to this list. I won't even begin to comment on the names of the operating system, whose naming journey has been equally diverse. Now they call the operating system 'IBM i'. "Yeah right!!!!".

Most begrudgingly, I now have to go and update my curriculum vitae remembering to be aware that not everyone who may read it in the future will be aware of the recent or previous changes.

My CV will now read something like:

Specialist in ‘CA 2E’ (formerly known as ‘Allfusion 2E’, ‘Advantage 2E’, ‘Jasmine 2E’, ‘Cool:2E’ and ‘Synon/2E’), a 4GL code generator for the ‘IBM Power System’ (formerly known as ‘System i’, ‘i5’, ‘iSeries’ and ‘AS/400’), and specialist in CA Plex PRIME (formerly known as CA Plex, Allfusion Plex, Advantage Plex, Cool:Plex and Obsydian).

The irony is that although the system has been re-branded, and many of the tools that I use have also been re-branded, they are more often than not referred to by their original names.

Just ask Symbol, the artist formerly known as Prince.

Thanks for reading.
Lee.

Thursday, March 27, 2008

Thursday, March 20, 2008

D.I.Y and Project Management fusion

Whilst most people I know are off on holidays this weekend (Easter), I have the unenviable pleasure of decorating my house. Like most people I have been doing this for what appears like eternity. I wouldn't say I am a DIY addict, but I have completed my fair share of decorating rooms over the years.

So this weekend, over the 4 days, I have to decorate our hall, landing and stairs, covering the ceiling, walls, woodwork and doors; fit new door handles; hang pictures; then prepare a bedroom and decorate it ready for the new carpet that is being laid on Friday week.

Now, I actually quite enjoy decorating, and once this sprint is complete I will have conquered the majority of the house. The people before us clearly never bothered with general house maintenance and as such we have had a few issues, but I am pleased to say that it will soon look stunning and be a joy to live in.

The reason for the rush is that we have guests coming from overseas. I say overseas, I should say our homeland. We emigrated a few years ago and are lucky enough to have regular visitors from home. The only real trouble is that due to the regularity of visits people don’t notice progress. Especially those unpainted walls or the lack of carpet in such-and-such an area etc.

I call it progress as I know the amount of effort that is required to make a room look great. I could have easily overpainted the old walls and had a reasonable finish. But, I am an IT guy and I notice these holes in the walls, the creases in the wallpaper above the door and window corners. I notice the way the light reflects shadows if the plastering is uneven and a light is on in the other room. I notice those blemishes on the wall that will be covered by a picture. Even though these blemishes are covered I know that underneath they are still going to be there.

Perhaps, just a little, I am too much of a perfectionist when it comes to decorating, but I justify that due to my software development background. I can't craft code or applications with a bad user interface. Sometimes, I need to get under the covers of the code and reorganise and repair previous faults and issues. I wish that the previous owners of the house had invested a little time in their maintenance strategy!!!!!.

As I find myself re-engineering virtually every aspect of every room I can't help but wonder why those lazy sods did nothing.


Money could have been a factor, as could apathy, but just like with computer systems, a little bit of routine maintenance is much better than a re-architecting or re-building project.

Of the houses I have owned and renovated over the years two have stood out as being maintenance nightmares. After analysing the small amount of data I have available my only logical conclusion is to never buy a house from a couple whose surname starts with ‘T’.

The Tibbett’s and the Tankard’s. You know who you are!!!!!!!!!!!!!!!.

I have to plan to do some things in the most efficient order. I need to do detailed preparation for some areas, have to demonstrate my good time management skills, ensure key items are performed as per the critical path, and most importantly, I need to escalate any slippage in the project to the project manager ASAP. In this case, my wifelet.

There is also the added pressure in that some of the tasks need to be performed out of standard business hours. This is to avoid kiddies' fingers touching freshly painted surfaces and to minimise the odour of the paint fumes permeating throughout the house. So Saturday night's glossing will commence from 7pm until the small hours. If it is anything like before (another house) then I will see daylight before I see the bottom of the paint can.

Actually that reminds me. I do need to remember to check the paint levels, application tools (Brushes), removal and cleaning tools (Sandpaper and Turpentine) before I start.

This is a pre-commencement artefacts scan. There is nothing worse than getting dressed up (old clothes) ready for the painting effort, only to realise that there is only a fraction of the paint required to do the job. Then you have a decision to make. Do I drive to the DIY store wearing these old paint-ridden clothes, or do I change into something more practical for the purpose?

I should be OK with resources, i.e. me. Anyhow, adding additional resources to a project at this late stage tends to make it later. And with the dependencies for some of the tasks, adding additional resources now won’t help. Some things just need to be done in a linear fashion.

I remember an ex colleague of mine from years gone by called Yuriy. He was a wonderfully intelligent software technician, he had his quirks and an abundance of quality phrases. One that stood out in particular was “Lee, it takes nine months to make a baby, you cannot add nine women to the project to get it done in a month”.

Now Yuriy is quite right with this statement, although I guess if you do add nine women to the project then you have a higher probability of creating that baby and much more fun during the project initiation phase.

So touch wood, I should be ok this weekend. The resultant smile from the wifelet, the sense of personal satisfaction and the thought of those visitors saying. “Wow!, well done Lee, this looks nice………” should make it all worth while.

This most certainly seems like project management to me, and apart from the deliverables (decorating) and a lack of written ‘signed-off’ requirements ("Just get it painted"), this could be one of a hundred projects I have completed over the years.


So, always plan your projects, do your analysis and seek approval before you commence. My background in software development and management should come in handy even if it does feel like a busman’s holiday.

Happy Easter.

Thanks for reading.
Lee.

Tuesday, March 18, 2008

The new millennium Bug?

There are only 17576 combinations that can be considered when allocating a TLA (Three Letter Acronym) for airport codes. Part of the challenge is that the code should also be meaningful and identifiable, for instance, everyone knows that London Heathrow is LHR and that Berlin in Germany is BER.

If you don't believe me take a look at this site http://www.world-airport-codes.com/.

After a while some of the codes appear confusing. Hwanga in Zimbabwe has the seemingly obvious code of WKI. I assume this is pronounced Wiki.

This may be of interest to some of the IT geeks reading this, assuming of course that the introduction of Google’s Knol has obliterated (or will obliterate) the Wiki concept. I can never work out why open source stuff like this "Wiki" is so damn difficult to maintain. I guarantee that Google or Microsoft will make this easy for the Joe Bloggs general public to use. I can personally hear the death knell for Wiki already, largely IMHO its own fault for keeping it geeky and for the myriad of different syntax styles that are available.

Anyhow, back to airports. With over 9000 airports registered in the database to-date and our insatiable appetite to travel around the world, it is likely that more and more airports are going to be built, each requiring yet another unique meaningful code.

Presently, these codes do not include numeric characters so the basic math tells me that there are 26x26x26=17576 combinations available. This is stated with the assumption that unlike car license plates, we do use every letter available in the alphabet.

So what is going to happen come the day when we have used up all these codes? We could begin to use numeric characters; however, the numbers 0, 1, 2, 3, 5 and 7 are unavailable due to their similarities with O, I, Z, M (sideways), S and L. Also, unless we have taken a big step into the future, a code like KN9 really sounds like it should remain in a novel by Arthur C Clarke rather than a domestic airport in deepest Taiwan.
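For the arithmetically minded, the back-of-the-envelope numbers are easy to check. A minimal sketch, assuming my reading above that only the digits 4, 6, 8 and 9 would survive the confusability cull:

```python
from string import ascii_uppercase, digits

# Today: three positions, 26 letters each.
three_letter_codes = len(ascii_uppercase) ** 3
print(three_letter_codes)  # 26^3 = 17576

# If digits joined the alphabet, dropping the confusable
# 0, 1, 2, 3, 5 and 7 leaves just four usable digits.
usable_digits = [d for d in digits if d not in "012357"]  # ['4', '6', '8', '9']
mixed_codes = (len(ascii_uppercase) + len(usable_digits)) ** 3
print(mixed_codes)  # 30^3 = 27000

# The alternative: stretch the code to four letters.
four_letter_codes = len(ascii_uppercase) ** 4
print(four_letter_codes)  # 26^4 = 456976
```

In other words, admitting the four 'safe' digits buys fewer than 10,000 extra codes, whereas a fourth character multiplies the space 26-fold, which is exactly why the far more painful longer code is the more plausible long-term fix.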

That said, there is more than one way to skin this cat.

We could be tempted to extend the size of the code from, say, 3 characters to 4, or perhaps more. However, this would require a huge amount of effort to synchronise all the airline ticketing systems around the world, not to mention:
  • Online and published guides.
  • Signage (i.e. Welcome to LAX).
  • All those travel agents who for years have remembered these codes.
  • All those flight anoraks who have travelled to every airport known to humankind.
  • The humble fan website and all those pub quiz questions that have been written and are now negated.
All this hassle because someone decided to save a byte or two when naming the airports in order to save, at the time, valuable disk space. The irony being that this is the same disk space that the likes of Google and Yahoo are giving you gigabytes of just to sign up for an online email account.

It doesn't stop there though, what about the issued tickets that are already in the public domain. The transition period for change over would be huge (up to a year). So now we have to include all those check-in staff and the baggage handlers who now have to remember two codes for every airport into the debate.

I would suggest that the majority of those 9,000 airports have been created in the last 50 years. I find it quite daunting that we might experience the aviation equivalent of the millennium bug. This may not be that far off, and once the developing nations reach full steam ahead with their exponential economic growth, you may well find yourself employed in the future to sort out the code written by those legacy developers.

Those same developers who didn't have the foresight to cater for tomorrow’s usage.

When we think about it, this has happened before. It was 20 years or so ago when it was concluded that 640kb of RAM was more than enough for any computing requirements in the home PC.

And those guys from the 70's that designed these airline systems have a lot to answer for. Not only did they earn good money back then with job security (outsourcing wasn't invented or trendy then). They now get rewarded for coming back in and fixing up their issues many years later.

So get travelling now. There might be some downtime in this industry and remember, someone has to pay for all this development. I pray to god (actually I don't, as I am an atheist) that you are using a 4GL like 2E or Plex to maintain this code. If you are using a 3GL you might have quite a lot of impact analysis to perform first.

Remember, you need to be extra cautious with your design and field domain management and regardless of what people tell you they want, look into the future and get it right first time.

Watch this space. You heard it here first.

Thanks for reading.
Lee.

Monday, March 17, 2008

What do you do for a living?

This has to be one of the most common questions asked of anyone in life. Apart from "How are you?", "Can I buy you a drink?" or, cringingly, "Do you come here often?". Well, this isn't an article about chat-up lines or dating gotchas. I am long past all of that.

However, many people can simply reply “I am a plumber” or “Nah, I’m a sparky geezer!” (Electrician), or perhaps they might say "I have my own business selling cars" or "I work for a bank doing banking stuff". The point here is that no matter what they do, their audience will immediately be able to understand what they do and if they need their help or services, they can simply ask.

For the average IT geek, this is always a tricky and preferably avoidable question. We tend to shy away from disclosing our job because we are concerned about the impact of this little snippet of knowledge in the heads of a non IT savvy person.

There is a common phrase in IT that goes something like, 'A little bit of knowledge is a dangerous thing'. Actually, I guess this is true, in general. DIY being a good example.

As IT professionals we tend to try and answer this question ambiguously.

Mainly because we think that what we do is so very specialist and complicated; we also make allowances for the questioner as we believe that they will switch off. We have a primeval fear that we will never manage to finish communicating the fluffy, pinky, greeny, codey stuff about why we love our job.

On this note, I do appreciate that in all professions there are general conversations and then there are the technical jargon and insider acronym riddled low level conversations.

As IT professionals we have invented more TLA's (Three Letter Acronyms) than any other profession, possibly with the exception of airport abbreviation naming committees.

Anyhow, a typical answer would be “Urrrrm, Computers”.

“Arghh, Right!!!” comes the reply, quickly followed by “Can you take a look at my computer?”.

And this is it, the single biggest fear of an IT professional. Your job might be that of a patterns and framework designer for J2EE or you may be a Mainframe performance specialist, but rest assured the simple mention that you work with “Computers” means that you are now their personal technical support helpdesk, for life........

Now, by contrast, our plumber and electrician are both in the home building or renovation trades, but, you never hear me asking them if they can do some plasterboard stopping, tile my roof or fit double glazing.

I guess that over time the general levels of understanding of the different roles within IT will improve. However, until this day has arrived I have learnt the hard way to always reply in a precise and exact manner.

"I specialise in software application modernisation, building and shaping high productivity development teams to meet the demands of developing enterprise business applications. I also provide bespoke consulting and training services and expertise in utilising multi-platform 4GL code generation tools.”

Now, for all but the most technical people out there, I tend to get that ‘lights out’ glare about halfway through that sentence, but, on the plus side, I also no longer get those requests for on the spot computer repairs.

Thanks for reading.
Lee.