Monday, June 30, 2008

Knowledge capture & use in technical support communities - Part 2

In Part 1 I discussed the problems facing the technical support team with overworked experts and a need to transfer their knowledge as efficiently as possible.

In Part 2 I will discuss how to successfully capture and store this knowledge in an efficient and, above all, useful way. I'll lead off with a brief overlap from last time as a reminder of where we got to.

The 'Virtual Expert'

From what has been discussed so far, it is clear that expert knowledge is required, but that tying up the expert in this process is seen as unproductive in the current climate. We cannot get away from requiring time from the expert, but we can minimise this time and capitalise on it by recording the knowledge in the right way.

The answer lies in recording the expert knowledge (on paper or, more usefully, electronically - see later) in such a way that it is as close as possible to the over-the-shoulder commentary.

There is often still a need to use numbered steps when accomplishing a task. Such steps provide structure and sequence and help with mental tracking when performing the task. There is no reason, however, why each step cannot contain more than simple 'input, output' or 'action, reaction' type information.

For maximum benefit, each step should be written in conversational language and explain what the user is doing, why they are doing it, what the expected outcome should be and at least make reference to any unusual, but known, variations.

Furthermore, before any of the steps, there should be an introductory section which describes why the user would perform the task, what prerequisites there may be, and definitions of terms, systems and the like. After the final step, make mention of any further tasks that may be a logical progression from the task described, but which do not form part of this process.
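To make this concrete, here is the sort of step I have in mind (a made-up example; the job and message names are invented):

Step 3. Submit the nightly extract by running the NIGHTEXT job from the scheduler. We run this only after the backups (steps 1 and 2) have finished, because the extract locks the order files. Within five minutes you should see the message 'NIGHTEXT completed normally' in the job log. At month end the job can take up to thirty minutes; this is normal and is simply down to the larger volumes.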

Don't take anything for granted

Whilst we are talking about capturing expert knowledge, it is important not to lose sight of the basics. Any documentation is devalued if it makes too many assumptions. In creating a documentation repository, an audience level should be decided - such as 'technically competent', or 'beginner' - and all documents should be written for that lowest common denominator. It is easier for a more expert user to skim over known material than it can be for a new person to work out the undocumented basics.

It is important to include examples in the documentation. Where possible, have the example show the most common scenario, as it is most likely that staff new to the task will use the examples. It is also worth giving additional examples if there are significant variations in a step. Providing examples helps the user to get closer to the over-the-shoulder situation.

The ultimate test for the documentation is to give the process to a person who is at this 'lowest level' and have them perform the task. You will be surprised by some of the information you have taken for granted in your early drafts. I know I was.

Structuring the documentation

For ease of maintenance, it is important to only ever store a piece of information in one place. To help achieve this structure, it is useful to allow for two document types in the repository - reference documents and process documents.

Process documents contain steps describing how to perform a task. Reference documents contain (mostly) static information that supports one or more processes.

It is often necessary to refer to tables of information (such as a list of files, describing their usage) from more than one process document. By separating this type of information into a reference document, it can be referred to by multiple process documents without increasing the maintenance burden through multiple copies. Additionally, when the table requires maintenance, it is easier to locate (residing under its own title) and the maintenance can be performed without danger of corrupting the process documents. When properly structured, maintenance of the reference document can be accomplished without knowledge of the referring process documents.
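If it helps to see the shape of this, here is a minimal sketch in Java (purely illustrative; the class names and sample data are my own invention) of why a reference beats a copy:

    import java.util.*;

    // A reference document holds (mostly) static supporting data in exactly one place.
    class ReferenceDocument {
        final String title;
        final Map<String, String> table = new LinkedHashMap<>(); // e.g. file name -> usage
        ReferenceDocument(String title) { this.title = title; }
    }

    // A process document never copies the table; it simply points at it.
    class ProcessDocument {
        final String title;
        final List<ReferenceDocument> references = new ArrayList<>();
        ProcessDocument(String title) { this.title = title; }
    }

    public class Repository {
        public static void main(String[] args) {
            ReferenceDocument batchFiles = new ReferenceDocument("Nightly batch files");
            batchFiles.table.put("ORDERS.DAT", "Input orders, replaced each night");

            ProcessDocument run = new ProcessDocument("Run the nightly batch");
            ProcessDocument restore = new ProcessDocument("Restore a failed batch");
            run.references.add(batchFiles);
            restore.references.add(batchFiles);

            // Correct the table once and both processes see the fix, because each
            // holds a reference to the single occurrence rather than a pasted copy.
            batchFiles.table.put("ORDERS.DAT", "Input orders, appended to during the day");
        }
    }

Had each process document pasted its own copy of that table, the two would already have drifted apart after the first correction.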

Whilst reference documents tend to represent pure data, it is still important to keep the conversational language in mind. There may be naming conventions or other conventions which are being followed for the data and it is important to note this in the reference document to complete the picture for the user of the information, and equally importantly for the maintainer.

It is also beneficial to factor out sub-processes into separate process documents and refer to them from the major process documents. This is of value where a sub-process is part of more than one major process.

The major benefit of having information in only one place is realised when errors are amended or updates are applied. These have to be done in only one location and all related processes are automatically catered for, as they simply refer to this single occurrence.

Capturing is understanding

The process of capturing information is time-consuming and is best not left to the individual experts. Remember that they don't have that much time. Also, too many authors can devalue the repository through differing styles and levels of language.

The best solution to this is to have a single person (or possibly two or three) to build the documentation repository. This co-ordinator is then responsible for collation, setting style and keeping the language consistent. This person should not be expected to author all of the documents, but must be able to understand at least the broad concepts involved in order to ensure that appropriate structure is followed.

Each expert should be expected to provide a draft of the process or reference data, in a form approaching the final requirement. In some cases, where the co-ordinator's knowledge is good enough, they may author the document, but it should always be checked by the relevant expert.

Electronic storage for fast access

Following the structuring process above introduces one significant disadvantage in a paper-based documentation repository. Frequent referencing of other documents forces the reader to flip pages or arrange multiple documents on the desk in order to complete a single process.

In Part 3, I will discuss some real world solutions to electronically storing, maintaining and delivering the captured knowledge.

Thanks for reading.
Allister.

Sunday, June 22, 2008

The "Wooo! moment" Factor

This is another one of my general life blogs and follows up from my recent article about what makes a good programmer where I refer to a ‘Rocky Balboa’ moment that encompasses everything about being a great computer programmer.

I would like to clarify that this post has nothing to do with the incessant number of reality talent shows and is not in any way linked to, or endorsed by, those in the entertainment industry.

However, I recently went to the Smackdown/ECW world tour event here in Auckland, New Zealand.

Like most of the other hardcore WWE fans, I registered for the internet presale, logging in a minute or two before midday and continually pressing the refresh (F5) button in Internet Explorer until the ticket selection page popped up. A quick combo-box click later, an anxious wait ensued whilst the system allocated my tickets, and then the “Wooo! moment” came.

I knew it was a full-on “Wooo! moment” as it did turn the heads of a few people in my office.

The reason was that I was lucky enough to get front row tickets, seats 1 and 2, which were not only the closest you can get to the ring but also the closest you can get to the entrance ramps where the wrestlers strut their stuff as part of the pre-bout entertainment. The arena held thousands and thousands of people and to get the two best tickets in the house was reason enough for me to celebrate. Actually, to celebrate for my daughter, as she is the wrestling fan. I just have transient knowledge.

For anybody that follows WWE wrestling, you will probably know a guy called Ric Flair. He is a 16-time world champion as well as being regarded as one of the all-time greats. He is also famous in WWE circles for his tag line “Wooo!”. So much so that at the last three shows we have been to, everyone was shouting “Wooo!” as the excitement began to build in the arena, even though Ric Flair was never there; he has been retired since March 2008.

Now, for me, life is about experiences and the memories thereafter. We all have good and bad times that shape us individually in some form or another. And one hopes that, over the balance of our lifetime, there are many more good times than bad; it is these good times that I affectionately refer to as the “Wooo! moments”.

Now, if we spend on average a minimum of between 50 and 70 hours travelling and working each week, I often ask myself why people put up with a job or a career that doesn’t provide them with “Wooo! moments”. I have been pretty lucky in this regard over the years. I am an analytical person and love computers. As a child at my local school jobs fair in 1983, I expressed that I wanted to be a computer programmer. After a few minutes searching the database (a list of jobs on paper in those days), I was asked if I wanted to do commerce, a funny choice at first until you realise that it was the closest alphabetically to computing.

How times have changed.

Now everyone wants to do computing, and whilst there are now more areas in which to become expert, I also believe that computing remains at risk of being dumbed down. I say this because many people are getting into computing because they see a higher-than-average salary trend, because they only see the exciting parts of the job glamorised by Hollywood films, or because they see it as easy.

For me, I got into computing because of the “Wooo! moments” and I continue to adore this line of work. But as I get older I also find myself enjoying the fact that others around me are having their own “Wooo! moments”. I'm a little like grandparents who enjoy watching those cute little bundles known as grandchildren.

But of course it doesn’t stop there.

You may have a job that only has one or two “Wooo! moments” in an entire career span. A recent example was the mission to detect life on Mars. Some of those guys had been working on that mission for 10 years or more. But what a “Wooo! moment” it must have been when that million dollar craft landed on Mars and started to do its stuff.

Wow!!!!

The screams of joy and relief I could hear just by watching the footage on a 21” TV were there for the world to see.

The “Wooo! moments” are what drive me to get up each day. So if you find yourself having fewer and fewer of these moments whilst at work...

Ask yourself why.

Work does dominate and validate many of our lives, so you might as well enjoy what you are doing. But please don't moan to me about your job. Do something about it.

Please, ensure that you do. Life is too short.
Thanks for reading.
Lee.

Saturday, June 14, 2008

The lunchtime effect and an insane piece of jobsworth logic.

The other day I had to visit the immigration department to renew my daughters’ residency visas.

This is a process that you have to repeat every time you renew your main passport because if you want to re-enter the country you will definitely find it useful to have this little slip indicating that you are a legal alien tucked up nicely in there somewhere.

The process is quite simple. You bring your old and new passports, fill out a form and pay the fee.

Simple!!!

A short time later (minutes or hours) you leave feeling robbed but also happy in the knowledge that you’re able to travel to and from your adopted homeland.

The key, as you all know, is to avoid the queue, or at least pick the day when the most counters are open. We were going to the hospital as my wife had an appointment for one of her ailments, so the time we had available between the school runs and the appointment was basically lunchtime.

Or to put it another way: rush hour. Usually when you have to go somewhere on a time constraint you will always pick the bad day. Well, for some reason it was empty on this occasion. We were through to the application triage officers within minutes. These guys check the forms and provide assistance before you get to a case officer.

This is obviously to avoid you waiting around only to find out that you have completed the wrong form or, worse still, used the wrong colour of pen.

At this stage I leant over the counter and enquired about the lack of visitors. You see, I have been to the immigration department before and joined a queue that left the building. To give you another indication, at the old building there was a portable café outside to serve food and drinks…

Apparently it was just a slow day. I was wondering whether this was a result of what I refer to as the lunchtime effect. I am sure I am not the only one out there who thinks like this, but maybe it is because I am an IT guy. Why was this room empty? Was it because others had assumed that lunchtime would be busy, and so avoided the ‘so called’ busy period when there are more people and fewer staff?

Actually, who cares? I got the visas sorted in record time, but I am grateful to all those who were considerate enough to think of the lunchtime effect.

But the jobsworth moment is certainly worth writing about, for one reason alone: I am adamant that the person who came up with this rule was not an IT guy, as there is no binary representation of what I witnessed. No IT guy in the world could have come up with an answer other than 0 or 1 (on or off). And this, to me, is 5.66645645 and fifteen sixteenths.

As we were applying for three visas, the cost was $100.00 per application. This makes sense, I guess. Until you hear the triage officer ask, “Are either of you two (wife and I) applying for the visa also?”

Our answer this time around was “No”, because our passports renew every 10 years and the kids' every 5. “Shame,” was the response. She then continued, “Because if one of you, i.e. a principal applicant, were applying as well, we could do this as one application with 3 dependants and you would only be charged a one-off fee of $100.00.”

So the logic is this: add the extra two applications and they will produce 5 visas instead of 3. They will key in details for 5 people and not 3, then print, remove and secure 5 visas in the passports and not 3. And they will do all of that for one price.

But because a principal applicant isn’t applying, they need to treat it as 3 separate applications!

If someone can shed any light on this I would be grateful. Until then, I am proud to call myself a Software Development Professional. I certainly wouldn’t want to explain the aforementioned rule for a living or associate my name with inventing this process.

Thanks for reading.
Lee.

Tuesday, June 10, 2008

Knowledge capture & use in technical support communities - Part 1

This three-part article is adapted from one I wrote almost 5 years ago when much of what you will read about was fresh in my mind. This adaptation addresses only the passage of time and some points of style and meaning for a wide audience.

Whilst software development is the subject of this blog, let us not forget those who (typically in large organisations) support the developers and others.

The nature of technical support communities

Technical communities come in many forms, be they design teams, development teams or support teams.

Whilst design and development teams are largely about the creation process, they still have many day-to-day activities which are defined and repeatable. Support teams, although fulfilling an entirely different role, often have to create on a very short-term basis. So it can be seen that the different types of teams have similar requirements.

However, the support team seems, most often, to be the one to get out of control. The difference is that the support team is always working on a short time frame. In addition, support teams often become involved in project work and this adds to the complexity of the day-to-day activities, as the time frames are shortened still more.

Most often, you will find that staff in a support team are very good at what they do - they have to be to survive. Unfortunately, the higher the skill of the staff, the more reliant you are on those staff to keep the systems running. It is a difficult and time-consuming option to bring 'green' members into the team.

How many support managers have not recognised that documentation is a key part to the support process? I would wager very few. Fewer still, I propose, have succeeded in completing the documentation requirements within their team and reaped the kinds of benefits they were expecting.

Documentation, to the 'tech', is a four-letter word. I, myself, recall asking the question "Do you want me to document it, or do it?" Simple economies prevent the techs from having enough time to complete the documentation task and many welcome this excuse not to do it.

Another trait of support teams is the presence of experts. In virtually any support team there will be experts in various disciplines. Most often, however, these experts are relied upon to provide most of the resource for fixing problems in their area of expertise when they should, in fact, be called upon to share their knowledge.

Shared knowledge is a powerful tool. Experts will always be needed when particularly difficult or unusual situations occur, but the team as a whole should be able to leverage the experience to improve task turnaround times through a more even spread of the load.

Knowledge transfer

It has been documented in studies that the best way to learn something is to have an expert stand over your shoulder while you go 'hands on'. The reality of the situation in front of the learner, coupled with specific and pertinent comments or instructions from the expert gives the learner an experience often indistinguishable from the real thing. The learner also has the opportunity to ask direct questions in the context of what they are doing. Book learning, on the other hand, can only go so far with static examples and predetermined situations.

Perhaps the most important aspect of 'over-the-shoulder' learning, however, is that the expert is unlikely to simply recite steps by rote. There will be an accompanying commentary and usually a significant amount of reasoning on why things are done that way. This is very important in equipping the learner for when things do not go to plan.

Learning the steps of a process by heart is all well and good when the process works. Most often, however, processes do not cover all possibilities, and the rote-learner of the steps is going to come unstuck when an unforeseen, or simply undocumented, situation arises. Unless the learner understands why they are taking the steps and what they should be achieving, they are almost as much 'in the dark' as they were prior to learning the steps.

Having knowledge about the nature of the process and the goings-on under the covers helps the learner get through many small deviations from the norm, and also helps in issue resolution, as the learner is able to return to the expert with a hypothesis, or at least having done some basic checks suggested by the nature of the operation.

The key issue with this type of knowledge transfer is that, in the majority of cases, the expert is already overworked and has no time to spend standing over shoulders.

A secondary issue is that the expert may have to impart their knowledge, over time, to a number of different people, and this is inefficient.

The 'Virtual Expert'

From what has been discussed so far, it is clear that expert knowledge is required, but that tying up the expert in this process is seen as unproductive in most situations. We cannot get away from requiring time from the expert, but we can minimise this time and capitalise on it by recording the knowledge in the right way.

In part 2 of this article I will go into methods for capturing this knowledge in the most effective way.

Thanks for reading.
Allister.

Monday, June 9, 2008

By way of introduction...

Greetings fellow software developers, this is not Lee speaking! My name is Allister Jenks and I am sure some of you who know Lee will know me as well. Those who don't know me may yet have read my comments on Lee's posts - under the identity of "zkarj".

Lee has graciously allowed me to contribute to his blog and I hope I can bring you the same levels of insight and analysis that Lee has led off with. I look forward to your feedback too.

Sunday, June 8, 2008

My first blog alliance

It gives me great pleasure to introduce an ex-colleague of mine known as 'Zkarj'. He has worked with IBM Power Systems (System i, i5, AS/400, etc.) for many, many years and is a true advocate of the platform in general.

He has written several articles/rants over the years and is published online.

Zkarj has asked if I would like to accept posts from him on this blog. It is my first offer of co-authorship since the blog began, but one I certainly won't be turning down, as he has lots of very interesting things to say about many of the topics that I blog about. You can see that by the number of comments I get when one of my new posts hits his RSS feed.

I hope you enjoy reading his material as much as I enjoyed working with him.

Thanks for reading.
Lee.

Saturday, June 7, 2008

What makes a good software developer?

I have decided to move on from my current role after over four years working at my present company. My reasons are varied and plentiful but as always the lure of a fresh new challenge often commands the majority of my thoughts.

I have started once more on the interview merry-go-round, first with agents and then, in the coming weeks, with potential employers. This is an interesting time in my career and certainly a change I am looking forward to, albeit a little nervously, as I have only ever had three IT-related job interviews in my life.

During my early stages of interview with one particular agent I was asked a really good open question. The question was “What makes a good software developer?”. I waited no more than 2 seconds before I began rattling off my opinion. Normally in these situations you take the time to consider what you want to say and then lead up to the answer.

This felt different.

I guess this is because although I have never answered this question before (personally or via my blog), I have hired enough developers and non-developers over the years to understand what I believe a good developer to be. After all, one of my own interview questions to potential new hires is “Why software development for a career?”

I ask this question as I want to know what motivated them to get into software development and what maintains that desire to be a software developer. At my last firm a new project manager joined and we got talking about stuff. You know, the technical stuff. It was quite obvious to me that this guy didn’t want to be a project manager and that he still harboured that technical development desire. I knew this because, as a project manager, he would say things like “Worst case, I can write that program” or “Couldn’t we do this in x language or y language?” It was pretty obvious to me that this guy couldn’t let go, and this is what I look for.

For me the number one thing is the passion. I want to see this in the eyes of the candidate as they express to me their achievements and technical prowess. I look for the body language that backs up these passionate views.

I have been part of, and built, software development teams. I have written in other posts that you need a mixture of people at varying stages in their careers, with a good balance of personal motivating factors. Passion is certainly the one I look for when I am considering the lead roles within a team, the reason being that I believe a lead developer must bring others on by example.

Other factors to look for, especially for a permanent employee, are:

* Longevity in the industry and loyalty to an employer or two.
* Proof of learning multiple languages and having the desire to adapt to development trends.
* Good understanding of general development concepts and practices.

These are pretty generic but with passion, loyalty, desire, adaptability and a good all round understanding of development I believe I can teach any developer the technology of the month.

Without these attributes I guess you could be selling your business short. If I had to choose one then passion is the one I would go for.

If you see a developer struggling with some code all day, who eventually lets out an enormous scream of relief as they finally solve their issue, jumps up and then starts punching the air in delight in the style of Rocky Balboa...

I’ll have that person in my team any day.

Thanks for reading.
Lee.

Saturday, May 31, 2008

The Great 3GL v 4GL debate - Part III

This is part III of a trilogy of articles regarding the usage and evolution of software development languages. Part I can be found here and part II here.

All of these technologies have issues to address. 20 years ago we were all happy with green screens for business applications on centralised platforms; then came client/server with Windows, and the distributed computing model became mainstream. Then along came the Internet and the return to HTML thin clients, and now the evolution once more leans towards rich/smart clients.

The irony for me is that I have witnessed many people move on from the 4GL world of the nineties to emerging 3GL (albeit object-based) technologies, i.e. J2EE (Java) and .NET-compatible languages.

With the extra layers of complication (some call it abstraction) added by business usage of the internet, I am seeing more and more tools coming onto the market that claim ‘code generation’ capabilities. You only have to look at the OMG’s ever-growing list to see that once again people are looking for the holy grail of application creation as projects overrun and costs escalate.

I do see a trend towards total code generation once more. IBM has launched a 4GL called EGL. It looks quite promising and might be worth a look, but to me it is not yet as mature as others.

The difference between tools like Plex/2E and this new breed of tools is that the ‘so called’ newer tools generally only cater for a single environment and often only create the initial code, which then requires manual intervention and coding in the generated language. In my mind, these tools have yet to evolve as far down the road as Plex/2E.

Plex and 2E both have their unique selling points.

2E is pretty easy to use and probably has a 3-6 month learning curve for a developer to become very proficient; quicker with excellent training and in-house support. Software development room 101, item 3: always spend decent money getting a guru to help you set up your environment and train the developers. Too often, mistakes are made in the early stages of application development. This is especially true when using new tools.

Plex will take longer (12 to 18 months) as it supports inheritance, shipped and custom business patterns, meta coding and many more target development platforms. It really is the Daddy of ARAD (Architected Rapid Application Development), hence the learning curve, but the payback after this is judged in weeks, months or even years off a development project's timeline. And with the great pricing of the tool and generators nowadays, it really is an option to help protect you against the constant upskilling costs associated with other technologies.

You should also consider that localisation and application version partitioning are built into the tool. From the single-skill-set perspective, your developers will always remain current. That said, you would always create the optimum patterns and platform-level code if some of your developers have the lower-level skills.

I have been programming computer systems in Plex and 2E for 16 years. These systems have used the best aspects of the tools and have always been database-focused applications.

These have been in finance and banking, debt management, mortgage application and processing, MIS, project management, time recording and environment management. They were deployed on System i (now IBM Power Systems, with ‘i’ as the operating system) using RPG and RPG ILE code, or Java and C++ server code, all with either C++ or Java (Swing) clients.

With .NET C# clients planned for these tools, the C# server code already available in 6.0, and the recent announcement of the WebClient partnership between ADC Austin and Websydian, the future looks really bright.

Time will tell what will happen. Often these battles are not won or lost by the technologies; they are decided by the marketing budgets.

However, I know which playground I want to play in. And if you need a guru to help you, you should contact me.

Thanks for reading.
Lee.

Thursday, May 22, 2008

Where's the dishcloth?

Bugs!!!! Love them or loathe them, realistic developers understand that bugs are part of our everyday life. We have technical bugs, environment bugs, business logic bugs, integration bugs, somebody else's bugs and, god forbid, stomach bugs.

Now, apart from the stomach bugs, who is responsible for clearing up this mess?

There are numerous approaches, depending on the product(s) you have developed, your organisational structure and your focus on bugs in general. I prefer the ‘zero tolerance’ approach to bugs; however, others are quite happy to have a level of bugs in their code and apply risk and cost ROI calculations to determine whether a bug is rectified and, if so, when. I feel there is a whole post on that subject alone and I’ll save it for a slow news day.

Moving back to the tactics around who should be responsible for clearing up this shoddy code: if you work as part of a small team of developers, or as a lone wolf, it is likely you have little choice other than to get the developer who wrote the code to fix it up (look in the mirror). You are unlikely to have development support teams who act as dedicated bug fixers, or access to a stream of developers on the graduate recruitment programme who fix up the bugs as part of their development induction process. The latter two are certainly perfectly valid approaches, although a little old-fashioned in my view; after all, who trains up new recruits by only showing them how not to write good code?

Personally, I believe that the developer who created the code should be the developer who fixes the bug. Obviously this won’t happen if they have left, are away on annual leave, or a significant amount of time has passed, but in general it is good practice to follow this process through. There are many fine reasons for either approach and no doubt I will conclude with some views around this a wee bit later.

For now, I prefer to use the analogy of those everlasting work surface ‘tea rings’ when referring to bug-clearing methodologies.

“Tea Rings!!!”.

Yes, you heard me correctly. Consider the communal kitchen in your office. You probably visit this vicinity between 4 and 10 times per day to make that cup of espresso stimulus or the relaxing afternoon chai tea.

The process is quite simple. You will carefully choose the serving vessel and may even warm it through first. You will likely complement your brew with milk or cream and sweeten to taste, unless of course you actually listen to the advice of your dental hygienist and drink water only. Whilst queueing patiently for the kettle to boil like the quintessential Englishman, you will definitely have pondered your preferred order for mixing these ingredients, water or milk first probably being the most important choice and certainly the one that has polarised the tea-drinking world for generations.

More often than not this process is repeated throughout the day and, with the exception of having to raid the dishwasher for a pre-loved teaspoon, it generally goes without a hitch, time after time after time. Software development generally pans out this way too. Once a developer becomes productive and uses your best practices, they will be able to make a good brew (code) with no mishaps (bugs).

After all the effort of analysing, prototyping, designing, creating and ensuring adherence to your quality control processes, you are finally ready to move your code (brew) to production or systems testing. From time to time, though, there is that unsightly spillage around the base of the cup as you pick it up. These are the tea rings that are etched on every spare post-it note pad on your desk, or that coat the surface of the old CD-R you are using as your cup coaster, the same coaster that once contained the backups of your company's servers.

So who is the best person to clear up this mess? As the creator, it should be a small matter of picking up the nearest dishcloth and wiping the work surface clean. But wait. When you look at the mess you notice that there are other tea rings there, some sugar mounds and a spattering of breadcrumbs from that cheese toastie you could smell from the other side of the office earlier. At this stage, do you clean this lot up as well?

You may elect to wipe clean your own mess only, expend a little more elbow grease and time and clean all of it, or choose to ignore your tea ring because, in the whole scheme of things, it is hardly noticeable amongst the remainder of the mess. For me there is only one satisfactory approach, and that is to deal with the issue as soon as it arises.

It only takes seconds to analyse the problem and take effective corrective action. If you choose to mop up all the mess then you must be aware of the dependencies of fixing up all the issues. What appears quite simple may take longer and if the mess is particularly ingrained you could actually damage the efforts of others.

Doing nothing, though, really isn’t an option either, as this creates an environment where bugs are acceptable. Housekeeping is just as important in the office kitchen as it is in keeping your code and products bug-free. If you do favour separate teams or graduate programmes for doing the team's dirty work, imagine for one moment how they feel knowing that they are merely cleaning up other people's mess.

Lastly, how are your developers ever going to get better and improve your product if there are no consequences for producing shoddy code in the first instance?

Thanks for reading.
Lee.

Monday, May 12, 2008

The Great 3GL v 4GL debate - Part II

This is part II of a trilogy of articles regarding the usage and evolution of software development languages. Part I can be found here.

So what are the benefits, or otherwise, of using a 3GL over a 4GL, and vice versa? For me it depends on all the usual factors that drive any technology decision: cost of product, support, flexibility, the human factor, tool lifecycle, vendor direction and target platforms, to name a few that come to mind instantaneously.

The Pros of a 3GL

Embedded or mission-critical applications like air traffic control systems are generally handcrafted and more suited to a 3GL environment, as are operating systems, 4GL tools themselves (debatably), communications, hardware drivers and generally non-database applications. As the developers have access to all the APIs and are that step closer to the CPU, they generally have wider usage opportunities.

Accessibility to a wider developer pool. Whilst there are probably thousands of developers for your chosen 4GL, possibly even tens of thousands, these tools simply do not have the numbers associated with mainstream development languages and IDEs. There are an estimated 4 to 5 million developers following the evolution of Java, and no doubt Microsoft can boast even more for its most popular products. That said, of course, this also means that it is harder to find a guru within that skills ocean, not to mention filtering out those who have spent 15 minutes in the IDE and now claim some form of exposure on their curriculum vitae.

3GLs are quicker to react to emerging markets and development trends. Generally, the suppliers of these 3GL tools are inventing the future. They don’t often agree with each other, but they certainly have the advantage over the 4GL creator, who has to wait and see which technology actually matures beyond the marketing hype and into mainstream best practice before committing to provide code generation for that area.

Flexibility. Languages at the 3GL level, depending on the targeted platform, have virtually no restrictions on the type of application that can be written or how it is written. It also means that for applications where speed of performance is the critical measure of success, a 4GL will most likely fall short of handwritten, targeted code.

The Pros of a 4GL

Business-rules-focused development. Once you have learnt the code generator's quirks, you are in a situation where you mainly tackle your development from the business domain and allow the code generator to handle the technical implementation. With this comes a significant reduction in the amount of time required to build an application. Many will say that there are standards and frameworks that help with 3GL development. This is actually quite true but, equally, the code generator vendor will be skilled in the major best practices and will write more consistent code. Some may argue that the code is not as neat as code written by a good developer and, in that regard, I quite agree. I will say, though, that the underlying code will always be written in the same way and style; therefore, after a while, all the developers will become conversant in how the code is generated, that is, if they want or need to understand (see below).
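As a purely hypothetical illustration (this is not actual 2E or Plex output, just the flavour of it), a one-line rule such as 'fetch a customer's name by customer number' expands into the same predictable plumbing every time it is generated:

    import java.sql.*;

    // The kind of repetitive, consistently styled code a generator might emit
    // for a single declarative rule (illustrative only; table and column names invented).
    public class FetchCustomer {
        public static String fetchName(Connection con, int customerNumber) throws SQLException {
            String sql = "SELECT NAME FROM CUSTOMER WHERE CUSTNO = ?";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setInt(1, customerNumber);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString("NAME") : null; // null when not found
                }
            }
        }
    }

The point is not this particular fragment, but that every fetch in the system will look exactly like it.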

Complexity avoidance. A 4GL will protect the majority of the developers using the tools from the underlying complexities of the generated language. Couple this with the ability to influence how the code is generated using patterns, and the ability to take the design model from the 4GL and transform it into other language code, and your business logic can truly be ported from platform to platform as trends become reality and your technical needs change.

Impact analysis. For me this is one of the key features of using a 4GL tool. Generally, these tools use a database to store the design and program artefacts that are then transformed into the language code. Every reference for every field, file/table, access path/index/view and function/object/program is stored in the repository, and a developer can track each and every item through to where and how it is used. This is a powerful feature that cannot be overlooked versus manual reviewing of language source files.
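A toy sketch of the idea in Java (my own simplification; no real repository is implemented this way): record every 'is used by' relationship, then walk those relationships transitively to find everything a change touches:

    import java.util.*;

    public class ImpactAnalysis {
        // usedBy maps an artefact (field, file, function) to the artefacts that reference it.
        static Set<String> affectedBy(String artefact, Map<String, Set<String>> usedBy) {
            Set<String> affected = new LinkedHashSet<>();
            Deque<String> toVisit = new ArrayDeque<>(usedBy.getOrDefault(artefact, Set.of()));
            while (!toVisit.isEmpty()) {
                String a = toVisit.pop();
                if (affected.add(a)) { // visit each artefact's users exactly once
                    toVisit.addAll(usedBy.getOrDefault(a, Set.of()));
                }
            }
            return affected;
        }

        public static void main(String[] args) {
            Map<String, Set<String>> usedBy = Map.of(
                "Field:CustomerNumber", Set.of("View:CustomerByNumber"),
                "View:CustomerByNumber", Set.of("Function:FetchCustomer", "Function:ListCustomers"));
            // Widening CustomerNumber flags the view and, through it, both functions.
            System.out.println(affectedBy("Field:CustomerNumber", usedBy));
        }
    }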

Trusting the generator. When I train people to use CA 2E or CA Plex, the defining moment for gauging a developer's progress and understanding is the day they learn to trust the generator. As with any tool, a badly constructed function in 2E, for example, can create badly generated and non-compilable code. Once the developer realises that it is generally their fault if a generation of code fails, they're ready to move forward. I have seen far too many 3GL programmers migrate to the 4GL paradigm only to get bogged down in the details of the code produced, yet they will trust the compiler without hesitation. The ability to change a shared function or the domain of a field, apply detailed automated impact analysis to identify all affected programs, and then press a button to regenerate and compile every affected program and database file is a very powerful feature.

The Cons of a 3GL

Slower, more expensive development. The very nature and size of modern 3GL languages, and their flexibility, is also their Achilles heel, as there are so many ways to resolve a programming issue, with literally thousands of opinions and many directions. In a nutshell, for certain types of applications, particularly those that involve extensive usage of a database, the ROI for using a 3GL versus a 4GL is very poor indeed. To balance the cost debate, 4GL tools are generally more expensive to purchase, but the most expensive item in any development team is the human, even if the work has been outsourced to an emerging development powerhouse.

You will spend more time debugging the application. A very good ex-colleague of mine once said, “If the art of debugging is the removal of bugs from programs, then programming must be the art of putting them there in the first place.” Because we are relying on the developer to code all aspects of the application, issues are likely to creep in along the way. It is generally the developer's responsibility to deal with memory leaks and memory usage in languages like Java or C++, whereas with a 4GL it is the code generator's responsibility.
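A classic example of the sort of leak that stays the hand-coder's problem (my own illustration; nothing to do with any particular generator):

    import java.util.*;

    public class LeakyCache {
        // Static, so it lives for the whole process; nothing is ever evicted.
        private static final Map<Integer, byte[]> CACHE = new HashMap<>();

        static byte[] load(int id) {
            // Each new id permanently pins another megabyte in memory.
            return CACHE.computeIfAbsent(id, k -> new byte[1024 * 1024]);
        }
    }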

Complexity. Once again, due to the size of the languages and their broad reach, it is unlikely that you will find developers who know all the aspects required to complete an application. Your staffing needs are generally much higher, and the learning curve for the 3GL is very significant indeed, as the developers must understand many technical as well as business problems.

The Cons of a 4GL

Vendor lock-in. Depending on the vendor, this can be quite a significant issue. If the vendor is too slow to react to emerging technologies, you will find yourself with a heterogeneous development environment and you will lose many of the advantages referred to above with regard to complexity protection and highly detailed impact analysis. Worse still, your vendor may well decide to stop production of the 4GL or choose other directions as the options for technology deployment balloon. These tools are often criticised as proprietary.

Flexibility. There will be limitations on the scope of applications that can be created by a single 4GL. There are, of course, others that target different platforms and purposes. Their flexibility is often measured by the lowest common denominator of the platforms they have to support and generate code for: a generator that targets three different platforms may have to limit what can be done in one language due to limitations in another. For example, different languages may have differing maximum field lengths, meaning that, for generic code construction in the 4GL, platforms x and y can only size fields to the limits of platform z.
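In code terms (a made-up example with invented limits), the usable generic field length is simply the minimum of what the target platforms allow:

    import java.util.*;

    public class FieldLimits {
        public static void main(String[] args) {
            // Hypothetical maximum character-field lengths per target platform.
            Map<String, Integer> maxFieldLength = Map.of("x", 256, "y", 128, "z", 32);
            int usable = Collections.min(maxFieldLength.values());
            System.out.println("Generic field length: " + usable); // 32, dictated by platform z
        }
    }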

Source code. Many 3GL developers will argue that generated code is not user-friendly, is bloated, and is often too generic in comparison to hand-written code. This can be true of some code generators and is certainly something that needs to be considered when choosing an approach for your development.

None of the above is by any stretch of the imagination a definitive list. Given time, I believe I could have produced a list of 20+ pros and cons for each approach.

Part III will discuss trends and fads, and conclude the 3GL and 4GL debate with my own personal viewpoint.

Thanks for reading.
Lee.