Monday, July 28, 2008

The context of programming

Recent events in my place of work have led me to ponder the concept of programming context once again. I suspect it is a pervasive concept, as I seem to come across it on a regular basis in quite different circumstances. Let me explain.

If I am asked to write a program that accepts two numbers and returns a third number, being the product of the two, then there is not a lot more I need to know. Perhaps knowing the possible range of input numbers would be useful, but really this is a pure mathematical problem and has no context.

If I am asked to write a program that accepts two numbers and returns a third number - the number of residential addresses in a database that fall between those two numbers - then there is quite a bit more I need to know. I need to know whether street numbers alone should be checked, or whether street names should be included (5th Avenue, for example). Even within street numbers alone, what about flat numbers? It's a bit more complex than the first example because there is a context: what are we actually trying to achieve here?

Now in a third example, I am asked to write a program that accepts two numbers (x and y) and returns a third number, which is the number of active users who have been logged in between x hours and y hours. This time the context is more complex again. How do I define a "logged in user"? Do I define one interactive session as one user, or do I need to reduce this to unique users because some may be logged in more than once? What about "special" users such as system-supplied IDs? Should all of them be counted, none, or only some?
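
To make the ambiguity concrete, here is a minimal sketch (Python, purely for illustration; the session data shape, field names and the system-ID flag are my assumptions, not the real system). The same two inputs give different "correct" answers depending on which definition you pick:

    # sessions: list of (user_id, hours_logged_in, is_system_id) tuples - hypothetical data shape

    def count_sessions(sessions, x, y):
        # one interactive session counts as one "user"; system IDs included
        return sum(1 for _, hours, _ in sessions if x <= hours <= y)

    def count_unique_users(sessions, x, y, include_system_ids=False):
        # reduce to unique users, optionally excluding system-supplied IDs
        users = {user for user, hours, is_system in sessions
                 if x <= hours <= y and (include_system_ids or not is_system)}
        return len(users)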

But the third example is even more complex than I have shown so far. Consider that this function needs to work in a function test environment, in an integrated test environment, and in production. There are some processes that occur only in production, some only in test and some on both. Will this affect the outcome? Is testing on the test system going to be good enough to know it works in production?

Hang on a minute - aren't we talking about system programming? Well, maybe yes and maybe no. If this program is needed to manage software licensing, then it's a system program. But if it is needed to manage the number of customer service representatives assigned to different parts of the call centre, then no, it is not system programming. If it is being used to achieve load balancing for application service jobs then it could go one way or the other.

Now that was a somewhat contrived example, but it helps me to illustrate my point. In all three cases, take two numbers and return a third. The first example I would expect absolutely any programmer to be able to achieve. The second example I would expect any programmer to be able to achieve if complete requirements are provided. If the problem is only defined as I described, then you would need an analyst programmer. For the third example, who would you give the job to, generically speaking?

This is where I see a massive gap. I have been fortunate to be involved in both application and systems programming fairly extensively, and even if I say so myself, I think I'm pretty good at covering off the sorts of issues described above. It also means I frequently see other programmers failing to account for the "system" level factors.

In a specific recent case, a developer insisted that my team (who are a development & test support team) replace one version of a program with another so that it 'behaved like production'. That should have been the first red flag. (I was not involved at this stage so I don't know whether I would have caught this at the start.) Why was the test system behaving differently to production?

Well, the developer got his wish and proceeded to make his related code work. Meanwhile, large numbers of other people were tripping over the problems introduced. After several days of analysing the problems we concluded we had to put things back the way they were. To quote Spock - "The needs of the many outweigh the needs of the few." This programmer was working in far too narrow a context when defining what needed to be done. He had no concept of the roles this particular program was playing, nor of the large number of dependencies it had. For instance, an automated regression testing suite completely failed because of the change.

But perhaps the most spectacular case of lack of context that I have ever encountered was in a previous role.

The product in question was enterprise software used all around the world, and it was incredibly complex. Customers had requested the ability to use off-the-shelf reporting tools (such as Crystal Reports) to create their own reports. The development organisation realised this meant less work on such things for us and considered it a good idea - but a dangerous one. Great, they can write their own reports, but how do we let them into a massive, complex database without (a) massive confusion and (b) the opportunity to corrupt it?

So a plan was hatched to deliver a new library (for self containment) of logical files (views) which would collate the data into meaningful constructs and, importantly, be read-only. My team (again in development & test support) figured out how to deal with this new library for the purposes of the testing done on it. For the most part we just manually created and destroyed these libraries as required and used some of our own toolset which, importantly, is not delivered to customers.

At some point I got to thinking... how are we going to deliver this? The initial response I got from the designer was "on a tape/CD with the rest of it." To cut a long story short, I soon proved that it is impossible to ship a library full of logical files. Period. Can't be done. I took this information back to the designer, along with a rough sketch design of a simple tool which could alleviate the problem, and also be useful within the development shop.

The response? "We didn't budget for that." * Sigh *.

In the end, I wrote a quick (hack) version of that tool on the day we packaged the software. Some months later someone contacted me saying that there was a bug in my code. I sent them to the designer to have it sorted out.

Thanks for reading.
Allister.

Monday, July 21, 2008

2E - Development Standards (Defensive Programming)

Part two in the series takes a look at defensive programming techniques and how they help you to create reliable programs.

The following are guidelines for creating robust code.

Always check for a divide-by-zero (runtime) error by checking the divisor field for a zero value prior to performing the *DIV operation.
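
The same guard, sketched in Python rather than 2E action diagram syntax; what to do when the divisor is zero (skip, substitute a default, or signal an error) is an assumption on my part and should follow your shop standard:

    def safe_divide(numerator, divisor, default=0):
        # check the divisor before performing the division,
        # rather than letting a runtime error occur mid-program
        if divisor == 0:
            return default
        return numerator / divisor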

Never move numeric fields into a field with a smaller domain.
With RPG this can cause truncation of the value, and with RPG ILE pre 8.0 it will cause an RPG ILE runtime error.

Ensure that your iteration values and counters are large enough to cater for your anticipated maximum.

Ensure that your field sizes for database attributes are sized sufficiently to cater for the number of records anticipated.

Ensure that your arrays are sized to cater for the maximum number of array records anticipated.
This avoids array index out of bounds issues. Remember to balance this against not oversizing the array and thus causing a performance degradation.

Always ensure that any substring operations utilising position and length parameters are within the range of the target field, thus avoiding substring out of bounds errors.
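
As a rough illustration of the idea (Python, not RPG; clamping to the field boundary is one possible policy and is my assumption - you may prefer to signal an error instead):

    def safe_substring(value, start, length):
        # keep the position and length within the range of the target field,
        # so the operation cannot run out of bounds (positions are 1-based here)
        if start < 1 or length < 1 or start > len(value):
            return ""
        end = min(start - 1 + length, len(value))
        return value[start - 1:end]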

Remember to check the function options for your function to ensure appropriate behaviour, especially close down program and reclaim resources.

Never use the WRK context in new programs. Use LCL and HLL.
If you choose to fix up old WRK fields, remember to check all other internals within your object and ensure that the fields aren't used elsewhere. This used to be a trick in the old days to bypass the parameter passing limits, back before arrays and when structure files were a pain.

Avoid HLL User Source and Programs. If you do write user source for RPG, first convert the program to RPG ILE and write one user source. The sign of a well managed and maintained 2E model is the percentage of HLL code versus generated code. If your models are more than 5% HLL then you have issues and a history of developers who have misunderstood the purpose and philosophies of model-based development. IMHO.

Always pass parameters to user source. Do not rely on the generated field names.

Avoid use of CON context as these values are not available for impact analysis and localisation.

Avoid manual source modification. Use a program and the pre-processor directives to amend code automatically.


Source Modification – Special Notes.

Manual source modification must be avoided at all costs. If source is required to be overridden then a source modifying program should be written to automatically perform this function after generation and before compilation using the pre-processor.
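
As a very rough sketch of the shape such a tool might take (Python for illustration only; this is not the 2E pre-processor itself, and the file handling and substitution rules are hypothetical):

    import re

    def apply_source_overrides(source_path, rules):
        # rules: list of (pattern, replacement) pairs applied to the generated source,
        # run automatically after generation and before compilation
        with open(source_path) as f:
            source = f.read()
        for pattern, replacement in rules:
            source = re.sub(pattern, replacement, source)
        with open(source_path, "w") as f:
            f.write(source)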

In Summary:-

- Do NOT consider source modification unless absolutely necessary.

- Avoid the use of fields that use incremental counters for naming i.e. LCL Context YLnnnn.

- Avoid adding parameters above a field declared as a modified field. Therefore, always try to ensure that parameters which are modified are at the top of the parameter declaration list.

- Try to avoid usage of fields larger than 32K. If 64K is required, consider looping in 32K blocks, as the 64K limit will one day be exceeded.

- Consider a naming standard to help you to easily identify a modified program and its modifying program.

- Consider centralised methods to ensure source modification programs have been successful, rather than depending on a developer having to manually check the modified source.

Thanks for reading.
Lee.

Monday, July 14, 2008

Knowledge capture & use in technical support communities - Part 3

In Part 2 I looked at how to capture expert knowledge. In this final instalment, I will offer suggestions on where to store that knowledge, some differing uses of these concepts and offer some final insights into why I wrote the original version of this article.

Once again, I lead off with a small overlap to set the scene.

Electronic storage for fast access

Following the structuring process above introduces one significant disadvantage in a paper-based documentation repository. Frequent referencing to other documents causes the reader to flip pages or have multiple documents arranged on the desk in order to complete a single process.

Simply storing these documents electronically, such as Microsoft Word files in a LAN folder, is not enough to deal with this problem, as it merely shifts the emphasis to clicking the mouse constantly and still does not help out with the comprehension of the process as a whole.

The answer is hyper-linking. Every single reference to another document should be turned into a hyper-link to that document. This guarantees simple, fast, unfettered access to the linked information and, in most cases, an easy return path to the original document.

It is important to choose the document delivery technology with this method of usage in mind. Although MS Word provides inter-document hyper-linking capabilities and is an excellent document editor, it leaves a lot to be desired as a document reader. Better choices are Lotus Notes, or HTML. Lotus Notes serves as both editor and fast viewer. HTML requires a separate editor (and there are many to choose from), but has a ubiquitous interface if you are considering a large audience for the documentation.

Although I would recommend Lotus Notes as an excellent delivery mechanism, it is important to avoid the use of Notes 'Views'. Rather, a default, or 'home', document should be launched easily from a bookmark and all navigation from that point should be via hyper-linking. This gives immediacy to the navigation by making it all point-and-click. This typically avoids the need to work with Notes' view characteristics like 'twisties' and scrolling. (Note that both of these can be used within a Notes document if desired.)

Tech-centric versus customer-centric

At the beginning of this series, I referred to the workings of a technical support team. For such a team, there are two key audiences for a documentation repository following the guidelines described in this document.

Perhaps the most important audience, in terms of covering exposure, is the team itself. Sharing the expert knowledge around the team ensures that individual team members do not become indispensable.

However, often the single biggest gain to be made from good documentation is by providing the team's customers with 'self help' information. The audience level for such documentation will necessarily be much lower than for internal documentation and will therefore take more time to prepare, but the ability to point customers to a self-help document for a common problem can save the team enormous amounts of time. This time could be used to improve the internal documentation and then you can reap all of the benefits.

The author's experience

In writing these articles, I am speaking from experience. I have built a customer-centric and a tech-centric database along these lines and they have been very successful. Both were built as Lotus Notes databases.

The first one built was the customer-centric repository, which was deployed to up to 150 developers and testers. This originally came out of a FAQ document introduced during a pilot implementation of a source configuration management product. The documents were listed in a Notes view in broad, simple categories. Each document title was phrased as a question, the answer to which lay inside. Many of the documents referred to others for pre-requisite information.

My team used this database to help keep the load down in our helpdesk-like operation. I estimate that once the database had matured, over 90% of customers who were referred to one of the documents did not need to make further contact with us for that issue. It is clear from the numbers and the nature of calls we received that many customers went straight to this repository and never needed to call us.

The second database was tech-centric and used all of the principles described above. Although the closure of the business meant this repository never quite made it to 'complete' status, the documents that were created (over 60) proved the points I have made above. New members of the team who were technically competent (that's why we hired them) but who had scant knowledge of our processes were able to perform fairly complex tasks purely from the information contained in these documents. Although it sometimes took them orders of magnitude longer to perform a task than one of the experts, the fact remains that they successfully completed the task with little or no input from the experts in person.

Reflection

In the matter of conversational language being more effective than rote instructions, I can recall being given two articles to read in preparation for a 'knowledge workshop' (that unfortunately never happened).

After many years now, I can only remember what one of those documents said. The one I remember spoke about how conversational language was much more effective than brief and often brusque language that is found in much technical documentation. It made the link to the over-the-shoulder technique and postulated some of the thinking that I have expounded above.

Some time well after I had created the customer-centric database I came across these two documents and re-read them. I immediately tied the conversational language ideas to what had happened in our database and saw that it was true. After re-reading the documents, I also realised why I had remembered this document and not the other. This one was very conversational, in contrast to the much more formal (and fact-and-figure quoting) style in the other document.

To this day, I do not remember what the other document said. Perhaps sometime I will come across the pair again and the cycle will repeat.

Thanks for reading.
Allister.

Sunday, July 6, 2008

2E - Development Standards (Performance)

This is the first part in a complete series of articles I intend to post regarding development best practices and standards for the CA 2e (Synon) development tool. The aim of publishing the guides is to educate, collaborate and enhance the standards by receiving community feedback. After all, no one person can know everything but the wider community can contribute.

Many of these tips I have learnt over the years and quite a lot have been sent to me by interested parties around the world. A big thank you to you all.

I will publish the complete documents on the 2E wiki (soon) with full acknowledgements. (See my links section below).

In the meantime I will publish some selected extracts on this blog just to get your thought processes flowing.

Performance

There are many considerations when programming for performance in CA 2e. A few are highlighted here. This is by no means an exhaustive list. My next technical post will relate to Defensive Programming techniques...

I'd be interested to hear of others from the community in general and would be happy to include them on this blog and the final wiki document.

Drop unused relations where possible and set others to the appropriate level, i.e. OPTIONAL or USER etc. This cuts down unnecessary code and processing as well as making your action diagrams more easily navigable.

Avoid FLD for passing parameters for non command line type programs. This will use fewer PAGs.

Tactically use *QUIT to reduce I/O, especially when programs have lots of nested validation logic. Use *QUIT inside subroutines to halt further processing as soon as validation fails. This provides cleaner message feedback to the end user and reduces response times.
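
The principle is the same as an early return in any language. A Python-flavoured sketch of the idea (the checks themselves are hypothetical):

    def validate(customer_exists, item_in_stock, credit_ok):
        # halt further processing at the first failed check (the *QUIT idea),
        # rather than nesting every later check inside the previous one
        if not customer_exists:
            return "Customer not found"
        if not item_in_stock:
            return "Item out of stock"
        if not credit_ok:
            return "Credit limit exceeded"
        return None  # all checks passed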

Avoid Dynamic Selection Access Paths.

Avoid Virtuals, especially Virtuals with relations to files which themselves have Virtuals.
Virtuals have their place - query access paths, or scenarios where they are always used - but best practice in this area is to avoid Virtuals and retrieve data as appropriate.

Ensure programs do not close down if called iteratively, e.g. in a loop or inside USER: Process Record for a RTVOBJ or PRTFIL etc. This is typically the case for externalised RTVOBJs.

Consider sharing ODPs (Open Data Paths).

Consider usage of shared subroutines.
This minimises the amount of code, reduces object size and makes debugging easier.

Consider usage of Null Update Suppression within your CHGOBJs. Very useful for batch programs.

Avoid unnecessary selector/position fields on subfile selectors.

Avoid contains (CT) selection on control panels.

Ensure arrays are appropriately sized.
Too large and they will consume more memory.

Reduce file I/O by loading small reference files that are regularly read into arrays upon opening the program. Good examples here would be files like TRANSACTION TYPE or XYZ RULES.
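
In general-purpose terms, the equivalent is to read the small reference table once when the program opens and do every subsequent lookup in memory. A hedged Python sketch (the read_all_rows callable and the field names are assumptions):

    def load_reference(read_all_rows, file_name="TRANSACTION_TYPE"):
        # read the small, frequently used reference file once at program open...
        return {row["code"]: row["description"] for row in read_all_rows(file_name)}

    # ...then each lookup is an in-memory access instead of a file read, e.g.
    # description = transaction_types.get(record["type_code"], "Unknown")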

Reduce I/O by getting reference data only on key change. This will depend on the chosen access path, of course.
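
A small sketch of the key-change technique (Python for illustration; the record shape and the get_customer lookup are hypothetical):

    def with_reference_data(records, get_customer):
        # only re-retrieve the reference data when the key actually changes,
        # which assumes the records arrive in key order (the chosen access path)
        last_key, customer = None, None
        for rec in records:
            if rec["customer_id"] != last_key:
                customer = get_customer(rec["customer_id"])
                last_key = rec["customer_id"]
            yield rec, customer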

When writing to the IFS, write fewer, larger chunks of data rather than multiple small chunks. The overhead is in opening, positioning and closing the IFS file.
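
The buffering idea in a hedged Python sketch (the chunk size is an arbitrary illustrative figure, not a recommendation):

    def write_in_chunks(path, lines, chunk_size=32_000):
        # accumulate output and flush it in a few large writes, rather than
        # paying the open/position/write overhead for every small piece
        buffer, size = [], 0
        with open(path, "w") as f:
            for line in lines:
                buffer.append(line)
                size += len(line)
                if size >= chunk_size:
                    f.write("".join(buffer))
                    buffer, size = [], 0
            if buffer:
                f.write("".join(buffer))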

Pass reference data down through the call stack rather than re-retrieving it in the lower level function.

Consider physical file access paths for fix programs (version 7.0+) or write SQL to perform the basic updates.

Use OS/400 default values to initialise fields on a database file rather than write a program.

CHG/CRT v CRT/CHG. Use the appropriate one depending on the likelihood of the record's existence.
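
The choice is simply which operation you attempt first. A hedged Python sketch (update_row and insert_row are hypothetical and assumed to return whether they succeeded):

    def change_then_create(key, values, update_row, insert_row):
        # CHG/CRT: use when the record usually exists already
        if not update_row(key, values):
            insert_row(key, values)

    def create_then_change(key, values, update_row, insert_row):
        # CRT/CHG: use when the record usually does not exist yet
        if not insert_row(key, values):
            update_row(key, values)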

Avoid native *CONCAT and *SUBSTRING in 2E for long string manipulation. If concatenating long strings, it is possible to keep a counter of your current position, saving the concatenation operation the time it takes to find the current end of the string.
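
A rough Python illustration of the counter idea (the work field size is arbitrary; in Python you would normally just collect the pieces and join them once, which amounts to the same saving):

    def build_long_string(pieces, work_size=32_000):
        # keep a running position instead of re-scanning for the current
        # end of the string on every concatenation
        buffer = [" "] * work_size            # pre-sized work field
        pos = 0
        for piece in pieces:
            buffer[pos:pos + len(piece)] = piece
            pos += len(piece)                 # the counter replaces the end-of-string scan
        return "".join(buffer[:pos])          # assumes the total fits the work field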

Avoid RTV message to build strings with high usage.

Consider DSPFIL instead of DSPTRN, especially if de-normalisation is designed into the database with any total duplicated into the header record.

Do not perform a *RTVCND for blanks.
Check for blank first in the action diagram.

Consider a database file field for *RTVCND if appropriate.

Be aware of the effect of a scan limit with strict selection criteria, as the screen will not pause load processing until the subfile is full or EOF is reached. This is particularly important for large files.

Consider the naming conventions of your access paths to ensure that underlying indexes can be shared when key subsets are apparent, and ensure that they are built and implemented in the correct order to reduce the number of indexes.

Ensure access paths have correct maintenance option i.e. *IMMED, *DLY or *REBLD.


Thanks for reading.
Lee.

Monday, June 30, 2008

Knowledge capture & use in technical support communities - Part 2

In Part 1 I discussed the problems facing the technical support team with overworked experts and a need to transfer their knowledge as efficiently as possible.

In Part 2 I will discuss how to successfully capture and store this knowledge in an efficient and, above all, useful way. I'll lead off with a brief overlap from last time as a reminder of where we got to.

The 'Virtual Expert'

From what has been discussed so far, it is clear that expert knowledge is required, but that tying up the expert in this process is seen as unproductive in the current climate. We cannot get away from requiring time from the expert, but we can minimise this time and capitalise on it by recording the knowledge in the right way.

The answer lies in recording the expert knowledge (on paper or, more usefully, electronically - see later) in such a way that it is as close as possible to the over-the-shoulder commentary.

There is often still a need to use numbered steps when accomplishing a task. Such steps provide structure and sequence and help with mental tracking when performing the task. There is no reason, however, why each step cannot contain more than simple 'input, output' or 'action, reaction' type information.

For maximum benefit, each step should be written in conversational language and explain what the user is doing, why they are doing it, what the expected outcome should be and at least make reference to any unusual, but known, variations.

Furthermore, before any of the steps, there should be an introductory section which describes why the user would perform the task, what pre-requisites there may be and definitions of terms, systems and the like. After the final step, make mention of any further tasks that may be a logical progression from the task described, but which do not form part of this process.

Don't take anything for granted

Whilst we are talking about capturing expert knowledge, it is important not to lose sight of the basics. Any documentation is devalued if it makes too many assumptions. In creating a documentation repository, an audience level should be decided - such as 'technically competent', or 'beginner' - and all documents should be written for that lowest common denominator. It is easier for a more expert user to skim over known material than it can be for a new person to work out the undocumented basics.

It is important to include examples in the documentation. Where possible, have the example show the most common scenario, as it is most likely that staff new to the task will use the examples. It is also worth giving additional examples if there are significant variations in a step. Providing examples helps the user to get closer to the over-the-shoulder situation.

The ultimate test for the documentation is to give the process to a person who is at this 'lowest level' and have them perform the task. You will be surprised by some of the information you have taken for granted in your early drafts. I know I was.

Structuring the documentation

For ease of maintenance, it is important to only ever store a piece of information in one place. To help achieve this structure, it is useful to allow for two document types in the repository - reference documents and process documents.

Process documents contain steps describing how to perform a task. Reference documents contain (mostly-) static information that supports one or more processes.

It is often necessary to refer to tables of information (such as a list of files, describing their usage) from more than one process document. By separating this type of information into a reference document, it can be referred to by multiple process documents without increasing the maintenance burden through multiple copies. Additionally, when the table requires maintenance, it is easier to locate (residing under its own title) and the maintenance can be performed without danger of corrupting the process documents. When properly structured, maintenance of the reference document can be accomplished without knowledge of the referring process documents.

Whilst reference documents tend to represent pure data, it is still important to keep the conversational language in mind. There may be naming conventions or other conventions which are being followed for the data and it is important to note this in the reference document to complete the picture for the user of the information, and equally importantly for the maintainer.

It is also beneficial to factor out sub-processes into separate process documents and refer to them from the major process documents. This is of value where a sub-process is part of more than one major process.

The major benefit of having information in only one place is realised when errors are amended or updates are applied. These have to be done in only one location and all related processes are automatically catered for, as they simply reference to this single occurrence.

Capturing is understanding

The process of capturing information is time consuming and is best not left to the individual experts. Remember that they don't have that much time. Also, too many authors can devalue the repository by differing styles and levels of language.

The best solution to this is to have a single person (or possibly two or three) to build the documentation repository. This co-ordinator is then responsible for collation, setting style and keeping the language consistent. This person should not be expected to author all of the documents, but must be able to understand at least the broad concepts involved in order to ensure that appropriate structure is followed.

Each expert should be expected to provide a draft of the process or reference data, in a form approaching the final requirement. In some cases, where the co-ordinator's knowledge is good enough, they may author the document, but it should always be checked by the relevant expert.

Electronic storage for fast access

Following the structuring process above introduces one significant disadvantage in a paper-based documentation repository. Frequent referencing to other documents causes the reader to flip pages or have multiple documents arranged on the desk in order to complete a single process.

In Part 3, I will discuss some real world solutions to electronically storing, maintaining and delivering the captured knowledge.

Thanks for reading.
Allister.

Sunday, June 22, 2008

The "Wooo! moment" Factor

This is another one of my general life blogs and follows up from my recent article about what makes a good programmer where I refer to a ‘Rocky Balboa’ moment that encompasses everything about being a great computer programmer.

I would like to clarify that this post has nothing to do with the incessant number of reality talent shows and is not in any way linked to or endorsed by those in the entertainment industry.

However, I recently went to the Smackdown/ECW world tour event here in Auckland, New Zealand.

Like most of the other hardcore WWE fans I registered for the internet presale, logging in a minute or two before midday and continually pressing the refresh (F5) button in Internet Explorer until the ticket selection page popped up. A quick combo-box click later, an anxious wait ensued whilst the system allocated my tickets, and then the "Wooo! moment" came.

I knew it was a full on “Wooo! moment” as it did turn the heads of a few people in my office.

The reason was that I was lucky enough to get front row tickets, seat numbers 1 and 2, which was not only the closest you can get to the ring but also the closest you can get to the entrance ramps where the wrestlers strut their stuff as part of the pre-bout entertainment. The arena held thousands and thousands of people, and to get the two best tickets in the house was reason enough for me to celebrate. Actually, to celebrate for my daughter, as she is the wrestling fan. I just have transient knowledge.

For anybody who follows WWE wrestling, you will probably know a guy called Ric Flair. He is a 16-time world champion as well as being regarded as one of the all-time greats. He is also famous in WWE circles for his tag line "Wooo!". So much so that at the last three shows we have been to, Ric Flair has never been there - he has been retired since March 2008 - and yet everyone was shouting "Wooo!" as the excitement began to build in the arena.

Now for me life is about experiences and the memories thereafter. We have both good and bad times that shape us all individually in some form or another. One hopes that over the balance of our lifetime there are many more good times than bad, and it is these good times that I affectionately refer to as the "Wooo! moments".

Now, if we spend on average between 50 and 70 hours travelling and working each week, I often ask myself why people put up with a job or a career that doesn't provide them with "Wooo! moments". I have been pretty lucky in this regard over the years. I am an analytical person and love computers. As a child at my local school jobs fair in 1983 I expressed that I wanted to be a computer programmer. After a few minutes searching the database (a list of jobs on paper in those days), I was asked if I wanted to do commerce - a funny choice at first, until you realise that it was the closest thing alphabetically to computing.

How times have changed.

Now everyone wants to do computing, and whilst there are now more areas in which to become expert, I also believe that computing remains at risk of being dumbed down. I say this because many people are getting into computing because they see a higher than average salary trend, because they only see the exciting parts of the job glamorised by Hollywood films, or because they see it as easy.

For me, I got into computing because of the "Wooo! moments" and I continue to adore this line of work. But as I get older I also find myself enjoying the fact that others around me are having their own "Wooo! moments". I'm a little like grandparents who enjoy watching those cute little bundles known as grandchildren.

But of course it doesn’t stop there.

You may have a job that only has one or two "Wooo! moments" in an entire career span. A good example was the recent mission to detect life on Mars. Some of those guys had been working on that mission for 10 years or more. But what a "Wooo! moment" when that million dollar craft landed on Mars and started to do its stuff.

Wow!!!!

The screams of joy and relief that I could hear just by watching the footage on a 21” TV were there for the world to see.

The "Wooo! moments" are what drive me to get up each day. So if you find yourself having fewer and fewer of these moments whilst at work...

Ask yourself why?

Work does dominate and validate many of our lives, so you might as well enjoy what you are doing. But please don't moan to me about your job. Do something about it.

Please, ensure that you do. Life is too short.
Thanks for reading.
Lee.

Saturday, June 14, 2008

The lunchtime effect and an insane piece of Job’s Worth.

The other day I had to visit the immigration department to renew my daughters’ residency visas.

This is a process that you have to repeat every time you renew your main passport, because if you want to re-enter the country you will definitely find it useful to have this little slip indicating that you are a legal alien tucked up nicely in there somewhere.

The process is quite simple. You bring your old and new passports, fill out a form and pay the fee.

Simple!!!

A short time later (minutes or hours) you leave feeling robbed but also happy in the knowledge that you’re able to travel to and from your adopted homeland.

The key, as you all know, is to avoid the queue, or at least pick the day when the most counters are open. We were going to the hospital as my wife had an appointment for one of her ailments, so the time we had available in between the school runs and the appointment was basically lunchtime.

Or to put it another way: rush hour. Usually when you have to go somewhere on a time constraint you will pick the bad day. Well, for some reason it was empty on this occasion. We were through to the application triage officers within minutes. These guys check the forms and provide assistance before you get to a case officer.

This is obviously to avoid you waiting for a period of time only to find out that you have completed the wrong form or, worse still, used the wrong colour of pen.

At this stage I leant over the counter and enquired as to the lack of visitors. You see, I have been to the immigration department before and joined a queue that left the building. To give you another indication, at the old building there was a portable café outside to serve food and drinks...

Apparently it was just a slow day, but I was wondering whether this was a result of what I refer to as the lunchtime effect. I am sure I am not the only one out there who thinks this way (perhaps it's because I am an IT guy). Why was this room empty? Was it because others had assumed that lunchtime would be busy, and so avoided the 'so called' busy period when there are more people and fewer staff?

Actually, who cares? I got the visas sorted in record time, but I am grateful to all those who were considerate enough to think of the lunchtime effect.

But the jobsworth moment is certainly worth writing about, for one reason alone: I am adamant that the person who came up with this rule was not an IT guy, as there is no binary representation of what I witnessed. No IT guy in the world could have come up with an answer other than 0 or 1 (on or off). And this, to me, is 5.66645645 and fifteen sixteenths.

As we were applying for three visas, the cost was $100.00 per application. This makes sense, I guess. Until you hear the triage officer ask, "Are either of you two (my wife and I) applying for the visa also?"

Our answer this time around was "No", because our passports are on 10-year renewal intervals and the kids' are on 5. "Shame," was the response. She then continued, "Because if one of you, i.e. the principal applicants, were applying as well, we could do this as one application with 3 dependants and therefore you would only be charged a one-off fee of $100.00."

So the logic is this: do the extra two applications and they will produce 5 visas instead of 3, key in details for 5 people and not 3, print, remove and secure 5 visas in the passports and not 3 - and all of that for one price.

But because a principal applicant isn't applying, we need to treat it as 3 separate applications!

If someone can shed any light on this I would be grateful. Until then I am proud to call myself a Software Development Professional. I certainly wouldn't want to explain the aforementioned rule for a living or associate my name with inventing this process.

Thanks for reading.
Lee.

Tuesday, June 10, 2008

Knowledge capture & use in technical support communities - Part 1

This three-part article is adapted from one I wrote almost 5 years ago when much of what you will read about was fresh in my mind. This adaptation addresses only the passage of time and some points of style and meaning for a wide audience.

Whilst software development is the subject of this blog, let us not forget those who (typically in large organisations) support the developers and others.

The nature of technical support communities.

Technical communities come in many forms, be they design teams, development teams or support teams.

Whilst design and development teams are largely about the creation process, they still have many day-to-day activities which are defined and repeatable. Support teams, although fulfilling an entirely different role, often have to create on a very short-term basis. So it can be seen that the different types of teams have similar requirements.

However, the support team seems, most often, to be the one to get out of control. The difference is that the support team is always working on a short time frame. In addition, support teams often become involved in project work and this adds to the complexity of the day-to-day activities, as the time frames are shortened still more.

Most often, you will find that staff in a support team are very good at what they do - they have to be to survive. Unfortunately, the higher the skill of the staff, the more reliant you are on those staff to keep the systems running. It is a difficult and time-consuming option to bring 'green' members into the team.

How many support managers have not recognised that documentation is a key part to the support process? I would wager very few. Fewer still, I propose, have succeeded in completing the documentation requirements within their team and reaped the kinds of benefits they were expecting.

Documentation, to the 'tech', is a four-letter word. I, myself, recall asking the question "Do you want me to document it, or do it?" Simple economies prevent the techs from having enough time to complete the documentation task and many welcome this excuse not to do it.

Another trait of support teams is the experts. In virtually any support team, there will be experts in various disciplines. Most often, however, these experts are relied upon to provide most of the resource in fixing problems in their area of expertise when they should, in fact, be called upon to share their knowledge.

Shared knowledge is a powerful tool. Experts will always be needed when particularly difficult or unusual situations occur, but the team as a whole should be able to leverage the experience to improve task turnaround times through a more even spread of the load.

Knowledge transfer

It has been documented in studies that the best way to learn something is to have an expert stand over your shoulder while you go 'hands on'. The reality of the situation in front of the learner, coupled with specific and pertinent comments or instructions from the expert gives the learner an experience often indistinguishable from the real thing. The learner also has the opportunity to ask direct questions in the context of what they are doing. Book learning, on the other hand, can only go so far with static examples and predetermined situations.

Perhaps the most important aspect of 'over-the-shoulder' learning, however, is that the expert is unlikely to simply recite steps by rote. There will be an accompanying commentary and usually a significant amount of reasoning on why things are done that way. This is very important in equipping the learner for when things do not go to plan.

Learning the steps of a process by heart is well and good when the process works. Most often, however, processes do not cover all possibilities and the rote-learner of the steps is going to come unstuck when an unforeseen, or simply undocumented situation arises. Unless the learner understands why they are taking the steps and what they should be achieving, they are almost as much 'in the dark' as prior to learning the steps.

Having knowledge about the nature of the process and the goings on under the covers helps get through many small deviations from the norm and also helps in issue resolution, as the learner is able to return to the expert with an hypothesis, or at least having done some basic checks suggested by the nature of the operation.

The key issue with this type of knowledge transfer is that, in the majority of cases, the expert is already overworked and has no time to spend standing over shoulders.

A secondary issue is that the expert may have to impart their knowledge, over time, to a number of different people, and this is inefficient.

The 'Virtual Expert'

From what has been discussed so far, it is clear that expert knowledge is required, but that tying up the expert in this process is seen as unproductive in most situations. We cannot get away from requiring time from the expert, but we can minimise this time and capitalise on it by recording the knowledge in the right way.

In part 2 of this article I will go into methods for capturing this knowledge in the most effective way.

Thanks for reading.
Allister.

Monday, June 9, 2008

By way of introduction...

Greetings fellow software developers, this is not Lee speaking! My name is Allister Jenks and I am sure some of you who know Lee will know me as well. Those who don't know me may yet have read my comments on Lee's posts - under the identity of "zkarj".

Lee has graciously allowed me to contribute to his blog and I hope I can bring you the same levels of insight and analysis that Lee has led off with. I look forward to your feedback too.

Sunday, June 8, 2008

My first blog alliance

It gives me great pleasure to introduce an ex-colleague of mine called 'Zkarj'. He has worked with IBM Power Systems (System i, i5, AS400 etc) for many many years and is a true advocate of the platform in general.

He has written several articles/rants over the years and is published online.

Zkarj has asked if I would like to accept posts from him on this blog. It is my first offer of co-authorship since the blog began, but one I certainly won't be turning down, as he has lots of very interesting things to say about many of the topics that I blog about. You can see that by the number of comments I get when one of my new posts hits his RSS feed.

I hope you enjoy reading his material as much as I enjoyed working with him.

Thanks for reading.
Lee.