
Thursday, February 9, 2017

2E Code Review - pet hates - part I


Today I thought I'd get a few things off my chest. 

Whenever I review code, look at old code or have the job of maintaining code (not my favourite part of the job by a long shot), I am often left dumbfounded. I see the same old mistakes being made regardless of where I have worked. You will often hear me saying, "Whoever wrote this should be shot!!".

The most annoying part is that in many places the developers agree about the best way to code, but quickly reach for excuses like "I don't have enough time" or "Well it's done now, so don't worry" to avoid coding properly. My experience tells me it is often the most experienced developer(s) who are the hardest to convince of solid development standards, with the terms "old dog" and "new tricks" springing to mind....

IMHO the difference between a run-of-the-mill programmer (I've worked with plenty of these) and a good one (fewer, but they are out there) is adherence to the little details that make long-term maintainability of your code as easy as it should be. This is especially true considering the inevitable change and enhancement required during an application's growth and evolution. I have espoused the standard development quote that 90% of an application's life cycle is maintenance numerous times before.

Anyhow, my post today is a list of pet hates that I see when people are "unprofessional", "selfish", "crude" or "lazy" with their 2E coding.

1. Legacy commented out code.

I recently worked on some maintenance and had to work through some action diagram code. Using the Find Services option within 2E, I quickly became frustrated that the majority of the usages were commented out.

Worse still, there were developer comments (from 2002) stating that the code was commented out, and I know that this area has been enhanced many times since. There is simply no reason to keep such old code. The developer concerned is normally quite good but has a couple of really bad habits like this one.

Here is a snippet of code from an old function (screen print kindly shown with permission).  Note, I have redacted any client or developer or brand comments (hence some empty white space).

 

I am all for commenting out code as part of debugging, rapid prototyping, unit testing and quick wins (hot fixes) where you are not quite sure of the change you are making, but please annotate when and why. However, I am aware that some of this code was moved to its own program(s), so IMHO it should simply have been removed.

Associated hates:
  • Having to weed through the chaff to get to the code to change.
  • Old code will not have been maintained, so (if you ever decided to uncomment it) it will often no longer be fit for purpose. Why leave it there?
  • Extra impact analysis (especially for internal functions). 
  • Confusion with the impact analysis usage.  A future post is planned for this.
  • It is better to take a version of the function instead.
  • Commenting out code doesn't change the timestamp for the AD line, so we don't know when it was commented out.
Tip: Comment out code sparingly, and preferably not at all. Be confident with your code and solution. After you have completed the work (and tested it), if you are 100% happy with the results, revisit the action diagram and remove any commented out code. Commented out code is a maintenance burden and others will not understand why it is there.

Here is the same code with the bad commented out code and legacy comments removed.

Still not perfectly structured but a whole lot more readable.  Perhaps CA can add a hide/show commented out code option for the action diagrammer.

2. "GET All fields" RTVOBJ not doing as described.

Every 2E developer has written the standard RTVOBJ that returns the full record. Yes, we have all forgotten the *MOVE ALL at some point, but my biggest annoyance is when people don't revisit these functions when the underlying file structure changes.

Imagine the "*GET", "GET Record", "Get All", "RTV All Fields" function without the new fields as parameters.


The next developer comes along and, lo and behold, there are a few fields missing. Then they create a new one called Get All (New) and probably flag the old one with some kind of encoding system like DNU for Do Not Use etc.

Associated hates:
  • Numerous versions of the same type of function. Too many choices.
  • Description vs reality can be misleading and frustrating.

Tip: If you have to change the file, you may as well change the "Full Record" type function at the same time, as you have to regenerate everything anyhow.
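2E generates the underlying code, of course, but the pitfall translates to any language. Here is a minimal sketch in Python (the file, field and function names are invented for illustration, not taken from the screen prints above) of a "get all fields" style function that has silently fallen behind its file definition:

from dataclasses import dataclass

@dataclass
class Customer:                 # stand-in for the database file
    code: str
    name: str
    credit_limit: float
    email: str = ""             # field added to the file later

def get_all_customer_fields(record: Customer) -> dict:
    # Claims to return the full record, but was never revisited after
    # 'email' was added, so every caller silently misses the new field.
    return {
        "code": record.code,
        "name": record.name,
        "credit_limit": record.credit_limit,
        # 'email' is missing, so the function no longer does as described
    }

The fix is exactly the tip above: when the file changes, the "full record" function changes with it.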

3. Using all 9 parameter definition lines for a function's parameters.

In the old days, when we only had *FIELD, Access Path(s) or Structure files for defining parameters, it could sometimes get a little busy and we'd fill the 9 lines, leaving limited options for the next developer. We shouldn't create structure files just for the sake of it; after all, their primary purpose was consistency of repeating data groups like audit stamps etc., not easier passing of parameters.

Since version 4.0 (over 20 years ago) we've been able to define parameter arrays. There really is no excuse now for taking up the last line of a function's parameter block, unless you think "I don't have time" or "I'll do it when we get to 10" is a valid reason.

Tip: When maintaining a function where, say, 6 or more parameter block lines are already used, consider refactoring with a parameter array.

I still think that even using parameter arrays is a bastardised solution to this problem and that 2E should simply have had more than 9 lines....but it is the lesser of two evils.
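The same smell shows up in any language as a long parameter list, and the cure is the same idea as a parameter array: pass one structured group of fields rather than many individual ones. A loose sketch in Python (the names are invented for illustration):

from dataclasses import dataclass

# Before: every new field means another individual parameter,
# and eventually there is no room left to grow.
def create_order(customer_code, order_date, currency, discount,
                 rep_code, warehouse, priority, carrier, notes):
    ...

# After: one structured parameter (the moral equivalent of a
# parameter array) leaves room to grow without touching every caller.
@dataclass
class OrderDetails:
    customer_code: str
    order_date: str
    currency: str
    discount: float = 0.0
    rep_code: str = ""
    warehouse: str = ""
    priority: int = 5
    carrier: str = ""
    notes: str = ""

def create_order(details: OrderDetails):
    ...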

This is the tip of the iceberg.  Plenty more to follow I am sure.

Thanks for reading.
Lee.

Monday, July 14, 2008

Knowledge capture & use in technical support communities - Part 3

In Part 2 I looked at how to capture expert knowledge. In this final instalment, I will offer suggestions on where to store that knowledge, some differing uses of these concepts and offer some final insights into why I wrote the original version of this article.

Once again, I lead off with a small overlap to set the scene.

Electronic storage for fast access

Following the structuring process above introduces one significant disadvantage in a paper-based documentation repository. Frequent referencing to other documents causes the reader to flip pages or have multiple documents arranged on the desk in order to complete a single process.

Simply storing these documents electronically, such as Microsoft Word files in a LAN folder, is not enough to deal with this problem, as it merely shifts the emphasis to clicking the mouse constantly and still does not help out with the comprehension of the process as a whole.

The answer is hyper-linking. Every single reference to another document should be turned into a hyper-link to that document. This guarantees simple, fast, unfettered access to the linked information and, in most cases, an easy return path to the original document.

It is important to choose the document delivery technology with this method of usage in mind. Although MS Word provides inter-document hyper-linking capabilities and is an excellent document editor, it leaves a lot to be desired as a document reader. Better choices are Lotus Notes or HTML. Lotus Notes serves as both editor and fast viewer. HTML requires a separate editor (and there are many to choose from), but has a ubiquitous interface if you are considering a large audience for the documentation.

Although I would recommend Lotus Notes as an excellent delivery mechanism, it is important to avoid the use of Notes 'Views'. Rather a default, or 'home', document should be launched easily from a bookmark and all navigation from that point should be via hyper-linking. This gives immediacy to the navigation by making it all point-and-click. This typically avoids the need to work with Notes' view characteristics like 'twisties' and scrolling. (Note that both of these can be used within a Notes document if desired.)
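Whatever the delivery tool, the hyper-linking principle can be automated. As a rough sketch of how this could be done outside of Notes, with plain HTML files (the document titles and file names below are invented purely for illustration), a small script could turn every mention of a known document title into a link:

from pathlib import Path

# Map of document titles to their files (invented examples).
documents = {
    "Restoring a test environment": "restore-test-env.html",
    "Promoting a change to production": "promote-change.html",
}

def link_references(html: str) -> str:
    # Naive sketch: assumes a title only ever appears as plain text,
    # and turns each mention into a hyperlink to the referenced document.
    for title, filename in documents.items():
        html = html.replace(title, f'<a href="{filename}">{title}</a>')
    return html

for page in Path("docs").glob("*.html"):
    page.write_text(link_references(page.read_text()))

However it is achieved, the aim is the same: the reader clicks straight through to the referenced information instead of hunting for it.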

Tech-centric versus customer-centric

At the beginning of this series, I referred to the workings of a technical support team. For such a team, there are two key audiences for a documentation repository following the guidelines described in this document.

Perhaps the most important audience, in terms of covering exposure, is the team itself. Sharing the expert knowledge around the team ensures that individual team members do not become indispensable.

However, often the single biggest gain to be made from good documentation is providing the team's customers with 'self-help' information. Such documentation will necessarily be pitched at a much lower audience level than internal documentation and will therefore take more time to prepare, but the ability to point customers to a self-help document for a common problem can save the team enormous amounts of time. This time could be used to improve the internal documentation, and then you can reap all of the benefits.

The author's experience

In writing these articles, I am speaking from experience. I have built a customer-centric and a tech-centric database along these lines and they have been very successful. Both were built as Lotus Notes databases.

The first one built was the customer-centric repository, which was deployed to up to 150 developers and testers. This originally came out of a FAQ document introduced during a pilot implementation of a source configuration management product. The documents were listed in a Notes view in broad, simple categories. Each document title was phrased as a question, the answer to which lay inside. Many of the documents referred to others for pre-requisite information.

My team used this database to help keep the load down in our helpdesk-like operation. I estimate that once the database had matured, over 90% of customers who were referred to one of the documents did not need to make further contact with us for that issue. It is clear from the numbers and the nature of calls we received that many customers went straight to this repository and never needed to call us.

The second database was tech-centric and used all of the principles described above. Although the closure of the business meant this repository never quite made it to 'complete' status, the documents that were created (over 60) proved the points I have made above. New members in the team who were technically competent (that's why we hired them) but who had scant knowledge of our processes were able to perform fairly complex tasks purely from the information contained in these documents. Although it sometimes took them orders of magnitude longer to perform a task than one of the experts, the fact remains that they successfully completed the task with little or no input from the experts in person.

Reflection

In the matter of conversational language being more effective than rote instructions, I can recall being given two articles to read in preparation for a 'knowledge workshop' (that unfortunately never happened).

After many years now, I can only remember what one of those documents said. The one I remember spoke about how conversational language was much more effective than brief and often brusque language that is found in much technical documentation. It made the link to the over-the-shoulder technique and postulated some of the thinking that I have expounded above.

Some time well after I had created the customer-centric database I came across these two documents and re-read them. I immediately tied the conversational language ideas to what had happened in our database and saw that it was true. After re-reading the documents, I also realised why I had remembered this document and not the other. This one was very conversational, in contrast to the much more formal (and fact-and-figure quoting) style in the other document.

To this day, I do not remember what the other document said. Perhaps sometime I will come across the pair again and the cycle will repeat.

Thanks for reading.
Allister.

Monday, June 30, 2008

Knowledge capture & use in technical support communities - Part 2

In Part 1 I discussed the problems facing the technical support team with overworked experts and a need to transfer their knowledge as efficiently as possible.

In Part 2 I will discuss how to successfully capture and store this knowledge in an efficient and, above all, useful way. I'll lead off with a brief overlap from last time as a reminder of where we got to.

The 'Virtual Expert'

From what has been discussed so far, it is clear that expert knowledge is required, but that tying up the expert in this process is seen as unproductive in the current climate. We cannot get away from requiring time from the expert, but we can minimise this time and capitalise on it by recording the knowledge in the right way.

The answer lies in recording the expert knowledge (on paper or, more usefully, electronically - see later) in such a way that it is as close as possible to the over-the-shoulder commentary.

There is often still a need to use numbered steps when accomplishing a task. Such steps provide structure and sequence and help with mental tracking when performing the task. There is no reason, however, why each step cannot contain more than simple 'input, output' or 'action, reaction' type information.

For maximum benefit, each step should be written in conversational language and explain what the user is doing, why they are doing it, what the expected outcome should be and at least make reference to any unusual, but known, variations.

Furthermore, before any of the steps, there should be an introductory section which describes why the user would perform the task and what pre-requisites there may be, and which defines terms, systems and the like. After the final step, make mention of any further tasks that may be a logical progression from the task described, but which do not form part of this process.

Don't take anything for granted

Whilst we are talking about capturing expert knowledge, it is important not to lose sight of the basics. Any documentation is devalued if it makes too many assumptions. In creating a documentation repository, an audience level should be decided - such as 'technically competent', or 'beginner' - and all documents should be written for that lowest common denominator. It is easier for a more expert user to skim over known material than it can be for a new person to work out the undocumented basics.

It is important to include examples in the documentation. Where possible, have the example show the most common scenario, as it is most likely that staff new to the task will use the examples. It is also worth giving additional examples if there are significant variations in a step. Providing examples helps the user to get closer to the over-the-shoulder situation.

The ultimate test for the documentation is to give the process to a person who is at this 'lowest level' and have them perform the task. You will be surprised by some of the information you have taken for granted in your early drafts. I know I was.

Structuring the documentation

For ease of maintenance, it is important to only ever store a piece of information in one place. To help achieve this structure, it is useful to allow for two document types in the repository - reference documents and process documents.

Process documents contain steps describing how to perform a task. Reference documents contain (mostly) static information that supports one or more processes.

It is often necessary to refer to tables of information (such as a list of files, describing their usage) from more than one process document. By separating this type of information into a reference document, it can be referred to by multiple process documents without increasing the maintenance burden through multiple copies. Additionally, when the table requires maintenance, it is easier to locate (residing under its own title) and the maintenance can be performed without danger of corrupting the process documents. When properly structured, maintenance of the reference document can be accomplished without knowledge of the referring process documents.

Whilst reference documents tend to represent pure data, it is still important to keep the conversational language in mind. There may be naming conventions or other conventions which are being followed for the data and it is important to note this in the reference document to complete the picture for the user of the information, and equally importantly for the maintainer.

It is also beneficial to factor out sub-processes into separate process documents and refer to them from the major process documents. This is of value where a sub-process is part of more than one major process.

The major benefit of having information in only one place is realised when errors are amended or updates are applied. These have to be done in only one location and all related processes are automatically catered for, as they simply reference to this single occurrence.

Capturing is understanding

The process of capturing information is time-consuming and is best not left to the individual experts. Remember that they don't have that much time. Also, too many authors can devalue the repository through differing styles and levels of language.

The best solution to this is to have a single person (or possibly two or three) to build the documentation repository. This co-ordinator is then responsible for collation, setting style and keeping the language consistent. This person should not be expected to author all of the documents, but must be able to understand at least the broad concepts involved in order to ensure that appropriate structure is followed.

Each expert should be expected to provide a draft of the process or reference data, in a form approaching the final requirement. In some cases, where the co-ordinator's knowledge is good enough, they may author the document, but it should always be checked by the relevant expert.

Electronic storage for fast access

Following the structuring process above introduces one significant disadvantage in a paper-based documentation repository. Frequent referencing to other documents causes the reader to flip pages or have multiple documents arranged on the desk in order to complete a single process.

In Part 3, I will discuss some real world solutions to electronically storing, maintaining and delivering the captured knowledge.

Thanks for reading.
Allister.