
Tuesday, July 24, 2018

The magic roundabout....


“Computer Associates (CA), where products go to die!”

If you were around in the late 90s and early noughties, the statement above was industry standard. After a brief spell as the COOL range under Sterling Software, prior to the CA acquisition in 2000, the tools known as Synon (now CA 2E) and Obsydian/Plex (now CA Plex) have been maintained and supported by CA.

A correction to the above: in the early years CA did in fact innovate with the tools quite frequently, with good features and enhancements. CA was responsible for the introduction of the Web Option, triggers, the RPG ILE generator, numerous SQL updates and web services for 2E, as well as the .NET generator for CA Plex (no small feat), web service publication and consumption, and keeping up with the myriad of technology platform refreshes that Plex required.

All in all, a reasonable job. 

Perhaps a 6 out of 10.

Okay, 5.

The point being that these products didn’t go to CA to die. However, in recent years, with development budgets reduced and key personnel leaving, the rate of change has stalled significantly.  So much so that nowadays the release highlights are items that would have been reserved for minor features or even bug fixes in years gone by.

Whilst the tools haven’t died, they are clearly in maintenance mode.  CA moved this group of products to sustaining engineering.  This carries a negative connotation while a product is in decline, and I feel that other low-code options with better target platform coverage have emerged into a space once dominated by CASE and code generation tooling.

Last week Broadcom announced a cash buyout of CA Technologies for over $18 billion.
Broadcom doesn’t do software; they are a semiconductor business, so what does CA provide them?

  1. They may be diversifying their offerings and product range.  Perhaps there are some key products in the CA range that assist in their growth or CA has strong alliances with certain business verticals or a client base the parent organisation may wish to gain access to.
  2. Or this is purely a financial decision.  They may have too much cash to burn and need to spend it quickly.  They buy a solid company with a long and attractive trailing maintenance revenue stream and secure long-term (almost guaranteed) recurring revenue.  Most likely this means they won’t need to pay any corporation tax for the next year or two as they assimilate this monster of a business.

Perhaps a mix of both but my money is on the second option and that this is merely a financially driven strategic purchase.  

There certainly isn’t any institutional importance for the CA development tools business, i.e. CA 2E, CA Plex and CA Gen.  Although these areas are likely to show very high ROI (cost vs revenue) on the reporting charts, I very much doubt they’ll get any more focus than they are currently getting.

Now it would appear, that the final resting place for these (once wonderful and genius) tools is going to be Broadcom.  The new statement being “Broadcom, a place where CA Technologies development tools go to die!”

STOP THE PRESS!!!!!!!

Hopefully not. I hope that the residual value, and the opportunities available in a safe pair of hands, i.e. a company with a low-code focus, make it possible to recapture the essence of CASE and reinvigorate these tools.

Probability?: < 10% if Broadcom don’t want to relinquish these tools.

Lee’s take out!

Sadly, it’s probably time to work out what the next big thing is… These tools are now compliance/maintenance focused (at best) and will be stabilised (cease to be supported) as soon as the revenue trail drops below x, whatever x is.  

x for CA or Broadcom is far higher than x for a passionate low-code only vendor.  I beg Broadcom to review the business units at CA and seek a buyer (at a fair price) so this technology has a chance to thrive once more.  These tools practically invented low-code.  In my eyes they are 20 years ahead of the rest.

Thanks for reading.

p.s. I wonder what the new name will be....Broadcom Plex doesn't have that good a ring to it.....




Monday, July 28, 2008

The context of programming

Recent events in my place of work have led me to ponder the concept of programming context once again. I suspect it is a pervasive concept, as I seem to come across it on a regular basis in quite different circumstances. Let me explain.

If I am asked to write a program that accepts two numbers and returns a third number, being the product of the two, then there is not a lot more I need to know. Perhaps knowing the possible range of input numbers would be useful, but really this is a pure mathematical problem and has no context.

If I am asked to write a program that accepts two numbers and returns a third number - the number of residential addresses in a database that fall between those two numbers - then there is quite a bit more I need to know. I need to know whether just street numbers alone should be checked, or whether street names should be included (5th Avenue, for example). Even within street numbers alone, what about flat numbers? It's a bit more complex than the first example as there is a context. I.e. what are we actually trying to achieve here?

Now in a third example, I am asked to write a program that accepts two numbers (x and y) and returns a third number which is the number of active users who have been logged in between x hours and y hours. Again, now the context is complex. How do I define a "logged in user"? Do I define one interactive session as one user, or do I need to reduce this to unique users, because some may be logged in more than once? What about "special" users such as system supplied IDs? Should they all be counted, none, or only some?
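The contrast between the first and third examples can be sketched in code. This is a minimal illustration only: the session records, the fixed clock and the system-ID list are all invented for the example, and each filtering rule in the second function is a business decision that must come from the requirements, not from the programmer's guess.

```python
from datetime import datetime, timedelta

# Example 1: no context -- a pure function, nothing else to know.
def product(a, b):
    return a * b

# Example 3: context everywhere. Hypothetical session records; deduplication
# and the treatment of system IDs are both context questions.
def count_logged_in_users(sessions, x_hours, y_hours, system_ids=()):
    """Count unique users logged in between x and y hours."""
    now = datetime(2008, 7, 28, 12, 0)  # fixed 'now' so the example is deterministic
    users = set()
    for user, login_time in sessions:
        if user in system_ids:          # do system IDs count? A context question.
            continue
        hours = (now - login_time) / timedelta(hours=1)
        if x_hours <= hours <= y_hours:
            users.add(user)             # a set deduplicates multiple sessions
    return len(users)

sessions = [
    ("alice", datetime(2008, 7, 28, 9, 0)),    # logged in 3 hours
    ("alice", datetime(2008, 7, 28, 10, 0)),   # 2 hours, same user again
    ("bob",   datetime(2008, 7, 28, 11, 30)),  # 0.5 hours
    ("QSYS",  datetime(2008, 7, 28, 8, 0)),    # 4 hours, a "special" system ID
]
print(product(6, 7))                                               # 42
print(count_logged_in_users(sessions, 1, 4, system_ids={"QSYS"}))  # 1 (just alice)
```

Note that every answer other than 42 above depends on decisions the specification must make for you.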

But the third example is even more complex than I have shown so far. Consider that this function needs to work in a function test environment, in an integrated test environment, and in production. There are some processes that occur only in production, some only in test and some on both. Will this affect the outcome? Is testing on the test system going to be good enough to know it works in production?

Hang on a minute - aren't we talking about system programming? Well, maybe yes and maybe no. If this program is needed to manage software licensing, then it's a system program. But, if it is needed to manage the number of customer service representatives assigned to different parts of the call centre, then no it is not system programming. If it is being used to achieve load balancing for application service jobs then it could go one way or the other.

Now that was a somewhat contrived example, but it helps me to illustrate my point. In all three cases, take two numbers and return a third. The first example I would expect absolutely any programmer to be able to achieve. The second example I would expect any programmer to be able to achieve if complete requirements are provided. If the problem is only defined as I described, then you would need an analyst programmer. For the third example, who would you give the job to, generically speaking?

This is where I see a massive gap. I, myself, have been fortunate to have been involved in both application and systems programming fairly extensively and even if I say so myself I think I'm pretty good at covering off the sorts of issues described above. It also means I am frequently seeing other programmers who are failing to account for the "system" level factors.

In a specific recent case, a developer insisted that my team (who are a development & test support team) replace one version of a program with another so that it 'behaved like production'. That should have been the first red flag. (I was not involved at this stage so I don't know whether I would have caught this at the start.) Why was the test system behaving differently to production?

Well, the developer got his wish and proceeded to begin to make his related code work. Meanwhile, large numbers of other people were tripping over the problems introduced. After several days of analysing the problems we concluded we had to put things back the way they were. To quote Spock - "The needs of the many far outweigh the needs of the few." This programmer was looking in far too narrow a context in defining what needed to be done. He had no concept of the roles this particular program was playing, nor the large number of dependencies it had. For instance, an automated regression testing suite completely failed because of the change.

But perhaps the most spectacular case of lack of context that I have ever encountered was in a previous role.

The product in question was enterprise software being used all around the world, and it was incredibly complex. Customers had requested the ability to use off-the-shelf reporting tools (such as Crystal Reports) to create their own reports. The development organisation realised this meant less work on such things for us and considered this a good idea - but dangerous. Great, they can write their own reports; but how do you let them into a massive, complex database without (a) massive confusion and (b) the opportunity to corrupt it?

So a plan was hatched to deliver a new library (for self containment) of logical files (views) which would collate the data into meaningful constructs and, importantly, be read-only. My team (again in development & test support) figured out how to deal with this new library for the purposes of the testing done on it. For the most part we just manually created and destroyed these libraries as required and used some of our own toolset which, importantly, is not delivered to customers.

At some point I got to thinking: how are we going to deliver this? The initial response I got from the designer was "on a tape/CD with the rest of it." To cut a long story short, I soon proved that it is impossible to ship a library full of logical files. Period. Can't be done. I took this information back to the designer, along with a rough sketch design of a simple tool which could alleviate the problem, and also be useful within the development shop.

The response? "We didn't budget for that." * Sigh *.

In the end, I wrote a quick (hack) version of that tool on the day we packaged the software. Some months later someone contacted me saying that there was a bug in my code. I sent them to the designer to have it sorted out.

Thanks for reading.
Allister.

Monday, May 12, 2008

The Great 3GL v 4GL debate - Part II

This is part II of a trilogy of articles regarding the usage and evolution of software development languages. Part I can be found here.

So what are the benefits or otherwise of using a 3GL over a 4GL, and vice versa? For me it certainly depends on all the usual factors that drive any technology decision: cost of product, support, flexibility, the human factor, tool lifecycle, vendor direction and target platforms being a few that come to mind instantly.

The Pros of a 3GL

Embedded or mission-critical applications like air traffic control systems are generally handcrafted and more suited to a 3GL environment, as are operating systems, 4GL tools themselves (debatable), communications, hardware drivers and generally non-database applications. As the developers have access to all the APIs and are that step closer to the CPU, they generally have wider usage opportunities.

Accessibility to a wider developer pool. Whilst there are probably thousands of developers for your chosen 4GL, possibly even tens of thousands, these tools simply do not have the numbers associated with mainstream development languages and IDEs. There are an estimated 4 to 5 million developers following the evolution of Java, and no doubt Microsoft can boast even more for its most popular products. That said, of course, this also means that it is harder to find a guru within that skills ocean, not to mention filtering out those who have spent 15 minutes in the IDE and now claim some form of exposure on their curriculum vitae.

3GLs are quicker to react to emerging markets and development trends. Generally, the suppliers of these 3GL tools are inventing the future. They don’t often agree with each other, but they certainly have the advantage over the 4GL creator, who has to wait and see which technologies actually mature beyond the marketing hype and into mainstream best practice before committing to provide code generation for that area.

Flexibility. Languages at the 3GL level, depending on the targeted platform, have virtually no restrictions on the type of application that can be written or how it is written. This means that for applications where speed of performance is the critical measure of success, it is most likely that a 4GL will fall short of the handwritten, targeted code.

The Pros of a 4GL

Business-rules-focused development. Once you have learnt the code generator’s quirks, you are in a situation where you mainly tackle your development from the business domain and allow the code generator to handle the technical implementation. With this comes a significant reduction in the amount of time required to build an application. Many will say that there are standards and frameworks that help with 3GL development. This is actually quite true; but also be aware that the code generator vendor will be skilled with the major best practices and will write more consistent code. Some may argue that the code is not as neat as code written by a good developer, and in that regard I quite agree. I will say that the underlying code will be written in the same way and style; therefore, after a while all the developers will become conversant in how the code is generated, that is, if they want or need to understand. (See below.)

Complexity avoidance. A 4GL will protect the majority of the developers using the tools from the underlying complexities of the generated language. When you couple this with the ability to influence how the code is generated using patterns, and the ability to take the design model from the 4GL and transform it into other language code, your business logic can truly be ported from platform to platform as trends become reality and your technical needs change.

Impact analysis. For me this is one of the key features of using a 4GL tool. Generally these tools use a database to store design and program artefacts that are then transformed into the language code. Every reference for every field, file/table, access path/index/view and function/object/program is stored in the repository, and a developer can track each and every item through to where and how it is used. This is a powerful feature that cannot be overlooked versus manual reviewing of language source files.
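As an illustration of what the repository buys you, here is a toy where-used index. The artefact names and the flat dictionary are entirely hypothetical; a real 2E or Plex model database is far richer, but the principle of inverting stored references is the same.

```python
# A toy model of a 4GL repository: every artefact records which other
# artefacts it references. Names are invented purely for illustration.
references = {
    "Customer.Edit":   ["Customer", "CustomerByName", "Field.CustomerName"],
    "Customer.Report": ["Customer", "Field.CustomerName", "Field.CreditLimit"],
    "Order.Edit":      ["Order", "Field.CustomerName"],
}

def where_used(artefact):
    """Invert the reference index: which functions would a change impact?"""
    return sorted(f for f, refs in references.items() if artefact in refs)

# Changing the domain of CustomerName flags every affected function at once,
# instead of relying on a manual trawl through source files.
print(where_used("Field.CustomerName"))
# ['Customer.Edit', 'Customer.Report', 'Order.Edit']
```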

Trusting the generator. When I train people to use CA 2E or CA Plex, the defining moment for gauging a developer’s progress and understanding is the day they learn to trust the generator. As with any tool, a badly constructed function in 2E, for example, can create badly generated and non-compilable code. Once the developer realises that it is generally their fault if a generation of code fails, they’re ready to move forward. I have seen far too many 3GL programmers migrate to the 4GL paradigm only to get bogged down in the details of the code produced, yet they will trust the compiler without hesitation. The ability to change a shared function or the domain of a field, apply detailed automated impact analysis to identify all affected programs, and then press a button to regenerate and compile all affected programs and database files is a very powerful feature.

The Cons of a 3GL

Slower, more expensive development. The very nature and size of modern 3GL languages and their flexibility is also their Achilles’ heel, as there are so many ways to resolve a programming issue, with literally thousands of opinions and many directions. In a nutshell, for certain types of applications, particularly those that involve extensive usage of a database, the ROI for using a 3GL versus a 4GL is very poor indeed. To counter some of the cost debate, 4GL tools are generally more expensive to purchase; but the most expensive item in any development team is the human, even if it has been outsourced to an emerging development powerhouse.

You will spend more time debugging the application. A very good ex-colleague of mine once said, “If the art of debugging is the removal of bugs from programs, then programming must be the art of putting them there in the first place.” Because we are relying on the developer to code all aspects of the application, there are likely to be some issues along the way. It is generally the developer’s prerogative to deal with memory leaks and usage in languages like Java or C++, but with a 4GL it would be the code generator’s responsibility.

Complexity. Once again, due to the size of the languages and their broad reach, it is unlikely that you will find developers who know all the aspects required to complete an application. Your staffing needs are generally much higher, and the learning curve for the 3GL is very significant indeed, as the developers must understand many technical as well as business problems.

The Cons of a 4GL

Vendor lock-in. Depending on the vendor this can be quite a significant issue. If the vendor is too slow to react to emerging technologies, you will find yourself with a heterogeneous development environment, and you will lose many of the advantages referred to above with regard to complexity protection and highly detailed impact analysis. Worse still, your vendor may well decide to stop production of the 4GL or choose other directions as the options for technology deployment balloon. These tools are often criticised as proprietary.

Flexibility. There will be limitations on the scope of applications that can be created by a single 4GL. There are of course others that target different platforms and purposes. Their flexibility is often limited by the lowest common denominator of the languages they have to support and generate code for: a generator that generates code for three different platforms may have to limit what can be done in one language due to limitations in another. For example, different languages may have differing maximum field lengths, meaning that for generic code construction in the 4GL, platforms x and y can only size fields to the limits of platform z.
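The lowest-common-denominator rule is easy to sketch. The per-platform limits below are invented purely for illustration; the point is only that a cross-platform generator must adopt the strictest limit among all its targets.

```python
# Hypothetical maximum field lengths per target language; the numbers are
# illustrative only, not real limits of any product.
max_field_length = {"rpg": 32766, "java": 65535, "cobol": 9999}

# A generator targeting all three can only offer the strictest limit.
portable_limit = min(max_field_length.values())
print(portable_limit)  # 9999: platforms x and y are capped by platform z
```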

Source Code. Many 3GL developers will argue that the code is not user friendly, bloated and often too generic in comparison to hand-written code. This can be true of some code generators and is certainly something that needs to be considered when choosing an approach for your development.

None of the above is by any stretch of the imagination a definitive list. Given time, I believe that I could have produced a list of 20+ pros and cons for each approach.

Part III will discuss trends, fads and conclude the 3GL and 4GL debate with my own personal viewpoint.

Thanks for reading.
Lee.

Wednesday, April 30, 2008

The Great 3GL v 4GL debate - Part I

Ever since development languages were invented we have sought ways of making the development of software easier. We have attempted to do this by raising the level of abstraction at which the developer creates code, and by creating languages and tools that are closer to natural English in terms of human interaction. On the other hand, we have also added extra levels of complexity with changing hardware, communications protocols, multi-tier server deployment, runtimes, middleware, messaging technology and language politics, and I haven’t even bothered to discuss the internet.

Regarding language politics, read anywhere on the internet about the great .NET v J2EE debate, or perhaps commercial languages versus open source, and you will quickly realise that there are significant inroads to be made by IT vendors around the world. You will see an IT community that is split pretty much down the middle, although if you want my humble opinion as it currently stands, I believe that we will once again see a shift towards packaged and guaranteed software over open source, and that Microsoft will eventually win the development language tools war.

This three-part article aims to discuss the evolution (not revolution) of software development languages, with particular focus on third- and fourth-generation languages, then debate the pros and cons of these approaches, and conclude with a few comments regarding some of the repeating fads as I see them today.

It wasn’t that long ago that the typical software developer would have been aged between 35 and 60, male, probably balding (so that’s me covered), university educated and employed within those same hallowed institutional walls since passing his exams, quite ironically with his non-IT-related degree. He would have worn a white coat in the office, had bottle-bottomed glasses and a pocket full of pens, and answered to the name of geek or dork.

Well this is how Hollywood and the urban stereotype would have it.

A bit harsh if you ask me, but to be fair, they would have been fascinated by punch cards, seen value in paper tape with holes in it, and probably missed any musical revolutions of the times. There certainly would have been very few ordinary people among them, and the number of women specialising in this field would have been countable on one hand.

Now, time has moved on, as has technology, and you can’t tell an IT guy apart from your ordinary office worker. It actually amazes me that although we are making the art of software development easier, the extra layers of complexity should in theory have amounted to an increase in the number of geeky-looking guys, so much so that if lined up ten abreast, a communist regime would have been proud to show off its IT military might with these millions marching in city squares across the world. But this hasn’t happened; IT in general is now a mainstream activity, and the working environments are certainly more aligned to those of a typical office. With this mass adoption of IT skills in the workplace, I also believe that IT guys are now considered a corporate commodity, whereas 15 years ago the pay would have been relatively higher. How times are changing.

So we have worked hard to improve the scope and productivity of the average software developer. We have migrated from the punch card era to having keyboards, mice, laser pens and voice recognition input devices. We have languages that have evolved to make them more readable and understandable by a human. The days of everyone programming in assembler or other low-level machine/processor code began to change with the introduction of the 3GL languages of the day. COBOL, Fortran, RPG and BASIC would be good examples here. I am sure that at the time some people would have embraced the new paradigm as much as developers have embraced Java, or are now embracing Flex/ActionScript, Ruby on Rails or C#, as the perfect way forward. There would also have been the doubters, and I guess the split would have been no different to many of the impasses that we see reported online and in periodicals.

Still, software engineering took time.

We are improving and continue to improve 3GL languages to this very day. We now have a whole hard drive full of productivity features embedded within our integrated development environments (IDE). Features like wizards, auto code completion, and syntax auto-correction were non-existent back then, let alone globally accepted standards and minimum requirements.

I would say that any developer working 20 years ago would never have thought that freeware/open source (delete as appropriate) products like OpenOffice or Eclipse would be a reality. They could have conceived of software being given away as a loss leader for professional services, but a massive corporation like IBM giving away a product on which it spent, and to this day still spends, millions of dollars would have been considered insane. But this is the state of play today.

So when many thought that we had gone as far as we could with the evolution of the 3GL language, we once again raised the bar with the next great technology advancement. This time we evolved to 4GL languages, otherwise known as code generators, CASE (Computer Aided Software Engineering) tools or ARAD (Architected Rapid Application Development). This was hailed as the end of the expensive IT developer; the marketing claimed that the typical end user could now get involved in the development of IT systems, returning the ownership and power of your systems to the business and, more importantly, taking it out of the hands of that lowly IT department.

The same IT department that, through these times, was still considered a cost overhead rather than a business opportunity enabler. Many of you may remember the days when the IT function reported to the financial controller. I believe that most IT people are artists who can’t draw; we use the creative parts of our brains to build beautiful code and systems. To think that you’d stifle (as some may still do) this creativity with the rigidity of the accountant mentality still frightens me. Imagine the marketing or sales director reporting to that same accountant? Actually, I can. Ouch!

With the marketing hype, 3GL project overruns and increasingly tight deliverables, the 4GL era was born, and in my view this has created some of the more interesting debates in IT circles. The simple reason being that I would anticipate that for each platform/system available there would be numerous languages, either compatible (Java and the JVM) or targeted (compiled), that are considered the language of choice, each with its own hardcore developer following. There will also, more than likely, be a 4GL that targets that platform, and I bet my left one that a maximum of 10% of the users of the platform use a 4GL over the 3GL.

Are these 10% the visionaries?

Well, I guess that depends on the tools of choice, but no one denounces the 10% of personal computer users who use the Apple Mac and all its gizmos.

You also have to consider that many of these 4GL languages evolved during a time of single-platform computing, i.e. there would be a 4GL that targeted the complete application development cycle. The tools were capable of constructing everything from the database, screens and reports through to the application’s menus. I have had experience developing in both 3GL and 4GL languages, and I believe that I am well placed to comment accurately on both approaches. And as IT has evolved, so have many of these 4GL tools.

The question is do you choose a 3GL or a 4GL?

This is still as fiercely debated online and at technology conferences as the merits of client/server technology versus thin client, or Betamax v VHS (lol). With the emergence of more and more technologies and Web 2.0, we are again beginning to witness the thin/rich client gloves come off. Which for me is quite ironic, as the web thin client was the reason for killing off the high deployment cost of client/server systems, which themselves were created to offset performance issues of software systems and distribute the processing load.

That said, cost is now measured in bandwidth and reach rather than hardware and employees required to support the system.

I personally believe that these architecture choices should come down to the type of application you’re creating and its accessibility and user requirements. The same thinking lies behind why you would choose a given development tool and at which level of abstraction you wish to develop the application. Another interesting aspect of the 3GL v 4GL debate is that many of these tools are capable of producing code for multiple platforms, i.e. IBM Power Systems (RPG), Windows (C of one variant or another), as well as Java, which is capable of being deployed on multiple platforms.

Java claims a write-once, deploy-many-times approach. I would say that it should be rephrased as write it once and then tune it for each platform, JVM or application server of your choice. Now, I make no bones that I am an advocate of the 4GL (especially CA Plex or CA 2E) over the 3GL for the applications that I have written over the years. Most 4GLs cater for RDBMS systems and are best suited to these types of environments, i.e. banking systems etc. Other 4GLs exist, including tools for writing computer games, and once again these are designed to protect the developer from the underlying complexities of the code. With these engines you do not need to understand the ins and outs of the DirectX or DirectDraw APIs, or the language that is generated. But your decision to use one of these tools must be twofold.

1. It must be appropriate for the type of application you are creating.
2. Once you have chosen the 4GL you must stick to it and use it properly.

There are many tools out there that claim they can generate code in multiple languages, and these tools in my opinion are great for ISVs that need an offering across multiple platforms to negate the hard sell of one technology over another. After all, shouldn’t your marketing and sales teams be selling the values and merits of your software’s function and feature set rather than justifying your company’s technology decisions?

Part II will discuss the many pros and cons of the 3GL and 4GL languages and tools.