Monday 9 November 2009

Eternity with stops on the way

Recently, my sister asked "Mana, how long should it take to get rid of all the bugs in new software?"

That's a broad question but certainly a fair one for a user to ask. I asked her to elaborate so that I could try to explain why the new software she uses every day at work is not working as expected.

In short, the new major version of the software she has been using for years has been installed on her work computers. They still have access to the old one for reporting purposes but cannot write any data to the old system. The new system provides a bunch of the old system's basic features, but definitely not all of them.

When she asked the software provider about the missing features, she was informed that they are on their way but were not high enough priority to go into the initial release of the new system. Other features do exist but do not work the way they once did. The explanation for this is that there are bugs, and the users are expected to test the software for the company and report what is wrong so it can be fixed.

That was in July this year. Now it is October 2009, and my sister and her colleagues are learning to use yet another new system, because the earlier one was so bad that the company that developed it went out of business. Unfortunately, Government dictates which applications meet certain criteria for use in this industry, and the users have little choice about what is thrust upon them. The only real choice they have is to throw up their hands and say "we can't do that anymore but apparently it's coming".

There are so many things that concern me in this scenario, not least of which is that it is not a rare occurrence. The other issues I see here that must be addressed can be represented by the following questions. Questions that I will answer here, but that I would appreciate your input on.

Who decides the priority of features in a product to be redeveloped?

If you answer this question literally, the person who usually decides is the product owner or project manager. If you really think about what this means then you have to ask how they get their information in order to make that call. What data do they use to arrive at the conclusion that one feature is more important than another?

In the majority of cases I have seen, people make the decision based on years of experience working in the vertical concerned, or administering the previous or current software that the new application or feature is replacing.

An alternative is to do user testing and see what users say is important. Measuring use of current applications and workflows can give information that is useful for deciding what functionality the users cannot live without.

In the case I discussed above, it does not appear that all users were represented in the decision-making process, if a process involving users existed at all. Instead, the group represented by my sister and her colleagues ended up with a new and improved version of the software that did not do the most important things in their daily routine. That functionality was gone, promised for a future release.

I was asked: how do you decide something is left out? Is it simply because it was difficult to implement? It's great to see end-users trying to understand why, but sad to see them surrender to the fact that nothing is in their control. They pay for it and they use it. Would you accept this from any other industry?

Who is supposed to test your software and when should that happen?

Then I was asked: when should the new software be bug free, or at least not falling over at the drop of a hat?

When the software company was asked what testing they did, they answered with the often-heard line of letting the users test it once it was released. They apparently talked about reacting fast and fixing the problems, yet their releases were quarterly. The software was served over the Internet, so updates were centralised, but they still released only quarterly.

Of course, their release cycle can be due to a lot of factors that are very valid. What I cannot accept is that it is the users' responsibility to test the application after it is in production. How can this possibly be acceptable to anyone involved? I ask the question again: would you accept this from a car or pharmaceutical company?

There will always be bugs that go out into production, but when the software stops the people using it from doing their job, and they are told to test it, report bugs and wait three months for a fix, then you can't be surprised the company went out of business.

What does the user consider a bug?

With access to a bunch of people who consume software at the end of the cycle, I had to ask what they considered a bug. Instead of finding the level of intolerance familiar to software engineers from product owners who demand no failure at all, I found that these users expected far less than that.

In fact, they thought the software not working at all was annoying but apparently understandable, because computing is complicated. They understood new systems have issues to be ironed out. They could tolerate that for a while. What they couldn't fathom was having no view of their old data, functions that were called the same thing but did something different, and buttons that did nothing at all when clicked.

The ultimate thing they called a bug was when the system did something completely wrong while promising to do something else. Especially when that thing was related to money or Government concessions. Things that you would think were legally binding. Processes that were tried and true and had worked in previous software.

What I drew from this line of inquiry is that users are much more understanding of our profession than I expect them to be. Much more understanding than I would ever be in the same situation. Users need to realise that they don't have to put up with bad output from software companies, and that if they don't like it, they can choose not to use it.

Maybe we need money-back guarantees.

Should governments dictate key features of applications shared across multiple industries?

This is an interesting point and one that I have discussed almost continuously for over a decade, since I have spent the majority of my time on the other side of the fence. My most relevant experience in this area was working for a Government agency that collected scientific test results from food imported into Australia. Labs that did this testing across the country were required to implement applications that collated food sample test results and submitted them electronically to the agency. At that point, the agency would approve or fail the importation of the food. Doing this electronically would speed up the process and stop food from sitting around for so long before being granted admission to the Australian food market. Brilliant idea, right?

In theory, yes, but in practice this meant asking a lab that employed no IT people to have a system created for them to pull data out of their lab systems and deliver it to a Government agency. That sounded simple, but it was a huge program of work that saw even the bigger labs only just come in on time with their implementations. We did everything we could to aim for simplicity for the lowest common denominator and to support development against our interfaces and messaging systems.

From our point of view, it was a new application that spoke to the agency's back-end legacy application. It took 1.5 engineers, 1 business analyst, 1 project manager and 3 business people 3 months to get the receiving system designed and off the ground. We surrounded it with a suite of automated unit tests and integrated with our demonstration systems. It was a breeze.

We then spent 5 months helping other developers integrate with us. We made the rules, decided everything up front, and then thrust the specs upon them. The level of expertise among the developers involved was mostly sub-standard in the small labs. It was not the dream that we had promised it would be. The other half an engineer and I knew this was coming, and when integration hell arrived, we did what we could to help, including writing code in four or more languages other than our own to help companies comply.

Now imagine the situation for my sister and the end-users who have that new system thrust upon them. Another layer of abstraction away from the user. Demands from Government implemented and carved in stone and then handed to software vendors who struggle to deliver on time and budget. Yes, it costs them money just to deal with the Government and to build to changing conditions and rules. It's difficult but it's their job. What is horrible is watching the person sitting at their machine with the end product asking themselves how this is anywhere near the same system.

Yes, Governments must legislate and ask for compliance but they should also do something to ensure the final product meets some kind of standard. If not then the market will ensure the outcome and it won't be in the favour of the software vendor.

Is this how software providers should treat customers?

There is a huge divide between the people who support and sell software and the people who build it. I've argued for years that production support teaches you invaluable skills. Software engineers should go out and face the scorn, and hopefully applause, that their software brings. There is no better way to learn to write software that will be nice to support and maintain than to go out and feel the pain of it living.

If you have not worked on living and breathing software then you are not a good developer. You don't know the baby you created. You live in neglect every day.

Sales people, support staff and whoever else pours water on the fires can't make it better but the engineers can create good software. It is possible. People do it every single day.

How long does it take to get rid of all the bugs in software?

This is the same as asking how long a piece of string is. Like I said, software is a living thing. There will always be glitches/bugs/issues/defects/whatever, and we will always be adding and fixing features and functionality. That's how its life goes and that's how our life goes.

We have a responsibility not to send out software that we don't trust is good enough for sale or use. Testing, designing for usability, user acceptance testing and lots of good requirements gathering all help, and all of it should happen throughout the process, not just at the beginning.

People like my sister (or someone who represents her and cares) should be seeing and using working versions of the software as it is created and grows. That way the end product is not a surprise and a tragedy.

Don't just sit there. Tell me how you stop your company from being thrown out of the market each day. What can others do to make the software world a better place for our users?
