Importance Of Automated Testing


Test Early, Test Often and Release a Great Product

Testing is a key tenet of the Rational Unified Process. Each aspect of the process carries with it a test component that validates each step taken before it. It is this integration of testing into the process that reduces the risk in the overall software development effort and enables the deployment of great, stable applications that address the needs of the key stakeholders. Yet while testing is a primary reason the process works, it is also the part most likely to be cut when schedules tighten, because it is the most time-consuming and expensive.


Automate For Time Savings in Regression Testing

Automation is the key to saving time in the testing process. The rework that must be done at each iteration to guard against functional or performance defects in previously stable functionality is immense. Test automation products such as Rational Suite TestStudio or Rational TeamTest greatly reduce this rework by automating the testing of functionality that is already stable. Automated tests remove the need to manually retest those portions and allow testers to focus exclusively on the new functionality that needs manual testing. Each time a piece of functionality is tested and deemed stable, automated tests are applied to it and the process continues. From that point forward, no human intervention is needed beyond reviewing the log reports and filing defect reports. Test automation is a tremendous time saver in the software development process, especially in the Rational Unified Process, where testing is so critical to success.
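To make the idea concrete, here is a minimal sketch of a scripted regression test of the kind described above, using Python's `unittest` as a generic stand-in for a commercial tool like Rational TeamTest. The function under test (`compute_discount`) and its business rule are hypothetical examples, not anything from the article.

```python
import unittest

def compute_discount(total: float) -> float:
    """Hypothetical stable business rule: 10% off orders of 100 or more."""
    return total * 0.9 if total >= 100 else total

class RegressionTests(unittest.TestCase):
    """Once the feature is deemed stable, these tests re-run unattended on
    every build; any failure flags a regression without manual retesting."""

    def test_no_discount_below_threshold(self):
        self.assertEqual(compute_discount(50), 50)

    def test_discount_at_threshold(self):
        self.assertAlmostEqual(compute_discount(100), 90.0)

    def test_discount_above_threshold(self):
        self.assertAlmostEqual(compute_discount(200), 180.0)

if __name__ == "__main__":
    # Unattended run: pass/fail results go to the console or a log
    # for later review, as described above.
    unittest.main()
```

The point of the sketch is the workflow, not the assertions themselves: the suite runs without human intervention, and the tester's job shifts to reviewing the resulting reports.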


User Experience and Deployment-Side Testing

Unmanaged Change after Deployment

The benefits of testing and test automation during the software development process are well known and well implemented in the Rational Unified Process. What is not as well addressed is the testing process after the software has been deployed. This is especially true with Web applications where the forces of entropy play a huge role in the stability of the application. A destabilized Web application directly affects the experience that a user has with the site and the impressions that result range from minor annoyance to outright anger with the state of the site.


A Web application is very different from a standard desktop application. It is accessible to a much larger group of people, the changes that occur on it or in it happen much more quickly, and it is exposed to the outside world. Most importantly, the way that Web applications are created and managed is very different from that for a desktop application. There are many different groups of people involved, not just software developers. Graphics designers, webmasters, marketing managers, database administrators and a host of others are involved in addition to the standard development and quality teams. Their involvement continues through the development process, into deployment and then to post-deployment, which is where the entropy caused by unmanaged change begins to hurt and gets very expensive.


Keeping Up With the Joneses: Web Entropy in Action

Entropy on a Web site is a key concept. In physics, entropy is a measure of the energy in a system or process that is unavailable to do work. In communication theory, it is a measure of the random errors (noise) occurring in the transmission of signals, and from this a measure of the efficiency of transmission systems. In general terms, entropy is a measure of the disorder that exists in a system. Each of these definitions can be applied to a Web application, and the meaning is obvious to anyone who has used one that has faulty forms, broken links, slow-loading pages, JScript error dialogs or similar defects.


Do we believe that the producers of these sites intentionally deployed defective pages? Of course not, especially when one looks at the cost involved in the initial production of the site. Generally the cause of Web entropy is the continual updating and changing that a Web application goes through in its normal lifecycle: press releases, new or different linked information resources, new products, updates to static information, new layouts, new buttons, changes to look and feel driven by consumer feedback. These updates are problems not because the changes are dramatic, but because they are unmanaged.


The practitioners involved in the post-deployment management of the Web application are generally not those who were involved in the initial development. The updating of a product description for a catalog application does not go to the development staff or through the development process; it is done on the fly by a marketing/communications manager or staffer and happens in real or near-real time. Similarly, changes to product pricing or updates to online documentation do not go through the development process; they have their own groups of practitioners with direct access to the Web application. There are many, many moving parts in this engine. The potential for entropy is immense and the result is obvious: unmanaged change leads to unstable Web applications.


Implementing a process for changes to a Web application is the obvious answer to the problem. However, lining up all of the tasks and driving a software development process with them is a significant challenge. Another solution might be to drive all changes directly through the software development organization and release the Web application on a predictable and measurable cycle, managing change as tightly as possible. While this might work in organizations that can wait to publish critical information, it is an unlikely scenario for most organizations producing Web applications in the world today.


The World Wide Web is in its infancy; the fact that it’s part of the common vernacular and in use by millions of people on a daily basis does not mean that it is a mature and stable environment. The mechanisms and processes employed for the creation and management of Web applications are similarly immature. There are gaping holes that need to be filled to ensure the integrity and stability of applications, holes that mostly fall into the post-deployment category.


What is needed is a mechanism to continuously test the quality of a deployed application in the key areas that define a user's experience, and to provide instant feedback to the publishing organization about any breakage or inconsistencies so they can be analyzed and repaired in real time. Just as entropy in communications is managed by measuring against a steady state and addressing deltas, Web entropy can only be managed by constantly testing the Web application. This uncovers defects caused by unmanaged change and puts the information in the hands of someone who can deal with it.
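One of the simplest forms of such a post-deployment monitor is an automated check for broken links, one of the entropy symptoms mentioned earlier. The sketch below, built only on the Python standard library, extracts a page's links and reports any that no longer answer; the `audit_page` helper and its error-handling policy are illustrative assumptions, not a reference to any particular product.

```python
from html.parser import HTMLParser
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

class LinkExtractor(HTMLParser):
    """Collect href targets from anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html: str) -> list:
    """Return all anchor hrefs found in an HTML document."""
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links

def check_link(url: str, timeout: float = 5.0) -> bool:
    """Return True if the URL answers with a non-error HTTP status."""
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout) as resp:
            return resp.status < 400
    except (HTTPError, URLError, OSError):
        return False

def audit_page(url: str) -> list:
    """Fetch a deployed page and return the absolute links that are broken."""
    with urlopen(url, timeout=5.0) as resp:
        charset = resp.headers.get_content_charset() or "utf-8"
        html = resp.read().decode(charset)
    return [link for link in extract_links(html)
            if link.startswith("http") and not check_link(link)]
```

Run on a schedule against the live site, a script like this provides exactly the steady-state measurement described above: any delta (a newly broken link) is reported to someone who can repair it, regardless of which group made the change that caused it.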


The Web Quality Process

Web application development involves not only the software development team but also the many organizations that manage the Web application prior to and after deployment. Organizations that are generally considered outside of the software development process, such as marketing and IT, play key roles in the ongoing maintenance and development of Web applications.


While the Rational Unified Process (RUP) is nearly perfect for the initial development of Web applications, it does not extend into the realm of post-deployment, where the issues of real-time maintenance and change are the controlling factors. For RUP to be completely effective in this space, the notions of testing and test automation need to be extended beyond their normal scope and into the post-deployment space, where the forces that lead to entropy take control of the application.


The issues that arise in this phase of the application's lifecycle differ in character from those that arise during development, though the underlying principle is the same: test early and often, and do so through all stages of the software development lifecycle. However, the tools necessary to do so are different from those classically associated with the process.





Discussing the specific tools used for automated testing is outside the scope of this article; I will cover them in a separate article. The main thing to take away is the importance of the testing process throughout the entire SDLC of the software.


The ultimate concern of all those involved in the production of the code that implements a software system must be that the software produced is of high quality. Unfortunately, testing is a destructive process … it cannot show a piece of software to be free from errors; neither can it be guaranteed to eradicate all errors which are present. This fact can cause the process of testing to appear threatening to the programmer.


A consideration of some statistics collected about the cost of testing is instructive when forming attitudes to tests. For example, Boehm (1976) notes that on average 40% of the effort put into a programming project is devoted to the detection and eradication of errors. This measure can increase significantly if the cost of corrective maintenance, traditionally hidden in the maintenance phase, is included. In some cases the cost has been seen to rise as high as 400% of the cost of the lifecycle up to implementation.


So Test Early, Test Often and Release a Great Product


With Regards,
UVN PardhaSaradhi



