ALM & an expanded approach to being Test Driven (Part 2)

In my last post, we identified 9 key areas of the Application Lifecycle, and also set the ground rules for developing tests that can quantify both correctness and quality. In this segment, we will see how that groundwork can be applied to each of the 9 areas.

Requirements Gathering (User Story Development)

At first glance, it may not seem that there is anything to test at this point in time. However, ensuring that the documented requirements accurately match the user requirements, and are also consistent among themselves, is extremely critical. An undetected problem at this phase can easily have a significant impact on all of the other areas.

I have found that there are indeed a few activities that can be used to “test” this area.

  1. Use lightweight mockups to provide a visual walk-through. A large number of issues are identified when “seeing pictures” that do not surface when reading words. Tools such as SketchFlow or even PowerPoint are the most effective here; at these early stages, mockups written in code often present more problems in the long run than they solve.
  2. Record each item as a distinct artifact that is under revision control and provide links between related artifacts. TFS work items are the preferred way when using the Microsoft ALM tools.
  3. Develop Test Plans and Specific Test Steps as each Requirement/Story is being formalized. This provides additional information about the expectations. I have found many instances where thinking about how a requirement is going to be tested has provided immediate refinement to the requirement itself. (A brief sketch of how items 2 and 3 can be captured as linked TFS work items follows this list.)
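To make items 2 and 3 concrete, here is a minimal sketch (in C#, using the TFS 2010 client object model) of recording a requirement and its test case as linked work items. The server URL, project name, work item type names, and titles are placeholders for illustration only, and a real process template may require additional fields before a Save() will succeed.

    using System;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.WorkItemTracking.Client;

    class RequirementTraceabilitySketch
    {
        static void Main()
        {
            // Connect to the team project collection (URL and project name are placeholders).
            var collection = new TfsTeamProjectCollection(
                new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
            var store = collection.GetService<WorkItemStore>();
            Project project = store.Projects["MyProject"];

            // Item 2: record the requirement as a distinct, revision-controlled artifact.
            var story = new WorkItem(project.WorkItemTypes["User Story"])
            {
                Title = "Customer can search orders by date range"
            };
            story.Save();

            // Item 3: capture the test expectations alongside the requirement.
            var testCase = new WorkItem(project.WorkItemTypes["Test Case"])
            {
                Title = "Verify that only orders within the selected dates are returned"
            };
            testCase.Save();

            // Link the two artifacts so the relationship can be queried and reported on later.
            story.Links.Add(new RelatedLink(testCase.Id));
            story.Save();
        }
    }

Because the work items carry their own revision history and links, the traceability between a requirement and its tests is preserved automatically as both evolve.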

Architecture/Design/Implementation

I have grouped these three items together because test driven approaches in these areas are typically the best understood. In a future post, I will be covering these areas in greater detail.

Quality Assurance

If Test Plans and Test Steps have been developed since the beginning (see above), then there is already a good start on determining the Acceptance Criteria that the QA team will be targeting. Careful thought should also be given to the types of tests that will be performed, with the three major categories being:

  • Scripted Manual Testing
  • Automated [Coded UI] Testing
  • Exploratory Testing

While most QA teams have a good handle on Scripted Manual Testing, the latter two categories are often overlooked or misunderstood. With Visual Studio 2010, it is simple to record actions while “driving” the application, and to identify the specific elements being examined along with their required values. These tests can then be repeated in an automated fashion to rapidly re-test many aspects of the system without the time-consuming (and therefore expensive) manual interactions. Having the tests run in a 100% repeatable manner also provides consistency.
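As a rough illustration, here is a hand-written sketch of such a Coded UI test. In practice the Visual Studio 2010 recorder generates a UIMap class containing equivalent search logic; the application path, window title, control names, and expected value below are all hypothetical.

    using Microsoft.VisualStudio.TestTools.UITesting;
    using Microsoft.VisualStudio.TestTools.UITesting.WinControls;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [CodedUITest]
    public class OrderSearchUiTests
    {
        [TestMethod]
        public void SearchButton_ShowsExpectedResultCount()
        {
            // Launch the application under test (the path is a placeholder).
            ApplicationUnderTest.Launch(@"C:\Apps\OrderManager.exe");

            // Identify the window and the specific elements the test will examine.
            var mainWindow = new WinWindow();
            mainWindow.SearchProperties[WinWindow.PropertyNames.Name] = "Order Manager";

            var searchButton = new WinButton(mainWindow);
            searchButton.SearchProperties[WinButton.PropertyNames.Name] = "Search";

            var resultLabel = new WinText(mainWindow);
            resultLabel.SearchProperties[WinText.PropertyNames.Name] = "ResultCount";

            // Replay the interaction and verify the element holds its required value.
            Mouse.Click(searchButton);
            Assert.AreEqual("0 orders found", resultLabel.DisplayText);
        }
    }

Because the controls are located by search properties rather than screen coordinates, a test written this way can be replayed unattended against later builds of the application.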

Formal Exploratory Testing is almost an oxymoron. By its very nature, it involves a person “wandering” through the application in a semi-structured or even random manner, looking for potential defects. Since there is no script or other definition of the testing activity, it can be hard to imagine this being formalized. However, tooling such as Visual Studio 2010 once again provides some very helpful capabilities. The interactions with the system can be recorded as a video that can be played back to see the exact steps taken to arrive at a specific point. IntelliTrace (background recording of internal state as the application runs) allows for the capture of elements such as stack traces, exceptions, and application state in the event that a potential defect is discovered. These features make it much simpler for the development team to analyze reports that originate from QA.

When these activities are recorded into a central repository, it becomes possible to analyze the QA activities and determine which testing methods are effective and which could be improved (in some cases, the improvement is a reduction in certain types of testing in favor of other types). Effectively, we have reached the point where we can “test the testers” and achieve a more harmonious relationship between the development and test teams.

Maintenance/Defect Management/Deployment/Operations

Application lifecycle management does not end with the release of a version to production. For most applications, the journey is just beginning, as the time from initial release to final decommissioning can be orders of magnitude longer than the time from concept to initial release.

If solid practices and processes have been established during the initial development phase, there is a good deal of “metadata” about the project, including significant information about HOW the application reached its current state and WHY decisions were made. Unfortunately, too many companies treat the release milestone as “throwing the project over a wall”. Customer (User) Support starts to use its own “issue tracking” system and Operations keeps its own internal records. Things begin to drift back into “islands of information” rather than a unified, comprehensive view.

To the surprise of many, these issues can be mitigated simply by “testing” the relationships between the various parties. Before the first “real” deployment, there should be mock deployments that are treated like any other development activity, with requirements, tasks, issues, and bugs being recorded. This will provide helpful information to the deployment team. As the “real” deployments occur, these activities should be tracked in the same manner. Similar trials, and integration with whatever system is being used for tracking customer issues, should also be applied.

Conclusion

Hopefully these two posts have provided some insight into integrating all of the various areas into a unified environment. The outcome is a consistent approach: capturing information in a form that can be analyzed; having it reviewed by all parties (testing) at or near the time it is recorded; being able to see the relationships between items and validating (testing) that they are consistent; having easy access to reference and update the information as work progresses, ensuring (testing) that the current tasks align with the requirements and test plan; and finally, holding retrospectives on completed items with a focus on evaluating (testing) whether the process/workflow can be improved.
