ALM & an expanded approach to being Test Driven (Part 1)

Since the dawn of the computer age, various methodologies have been introduced to improve quality and reduce cost. In recent years, Test Driven Development (TDD) has gained popularity as a means of verifying that code functions properly [or that defects/limitations are identified by failing tests] at all times. When adopting a comprehensive approach to Application Lifecycle Management (ALM), the principles of TDD as applied to code can, and should, be applied to other areas of the overall process.

To start off, let’s quickly review the various activities that are part of the Application Lifecycle. These are presented in a Waterfall style for simplicity, but each item also occurs within iterative methodologies such as Agile/Scrum. The key ones are:

  • Requirements Gathering
  • Architecture
  • Design
  • Implementation
  • Quality Assurance
  • Maintenance
  • Defect Management
  • Deployment
  • Operations

Can each of these items be subjected to a process that establishes quantified metrics reflecting both the quality and correctness of each item? It should be clear that conventional Unit Tests do not apply to all of these items; at best, they can verify that a local aspect of the implementation (e.g. a Class/Method) matches the (test writer’s perspective of the) appropriate design document. So what can we do?

For each area, the goal is to create tests that are quantifiable and durable. The ability to quantify the measurements (beyond a simple pass/fail) is critical both to tracking progress (and eventually measuring the level of success that has been achieved) and to providing clear information on which items need to be addressed (along with the appropriate time to address them, in varying levels of detail). Durability is important so that the test can be reapplied (ideally in an automated fashion) over the entire cycle.

There are many places where “testing” is used outside of software development. Consider the (conventional) education process that most of us grew up with. The focus was to get the best grades as measured by different tests. Many of these tests measured rote memorization rather than understanding of the subject matter. The result is that many people graduated with high scores but without “quality and correctness” in their ability to utilize the subject matter (of course, the flip side is also true: certain people DID understand the material but were not very good at taking this type of test).

One must also be careful about how the tests are organized and how the measurements are taken. If a test is in a multiple-choice format, there is a significant statistical probability that a correct answer is the result of a random guess. Also, in many situations, having the student simply provide a final answer can obscure many important elements. For example, on a math test, having the student provide only a numeric answer (rather than showing the methodology) may result in a complete mismatch between the process and the result. It is hard to determine which is worse: the student who makes a simple arithmetic error at one step of a long process (resulting in a wrong answer), or the student who (without providing the “workflow”) uses a completely invalid approach, yet still comes up with the right number.

The “Wrong Process”/“Right Answer” problem is probably the single biggest obstacle to creating quality tests. Even very simple items can suffer from it. As an example, consider the following code for a “straight line” calculation (for integral points). Is it correct?

int Solve(int m, int b, int x) { return m * x + b; }

Most people would respond “Yes.” But let’s take the question one step further: is it correct for all possible values of m, b, and x? Without additional information regarding constraints on “the possible values of m, b, x,” the answer must be NO; there is a risk of overflow/wraparound that will produce an incorrect result!

To properly answer this question (i.e., test the code), one MUST be able to backtrack from the implementation through the design and architecture all the way back to the requirements. And the requirement itself must be tested against the stakeholder(s). It is only when the bounding conditions are defined that it is possible to determine whether the code is “Correct” and has “Quality.”

Yet how many of us (myself included) have written such code without even thinking about it? In many cases we (think we) “know” what the bounds are and that the code will be correct. But as we all know, requirements change, “code reuse” causes implementations to be applied to different scenarios, and so on. This leads directly to the types of system failures that plague so many projects.

This approach to TDD is much more holistic than ones which start by focusing on the details. The fundamental concepts still apply:

  • Each item should be tested.
  • The test should be defined/implemented before (or concurrent with) the definition/implementation of the actual item.

We also add concepts that expand the scope and alter the style by recognizing:

  • There are many things besides “lines of code” that benefit from testing (measuring/evaluating in a formal way)
  • Correctness and Quality cannot be measured solely by “correct results”

Next time we will dig deeper into the 9 ALM areas listed at the beginning of this post and evaluate how each can be tested.


About David V. Corbin

President / Chief Architect, Dynamic Concepts Development Corp. Microsoft MVP 2008–2011 (current specialization: ALM); Microsoft ALM Ranger 2009–2011.
This entry was posted in Application Lifecycle Management.
