ALM & an expanded approach to being Test Driven (Part 1)

Since the dawn of the computer age, various methodologies have been introduced to improve quality and reduce cost. In recent years, Test Driven Development (TDD) has gained popularity as a means of verifying that code functions properly at all times [or that defects/limitations are identified by failing tests]. When adopting a comprehensive approach to ALM, the principles of TDD as applied to code can, and should, be applied to other areas of the overall process.

To start off, let’s quickly review the various activities that are part of the Application Lifecycle. These are going to be presented in a Waterfall style for simplicity, but each item also occurs within Iterative methodologies such as Agile/Scrum. The key ones are:

  • Requirements Gathering
  • Architecture
  • Design
  • Implementation
  • Quality Assurance
  • Maintenance
  • Defect Management
  • Deployment
  • Operations

Can each of these items be subjected to a process which establishes quantified metrics that reflect both the quality and correctness of each item? It should be clear that conventional Unit Tests do not apply to all of them; at best they can verify that a local aspect of the implementation (e.g. a Class/Method) matches the (test writer’s perspective of) the appropriate design document. So what can we do?

For each area, the goal is to create tests that are quantifiable and durable. The ability to quantify the measurements (beyond a simple pass/fail) is critical to tracking progress (and eventually measuring the level of success that has been achieved) and for providing clear information on which items need to be addressed (along with the appropriate time to address them, in varying levels of detail). Durability is important so that the test can be reapplied (ideally in an automated fashion) over the entire cycle.
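
To make this concrete, consider what a quantified, durable test result might look like. The following is a minimal sketch; the type and property names are hypothetical, not drawn from any particular framework:

using System;

// Hypothetical shape for a quantified test result: each run records a
// measurable score that can be tracked over the entire lifecycle, rather
// than a bare pass/fail.
public class QuantifiedResult
{
    public string ItemUnderTest { get; set; }  // a requirement, design document, class, etc.
    public double Score { get; set; }          // 0.0 through 1.0, not just true/false
    public DateTime MeasuredOn { get; set; }   // durability: re-run and compare over time

    // Pass/fail is derived from the measurement, not recorded in place of it.
    public bool MeetsThreshold(double threshold) { return Score >= threshold; }
}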

There are many places where “testing” is used outside of software development. Consider the (conventional) education process that most of us grew up with. The focus was to get the best grades as measured by various tests. Many of these tests measured rote memorization rather than understanding of the subject matter. The result is that many people graduated with high scores but without “quality and correctness” in their ability to utilize the subject matter (of course, the flip side is also true: certain people DID understand the material but were not very good at taking this type of test).

One must also be careful of how the tests are organized and how the measurements are taken. If a test is in a multiple choice format, there is a significant statistical probability that a correct answer is the result of a random guess (with four choices, a blind guess is right 25% of the time). Also, in many situations, having the student simply provide a final answer can obscure many important elements. For example, on a math test, having the student provide only a numeric answer (rather than showing the methodology) may result in a complete mismatch between the process and the result. It is hard to determine which is worse: the student who makes a simple arithmetic error at one step of a long process (resulting in a wrong answer), or the student who (without providing the “workflow”) uses a completely invalid approach, yet still comes up with the right number.

The “Wrong Process”/“Right Answer” combination is probably the single biggest problem in creating quality tests. Even very simple items can suffer from it. As an example, consider the following code for a “straight line” calculation (for Integral Points)… Is it correct?

// y = m*x + b, evaluated with 32-bit integer arithmetic
int Solve(int m, int b, int x) { return m * x + b; }

Most people would respond “Yes”. But let’s take the question one step further… Is it correct for all possible values of m, b, and x? Without additional information regarding constraints on “the possible values of m, b, x”, the answer must be NO: there is a risk of overflow/wraparound that will produce an incorrect result!
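
To see the failure concretely, here is a minimal sketch (the class name and input values are mine, chosen only for illustration) showing the expression wrapping around silently, and the same expression surfacing the problem in a checked context:

using System;

class OverflowDemo
{
    static int Solve(int m, int b, int x) { return m * x + b; }

    static void Main()
    {
        int m = int.MaxValue, b = 1, x = 2;

        // Unchecked (the C# default): the multiplication silently wraps around.
        Console.WriteLine(Solve(m, b, x));  // prints -1, not 4294967295

        // The same expression in a checked context reports the overflow.
        try
        {
            int verified = checked(m * x + b);
            Console.WriteLine(verified);
        }
        catch (OverflowException)
        {
            Console.WriteLine("Overflow: the inputs exceed the implicit bounds.");
        }
    }
}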

To properly answer this question (i.e. test the code), one MUST be able to backtrack from the implementation through the design and architecture, all the way back to the requirements. And the requirement itself must be tested against the stakeholder(s). It is only when the bounding conditions are defined that it is possible to determine if the code is “Correct” and has “Quality”.
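
For example, suppose the stakeholders confirm that m, b, and x are each bounded to [-1000, 1000] (a hypothetical bound, purely for illustration). A test can then encode that requirement directly, using 64-bit arithmetic as the reference so the test itself cannot overflow:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class SolveTests
{
    const int Bound = 1000;  // hypothetical: taken from the (tested!) requirement

    static int Solve(int m, int b, int x) { return m * x + b; }

    [TestMethod]
    public void Solve_MatchesWideArithmetic_AtTheDocumentedExtremes()
    {
        int[] extremes = { -Bound, 0, Bound };
        foreach (int m in extremes)
            foreach (int b in extremes)
                foreach (int x in extremes)
                {
                    long expected = (long)m * x + b;  // 64-bit reference value
                    Assert.AreEqual(expected, (long)Solve(m, b, x));
                }
    }
}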

Yet how many of us (myself included) have written such code without even thinking about it? In many cases we (think we) “know” what the bounds are, and that the code will be correct. As we all know, requirements change, “code reuse” causes implementations to be applied to different scenarios, and so on. This leads directly to the types of system failures that plague so many projects.

This approach to TDD is much more holistic than approaches which start by focusing on the details. The fundamental concepts still apply:

  • Each item should be tested.
  • The test should be defined/implemented before (or concurrent with) the definition/implementation of the actual item.

We also add concepts that expand the scope and alter the style by recognizing:

  • There are many things besides “lines of code” that benefit from testing (measuring/evaluating in a formal way)
  • Correctness and Quality cannot be solely measured by “correct results”

Next time we will dig deeper into the 9 ALM areas listed at the beginning of this post and evaluate how each can be tested.

Posted in Application Lifecycle Management

ALM – What is it and Why Do I Care?

For those not familiar with ALM, it can be simplified down to “a comprehensive view of all of the ideas, requirements, activities, and artifacts that impact an application over the course of its lifecycle, from concept until decommissioning”. Obviously, this encompasses a large number of different areas, even for relatively small and medium-sized projects. In recent years, many teams have adopted methodologies which address individual aspects of this; but the majority of this adoption has resulted in “islands of improvement” rather than the desired comprehensive outcome… until now!

Last year Microsoft released Team Foundation Server 2010 along with Visual Studio 2010 Ultimate Edition, and with these two in combination the situation has drastically changed. At last there is a single environment that is capable of handling all aspects of ALM, and is also capable of dealing with migration and integration with existing systems to make the transition to a single solution much easier.

The possibilities (and practicalities) are nothing short of amazing. Architecture through Testing integration? YES. Being able to correlate specific requirement items (and their history) to actual code (and code history)? YES. Identification of which tests will potentially be impacted by a given code change? YES. Resilient Automated Testing of User Interfaces? YES. Automatic Deployment Management? YES. Integration Level testing as part of (designated) Builds? YES.

I could easily double or triple the above list, but these items should be enough to get you thinking about the “pain points” your team and organization currently face and the fact that there IS a way to relieve the pain.

When teams are first introduced to these capabilities, there are a couple of common reactions. Many can be grouped under the heading of “That’s Great! What do I have to do in order to experience this goodness?”, but a fair number fall into one or more of the following groups:

  • We don’t need any of that. We are doing just fine editing our code, compiling it locally, testing it locally, and doing manual deployment.
  • We already accomplish that (or at least the important parts) using a suite of [often OpenSource/Free] tools.
  • That will make our jobs more difficult / take longer, and decrease efficiency.

If you are a member of either of the first two groups, you are likely to be comfortable with the status quo and do not see the need for change. I recommend either a simple pilot project or a co-development effort (using both the existing methodology and an ALM approach in parallel) to identify and quantify specific areas for improvement. Trial versions of all of the tools are readily available, so this can be done without any capital expenditure.

If you are a member of the last group, it is likely that you have had a negative experience with adopting a formal, comprehensive process in the past. Unfortunately this is common, and my best suggestion is to keep an open mind, learn as much as possible about the capabilities, and (if possible) get an opportunity to work with a person or team who has successfully adopted ALM.

Posted in Application Lifecycle Management

Let’s get started…

“Software Development” is a vast topic. It is doubtful that there is anyone in the world who is knowledgeable about the entire domain. It is certain that I do not possess such knowledge.

This means that a context must be established for this blog which places some bounds on the mega-topic of Software Development. While there may be exceptions in certain posts, the following is the environment that will typically be considered:

  • Microsoft Languages, Tools, and Platforms. Unless there are specific reasons: Code will be in C# 4.0, developed using Visual Studio 2010 Ultimate Edition.
  • Small (<=5) to Medium (>5, <=20) Teams. Much of what will be written involves the recording and sharing of knowledge, and this most commonly occurs when multiple people are involved. This is not meant to exclude the individual developer, who also needs to record this information, if only for easy and reliable recall at a later date. Finally, since most large development efforts are broken down into smaller team efforts, even the biggest teams will be able to leverage the information.
  • Moderate to High Project Complexity. This is not really a “requirement”, but as project complexity increases, the return on investment in processes and practices becomes much more obvious. Simple systems (so-called “jelly bean” or “cookie cutter” projects) can definitely benefit as well, but getting tangible metrics on the value may be more difficult.

Within this scope are some key topic areas which will be categorized for ease of reference.

  • Application Lifecycle Management [ALM] will be the primary focus, as this covers all aspects of the effort from the gathering of initial requirements/ideas, through development and deployment, and only finishes when the system is finally decommissioned.
  • .NET Architecture & Implementation will cover specific design choices that have been adopted within Dynamic Concepts Development Corp. and have been proven to have applicability to other scenarios.
  • Tales from the Trenches will cover specific situations that I have encountered that turned into key learning experiences.

Next Up: ALM – What is it and Why Do I Care?

Posted in General Interest