Over the last couple of months I’ve been the recipient of some coaching sessions from James Bach. Our last session was around a month or so ago now, over which time I’ve been pretty busy editing the latest issue of The Testing Planet and, off-and-on, trying to solve the testing problem he left me with. The dilemma was this:

**What is Gray Code, and how can it be used practically to solve a testing problem?**

I’m going to try to answer this question below, but first we need some context. Not too much because I don’t want to spoil any of James’ lessons, but hopefully enough that you can see how to apply the lessons learned in a day-to-day testing situation.

So – imagine we have a problem. We need to test a system that transitions between states. Let’s assume it’s an algorithm for a robot that can move in 1 of 4 directions, notify us of an error state (blinking red zero) or not move at all (State 1). The possible state transitions look like this:

- Move left
- Move right
- Move forward
- Move back
- Blinking red zero

We also need to verify that, having performed a state transition, the end state is valid. For the purposes of this example, we have only one state and each change *should* return the system to the original state.

We can document this information by way of a state model diagram (below), where states and transitions are represented by nodes and edges respectively.
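For readers who prefer code to diagrams, here is a minimal sketch of the same model in Python. The names (`"rest"` for State 1, the transition labels, the function name) are my own choices for illustration, not part of any real robot API:

```python
# The five transitions available in the model.
TRANSITIONS = [
    "move left",
    "move right",
    "move forward",
    "move back",
    "blinking red zero",
]

def apply_transition(state, transition):
    """Apply a transition to the current state.

    In this (single-state) model, every valid transition should
    return the robot to its original resting state ("State 1").
    """
    if transition not in TRANSITIONS:
        raise ValueError(f"unknown transition: {transition}")
    return "rest"  # the model says every change ends back where we started

# Sanity-check the model: from rest, every transition lands back at rest.
for t in TRANSITIONS:
    assert apply_transition("rest", t) == "rest"
```

The point of the testing, of course, is that we suspect the real system may *not* behave like this sketch – more on that below.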

It’s important to note here that arriving at an accurate model of the system under test is crucial. Developing a true understanding of the system is what the role of the tester is all about. If the model we use to derive our tests is inaccurate, our tests may *pass*, but the product will be wrong. When the model is right and our tests *pass*, we know it is because the system satisfies our criteria, whatever they may be. If the tests *fail*, then because our model is right, we know the system is wrong.

So having arrived at a true understanding of all of the possible states and transitions we now need to determine how to achieve the best test coverage. We could reasonably ask here, what is meant by “best?” and have a few options open to us, but the problem essentially boils down to combinatorial testing. In effect, we need to test for combinations of transitions that may result in improper behaviour. We’d like to do this as efficiently as possible.

The first step is to determine all of the possible transition combinations. We have 5 changes available to us in the model, and each change can be combined with any other transition. That gives us the 5×5 table below:

| Move back | Move forward | Blinking red zero | Move left | Move right |
|---|---|---|---|---|
| Move forward | Blinking red zero | Move left | Move right | Move back |
| Blinking red zero | Move left | Move right | Move back | Move forward |
| Move left | Move right | Move back | Move forward | Blinking red zero |
| Move right | Move back | Move forward | Blinking red zero | Move left |

We can use this information as input for a pairwise algorithm that will provide us with all of the combinations. I like to use TVG Pairwise for this. We end up with 3125 possible combinations: 5×5×5×5×5, or 5^5. A sample of the output is included below:
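If you don’t have a pairwise tool to hand, the full set of 5^5 sequences is easy to enumerate yourself. A quick sketch in Python (the transition labels match the model above; everything else is illustrative):

```python
from itertools import product

transitions = [
    "move left",
    "move right",
    "move forward",
    "move back",
    "blinking red zero",
]

# Every sequence of five transitions, with five choices per slot: 5^5 = 3125.
combinations = list(product(transitions, repeat=5))
print(len(combinations))  # 3125

# Print one in the colon-separated style shown below.
print(":".join(combinations[0]) + ":")
```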

Michael Bolton expounds on Pairwise theory in his excellent article here, so I’m not going to dwell on the subject. There is another issue we need to consider.

If we were to examine all of the combinations given to us by the pairwise algorithm, we would find considerable redundancy. Many of the combinations are repeated or overlap with each other. For example, even in the small extract above there is duplication. Consider:

`move left:move left:move left:move right:blinking red zero:`

`move left:move left:move left:move right:move forward:`

`move left:move left:move left:move right:move back:`

In a three-line subset of our pairwise output we can see there are several repeating sequences of change:

- “move left:move left:” repeated x6
- “move left:move left:move left:” repeated x3
- “move left:move right:” repeated x3

And there will be many more throughout. If we were to run all of the 3125 pairwise combinations end to end, we would end up with a total of 15625 transitions. Clearly testing *all* of these transitions will be time consuming, even if automated. We already know that there is considerable duplication within the combinations. *We need to find a way of reducing the amount of time and effort required to test all of the necessary transitions and eliminate the unnecessary duplication of combinations.*

I’ll come back to some possible ways of doing this in Part 2.

Look forward to part two…

Question – Is this continuous movement in the robot? Your diagram would suggest not (i.e. it moves left, then doesn’t move; it moves right, then doesn’t move).

If there are 5 states (4 directions, 1 still) then would you not represent “don’t move” within State 1? Otherwise your diagram points to 6 states.

Also, if it doesn’t move between states (i.e. always returns to “don’t move”) then why spend time on combinations?

Oh the irony of telling people to get their models right. I maybe should have thought through the imaginary system a bit more, but really just wanted to get to the de Bruijn and Gray code bits.

Anyway – the robot can move continuously yes. So “move ahead” could be followed by “move ahead” etc.

All good mate, just trying to work through it too… Got to take every learning opportunity. :0)

Questions were just an effort to piece it all together in my head. So I appreciate the clarification.

Just realised I didn’t answer your other question, regarding the testing of combinations. This was something that James picked up on too, but he also helped to answer it: “The reason you are doing this testing is because you suspect that the states really aren’t the same: in other words, that the model is wrong.”

So you are right – if the state we return to is always the same, there would be no reason to perform the test. But as James notes above, if we suspect the state really isn’t the same each time, then we are testing, in effect, to prove the model is actually wrong. This point could be more easily made if the “State 1” label was changed to “Resting.” So our (arbitrarily) chosen model of a robot should return to a rest state in between movements. But we suspect that certain combinations of movements may disrupt this resting state, hence the testing.