
Wednesday, May 27, 2009

Business Readable Tests or Writable Tests?

There has always been a quest to help business users or clients write tests for their applications directly. Elaborate frameworks like FitNesse, Cucumber and Concordion have been built to help write tests in an English-like language. The reasoning is that business experts, who know the domain and the problem they want the software to solve, are in the best position to set the expectations for the software being developed. But there are several catches to business experts writing tests directly.

1) Acceptance tests are much more than verification tests. While building acceptance tests, the business experts discuss them with the team. As a result of these discussions, knowledge and understanding get shared, helping the team understand the domain and the problem better. I think this is very important for the team as well as for the evolution of the software being built. If the users are able to build tests by themselves, it may lead to a situation where they create the tests and just hand them over to the team as validations. Although the validation is important, the knowledge-sharing part may be missed.

2) This reason may not be applicable to all business experts. Writing tests is also writing code. Tests tend to use English-like language, but they are never natural language; they are just higher-level abstractions over code. So there are still conventions to follow when writing these kinds of tests, like naming, casing and passing arguments, or they will not work. Normal programming practices like refactoring apply in this context as well. So to do this effectively, I think it is better for a business expert to work with a technical team to write these tests. (On a different note, I believe tests written in the domain language are better than tests in English-like language. Capturing the domain part is important, not the English part.)
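
To make that last point concrete, here is a minimal sketch in Java of what I mean by a test written in the domain language. Everything in it (the Policy class, its methods, the rule it checks) is hypothetical, invented only for illustration:

```java
import static org.junit.Assert.assertFalse;

import org.junit.Test;

public class PolicyRenewalTest {

    // Tiny stand-in domain class so the sketch is self-contained.
    static class Policy {
        private boolean lapsed;
        void lapse() { lapsed = true; }
        boolean canBeRenewed() { return !lapsed; }
    }

    @Test
    public void aLapsedPolicyCannotBeRenewed() {
        Policy policy = new Policy();
        policy.lapse();
        // The vocabulary (policy, lapse, renew) comes from the domain,
        // not from a generic English-like grammar.
        assertFalse(policy.canBeRenewed());
    }
}
```

Note that the conventions mentioned above are still there (method naming, casing, JUnit's @Test annotation), which is exactly why a business expert is better off building such tests together with the team.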

So, as they say in the DSL world, it is good to have business-readable tests so that business experts can actively participate and help build and use them. Business-writable tests are not impossible, but they are difficult for now. Some concepts in the programming world may turn this idea into reality: things like Language Oriented Programming, external DSLs and Intentional Programming may help us in the future to construct tests together with business experts. But it is good to keep in mind that it is the sharing of knowledge that is important for building better software, and tests facilitate that.

Sunday, May 17, 2009

Value of small and focussed tests

I have always advocated this, but I still learned the same lesson again last week, and that is what is driving me to write this (it also serves as a note to myself). I am working on a project which is a large inherited legacy app. Most of the application has been rewritten except for a few parts, and the work I was doing flowed from the new parts into the old parts. It all started well, as my work spanned mostly new parts: the normal Red -> Green -> Refactor cycle. After my work was completed, I wanted to check the end-to-end flow, so I ran the Selenium test. It all went fine till the app reached the old code. Then all hell broke loose. A table which was supposed to show the past orders was not coming up. We weren't able to localize the problem, as there were no tests covering that area. We knew what was broken but not where, and so had to debug through hell for the next three hours to find out what was happening, solve the problem and write tests.

It is nice that I had the Selenium test, and it helped me catch the problem. But it couldn't do more than that for me (or at least I couldn't do more with it :). What I really wanted was a test which could tell me where exactly things were broken, like the x-ray a doctor uses to find a broken bone. A shallow, broadly focused Selenium test couldn't show me that. A better test for this situation would be a unit test or a focused integration test which can tell me where exactly the app is broken. Localizing a defect is a very important characteristic of these kinds of tests, and it is this property that matters when something is broken. It is frustrating to have a test which says something is broken but not where. Also, even a unit test should not be testing too many things at a time, as we will not know which of them failed. So we need small tests, each focused on testing one behavior.
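
Here is a minimal sketch in Java of the kind of focused test I mean. All the names (OrderHistory, Order) are hypothetical; the real app's classes were of course different:

```java
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.junit.Test;

public class OrderHistoryTest {

    // Tiny stand-in classes so the sketch is self-contained.
    static class Order {
        final boolean completed;
        Order(boolean completed) { this.completed = completed; }
    }

    static class OrderHistory {
        // The behaviour under test: only completed orders belong
        // in the past-orders table.
        List<Order> pastOrders(List<Order> all) {
            List<Order> past = new ArrayList<Order>();
            for (Order order : all) {
                if (order.completed) past.add(order);
            }
            return past;
        }
    }

    @Test
    public void pastOrdersContainOnlyCompletedOrders() {
        List<Order> all = Arrays.asList(new Order(true), new Order(false));
        // When this fails, it points at one class and one behaviour,
        // instead of "the table didn't show up" three layers away.
        assertEquals(1, new OrderHistory().pastOrders(all).size());
    }
}
```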

Does that mean that if I have these tests I don't need to debug? Well, sometimes I don't, but most of the time they show me where exactly I need to debug to see the values coming in. I don't need to debug through all the layers, or across layers, to find the five lines of code that have the problem.

So writing small and focussed tests is good, and if you have a legacy code base it is even more important to start doing this as soon as possible. I like writing small and focussed tests even at the functional level, though I have end-to-end tests as well.

P.S. Maybe I will add more code examples to this post, but right now my code highlighter is broken, so I am deferring that till I fix it :)

Monday, May 4, 2009

Does ATDD deliver value to an agile team?

I am writing this as my response to Brian Marick's view of ATDD, which I found in this interview with him at Agile 2008, as well as to conversations happening on Twitter.

I really like the three ways he lists for driving the design and testing the app:

  • Doing heavy amounts of unit testing
  • Using white boards and charts to draw, discuss and distill the knowledge
  • Using large amounts of manual exploratory testing
All three methods are really good at helping develop a better product.
But I have reservations about a few statements he makes about automating acceptance tests, and the following are my views on that.

I think the name ATDD is bad, or at least it doesn't convey its actual purpose. I don't know how the practice of ATDD started, but I never think of it as an extrapolation of TDD, as Marick mentions. I see TDD, ATDD and ET (Exploratory Testing) as different things which help us in three different ways. There may be some overlap in the function of the three, but compared to their primary purposes I think this overlap is minimal. ATDD is not an extrapolation of TDD, because TDD is not about unit testing.

For me, TDD drives how classes collaborate and which class takes up which responsibility. Exploratory testing is completely about using the power of human minds to creatively explore and test the application (and not just the application: you can explore design, architecture, pretty much anything). Acceptance Test Driven Development, for me, is articulating and crystallizing the domain knowledge we need for our application.

These three methods are different and serve different purposes. It is incidental that TDD and ATDD also leave tests behind; that is a secondary effect. And I feel that we need all three kinds of tests in a project.

ATDD drives the design of your application's domain, not mainly from a technical point of view as TDD does. It helps more in concentrating on the domain, DDD and the ubiquitous language. It makes the domain rules clear and captures that knowledge as an executable verification mechanism. Doing this helps prevent defects, which is much more powerful than finding them at a later stage. So expecting acceptance tests to drive design from a technical standpoint may not be the most profitable approach. They do help with separation of layers and a few other design points, but they help more with domain-based design.

It is possible to do this by drawing diagrams on a whiteboard and discussing them with the domain expert and the customer. We do this in ATDD too, but after we discuss, where do we keep this knowledge? How do we stop domain ambiguity from creeping in? I think it is better to crystallize the knowledge as tests than to keep it as text in wikis (for me English is always ambiguous, so I prefer tests).
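
Here is a minimal sketch of what crystallizing a rule as a test could look like in Java. The rule itself (gold customers getting 10% off orders above 100) and all the names are invented purely for illustration:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class DiscountRuleSpec {

    // Stand-in service holding the domain rule, so the sketch runs.
    static class PricingService {
        int priceFor(boolean goldCustomer, int orderTotal) {
            if (goldCustomer && orderTotal > 100) {
                return orderTotal - orderTotal / 10; // 10% off
            }
            return orderTotal;
        }
    }

    @Test
    public void goldCustomersGetTenPercentOffOrdersAbove100() {
        // Unlike a sentence on a wiki, this statement of the rule
        // cannot silently drift away from what the code actually does.
        assertEquals(180, new PricingService().priceFor(true, 200));
    }

    @Test
    public void regularCustomersPayFullPrice() {
        assertEquals(200, new PricingService().priceFor(false, 200));
    }
}
```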

Another important question: after we develop a complex piece of business logic, how do we regress it at a later stage when we want to know it is still working fine? No one wants to draw these diagrams while developing a feature and then throw them away. We rarely develop software where we don't revisit what we wrote a few iterations back. So how do we regress? Unit tests help, but how do we know that the domain logic and business rules are working fine? Doing ET for this is not a great way to approach the problem; I like to explore features which are being developed, not do the same feature's exploratory testing again and again. The acceptance tests that ATDD leaves behind help us with this regression.

About acceptance tests being fragile and costing more in maintenance: it may be because we are approaching them wrong. TDD also costs more if we approach it wrong. The problem may be that we are trying to write the tests against a layer of the app we shouldn't be using. I never prefer to drive acceptance tests from the UI, as I feel it is the wrong layer (I do have some end-to-end UI tests in Watir). And as Lisa Crispin said, acceptance tests also need care to stay alive and working all the time, giving us valuable feedback.
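
Here is a minimal sketch in Java of driving an acceptance test from below the UI. OrderFacade and its one method are hypothetical names standing in for the application's service layer:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class PlaceOrderAcceptanceTest {

    // Stand-in for the service-layer entry point the UI itself would call.
    static class OrderFacade {
        String placeOrder(String item, int quantity) {
            return quantity > 0 ? "ACCEPTED" : "REJECTED";
        }
    }

    @Test
    public void aValidOrderIsAccepted() {
        // No browser, no HTML locators: the test exercises the same layer
        // the UI talks to, so a cosmetic UI change cannot break it.
        assertEquals("ACCEPTED", new OrderFacade().placeOrder("book", 1));
    }

    @Test
    public void anOrderForZeroItemsIsRejected() {
        assertEquals("REJECTED", new OrderFacade().placeOrder("book", 0));
    }
}
```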

So from my point of view, we need all three forms of tests, as they take up three different kinds of roles. I also wish we could change the name ATDD: just because it has TDD embedded in it, people may misunderstand it as an extrapolation of TDD. I like the names Executable Specifications or Examples.

Conclusion: I feel ATDD does add value, provided you set the right expectations for it and approach it right. Don't expect it to help you in a way it is not meant to; if you do, maybe you are looking for something else.

Last but not least, if you want to know how to use acceptance tests to achieve fast feedback, please attend Elisabeth Hendrickson's Acceptance Test Driven Development workshop. She has a wonderful way of demonstrating this, and I learned a lot from her.

Needless to say, I always welcome your views and thoughts. If you want to discuss more, please add a comment :)

Saturday, May 2, 2009

Why I don't like overworking in a project

The normal reasons first. This happens with an over-enthusiastic team: they sign up for more and find they can't finish as committed, so they put in some extra effort, and after a few iterations they reach a balance. Also, overwork is not sustainable; the team burns out. All the stuff you already know.

But there is another form of overwork I have seen (should I even call it overwork? If you have a better way to express it, let me know). A more subtle one; I am not sure how many have observed or felt it. How many of us have gone to the office on a Sunday and seen someone from our team working on something related to the project? It is nice to have these kinds of committed people on the project, right? It is good that they take something up and help the project grow. But there is a risk attached to this.

The first problem is that the work of these people tends to set unrealistic expectations with the customer. As long as they are around, the team keeps up a good velocity. When they leave the team, the velocity drops to a lower level (which must have been the actual velocity all along). And this leads to all kinds of confusion, with management and customers scurrying to find ways to improve productivity, unless they understand the real problem. The advantage with agile is that, because of the short iterations, it is easy to notice this effect soon after such people leave.

The second problem, as most would already know, is that because they work alone, they tend to leave the team behind, learning new things about the project and the code by themselves. That in itself is fine, and they can hold workshops to share what they learned. But when it comes to work related to a project, I believe in learning with the team, not in sharing knowledge in isolated chunks.

Most of the time these people do this because they are interested in writing code or doing technical work. When you identify such people, help them direct their potential in other ways:
  • Contributing to open source projects
  • Involving them in a coding dojo or a technical book club
  • Participating in BarCamps, DevCamps or local user groups
  • Helping with recruitment
Also help them understand the value of the whole team learning and working together.

I have written down my thoughts; let me know yours. Have you worked with enthusiastic and committed people like these? How did you help them? What problems did you face?

I really like Alistair Cockburn's metaphor of agile as a cooperative game. It gives meaning to the whole-team approach of working and learning. I also like building knowledge which is not limited to just what we are doing in the project.

Friday, May 1, 2009

Build Operate Check Clear - Test Pattern

This is a common pattern which a lot of people, including me, know and practise, but I am writing it up here as a reference for myself as well as for people who are not familiar with it.

The idea behind the pattern is to initialize the infrastructure and the SUT we need for the test, and to clean them up after we test. It applies to any kind of test, from unit level to UI level, though the Clear phase depends on the kind of fixture we use. The pattern helps, to a degree, to keep tests reliable by making them independent, but this largely depends on how the Build and Clear phases are run.

Build - Create the infrastructure we need for the test and initialize the system under test (SUT) to a state where we can start testing it. In a unit test we may set up mocks or the objects we want to test; in a functional test we may bring the application to the page or screen where we want to test.

Operate - Exercise the behaviour we want to test: work the SUT in a functional test, or call the relevant methods on objects in a unit test.

Check - Assert that exercising the behaviour has produced the relevant effects: assert on the return value of a method, or verify expectations on a mock.

Clear - This phase depends on what kind of fixtures have been set up for the test. It helps keep tests independent, as it brings the world back to the state it was in before the test started running. The Build and Clear stages may be run at the suite level, the test case level or the test level, depending on the cost of running them; the more frequently they run, the lower the interdependency between tests.

In the case of a transient fixture which lives in memory, we don't need to clear it explicitly; the garbage collector may do the work for us. If it is a permanent fixture, like files that were written or services that were set up, we need to bring them down explicitly. And if a test fails or throws an exception while running, the Clear phase should still run to bring the SUT back to its base state.
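
Here is a minimal sketch of the pattern with JUnit 4. The file-based audit log is an invented example of a permanent fixture that has to be cleared explicitly:

```java
import static org.junit.Assert.assertTrue;

import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class AuditLogTest {

    private File logFile;

    @Before
    public void build() throws IOException {
        // Build: create the permanent fixture the test needs.
        logFile = File.createTempFile("audit", ".log");
    }

    @Test
    public void anEntryIsWrittenToTheLog() throws IOException {
        // Operate: exercise the behaviour under test.
        FileWriter writer = new FileWriter(logFile);
        writer.write("entry-1\n");
        writer.close();

        // Check: assert that the relevant effect was produced.
        assertTrue(logFile.length() > 0);
    }

    @After
    public void clear() {
        // Clear: JUnit runs @After even when the test fails or throws,
        // so the fixture is removed and later tests stay independent.
        logFile.delete();
    }
}
```

Moving the Build and Clear methods to @BeforeClass and @AfterClass would run them once per test case instead of once per test, which is exactly the cost trade-off mentioned above.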

I have found this pattern useful when writing tests. It is simple and effective, and I am sure a lot of people practise it already. If you have any good test patterns, let me know; I am interested in collecting and documenting them.

Please let me know your feedback through comments :).