
Wednesday, December 30, 2009

EventMachine Client for CouchDB (em-couchdb)

I have been crazy about NoSQL databases for some time. CouchDB is one of my favorites, apart from Redis :). I was looking for CouchDB clients in Ruby and found most of them to use Net::HTTP and to be blocking in nature. So I began my quest of writing an asynchronous, non-blocking, awesome EventMachine-based CouchDB client, inspired by the EventMachine::Redis client.

Here is some sample code to enjoy... It creates a database, saves a document in it, reads the document back, deletes it and then deletes the database.



The current version supports database operations like creation and deletion, as well as document manipulation.

It looks cool for code written in a day, but there is still a lot to improve. One thing is the continuation-passing style (CPS): because the code is asynchronous and the EventMachine reactor loop takes control once the code starts to run, we need to explicitly pass callbacks to actions. This can be fixed, as it was done in EM::Redis, by using callcc to make the callback passing implicit.

Please feel free to fork and contribute and happy hacking :).

The code is available at http://github.com/saivenkat/em-couchdb

Thursday, December 10, 2009

Simple (Or not so simple) Node.js Redis Client

I love Node.js. For some reason I am fascinated by the V8 JavaScript engine and its performance. I am also a big fan of event-driven I/O frameworks like Twisted or EventMachine, of JavaScript, and of NoSQL databases, of which Redis is one of the stars. For those who are not familiar with the terminology, Google them :).

Here is a small Redis client I wrote in Node.js. All it can do is check whether a key exists or not :).

Wednesday, December 9, 2009

What good is the feedback for?

What good is feedback if I am not going to use it to correct things? What good is a QA sitting across my desk providing feedback if I am going to ignore it and continue what I am doing? What good is a showcase if I am going to ignore my client's suggestions? What good is TDD if I am not going to use it to evolve the design? I have heard a lot of people say that Agile is about short, continuous feedback, but the question comes back to why. Why do we need feedback? Is feedback alone enough? Getting feedback is a means, part of the activity we are doing, not the end.

All these questions made me think about what it means to be Agile. I always consider software development to be more of an exploration activity: we explore ways to do things better, and along the way we learn new ways and also make a few mistakes. Working over the years my definition of Agile has evolved, and my current one seems to be "Agile is a method which helps to reduce the assumptions we hold as fast as possible and make things concrete". When we work on a project we have to base our work on a few assumptions or on our understanding of what needs to be done. They may be assumptions about what the client is expecting, or assumptions about a design being evolutionary... But holding on to those assumptions and believing that we are going down the right path is a recipe for disaster.

So how is Agile special in reducing assumptions when compared to waterfall? Both involve getting feedback, except that Agile gives us quick ways to correct our actions or find the right way, while waterfall delays it to the point where sometimes the cost of correction is too high.

Working closely with the client helps make our understanding better and reduces the assumptions we make; TDD does the same for design; Continuous Integration and deployment help us reduce the assumptions we have about the production environment. The examples are numerous, but I hope you get the point.

So for me Agile is about reducing the assumptions and making things concrete so that we can continue working with confidence.

Let me know your thoughts.

Thursday, December 3, 2009

Curious case of object initialization in python

While writing some async code for a CouchDB connector using Tornado's web client, I used the code of AsyncHTTPClient as documentation. Reading the code, I found an interesting initialization method. This may not be anything new for a lot of Python people, but I found the pattern interesting. I had used Python a long time back, around the 2.5.x days, and I came back to it recently to work on a UNICEF project, so my knowledge of Python may not be as sharp as a knife :)

So let's jump into the code directly. The initialization method in Python, as I remember, is __init__. For those unfamiliar with Python, __init__ is called when an object is initialized, in other words a constructor, similar to Ruby's initialize method. Pretty simple, right?

But check out the __new__ method. Below is a simple example of using __new__ to implement the singleton pattern in Python.
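Roughly, a __new__-based singleton can look like this (a minimal sketch of the idea; the class name and structure are illustrative, not from any particular library):

class Singleton(object):
    _instance = None

    def __new__(cls, *args, **kwargs):
        # __new__ controls creation, so we can hand back the same object
        # every time instead of building a new one
        if cls._instance is None:
            cls._instance = super(Singleton, cls).__new__(cls)
        return cls._instance

first = Singleton()
second = Singleton()
assert first is second  # both names refer to the one shared instance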



So how is the __new__ method different from __init__? As seen in the example, __new__ gives complete control over the object being constructed. When an object is created, the order of calls is:
1. You try to create an object --> 2. __new__ is called, if defined, to create the instance --> 3. the object is initialized using __init__. Also, __new__ returns the instance of the class being created, while __init__ only initializes it and does not return anything.

I know a lot of you may know about this already. But I wanted to share what I understood. Let me know if you think there is a better explanation or I have goofed up something.

Eventlet based CouchDB connector

This is an attempt to write a non-blocking CouchDB API based on Eventlet. Eventlet is a wonderful Python library written by the Second Life people, built on coroutine-based non-blocking I/O. I also tried using the AsyncHTTPClient provided by the Tornado framework released by the FriendFeed people, but liked the Eventlet version more. So here goes... This API may grow into a full-fledged one in the future.
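To give a flavour of the approach, here is a minimal sketch of the idea using eventlet's green httplib (Python 2 era); the function, database and document names are illustrative, not the actual connector API:

import eventlet
from eventlet.green import httplib  # cooperative, non-blocking httplib

def get_document(host, db, doc_id):
    # The green httplib yields to the eventlet hub while waiting on the
    # socket, so many of these can run concurrently in green threads.
    conn = httplib.HTTPConnection(host, 5984)
    conn.request('GET', '/%s/%s' % (db, doc_id))
    response = conn.getresponse()
    body = response.read()
    conn.close()
    return body

# fire off several reads concurrently and wait for all of them
threads = [eventlet.spawn(get_document, 'localhost', 'mydb', doc_id)
           for doc_id in ('doc1', 'doc2', 'doc3')]
results = [thread.wait() for thread in threads]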

Also look out for a Ruby non-blocking one based on NeverBlock :)

Wednesday, November 11, 2009

Code Swarm Visualization of Watir Project

Presenting the code swarm visualization of the Watir project. For those who don't know what code swarm is, it is an experiment in organic software visualization, or whatever that means :). For me it meant a visualization of the activity on a code repo or project. So here is the one for the Watir project. If you want to watch it in hi-def, go over to Vimeo and take a look. It's pretty awesome...


Code Swarm visualization of Watir project from saivenkat on Vimeo.

Friday, November 6, 2009

Git - Branch Per Feature Experiment

I am sure others have been using Git like this before, but these are thoughts I have collected over some time of using Git. I have been using Git for over a year now, mostly for my Watir projects, where until now I haven't had much chance to work with branches other than master. My current project has given me an opportunity to use Git from a different perspective. I have noticed that I almost never work on master directly; I am always in some branch, working and constantly merging with master. And the branch I work on relates to the feature I am working on or exploring at that point of time.
To illustrate the point better, here is a picture :).

As you can see in the picture, I am currently working on the tagging feature, but there are other features as branches there as well; I am exploring the option of text search as well as bookmarking a search for projects. If I want to work on some other feature, I switch branches and work there. Once the feature I am working on is done, I switch to master, merge the feature into the main branch, pull from the remote repo, fix any merge conflicts and commit the feature to master before pushing it to the remote repo. Then the branch gets deleted. If any further work needs to be done on the feature, the branch is recreated (roughly the command sequence sketched below).
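In Git commands the cycle looks roughly like this (the branch name is just an example):

git checkout -b tagging        # start a branch for the feature
# ... work and commit on the branch ...
git checkout master
git pull origin master         # bring master up to date, fix any conflicts
git merge tagging              # merge the finished feature into master
git push origin master
git branch -d tagging          # delete the branch; recreate it if more work turns up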

This has been my work pattern for the past few months. Importantly, using a branch per feature helps keep the concerns of different features separate while working on them. Also, on a project where spiking is important, creating branches for spikes seems a good option for me. Sometimes, when I want only some parts of a spike, maybe just a commit, cherry-picking also helps me a lot. Most of this seems easy and possible now because the cost of branching and merging is low in Git. The same approach would be painful in a VCS like SVN.

This is just sharing of my experience. I am exploring better ways to use Git. Let me know if you have found good ways of using Git or similar DVCS.

Sunday, October 25, 2009

Infinite lists in python

My first post from Uganda :)

As I am working with Django now, I think I will post more on Python. I had worked with Python before, but that was a long time back and mostly on sysadmin stuff. Working with Python now for developing web-based applications, after getting to understand things like object orientation, functional programming, TDD, Ruby and more, has changed my way of working with Python a lot.

This post is mostly about creating infinite lists in Python. Infinite lists are an example of lazy evaluation, an idea mostly rooted in functional programming. The idea is that we don't construct the elements of the list up front; instead we give a way to construct the next element and let the list be built as we need it (we still need to stop the construction of elements based on some condition). Infinite lists are also rightly called streams. In Python, it is easy to build infinite lists using generators, as shown in the examples below.

Examples of infinite lists in Python

Increment Example -
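A minimal sketch of the idea (my own illustration):

def increment(start=0, step=1):
    # lazily yields start, start + step, start + 2*step, ... forever
    current = start
    while True:
        yield current
        current += step

numbers = increment()
first_five = [next(numbers) for _ in range(5)]   # [0, 1, 2, 3, 4]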



Fibonacci Example -
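Again a minimal sketch (my own illustration):

def fibonacci():
    # the next element is always built lazily from the previous two
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

fibs = fibonacci()
first_eight = [next(fibs) for _ in range(8)]   # [0, 1, 1, 2, 3, 5, 8, 13]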

Thursday, October 8, 2009

A simple PingPong example based on Erlang

This is a test for Github's gist script embed in my blog :).

Saturday, September 26, 2009

Setup script and shell command for Pylot

I gave a talk at PyCon India on "Testing Web Applications using Python" which featured my favourite toolkits: Windmill, WebDriver, Twill and Pylot. I think Pylot impressed the audience (great work by Corey Goldberg :) and they started trying it out immediately. Unfortunately, before figuring out that they had to download it from the Pylot website, I think a vast majority of them searched PyPI and tried using easy_install. As a result, one of the questions I got was about the unavailability of any installation medium for Pylot. Seizing the chance to learn how to create a setup.py based installer, I jumped on the task of creating an installer for Pylot. Also, to make the tool easier to use, I have added a command line command, pylot-runner, which works the same way as using python run.py to run Pylot tests.

So how do you use this? Pretty simple. I have changed the directory structure a bit (hope Corey doesn't get pissed off :). The pylot directory remains the same, except that there is now a parent directory with setup.py. You can run the setup command as usual, "sudo python setup.py install" (yes, you will most likely need sudo depending on where your Python dist-packages are). Once you do this, pylot-runner gets added to your path and pylot to your packages. Now you should be able to run any Pylot test case with pylot-runner. All the command line options remain the same as before: "pylot-runner --agent=377 --xmlfile=awesome_website_tests.xml". Your results will be created in the same folder as the XML test case.
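For reference, a setup.py along these lines could look something like the sketch below (the version number, package data patterns and the wrapper script are illustrative guesses, not the exact contents of the download):

from distutils.core import setup

setup(
    name='pylot',
    version='1.26',                                 # illustrative version number
    description='Pylot web load testing tool with a command line runner',
    packages=['pylot'],
    package_data={'pylot': ['*.xml', '*.cfg']},     # bundle the sample test case and config
    scripts=['pylot-runner'],                       # small wrapper script that invokes pylot's run.py
)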

This is my first setup.py script. Let me know if there are better ways to do this. Also, I will convert the installer into an egg-based one, but it will take me a few days.

Gotcha - I have created, installed and tested this on my Linux system and it works great. I haven't added anything Windows-specific, which includes the recorder. I may do this along with converting the installer to an egg, but it may take some time. The wxPython GUI gets bundled though; only the recorder is left out...

Download for Pylot with setup script and shell command - http://miscellaneous-downloads.googlecode.com/files/pylot_with_setup_and_command.tar.gz

Thursday, September 17, 2009

Functional Test suite in C# or Java... Why?

Every time I see a functional test suite being developed in a static language, I come back to the same question: why? I feel odd when I hear people say that because the development language is C# or Java, the functional test suite testing the web application or REST interface also needs to be in the same language. I understand and accept the reason for unit tests, since TDD is a design activity and it is better done in the same language. But functional tests? Why?

I also feel sorry for the people on the team who spend time mangling a static language to build test suites and frameworks to make the whole thing work, and for the slowness of the feedback cycle: most popular static languages are compiled, so the functional test suite also has to go through the same compile cycle.

So here are my reasons to choose a cool funky dynamic language to write your application level test suite.
  • Easier development cycle. Dynamic languages typically have a higher essence-to-ceremony ratio. They are lightweight. They don't have the type noise that static languages have, and we don't need that kind of information much in functional tests. Being less ceremonious also makes dynamic languages easier to pick up and start working with.
  • No time wasted compiling. Most dynamic languages (actually, all the ones I know) are not compiled and so don't have a long compile cycle. Functional test suites typically don't run as the application does and don't have the same considerations. For functional tests, the quicker they run, the better the feedback cycle.
  • Easier to build abstractions. It is easier to build abstractions like frameworks, DSLs or fluent interfaces in dynamic languages. Less noise, more essence and better abstractions :)
  • Better drivers and libraries - almost all the drivers I know are in dynamic languages: Watir, Selenium, WebDriver (use it with JRuby or Python :), Celerity. You also have frameworks like RSpec, Cucumber and Robot Framework already available to build your test suite better, and HTTP drivers, XML parsers, JSON parsers or whatever infrastructure and libraries you need as well.
  • Easier to experiment with. Dynamic languages are lightweight to work with. They also typically come with a REPL, which is a good place to learn and experiment. If you want to know how to play with a REST-based service, fire up the REPL, import an HTTP library and start sending requests (a tiny illustration follows this list).
  • A great community to work with. This is difficult to substantiate, but I would argue that the Ruby and Python communities are way ahead of .NET or Java in catching up with new stuff. Also, look at GitHub and see the major languages of the open source projects hosted there :).
Maybe there are more reasons. If you can think of any, add them to the comments so we can discuss.
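As a tiny illustration of the REPL point above, poking at a hypothetical REST service from a Python 2 shell might look like this (the URL and response are made up):

>>> import urllib2
>>> response = urllib2.urlopen('http://localhost:8000/orders/42')
>>> response.getcode()
200
>>> response.read()
'{"id": 42, "status": "shipped"}'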

If you are a developer or a tester on a team (I don't differentiate between these roles :), do yourself and your team a favor. If your development language itself is dynamic, you lucky dog :). Otherwise, choose a suitable dynamic language to write your functional or end-to-end tests. Don't get into the dogma of using the same language as the development language.

Wednesday, August 26, 2009

Configure your Visual Studio to run your tests automatically

It all started after I saw Misko Hevery demonstrating the technique of running unit tests whenever he saves code in Eclipse, at his ThoughtWorks Geek Night talk, and after reading his blog post on it. I am a big sucker for fast feedback and have always felt that the faster the feedback, the better. This technique of running unit tests automatically, I felt, is a big step towards it (also, my patience is too short to go and run the tests by navigating through menus :). Being a compulsive TDD guy, unfortunately on a .NET project and so forced to use Visual Studio every day (I have nothing to complain about, move along :), I felt the need for this badly. And so began my quest...

I was looking for a way to hook into the save event of Visual Studio to do the same thing as Misko, but haven't found one so far. The next best thing for me is the post-build event hook. Conveniently, this is available in Visual Studio for every project and can be configured in the project properties. In the "Build Events" section of the test project properties, you can hook into the pre- and post-build events to run shell commands.

Prerequisites - Have nunit-console.exe on your path, or better, have it in your project as a dependency. I have it inside my project root folder under bin\NUnit. To run tests using nunit-console you pass it your test assembly path. Also check the build order in your solution; typically it will be Main project -> Unit Tests.
  • Open the project properties of the project which contains the unit tests in your solution.
  • In the Build Events section of the project properties, set the "Run the post-build event" list box to "On successful build", as I want my tests to run only when the build completes without errors.
  • In the post-build event command line text area you can write any shell command, which will execute once the build is done. Refer to nunit-console.exe here with your target assembly path (your test project assembly). Conveniently, you have macros to refer to the parameters you need, so the job gets easier (an example command line follows this list). Check out the screenshot of my configuration in the post-build event.

  • Now it is time to try out the cool thing :D.
  • Every time I refactor or add code or tests and build the code with Ctrl + Shift + B, I can see the build run and, after a successful build, the unit tests run as part of the build.
  • The cool thing is that if a unit test fails, it is a build failure :). So I make a small refactoring, build it, and within seconds I can see whether I have broken something and fix it, or do a Ctrl + Z if needed...
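For reference, the post-build command line can be as simple as something like the line below (the NUnit location is illustrative; $(SolutionDir) and $(TargetPath) are the Visual Studio macros mentioned above):

"$(SolutionDir)bin\NUnit\nunit-console.exe" "$(TargetPath)"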
Check out the output from the build and unit tests.


My Red Green Refactor cycle has just got shorter :). I have done this in the simplest way I know and it works for me. Let me know if there is a better way to do this.

Also I have done this as a part of my yet to be open sourced project FluentAssertions. No spoilers now but more on this soon...

Tuesday, August 25, 2009

Using Git As A Tool to Help In Refactoring

I am a big fan of Git. It initially started as an urge to use the cool GitHub, but later, when I started to realize the potential of Git itself, the simplicity and power of this VCS amazed me.

As most of you know, Git is a distributed version control system (VCS). For those who haven't used it or heard of it (please come out of the SVN dark ages :), do a Google search and you will find a flood of links hitting you. What I write about here is my typical usage of Git and how it makes day-to-day refactoring easier.

I am a firm believer in the RGR (Red Green Refactor) cycle. But beyond that I do a lot of micro refactorings, like renaming variables and classes, moving behavior, adding tests etc., as I think more about what I have written and find better ways to express my code.

The problem when I used SVN was that I had only two choices: either pile up these small refactorings in my local copy before committing, which I don't really like, or commit each one individually and trigger the build for every small change I make. Also, if I pile up and commit, how can I express the intentions of the commit I am making at commit time? I never like to put a commit comment which blankly says "Refactoring and adding some tests.".

So how did Git change the picture? Git, as most of you know, works with two repositories: one may be on a remote server (in the case of GitHub it is, or an SVN repo if you use git-svn), and the other is on your local machine, which is also a complete repo of your source code. You can treat your local repo as a main repo and commit to it as many times as you want. This gives me a great advantage.

Every refactoring I do can be tested and committed to the (local) repository individually. I can express the intent of the commit for every refactoring. When I feel a substantial amount of work is done, I can push to the remote repo and continue with my next item of work. Also, when I push, I can see it as a series of pieces of work I have done on a feature, not as one bulky piece. Typically, I go through this code, refactor, test and commit cycle every five to ten minutes when I use Git. This is one amazing advantage I have when using Git. Similar feelings will be expressed by people who use Mercurial as well.

As an added advantage, when the build is broken and someone is fixing it, I can still commit to my local repo, continue working, and push my changes later. But we should be cautious not to pile up so much in our local repo that we distance ourselves from the mainline code base.

Similar thoughts on using Git and GitHub have been expressed by Mark Needham here.

Let me know if you have similar experience as this.

Mocks and Expressing Test Intent

I am still stuck in the mock world. Tests have been a central part of my coding life, and mocks play a crucial role in them. My opinion is that a test which doesn't express its intent explicitly has lost its value. It may run as part of a suite, but its value to the programmer as a way to understand the behavior, which is what matters, is lost.

So coming back to the main topic, how do mocks help in expressing your intention in a test? In short, mocks give a way to express constraints on how a collaborator should be used. Constraints are part of the intention, which in turn represents the specification. Confusing as usual? Let's look at an example specification and see how a mock helps us express it in our test.

Example spec or constraint: I am building a Greeting application to greet in multiple languages, and I use a third-party translator which I have to pay for every call I make. My default language is English. So when I want a greeting in English, I should not call the translator, so that I don't pay unnecessarily.

Following is the test code I wrote for this.

[Test]
public void DonotUseTranslatorWhenLanguageIsEnglish()
{
var translatorMock = new Mock<ITranslator>();  // ITranslator: the translator abstraction (interface name assumed)
translatorMock.Setup(mocktranslator => mocktranslator.Translate("English", "English", "Hello"));
var translator = translatorMock.Object;
var greeting = new Greeting(translator);
greeting.SayHello("English", "Sai");
translatorMock.Verify(tr => tr.Translate("English", "English", "Hello"), Times.Never());
}

The intention of this test is to verify that if the language is English, I don't use the translator. Here I use a mock translator because there is a constraint on its usage and I am interested in expressing this in the test. So I use a mock translator on which I set the usage constraint, and after exercising the behavior I validate that the constraint I set has been satisfied.

You may have similar constraints in your application, like retrying three times to connect to the credit card processor in case of failure, or sending mail in case of a critical failure in the application. Such situations are really good uses of mocks.

A note of caution with using mocks - mocks are a way to look into an object's inner workings, i.e. how it uses its collaborators. Unless you are really interested in this information, don't use them. If your interest lies only in the behavior of the object you are trying to test, use a stub for the collaborator instead.

Monday, August 17, 2009

Mocks and Verifying Expectations

Mocks are uber cool. They are really convenient for replacing dependent components, as well as useful for TDDing out the behavior of an object and its collaborators. But what I have seen people doing is using mocks and ceremoniously verifying the expectations (in the case of RhinoMocks, using VerifyAllExpectations), and I question this. Here are my views on this practice.

Example -
public void GreetInFrench()
{
var translator = MockRepository.GenerateMock<ITranslator>();  // ITranslator: the translator interface (name assumed)
translator.Expect(tr => tr.Translate("English", "French", "Hello")).Return("Bonjour").Repeat.Once();
var greeting = new Greeting(translator);
greeting.SayHello("French", "Sai");
translator.VerifyAllExpectations();
}


The above test has been written using the RhinoMocks framework. But the important point to note is: is it really necessary to verify, at the end of the test, whether the expectation we set on the translator mock is satisfied?

For me, a mock expectation is a form of assertion, and in a test I don't like to assert on something which is not related to the behavior of the object I am trying to evolve. When I pass a collaborator to an object through the constructor or some method, do I really care how the collaborator gets used, or do I care that the behavior of the object is as expected? Obviously I care about the behavior and not much about the collaborator. So the assertion should be on what the object returns when that behavior is exercised, not on whether collaborators have been used.

But when would I explicitly assert using a mock? When I really care about how the collaborator gets used, or when there is no way for me to observe the effects of the object behavior I am testing. Situations like these bring out the need for a mock.
For example:

1) If there is a condition which says retry three times to contact the mail server in case of failure, I set an explicit expectation to verify that the method on the mock is called three times in the failure test. Or I have a spec which says every call to the translator service costs $2; then again I explicitly verify, through a mock translator service, that the service is called only once.

2) Or when I am at the boundary of my system, working on something which sends messages to the outside world (like sending mails or printing something on screen; such methods typically return void), I have no way to observe the effects. Again, in such cases I use explicit verification to see whether the method has been called on the mock.


So for me a better way to write the initial test is

public void GreetInFrench()
{
var translator = MockRepository.GenerateStub<ITranslator>();
translator.Stub(tr => tr.Translate("English", "French", "Hello")).Return("Bonjour");
var greeting = new Greeting(translator);
Assert.That(greeting.SayHello("French", "Paulo"), Is.EqualTo("Bonjour Paulo"));
}

Here we use a stub for the translator, as I have nothing to verify against it and it is up to the greeting object to use it. I am happy as long as my expectations of the greeting object are met.

Every test we write is a form of specification, and mocks at times help us make some of the conditions in these specifications explicit. Setting unnecessary expectations will confuse the reader of the test, as it mangles the meaning of the behavior we are trying to evolve.

Well... These are my views on using mocks. If you feel the stuff is really confusing, let me know and I will try to explain it in a different way. Or if you have an alternate view, let's discuss :)

Wednesday, July 15, 2009

Setup and teardown are evil for a unit test

I have been writing different types of tests for quite a while now: TDDing with unit tests, writing acceptance tests with FitNesse, and tests with Watir. Except for FitNesse, for writing other kinds of tests I prefer to use some kind of xUnit framework, as it already gives me most of the infrastructure I need. I may build my own test framework on top of this if needed, but that is a different story.

Now to the problem of setup and teardown methods in unit tests. Note that these points apply to unit tests and may not carry over directly to other kinds of tests.

By definition, a setup method is used to create the infrastructure needed for the test to run, and teardown is used to clear any hangover state from the test run. So if these methods are useful, why are they evil for a unit test?

Here are the problems I find with setup and teardown methods:
  • Typically people use setup and teardown as places to factor out the common parts which need to run before and after each test method. The problem for me is that I then need to scroll up and down to the setup and teardown while reading a test; I cannot get the complete picture of the test without looking at these methods every time. I hope the test class is not so long that there is a need to scroll up and down, but I also feel that the meaning of a test should not be fragmented.
  • More importantly, using a setup method hides the details and complexity of the dependencies being created to make the SUT work, as this information is not directly visible in the related test method, which will look deceptively simple. Too many dependencies may be a candidate for refactoring.
  • Teardown is typically used for clearing the hangover from tests. The objects created in the test methods of a unit test should not be so heavy that we need a separate teardown method to clear them from memory; the garbage collector should be good enough to take care of them. More importantly, predominant use of teardown in a unit test may mean that some state is being created by the test or the SUT (bad; I don't think I need to talk about the importance of side-effect-free code :). Either way it is important to refactor so the tests become side-effect free.
Lesson - don't use setup and teardown in a unit test. They normally end up hiding the intent of the test and may be a signal for code smells like side effects.

So where would I use these? I will definitely use setup and teardown in end-to-end tests which I write in Watir, or in API-level tests which create some heavy resource. I will use these methods to implement the Build Operate Check Clear pattern which I discussed in one of my previous posts.

If you feel that you need to factor out some setup into a common place, create a setup method and call it explicitly in each test, so that it is explicit what dependencies we are setting up for that test method (see the sketch below).
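A small sketch of what I mean, in Python's unittest (the Greeter class is a tiny made-up SUT so the example is self-contained):

import unittest

class Greeter(object):
    # a tiny stand-in SUT, purely for illustration
    def __init__(self, default_name):
        self.default_name = default_name

    def greet(self, name=None):
        return 'Hello %s' % (name or self.default_name)

class GreeterTest(unittest.TestCase):

    def create_greeter_with_default(self, default_name='World'):
        # explicit helper instead of setUp: each test states exactly
        # which dependencies it is wiring up, and with what values
        return Greeter(default_name)

    def test_uses_default_name_when_none_given(self):
        greeter = self.create_greeter_with_default('World')
        self.assertEqual('Hello World', greeter.greet())

    def test_uses_given_name(self):
        greeter = self.create_greeter_with_default()
        self.assertEqual('Hello Sai', greeter.greet('Sai'))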

The approach and thoughts in this post may be a bit extreme, but they have helped me, and hopefully will help you as well, to make tests more expressive and code cleaner :). Let me know your thoughts...

Thursday, June 18, 2009

How to measure QA velocity?

Answer - trick question. There is no such thing as QA velocity. (Applause for those who answered correctly :). To make the point stronger, I have even seen separate QA burn-down charts and story walls.

I have always wondered what drives people to measure such numbers. I found that the main argument is that the QA backlog in a project is increasing, so we need to measure velocity to effectively manage and track testing resources. I don't think measuring QA velocity will lead to any meaningful answers, because testing is not a separate process from development. Unless a feature is developed, tested and accepted by the customer, it doesn't make any sense. I have seen people measuring the number of stories completed by the development team and having a "dev complete" phase; to reiterate my point, I think doing this also doesn't make any sense. In Agile, development, testing and working with the customer happen collaboratively and are not handoffs from one group to another. It is always the team that owns quality, not a group of people who are given the position of QA or tester.

So how do we help teams understand that there is a problem in the flow and that the constraint lies in a specific part of the development process? I use the term flow specifically because I believe software is developed as a flow of value (throughput); it doesn't happen in phases (a staggered waterfall model), and handoffs don't happen between groups. When working with teams with such a problem, one tool which really helps me is the kanban board. Not that I am joining the kanban bandwagon, but I think it is really a good add-on to the normal story walls we have when we want to identify the constraint which is obstructing the flow.

Illustration of a kanban board is shown in the figure.


One of my favourite features of the kanban board is the WIP limit in each swim lane. I will not describe each phase on the board, but the number in each phase is the maximum number of work items allowed in that lane. If the number of items becomes too low, we need to pull work from upstream. But if the number of items in a lane is high, it clearly indicates the constraint. Say the number of items in the testing lane becomes high for some reason; then maybe the developers on the team need to pitch in to see if things can be automated, or help the QAs with some of the tasks like writing fixtures or Watir scripts (I love Watir :). If the number of items in the development lanes becomes high, QAs can pitch in to help write fixtures for acceptance tests or set up the environment and build, and sometimes write some production code as well (it is really a good way to learn the system internals). I normally help the team set the maximum number of items in each lane by discussing with them to understand their capacity.

The example I use for software development is the flow of water through a pipe. It is not a staged process. If the water flows out of the pipe, it is good and adds value. If there is some problem in the flow, it is good to know where the problem is and correct it to bring the flow back to normal. It is better to concentrate on flow and throughput than on individual stages.

Let me know what you think and share your experiences through comments. :)

Wednesday, May 27, 2009

Business Readable Tests or Writable Tests?

There has always been a quest to help business users or clients write tests for their applications directly. Elaborate frameworks like FitNesse, Cucumber and Concordion have been built to help write tests in English-like language. The reasoning is that business experts know the domain and the problem they want the software to solve, and so are in the best position to set the expectations for the software being developed. But there are several catches to business experts writing tests directly.

1) Acceptance tests are much more than verification tests. While building acceptance tests, the business experts discuss them with the team. As a result of these discussions, knowledge and understanding are shared, helping the team understand the domain and the problem better. I think this is very important for the team as well as for the evolution of the software being built. If the users are able to build tests by themselves, it may lead to a situation where they create the tests and just hand them over to the team as validations. Although the validation is important, the knowledge-sharing part may be missed.

2) This reason may not apply to all business experts. Writing tests is also writing code. Tests tend to use English-like language, but they are never natural language; they are just higher-level abstractions over code. So there are still conventions like naming, casing and passing arguments to follow when writing these kinds of tests, or they will not work. Also, normal programming practices like refactoring apply in this context as well. So to do this effectively, I think it is better for a business expert to work with the technical team to write these tests. (On a different note, I believe that tests written in the domain language are better than tests in English-like language; capturing the domain is the important part, not the English.)

So, as they say in the DSL world, it is good to have business-readable tests so that business experts can actively participate and help build and use them. Business-writable tests are not impossible, but they are difficult for now. Some concepts in the programming world may pave the way to making this a reality; things like Language Oriented Programming, external DSLs and Intentional Programming may help us in the future to work with business experts to construct tests. But it is good to keep in mind that it is the sharing of knowledge that is important for building better software, and tests facilitate that.

Sunday, May 17, 2009

Value of small and focussed tests

I have always advocated this, but I still learned the same lesson again last week, and that is what drove me to write this (it also serves as a note to myself). I am working on a project which is a large inherited legacy app. Most of the application has been rewritten except for a few parts, and the work I was doing flows from the new parts to the old parts. It all started well, as my work spanned mostly new parts: the normal Red -> Green -> Refactor cycle. After my work was completed, I wanted to check the end-to-end flow, so I ran the Selenium test. It all went fine until the app reached the old code. Then all hell broke loose. A table which was supposed to show past orders was not coming up. We weren't able to localise the problem, as there were no tests covering that area. We knew what was broken but not where, and so had to debug through hell for the next three hours to find what was happening, solve the problem and write tests.

It is nice that I had the Selenium test and that it helped me catch the problem. But it couldn't do more than that for me (or at least I couldn't do more :). What I really wanted was a test which could tell me exactly where things were broken, like the x-ray a doctor uses to find a broken bone, and a broad, shallowly focused Selenium test couldn't show me that. A better test for this situation would be a unit test or a focused integration test which can tell me exactly where the app is broken. Localizing a defect is a very important characteristic of these kinds of tests, and it is this property that matters when something is broken. It is frustrating to have a test which says something is broken but not where. Also, even if we have a unit test, it should not be testing too many things at a time, as we will not know what failed. So we need small tests, each focused on testing one behavior.

Does that mean that if I have these tests I don't need to debug? Well, sometimes I don't, but most of the time they help me find exactly where I need to debug to see the values coming in. I don't need to debug through all the layers, or across layers, to find the five lines of code with the problem.

So writing small and focussed tests is good, and if you have a legacy code base it is even more important to start doing this as soon as possible. I like writing small and focussed tests even at the functional level, though I have end-to-end tests as well.

P.S. Maybe I will add code examples to this post, but right now my code highlighter is broken, so I am deferring that till I fix it :)

Monday, May 4, 2009

Does ATDD deliver value to an agile team?

I am writing this as my response to Brian Marick's view of ATDD, which I found in this interview with him at Agile 2008, as well as to conversations happening on Twitter.

I really like the three ways he describes for driving the design and testing the app:

  • Doing heavy amounts of unit testing
  • Using white boards and charts to draw, discuss and distill the knowledge
  • Use large amounts of manual exploratory testing
All three methods are really good in helping to develop the product better.
But I have reservations about a few of the statements he makes about automating acceptance tests, and the following are my views on that.

I think the name ATDD is bad, or at least it doesn't convey its actual purpose. I don't know how the practice of ATDD started, but I never think of it as an extrapolation of TDD, as Marick describes it. I see TDD, ATDD and ET (Exploratory Testing) as different things which help us in three different ways. There may be some overlap in the function of all three, but compared to their primary purposes I think this overlap is minimal. ATDD is not an extrapolation of TDD, because TDD is not about unit testing.

For me, TDD drives how classes collaborate and which class takes up which responsibility. Exploratory testing is completely about using the power of human minds to creatively explore and test the application (not just the application; you can explore the design, the architecture, pretty much anything). Acceptance Test Driven Development, for me, is articulating and crystallizing the domain knowledge we need for our application.

For me these three methods are different and serve different purposes. It is incidental that TDD and ATDD artifacts are also used as tests, but that is a secondary effect. And I feel that we need all three kinds of tests in a project.

ATDD drives the design of your application's domain, not so much from a technical point of view as TDD does. It helps more in concentrating on the domain, DDD and the ubiquitous language. It helps make the domain rules clear and captures that knowledge as an executable verification mechanism. Doing this helps prevent defects, which is much more powerful than finding them at a later stage. So expecting acceptance tests to drive design from a technical standpoint may not be the most profitable approach. They do help in separating layers and a few other design points, but they help more with domain-based design.

It is possible to do this by drawing diagrams on a whiteboard and discussing with the domain expert and customer. We do this in ATDD too, but after we discuss, where do we keep this knowledge? How do we stop domain ambiguity from creeping in? I think it is better to crystallize the knowledge as tests than to keep it as text in wikis (for me English is always ambiguous, so I prefer tests).

Another important question is: after we develop some complex business logic, how do we regression-test it at a later stage when we want to know it is still working fine? Nobody wants to make these drawings while developing a feature only to throw them away afterwards, and we rarely develop software where we don't revisit what we wrote a few iterations back. So how do we regress? Unit tests help, but how do we know the domain logic and business rules are working fine? Doing ET for this is not a great way to approach the problem; I like to explore features which are being developed, not do exploratory testing of the same feature again and again. ATDD artifacts, which are also tests, help us with this regression.

As for acceptance tests being fragile and costing more in maintenance: it may be because we are approaching them wrong. TDD also costs more if we approach it wrong. The problem may be that we are trying to write the tests at a layer of the app where we shouldn't. I prefer not to write acceptance tests driven from the UI, as I feel that is the wrong layer (I do have some end-to-end UI tests in Watir). And as Lisa Crispin said, acceptance tests also need care to stay alive and working all the time, giving us valuable feedback.

So from my point of view, we need all three forms of tests, as they take up three different kinds of roles. I also wish we could change the name ATDD, as people may assume that just because it has TDD embedded in it, it is an extrapolation of it. I like the names Executable Specification or Examples.

Conclusion - I feel ATDD does add value, provided you set the right expectations for it and approach it right. Don't expect it to help you in a way it is not meant to; maybe you are looking for something else.

Last but not least, if you want to know how to use acceptance tests to achieve fast feedback, please attend Elisabeth Hendrickson's Acceptance Test Driven Development workshop. She has a wonderful way of demonstrating this, and I learned a lot from her.

Needless to say I always welcome your views and thoughts. If you want to discuss more please add it as comments :)

Saturday, May 2, 2009

Why I don't like overworking in a project

The normal reasons: this happens with an over-enthusiastic team. They sign up for more and find that they can't finish what they committed to. They put in some extra effort. After a few iterations they reach a balance. Also, overwork is not sustainable; the team burns out. All the stuff you already know.

But there is another form of overwork I have seen (should I even call it overwork? If you have a better way to express it, let me know). A more subtle one. I am not sure how many have observed or felt this. How many of us have gone to the office on a Sunday and seen someone from our team working on something related to the project? It is nice to have these kinds of committed people on the project, right? It is good that they take up something and help the project grow. But there is a risk attached to this.

The problem is that the work of these people tends to set unrealistic expectations with the customer. As long as they are on the team, the team tends to keep a good velocity. When they leave the team, the velocity drops to a lower level (which was probably the actual velocity all along). This leads to all kinds of confusion, with management and customers scurrying to find out how to improve productivity, unless they understand the real problem. The advantage with Agile is that, because of the short iterations, it is easy to notice this effect very soon after they leave.

The second problem, as most would already know, is that since they work alone, they may tend to leave the team behind and learn new things about the project and the code by themselves. That is fine, and they can hold workshops to share what they learned. But when it comes to work related to a project, I believe in learning with the team, not sharing knowledge in isolated chunks.

Most of the time these people do this because they are interested in writing code or doing technical work. When you identify such people, help them direct their potential in other ways:
  • Contributing to open source projects
  • Involving them in a coding dojo or a technical book club
  • Participating in BarCamps, DevCamps or local user groups
  • Helping with recruitment
Also help them to understand the value of whole team learning and working together.

I have written my thoughts. Let me know yours. Have you worked with enthusiastic and committed people like these? How did you help them? What are the problems you faced?

I really like Alistair Cockburn's metaphor of Agile as a cooperative game. It gives meaning to the whole-team approach of working and learning. I also like building knowledge which is not limited just to what we are doing in the project.

Friday, May 1, 2009

Build Operate Check Clear - Test Pattern

This is a common pattern which a lot of people, including me, know and practise, but I am writing it here as a reference for myself as well as for people who are not familiar with it.

It is a common pattern for writing tests, the idea being to initialize the infrastructure and SUT we need for the test and to clean up after we test. The pattern applies to any kind of test, unit or UI level, but the Clear phase may depend on the kind of fixture we use. It helps, to a degree, to keep tests reliable by making them independent, but this largely depends on how the Build and Clear phases are run.

Build - create the infrastructure we need for the test and initialise the system under test (SUT) to a state where we can start testing it. In the case of a unit test we may set up mocks or the objects we want to test; in the case of a functional test we may bring the application to the page or screen where we want to test.

Operate - exercise the behaviour we want to test: work on the SUT in the case of a functional test, or call the relevant methods on objects in the case of a unit test.

Check - assert that exercising the behaviour has produced the relevant effects: assert on the return value of a method, or verify expectations on a mock.

Clear - this phase depends on what kind of fixtures have been set up for the test. It helps keep tests independent, as it brings the world back to the state it was in before the test started running. The setup (Build) and Clear stages may be run at suite level, test case level or test level depending on the cost of running them; the more frequently, the better, as that reduces the tests' interdependency.

In the case of a transient fixture which lives in memory, we don't need to clear it explicitly; the garbage collector will do the work for us. If it is a permanent fixture, like files written or services set up, we need to bring them down explicitly. If a test fails or an exception is thrown while the test runs, the Clear phase should still run to bring the SUT back to its base state.
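A minimal sketch of the pattern for a permanent fixture, using Python's unittest (the file-backed store here is a made-up stand-in for a real SUT):

import json
import os
import tempfile
import unittest

class FileStore(object):
    # a tiny stand-in SUT so the example is self-contained
    def __init__(self, path):
        self.path = path

    def save(self, data):
        with open(self.path, 'w') as f:
            json.dump(data, f)

    def load(self):
        with open(self.path) as f:
            return json.load(f)

class FileStoreTest(unittest.TestCase):

    def test_saved_data_can_be_read_back(self):
        # Build: create the permanent fixture and the SUT
        path = os.path.join(tempfile.gettempdir(), 'filestore_test.json')
        store = FileStore(path)
        try:
            # Operate: exercise the behaviour under test
            store.save({'total': 100})
            # Check: assert the expected effect
            self.assertEqual({'total': 100}, store.load())
        finally:
            # Clear: runs even if Check fails, so the next test starts clean
            if os.path.exists(path):
                os.remove(path)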

I have found this pattern useful when writing tests. It is simple and effective, and I am sure a lot of you practise it already. If you have any good test patterns, let me know; I am interested in collecting these patterns and documenting them.

Please let me know your feedback through comments :).

Sunday, April 26, 2009

FireDriver => FireWatir + WebDriver Experiment

I think I have become the evil scientist of the Watir world, creating all these experiments :). Well, everyone knows the story. FireWatir is great, especially after the integration with Watir and getting the same API. There are people out there working busily to iron out the differences in the API and make things better. But the FireWatir world is not all perfect. JSSH, which FireWatir depends on to drive Firefox by injecting JavaScript, is legacy code, or more accurately orphaned code (no maintainer). The code base is a black box (at least for me; not sure for others, but Angrez will know the internals). We know how to use it, but any defects on its front have practically no owner, and there is no scarcity of defects either. The interface with Firefox is also weak. Enough ranting, I think you get the picture: to make FireWatir better we must move away from JSSH.

Thus this experiment started: an effort to save FireWatir from JSSH. Looking at the options, I really like the way WebDriver works with browsers through a native code layer to automate them. I think Bret and Charley will agree with me on this. So with Simon, Bret and Charley's help, I set out to do a spike on using WebDriver's Firefox driver to drive Firefox from Ruby and use it with Watir.

I didn't want to consume the WebDriver jar from JRuby and wrap it with the Watir API, as this would not help the people who use native Ruby. So I am actually using the JavaScript extension which is the core of WebDriver's Firefox driver. This extension sits inside Firefox and drives it according to the commands it gets over the wire in a JSON-based protocol. If you want more details about the implementation, please check out the code in the FireDriver project on GitHub. There are a few things, like setting up the extension and installing the gems, which you need to do by hand for now (they will be automated soon); the way to do this is in the wiki. Again, if you need any help with this, please let me know.

The solution is not perfect yet; it is an experiment. There are synchronization problems and the API is very small. I am working on making it evolve better. If you wish to contribute, please fork the repo and start working on it right away. That's the Git way :).

Thank you Jijesh for helping me in this. Thank you Bret, Charley and Simon for supporting me.

Tuesday, April 14, 2009

Invest in tests wisely for a secure future for your application

Tests exist for different reasons: as design drivers, as a safety net, as executable specifications. The reason I am talking about here is the safety net, where your tests help you understand whether something is broken in the application after a change has been made. For an application which is in flux, this is a really important use of tests.

It is possible, and many will be doing this, to write tests at different layers: we can have tests at the domain layer, at the service layer and at the UI layer. The metaphor I use for tests across these layers is "the different tests are your application's investment portfolio" (WARNING - I am not really an expert on finance terms). Invest wisely and the application will live happily; invest badly and the investment is going to drag the application down.
As with a real investment portfolio, tests are a risk-limiting strategy. Nobody with their brain working will put too much into one kind of investment and ignore the rest; that only increases the risk. So it is important to distribute the investment across the options we have, and the percentages we use for that distribution really matter.

I am not going to say here that investing in UI tests is a bad thing, as it all depends on the context. In general, UI tests are fragile, slow and have a high probability of dragging the application down, but they are the easiest to create (with all the cool drivers like Watir and Selenium, recorders and everything). On the flip side, domain-layer tests are fairly stable and run fast, but they test units in isolation (this has both positive and negative aspects).

The biggest problem I have is when a functional test fails because the id of a button has changed. From a user's perspective, I don't care as long as the functionality works. So this raises a question: should I be testing functionality through the UI? We have to have some of those tests, because that is how the app is going to be used, so I tend to keep about a dozen end-to-end UI tests. In my case anything more than that will potentially turn into pain at the end of the day.

So before you go ahead and create a recorded test using that shiny new tool you have, ask yourself whether it is the right investment you are making for your application. Secure the future of your app with the right investment :)

Thanks @lisacrispin - Look at your ROI when thinking about investing in a kind of test. (Warning - ROI doesn't mean Excel sheet which your managers use :)

(I kind of tend to use this as my investment guideline - http://c2.com/cgi/wiki?TestFoodPyramid)
(Picture from Flickr. Thanks to lastminute.)

Sunday, April 5, 2009

Performance Testing Ideas in Agile Context

It all started for me as an attempt to understand how performance testing fits into Agile. All my searching for useful info led me to the same place: getting the client requirements, finding the load pattern, simulating the load... normal stuff. But my question of how to find a simple, effective way to do performance testing earlier and more often remained. After a few hours or days of brainstorming (I forget :D) and posting on Twitter, here is the list of points I could come up with.

Some of them are ideas and some are things I have done before, like using functional tests. I would like to keep this as a living document, constantly updating it when new ideas come up. I would also encourage you to contribute. Ideas can be crazy, as long as they are plausible :).

1) When writing some complicated functionality which may create performance or scalability problems, use something like JUnitPerf along with TDD to continuously test how it performs. The functionality, as I say, need not be end to end but just the unit which is complicated. This narrows the area of concentration as well as helping localize performance problems.

2) Use profiling to understand the application performance and memory usage early.

3) Place some performance metrics in the functional tests, like the Selenium or Watir suite, and graph the test suite execution time to find abnormalities earlier (thanks @jtf and @cgoldberg). If there is a sudden increase in execution time, there may be a problem.

4) Use YSlow, PageSpeed and Firebug to identify performance bottlenecks on the client side, like too many HTTP requests or browser-side caching issues. It is important to understand the bottlenecks across the different layers, and in a typical Agile iteration the app gets created in thin vertical slices of functionality rather than horizontal ones. I really like Adam Goucher's idea of integrating YSlow reports into the Continuous Integration process to understand the bottlenecks. Check his post here: http://adam.goucher.ca/?p=1017.

5) When performance is one of the primary considerations, use a continuous performance benchmarking tool, maybe with your CI build pipeline, to understand the changes in performance over time. (Thanks @cgoldberg)

6) Within an iteration, when developing a piece of functionality, use something like Pylot or JMeter to identify the bottlenecks in that functionality and clear them. Before release it is important to test the application in realistic conditions, so use something like BrowserMob or SOASTA to performance test with real browsers from the internet (thanks @bradjohnsonsv). I have spoken about testing in the cloud at CloudCamp before :)

7) Discuss with the team and identify the patterns in code which can affect performance. Use a code quality tool to catch these patterns in the build process if possible.

8) Wrap all integration-level or API-level calls in a timer, so the system logs always contain timing information. Then you can always mine the system logs in dev/test/staging/prod for real performance information, and fix anything that takes too long. Thank you Chris McMahon.
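A tiny sketch of point 8 in Python (the wrapped function is just an illustration):

import functools
import logging
import time

def timed(func):
    # wraps an integration-level call and logs how long it took, so the
    # system logs always carry timing information that can be mined later
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        try:
            return func(*args, **kwargs)
        finally:
            logging.info('%s took %.3f seconds', func.__name__, time.time() - start)
    return wrapper

@timed
def fetch_orders(customer_id):
    # imagine a call out to an external service here
    time.sleep(0.1)
    return []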

I have faced two problems with respect to performance work in Agile teams. One is the "premature optimization is evil" mindset; there is a difference between premature optimization and context-sensitive optimization, which is really needed. The second is that in Agile, most of the time the user stories revolve around functionality: "As a user I would like to...". When giving the stories, customers assume that it will perform, and the developers miss it because it is not stated. Testers need to pitch in and ask, if the functionality is business critical, what the performance considerations for it are (Lisa's Agile Testing book has an excellent chapter on this).

Well, that is all I have to say for now, but this is in no way complete. I would really love to have people with much better knowledge than me pitch in here to correct my mistakes or contribute better ideas.

Feel free to comment on this post :).

Tuesday, March 31, 2009

My first podcast - Watir Podcast :)

The Watir podcast featuring me is finally done. Thanks to Zeljko Filipin for doing this wonderful podcast. Also thank you dude for making my voice sound nice and for a really good interview. I also like the music played at the beginning :).

We discussed a lot of stuff, especially the open source projects I maintain, the technologies I work on, and Chrome and Flash Watir. Sometimes my voice swings up and down, maybe because of the internet connection or Skype. Sorry for that :).

The podcast is hosted at watirpodcast.com. There is a whole bunch of Watir podcasts available there.

For those interested in listening, you can download or play it directly here. Your comments are welcome; please let me know your feedback.


watir_podcast_20.mp3 [26:45m]

Saturday, March 21, 2009

Want to learn Agile Clean Code and Craftsmanship in a cool way?

Reading Lisa's post on Ten Principles for Agile Testers, I found this really good site, Agile in a Flash, which gives you Agile, Clean Code and Craftsmanship principles as flash cards. There were around forty cards on the site as individual blog posts. I was going through them one by one and they were really good. I wanted to download each one of them to print and use for my Agile training and as a reference when I speak with others, but going through them post by post is really difficult. I will write a Mechanize script for that in time :) (a rough sketch of what it might look like is below), but before that I found a really cool way of reading all of them in one place.
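For the curious, here is the rough shape such a Mechanize script might take. This is only a sketch: the entry URL, the link pattern and the WWW::Mechanize namespace (newer versions call it just Mechanize) are assumptions to verify before use.

# Rough sketch only: walk the blog's post links and save each card locally.
require 'rubygems'
require 'mechanize'

agent = WWW::Mechanize.new
home  = agent.get('http://agileinaflash.blogspot.com/')   # assumed entry point

# Blogger post URLs typically look like /YYYY/MM/title.html
post_links = home.links.select { |link| link.href.to_s =~ %r{/\d{4}/\d{2}/.+\.html$} }

post_links.each do |link|
  post = link.click                       # fetch the individual card post
  file = File.basename(link.href)
  File.open(file, 'w') { |f| f.write(post.body) }
  puts "saved #{file}"
end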

If you have Firefox, you can install a plugin called Cooliris. This plugin can be used to view pictures, and it shows them against a really cool black background where you can scroll up and down, Mac style. When I moused over a card I got the Cooliris icon asking if I wanted to open it. I gave it a try and, to my surprise, it showed me all the Agile flash cards neatly arranged. Check the screenshot out :). Now I can use this as a way to show these cards to the people I speak with or train.... You can't learn everything in Agile with this, but it is a start. Go Firefox and Cooliris :)

Thursday, March 19, 2009

JWatir, A Watir on JRuby Experiment

Edited 20 March 2009

I have been seeing a lot of posts on the Watir mailing list about JRuby support for Watir. I have always wanted this, as I would like to use Watir with other Java libraries out there. The major stumbling block in making Watir work with JRuby is win32ole support, and given the direction JRuby is developing in, I don't think that support is going to happen any time soon.

There are different ways to solve this problem. One was using something like Jacob to control COM components from Java or JRuby, but with Jacob I would need to build everything from the ground up myself. So the next natural choice was to consume WebDriver's IE driver. I wrote a small wrapper around the IE driver to expose it as the Watir API and am calling it JWatir (help from anyone with a better sense for naming libraries is welcome).
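To give a flavour of the approach, here is a minimal sketch of the wrapping idea. This is not the actual JWatir code; the JWatir::IE class and its method names are illustrative. The wrapper simply delegates Watir-style calls to WebDriver's InternetExplorerDriver:

# Run under JRuby with webdriver-jobbie.jar on the classpath.
require 'java'

module JWatir
  class IE
    def initialize
      @driver = org.openqa.selenium.ie.InternetExplorerDriver.new
    end

    def goto(url)
      @driver.get(url)
    end

    # Watir-style element methods delegate to WebDriver lookups underneath.
    def text_field(name, value)
      @driver.find_element(org.openqa.selenium.By.name(name)).send_keys(value)
    end

    def button(name)
      @driver.find_element(org.openqa.selenium.By.name(name)).click
    end
  end
end

# The Google search example from the post, expressed through the wrapper.
browser = JWatir::IE.new
browser.goto('http://www.google.com')
browser.text_field('q', 'watir')
browser.button('btnG')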

The code is available on the Watir mailing list. The functionality is really small and supports only text fields and buttons. There is one quirk in running the code: WebDriver's InternetExplorerDriver.dll, available inside webdriver-jobbie.jar, needs to be extracted and placed somewhere on the system's accessible path. Once the dll is in place, the test can be run.

The code is still at a very early stage and mostly a kind of hack. It also has practically no test coverage except for the Google search example I have written. So no serious applications with JWatir for some time, except for developing it further :).


One more thing: I will also be moving it to GitHub soon. Any comments and views are welcome.

Thank you, Simon and the WebDriver team, for creating a wonderful and solid WebDriver API.

Wednesday, March 11, 2009

How are my blog and Twitter coming along? Check this out

I used Wordle to create a tag cloud of my blog some months back and posted it here. Today I wanted to see whether anything has changed and how my blog has evolved. So here are the tag clouds generated for my blog and Twitter.

Twitter cloud -



Blog cloud -



See how Watir and Testing are highlighted :). Also, comparing my blog's previous tag cloud with the current one is interesting. So what is your tag cloud?

Sunday, February 22, 2009

Watir, Is it good or just good enough?

Update - Added a few more points and links

When I was going through a few things looking for some information, I found a lightning talk by Alister Scott about Watir at AWTA. The concluding thought struck me and made me think for quite some time. It read: "Should Watir be really good at a few things (IE, Firefox testing) or just good enough at a lot of things (IE, Firefox, Chrome, Opera, Safari...)?". Interesting point...

As an active Watir user and the author of ChromeWatir, I would like to present my point of view on this subject. A lot of people will already know this, but I have wanted to write about it for some time and this point gave me the opportunity.

I feel that Watir has seen a huge paradigm shift, especially in the last year. My thinking is that it has shifted from being a single test driver implementation to an API to which individual drivers conform. If you look at my slides on Watir, please take a look at the Watir Architecture slide; there I have separated the Watir API from the individual implementations.

There are several advantages to this approach. One is that the end user needs to learn only one API to write tests for any browser, and it becomes possible to switch the browser being tested simply by changing the implementation. A bigger advantage for the Watir developers is that the individual implementations are being done by separate teams. So even though Watir supports different browsers, there are individual teams responsible for developing and maintaining each of them.
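Here is a small sketch of what that looks like from the test writer's side. The class names vary between the real projects (Watir::IE, FireWatir::Firefox, ChromeWatir's browser class), but the test body written against the common Watir API stays the same:

require 'watir'          # IE implementation
# require 'firewatir'    # Firefox implementation

def search_for(browser, term)
  browser.goto('http://www.google.com')
  browser.text_field(:name, 'q').set(term)
  browser.button(:name, 'btnG').click
end

# Switching browsers only means changing the line that constructs the browser.
browser = Watir::IE.new
# browser = FireWatir::Firefox.new
search_for(browser, 'watir')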

Keeping the interface separate is really important for this. The implementations need not be merged, which helps in delegating responsibility to individual teams. In the future we may have even more implementations, but there will always be people who know each implementation well enough to take care of it. So if we take this point of view into consideration, I don't think we need to worry about Watir being just good enough. Watir will always be good.

Talking about all the green and lush things is good, but I like to think about both sides of the problem. The problems I see with this approach are that, except for the mainline IE implementation, all the other Watir implementations are young and have some differences from the original one. It may take some time for them to stabilize and get on the same footing as the original implementation, which is understandable. Also, getting disparate teams to work together may be difficult as they each have a different way of working. There is no ready-made solution to this.

Let me know your thoughts on this.

Friday, February 20, 2009

ChromeWatir 1.5.0 Released

I am happy to announce that we have released a new version of ChromeWatir. You can get the gem or the source from the project page. We have been working on it for quite some time, and I am the one to blame for a long spike on Chrome's AutomationProxy that I have not completed yet :).

What is new in this release

  • Support for table and file field elements
  • Support for element collections like links, images, etc. (a usage sketch follows below)
  • Refactoring and defect fixes in the launcher code
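For a flavour of what the new support looks like in use, here is a sketch that assumes ChromeWatir follows Watir-style locators; the require name, the browser class and the page URL are guesses for illustration, not documented ChromeWatir API:

require 'rubygems'
require 'chromewatir'                      # assumed require name

browser = ChromeWatir::Browser.new         # assumed class name
browser.goto('http://example.com/reports')

# Element collections (new in 1.5.0): iterate over every link on the page.
browser.links.each { |link| puts link.href }

# Table support (new in 1.5.0): read the first table's row count.
puts browser.table(:index, 1).row_count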
ChromeWatir is still in alpha, but we have received some really good feedback and support to make it better. Thanks to everyone who helped and encouraged us.

Now that we are done with this release, I think it is time to start working on the C++ code to use AutomationProxy for the next one ;).

Tuesday, February 17, 2009

Watir slides for Ruby and Bangalore BarCamps

I have wanted to post a few things on this blog for quite some time, but I have started liking Twitter more as it does not make me type so much :). That is not to say I will abandon this blog; I will still be posting here and keeping things updated.

Anyhoooo, I am speaking about Watir at RubyFunDay and Bangalore BarCamp. Here are the slides for the talk. If you are there, let me know; we can discuss Agile testing, open source testing tools, polyglot stuff and Watir.


Monday, January 26, 2009

Rapa.kind_of? ActiveResource # For Java

Check out Rapa - an ActiveResource-like REST client for Java.

See the project page and wiki for info about downloads and usage.

We are working on making it better and supporting more functionality like message formats and authentication. Please let us know your feedback so we can improve Rapa.

Wednesday, January 21, 2009

ChromeWatir new release is available!!!

It just feels like yesterday... And now we have a new release of ChromeWatir-1.4.0.

What's new in this release?

  • Support for element containers like frames, span and div
  • Refactored the locators and made them better :)
  • More test coverage and improved docs (check out the wiki)
This release came quicker than we expected, but it is a good one. Get the downloads from the project page.

Future releases... they are quite far away ;). We have lots of things planned. Using Chrome's automation framework is one of them; compatibility with the Watir API and support for tables and element collections are also on the list. If you have anything in mind, add it to the wish list we will be putting up on the wiki soon.

I have also added a ChromeWatir page to the Watir wiki. Check it out here.

Download the gem, install it, use it, and let us know your feedback.

Friday, January 9, 2009

Announcing the release of ChromeWatir

I am happy to announce that the first version of ChromeWatir has been released today. The API follows a convention similar to Watir, for the Google Chrome browser. Please visit the ChromeWatir Google Code website for more details.


We are in the process of making it better by stabilizing it as well as adding more functionality.

The first release is available as a gem as well as source on the downloads page.

Please give it a try and share your feedback.