
Sunday, April 26, 2009

FireDriver => FireWatir + WebDriver Experiment

I think I have become the evil scientist of the Watir world, creating all these experiments :). Well, everyone knows the story. FireWatir is great, especially after its integration with Watir and getting the same API. There are people out there busy working to iron out the differences in the API and make things better. But the FireWatir world is not all perfect. JSSH, which FireWatir depends on to drive Firefox by injecting JavaScript, is legacy code, or more accurately orphaned code (no maintainer). The code base is a black box (at least for me; I'm not sure about others, but Angrez will know the internals). We know how to use it, but any defect in it has practically no owner, and there is no scarcity of defects either. Its interface with Firefox is also weak. Enough ranting; I think you get the picture. To make FireWatir better we must move away from JSSH.

Thus this experiment started: an effort to save FireWatir from JSSH. Looking at the options, I really like the way WebDriver works with browsers through a native code layer to automate them. I think Bret and Charley will agree with me on this. So with Simon, Bret and Charley's help, I set out to do a spike on using WebDriver's Firefox driver to drive Firefox from Ruby, for use with Watir.

I didn't want to consume the WebDriver jar from JRuby and wrap it with the Watir API, as that would not help people who use native Ruby. So I am actually using the JavaScript extension which is the core of WebDriver's Firefox driver. This extension sits inside Firefox and drives it according to the commands it gets over the wire in a JSON-based protocol. If you want more details about the implementation, please check out the code in the FireDriver project on GitHub. There are a few things you need to do for now, like setting up the extension and installing the gems (these will be automated soon). The steps are in the wiki. Again, if you need any help with this, please let me know.
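To make the "commands over the wire" idea concrete, here is a minimal Ruby sketch of a client talking JSON to an in-browser extension over a socket. Everything here is illustrative: the default port, the command names, and the newline-delimited framing are assumptions for the sketch, not the actual FireDriver wire protocol (see the GitHub repo for the real thing).

```ruby
require 'socket'
require 'json'

# Hypothetical client for an in-browser automation extension.
# Sends one JSON command per line and reads one JSON response per line.
class WireClient
  def initialize(host = 'localhost', port = 7055)
    @host = host
    @port = port
  end

  def send_command(name, params = {})
    payload = JSON.generate('command' => name, 'parameters' => params)
    socket = TCPSocket.new(@host, @port)
    socket.puts(payload)            # newline-delimited for simplicity
    response = socket.gets          # one line back from the extension
    socket.close
    JSON.parse(response)
  end
end
```

A caller would then do something like `WireClient.new.send_command('goto', 'url' => 'http://example.com')`, with the extension inside Firefox executing the command and replying with a JSON status.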

The solution is not perfect yet; it is an experiment. There are synchronization problems and the API is very small. I am working on evolving it. If you wish to contribute, please fork the repo and start working on it right away. That's the Git way :).

Thank you, Jijesh, for helping me with this. Thank you, Bret, Charley and Simon, for supporting me.

Tuesday, April 14, 2009

Invest in tests wisely for a secure future for your application

Tests are there for different reasons: as design drivers, as a safety net, as executable specification. The reason I am talking about here is the safety net, where your tests help you find out whether something in the application is broken after a change has been made. For an application that is in flux, this is a really important use of tests.

It is possible, and many do this, to write tests at different layers. We can have tests at the domain layer, at the service layer and at the UI layer. The metaphor I use for tests across these layers is "the different tests are your application's investment portfolio". (WARNING - I am not really an expert on finance terms.) Invest wisely and the application will live happily; invest poorly and the investment is going to drag the application down.
As with a real investment portfolio, tests are a risk-limiting strategy. Nobody with their brains working will put too much into one kind of investment and ignore the rest; that increases the risk. So it is important to distribute the investment across the options we have, and the percentages we use for that distribution really matter.

I am not going to say here that investing in UI tests is a bad thing, as it all depends on the context. In general, UI tests are fragile, slow and have a high probability of dragging the application down. But they are the easiest to create (with all the cool drivers like Watir and Selenium, recorders and everything). On the flip side, domain layer tests are stable and run fast, but they test units in isolation (which has both positive and negative aspects).

The biggest problem I have is when a functional test fails because the id of a button has changed. From a user's perspective, I don't care as long as the functionality works. So this raises a question: should I be testing functionality through the UI? We have to have those tests, because that is how the app is going to be used. So I tend to keep about a dozen end-to-end UI tests. In my case, anything more than that will potentially turn into pain at the end of the day.

So before you go ahead and create a recorded test using that new shiny tool you got, ask yourself whether it is the right investment you are making for your application. Secure the future of your app with the right investment :)

Thanks @lisacrispin - Look at your ROI when thinking about investing in a kind of test. (Warning - ROI doesn't mean the Excel sheet your managers use :)

(I kind of tend to use this as my investment guideline. Picture from Flickr; thanks to lastminute.)

Sunday, April 5, 2009

Performance Testing Ideas in Agile Context

It all started for me as an idea to understand how performance testing fits into Agile. All my quests for useful info led me to the same place: getting the client requirements, finding the load pattern, simulating the load... the normal stuff. But my question remained: how do we find a simple, effective way to do performance testing early and often? After a few hours or days of brainstorming (I forget which :D) and posting on Twitter, here is the list of points I could come up with.

Some of them are ideas and some are things I have done before, like using functional tests. I would like to keep this as a living document, constantly updating it as new ideas come up. I would also encourage you to contribute. Ideas can be crazy but plausible :).

1) When writing some complicated functionality which may create performance or scalability problems, use something like JUnitPerf along with TDD to continuously test how it performs. The functionality, as I said, need not be end to end but just the unit which is complicated. This narrows the area of concentration and helps localize performance problems.
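In Ruby land, the JUnitPerf idea boils down to a timed assertion you can drop into any unit test. This is a sketch, not JUnitPerf itself: the helper name and the "complicated unit" (a stand-in sort) are both made up for illustration.

```ruby
# Tiny Ruby analogue of JUnitPerf's TimedTest: run a block and fail
# if it exceeds a time budget. Drop it into your unit test framework.
def assert_under(budget_seconds)
  started = Time.now
  result = yield
  elapsed = Time.now - started
  if elapsed > budget_seconds
    raise "too slow: #{'%.3f' % elapsed}s (budget #{budget_seconds}s)"
  end
  result
end

# Example: the "complicated unit" here is just a stand-in sort.
assert_under(1.0) { (1..10_000).to_a.shuffle.sort }
```

Run under TDD, the budget becomes an executable performance expectation that fails the build the moment the unit regresses.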

2) Use profiling to understand the application performance and memory usage early.

3) Place some performance metrics in the functional test suites (Selenium or Watir), and graph the suite execution time to find abnormalities early. (Thanks @jtf and @cgoldberg.) If there is a sudden increase in execution time, there may be a problem.
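A minimal way to collect that metric is to time each suite run and append it to a CSV you can graph later. This is a sketch under assumptions: `record_run` is a made-up helper, and the block it takes stands in for kicking off your actual Watir or Selenium suite.

```ruby
require 'csv'
require 'time'

# Time one run of a functional suite and append the result to a CSV,
# so execution time can be graphed across builds to spot sudden jumps.
def record_run(log_path, &run_suite)
  started = Time.now
  run_suite.call                       # stand-in for the real suite run
  elapsed = Time.now - started
  CSV.open(log_path, 'a') do |csv|
    csv << [Time.now.utc.iso8601, elapsed]
  end
  elapsed
end
```

Each build then adds one `[timestamp, seconds]` row; feeding the file to any plotting tool makes an execution-time spike visible at a glance.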

4) Use YSlow, PageSpeed and Firebug to identify client-side performance bottlenecks, like too many HTTP requests or browser-side caching issues. It is important to understand the bottlenecks across different layers, and in a typical Agile iteration the app gets created in thin vertical slices of functionality rather than horizontal ones. I really like Adam Goucher's idea of integrating YSlow reports into the Continuous Integration process to understand the bottlenecks. Check his post here

5) When performance is one of the primary considerations, use a continuous performance benchmarking tool, perhaps wired into your CI build pipeline, to understand the changes in performance over time. (Thanks @cgoldberg)

6) When developing a piece of functionality in an iteration, use something like Pylot or JMeter to identify the bottlenecks in that functionality and clear them. Before release, it is important to test the application under realistic conditions, so use something like BrowserMob or SOASTA to performance test from the internet with real browsers. (Thanks @bradjohnsonsv.) I have spoken about testing in the cloud at CloudCamp before :)

7) Discuss with the team and identify patterns in the code which can affect performance. If possible, use a code quality tool to catch these patterns in the build process.

8) Wrap all integration-level or API-level calls in a timer, so the system logs always contain timing information. Then you can mine the system logs in dev/test/staging/prod for real performance information and fix anything that takes too long. Thank you, Chris McMahon.
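Point 8 can be sketched in Ruby as a small wrapper around any integration call. The helper name and the example call are illustrative, not from any particular codebase:

```ruby
require 'logger'

# Wrap an integration-level call in a timer so the log always carries
# timing information, ready to be mined later in dev/test/staging/prod.
def timed_call(logger, name)
  started = Time.now
  result = yield
  logger.info("#{name} took #{'%.3f' % (Time.now - started)}s")
  result
end
```

Usage looks like `timed_call(logger, 'payment_gateway') { gateway.charge(order) }`; every call then leaves a timing line in the log regardless of who invoked it.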

I have faced two problems with respect to performance work in Agile teams. One is the attitude that premature optimization is evil; there is a difference between premature optimization and context-sensitive optimization, which is really needed. The second is that in Agile, most of the time the user stories revolve around functionality: "As a user I would like to...". When giving the stories, customers assume it will perform, and the developers miss it because it is not stated. Testers need to pitch in and ask: if the functionality is business critical, what are the performance considerations for it? (Lisa's Agile Testing book has an excellent chapter on this.)

Well, that is all I have to say for now. But this is in no way complete. I would really love people with much better knowledge than me to pitch in here, correct my mistakes or contribute better ideas.

Feel free to comment on this post :).