Tuesday, August 20, 2013

confused on choosing test suite's language?

Confused about choosing your test suite's language? The K.I.S.S. answer is: let your test suite be in the same language/environment as the application. If multiple languages are used in the application, then choose the language that is used to define your application's models and their builders.

And the long answer...
Recently I came across a project where the application was written in Java and the test suite was in Ruby. The reason for this choice of test suite language was that the tester in the team was comfortable with Ruby, and the 'team felt that' it would be quicker to develop the tests in Ruby.

Given the same scenario, if I were the tester I would have chosen otherwise. Yes, I would have faced difficulty setting up the initial framework in Java, since I would be less familiar with the language, and my churn through the initial set of stories would have been slower.

Let me explain why I would have chosen the difficult path (though on the face of it it looks difficult, let me argue it is the other way around). Below I compare various aspects, and the effort each takes, when the test suite is in Ruby and the application is in Java, against having both the test suite and the application in the same language.

When the application is in Java:

Setting up the development environment
  • Tests in Ruby: we have to spend effort setting up the Java environment (JDK, classpath, JAVA_HOME, Maven) and the Ruby environment (RVM, bundler, gems, rake, ...). And the pain would be twofold considering the usual ground difficulties ("it works on that machine, and not on this machine").
  • Tests in Java: all the effort/confusion spent in setting up a second environment is saved.

Setting up test execution
  • Tests in Ruby: configure and set up test execution from scratch. In my experience, even when we are familiar with a language, every time a new set of issues will be waiting for us to solve.
  • Tests in Java: happily reuse the configuration and execution scripts the developers have already built for their unit/integration tests.

Setting up the CI pipeline
  • Tests in Ruby: every time you set up a new box for your regression suite, the same story as the development environment repeats here.
  • Tests in Java: sit back and relax; if your app works there, your tests would work too.

IDE
  • Tests in Ruby: multiple ones, and do consider the cost associated with them :)
  • Tests in Java: a single IDE.

Maintenance
  • Tests in Ruby: once you happily hand the project over to the client, the poor guy has to find a person who is good with both languages to maintain the code base.
  • Tests in Java: the skills needed are narrowed down to a single language.
If you choose to ignore all the above arguments, here is a lethal one.
Test data set up
  • Tests in Ruby: whatever approach you choose to use, be it
      • writing your own models to match the application's, with builders (like factory_girl) on top of them
      • reverse engineering the DB
      • parsing the migration scripts
      • using a bridge (like JRuby or IronRuby) to reuse the developers' builders
    it would involve heavy effort. Most likely it would turn dirty as we run behind, chasing and matching all the changes happening in your application's models and their relations.
  • Tests in Java: go ahead and use the builders already developed for the unit/integration tests, as sketched below.
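To make the reuse concrete, here is a minimal sketch. UserBuilder stands in for whatever builder the developers already maintain for their unit tests; it is inlined here only to keep the example self-contained, and all the names are illustrative.

    // A simplified stand-in for the kind of builder developers keep for their unit tests.
    class User {
        final String name;
        final String role;

        User(String name, String role) {
            this.name = name;
            this.role = role;
        }
    }

    class UserBuilder {
        private String name = "default-user";
        private String role = "USER";

        UserBuilder withName(String name) {
            this.name = name;
            return this;
        }

        UserBuilder withRole(String role) {
            this.role = role;
            return this;
        }

        User build() {
            return new User(name, role);
        }
    }

    // The regression test simply reuses the builder; when the User model changes,
    // the developers update the builder and the test data follows for free.
    public class AdminLoginTestData {
        public static void main(String[] args) {
            User admin = new UserBuilder().withName("asha").withRole("ADMIN").build();
            System.out.println("Test data ready: " + admin.name + " / " + admin.role);
        }
    }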

Irrespective of the languages (Java and Ruby) used in the above argument, the issues remain the same for any combination of languages.

Tuesday, August 13, 2013

layering your tests

Whenever my wife leaves our house alone to me for a week or so, things start piling up around. Clothes will be lying on the floor exactly where I took them off, the sink will be full, there will be a layer of dust on the floor, and mobile/laptop/tablets/purse/keys will be hidden under pillows/bed sheets/shoes!!...

So when I look around for a nail cutter, obviously I'll not be able to locate it. After enough googling around my house, I'll go buy a new nail cutter, which would eventually get lost in those black holes in my house.

My wife comes back, scolds me for the mess I created, and makes me slog with her in cleaning it up. Oops, she finds the extra nail cutter, and gives me another round of bashing. I end up buying some gifts to console her.

Moral of the story: keep things organised, though it takes extra effort in the first place, so you can avoid
  • living along with the mess
  • searching for the thing you want
  • investing (cost/time) in getting something that you already have
  • the cost/time spent on cleaning
  • the penalty we pay for creating the mess
  • and most importantly, those bitter moments in life.

Enough of my story; coming to work, in an automation code base, if we put things wherever we want, we will end up repeating my story at home.
 
For example, say you are writing a function to delete a file if it exists and create a new one, it is part of some test, and it is not abstracted into a proper class and method. When your team mate wants the same functionality, he/she will google how to implement it, and may well do it in a different way than yours, since they will not be able to see that it already exists.
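A minimal sketch of what the abstracted version could look like, assuming a shared utilities module that every test project depends on; the class and method names here are my own illustration.

    import java.io.File;
    import java.io.IOException;

    // Lives in a shared "file operations" module, so team mates find and reuse it
    // instead of re-implementing it inside their own tests.
    public final class FileHelper {

        private FileHelper() {
        }

        // Deletes the file if it already exists, then creates a fresh empty one.
        public static File recreate(String path) throws IOException {
            File file = new File(path);
            if (file.exists() && !file.delete()) {
                throw new IOException("Could not delete existing file: " + path);
            }
            if (!file.createNewFile()) {
                throw new IOException("Could not create file: " + path);
            }
            return file;
        }
    }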
 
There are multiple reasons why this mess gets created
  • The test suite is not organised enough
  • Even with a good framework in place, a lack of discipline, awareness or skill.
The ways in which we can avoid such a mess are
  • Create modules/projects/namespaces that each do one specific job. For example: tests (holds your tests), application (operations on the application, like login or create user), file operations, test tool (implements button click, type text, select option, drag and drop). The "Page Object Model" is a good starting point; see the sketch after this list.
  • Define contracts between the modules. E.g. the test tool module should only be used in the application module and not in the tests. In other words, a button click in your test class is wrong; it should be in your application module.
  • Educate your team about the structure. Pair with them in writing a few tests, so they learn it.
  • Use custom checks as part of your build to find module leakages, and fail the build if any are found. E.g. use a parser to find whether there are any button clicks in your test classes, and if so, fail the build.
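Here is a minimal sketch of that layering and contract, assuming a Selenium WebDriver based test tool and a hypothetical login page; each class would sit in its own module, and all the names, locators and URLs are illustrative.

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;
    import static org.junit.Assert.assertTrue;

    // application module: knows the pages and the operations on them. All clicks and
    // typing (the "test tool" layer) stay behind these methods.
    class LoginPage {
        private final WebDriver driver;

        LoginPage(WebDriver driver) {
            this.driver = driver;
        }

        HomePage loginAs(String user, String password) {
            driver.findElement(By.id("username")).sendKeys(user);
            driver.findElement(By.id("password")).sendKeys(password);
            driver.findElement(By.id("login")).click();
            return new HomePage(driver);
        }
    }

    class HomePage {
        private final WebDriver driver;

        HomePage(WebDriver driver) {
            this.driver = driver;
        }

        boolean isLoggedInAs(String user) {
            return driver.findElement(By.id("greeting")).getText().contains(user);
        }
    }

    // tests module: speaks only the application vocabulary, never clicks or locators.
    public class LoginTest {
        private WebDriver driver;

        @Before
        public void openApplication() {
            driver = new FirefoxDriver();
            driver.get("http://localhost:8080/login"); // illustrative URL
        }

        @Test
        public void adminCanLogIn() {
            HomePage home = new LoginPage(driver).loginAs("admin", "secret");
            assertTrue(home.isLoggedInAs("admin"));
        }

        @After
        public void closeBrowser() {
            driver.quit();
        }
    }

The custom build check from the last bullet then stays cheap: scan only the tests module's sources for test tool calls such as ".click(" and fail the build when one shows up.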

more to come...


Monday, August 12, 2013

hate random test failures

Here is the scenario that I was facing, and I guess most of you face it too: your regression build always fails due to some random test failure. When you try to rerun the failed test locally (or on the build machine itself), it passes. Every time, you end up spending an hour or two setting things up and rerunning the failed test, only to find no real defect.

Over a period of time, a dirty fact, that "the failing build is due to a random test failure (and not because of an application issue)", gets registered in your mind and in your team's mind.
When a real application defect comes in, the tests will fail, and alas, we will mistake it for a random failure.

Let me give you rough numbers from one of my projects that suffered from this issue. At a high level, there were 400 tests, out of which only 150 were passing. On deeper analysis we found that
  • 60 failing tests were tests for outdated application features
  • 84 tests were failing due to application defects (14 defects in total)
  • the others were real test issues.
The test suite failed to capture the application defects, which was its only responsibility. It needed a hell of a lot of maintenance to remove the tests for outdated features and to find which tests were failing for genuine reasons and which were not. Hence it makes sense to say that the value of a failing test suite is zero, and the effort spent adding new tests on top of it is wasted as well, as long as the currently failing tests are not fixed.

The following would be a quick (and somewhat dirty) fix for the above issue. Create a downstream project that picks up the failed tests and runs them alone, once your regression test run gets completed. Chances are that a random test will pass in another run (in fact, in one of my projects all the random tests passed here). So the downstream project becomes the indicator of your test suite's sanity.
This gives you some breathing time to find the real issue behind the randomly failing tests and fix it.
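As a rough sketch of how the downstream project can pick up the failed tests, assuming a Maven/JUnit setup where Surefire writes its XML reports under target/surefire-reports (its default location); the class name and argument handling are illustrative.

    import java.io.File;
    import java.util.LinkedHashSet;
    import java.util.Set;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    // Scans Surefire XML reports from the upstream run, collects the classes that had
    // failures/errors, and prints a -Dtest=... argument for the downstream "mvn test" run.
    public class FailedTestCollector {

        public static void main(String[] args) throws Exception {
            File reportDir = new File(args.length > 0 ? args[0] : "target/surefire-reports");
            Set<String> failedClasses = new LinkedHashSet<String>();

            File[] reports = reportDir.listFiles();
            if (reports == null) {
                throw new IllegalStateException("No reports found in " + reportDir);
            }
            for (File report : reports) {
                if (!report.getName().endsWith(".xml")) {
                    continue;
                }
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder().parse(report);
                NodeList testCases = doc.getElementsByTagName("testcase");
                for (int i = 0; i < testCases.getLength(); i++) {
                    Element testCase = (Element) testCases.item(i);
                    boolean failed = testCase.getElementsByTagName("failure").getLength() > 0
                            || testCase.getElementsByTagName("error").getLength() > 0;
                    if (failed) {
                        String className = testCase.getAttribute("classname");
                        // -Dtest expects simple class names, so strip the package.
                        failedClasses.add(className.substring(className.lastIndexOf('.') + 1));
                    }
                }
            }

            StringBuilder arg = new StringBuilder("-Dtest=");
            String separator = "";
            for (String failedClass : failedClasses) {
                arg.append(separator).append(failedClass);
                separator = ",";
            }
            System.out.println(arg); // e.g. -Dtest=LoginTest,CheckoutTest
        }
    }

The downstream job would run this against the upstream workspace and pass the printed -Dtest=... value to its own mvn test run.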

The following might be the reasons behind the randomness,
  • Timing issues, like waiting for a page/control to load/disappear. Please don't add blind sleeps. Build a feature into your framework to wait for a condition, and state the exact condition to wait for (see the wait helper sketched below).
  • Application behaviour. There are times when your application itself behaves randomly. Having screenshots for failing tests would help you pin down such issues.
  • Random generators used for the test data. There is a practice of creating test data randomly. Avoid this. If you rely on random data to find a defect, chances are that it may not occur at all.
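
A minimal sketch of such a wait helper, assuming a Java test framework; the names and the polling interval are illustrative.

    import java.util.concurrent.Callable;

    // Polls a named condition until it becomes true, instead of a blind Thread.sleep().
    public final class Wait {

        private Wait() {
        }

        public static void until(String description, long timeoutMillis, Callable<Boolean> condition)
                throws Exception {
            long deadline = System.currentTimeMillis() + timeoutMillis;
            while (System.currentTimeMillis() < deadline) {
                if (Boolean.TRUE.equals(condition.call())) {
                    return;
                }
                Thread.sleep(250); // short poll, not a blind wait for the whole duration
            }
            // Failing with the exact condition makes the timeout diagnosable, unlike a bare sleep.
            throw new AssertionError("Timed out after " + timeoutMillis + " ms waiting for: " + description);
        }
    }

A test would then call something like Wait.until("search results to appear", 10000, resultsVisibleCondition), so a timeout failure names exactly what it was waiting for instead of silently sleeping.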