Saturday, November 24, 2007

RFC: A mock too far?

I love using mocks, as they make it possible to write unit tests for small parts of a bigger system without depending on the rest of it. Using mocks is very simple: just create a mock of an interface (or class, schhhh), set up the expectations on it, and then execute the test. With mocks it is easy to get 100% coverage without relying on other components. But I have come to the conclusion that this can be taken too far, way too far.
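
As a minimal sketch of that create/expect/execute flow, this is roughly what a jMock 2 test looks like (the Notifier interface and all names here are made up for illustration, they are not from my plugin):

import org.jmock.Expectations;
import org.jmock.Mockery;
import org.junit.Test;

public class NotifierTest {

    // Hypothetical interface, only here to illustrate the flow.
    public interface Notifier {
        void send(String message);
    }

    @Test
    public void testSendsMessage() {
        Mockery context = new Mockery();
        // 1. Create a mock of the interface.
        final Notifier notifier = context.mock(Notifier.class);
        // 2. Set up the expectations on it.
        context.checking(new Expectations() {
            {
                one(notifier).send("hello");
            }
        });
        // 3. Execute the code under test (here we simply call the mock directly).
        notifier.send("hello");
        // Verify that all expected calls were made.
        context.assertIsSatisfied();
    }
}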

I have tests that spend more lines setting up mock expectations than actually testing the code. This makes the tests much harder to read, and I have problems going back to six-month-old code and understanding what a test actually tests. In my eyes this is a big problem, because unit test code should be as simple as possible.

So what I'm interested in is how I should do mock testing without drowning in mock expectations. Does anyone have any ideas on how to do it? Have I gone a mock too far?

As a simple example I will show the unit tests of my ClearCase implementation for the Hudson CI server. (But unfortunately my production code looks very similar.)

I've chosen to mock out the cleartool command execution in the test, i.e. I don't want to issue real commands using the ClearCase command line tool (and I don't have a ClearCase server or client at home). In the example below I want to verify that when the plugin polls the ClearCase server for changes, it issues an "lshistory" call. But to get to the point where I can run the test, I also have to mock out the Hudson dependencies: get the latest build, check its timestamp, and mock the list of changes that "lshistory" should return.


@Test
public void testPollChanges() throws Exception {
    final ArrayList list = new ArrayList();
    list.add(new String[] { "A" });
    final Calendar mockedCalendar = Calendar.getInstance();
    mockedCalendar.setTimeInMillis(400000);

    context.checking(new Expectations() {
        {
            one(clearTool).lshistory(with(any(ClearToolLauncher.class)),
                    with(equal(mockedCalendar.getTime())),
                    with(equal("viewname")), with(equal("branch")));
            will(returnValue(list));
            one(clearTool).setVobPaths(with(equal("vob")));
        }
    });
    classContext.checking(new Expectations() {
        {
            one(build).getTimestamp();
            will(returnValue(mockedCalendar));
            one(project).getLastBuild();
            will(returnValue(build));
        }
    });

    ClearCaseSCM scm = new ClearCaseSCM(clearTool, "branch",
            "configspec", "viewname", true, "vob", false, "");
    boolean hasChanges = scm.pollChanges(project, launcher,
            workspace, taskListener);
    assertTrue("The first time should always return true", hasChanges);

    classContext.assertIsSatisfied();
    context.assertIsSatisfied();
}

As you can see, the mock expectations take up at least two thirds of the test. Of course I can refactor the tests by adding helper methods that set up the expectations, as sketched below, but often the expectations are similar yet not identical.
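
For example, a helper that parameterizes the parts that vary between tests could look roughly like this (just a sketch; the method name expectLshistory is mine, and it assumes the same context and clearTool fields as the test above):

// Hypothetical helper that hides the jMock boilerplate for the lshistory call.
private void expectLshistory(final Calendar since, final String viewName,
        final String branch, final List changes) {
    context.checking(new Expectations() {
        {
            one(clearTool).lshistory(with(any(ClearToolLauncher.class)),
                    with(equal(since.getTime())),
                    with(equal(viewName)), with(equal(branch)));
            will(returnValue(changes));
        }
    });
}

The test body then shrinks to a single call such as expectLshistory(mockedCalendar, "viewname", "branch", list), but as soon as one test needs a slightly different expectation the helper either grows more parameters or multiplies into several helpers, which is exactly the problem I'm describing.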

Tuesday, November 20, 2007

Hudson embraces Python

A few months ago I started using Hudson for my XBMC TV.com scripts, which are written in Python. The Hudson continuous integration server retrieves the latest sources from Subversion, packages them into a zip file, runs a few unit tests, analyzes the source using pylint and then displays the outcome in an easily navigated web UI. You can see the result at http://hudson.ramfelt.se where the latest build is always accessible.

Python unit tests
Hudson supports JUnit report files, but unfortunately the standard PyUnit (unittest) framework cannot write its test results to an XML file. Thanks to Sebastian Rittau there is a test runner that writes the results into an XML file which can be parsed as a JUnit report. By using xmlrunner.py when running my tests I can produce XML files that Hudson can read, so now for every commit or forced build the tests are executed and shown in a nice graph.


The Python code looks like this:
import sys
import unittest

import xmlrunner  # Sebastian Rittau's XML-producing test runner

# ShowTestCase and VideoTestCase are defined in (or imported into) this test module.
suite = unittest.TestSuite([
    unittest.TestLoader().loadTestsFromTestCase(ShowTestCase),
    unittest.TestLoader().loadTestsFromTestCase(VideoTestCase),
    ])

if __name__ == '__main__':
    runner = xmlrunner.XmlTestRunner(sys.stdout)
    runner.run(suite)


In Hudson you enable JUnit report files by checking "Publish JUnit test result report" and entering the path to the XML reports.


Python code analysis
Hudson supports the common Java analyzers such as PMD, FindBugs, CPD, Checkstyle, etc. It didn't have any support for pylint, but thanks to the great Violations plugin it was very easy to write a parser for pylint reports. Using this plugin, Hudson can now analyze my Python files and show a nice trend graph of how many issues there are and for how long they have been there. It will also produce a report for each file, with code snippets showing where each issue is.



The Ant target that creates the pylint report file (it is important that the pylint output is in the parseable format; an example of that format is shown after the target):
<target name="report-pylint" depends="report-prepare">
<exec dir="${src.python}" executable="${pylint.binary}" output="${report.pylint}">
<env key="PYTHONPATH" path="${build.bin}:${build.bin}/lib/"/>
<arg line="-f parseable -i y TVcom.py tvguicontroller.py tvweb.py tv.py settingsmanager.py --output-format=parseable --ignore-comments=y --min-similarity-lines=4 --disable-msg=R0903 --disable-msg=C0301"/>
</exec>
</target>
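
For reference, pylint's parseable format produces one line per issue, roughly in the shape path:line: [message-id] message. The lines below are made-up examples of that shape, not actual output from my build:

tv.py:42: [C0301] Line too long (92/80)
tvweb.py:118: [W0612, TvWeb.parse_show] Unused variable 'result'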


Configuring the Violations plugin in Hudson looks like this:



Verification tests
I've also set up Hudson to run verification tests whenever a build of the main XBMC TV Script job has completed. This verification job crawls through the www.tv.com site using the crawler sources, to verify that nothing crashes because www.tv.com has changed its HTML format. The job takes a long time, up to 3-5 hours, and the result is displayed in a nice trend graph just like the other unit tests. In the meantime I can continue developing the next version, and even build a new version, while Hudson is running the verification job.


I must say that Hudson is a great CI tool, and it works with Java, Python and C#. My favorite languages.