Our team has been adding automated UI testing to our JUnit tests over the past year. Since there seems to be growing interest in doing this I thought I would tell our story.
To begin with, our developers wanted to run JUnit tests and our test team wanted to automate much of the testing they were doing by hand. It made sense to combine those efforts and have one set of tests everyone could run and contribute to. But we couldn't test everything we needed from JUnit without the ability to occasionally punch a button, enter text into a wizard, or choose a menu item.
We looked at tools that let you record user actions and then play them back. This approach makes a great demo and matches what many people initially think they want from automated UI testing. If you are a test team handed a finished product and you just want to automate an otherwise laborious manual test effort, this is probably the best you can do. But tests constructed this way are often fragile and superficial: minor UI changes can break them, and they don't really know what's going on underneath. We wanted our development and test efforts to be better integrated, so we looked for a way to drive the UI from our JUnit tests.
We ended up using Abbot to do this with pretty good results. Abbot also has a script editor/recorder called Costello that doesn't work with SWT yet. I think this has led some people to think Abbot doesn't work with SWT at all, but we were interested in the stuff that we could call directly from our JUnit code and got that working right away.
Much of Abbot is pretty low-level and we ended up building a utility layer on top of it to handle common tasks and boilerplate code. This was easy to do and makes the code that uses it a lot cleaner.
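To give a feel for what that utility layer looks like, here is a minimal sketch. The names (`WizardHelper`, `UiDriver`, `fillFieldAndAdvance`) are illustrative and not Abbot's actual API; the point is that a test makes one high-level call instead of repeating find/focus/type boilerplate everywhere.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a utility layer over a low-level UI driver such
// as Abbot. The names here are ours, not Abbot's real API.
public class WizardHelper {
    // Minimal stand-in for the low-level operations the driver provides.
    public interface UiDriver {
        void click(String widgetId);
        void typeText(String widgetId, String text);
    }

    private final UiDriver driver;

    public WizardHelper(UiDriver driver) {
        this.driver = driver;
    }

    // One call performs the whole common task: focus the field, enter
    // the text, and press the wizard's Next button.
    public void fillFieldAndAdvance(String fieldId, String text, String nextButtonId) {
        driver.click(fieldId);
        driver.typeText(fieldId, text);
        driver.click(nextButtonId);
    }

    public static void main(String[] args) {
        // A fake driver that records actions, standing in for the real UI.
        List<String> log = new ArrayList<>();
        UiDriver fake = new UiDriver() {
            public void click(String id) { log.add("click:" + id); }
            public void typeText(String id, String t) { log.add("type:" + id + ":" + t); }
        };
        new WizardHelper(fake).fillFieldAndAdvance("project.name", "MyApp", "wizard.next");
        System.out.println(log);
    }
}
```

In the real tests the driver calls are Abbot invocations rather than a recording fake, but the shape is the same: the helper owns the boilerplate and the test reads as a short sequence of intentions.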
A problem all UI testing tools share is identifying the exact widget to act on. If a widget doesn't have a unique name, you have to do something fragile like reference it by its position relative to known widgets. Even a button labeled "Cancel" will break the test as soon as "Cancel" is translated into another language, so you would need to translate that portion of the tests as well. The solution is to give significant SWT widgets unique IDs when you create them. This is easy to do, so we started adding IDs to any UI we create for Carbide. Of course, that leaves the UI from the platform and CDT, so there are still plenty of cases where we have to hunt around for the correct widget.
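In real SWT code the ID can be attached to the widget itself (SWT's `Widget.setData(String, Object)` is one place to hang it), but the convention is easier to see in a framework-free sketch. Everything below, including the key name and the registry, is illustrative rather than our actual implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Framework-free sketch of the widget-ID convention. A plain registry
// stands in for the widget tree so no SWT dependency is needed here.
public class WidgetIds {
    private final Map<String, Object> byId = new LinkedHashMap<>();

    // Called at widget-creation time: every significant widget gets a
    // unique, translation-independent ID.
    public void register(String id, Object widget) {
        if (byId.putIfAbsent(id, widget) != null) {
            throw new IllegalArgumentException("Duplicate widget ID: " + id);
        }
    }

    // Tests look widgets up by ID instead of by label or position, so
    // renaming or translating "Cancel" cannot break them.
    public Object find(String id) {
        Object w = byId.get(id);
        if (w == null) {
            throw new IllegalStateException("No widget with ID: " + id);
        }
        return w;
    }
}
```

The duplicate check matters: an ID that silently collides is as fragile as no ID at all, because the test can end up driving the wrong widget.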
Our tests can drive the UI faster than any human could. This can lead to false failures: a test may click a button in one view and then expect to see an expanded tree item in another view, but the tree doesn't expand instantly. Events often have to ripple through the platform before the UI settles down. There are places where you may need to insert a pause to make the test pass reliably. This has to be done judiciously, because there may be someone out there who downs enough coffee to turn what seems like unlikely behavior into a real bug.
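Rather than a fixed sleep, the pause can poll for the condition the test actually cares about, which keeps the test fast when the UI settles quickly and reliable when it doesn't. This is a sketch under our own names (`WaitUtil`, `waitUntil`), not an Abbot facility:

```java
import java.util.function.BooleanSupplier;

// Condition-based pause: poll until the UI condition holds or a timeout
// expires, instead of guessing at a fixed sleep duration.
public class WaitUtil {
    public static boolean waitUntil(BooleanSupplier condition, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true; // settled: proceed immediately
            }
            try {
                Thread.sleep(25); // short poll interval
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return condition.getAsBoolean(); // one last check at the deadline
    }
}
```

A test would call something like `waitUntil(() -> tree.isItemExpanded(item), 2000)` (hypothetical condition) and fail loudly on a `false` return, which turns a flaky timing failure into a clear diagnostic.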
With the addition of Abbot we've been able to write tests that automate much of the debugger in CDT. While some of the tests in CDT are updated and run with every build, the debugger tests are out of date and wouldn't provide the same level of coverage. I'd like to get our general debugger tests into CDT, as most things can be tested the same way regardless of the debugger engine on the back end.
I've glossed over a lot of detail, but I think that when looking at automated UI testing tools it's best to begin by deciding whether you want to integrate them into a test-driven development cycle. In many organizations testing is a completely segregated function done by people who aren't in a position to collaborate with developers on JUnit tests, and so one of the record/playback/scripting tools is probably the best option. But for Carbide.c++ we've been able to use Abbot to extend the reach of our JUnit tests for a more integrated and comprehensive approach to testing and development. Now we just need to build more tests...