A year of UI Testing with XCTest

If you ask around you’ll quickly realize that UI tests don’t have an enviable reputation in the iOS developer community. “Difficult to write”, “Slow”, “Hard to maintain”, “Don’t work” are complaints you hear a lot. While some of these complaints are to some extent genuine, the truth is that there’s no viable alternative to UI tests when working on medium/large scale projects.

Do you submit your app to the App Store relying solely on the results of unit tests? It is likely you don’t, and instead precede submission with a set of tedious and time-consuming manual tests. This was our release flow for a rather long time, to the point that manual testing became unsustainable, making some kind of automation mandatory.

We decided to give Apple’s XCTest framework a try and a year (and a lot of sweat) later we managed to reduce time wasted on manual tests to a bare minimum, getting pretty close to the continuous delivery dream.
Today we have 220 test cases on iPhone and 214 on iPad, which run in ~2h and ~2h30m respectively (iPad execution is slower). The development team took charge of writing UI tests (along with unit tests), greatly increasing the quality of the first development iteration’s outcome. As a result we have a happier team that is able to submit updates faster and with greater confidence.

The good
If you’re still skeptical, focus for a moment on the bright sides of UI tests and you’ll realize there is indeed a lot of potential:

  • Interact with your app exactly as a user would (isn’t that exactly what you want?)
  • Forget about the application’s implementation details, which lets you refactor without fear
  • Test a lot with short, readable code in which acceptance criteria can be easily expressed
  • Free yourself from the pain of manual testing (isn’t that alone enough?)

The bad
There are of course some downsides and limitations, but you’ll see how we worked around them.

  • Inflexible. UI tests run in a separate process that has no access to the tested application’s code. This makes it tricky to do things like mocking or data injection
  • Slow. You won’t run full suites locally but rather on a continuous integration environment, using xcodebuild / fastlane (more on this below)
  • Opaque results. It can get pretty difficult to understand what went wrong from the test logs

Making tests flexible
By far the most discouraging issue when approaching UI testing is its apparent inflexibility. You quickly realize that the root cause is that the test process has no access to the application’s internal state. Doing something trivial like checking that a setting is properly stored in NSUserDefaults can get close to impossible.
We managed to work around this by developing SBTUITestTunnel, which adds the missing link between the test and application targets. The idea is very simple: when tests run, a web server is instantiated inside the app, which receives a set of requests from the testing target. The tool sits on top of XCTest, extending its functionality in a neat and usable fashion, allowing you to stub and monitor network requests, inject data into NSUserDefaults and the Keychain, download/upload files from/to the app’s sandbox, and much more.
Let’s see, for example, how easy it is to stub a specific network request:
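A minimal sketch of what such a stub looks like, assuming SBTUITestTunnel’s Swift API (the URL, key and payload below are purely illustrative):

```swift
import SBTUITestTunnelClient
import XCTest

class NetworkStubTests: XCTestCase {
    // SBTUITunneledApplication subclasses XCUIApplication,
    // adding the tunnel to the app under test
    let app = SBTUITunneledApplication()

    override func setUp() {
        super.setUp()
        // Launches the app with the tunnel's web server enabled
        app.launchTunneled()
    }

    func testStubbedRequest() {
        // Any request whose URL matches "myserver.com/api" receives
        // the canned JSON below instead of hitting the network
        app.stubRequests(matching: SBTRequestMatch(url: "myserver.com/api"),
                         response: SBTStubResponse(response: ["status": "ok"]))

        app.buttons["Reload"].tap()
        // ...assert on the UI populated from the stubbed response
    }
}
```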

 

You can also inject and retrieve NSUserDefaults values:
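A sketch under the same assumptions as above (the key and value are made up for illustration):

```swift
func testNotificationSetting() {
    // Write a value into the app's NSUserDefaults from the test process
    app.userDefaultsSetObject("enabled" as NSString, forKey: "notifications_setting")

    // ...drive the UI here as a user would...

    // Read it back to verify what the app actually stored
    let stored = app.userDefaultsObject(forKey: "notifications_setting") as? String
    XCTAssertEqual(stored, "enabled")
}
```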

 

These are very simple examples that show some of the capabilities of the testing framework we developed. A lot more can be achieved, making it possible to write very complex, yet readable, tests.
Here is a real-life example of what one of our UI tests looks like:
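A hedged reconstruction of the shape such a test takes (screen names, accessibility identifiers and endpoints are invented; `waitForExistence(timeout:)` assumes Xcode 9 or later):

```swift
func testReplyToConversation() {
    // Stub the messages endpoint so the test never talks to a real user
    app.stubRequests(matching: SBTRequestMatch(url: "example.com/messages"),
                     response: SBTStubResponse(response: ["messages": []]))

    // Interact with the app exactly as a user would
    app.tables["conversations"].cells.element(boundBy: 0).tap()
    app.textFields["message_input"].tap()
    app.textFields["message_input"].typeText("Is this still available?")
    app.buttons["Send"].tap()

    // The sent message should appear in the conversation
    XCTAssert(app.staticTexts["Is this still available?"].waitForExistence(timeout: 10))
}
```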

 

Adapting workflow to UI testing
Executing a full UI test suite takes time; this is, however, not a problem per se if you adapt your testing workflow appropriately. Our tests today take ~4h30m, running overnight (using fastlane / xcodebuild) on a single continuous integration machine. This daily outcome is all we need to get effective feedback on our latest changes.
On the other hand, we work locally when developing new tests, as running a few of them takes an acceptable amount of time.

xcodebuild, screencasting and finding what went south
We chose xcodebuild for running tests since it allows greater flexibility of integration with our development pipeline and opens up unique features that would not be possible otherwise. Among other things, we were able to add a screencasting job to the whole testing session, as shown below.
The downside of using xcodebuild is the massive amount of hard-to-read logging output it produces, which makes it challenging to understand why a particular test is failing. A similar problem exists in Xcode, where there is a lot of clicking involved before you get a clear picture of what failed.
Rethinking how we wanted test results to be summarized, we built another custom tool, sbtuitestbrowser, that parses results and presents them in a simple (yet effective) web interface. Written in Swift, of course.
The home page shows the latest testing sessions, testing classes and individual tests.

Each test has the list of actions that were performed, the same ones you’re used to in Xcode. You can enable/disable screenshots or rely on the screencast, which is synced with the actions: tap on an action and the video jumps to that point of the test.

Specifying environments/devices
Along with the tools described so far, we’ve been exploring other ways to further enhance our testing experience. Having different backend environments (development, integration, pre-production, production), we needed a way to easily specify which tests should run on which environments. We don’t want certain tests to run in production, e.g. one that replies to a real user with a test message.
By swizzling the XCTest framework and defining a set of protocols used to «decorate» XCTest classes, we made it possible to specify on which devices and environments a test case should run. Protocols are divided into two categories: environments and devices.
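The decoration protocols themselves can be as simple as empty marker protocols; a sketch of what they might look like (only SBTTestOnAnyDevice and SBTTestOnAnyEnvironment appear in this post, the other names are invented for illustration):

```swift
// Device markers: the swizzled XCTest runner checks conformance
// at runtime and skips tests not matching the current device
protocol SBTTestOnAnyDevice {}
protocol SBTTestOniPhoneOnly {}
protocol SBTTestOniPadOnly {}

// Environment markers: same idea, matched against the backend
// environment the app was launched with
protocol SBTTestOnAnyEnvironment {}
protocol SBTTestOnDevelopmentEnvironment {}
protocol SBTTestOnProductionEnvironment {}
```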

 

All it takes is adding conformance to your XCTestCase class, like in the example below.
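A sketch of such a conformance (the class and test method names are invented; the two protocol names come from the text):

```swift
class SearchTests: XCTestCase, SBTTestOnAnyDevice, SBTTestOnAnyEnvironment {
    func testSearchShowsResults() { /* ... */ }
    func testSearchCanBeFiltered() { /* ... */ }
    func testSearchHandlesEmptyResults() { /* ... */ }
}
```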

 

These 3 test cases will be executed regardless of the device (SBTTestOnAnyDevice) and environment (SBTTestOnAnyEnvironment) the app is running on.
The implementation of this functionality has been tailored to our specific app configuration (with regards to how we are specifying the running environment), but we are planning to make something reusable available soon.

What‘s next?
There are scenarios, such as a broad refactoring (e.g. changing the network layer), where you want to verify the entire app’s behaviour before pushing your changes to the main branch. In these cases quicker feedback might be required, which can be achieved by running tests in parallel. We’ll work on expanding our continuous integration environment to allow exactly that.

Conclusions
We expanded the UI testing framework provided by Apple building additional tools that helped us to reduce manual testing and increase confidence in our code.

  • SBTUITestTunnel: adds a link between the app and test targets, allowing you to write much more sophisticated tests
  • sbtuitestbrowser: parses xcodebuild’s test results and makes them accessible via a web interface
  • A way of marking test cases to run only on specific environments/devices

It’s definitely worth giving UI tests a try if you haven’t yet. If you have, check out our tools; we would really appreciate your feedback!
