Shift-Left iOS Testing with Focus Flows

Creating a great modern-day software product requires a shift-left approach to testing: testing faster, more frequently, and earlier. Shift-left testing is an approach to software and system testing in which testing is performed earlier in the lifecycle (i.e., moved left on the project timeline). The current standard of automated mobile testing makes shifting left difficult because of inherent challenges such as long execution times and flaky test runs.

Most automated tests must execute a number of irrelevant steps just to reach the application state where the real assertions happen. Let’s explore an example of testing the Lyft Pass details screen of the Lyft Rider app. The test sequence is as follows:

  1. Log in
  2. Tap the hamburger menu
  3. Tap payment
  4. Select a Lyft Pass
  5. Verify behavior on the Lyft Pass details page

In the above example, the test controller must execute all of the pre-conditional steps before finally testing the Lyft Pass details screen, even though they have little to no relevance to what is being verified. These pre-conditional UI steps increase both the execution time and the flakiness of the test. If the test fails, there is a high probability that the cause of failure is something other than what the test is actually verifying.

One way to address these issues is to remove the pre-conditional UI steps and focus only on the flow that the test cares about. Without those steps, the test executes in far less time than the full flow, and flakiness drops because any failure is now limited to the actual assertions of the test.

We introduced the concept of Focus Flows to address the above concerns. A Focus Flow is a mini application that’s independent of other modules and compiles a smaller (“focused”) flow of a specific product feature. It runs in a reduced environment which carries both the session state (configs, feature flags, service state, user account) and device state (device settings, simulated locations, stored values) that is expected for the feature to function properly.

Let’s return to the same Lyft Pass scenario. The Focus Flow starts the application right at the Lyft Pass details screen without going through any of the previous steps such as login, the hamburger menu, or payment. It has the same device and session state as a real-life Lyft Pass scenario. This dramatically reduces test execution time and eliminates the flakiness that would otherwise be caused by the pre-conditional test steps. Since the flow depends on only a subset of the app’s features, we can use it to test a small piece of the app without waiting for the whole application to compile.
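To make this concrete, here is a minimal sketch of what such a mini app could look like, written as a tiny SwiftUI target purely for illustration. Every type name below is a hypothetical stand-in; our real Focus Flows are built on the Flow Architecture described later in this post:

```swift
import SwiftUI

// Hypothetical stand-in for the real Lyft Pass details screen.
struct LyftPassDetailsScreen: View {
    let passName: String

    var body: some View {
        Text(passName)
            .accessibilityIdentifier("lyftPassTitle")
    }
}

// The Focus Flow mini app: a separate app target that launches directly
// into the Lyft Pass details screen with pre-seeded state, skipping
// login, the hamburger menu, and the payment screens entirely.
@main
struct LyftPassDetailsFocusFlowApp: App {
    var body: some Scene {
        WindowGroup {
            LyftPassDetailsScreen(passName: "Acme Corp Lyft Pass")
        }
    }
}
```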

Now that we’ve introduced Focus Flows, let’s take a closer look at how traditional UI tests compare to them, assuming that we have a page-driven automation framework that uses XCUITest to run the tests. To keep things simple, this post limits the discussion to Focus Flows and does not go deeper into test framework details.

Let’s take the same example of the Lyft Pass details page verification. Lyft Pass is a promo code given by an organization to its employees that covers all or a portion of the cost of their rides. The traditional flow to reach the Lyft Pass details screen requires the controller to pass through steps 1 through 9, as depicted below:

Traditional Test Flow To Verify Lyft Pass Details

Traditional Automated UI Tests

The code for the traditional way of automating the above scenario would look something like this:

Traditional Automated Test Code for Lyft Pass Details Verification
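The original snippet was published as an image. The sketch below approximates it with plain XCUITest calls in place of our page-driven framework; all accessibility identifiers are made up for illustration:

```swift
import XCTest

final class LyftPassDetailsTraditionalTest: XCTestCase {
    func testLyftPassDetailsAreRendered() {
        // Launch the full Lyft Rider app from a cold start.
        let app = XCUIApplication()
        app.launch()

        // Pre-conditional step: log in.
        app.textFields["phoneNumberField"].tap()
        app.textFields["phoneNumberField"].typeText("5555550100")
        app.buttons["logInButton"].tap()

        // Pre-conditional step: open the hamburger menu.
        app.buttons["hamburgerMenuButton"].tap()

        // Pre-conditional step: navigate to the payment section.
        app.buttons["paymentButton"].tap()

        // Pre-conditional step: select a Lyft Pass.
        app.cells["lyftPassCell"].firstMatch.tap()

        // The actual verification: Lyft Pass details are loaded and rendered.
        XCTAssertTrue(app.staticTexts["lyftPassTitle"].waitForExistence(timeout: 10))
        XCTAssertTrue(app.staticTexts["lyftPassCoverageDetails"].exists)
    }
}
```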

As we can see from the above code snippet, the automated test:

  1. Logs in
  2. Navigates to the payment section
  3. Taps the Lyft Pass
  4. Verifies that the details of the Lyft Pass are loaded and rendered

The primary goal of the test is to verify that the Lyft Pass details are loaded correctly. Due to the limitations of the traditional setup, however, the test has to navigate through many unrelated steps, like logging in and navigating to payment, which increases the test execution time and can introduce flakiness. Consider the following scenarios:

  1. Lyft Pass details page works but logging in fails
  2. Lyft Pass details page works but the hamburger menu button is broken
  3. Lyft Pass details page verification takes 2 seconds to load and test, but getting there takes 30 seconds

Pre-Assertion Failures: In scenarios 1 and 2, the test fails for reasons that have nothing to do with the Lyft Pass details page. These pre-assertion failures increase the overall flakiness of the test.

Pre-Assertion Delays: In scenario 3, the test spends far more time on steps it does not intend to verify than on the screen actually being tested.

Focus Flow Automated UI Tests

In contrast, the code below shows a Focus Flow test in action:

Focus Flow Automated Test Code for Lyft Pass Details Verification
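This snippet is also an image in the original post; a rough equivalent, assuming the test runner targets the Focus Flow mini app and using the same made-up identifiers as before, could look like this:

```swift
import XCTest

final class LyftPassDetailsFocusFlowTest: XCTestCase {
    func testLyftPassDetailsAreRendered() {
        // The application under test is the Focus Flow mini app, which
        // boots straight into the Lyft Pass details screen with the
        // required session and device state already injected.
        let app = XCUIApplication()
        app.launch()

        // No login, menu, or payment steps: assert on the details screen directly.
        XCTAssertTrue(app.staticTexts["lyftPassTitle"].waitForExistence(timeout: 5))
        XCTAssertTrue(app.staticTexts["lyftPassCoverageDetails"].exists)
    }
}
```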

As shown by the above code snippet, the test does not go through all the previously required steps such as logging in and navigating to payments. The Focus Flow starts the app right from the Lyft Pass details screen by providing the module with all of its necessary configuration, removing the need for the previous screens in the test:

Focus Flow Enabled Test Flow To Verify Lyft Pass Details

How to Design Flow-Based UI Tests

Our apps are built leveraging what we call a Flow Architecture (more details in this post). At a high level, rather than depending on one monolithic router to drive the entire application, this architecture relies on a set of smaller, composable routers. Each router encapsulates a Flow, which is what Focus Flows are built on top of.
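As a rough sketch of the idea (the protocol and type names below are hypothetical, not our exact API), each feature exposes a Flow that can produce its own root view controller, and routers compose child Flows instead of one monolithic router driving everything:

```swift
import UIKit

// A Flow encapsulates one feature: it can produce the feature's root
// view controller without knowing anything about the rest of the app.
protocol Flow {
    func makeRootViewController() -> UIViewController
}

// Hypothetical Flow for the Lyft Pass details feature. Because it is
// self-contained, it can be hosted inside the full app's router tree
// or compiled on its own into a Focus Flow mini app.
struct LyftPassDetailsFlow: Flow {
    func makeRootViewController() -> UIViewController {
        let controller = UIViewController()
        controller.title = "Lyft Pass"
        return controller
    }
}
```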

Creating features in isolation like this allowed us to split Focus Flows out of the main application for rapid iteration and testing. At Lyft, we utilize Bazel to provide application targets for each Focus Flow, but a similar result for UI tests could be achieved through some other mechanism for jumping to specific features (such as a menu, deeplinks, etc.). When doing so, it’s important that the destination screen is injected with any required state such as authentication, location information, and feature flags. The specifics of how this is implemented will vary by architecture, but we were able to extend our dependency injection system to allow for customization from both tests and Focus Flows. Here’s an example of how we initialize a Focus Flow for testing purposes:

Builder Method Containing Preparation Config For Focus Flow Tests
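The builder shown in that image is internal to our codebase; the sketch below only illustrates its shape, with hypothetical state types. The key point is that the session state and device state the feature expects are assembled up front, before the flow’s first screen ever appears:

```swift
// Illustrative state containers; in the real app these values come out
// of the dependency injection graph rather than plain structs.
struct SessionState {
    var isLoggedIn = false
    var featureFlags: [String: Bool] = [:]
    var activeLyftPassID: String?
}

struct DeviceState {
    var simulatedLocation: (latitude: Double, longitude: Double)?
    var storedValues: [String: String] = [:]
}

// Hypothetical type standing in for the router that drives the
// Lyft Pass details screens.
struct LyftPassDetailsFlow {
    let session: SessionState
    let device: DeviceState
}

enum LyftPassDetailsFocusFlowBuilder {
    // Prepares the reduced environment the flow expects, then builds
    // the flow so the mini app can boot straight into the details screen.
    static func build() -> LyftPassDetailsFlow {
        var session = SessionState()
        session.isLoggedIn = true                        // skip the login screens
        session.featureFlags["lyft_pass_details"] = true // enable the feature under test
        session.activeLyftPassID = "acme-corp-pass"      // the pass whose details we verify

        var device = DeviceState()
        device.simulatedLocation = (latitude: 37.7749, longitude: -122.4194)

        return LyftPassDetailsFlow(session: session, device: device)
    }
}
```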

We executed the Lyft Pass details scenario using both the traditional and Focus Flow automations. The results showed that the Focus Flow approach outperformed the traditional test automation both in terms of speed and reliability:

Performance Comparison Between Traditional and Focus Flow Tests

Takeaways

Running our tests with Focus Flows has reduced execution time and flakiness by enabling us to skip irrelevant steps in test scenarios. Although Focus Flows provide an excellent tool for iterating and testing quickly and reliably, they haven’t replaced our smoke tests or end-to-end test suite since they’re focused on smaller features of the app while smoke tests are focused on broader end-to-end workflows. Nonetheless, Focus Flows have been an excellent addition to our testing strategy, and help us to quickly deliver features with the best possible quality.

Acknowledgments

Special thanks to Gonzalo Larralde for introducing me to Focus Flows. Thanks to Edric Ta, Chhaya Patel, and Gowtham J S for their support in reviewing this article!

We are looking for talented engineers like you. If you’re interested in mobile test engineering, check out our careers page.

Happy testing!
