Optimize Testing with Feature-Based Testing

Feature-based testing is a strategy that co-locates tests with the features they verify. This approach helps you test only what's changed, reducing unnecessary test execution and improving feedback loops.

Many projects have a single, large test project, such as an e2e project, that depends on the entire application. While this setup ensures tests run whenever any dependency changes, it also means all tests run even when only one part of the app changes.

Consider a typical setup where all e2e tests live in a single project at the top of the graph:

[Project graph: fancy-app-e2e sits at the top of the graph and depends on the entire application, including feat-cart and feat-products]

In this example, when feat-cart changes, all tests in fancy-app-e2e run, which includes tests for feat-products along with other unrelated features. This happens because fancy-app-e2e depends on the entire application.

Since these features have minimal overlap, you can optimize testing by splitting the monolithic test project into smaller, feature-scoped test projects.

Instead of keeping all tests in one large project, break them down by feature and co-locate them with the feature libraries they test. This way, only the tests for changed features run.
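As an illustrative sketch (the feat-cart and feat-products names come from the example above; the exact directory shape is an assumption, not a prescribed layout), a co-located structure might look like:

```text
libs/
  feat-cart/
    src/                 # feature code
    e2e/                 # e2e tests for the cart feature only
    project.json         # defines an e2e target for this feature
  feat-products/
    src/
    e2e/
    project.json
apps/
  fancy-app-e2e/         # the original monolithic test project
```

With this layout, a change inside libs/feat-cart only marks feat-cart (and projects depending on it) as affected.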

To set up feature-based testing, add test configurations directly to your feature projects. Nx provides plugins that automate and speed up test configuration for common testing tools such as Cypress, Playwright, Jest, and Vitest; each plugin's generators are documented in its own guide.

If there isn't a generator for your testing tool of choice, you can manually set up the configuration on each feature project. This includes adding relevant configuration files for the testing framework and adding the test target (e.g. test, e2e) to the project's project.json or package.json. Typically these can be copied and slightly modified from the existing top-level monolithic project that is being split apart.
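As a rough sketch of the manual route (project names, paths, and the choice of the @nx/cypress executor are illustrative assumptions; options vary by framework), a feature project's project.json might gain an e2e target like:

```json
{
  "name": "feat-cart",
  "targets": {
    "e2e": {
      "executor": "@nx/cypress:cypress",
      "options": {
        "cypressConfig": "libs/feat-cart/cypress.config.ts",
        "devServerTarget": "fancy-app:serve",
        "testingType": "e2e"
      }
    }
  }
}
```

The options block is typically copied from the monolithic test project's target and trimmed down to the feature's own config file.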

With this setup, when you run nx affected -t e2e, only the tests for changed features will execute. For example, when feat-cart changes, only feat-cart:e2e runs and feat-products:e2e does not run since it wasn't affected.

Combining with Automated Task Splitting (Atomizer)

Typically, teams enable Atomizer, also known as task splitting, for a quick win to improve CI times when using Nx Agents. Combining both strategies yields the best results. Here's how they complement each other:

  • Feature-based testing ensures only the relevant feature tests run when code changes
  • Atomizer splits each feature's test suite into individual file-level tasks that can be distributed across multiple CI agents

For example, if feat-cart has 10 test files and feat-products has 15 test files, when you change the cart feature:

  1. Feature-based testing runs only feat-cart:e2e-ci (skipping feat-products:e2e-ci)
  2. Atomizer splits feat-cart:e2e-ci into 10 parallel tasks, one per test file
  3. These tasks get distributed across your CI agents for faster execution
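As one hedged example of wiring this up: with the @nx/playwright plugin's inferred tasks, the split e2e-ci target is configured in nx.json roughly as follows (the targetName/ciTargetName option names assume the plugin-based setup; adjust for your tool):

```json
{
  "plugins": [
    {
      "plugin": "@nx/playwright/plugin",
      "options": {
        "targetName": "e2e",
        "ciTargetName": "e2e-ci"
      }
    }
  ]
}
```

Running nx affected -t e2e-ci then picks up only the affected features, and each feature's e2e-ci fans out into per-file tasks.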

Learn more about setting up Automated Task Splitting.

Don't delete your top-level test project, fancy-app-e2e in this example. Instead, repurpose it for:

  • Smoke tests: Quick sanity checks that the app starts and critical paths work
  • Cross-feature integration tests: Tests that verify multiple features work together
  • End-to-end user journeys: Tests that span multiple features

This gives you a balanced testing strategy: focused feature tests that run frequently, plus comprehensive integration tests when needed.

When running tests from multiple features in parallel, be mindful of shared resources. Since all feature tests run against the same application instance, avoid conflicts by:

  • Using unique test data: Don't rely on specific database records or application state
  • Managing ports: Configure each test to use different ports, or let the test framework find free ports automatically
    • For Cypress, use the --port flag to specify or auto-detect ports
    • For Playwright, the webServerAddress can be dynamically assigned
  • Isolating state: Use test-specific user accounts, temporary data, or cleanup between tests

With feature-based testing, you can leverage Nx's affected commands to run only the tests that matter:

# Run all affected tests based on your changes
nx affected -t test
# Run affected e2e tests
nx affected -t e2e

This ensures you're only testing what changed, whether locally or in CI.