Feature-based testing is a strategy that co-locates tests with the features they verify. This approach helps you test only what's changed, reducing unnecessary test execution and improving feedback loops.
## The Problem: Monolithic Test Projects

Many projects have a single, large test project, such as an e2e project, that depends on the entire application. While this setup ensures tests run when any dependency changes, it also means all tests run even when only one subset of the app changes.
Consider a typical setup where all e2e tests live in a single project at the top of the graph:
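A rough sketch of the kind of graph being described (the `fancy-app` application node is an assumption; your graph will differ):

```text
fancy-app-e2e
  └── fancy-app
        ├── feat-cart
        ├── feat-products
        └── ...other feature libraries
```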
In this example, when `feat-cart` changes, all tests in `fancy-app-e2e` run, which includes tests for `feat-products` along with other unrelated features. This happens because `fancy-app-e2e` depends on the entire application.
Since these features have minimal overlap, you can optimize testing by splitting the monolithic test project into smaller, feature-scoped test projects.
## The Solution: Feature-Scoped Testing

Instead of keeping all tests in one large project, break them down by feature and co-locate them with the feature libraries they test. This way, only the tests for changed features run.
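One possible layout after the split (directory names here are illustrative, not prescribed):

```text
libs/
  feat-cart/
    src/        # feature code
    e2e/        # cart-specific e2e tests (target: feat-cart:e2e)
  feat-products/
    src/
    e2e/
apps/
  fancy-app-e2e/  # kept for smoke and cross-feature tests (see below)
```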
## How to Implement Feature-Based Testing

To set up feature-based testing, add test configurations directly to your feature projects. Nx provides plugins that automate and speed up test configuration for common testing tools; see the guides for each plugin's generators.
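For example, with the `@nx/playwright` plugin you can generate an e2e configuration for a feature project (a sketch; the project name comes from this example, and your plugin of choice may differ):

```shell
# Add a Playwright e2e configuration to the feat-cart feature project
nx g @nx/playwright:configuration --project=feat-cart
```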
If there isn't a generator for your testing tool of choice, you can manually set up the configuration on each feature project. This includes adding the relevant configuration files for the testing framework and adding the test target (e.g. `test`, `e2e`) to the project's `project.json` or `package.json`. Typically these can be copied and slightly modified from the existing top-level monolithic project that is being split apart.
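As a rough sketch, a manually added `e2e` target on a feature project might look like this (the executor, command, and paths are assumptions; adapt them to your testing framework):

```json
{
  "name": "feat-cart",
  "targets": {
    "e2e": {
      "executor": "nx:run-commands",
      "options": {
        "command": "playwright test",
        "cwd": "libs/feat-cart"
      }
    }
  }
}
```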
With this setup, when you run `nx affected -t e2e`, only the tests for changed features will execute. For example, when `feat-cart` changes, only `feat-cart:e2e` runs; `feat-products:e2e` does not run since it wasn't affected.
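To verify which projects Nx considers affected before running anything, you can inspect the graph first (assuming a recent Nx version that supports the `--affected` flag):

```shell
# Visualize which projects are affected by the current changes
nx graph --affected

# Then run only the affected e2e targets
nx affected -t e2e
```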
## Best Practices

### Combining with Automated Task Splitting (Atomizer)

Typically, teams enable Atomizer, also known as task splitting, for a quick win to improve CI times when using Nx Agents. Combining both strategies yields the best results. Here's how they complement each other:
- Feature-based testing ensures only relevant feature tests run when code changes
- Atomizer splits each feature's test suite into individual file-level tasks that can be distributed across multiple CI agents
For example, if `feat-cart` has 10 test files and `feat-products` has 15 test files, when you change the cart feature:

- Feature-based testing runs only `feat-cart:e2e-ci` (skipping `feat-products:e2e-ci`)
- Atomizer splits `feat-cart:e2e-ci` into 10 parallel tasks, one per test file
- These tasks get distributed across your CI agents for faster execution
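Enabling this typically means registering the tooling plugin in `nx.json`, so Nx infers both the `e2e` target and its file-split `e2e-ci` counterpart. A minimal sketch, assuming the `@nx/playwright` plugin:

```json
{
  "plugins": [
    {
      "plugin": "@nx/playwright/plugin",
      "options": {
        "targetName": "e2e",
        "ciTargetName": "e2e-ci"
      }
    }
  ]
}
```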
Learn more about setting up Automated Task Splitting.
### Keep the Top-Level Test Project

Don't delete your top-level test project, `fancy-app-e2e` in this example. Instead, repurpose it for:
- Smoke tests: Quick sanity checks that the app starts and critical paths work
- Cross-feature integration tests: Tests that verify multiple features work together
- End-to-end user journeys: Tests that span multiple features
This gives you a balanced testing strategy: focused feature tests that run frequently, plus comprehensive integration tests when needed.
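For instance, a smoke test in the repurposed top-level project might look like this (a sketch assuming Playwright; the file path and selector are hypothetical):

```ts
// apps/fancy-app-e2e/src/smoke.spec.ts (hypothetical path)
import { expect, test } from '@playwright/test';

// Quick sanity check: the app starts and renders its shell.
test('app loads and shows the main navigation', async ({ page }) => {
  await page.goto('/');
  await expect(page.locator('nav')).toBeVisible();
});
```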
### Running Tests in Parallel

When running tests from multiple features in parallel, be mindful of shared resources. Since all feature tests run against the same application instance, avoid conflicts by:
- Using unique test data: Don't rely on specific database records or application state
- Managing ports: Configure each test to use different ports, or let the test framework find free ports automatically
  - For Cypress, use the `--port` flag to specify or auto-detect ports
  - For Playwright, the `webServerAddress` can be dynamically assigned
- Isolating state: Use test-specific user accounts, temporary data, or cleanup between tests
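As one way to avoid port collisions, each feature's e2e project can derive its port from an environment variable set per target (a sketch assuming Playwright; the `E2E_PORT` variable and the serve command are assumptions):

```ts
// playwright.config.ts for a feature e2e project (a sketch, not a generated config)
import { defineConfig } from '@playwright/test';

// Assumption: each feature project sets E2E_PORT in its target definition,
// so parallel runs don't collide on the same port.
const port = Number(process.env.E2E_PORT ?? 4200);

export default defineConfig({
  use: { baseURL: `http://localhost:${port}` },
  webServer: {
    // fancy-app is the example application name from this guide
    command: `nx serve fancy-app --port ${port}`,
    url: `http://localhost:${port}`,
    reuseExistingServer: false,
  },
});
```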
### Running Affected Tests

With feature-based testing, you can leverage Nx's affected commands to run only the tests that matter:
```shell
# Run all affected tests based on your changes
nx affected -t test

# Run affected e2e tests
nx affected -t e2e
```
This ensures you're only testing what changed, whether locally or in CI.
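In CI, you typically pass an explicit base so affected detection compares against the right branch (assuming a `main` base branch; adjust to your setup):

```shell
# Compare the current commit against the main branch in CI
nx affected -t e2e --base=origin/main --head=HEAD
```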