Testing Tips

We run several kinds of tests at Sentry as part of our CI process. This section aims to document some of the Sentry-specific helpers and to give guidelines on the kinds of tests you should consider including when building new features.

Getting Set Up

The acceptance and Python tests require a functioning set of devservices. It is recommended that you use devservices to ensure the required services are running. If you also use your local environment for local testing, you will want to use the --project flag to separate local testing volumes from test suite volumes:

# Turn off services for local testing.
sentry devservices down

# Turn on services with a test prefix to use separate containers and volumes
sentry devservices up --project test

# Verify that test containers came up correctly
docker ps --format '{{.Names}}'

# Later when you're done running tests and want to run local servers again
sentry devservices down --project test && sentry devservices up

When using the --project option you can confirm which containers are running with docker ps. Each running container should be prefixed with test_. See the devservices docs section for more information on managing services.

Python Tests

For Python tests we use pytest and the testing tools provided by Django. On top of this foundation we've added a few base test cases (in sentry.testutils.cases).

Endpoint integration tests are where the bulk of our test suite is focused. These tests help us ensure that the APIs our customers, integrations and front-end application rely on continue to work in expected ways. You should endeavour to include tests that cover the various user roles, cross-organization/team access scenarios, and invalid data scenarios, as those are often overlooked in manual testing.
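
For example, a permissions-focused endpoint test might look roughly like the following sketch (the endpoint name and expected status codes are illustrative, not a prescription):

from sentry.testutils.cases import APITestCase


class OrganizationDetailsPermissionTest(APITestCase):
    # Hypothetical endpoint name: use the URL name of the endpoint under test.
    endpoint = "sentry-api-0-organization-details"

    def test_owner_can_read(self):
        self.login_as(self.user)
        self.get_success_response(self.organization.slug)

    def test_non_member_is_denied(self):
        # A user outside the organization should not have access.
        outsider = self.create_user()
        self.login_as(outsider)
        self.get_error_response(self.organization.slug, status_code=403)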

Running pytest

You can use pytest to run a single directory, single file, or single test depending on the scope of your changes:

# Run tests for an entire directory
pytest tests/sentry/api/endpoints/

# Run tests for all files matching a pattern in a directory
pytest tests/sentry/api/endpoints/test_organization_*.py

# Run test from a single file
pytest tests/sentry/api/endpoints/test_organization_group_index.py

# Run a single test
pytest tests/snuba/api/endpoints/test_organization_events_distribution.py::OrganizationEventsDistributionEndpointTest::test_this_thing

# Run all tests in a file that match a substring
pytest tests/snuba/api/endpoints/test_organization_events_distribution.py -k method_name

# When running tests Django rebuilds the database on every run, which can make
# your test startup time slow. To avoid this you can use the `--reuse-db` flag,
# so that the database will persist between runs. This should significantly
# improve your test startup time after the first time you use it.
# Note: If the schema changes you may need to run with `--create-db` once so
# that you have the latest schema.
pytest --reuse-db tests/sentry/api/endpoints/

Some frequently used options for pytest are:

  • -k Filter test methods/classes by a substring.
  • -s Don't capture stdout when running tests.

Refer to the pytest docs for more usage options.

Creating data in tests

Sentry also provides a suite of factory helper methods that help you build data to write your tests against. The factory methods in sentry.testutils.factories are available on all our test suite classes. Use these methods to build up the required organizations, projects and other Postgres-based state.
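
For example, a test could build up its own organization, team, project and membership state with the factories (a minimal sketch; names and attributes are arbitrary):

def test_team_access(self):
    organization = self.create_organization(name="acme", owner=self.user)
    team = self.create_team(organization=organization)
    project = self.create_project(organization=organization, teams=[team])
    member = self.create_member(
        organization=organization, user=self.create_user(), teams=[team]
    )
    # ... exercise the code that depends on this state ...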

You should also use store_event() to store events in a similar way to how the application does in production. Storing events requires your test to inherit from SnubaTestCase. When using store_event(), take care to set a timestamp in the past on the event. When omitted, the timestamp defaults to 'now', which can result in events not being picked up due to timestamp boundaries.

from sentry.testutils.helpers.datetime import before_now
from sentry.utils.samples import load_data

def test_query(self):
    data = load_data("python", timestamp=before_now(minutes=1))
    event = self.store_event(data, project_id=self.project.id)

Setting options and feature flags

If your tests are for feature-flagged endpoints, or require specific options to be set, you can use helper methods to put the configuration data into the right state:

def test_success(self):
    with self.feature('organization:new-thing'):
        with self.options({'option': 'value'}):
            # Run test logic with the feature and options set.
            ...

    # Disable the new-thing feature.
    with self.feature({'organization:new-thing': False}):
        # Run your logic with the feature off.
        ...

Notes on Database Tests

For the love of god, please stop writing tests using the Django test case. However, if for whatever reason you are extending one of them and do not feel motivated enough to convert it into a function-style test, be extra careful about using it for non-database-related functionality.

The Django TestCase class incurs an incredibly high cost due to database management, and not all tests require the database. To check whether your new tests are unnecessarily using the database, export the SENTRY_DETECT_TESTCASE_MISUSE environment variable and set it to 1:

SENTRY_DETECT_TESTCASE_MISUSE=1 pytest my_new_test.py

If the test runner detects that you used the Django TestCase class but you did not end up needing it, it will yell at you. This protects you against other developers yelling at you later for slowing down CI.

External Services

Use the responses library to add stub responses for any outbound API requests your code makes. This will help you simulate success and failure scenarios with relative ease.
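
For example, a test that exercises code making an outbound HTTP call could stub the remote endpoint like this (the URL and payload are illustrative):

import responses


@responses.activate
def test_fetch_repositories(self):
    # Stub the outbound request so no real network call is made.
    responses.add(
        responses.GET,
        "https://api.example.com/repos",
        json=[{"id": 1, "name": "example-repo"}],
        status=200,
    )
    # Call the code under test that makes the request and assert on the result.
    # Register an error response (e.g. status=503) in another test to cover failures.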

Working with time reliably

When writing tests related to ingesting events, we have to operate within the constraint that events cannot be older than 30 days. Because all events must be recent, we cannot use traditional time-freezing strategies to get consistent data in tests. Instead of choosing arbitrary points in time, we work backwards from the present, and have a few helper functions to do so:

from sentry.testutils.helpers.datetime import before_now, iso_format

five_min_ago = before_now(minutes=5)
iso_timestamp = iso_format(five_min_ago)

These functions generate datetime objects and ISO 8601 formatted datetime strings relative to the present, enabling you to place events at known time offsets without violating the 30-day constraint that Relay imposes.

Inspecting SQL queries in tests

Add the following to conftest.py in the project root:

import pytest


@pytest.fixture(scope="function", autouse=True)
def log_sql(request):
    from django.conf import settings
    from django.db import connections, reset_queries

    reset_queries()
    settings.DEBUG = True

    yield

    print(f"\nQuery Log from {request.node.nodeid}")
    for connection in connections.all():
        for query in connection.queries:
            print(f"{connection.alias}: {query['sql']}")

Now all SQL executed during tests will be printed to stdout. It's recommended to narrow down the output by using pytest's -k selector. Also note that you'll need to pass -s to see stdout.

Silo modes

The test suite is aware of silo modes, and by default tests will run in REGION mode. To have tests run in CONTROL mode, you can use the control_silo_test decorator. To have tests run in all silo modes, you can use all_silo_test:

from sentry.testutils.silo import all_silo_test, control_silo_test

@control_silo_test
class UserDetailsTest(TestCase):
    ...

@all_silo_test
class AllSiloUserDetailsTest(TestCase):
    ...

You’ll often need to create or update user records when writing tests for region silo code. If you use the user model directly, you’ll encounter failures related to incorrect silo boundaries. To avoid those errors, you can temporarily switch the active silo mode.

from sentry.silo import SiloMode
from sentry.testutils.silo import assume_test_silo_mode_of

with assume_test_silo_mode_of(UserOption):
    UserOption.objects.clear_local_cache()

The assume_test_silo_mode_of(model) helper will temporarily switch the active silo mode to match the silo mode of the model passed in. If you'd like to name the silo mode explicitly, you can use assume_test_silo_mode(SiloMode.CONTROL) to switch into control mode.
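
For example, a region-mode test that needs to touch control-silo state directly could wrap that access like so (a minimal sketch):

from sentry.silo import SiloMode
from sentry.testutils.silo import assume_test_silo_mode

# Temporarily run the enclosed block as if we were in the control silo.
with assume_test_silo_mode(SiloMode.CONTROL):
    user = self.create_user(email="admin@example.com")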

Flushing outbox messages

If your test creates or updates records that use asynchronous outbox messages, you can force outbox delivery and processing using outbox_runner:

from sentry.testutils.outbox import outbox_runner

with outbox_runner():
    member = OrganizationMember(
        organization=self.organization,
        email="foo@example.com"
    )
    member.token = member.generate_token()
    member.save()

The above code will create a member and flush the relevant outbox messages to be delivered, synchronizing data into control silo tables. You can also use outbox_runner in isolation to flush outbox-driven state like audit logs:

response = self.get_success_response(
    self.project.organization.slug, self.project.slug, status_code=302
)
# Flush outboxes for audit logs
with outbox_runner():
    pass

with assume_test_silo_mode_of(AuditLogEntry):
    assert (
        AuditLogEntry.objects.get(
            organization_id=self.project.organization_id,
            event=audit_log.get_event_id("PROJECT_EDIT"),
        ).data.get("old_slug")
        == self.project.slug
    )

Transaction Safety

By default the test suite will validate a few data consistency and safety constraints. The disallowed operations are:

  1. Transactions cannot span database connections. We do not allow a transaction to contain changes across different database connections. For example, models on the default and crons database connections cannot participate in the same transaction.
  2. Transactions cannot include RPC calls. Because RPC calls can span the network, they cannot participate in a transaction.
  3. Any model using outboxes cannot be modified with QuerySet.update() or other update operations that would not generate outboxes.

Disabling transaction safety

While the transaction safety tooling provides a good foundation, there are scenarios where we need to bypass these checks. An example of this is performing a read-only RPC call within a transaction, or mutating a record to reproduce an invalid state. To bypass an outbox consistency check, use unguarded_write:

from django.db import router
from sentry.silo import unguarded_write

with unguarded_write(using=router.db_for_write(User)):
    user.update(name="sara")

To bypass an RPC-within-transaction check, use in_test_hide_transaction_boundary:

from sentry.db.postgres.transactions import in_test_hide_transaction_boundary

with in_test_hide_transaction_boundary():
    users = user_service.get_many(filter={"user_ids": list(owners)})

Acceptance Tests

Our acceptance tests leverage Selenium and ChromeDriver to simulate a user interacting with the front-end application and the entire backend stack. We use acceptance tests for two purposes at Sentry:

  1. To cover workflows that are not possible to cover with just endpoint tests or with Jest alone.
  2. To prepare snapshots for visual regression tests via our visual regression GitHub Actions.

Acceptance tests can be found in tests/acceptance and run locally with pytest.

Running acceptance tests

When you run acceptance tests, webpack will run automatically to build static assets. If you change JavaScript files while creating or modifying acceptance tests, you'll need to rm .webpack.meta after each change to trigger a rebuild of the static assets.

# Run a single acceptance test.
pytest tests/acceptance/test_organization_group_index.py \
    -k test_with_onboarding

# Run the browser with a head so you can watch it.
pytest tests/acceptance/test_organization_group_index.py \
    --no-headless=true \
    -k test_with_onboarding

# Run the browser with an artificially slow network (useful for debugging
# flakey tests).
pytest tests/acceptance/test_organization_group_index.py \
    --no-headless=true \
    --slow-network=true \
    -k test_with_onboarding

# Open each snapshot image
SENTRY_SCREENSHOT=1 VISUAL_SNAPSHOT_ENABLE=1 \
    pytest tests/acceptance/test_organization_group_index.py \
    -k test_with_onboarding

Locating Elements

Because we use Emotion, our class names are generally not useful for browser automation. Instead, we liberally use data-test-id attributes to define hook points for browser automation and Jest tests.

Dealing with Asynchronous actions

All of our data is loaded asynchronously into the front-end, and acceptance tests need to account for various latencies and response times. We favour using Selenium's wait_until* features to poll the DOM until elements are present or visible. We do not use sleep().
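
For example, an acceptance test typically waits for a data-test-id hook point to appear before interacting with it (a sketch; the path attribute and data-test-id values are illustrative):

def test_onboarding_flow(self):
    self.login_as(self.user)
    self.browser.get(self.path)
    # Poll the DOM until the hook point is present instead of sleeping.
    self.browser.wait_until('[data-test-id="awaiting-events"]')
    self.browser.click('[data-test-id="install-instructions"]')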

Dealing with always changing data

Because visual regression compares image snapshots, and a significant portion of our data deals with time series, we often need to replace time-based content with 'fixed' data. You can use the getDynamicText helper to provide fixed content for components/data that depend on the current time or vary too frequently to be included in a visual snapshot.

Jest Tests

Our Jest suite provides functional and unit testing for frontend components. We favour writing functional tests that interact with components and observe outcomes (navigation, API calls) over checking prop passing and state mutations. See the Frontend Handbook for more Jest testing tips.

# Run jest in interactive mode
yarn test

# Run a single test
yarn test tests/js/spec/views/issueList/overview.spec.js

API Fixtures

Because our Jest tests run without an API, we have a variety of fixture builders available to help generate API response payloads. They are available for import under the alias sentry-fixture/*. New fixtures can be created inside sentry/fixtures/js-stubs/.

You should also use MockApiClient.addMockResponse() to set responses for the API calls your components will make. Failing to mock an endpoint will result in test failures.
