Health checks

Hypothesis tries to detect common mistakes and things that will cause difficulty at run time through a number of ‘health checks’.

These include detecting and warning about:

  • Strategies with very slow data generation

  • Strategies which filter out too much

  • Recursive strategies which branch too much

  • Tests that are unlikely to complete in a reasonable amount of time

If any of these scenarios are detected, Hypothesis will emit a warning about them.

The general goal of these health checks is to warn you about things that might appear to work, but that will either stop Hypothesis from working correctly or make it perform badly.

To selectively disable health checks, use the suppress_health_check setting. Its value is a list of members of the HealthCheck enum, and using a value of HealthCheck.all() will disable all health checks.
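
For example, a minimal sketch of suppressing a single health check for one test (the strategy and test body here are purely illustrative):

    from hypothesis import HealthCheck, given, settings, strategies as st

    # Suppress only the too_slow check for this test; all other health checks
    # still apply.
    @settings(suppress_health_check=[HealthCheck.too_slow])
    @given(st.lists(st.integers(), min_size=50))
    def test_sum_is_order_independent(xs):
        assert sum(xs) == sum(reversed(xs))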

class hypothesis.HealthCheck(value)

Arguments for suppress_health_check.

Each member of this enum is a type of health check to suppress.

data_too_large = 1

Checks if too many examples are aborted for being too large.

This is measured by the number of random choices that Hypothesis makes in order to generate something, not the size of the generated object. For example, choosing a 100MB object from a predefined list would take only a few bits, while generating 10KB of JSON from scratch might trigger this health check.
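
As a rough sketch of that distinction (the strategies below are arbitrary illustrations, not part of the health check itself):

    from hypothesis import strategies as st

    # Cheap in choice terms: picking one element from a fixed list takes only
    # a few bits, however large the chosen object is.
    big_blobs = st.sampled_from([b"\x00" * 100_000_000, b"small"])

    # Expensive in choice terms: building nested JSON-like values from scratch
    # makes many individual choices, and is more likely to trip this check.
    json_values = st.recursive(
        st.none() | st.booleans() | st.floats() | st.text(),
        lambda children: st.lists(children) | st.dictionaries(st.text(), children),
    )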

filter_too_much = 2

Check for when the test is filtering out too many examples, either through use of assume() or filter(), or occasionally for Hypothesis internal reasons.
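
As an illustrative sketch, a test that rejects almost every generated value is likely to trigger this check, and can usually be rewritten to generate conforming values directly:

    from hypothesis import assume, given, strategies as st

    # Likely to trigger filter_too_much: almost every generated integer is
    # rejected by assume().
    @given(st.integers())
    def test_multiples_of_1000(n):
        assume(n % 1000 == 0)
        assert n % 1000 == 0

    # Better: construct conforming values directly instead of filtering.
    @given(st.integers().map(lambda k: k * 1000))
    def test_multiples_of_1000_direct(n):
        assert n % 1000 == 0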

too_slow = 3

Check for when your data generation is extremely slow and likely to hurt testing.

return_value = 5

Checks if your tests return a non-None value (which will be ignored and is unlikely to do what you want).
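
For instance (an illustrative sketch), the first test below returns a value instead of asserting it, so it can never fail on that condition; the second asserts a property instead:

    from hypothesis import given, strategies as st

    # Triggers return_value: the returned bool is silently ignored.
    @given(st.integers())
    def test_accidentally_returns(n):
        return n == abs(n)

    # What was probably intended: assert the property.
    @given(st.integers())
    def test_asserts(n):
        assert abs(n) >= 0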

large_base_example = 7

Checks if the natural example to shrink towards is very large.
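
A hedged illustration: a strategy whose simplest possible value is already big, such as a list with a large minimum size, can run into this check.

    from hypothesis import given, strategies as st

    # Even the smallest example here is a 1000-element list, so the 'base'
    # example Hypothesis would shrink towards is itself very large.
    @given(st.lists(st.integers(), min_size=1000))
    def test_with_large_base_example(xs):
        assert len(xs) >= 1000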

not_a_test_method = 8

Checks if @given has been applied to a method defined by unittest.TestCase (i.e. not a test).
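
For example (a sketch), @given belongs on your own test_* methods, not on methods that unittest.TestCase itself defines, such as setUp:

    import unittest
    from hypothesis import given, strategies as st

    class MyTests(unittest.TestCase):
        # Decorating setUp (a method defined by unittest.TestCase, not a test)
        # with @given is what this check guards against; decorate your own
        # test methods instead, as here.
        @given(st.integers())
        def test_integers(self, n):
            self.assertIsInstance(n, int)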

function_scoped_fixture = 9

Check if @given has been applied to a test with a pytest function-scoped fixture. Function-scoped fixtures run once for the whole function, not once per example, and this is usually not what you want.

Because of this limitation, tests that need to set up or reset state for every example must do so manually within the test itself, typically using an appropriate context manager.

Suppress this health check only in the rare case that you are using a function-scoped fixture that does not need to be reset between individual examples, but for some reason you cannot use a wider fixture scope (e.g. session scope, module scope, class scope).

This check requires the Hypothesis pytest plugin, which is enabled by default when running Hypothesis inside pytest.
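
As a sketch of the manual-reset pattern described above, with a hypothetical in-memory stand-in for whatever per-example state your test actually needs:

    from contextlib import contextmanager
    from hypothesis import given, strategies as st

    # Hypothetical stand-in for per-example state (a database handle, a temp
    # directory, etc.); replace with whatever your test really uses.
    @contextmanager
    def fresh_state():
        state = []
        try:
            yield state
        finally:
            state.clear()

    @given(st.integers())
    def test_with_fresh_state(n):
        # Set up and reset state inside the test body, so it happens once per
        # generated example rather than once per test function (as a
        # function-scoped pytest fixture would).
        with fresh_state() as items:
            items.append(n)
            assert len(items) == 1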

Deprecations

We also use a range of custom exception and warning types, so you can see exactly where an error came from, or turn only our warnings into errors.

class hypothesis.errors.HypothesisDeprecationWarning

A deprecation warning issued by Hypothesis.

Actually inherits from FutureWarning, because DeprecationWarning is hidden by the default warnings filter.

You can configure the Python warnings module to handle these warnings differently from others, either turning them into errors or suppressing them entirely. Obviously we would prefer the former!
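
For example, a minimal sketch using the standard warnings module:

    import warnings
    from hypothesis.errors import HypothesisDeprecationWarning

    # Turn Hypothesis deprecation warnings into errors so they fail loudly
    # (for example in CI); use "ignore" instead to suppress them entirely.
    warnings.simplefilter("error", category=HypothesisDeprecationWarning)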

Deprecated features will continue to emit warnings for at least six months, and will then be removed in the following major release. Note, however, that not all warnings are subject to this grace period; sometimes we strengthen validation by adding a warning, and these warnings may become errors immediately at a major release.