Hypothesis tries to have good defaults for its behaviour, but sometimes that’s not enough and you need to tweak it.

The mechanism for doing this is the settings object. You can set up a @given-based test to use this with a settings decorator:

@given invocation is as follows:

from hypothesis import given, settings
from hypothesis import strategies as st

@given(st.integers())
@settings(max_examples=500)
def test_this_thoroughly(x):
    pass

This uses a settings object which causes the test to receive a much larger set of examples than normal.

This may be applied either before or after the given and the results are the same. The following is exactly equivalent:

from hypothesis import given, settings
from hypothesis import strategies as st

@settings(max_examples=500)
@given(st.integers())
def test_this_thoroughly(x):
    pass

Available settings

class hypothesis.settings(parent=None, **kwargs)[source]

A settings object controls a variety of parameters that are used in falsification. These may control both the falsification strategy and the details of the data that is generated.

Default values are picked up from the settings.default object and changes made there will be picked up in newly created settings.


buffer_size

The size of the underlying data used to generate examples. If you need to generate really large examples you may want to increase this, but it will make your tests slower.

default value: 8192


database

An instance of hypothesis.database.ExampleDatabase that will be used to save examples to and load previous examples from. May be None in which case no storage will be used, :memory: for an in-memory database, or any path for a directory-based example database.

default value: (dynamically calculated)


database_file

The file or directory location to save and load previously tried examples; :memory: for an in-memory cache or None to disable caching entirely.

default value: (dynamically calculated)

The database_file setting is deprecated in favor of the database setting, and will be removed in a future version. It only exists at all for complicated historical reasons and you should just use database instead.


deadline

If set, a time in milliseconds (which may be a float to express smaller units of time) that each individual example (i.e. each time your test function is called, not the whole decorated test) within a test is not allowed to exceed. Tests which take longer than that may be converted into errors (but will not necessarily be if close to the deadline, to allow some variability in test run time).

Set this to None to disable this behaviour entirely.

In future this will default to 200. For now, a HypothesisDeprecationWarning will be emitted if you exceed that default deadline and have not explicitly set a deadline yourself.

default value: not_set
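As a minimal sketch (assuming a recent Hypothesis release is installed; the test name is illustrative), disabling the per-example time limit looks like this:

```python
from hypothesis import given, settings
from hypothesis import strategies as st

# deadline=None disables the per-example time limit entirely;
# an integer value would instead be read as a number of milliseconds.
@settings(deadline=None)
@given(st.integers())
def test_without_deadline(x):
    # deliberately trivial body, so no example can run overtime
    pass

test_without_deadline()
```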


derandomize

If this is True then Hypothesis will run in deterministic mode, where each falsification uses a random number generator that is seeded based on the hypothesis to falsify, which will be consistent across multiple runs. This has the advantage that it will eliminate any randomness from your tests, which may be preferable for some situations. It does have the disadvantage of making your tests less likely to find novel breakages.

default value: False
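A sketch of the determinism this buys (the test name and results list are illustrative, not part of any API): running the same derandomized test twice yields the same sequence of examples.

```python
from hypothesis import given, settings
from hypothesis import strategies as st

results = []

@settings(derandomize=True, max_examples=20)
@given(st.integers())
def record_examples(x):
    results.append(x)

record_examples()            # first run
first_run = list(results)
results.clear()
record_examples()            # second run repeats the same seeded sequence
```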


max_examples

Once this many satisfying examples have been considered without finding any counter-example, falsification will terminate.

default value: 100


max_iterations

This doesn’t actually do anything, but remains for compatibility reasons.

default value: not_set

The max_iterations setting has been disabled, as internal heuristics are more useful for this purpose than a user setting. It no longer has any effect.


max_shrinks

Once this many successful shrinks have been performed, Hypothesis will assume something has gone a bit wrong and give up rather than continuing to try to shrink the example.

default value: 500


min_satisfying_examples

This doesn’t actually do anything, but remains for compatibility reasons.

default value: not_set

The min_satisfying_examples setting has been deprecated and disabled, due to overlap with the filter_too_much healthcheck and poor interaction with the max_examples setting.


perform_health_check

If set to True, Hypothesis will run a preliminary health check before attempting to actually execute your test.

default value: not_set

This setting is deprecated, as perform_health_check=False duplicates the effect of suppress_health_check=HealthCheck.all(). Use that instead!


phases

Control which phases should be run. See the full documentation for more details.

default value: (<Phase.explicit: 0>, <Phase.reuse: 1>, <Phase.generate: 2>, <Phase.shrink: 3>)


print_blob

Determines whether to print blobs after tests that can be used to reproduce failures.

See the documentation on @reproduce_failure for more details of this behaviour.

default value: <PrintSettings.INFER: 1>


stateful_step_count

Number of steps to run a stateful program for before giving up on it breaking.

default value: 50


strict

Strict mode has been deprecated in favor of Python’s standard warnings controls. Ironically, enabling it is therefore an error - it only exists so that users get the right type of error!

default value: False

Strict mode is deprecated and will go away in a future version of Hypothesis. To get the same behaviour, use warnings.simplefilter('error', HypothesisDeprecationWarning).


suppress_health_check

A list of health checks to disable.

default value: ()
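For example (the variable name is illustrative), you pass the specific HealthCheck members you want silenced; all other checks still run:

```python
from hypothesis import HealthCheck, settings

# Suppress only the too_slow health check for a known-slow property.
slow_but_fine = settings(suppress_health_check=[HealthCheck.too_slow])
```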


timeout

Once this many seconds have passed, falsify will terminate even if it has not found many examples. This is a soft rather than a hard limit - Hypothesis won’t e.g. interrupt execution of the called function to stop it. If this value is <= 0 then no timeout will be applied.

default value: 60

The timeout setting is deprecated and will be removed in a future version of Hypothesis. To get the future behaviour set timeout=hypothesis.unlimited instead (which will remain valid for a further deprecation period after this setting has gone away).


use_coverage

Whether to use coverage information to improve Hypothesis’s ability to find bugs.

You should generally leave this turned on unless your code performs poorly when run under coverage. If you turn it off, please file a bug report or add a comment to an existing one about the problem that prompted you to do so.

default value: True


verbosity

Control the verbosity level of Hypothesis messages.

default value: Verbosity.normal

Controlling What Runs

Hypothesis divides tests into four logically distinct phases:

  1. Running explicit examples provided with the @example decorator.
  2. Rerunning a selection of previously failing examples to reproduce a previously seen error.
  3. Generating new examples.
  4. Attempting to shrink an example found in phases 2 or 3 to a more manageable one (explicit examples cannot be shrunk).

The phases setting provides you with fine grained control over which of these run, with each phase corresponding to a value on the Phase enum:

  1. Phase.explicit controls whether explicit examples are run.
  2. Phase.reuse controls whether previous examples will be reused.
  3. Phase.generate controls whether new examples will be generated.
  4. Phase.shrink controls whether examples will be shrunk.

The phases argument accepts a collection with any subset of these. e.g. settings(phases=[Phase.generate, Phase.shrink]) will generate new examples and shrink them, but will not run explicit examples or reuse previous failures, while settings(phases=[Phase.explicit]) will only run the explicit examples.
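As a sketch (the test name and the seen list are illustrative), restricting phases to Phase.explicit means only inputs supplied via @example ever reach the test body:

```python
from hypothesis import Phase, example, given, settings
from hypothesis import strategies as st

seen = []

@settings(phases=[Phase.explicit])  # skip reuse, generate and shrink
@given(st.integers())
@example(42)
def runs_only_explicit(x):
    seen.append(x)

runs_only_explicit()
# only the explicit example 42 was executed
```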

Seeing intermediate results

To see what’s going on while Hypothesis runs your tests, you can turn up the verbosity setting. This works with both find() and @given.

>>> from hypothesis import find, settings, Verbosity
>>> from hypothesis.strategies import lists, integers
>>> find(lists(integers()), any, settings=settings(verbosity=Verbosity.verbose))
Tried non-satisfying example []
Found satisfying example [-1198601713, -67, 116, -29578]
Shrunk example to [-67, 116, -29578]
Shrunk example to [116, -29578]
Shrunk example to [-29578]
Shrunk example to [-115]
Shrunk example to [115]
Shrunk example to [-57]
Shrunk example to [29]
Shrunk example to [-14]
Shrunk example to [-7]
Shrunk example to [4]
Shrunk example to [2]
Shrunk example to [1]

The four levels are quiet, normal, verbose and debug. normal is the default, while in quiet mode Hypothesis will not print anything out, not even the final falsifying example. debug is basically verbose but a bit more so. You probably don’t want it.
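For instance (the variable name is illustrative), quiet mode is selected like any other setting:

```python
from hypothesis import settings, Verbosity

# Verbosity.quiet suppresses all of Hypothesis's own output,
# including the final falsifying example.
silent = settings(verbosity=Verbosity.quiet)
```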

If you are using pytest, you may also need to disable output capturing for passing tests.

Building settings objects

Settings can be created by calling settings with any of the available settings values. Any absent ones will be set to defaults:

>>> from hypothesis import settings
>>> settings().max_examples
100
>>> settings(max_examples=10).max_examples
10

You can also pass a ‘parent’ settings object as the first argument, and any settings you do not specify as keyword arguments will be copied from the parent settings:

>>> parent = settings(max_examples=10)
>>> child = settings(parent, deadline=200)
>>> parent.max_examples == child.max_examples == 10
True
>>> parent.deadline
not_set
>>> child.deadline
200

Default settings

At any given point in your program there is a current default settings, available as settings.default. As well as being a settings object in its own right, all newly created settings objects which are not explicitly based off another settings are based off the default, so will inherit any values that are not explicitly set from it.

You can change the defaults by using profiles (see next section), but you can also override them locally by using a settings object as a context manager

>>> with settings(max_examples=150):
...     print(settings.default.max_examples)
...     print(settings().max_examples)
150
150
>>> settings().max_examples
100

Note that after the block exits the default is returned to normal.

You can use this by nesting test definitions inside the context:

from hypothesis import given, settings
from hypothesis import strategies as st

with settings(max_examples=500):
    @given(st.integers())
    def test_this_thoroughly(x):
        pass

All settings objects created or tests defined inside the block will inherit their defaults from the settings object used as the context. You can still override them with custom defined settings of course.

Warning: If you define test functions which don’t use @given inside a context block, these will not use the enclosing settings. This is because the context manager only affects the definition, not the execution of the function.

settings Profiles

Depending on your environment you may want different default settings. For example: during development you may want to lower the number of examples to speed up the tests. However, in a CI environment you may want more examples so you are more likely to find bugs.

Hypothesis allows you to define different settings profiles. These profiles can be loaded at any time.

Loading a profile changes the default settings but will not change the behavior of tests that explicitly change the settings.

>>> from hypothesis import settings
>>> settings.register_profile("ci", max_examples=1000)
>>> settings().max_examples
100
>>> settings.load_profile("ci")
>>> settings().max_examples
1000

Instead of loading the profile and overriding the defaults you can retrieve profiles for specific tests.

>>> with settings.get_profile("ci"):
...     print(settings().max_examples)
1000

Optionally, you may define an environment variable to load a profile for you. This is the suggested pattern for running your tests on CI. The code below should run in a conftest.py or any setup/initialization section of your test suite. If this variable is not defined, the Hypothesis defined defaults will be loaded.

>>> import os
>>> from hypothesis import settings, Verbosity
>>> settings.register_profile("ci", max_examples=1000)
>>> settings.register_profile("dev", max_examples=10)
>>> settings.register_profile("debug", max_examples=10, verbosity=Verbosity.verbose)
>>> settings.load_profile(os.getenv(u'HYPOTHESIS_PROFILE', 'default'))

If you are using the hypothesis pytest plugin and your profiles are registered by your conftest you can load one with the command line option --hypothesis-profile.

$ pytest tests --hypothesis-profile <profile-name>


Timeouts

The timeout functionality of Hypothesis is being deprecated, and will eventually be removed. For the moment, the timeout setting can still be set and the old default timeout of one minute remains.

If you want to future proof your code you can get the future behaviour by setting it to the value hypothesis.unlimited.

from hypothesis import given, settings, unlimited
from hypothesis import strategies as st

@settings(timeout=unlimited)
@given(st.integers())
def test_something_slow(i):
    ...

This will cause your code to run until it hits the normal Hypothesis example limits, regardless of how long it takes. timeout=unlimited will remain a valid setting after the timeout functionality has been deprecated (but will then have its own deprecation cycle).

There is however now a timing related health check which is designed to catch tests that run for ages by accident. If you really want your test to run forever, the following code will enable that:

from hypothesis import given, settings, unlimited, HealthCheck
from hypothesis import strategies as st

@settings(timeout=unlimited, suppress_health_check=[
    HealthCheck.hung_test
])
@given(st.integers())
def test_something_slow(i):
    ...