Projects extending Hypothesis

Hypothesis has been eagerly used and extended by the open source community. This page lists extensions and applications; you can find more or newer packages by searching PyPI by keyword, filtering by classifier, or searching libraries.io.

If there’s something missing which you think should be here, let us know!

Note

Being listed on this page does not imply that the Hypothesis maintainers endorse a package.

External strategies

Some packages provide strategies directly, others provide a function to infer a strategy from some other schema, and still others offer a custom integration such as a “hypothesis” entry point:

  • deal is a design-by-contract library with built-in Hypothesis support.

  • icontract-hypothesis infers strategies from icontract code contracts.

  • Pandera schemas all have a .strategy() method, which returns a strategy for matching DataFrames.

  • Pydantic automatically registers constrained types - so builds() and from_type() “just work” regardless of the underlying implementation (see the sketch after this list).
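
As a quick illustration of that last point, a test can rely on those registrations without naming any Pydantic-specific strategies. This is a sketch only, assuming a Pydantic and Hypothesis combination where constrained types resolve as described above:

# Sketch: relies on Pydantic constrained types resolving as described above.
from pydantic import BaseModel, PositiveInt

from hypothesis import given, strategies as st


class Order(BaseModel):
    item_id: PositiveInt
    quantity: PositiveInt


@given(st.builds(Order))
def test_order_quantities_are_positive(order):
    # builds() infers strategies for the model's fields, including the constraint.
    assert order.quantity >= 1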

Other cool things

schemathesis is a tool for testing web applications built with Open API / Swagger specifications. It reads the schema and generates test cases which will ensure that the application is compliant with its schema. The application under test can be written in any language; the only thing you need is a valid API schema in a supported format. It includes a CLI and convenient pytest integration. Powered by Hypothesis and hypothesis-jsonschema, and inspired by the earlier swagger-conformance library.

Trio is an async framework with “an obsessive focus on usability and correctness”, so naturally it works with Hypothesis! pytest-trio includes a custom hook that allows @given(...) to work with Trio-style async test functions, and hypothesis-trio includes stateful testing extensions to support concurrent programs.

pymtl3 is “an open-source Python-based hardware generation, simulation, and verification framework with multi-level hardware modeling support”, which ships with Hypothesis integrations to check that all of those levels are equivalent, from function-level to register-transfer level and even to hardware.

libarchimedes makes it easy to use Hypothesis in the Hy language, a Lisp embedded in Python.

battle_tested is a fuzzing tool that will show you how your code can fail - by trying all kinds of inputs and reporting whatever happens.

pytest-subtesthack functions as a workaround for issue #377.

returns uses Hypothesis to verify that Higher Kinded Types correctly implement functor, applicative, monad, and other laws, allowing a declarative approach to be combined with traditional pythonic code.

icontract-hypothesis includes a ghostwriter for test files and IDE integrations such as icontract-hypothesis-vim, icontract-hypothesis-pycharm, and icontract-hypothesis-vscode - you can run a quick ‘smoke test’ with only a few keystrokes for any type-annotated function, even if it doesn’t have any contracts!

Writing an extension

See CONTRIBUTING.rst for more information.

New strategies can be added to Hypothesis, or published as an external package on PyPI - either is fine for most strategies. If in doubt, ask!

It’s generally much easier to get things working outside of Hypothesis itself, because there’s more freedom to experiment and fewer requirements around stability and API style. We’re happy to review and help with external packages as well as pull requests!

If you’re thinking about writing an extension, please name it hypothesis-{something} - a standard prefix makes the community more visible and searching for extensions easier. And make sure you use the Framework :: Hypothesis trove classifier!

On the other hand, being inside gets you access to some deeper implementation features (if you need them) and better long-term guarantees about maintenance. We particularly encourage pull requests for new composable primitives that make implementing other strategies easier, or for widely used types in the standard library. Strategies for other things are also welcome; anything with external dependencies just goes in hypothesis.extra.
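
For a sense of what an external package involves, a minimal one often exposes just one or two strategy functions. Here is a sketch of a hypothetical hypothesis-semver package (not a real project, purely illustrative):

# hypothesis_semver/__init__.py - hypothetical package, illustrative sketch only
from hypothesis import strategies as st


@st.composite
def versions(draw, max_part=99):
    """Generate 'MAJOR.MINOR.PATCH' version strings."""
    parts = [draw(st.integers(min_value=0, max_value=max_part)) for _ in range(3)]
    return ".".join(str(part) for part in parts)

Downstream users would then write @given(versions()) exactly as they do with built-in strategies.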

Tools such as assertion helpers may also need to check whether the current test is using Hypothesis:

hypothesis.currently_in_test_context()

Return True if the calling code is currently running inside an @given or stateful test, False otherwise.

This is useful for third-party integrations and assertion helpers which may be called from traditional or property-based tests, but can only use assume() or target() in the latter case.
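
For example, an assertion helper might only feed observations back to Hypothesis when it is actually running under @given. A minimal sketch, where assert_close is a hypothetical helper name:

import hypothesis
from hypothesis import target


def assert_close(actual, expected, tolerance=1e-6):
    # Hypothetical helper, callable from both traditional and property-based tests.
    error = abs(actual - expected)
    if hypothesis.currently_in_test_context():
        # Only guide Hypothesis towards larger errors when inside a Hypothesis test.
        target(error, label="error")
    assert error <= tolerance, f"{actual} != {expected} (error {error})"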

Hypothesis integration via setuptools entry points

If you would like to ship Hypothesis strategies for a custom type - either as part of the upstream library, or as a third-party extension - there’s a catch: from_type() only works after the corresponding call to register_type_strategy(), and you’ll have the same problem with register_random(). This means that either

  • you have to try importing Hypothesis to register the strategy when your library is imported, though that’s only useful at test time, or

  • the user has to call a ‘register the strategies’ helper that you provide before running their tests

Entry points are Python’s standard way of automating the latter: when you register a "hypothesis" entry point in your setup.py, we’ll import and run it automatically when hypothesis is imported. Nothing happens unless Hypothesis is already in use, and it’s totally seamless for downstream users!

Let’s look at an example. You start by adding a function somewhere in your package that does all the Hypothesis-related setup work:

# mymodule.py


class MyCustomType:
    def __init__(self, x: int):
        assert x >= 0, f"got {x}, but only non-negative numbers are allowed"
        self.x = x


def _hypothesis_setup_hook():
    import hypothesis.strategies as st

    st.register_type_strategy(
        MyCustomType, st.builds(MyCustomType, x=st.integers(min_value=0))
    )

and then tell setuptools that this is your "hypothesis" entry point:

# setup.py

# You can list a module to import by dotted name
entry_points = {"hypothesis": ["_ = mymodule.a_submodule"]}

# Or name a specific function too, and Hypothesis will call it for you
entry_points = {"hypothesis": ["_ = mymodule:_hypothesis_setup_hook"]}

And that’s all it takes!
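
Once your package is installed, downstream users get working strategies with no extra setup. Here is a sketch of what their test might look like, assuming the mymodule example above is installed with its entry point:

from hypothesis import given, strategies as st

from mymodule import MyCustomType


@given(st.from_type(MyCustomType))
def test_custom_type_values_are_non_negative(value):
    # The entry point registered the strategy, so from_type() resolves it
    # without any explicit register_type_strategy() call in the test suite.
    assert value.x >= 0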

HYPOTHESIS_NO_PLUGINS

If this environment variable is set, automatic loading of all Hypothesis plugins is disabled. This is probably only useful for our own self-tests, but it is documented here in case it helps narrow down particularly weird bugs in complex environments.

Interaction with pytest-cov

Because pytest does not load plugins from entry points in any particular order, using the Hypothesis entry point may import your module before pytest-cov starts measuring coverage. This is a known issue, but there are workarounds.

You can use coverage run pytest ... instead of pytest --cov ..., opting out of the pytest plugin entirely. Alternatively, you can ensure that Hypothesis is loaded after coverage measurement starts by disabling the entry point, and loading our pytest plugin from your conftest.py instead:

echo "pytest_plugins = ['hypothesis.extra.pytestplugin']\n" > tests/conftest.py
pytest -p "no:hypothesispytest" ...

Alternative backends for Hypothesis

Warning

EXPERIMENTAL AND UNSTABLE.

The backend setting specifies the importable name of a backend which Hypothesis should use to generate primitive types. We aim to support heuristic-random, solver-based, and fuzzing-based backends.

See issue #3086 for details, e.g. if you’re interested in writing your own backend. Note that there is no stable interface for this; you’d be helping us work out what it should eventually look like, and we’re likely to make regular breaking changes for some time to come.

Using the prototype crosshair-tool backend via hypothesis-crosshair, a solver-backed test might look something like:

from hypothesis import given, settings, strategies as st


@settings(backend="crosshair")  # pip install hypothesis[crosshair]
@given(st.integers())
def test_needs_solver(x):
    assert x != 123456789