Third-party extensions¶
There are a number of open-source community libraries that extend Hypothesis. This page lists some of them; you can find more by searching PyPI by keyword or by framework classifier.
If there’s something missing which you think should be here, let us know!
Note
Being listed on this page does not imply that the Hypothesis maintainers endorse a package.
External strategies¶
Some packages provide strategies directly:
hypothesis-fspaths - strategy to generate filesystem paths.
hypothesis-geojson - strategy to generate GeoJSON.
hypothesis-geometry - strategies to generate geometric objects.
hs-dbus-signature - strategy to generate arbitrary D-Bus signatures.
hypothesis-sqlalchemy - strategies to generate SQLAlchemy objects.
hypothesis-ros - strategies to generate messages and parameters for the Robot Operating System.
hypothesis-csv - strategy to generate CSV files.
hypothesis-networkx - strategy to generate networkx graphs.
hypothesis-bio - strategies for bioinformatics data, such as DNA, codons, FASTA, and FASTQ formats.
hypothesis-rdkit - strategies to generate RDKit molecules and representations such as SMILES and mol blocks.
hypothesmith - strategy to generate syntactically-valid Python code.
hypothesis-torch - strategy to generate various PyTorch structures (including tensors and modules).
Others provide a function to infer a strategy from some other schema:
hypothesis-jsonschema - infer strategies from JSON schemas.
lollipop-hypothesis - infer strategies from lollipop schemas.
hypothesis-drf - infer strategies from a djangorestframework serialiser.
hypothesis-graphql - infer strategies from GraphQL schemas.
hypothesis-mongoengine - infer strategies from a mongoengine model.
hypothesis-pb - infer strategies from Protocol Buffer schemas.
Or some other custom integration, such as a “hypothesis” entry point:
deal is a design-by-contract library with built-in Hypothesis support.
icontract-hypothesis infers strategies from icontract code contracts.
pandera schemas all have a .strategy() method, which returns a strategy for matching DataFrames.
Pydantic automatically registers constrained types - so builds() and from_type() “just work” regardless of the underlying implementation.
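The same annotation-based inference works for plain dataclasses, with no third-party schema library involved. As a minimal sketch (the Point class here is invented for illustration):

```python
from dataclasses import dataclass

from hypothesis import given, strategies as st


@dataclass
class Point:
    x: int
    y: int


# from_type() infers a strategy from the type annotations, so there is
# no need to register anything for ordinary annotated classes.
@given(st.from_type(Point))
def test_point_fields_are_ints(p):
    assert isinstance(p.x, int) and isinstance(p.y, int)


test_point_fields_are_ints()
```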
Other cool things¶
Tyche (source) is a VSCode extension which provides live insights into your property-based tests, including the distribution of generated inputs and the resulting code coverage. You can read the research paper here.
schemathesis is a tool for testing web applications built with Open API / Swagger specifications. It reads the schema and generates test cases which will ensure that the application is compliant with its schema. The application under test could be written in any language, the only thing you need is a valid API schema in a supported format. Includes CLI and convenient pytest integration. Powered by Hypothesis and hypothesis-jsonschema, inspired by the earlier swagger-conformance library.
Trio is an async framework with “an obsessive focus on usability and correctness”, so naturally it works with Hypothesis! pytest-trio includes a custom hook that allows @given(...) to work with Trio-style async test functions, and hypothesis-trio includes stateful testing extensions to support concurrent programs.
pymtl3 is “an open-source Python-based hardware generation, simulation, and verification framework with multi-level hardware modeling support”, which ships with Hypothesis integrations to check that all of those levels are equivalent, from function-level to register-transfer level and even to hardware.
libarchimedes makes it easy to use Hypothesis in the Hy language, a Lisp embedded in Python.
battle-tested is a fuzzing tool that will show you how your code can fail - by trying all kinds of inputs and reporting whatever happens.
pytest-subtesthack functions as a workaround for issue #377.
returns uses Hypothesis to verify that Higher Kinded Types correctly implement functor, applicative, monad, and other laws; allowing a declarative approach to be combined with traditional pythonic code.
icontract-hypothesis includes a ghostwriter for test files and IDE integrations such as icontract-hypothesis-vim, icontract-hypothesis-pycharm, and icontract-hypothesis-vscode - you can run a quick ‘smoke test’ with only a few keystrokes for any type-annotated function, even if it doesn’t have any contracts!
Writing an extension¶
Note
See CONTRIBUTING.rst for more information.
New strategies can be added to Hypothesis, or published as an external package on PyPI - either is fine for most strategies. If in doubt, ask!
It’s generally much easier to get things working outside, because there’s more freedom to experiment and fewer requirements in stability and API style. We’re happy to review and help with external packages as well as pull requests!
If you’re thinking about writing an extension, please name it hypothesis-{something} - a standard prefix makes the community more visible and searching for extensions easier. And make sure you use the Framework :: Hypothesis trove classifier!
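For example, the packaging metadata for such an extension might look like the following sketch (the project name is a placeholder):

```toml
# pyproject.toml for a hypothetical "hypothesis-something" extension
[project]
name = "hypothesis-something"
classifiers = [
    "Framework :: Hypothesis",
]
```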
On the other hand, being inside gets you access to some deeper implementation features (if you need them) and better long-term guarantees about maintenance. We particularly encourage pull requests for new composable primitives that make implementing other strategies easier, or for widely used types in the standard library. Strategies for other things are also welcome; anything with external dependencies just goes in hypothesis.extra.
Tools such as assertion helpers may also need to check whether the current test is using Hypothesis:
- hypothesis.currently_in_test_context()[source]¶
Return True if the calling code is currently running inside an @given or stateful test, False otherwise.
This is useful for third-party integrations and assertion helpers which may be called from traditional or property-based tests, but can only use assume() or target() in the latter case.
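As a sketch, an assertion helper (the check_positive name here is invented for illustration) might branch on this like so:

```python
from hypothesis import assume, currently_in_test_context, given, strategies as st


def check_positive(x):
    # Inside a Hypothesis test, discard unsuitable inputs with assume();
    # in a traditional test, fail loudly with a plain assertion instead.
    if currently_in_test_context():
        assume(x > 0)
    else:
        assert x > 0, f"expected a positive number, got {x}"
    return x


@given(st.integers())
def test_with_helper(x):
    # Negative draws are silently rejected via assume() rather than failing.
    assert check_positive(x) > 0


test_with_helper()             # property-based path: assume() filters inputs
assert check_positive(5) == 5  # traditional path: plain assertion
```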
Hypothesis integration via entry points¶
If you would like to ship Hypothesis strategies for a custom type - either as part of the upstream library, or as a third-party extension - there’s a catch: from_type() only works after the corresponding call to register_type_strategy(), and you’ll have the same problem with register_random(). This means that either
- you have to try importing Hypothesis to register the strategy when your library is imported, though that’s only useful at test time, or
- the user has to call a ‘register the strategies’ helper that you provide before running their tests.
Entry points are Python’s standard way of automating the latter: when you register a "hypothesis" entry point in your pyproject.toml, we’ll import and run it automatically when hypothesis is imported. Nothing happens unless Hypothesis is already in use, and it’s totally seamless for downstream users!
Let’s look at an example. You start by adding a function somewhere in your package that does all the Hypothesis-related setup work:
# mymodule.py
class MyCustomType:
def __init__(self, x: int):
assert x >= 0, f"got {x}, but only positive numbers are allowed"
self.x = x
def _hypothesis_setup_hook():
import hypothesis.strategies as st
st.register_type_strategy(MyCustomType, st.integers(min_value=0))
and then declare this as your "hypothesis" entry point:
# pyproject.toml
# You can list a module to import by dotted name
[project.entry-points.hypothesis]
_ = "mymodule.a_submodule"
# Or name a specific function, and Hypothesis will call it for you
[project.entry-points.hypothesis]
_ = "mymodule:_hypothesis_setup_hook"
And that’s all it takes!
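Downstream users can then use from_type() without any imports or registration on their side. Here is a self-contained sketch of the effect, inlining the registration that the entry point would perform at import time (note it registers a builds(...) strategy so that from_type() produces actual instances):

```python
from hypothesis import given, strategies as st


class MyCustomType:
    def __init__(self, x: int):
        assert x >= 0, f"got {x}, but only positive numbers are allowed"
        self.x = x


# This is what the entry-point hook would do when Hypothesis is imported;
# with the entry point installed, users never call it themselves.
st.register_type_strategy(
    MyCustomType, st.builds(MyCustomType, st.integers(min_value=0))
)


@given(st.from_type(MyCustomType))
def test_custom_type_is_valid(value):
    assert value.x >= 0


test_custom_type_is_valid()
```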
- HYPOTHESIS_NO_PLUGINS¶
If set, disables automatic loading of all hypothesis plugins. This is probably only useful for our own self-tests, but documented in case it might help narrow down any particularly weird bugs in complex environments.
Interaction with pytest-cov¶
Because pytest does not load plugins from entrypoints in any particular order, using the Hypothesis entrypoint may import your module before pytest-cov starts. This is a known issue, but there are workarounds.
You can use coverage run pytest ... instead of pytest --cov ..., opting out of the pytest plugin entirely. Alternatively, you can ensure that Hypothesis is loaded after coverage measurement is started by disabling the entrypoint, and loading our pytest plugin from your conftest.py instead:
echo "pytest_plugins = ['hypothesis.extra.pytestplugin']\n" > tests/conftest.py
pytest -p "no:hypothesispytest" ...
Another alternative, which we use in our own CI self-tests because it also works well with parallel tests, is to automatically start coverage early in all new processes if an environment variable is set. This automatic starting is set up by the PyPI package coverage_enable_subprocess. This means all configuration must be done in .coveragerc, and not on the command line:
[run]
parallel = True
source = ...
Then, set the relevant environment variable and run normally:
python -m pip install coverage_enable_subprocess
export COVERAGE_PROCESS_START=$PWD/.coveragerc
pytest [-n auto] ...
coverage combine
coverage report
Alternative backends for Hypothesis¶
Warning
Alternative backends are experimental and not yet part of the public API. We may continue to make breaking changes as we finalize the interface.
Hypothesis supports alternative backends, which tell Hypothesis how to generate primitive types. This enables powerful generation techniques which are compatible with all parts of Hypothesis, including the database and shrinking.
Hypothesis includes the following backends:
- hypothesis
The default backend.
- hypothesis-urandom
The same as the default backend, but uses /dev/urandom to back the randomness behind its PRNG. The only reason to use this backend over the default is if you are also using Antithesis, in which case this enables Antithesis mutations to drive Hypothesis generation. /dev/urandom is not available on Windows, so we emit a warning and fall back to the hypothesis backend there.
- crosshair
Generates examples using SMT solvers like z3, which is particularly effective at satisfying difficult checks in your code, like if or == statements. Requires pip install hypothesis[crosshair].
You can change the backend for a test with the backend setting. For instance, after pip install hypothesis[crosshair], you can use crosshair to generate examples with SMT via the hypothesis-crosshair backend:
from hypothesis import given, settings, strategies as st
@settings(backend="crosshair") # pip install hypothesis[crosshair]
@given(st.integers())
def test_needs_solver(x):
assert x != 123456789
Failures found by alternative backends are saved to the database and shrink just like normally generated examples, and in general interact with every feature of Hypothesis as you would expect.