# ipytest - Pytest in Jupyter notebooks
PyPI | Usage | Global state | How does it work? | Changes | Reference | Development | Related packages | License
`ipytest` allows you to run Pytest in Jupyter notebooks. `ipytest` aims to give access to the full pytest experience and to make it easy to transfer tests out of notebooks into separate test files.
## Usage
Install `ipytest` by running

```bash
pip install ipytest
```
The suggested way to import `ipytest` is

```python
import ipytest

ipytest.autoconfig()
```
Afterwards, in a new cell, tests can be executed as in

```python
%%ipytest -qq

def test_example():
    assert [1, 2, 3] == [1, 2, 3]
```
This command will first delete any previously defined tests, execute the cell, and then run pytest. For further details on how to use `ipytest`, see the example notebook or the reference below.
## Global state
There are multiple sources of global state when using pytest inside the notebook:
- pytest will find any test function ever defined. This behavior can lead to unexpected results when test functions are renamed, as their previous definition is still available inside the kernel. Running `%%ipytest` by default deletes any previously defined tests. As an alternative, the `ipytest.clean()` function allows deleting previously defined tests.
- Python's module system caches imports and therefore acts as a global state. To test the most recent version of any module, the module needs to be reloaded. `ipytest` offers the `ipytest.force_reload()` function (see the sketch after this list). The `autoreload` extension of IPython may also help here. To test local packages, it is advisable to install them as development packages, e.g., `pip install -e .`.
- For async code, IPython will create an event loop in the current thread. This setup may interfere with async tests. To support these use cases, `ipytest` supports running tests in a separate thread. Simply set up ipytest via `ipytest.autoconfig(run_in_thread=True)`.
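
As a rough sketch of how these pieces fit together, assuming a hypothetical local package named `my_package` that was installed with `pip install -e .` (the package name is purely illustrative):

```python
import ipytest

ipytest.autoconfig()

# Remove stale test definitions left over from earlier cells
ipytest.clean()

# Ensure the next import of the (hypothetical) local package reloads it from disk
ipytest.force_reload("my_package")
import my_package

def test_package_imports():
    assert my_package is not None

ipytest.run()
```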
## How does it work?
In its default configuration (via `autoconfig()`), `ipytest` performs the following steps:
- Register pytest's assertion rewriter with the IPython kernel. The rewriter will rewrite any assert statements entered into the notebook to give better error messages. This change will also affect non-test code, but should generally improve the development experience.
- Ensure the notebook can be mapped to a file. `ipytest` will create a temporary file in the current directory and remove it afterwards.
- Register the notebook scope temporarily as a module. This step is necessary to allow pytest's doctest plugin to import the notebook.
- Call pytest with the name of the temporary module.
NOTE: Some notebook implementations modify the core IPython package and magics may not work correctly (see here or here). In this case, using `ipytest.run()` and `ipytest.clean()` directly should still work as expected.
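
In that situation, the magic-free workflow can be expressed in a single cell along these lines (a minimal sketch):

```python
import ipytest

ipytest.autoconfig(magics=False)

# delete previously defined tests, then define and run the current ones
ipytest.clean()

def test_example():
    assert [1, 2, 3] == [1, 2, 3]

ipytest.run("-qq")
```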
## Reference
`autoconfig` | `%%ipytest` | `config` | `exit_code` | `run` | `clean` | `force_reload` | `Error` | `ipytest.cov`
### `ipytest.autoconfig(rewrite_asserts=<default>, magics=<default>, clean=<default>, addopts=<default>, run_in_thread=<default>, defopts=<default>, display_columns=<default>, raise_on_error=<default>, coverage=<default>)`

Configure `ipytest` with reasonable defaults.

Specifically, it sets:

- `addopts`: `('-q', '--color=yes')`
- `clean`: `'[Tt]est*'`
- `coverage`: `False`
- `defopts`: `'auto'`
- `display_columns`: `100`
- `magics`: `True`
- `raise_on_error`: `False`
- `rewrite_asserts`: `True`
- `run_in_thread`: `False`

See `ipytest.config` for details.
### `%%ipytest ...`
<!-- minidoc "function": "ipytest._impl.ipytest_magic", "header": false, "header_depth": 3 -->
IPython magic to first execute the cell, then execute `ipytest.run()`.

Note: the magics are only available after running `ipytest.autoconfig()` or `ipytest.config(magics=True)`.

It cleans any previously found tests, i.e., only tests defined in the current cell are executed. To disable this behavior, use `ipytest.config(clean=False)`.
Any arguments passed on the magic line are interpreted as command line arguments to pytest. For example, calling the magic as `%%ipytest -qq` is equivalent to passing `-qq` to pytest.

The arguments are formatted using Python's standard string formatting. Currently, only the `{MODULE}` variable is understood. It is replaced with the filename associated with the notebook. In addition, node ids for tests can be generated by using the test name as a key, e.g., `{test_example}` will expand to `{MODULE}::test_example`.
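
For instance, the following cell passes `{test_example}` on the magic line; it expands to `{MODULE}::test_example`, so pytest runs only that test (a sketch; `test_other` is only there to show it is not selected):

```python
%%ipytest {test_example}

def test_example():
    assert True

def test_other():
    assert True
```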
The keyword arguments passed to `ipytest.run()` can be customized by including a comment of the form `# ipytest: arg1=value1, arg2=value2` in the cell source. For example:

```python
%%ipytest {MODULE}::test1
# ipytest: defopts=False
```

is equivalent to `ipytest.run("{MODULE}::test1", defopts=False)`. In this case, it deactivates default arguments and then instructs pytest to only execute `test1`.
NOTE: In the default configuration `%%ipytest` will not raise exceptions when tests fail. To raise exceptions on test errors, e.g., inside a CI/CD context, use `ipytest.autoconfig(raise_on_error=True)`.
### `ipytest.config(rewrite_asserts=<keep>, magics=<keep>, clean=<keep>, addopts=<keep>, run_in_thread=<keep>, defopts=<keep>, display_columns=<keep>, raise_on_error=<keep>, coverage=<default>)`

Configure `ipytest`.

To update the configuration, call this function as in

```python
ipytest.config(rewrite_asserts=True)
```
The following settings are supported:

- `rewrite_asserts` (default: `False`): enable ipython AST transforms globally to rewrite asserts
- `magics` (default: `False`): if set to `True`, register the ipytest magics
- `coverage` (default: `False`): if `True`, configure `pytest` to collect coverage information. This functionality requires the `pytest-cov` package to be installed. It adds `--cov --cov-config={GENERATED_CONFIG}` to the arguments when invoking `pytest`. WARNING: this option will hide existing coverage configuration files. See `ipytest.cov` for details
- `clean` (default: `'[Tt]est*'`): the pattern used to clean variables
- `addopts` (default: `()`): pytest command line arguments to prepend to every pytest invocation. For example, setting `ipytest.config(addopts=['-qq'])` will execute pytest with the least verbosity. Consider adding `--color=yes` to force color output
- `run_in_thread` (default: `False`): if `True`, pytest will be run in a separate thread. This way of running is required when testing async code with `pytest_asyncio`, since it starts a separate event loop
- `defopts` (default: `"auto"`): either `"auto"`, `True`, or `False`
  - if `"auto"`, `ipytest` will add the current notebook module to the command line arguments, if no pytest node ids that reference the notebook are provided by the user
  - if `True`, ipytest will add the current module to the arguments passed to pytest
  - if `False`, only the arguments given and `addopts` are passed to pytest
- `display_columns` (default: `100`): if not `False`, configure pytest to use the given number of columns for its output. This option will temporarily override the `COLUMNS` environment variable
- `raise_on_error` (default: `False`): if `True`, `ipytest.run` and `%%ipytest` will raise an `ipytest.Error` if pytest fails
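
For example, a configuration suited to a CI-style notebook run might look like this (a sketch; the particular option values are only illustrative):

```python
import ipytest

ipytest.config(
    rewrite_asserts=True,            # nicer assertion error messages
    magics=True,                     # enable the %%ipytest magic
    addopts=("-q", "--color=yes"),   # quiet, colored pytest output
    raise_on_error=True,             # raise ipytest.Error when pytest fails
)
```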
### `ipytest.exit_code`

The return code of the last pytest invocation.
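
A small sketch of using it, assuming pytest's convention that an exit code of `0` means all tests passed:

```python
ipytest.run()

# inspect the result of the last invocation programmatically
if ipytest.exit_code != 0:
    print("the last pytest invocation reported failures")
```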
<!-- minidoc "function": "ipytest.run", "header_depth": 3 -->ipytest.run(*args, module=None, plugins=(), run_in_thread=<default>, raise_on_error=<default>, addopts=<default>, defopts=<default>, display_columns=<default>, coverage=<default>)
Execute all tests in the passed module (defaults to __main__
) with pytest.
This function is a thin wrapper around pytest.main
and will execute any tests
defined in the current notebook session.
NOTE: In the default configuration ipytest.run()
will not raise
exceptions, when tests fail. To raise exceptions on test errors, e.g.,
inside a CI/CD context, use ipytest.autoconfig(raise_on_error=True)
.
Parameters:

- `args`: additional command line options passed to pytest
- `module`: the module containing the tests. If not given, `__main__` will be used.
- `plugins`: additional plugins passed to pytest.

The following parameters override the config options set with `ipytest.config()` or `ipytest.autoconfig()`:

- `run_in_thread`: if given, override the config option "run_in_thread".
- `raise_on_error`: if given, override the config option "raise_on_error".
- `addopts`: if given, override the config option "addopts".
- `defopts`: if given, override the config option "defopts".
- `display_columns`: if given, override the config option "display_columns".

Returns: the exit code of `pytest.main`.
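
For example, to pass extra pytest flags for a single invocation and capture the exit code (a sketch using standard pytest options):

```python
# -x stops at the first failure, --tb=short shortens tracebacks
exit_code = ipytest.run("-x", "--tb=short")
```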
### `ipytest.clean(pattern=<default>, *, module=None)`

Delete tests with names matching the given pattern.

In IPython the results of all evaluations are kept in global variables unless explicitly deleted. This behavior implies that when tests are renamed, the previous definitions will still be found if not deleted. This method aims to simplify this process.

An effective pattern is to start the cell containing tests with a call to `ipytest.clean()`, then define all test cases, and finally call `ipytest.run()`. This way renaming tests works as expected.
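
Expressed as a single notebook cell, that pattern looks like this (a minimal sketch):

```python
ipytest.clean()

def test_renamed_example():
    assert 1 + 1 == 2

ipytest.run()
```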
Parameters:

- `pattern`: a glob pattern used to match the tests to delete. If not given, the `"clean"` config option is used.
- `module`: the globals object containing the tests. If `None` is given, the globals object is determined from the call stack.
### `ipytest.force_reload(*include, modules: Optional[Dict[str, module]] = None)`

Ensure subsequent imports of the listed modules reload the code from disk.

The given modules and their submodules are removed from `sys.modules`. The next time the modules are imported, they are loaded from disk.

If given, the parameter `modules` should be a dictionary of modules to use instead of `sys.modules`.

Usage:

```python
ipytest.force_reload("my_package")
from my_package.submodule import my_function
```
<!-- minidoc -->
<!-- minidoc "class": "ipytest.Error", "header_depth": 3 -->
### `ipytest.Error(exit_code)`

Error raised by ipytest on test failure.
<!-- minidoc -->
<!-- minidoc "module": "ipytest.cov", "header_depth": 3 -->
### `ipytest.cov`

A coverage.py plugin to support coverage in Jupyter notebooks.

The plugin must be enabled in a `.coveragerc` next to the current notebook or in the `pyproject.toml` file. See the coverage.py docs for details. In case of a `.coveragerc` file, the minimal configuration reads:

```ini
[run]
plugins =
    ipytest.cov
```
With this config file, coverage information can be collected using pytest-cov with

```python
%%ipytest --cov

def test():
    ...
```
`ipytest.autoconfig(coverage=True)` automatically adds the `--cov` flag and the path of a generated config file to the Pytest invocation. In this case, no further configuration is required.
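
Put differently, a notebook can enable coverage collection without any config file along these lines (a minimal sketch):

```python
import ipytest

# generates the coverage config and adds --cov to every pytest invocation
ipytest.autoconfig(coverage=True)
```

Any subsequent `%%ipytest` cell will then report coverage without an explicit `--cov` flag on the magic line.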
There are some known issues of `ipytest.cov`:

- Each notebook cell is reported as an individual file
- Lines that are executed at import time may not be encountered in tracing and may be reported as not covered (one example is the line of a function definition)
- Marking code to be excluded in branch coverage is currently not supported (incl. coveragepy pragmas)
### `ipytest.cov.translate_cell_filenames(enabled=True)`

Translate the filenames of notebook cells in coverage information.

If enabled, `ipytest.cov` will translate the temporary file names generated by ipykernel (e.g., `ipykernel_24768/3920661193.py`) to their cell names (e.g., `In[6]`).

Warning: this is an experimental feature and not subject to any stability guarantees.
<!-- minidoc -->

## Development
Set up a Python 3.10 virtual environment and install the requirements via

```bash
pip install -r requirements-dev.txt
pip install -e .
```
To execute the unit and integration tests of `ipytest`, run

```bash
python x.py test
python x.py integration
```
Before committing, execute `python x.py precommit` to update the documentation, format the code, and run tests.
To create a new release, execute:

```bash
python x.py release
```
## Related packages
`ipytest` is designed to enable running tests within an interactive notebook session. There are also other packages that aim to test full notebooks: these packages run the notebook and compare the output of cells to the output of previous runs. These packages include:

- nbval
- nbmake
- pytest-ipynb (no longer maintained)
- ...
While pytest itself is generally supported, support for pytest plugins depends very much on the plugin, and some plugins are known to not work. See `ipytest.cov` on how to use `ipytest` with pytest-cov.

Please create an issue if I missed a package or mischaracterized any package.
## License
The MIT License (MIT)
Copyright (c) 2015 - 2024 Christopher Prohm
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.