When running unit tests with pytest, you can use the
--durations flag to measure the execution time of your slowest tests.
Take a look at the following test file:
# speed_tests.py
import time


def test_fast():
    x = 2 + 2
    assert x == 4


def test_slow():
    time.sleep(1)


def test_superslow():
    time.sleep(3)
--durations=n to get the execution time for the slowest n unit tests
pytest --durations=1 speed_tests.py
This will print the execution time of the slowest test. The output will look something like this:
======================= slowest 1 test durations ========================
3.00s call     speed_tests.py::test_superslow
======================= 3 passed in 4.02 seconds ========================
If your test suite is much larger, you can print any number of tests. For instance:
pytest --durations=100 many_tests.py
--durations=0 to get the execution time for all unit tests
If you want to print the execution time for every unit test, just use
--durations=0. You can expect output like this:
======================== slowest test durations =========================
3.00s call     speed_tests.py::test_superslow
1.00s call     speed_tests.py::test_slow

(0.00 durations hidden.  Use -vv to show these durations.)
======================= 3 passed in 4.01 seconds ========================
Note that one of the tests is hidden because it ran too quickly: by default, pytest hides durations under 0.005 seconds unless you pass -vv.
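If you want these timings on every run rather than typing the flag each time, you can bake it into your pytest configuration via addopts. A minimal sketch, assuming a pytest.ini at the project root:

```ini
# pytest.ini (project root)
[pytest]
# Always report the ten slowest tests after each run
addopts = --durations=10
```

With this in place, plain `pytest` behaves as if you had passed --durations=10 on the command line.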
Measuring execution time for Python unit tests is pretty easy with pytest. Of course, if you find any slow-running tests, you'll need to spend time figuring out why they're slow and fixing them if possible. Some of the most common reasons for slow tests are unintended network requests and un-mocked time delays, but there are plenty of other possibilities.
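As a quick illustration of the un-mocked-delay case, here is a minimal sketch of patching out a time.sleep call with the standard library's unittest.mock so a test no longer pays the real wait. The slow_operation function is a hypothetical stand-in for whatever slow code your test exercises:

```python
from unittest import mock
import time


def slow_operation():
    # Hypothetical production code containing a real delay
    time.sleep(1)
    return "done"


def test_slow_operation_fast():
    # Patch time.sleep so the test no longer waits a full second
    with mock.patch("time.sleep") as fake_sleep:
        result = slow_operation()
    # The delay was requested but never actually happened
    fake_sleep.assert_called_once_with(1)
    assert result == "done"
```

Re-running pytest --durations on a suite after patching delays like this usually moves such tests out of the "slowest" list entirely.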
If you've used any other great tools for solving this issue or if you enjoyed this guide, let me know in the comments below!