pytest-cov: The pytest plugin for measuring coverage

Let's get started with pytest-cov, the number one most downloaded pytest plugin.

pytest-cov is a pytest plugin that helps produce coverage reports using Coverage.py. So let's take a pause and quickly talk about what Coverage.py is. Coverage.py is what you get when you run pip install coverage. It is a tool for measuring code coverage of Python programs. It monitors your program, noting which parts of the code have been executed, then analyzes the source to identify code that could have been executed but was not. So why do you need coverage? If I want to make sure my test code exercises all of my source code, I can run coverage while running the tests to see if I missed something.

Also, coverage reports a total coverage percentage, but more importantly, it reports per file, and that can show you which lines of code are missed, by line number. And it can do branch coverage, so you can make sure that each branch decision possibility is hit. The configuration options are amazing too, so even if you're using a vendored-in framework or library, which you don't care about coverage for, you can exclude those parts and just measure your code. There are also a lot of reporting options. For CI and quick local checks, I usually run coverage with a text-based report. And even within that, if I'm setting it up with tox, I often run coverage on the latest version of Python only, not on all of the versions. Of course, if my source code does some branches based on Python version, then I'll have to run it on multiple versions, and there are ways to combine coverage reports if you do have Python-version switches in your code. If the report shows more than a few lines uncovered, then I generally don't try to figure it out from the terminal report. I lean on the HTML report. You can turn on the HTML report just by running coverage html, and this works even if the initial data was generated with the pytest-cov plugin. The HTML report is so much easier to look at.
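To make that concrete, here's a minimal sketch of a Coverage.py configuration along those lines. The src/vendor path is just an assumption standing in for whatever vendored-in code you want to skip:

    # .coveragerc -- a minimal sketch; adjust paths to your project
    [run]
    branch = True        # measure branch coverage, not just line coverage
    omit =
        src/vendor/*     # hypothetical vendored-in code we don't want measured

With data collected, coverage report prints the terminal summary, and coverage html writes the browsable report to an htmlcov/ directory.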

And you can see what's missing and what branches are missing, et cetera, really easily with the HTML report.

Do you need 100% coverage? This always comes up when I talk about coverage. For me, yes, I want 100% coverage, with an asterisk. What's the asterisk about?

That's because I'm going to write high-level tests to do system tests through the API if I can. Then I look at coverage reports.

And then I decide: does the code that's not covered need to be tested? Can it be removed? Should I test the new code at a high level, or put subsystem, module, or functional level tests in place? The point is, I need to think about it. And if the code doesn't need to be covered, and it's not obvious why not, then I document that. I'll put something like pragma: no cover inline, or list the files that don't need to be covered in the configuration. The point is that once my decision is made, I encode it so that future coverage reports just show 100%, or whatever, and I can believe that. I don't have to think, yeah, 95% is fine because there's some stuff that I know isn't tested, but that's okay. Don't do that; it's just confusing to keep straight. So just encode those decisions so that 100% means 100% of the code that I want to be covered is covered.
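As a sketch of what encoding those decisions looks like, you can mark a line inline, and the exclusion pattern lives in the configuration (the main function here is hypothetical):

    # this entry point is deliberately excluded from coverage
    def main():  # pragma: no cover
        run_cli()

    # .coveragerc
    [report]
    exclude_lines =
        pragma: no cover   # re-add the default pattern when overriding this list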

Should you run test coverage over your test code also, not just your source code? Yes. Actually, I even brought this up on purpose because pytest is often used to test non-Python things. You can use pytest to test anything that you can access with Python, which is just about anything. In my day job, I use it to test RF communication test systems through their external API.

And a lot of people use pytest to drive Playwright or Selenium and other frameworks for web apps and web API testing, even if the thing that they're testing isn't Python. And even in those cases, coverage is helpful to run over your test code. Why would you want to run it just on your test code? Because test code is code, and it's weird. Why is it weird? Because test code consists of a lot of test functions and test classes, and we don't really call these things ourselves.

pytest does. So sometimes we do a copy, paste, modify of a test function to make a new test function, but then we forget to change the name. One of those test functions is just not going to run; it's going to be ignored, and we're not going to see it unless we run coverage. Or perhaps we put some logic in some test code, and some of the paths in that logic are getting hit by the test suite, but not all of them. We don't want that: if you have switches and logic in your test code, you want all of those paths to be covered. These things happen, and since running coverage on your test code is such an easy fix, why not just do it? So how do you run it? To run coverage on your test code directly, you can run coverage run --source= followed by wherever your source code and test code live, then -m pytest, and it'll grab your source code and your test code and run coverage on both. Then add whatever pytest flags you've got. This is slightly different behavior than just running pytest, because it's kind of like running python -m pytest, and that puts the current directory in the search path. If putting the current directory in your search path is not a problem for you, or you just don't know what I'm talking about with that, then don't worry about it.
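Spelled out, that looks like this (src and tests are assumptions; substitute your own directory layout):

    coverage run --source=src,tests -m pytest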

But there's no report yet; that just runs it. Now you have to run coverage report for a terminal report, or coverage html for the HTML report. Or, like me, you start with the terminal report and do the HTML report if necessary.
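So the reporting step is just:

    coverage report   # terminal report
    coverage html     # HTML report, written to htmlcov/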

It's not too hard, but it is two steps. And that's partly why I like the plugin, pytest-cov. Yep, we're getting back to the plugin we wanted to talk about in the first place.

pytest-cov is great. For one, instead of running coverage run -m pytest and then coverage report, you can just run pytest and pass it whatever you want to cover, like --cov=src and --cov=tests. It's really not that much less typing, but it feels like less typing, to me at least. And I can put those command line flags in the configuration file if I want. Actually, I'm probably using tox, so I'll probably have all of that in the tox file anyway. So why do I care about running the plugin and not just running coverage itself? I don't know, it just seems easier.

And it's not just for convenience, though; the pytest-cov plugin also brings a lot of other stuff. A few of the other things it does: it deals with subprocess support, so you can fork stuff in a subprocess and that gets covered also, with no extra work from you. Same goes with xdist. pytest-xdist is another plugin that allows you to run tests in parallel, and if you do that with plain coverage, you have to combine the output yourself. pytest-cov just does that automatically; it combines the reports correctly, so you get the final report and it's all correct if you're using xdist. Another reason why people love this plugin. Also, pytest-cov doesn't add the current directory to the search path, so if you care about that, use the plugin. And there's a couple more reasons why I love it.
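As a sketch, with the same assumed src and tests layout, the plugin invocation and the equivalent configuration look like this (term-missing adds missed line numbers to the terminal report):

    pytest --cov=src --cov=tests --cov-report=term-missing

    # or in pyproject.toml, so a plain `pytest` picks the flags up:
    [tool.pytest.ini_options]
    addopts = "--cov=src --cov=tests --cov-report=term-missing"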

You can set a --cov-fail-under flag. Set it to 100, or whatever you want, but let's say I set it to 100: that means any test run that falls under 100% coverage will fail the suite. None of the tests will fail, but the suite will fail because it didn't hit 100%. That's super awesome. And you can set it to 95 or whatever you want. So far, these might seem like minor improvements, but they're not. All of these extra things build up, and they save people time and headache.
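For example, with the same assumed layout:

    pytest --cov=src --cov-fail-under=100   # exits non-zero if total coverage is below 100%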

And that's a decent enough reason why pytest-cov is the number one downloaded plugin. But then there's another thing: contexts.

Coverage.py recently-ish, I don't know, within the last year or two, added context support. What does that mean? It means that you can figure out, for each line of code that's covered, which test it came from. So you can run pytest with just some of your tests, maybe one test or one class, and then see all the stuff it hits, and make sure it hits the source code that you think it's going to cover. Or you can run your whole suite, and for each line of code you can see how many tests are hitting it. It's kind of fun, and when you're debugging what's going wrong, it can be really valuable. It's a little fidgety to get working, but not with pytest-cov. With this plugin, it's super easy to get set up. And once you've set it up, what you get is an HTML report with a column on the right for each line of code: you go over to the right, and there's a little dropdown where you can see all the tests that hit that line of code. It's super cool. I don't use it all the time, but when I need it, I really need it.
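A sketch of the setup, with the same assumed layout: record each test as a context with pytest-cov, then tell Coverage.py's HTML report to show contexts:

    pytest --cov=src --cov-context=test

    # .coveragerc
    [html]
    show_contexts = True

Then coverage html (or --cov-report=html) produces the report with the per-line dropdown of tests.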

Another reason why pytest-cov is fantastic, so definitely check it out. There's also a short tutorial on how to get the context thing set up. It's not that difficult, and it's in the pytest-cov documentation,

so I'll put a link to that in the show notes. So who do we thank for all of this? Coverage.py is maintained by Ned Batchelder. So thank you, Ned. Awesome. He's been supporting it for a long time. pytest-cov is maintained by Ionel Cristian Mărieș. Now, I probably got that last name wrong, but I really want to thank Ionel, that's I-O-N-E-L, for putting his pronunciation hint on his about page somewhere. I think it's on his blog. Anyway, cool. pytest-cov and Coverage.py, love both of them. Also, pytest-cov is maintained by other people, not just Ionel; lots of people have added to it over the years, and it's part of the pytest-dev group. Anyway, links are in the show notes. Check these out, and thanks for listening.

Creators and Guests

Brian Okken (Host)
Software Engineer, also on Python Bytes and Python People podcasts