208: Tests with no assert statements
There's an old joke about code coverage: the easiest way to get 100% code coverage with all passing tests is to write fine-grained unit tests with no assert statements. I don't recommend it. It's just a joke. It's not true.
Brian: That's actually not easy. It's a lot of work. You'll end up with a lot of useless test code, and fine-grained unit tests are the worst kind of test code. They tend to break during legitimate refactors, and they muck up one of the great features of coverage tools: telling you which code you might be able to delete. And, finally, your test suite can still fail even with no assert statements.
Brian: Let's take any test that calls, say, a library function in your code under test. If that code throws an exception, not just assertion errors, but any exception, then the test will fail. So if you really wanna make sure that your test suite never fails and cannot fail, you'll want to wrap these calls in try/except/pass blocks, and that's even more work. Not easy. Also, we're just getting silly at this point.
Brian: So let's abandon this hack and look at reasonable uses for test cases with no assert statements. That's what this episode is about. I've written a lot of tests that have no assert statements. They're usually the first kind of tests I write for a new system or a new test suite. Let's go through some of the types of tests that have no asserts.
Brian: First off, there's the no-op test. It doesn't even call any code under test. What would it look like? It would probably just be a single file, test_something.py, maybe test_noop.py, with a single function, test_noop, that has either an ellipsis or a pass in it. It doesn't really do anything at all.
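For reference, the whole thing might be nothing more than this; the file and function names are just placeholders:

    # test_noop.py -- the entire file
    def test_noop():
        pass  # or "..." -- calls nothing, asserts nothing, just runs and passes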
Brian: What good is this test? Well, it's great for testing big-picture stuff like CI workflows, merge acceptance, deployment triggering, etcetera. Things where you have to call your test suite and you wanna make sure everything passes, just to make sure all the extra infrastructure around it works. Associated with the no-op test is an always-fails test, which is often the second test I write, to make sure the appropriate steps are gated, like blocking merges or deployments. But that would probably include an assert statement or an exception.
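As a sketch, that second test can be just as small; something like this, deleted once you've confirmed the gating actually blocks things:

    # test_always_fails.py -- temporary, only there to prove that a red
    # suite really does block the merge or deployment step
    def test_always_fails():
        assert False, "intentional failure to verify gating"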
Brian: So it doesn't really count as a test with no asserts, but the no-op test does. Next, we have experiments, and I'll also include exploratory manual test setup. These kinds of tests are not gonna be part of my regression test suite, but they're still useful. They're useful ways to use test frameworks. For instance, I usually have several things I'm trying to debug through an API, or I'm just seeing how they work.
Brian: And instead of trying to do that in a REPL or on the command line, or writing a little script, I'll just write a little test function. So I usually have at least one experiments test file, and each of the little tests inside can call an API function or query it. You can have a bunch of them in one file, and they can be run individually. In PyCharm and VS Code, for instance, you can run one test individually, and it's a lot easier than writing a lot of scripts. You can write one file with a whole bunch of little tiny scripts in it.
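An experiments file along these lines is all it takes; my_api and its functions are made-up placeholders for whatever you happen to be poking at:

    # test_experiments.py -- a scratchpad, not part of the regression suite
    import my_api  # placeholder for the library or service being explored

    def test_try_login():
        print(my_api.login("demo-user"))

    def test_what_does_search_return():
        print(my_api.search("widgets", limit=5))

Run one of them on its own with something like pytest test_experiments.py::test_try_login -s to see the printout.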
Brian: These are awesome. How about tests that legitimately are part of a test suite? There's a test I've written frequently, and I've named it different things over the years, but essentially it's something that calls all of the API functions and tries to go through the range of reasonable parameters. I might split it up into one test function for each API function, but I might also have one function that takes a parametrized list and goes through and calls all of the API functions somehow. What good would this be?
Brian: Well, remember how we were trying to avoid failures? The easiest way is to not call anything in your system, but you don't get code coverage that way. If you do call your system, you have to put a try/except/pass block around it, or else something might fail inside your code. It's the same thing we're looking at here. I'd like to call all of the API functions and make sure they don't blow up. We're not looking for correct behavior.
Brian: We're just making sure that the code under test doesn't raise any exceptions or crash. Plus, these act as change detector tests, but in a good way. If someone deletes part of the API without letting the rest of the team know, this test will fail.
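A minimal sketch of the parametrized flavor, again with a made-up my_api standing in for the real interface:

    # test_api_smoke.py -- "my_api" and the argument lists are placeholders
    import pytest
    import my_api

    CALLS = [
        (my_api.login, ("demo-user",)),
        (my_api.search, ("widgets",)),
        (my_api.logout, ()),
    ]

    @pytest.mark.parametrize("func, args", CALLS)
    def test_call_does_not_blow_up(func, args):
        func(*args)  # no assert: passing just means the call didn't raise

If someone removes my_api.search, this file stops collecting, which is exactly the alert you want.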
Brian: Related to this is an API change test. If your system has a way to programmatically dump the API schema, this can be an awesome change detector test. You can have a test that just dumps the schema to some file in a test directory and then compares it on future test runs. Maybe add a command-line flag to pytest so that you can dump the schema and save it to the file when the flag is passed, but most of the time it just does the compare. Okay, so this can fail, and it's probably gonna fail with an equality assert. So maybe this doesn't really count as a no-assert test, but it's also still not testing behavior.
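One way this could look, assuming a hypothetical my_api.api_schema() that returns something JSON-serializable, plus a made-up --update-schema flag:

    # conftest.py -- register the custom flag
    def pytest_addoption(parser):
        parser.addoption("--update-schema", action="store_true",
                         help="save the current schema instead of just comparing")

    # test_api_schema.py
    import json
    from pathlib import Path
    import my_api  # placeholder

    SAVED = Path(__file__).parent / "api_schema.json"

    def test_api_schema_unchanged(request):
        current = json.dumps(my_api.api_schema(), indent=2, sort_keys=True)
        if request.config.getoption("--update-schema"):
            SAVED.write_text(current)  # run once with the flag to create or refresh the baseline
        assert current == SAVED.read_text()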
Brian: So I'm not sure whether to include this in the category or not. It seems dumb, actually, to just dump the schema and make sure it didn't change, but you would not believe how many times this has been useful to me, especially with large distributed teams where communication can occasionally break down. A side note about change detector tests: normally, a change detector test is considered a bad thing to have.
Brian: Having all of your tests be change detector tests is a horrible idea, and fine-grained unit tests are often like that. They test the implementation. We don't wanna test the implementation; we wanna test the behavior. But so far, I've described a couple of change detector tests: one that checks the visible surface of the API, that I can call each API function with the same kinds of parameters, and the test on the schema.
Brian: Those are two change detector tests. But they're detecting high-level interface changes or behavior changes. These aren't really implementation details. They're at the API level, and I think it's fine to have a change detector test at that level. These changes shouldn't be prohibited.
Brian: We're not putting these tests in place to make sure nothing ever changes. What we're doing is utilizing the test system to alert us that a change has happened, in case some communication failed. Another type of test that I often add right away and always run with my test suite is some kind of system detection or version reporting, something simple that hits the system. Let's think about this first in terms of a library or package. You'd have a test in every test suite that just reports the version of the package or library or whatever.
Brian: It actually could be a no-op function, with the version reporting inside an autouse, session-scoped fixture. So it runs before any test is run, and the initial no-op function is just there to make sure it works; eventually, we don't need the no-op function, and whatever test runs first triggers the version report. So what use would that be? Well, it's nice to have that at the top of the test log, so that when you're looking at the log you can see what version you're testing.
Brian: And within this fixture, you can use capsys to turn off output capture so that you can see the printout. Now this is useful for packages and libraries, I imagine. But, really, I use it all the time for testing devices, remote systems, etcetera. Anything that the test suite is interacting with that could possibly fail or be unreachable. Then print something about it: the device ID, the host name, or the version of the software running on the host or the device. Something that says I can reach the device or the system, and it's in the state I expect it to be in.
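A rough sketch of that kind of fixture; everything about my_system here is invented, and the point is just the shape:

    # conftest.py -- "my_system" is a placeholder for the package, device,
    # or remote system the suite talks to
    import pytest
    import my_system

    @pytest.fixture(scope="session", autouse=True)
    def report_version():
        # Runs before any test. Use "pytest -s" (or capsys.disabled() inside
        # a test) if you want the printout visible while capture is on.
        try:
            version = my_system.get_version()  # hypothetical call
        except Exception:
            pytest.exit("cannot reach the system under test")
        print(f"\nTesting against my_system version {version}")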
Brian: This really helps to have at the beginning of a test suite report. And within the fixture itself, we can turn off the rest of the suite if we need to, so that we're not testing against a system we can't even reach. Another type of test that doesn't have an assert is something that just tries out the API. Let's go back to the idea of experiments. One of the goals of test-driven development and other developer-centric testing philosophies is that the tests are the first place you explore using an API.
Brian: This has huge value. Don't dismiss it. And no assert is necessary to explore how the API works and to put together API workflows. Of course, why not just add this to the test suite? Try out the API, set stuff up, ask for results, assert your expectations.
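The progression might look something like this, with a hypothetical my_api again; the second test is just the first one plus an expectation:

    import my_api  # placeholder

    # Exploration: no assert, just making sure the workflow runs end to end.
    def test_create_and_fetch():
        item = my_api.create_item("widget")
        print(my_api.fetch_item(item.id))

    # The natural next step: the same workflow, plus an assert on what you expect.
    def test_fetch_returns_what_was_created():
        item = my_api.create_item("widget")
        assert my_api.fetch_item(item.id).name == "widget"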
Brian: These can be useful just to make sure your system is working in a rudimentary way, but they're also a useful way to document the system. You can learn how the API works and leave the tests in place as documentation for future developers, probably you. One of the great things about this is that you've already sort of played with it, using tests as experiments and a playground, and then just going ahead and adding an assert at the end for what you expect to be going on is a great way to ease yourself into writing actual real tests. It's an easy transition. Adding an assert is just a natural progression from what you've been doing. The other great thing about progressing in this way is there's no blank page.
Brian: You already have a bunch of tests written. You're just adding to your test suite. That may be the best part of tests without an assert: they get you past the blank page, the zero-to-one-to-many problem. And now you're in the right mindset for test writing.
Brian: What tests do you need to make sure the system behaves? What tests do you need to help you refactor without fear? What tests do you need to help you have confidence that your system is working? Now we're having fun, and this is where testing really gets powerful. Now let's turn on coverage.
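With the pytest-cov plugin, for instance, that's something like running pytest --cov=my_package --cov-report=term-missing, where my_package is a stand-in for whatever you're measuring; the term-missing report lists the lines the suite never touched.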
Brian: What are we missing? Can we hit that through the API? If not, is it important? Can we delete it and simplify the code? Should we leave it in?
Brian: Should we check those behaviors to make sure they work? Speaking of coverage, I just released the test coverage chapter of The Complete pytest Course. You can get that chapter if you grab the full Complete pytest Course, or just Part 2, the working with projects section. This chapter covers integrating pytest and coverage and looking at reports to find missing test cases. Do you regularly write tests without asserts?
Brian: And why do you do that? Did I miss some no-assert cases that you like to use? Let me know. You can find my contact info at pythontest.com/contact.