When writing automated tests, we aim for fast and reliable tests. Often we need test doubles to achieve that. In Python, we quite often reach for MagicMock to replace objects during tests. It comes in very handy, but its usage can also backfire quickly, mostly because we can call a MagicMock in almost any way imaginable and it still won't produce an error. Let's take a look at how to improve on that using create_autospec.

Standard library to the rescue

As on many other occasions, the Python standard library has us covered. Inside the unittest.mock module, we can find the create_autospec function. It takes an object as a spec and creates a mock with the same attributes as the provided spec. In other words, it creates a mock instance with the same properties and methods as the provided object. For example, if you provide a class Car with methods start and stop, you'll get a mock instance with those two methods defined. Now, if you take that instance and try to call a foo method, an exception will be raised. So how does this help us when testing? Let's take a look!
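Here is a minimal sketch of that behavior, using the Car class from the paragraph above (the method bodies are placeholders):

```python
from unittest.mock import create_autospec

class Car:
    def start(self):
        ...

    def stop(self):
        ...

# instance=True gives us a mock of a Car *instance*, not of the class itself
car = create_autospec(Car, instance=True)

car.start()  # fine - Car defines start
car.stop()   # fine - Car defines stop

try:
    car.foo()  # Car has no foo, so the autospecced mock rejects it
except AttributeError as exc:
    print(exc)  # Mock object has no attribute 'foo'
```

A plain MagicMock() would happily accept the `foo()` call and return yet another mock; the autospecced version fails loudly instead.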

Stubbing with create_autospec

We know a couple of different test doubles – dummies, fakes, stubs, spies, and mocks. (You can learn more here). create_autospec is very useful when replacing objects with stubs. That's when we use predefined responses to certain calls. It's useful when we want to replace a read-only dependency – for example, an ML model that predicts sentiment. Using create_autospec, we can make sure our code integrates the model and actually calls it correctly. With only MagicMock, we could make a typo in the method name or provide the wrong arguments (e.g., one argument too few), and the test would still pass. With create_autospec, an exception is raised in both cases. Some of these errors can also be caught by type checkers. Anyhow, no one likes a situation where all the tests are passing but the code is actually broken, right? This happens quite regularly when you combine MagicMock with auto-refactoring. When using create_autospec, we'll have a failing test if the implementation uses an object incorrectly. See the example below:
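A sketch of stubbing with create_autospec. The article only mentions "an ML model that's predicting sentiment", so the class and function names here (SentimentModel, predict, label_review) are assumptions for illustration:

```python
from unittest.mock import create_autospec

class SentimentModel:
    def predict(self, text: str) -> str:
        ...  # imagine an expensive real model here

def label_review(model: SentimentModel, review: str) -> str:
    # the code under test - it must call the model correctly
    return f"review is {model.predict(review)}"

stub = create_autospec(SentimentModel, instance=True)
stub.predict.return_value = "positive"  # predefined response -> a stub

assert label_review(stub, "great product") == "review is positive"

# A call that doesn't match the real signature fails loudly:
try:
    stub.predict("too", "many", "args")
except TypeError as exc:
    print(exc)  # too many positional arguments
```

If `label_review` made a typo like `model.predcit(review)`, the autospecced stub would raise AttributeError and the test would fail, exactly as we want.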

Spying with create_autospec

The usage is quite similar when we use spies instead of stubs. That's when we assert that something was called in a certain way. It comes in handy when sending data somewhere outside our current process – for example, when we need to send an email or sync user data to a CRM. As with stubs, create_autospec makes sure tests fail if the usage of the mocked object doesn't match its real interface. See the example below:
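A sketch of spying with create_autospec. The email-sending dependency is hypothetical – EmailClient, send, and notify_signup are assumed names, not a real API:

```python
from unittest.mock import create_autospec

class EmailClient:
    def send(self, to: str, subject: str, body: str) -> None:
        ...  # would talk to an SMTP server in production

def notify_signup(client: EmailClient, email: str) -> None:
    # the code under test - it should send exactly one welcome email
    client.send(to=email, subject="Welcome!", body="Thanks for signing up.")

spy = create_autospec(EmailClient, instance=True)
notify_signup(spy, "alice@example.com")

# Spying: assert the outbound call happened with the expected arguments
spy.send.assert_called_once_with(
    to="alice@example.com", subject="Welcome!", body="Thanks for signing up."
)

# Bonus: a typo in the assertion method itself also fails, instead of
# silently returning a new mock the way a plain MagicMock would:
try:
    spy.send.assert_called_once_wiht  # note the typo
except AttributeError as exc:
    print(exc)
```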

Conclusion

Many times, small improvements can make a significant difference in how we perceive something. It's the same with automated testing. If we keep running into situations where tests pass but deployments are broken, we start to lose trust in our test suite. Using create_autospec is another step towards a test suite we can trust. It won't solve every problem, but it can help greatly.

Happy Python testing!
