The book "The Way of Python. Black Belt in Development, Scaling, Testing, and Deployment"

Hello, Habr readers! "The Way of Python" will help you hone your professional skills and learn as much as possible about the capabilities of the most popular programming language. You will learn how to write efficient code, build the best programs in minimal time, and avoid common mistakes. It's time to get acquainted with multithreading and memoization, get expert advice on API and database design, and look inside Python to deepen your understanding of the language. You will start a project, manage versions, set up automated testing, and choose a programming style suited to the task at hand. Then you will move on to effective function declarations, picking suitable data structures and libraries, building fail-safe programs and packages, and optimizing programs down to the bytecode level.



Excerpt. Running tests in parallel



Running test suites can be time-consuming. This is common in large projects, where a test suite can take minutes to complete. By default, pytest runs tests sequentially, in a defined order.



Since most computers have multi-core processors, you can speed things up by splitting the tests across multiple cores.



For this, pytest has the pytest-xdist plugin, which can be installed with pip. The plugin extends the pytest command line with the --numprocesses option (abbreviated -n), which takes the number of cores to use as an argument. Running pytest -n 4 will run the test suite in four parallel processes, balancing the load across the available cores.



Because the number of cores varies from machine to machine, the plugin also accepts the auto keyword as a value. In this case, it detects the number of available cores automatically.



Creating objects used in tests with fixtures



In unit testing, you often need to run a set of standard instructions before and after a test, and those instructions involve certain components. For example, you may need an object representing the state of the application's configuration: it must be initialized before each test and reset to its default values afterward. Similarly, if a test depends on a temporary file, the file must be created before the test and deleted after it. Such components are called fixtures. They are set up before the test and torn down once it has finished.



In pytest, fixtures are declared as simple functions. The fixture function should return the desired object so that tests using the fixture can work with that object.



Here is an example of a simple fixture:



import pytest

@pytest.fixture
def database():
    return <some database connection>

def test_insert(database):
    database.insert(123)





The database fixture is automatically used by any test that lists database in its argument list. The test_insert() function receives the result of the database() function as its first argument and can use it however it likes. Used this way, fixtures save you from repeating the database-initialization code in every test.
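To make this concrete, here is a minimal runnable sketch of the same pattern, assuming an in-memory SQLite database stands in for <some database connection>; the make_connection() helper and the items table are illustrative, not part of the book's example:

```python
import sqlite3

import pytest


def make_connection():
    # Illustrative stand-in for <some database connection>:
    # an in-memory SQLite database with a single table.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE items (value INTEGER)")
    return conn


@pytest.fixture
def database():
    # The fixture simply returns the object tests will use.
    return make_connection()


def test_insert(database):
    database.execute("INSERT INTO items VALUES (?)", (123,))
    assert database.execute("SELECT value FROM items").fetchone() == (123,)
```

Running pytest against this file executes test_insert with a fresh connection built by the fixture.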



Another common need in test code is cleaning up after a fixture has done its work, for example, closing a database connection. Implementing the fixture as a generator adds this teardown functionality (Listing 6.5).



Listing 6.5. Cleaning up the fixture object



import pytest

@pytest.fixture
def database():
    db = <some database connection>
    yield db
    db.close()

def test_insert(database):
    database.insert(123)



Because we used the yield keyword and made database a generator, the code after the yield statement runs only when the test has finished. Here, that code closes the database connection at the end of the test.



Closing the database connection after every test may waste computing power, since subsequent tests could reuse an already open connection. In that case, you can pass the scope argument to the fixture decorator to specify the fixture's scope:



import pytest

@pytest.fixture(scope="module")
def database():
    db = <some database connection>
    yield db
    db.close()

def test_insert(database):
    database.insert(123)





By specifying scope="module", you initialize the fixture once for the entire module, and the same open database connection is shared by all test functions that request it.



You can also run common code before or after tests without listing the fixture as an argument of every test function, by marking it as automatically used with the autouse keyword. Passing autouse=True to pytest.fixture() ensures the fixture is invoked before every test in the module or class where it is defined:



import os
import pytest

@pytest.fixture(autouse=True)
def change_user_env():
    curuser = os.environ.get("USER")
    os.environ["USER"] = "foobar"
    yield
    os.environ["USER"] = curuser

def test_user():
    assert os.getenv("USER") == "foobar"

The change_user_env fixture here replaces the USER environment variable before each test in the module and restores its original value afterward, without any test having to request it explicitly.

Fixture parameterization

Fixtures can also be parameterized, which is useful when the same test must run against several variants of a component. Consider the Gnocchi project, a time series database. Gnocchi exposes an abstraction layer called the storage API: any Python class implementing this base abstraction can register itself as a driver. The application loads the configured driver and uses it for all storage operations, so the same tests should be run against every supported driver. The easy way to achieve this is with a parameterized fixture, which repeats each test that uses it once for every parameter. Listing 6.6 shows a fixture parameterized with two driver names: mysql and postgresql.

Listing 6.6. Running a test against several drivers

import pytest
import myapp

@pytest.fixture(params=["mysql", "postgresql"])
def database(request):
    d = myapp.driver(request.param)
    d.start()
    yield d
    d.stop()

def test_insert(database):
    database.insert("somedata")



The database fixture takes two different values as parameters: the names of the database drivers supported by the application. test_insert then runs twice, once against a MySQL database and once against PostgreSQL. This makes it easy to rerun the same test under different scenarios without adding new lines of code.
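The parameterization can also be sketched in a self-contained, runnable form; the FakeDriver class below is a hypothetical in-memory stand-in for myapp.driver:

```python
import pytest


class FakeDriver:
    # Hypothetical in-memory driver standing in for myapp.driver.
    def __init__(self, name):
        self.name = name
        self.running = False
        self.data = []

    def start(self):
        self.running = True

    def stop(self):
        self.running = False

    def insert(self, value):
        self.data.append(value)


@pytest.fixture(params=["mysql", "postgresql"])
def database(request):
    # request.param takes each value from params in turn, so every
    # test using this fixture runs once per driver name.
    d = FakeDriver(request.param)
    d.start()
    yield d
    d.stop()


def test_insert(database):
    database.insert("somedata")
    assert database.data == ["somedata"]
```

Running pytest on this file reports two test items, test_insert[mysql] and test_insert[postgresql], one per parameter.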



Controlled Tests with Dummy Objects



Dummy objects (also known as mocks or stubs) are objects that imitate the behavior of real application objects, but in a special, controlled way. They are most useful for creating environments that precisely describe the conditions of a test: you can replace every object except the one under test with a dummy, isolating it and building exactly the environment your code needs to be tested in.



One use case is testing an HTTP client. It is almost impossible (or rather, incredibly difficult) to stand up an HTTP server that can reproduce every situation and every possible response. HTTP clients are especially difficult to test for error scenarios.



The standard library's tool for creating dummy objects is the mock library. Starting with Python 3.3, mock was merged into the standard library as unittest.mock. You can therefore use the snippet below to maintain backward compatibility between Python 3.3 and earlier versions:



try:
    from unittest import mock
except ImportError:
    import mock





The mock library is very easy to use. Any attribute accessed on a mock.Mock object is created dynamically at runtime, and any value can be assigned to it. In Listing 6.7, mock is used to create a dummy object with a some_attribute attribute.

Listing 6.7. Accessing the mock.Mock attribute



>>> from unittest import mock
>>> m = mock.Mock()
>>> m.some_attribute = "hello world"
>>> m.some_attribute
'hello world'



You can also dynamically create methods on the dummy object, as in Listing 6.8, where we create a dummy method that always returns 42 and accepts arguments of any kind.

Listing 6.8. Creating a method for the mock.Mock dummy object



>>> from unittest import mock
>>> m = mock.Mock()
>>> m.some_method.return_value = 42
>>> m.some_method()
42
>>> m.some_method("with", "arguments")
42



In just a couple of lines, the mock.Mock object has a some_method() method that returns 42. It accepts arguments of any type, and no verification of the arguments is performed.



Dynamically created methods can also have (intentional) side effects. Rather than being boilerplate methods that simply return a value, they can be defined to execute useful code.



Listing 6.9 creates a dummy method with a side effect: it prints the string "hello world!".

Listing 6.9. Creating a method for a mock.Mock object with a side effect



>>> from unittest import mock
>>> m = mock.Mock()
>>> def print_hello():
...     print("hello world!")
...     return 43
...
❶ >>> m.some_method.side_effect = print_hello
>>> m.some_method()
hello world!
43
❷ >>> m.some_method.call_count
1



We assign an entire function to the some_method attribute ❶. Technically, this lets you implement more complex scenarios in a test, because you can embed whatever code the test requires in the dummy object and then pass that object to the function that expects it.



The call_count attribute ❷ is an easy way to check the number of times a method has been called.



The mock library follows the act-then-assert pattern: after the test runs, you verify that the actions you replaced with dummies were performed correctly. Listing 6.10 applies the assert_called family of methods to dummy objects to perform these checks.

Listing 6.10. Call verification methods



>>> from unittest import mock
>>> m = mock.Mock()
❶ >>> m.some_method('foo', 'bar')
<Mock name='mock.some_method()' id='26144272'>
❷ >>> m.some_method.assert_called_once_with('foo', 'bar')
>>> m.some_method.assert_called_once_with('foo', ❸mock.ANY)
>>> m.some_method.assert_called_once_with('foo', 'baz')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/dist-packages/mock.py", line 846, in assert_called_once_with
    return self.assert_called_with(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/mock.py", line 835, in assert_called_with
    raise AssertionError(msg)
AssertionError: Expected call: some_method('foo', 'baz')
Actual call: some_method('foo', 'bar')



We call the method with the arguments foo and bar ❶. An easy way to check calls made on dummy objects is to use the assert_called*() methods, such as assert_called_once_with() ❷. Pass these methods the values you expect the dummy method to have been called with; if the values actually used differ, mock raises an AssertionError. If you do not know in advance what arguments may be passed, use mock.ANY as the value ❸: it matches any argument passed to the dummy method.



The mock library can also be used to replace a function, method, or object from an external module. In Listing 6.11, we replace the os.unlink() function with a dummy function of our own.

Listing 6.11. Using mock.patch



>>> from unittest import mock
>>> import os
>>> def fake_os_unlink(path):
...     raise IOError("Testing!")
...
>>> with mock.patch('os.unlink', fake_os_unlink):
...     os.unlink('foobar')
...
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "<stdin>", line 2, in fake_os_unlink
IOError: Testing!



When used as a context manager, mock.patch() replaces the target with the function we choose, so that code executed inside the context uses the patched method. With mock.patch(), you can modify any part of external code, making it behave in whatever way is needed to test every condition in your application (Listing 6.12).

Listing 6.12. Using mock.patch () to test many behaviors



from unittest import mock

import pytest
import requests


class WhereIsPythonError(Exception):
    pass


❶ def is_python_still_a_programming_language():
    try:
        r = requests.get("http://python.org")
    except IOError:
        pass
    else:
        if r.status_code == 200:
            return 'Python is a programming language' in r.content
    raise WhereIsPythonError("Something bad happened")


def get_fake_get(status_code, content):
    m = mock.Mock()
    m.status_code = status_code
    m.content = content

    def fake_get(url):
        return m

    return fake_get


def raise_get(url):
    raise IOError("Unable to fetch url %s" % url)


❷ @mock.patch('requests.get', get_fake_get(
    200, 'Python is a programming language for sure'))
def test_python_is():
    assert is_python_still_a_programming_language() is True


@mock.patch('requests.get', get_fake_get(
    200, 'Python is no more a programming language'))
def test_python_is_not():
    assert is_python_still_a_programming_language() is False


@mock.patch('requests.get', get_fake_get(404, 'Whatever'))
def test_bad_status_code():
    with pytest.raises(WhereIsPythonError):
        is_python_still_a_programming_language()


@mock.patch('requests.get', raise_get)
def test_ioerror():
    with pytest.raises(WhereIsPythonError):
        is_python_still_a_programming_language()







Listing 6.12 implements a test suite for a function that checks whether the page at python.org contains the string "Python is a programming language" ❶. There is no negative scenario to test directly: no version of the page without that string exists, and we cannot change the page. With mock, however, we can cheat and change the request's behavior so that it returns a dummy response for a fictitious page that does not contain the string. This lets us test the negative scenario, in which python.org does not contain the string, and make sure the program handles that case correctly.



This example uses the decorator version of mock.patch(). The dummy object's behavior doesn't change; the decorator is simply more convenient when the dummy is used in the context of a single test function.



Using a dummy object makes it possible to simulate any problem: a server returning a 404 error, an I/O error, or a network latency issue. We can check that the code returns the correct values or raises the correct exception in every case, which guarantees the code's expected behavior.
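For instance, an error scenario can be simulated by assigning an exception as the dummy method's side_effect; the fetch method name and the socket.timeout choice here are illustrative:

```python
import socket
from unittest import mock

m = mock.Mock()
# Assigning an exception class or instance as side_effect makes
# the dummy raise it whenever the method is called.
m.fetch.side_effect = socket.timeout("simulated network delay")

try:
    m.fetch("http://python.org")
except socket.timeout as exc:
    print("handled:", exc)
```

The code under test can then be exercised against the failure without any real network being involved.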



Identify Untested Code with coverage



A great addition to unit testing is the coverage tool [code coverage is a measure used in testing that shows the percentage of a program's source code executed during the test run. - Ed.], which finds untested portions of code. It uses code analysis and tracing tools to identify which lines were executed; in unit testing, it can determine which parts of the code were exercised and which never ran at all. Writing tests is necessary work, and being able to find out which part of the code you forgot to cover makes the process more pleasant.



Install the coverage module via pip to make the coverage command available in your shell.



NOTE



The command may also be called python-coverage if the module was installed through your OS's package installer, as on Debian, for example.




Using coverage standalone is fairly simple: it shows you the parts of the program that never run and have become "dead weight", code that can be removed without changing the program's behavior. All of the test tools discussed earlier in this chapter integrate with coverage.



When using pytest, install the pytest-cov plugin via pip install pytest-cov and add a few options to generate a detailed report of untested code (Listing 6.13).

Listing 6.13. Using pytest and coverage



$ pytest --cov=gnocchiclient gnocchiclient/tests/unit
---------- coverage: platform darwin, python 3.6.4-final-0 -----------
Name                          Stmts   Miss Branch BrPart  Cover
----------------------------------------------------------------
gnocchiclient/__init__.py         0      0      0      0   100%
gnocchiclient/auth.py            51     23      6      0    49%
gnocchiclient/benchmark.py      175    175     36      0     0%
--snip--
----------------------------------------------------------------
TOTAL                          2040   1868    424      6     8%

=== passed in 5.00 seconds ===



The --cov option enables the coverage report at the end of the test run. You must pass the package name as its argument so that the plugin filters the report properly. The output lists the lines of code that were not executed, and therefore not tested. All that remains is to open your editor and write tests for that code.



The coverage module can do even better: it can generate clear reports in HTML format. Just add --cov-report=html, and HTML pages will appear in the htmlcov directory of the folder you ran the command from. Each page shows which parts of the source code were or were not run.



If you want to go further, use --cov-fail-under=COVER_MIN_PERCENTAGE, which makes the test suite fail if coverage drops below the minimum percentage. Although a high coverage percentage is a worthy goal, and the tooling is useful for insight into the state of test coverage, the percentage by itself is not very informative. Figure 6.1 shows an example coverage report with coverage percentages.



For example, 100% test coverage is a worthy goal, but it does not necessarily mean the code is fully tested: it only shows that every line of the program was executed, not that every condition was tested.
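A tiny illustration of that gap (the divide function is hypothetical, not from the book):

```python
def divide(a, b):
    return a / b  # one line, so a single call "covers" it


# This test yields 100% line coverage for divide()...
assert divide(4, 2) == 2.0

# ...yet the b == 0 condition was never exercised: divide(1, 0)
# would raise an unhandled ZeroDivisionError despite full coverage.
```

Branch coverage and deliberately testing error conditions close some of this gap, but no percentage substitutes for thinking through the inputs.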



Use coverage information to extend your test suite and write tests for code that currently never runs. This simplifies project maintenance and improves overall code quality.



[Figure 6.1. An example coverage report]








About the Author



Julien Danjou has been hacking on free software for about twenty years and has been developing Python programs for almost twelve. He currently leads a project team working on the OpenStack distributed cloud platform, which has the largest existing open source Python codebase: about two and a half million lines of code. Before building clouds, Julien created the awesome window manager and contributed to many projects, such as Debian and GNU Emacs.



About the Science Editor



Mike Driscoll has been programming in Python for over a decade. He has long written about Python on The Mouse vs. The Python blog. He is the author of several Python books: Python 101, Python Interviews, and ReportLab: PDF Processing with Python. You can find Mike on Twitter and GitHub: @driscollis.



» More details about the book can be found on the publisher's website

» Contents

» Excerpt



25% discount coupon for Habr readers: Python



When you purchase the paper version of the book, an e-book is sent to your e-mail.


