Parameterization from file in py.test

In the field of automated testing you can find many different tools; pytest is one of the most popular solutions for writing auto-tests in Python.







Having gone through many resources related to pytest and having studied the documentation on the project's official website, I could not find a direct description of how to solve one of the most common tasks: running tests with test data stored in a separate file. In other words, loading parameters into test functions from a file, or parameterizing directly from a file. This procedure is not described in detail anywhere, and the feature gets only a single line of mention in the pytest documentation.







In this article I will talk about my solution to this problem.










Task



The main task is to feed test cases, in the form of test_input and expected_result parameters, into each individual test function, taking them from a file whose name matches the name of that test function.
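As a sketch of this convention (the helper below is purely illustrative and not part of the final solution): a function test_multiplication takes its cases from test_multiplication.yaml lying next to the test module.

    # Hypothetical helper expressing the naming convention:
    # the data file is "<test function name>.yaml"
    def data_file_for(test_function_name: str) -> str:
        return test_function_name + ".yaml"  # test_multiplication -> test_multiplication.yaml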







Additional tasks:

- store the data in a human-readable format;
- give individual test cases convenient identifiers;
- allow marking individual test cases (skip, xfail, custom marks).









Tools



In this article I use Python 3 (2.7 will also do), pyyaml, and pytest (version 5+ for Python 3, or 4.6 for Python 2.7), without any third-party plugins. In addition, the standard os library will be used.







The file from which we will take test cases needs to be structured with a markup language that is easy for a human to read. In my case YAML was chosen, because it also solves the additional task of having a human-readable format. Which markup language you actually use for the data files depends on the requirements of your project.
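As a quick illustration (the data here is made up), this is roughly what pyyaml gives us to work with:

    # A minimal sketch: parsing a YAML document with pyyaml
    import yaml

    doc = "- [1, 2]\n- [2, 4]\n"
    cases = yaml.full_load(doc)
    print(cases)  # [[1, 2], [2, 4]]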










Implementation



Since convention is the main pillar of the programming universe, we will have to introduce a few conventions of our own for this solution.







Hook



To begin with, this solution relies on the pytest_generate_tests hook (see the pytest documentation), which is called at the test-collection stage, and on its metafunc argument, which lets us parameterize the test function. At this point pytest iterates over each test function and runs the generation code below for it.
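As a minimal sketch (assuming it lives in conftest.py, as the full version later in the article does), the hook looks like this before any logic is added:

    # conftest.py -- a minimal sketch of the hook, before filtering and file
    # parsing are added. The metafunc attributes used later in the article:
    #   metafunc.function      - the test function being collected
    #   metafunc.module        - the module that contains it
    #   metafunc.fixturenames  - the argument/fixture names it requests
    def pytest_generate_tests(metafunc):
        pass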







Arguments



You must define an exhaustive list of parameters for the test functions. In my case these are test_input, a dictionary, and expected_result, which can be of any type (most often a string or an integer). We need these parameter names for the call to metafunc.parametrize(...).
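As an illustration (the values are made up), a list of test cases for such a signature could look like this:

    # Hypothetical test cases: each tuple is (test_input, expected_result)
    test_cases = [
        ({"a": 1, "b": 2}, 3),
        ({"a": 2, "b": 2}, 4),
    ]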







Parameterization



This call completely mirrors the @pytest.mark.parametrize decorator: its first argument is a string listing the test function's arguments (in our case "test_input, expected_result"), and its second is a list of data over which it iterates to create the test cases (for example, [(1, 2), (2, 4), (3, 6)]).







In practice it looks like this:







    @pytest.mark.parametrize("test_input, expected_result", [(1, 2), (2, 4), (3, 6)])
    def test_multiplication(test_input, expected_result):
        assert test_input * 2 == expected_result





In our case we specify the same thing ahead of time, inside the hook:







    # ...
    # test_cases has the same shape as above, e.g. [(1, 2), (2, 4), (3, 6)]
    return metafunc.parametrize("test_input, expected_result", test_cases)





Filtering



We also need to separate the test functions that should load their data from a file from those that use static or dynamically generated data. We will apply this filtering before parsing anything from the file.
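For example (the test functions below are illustrative), file-driven tests could be distinguished from ordinary ones with a custom yaml mark:

    # Hypothetical test module: only the marked test should be fed from a file
    import pytest

    @pytest.mark.yaml
    def test_from_file(test_input, expected_result):
        assert test_input * 2 == expected_result

    @pytest.mark.parametrize("test_input, expected_result", [(1, 2)])
    def test_static(test_input, expected_result):
        assert test_input * 2 == expected_result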







The filter itself can be anything you like, for example:









    # The test function carries no marks at all - skip it
    if not hasattr(metafunc.function, 'pytestmark'):
        return

    # Collect the names of all marks applied to the function
    mark_names = [mark.name for mark in metafunc.function.pytestmark]

    # Skip the function if it is not marked with the yaml mark
    if 'yaml' not in mark_names:
        return





Alternatively, the same filter can be implemented like this:







    from _pytest.mark.structures import Mark

    # Skip the function unless it carries exactly this Mark object
    if Mark(name='yaml', args=(), kwargs={}) not in metafunc.function.pytestmark:
        return







Or, finally, you can filter by the parameters that the test function requests:

    # Skip the function if it does not request the test_input parameter
    if 'test_input' not in metafunc.fixturenames:
        return





This option suited me the most.










Result



All that remains is to add the part where we parse the data from the file. This is not difficult with yaml (nor with json, xml, etc.), so let's put it all together.







    # conftest.py
    import os
    import yaml
    import pytest


    def pytest_generate_tests(metafunc):
        # Skip test functions that do not request the test_input parameter
        if 'test_input' not in metafunc.fixturenames:
            return

        # Directory of the module that contains the test function
        dir_path = os.path.dirname(os.path.abspath(metafunc.module.__file__))
        # The data file is named after the test function
        file_path = os.path.join(dir_path, metafunc.function.__name__ + '.yaml')

        # Load the test cases
        with open(file_path) as f:
            test_cases = yaml.full_load(f)

        # Make sure something was actually loaded
        if not test_cases:
            raise ValueError("Test cases not loaded")

        return metafunc.parametrize("test_input, expected_result", test_cases)





We write a test script like this:







    # test_script.py
    import pytest


    def test_multiplication(test_input, expected_result):
        assert test_input * 2 == expected_result





A data file:







    # test_multiplication.yaml
    - !!python/tuple [1,2]
    - !!python/tuple [1,3]
    - !!python/tuple [1,5]
    - !!python/tuple [2,4]
    - !!python/tuple [3,4]
    - !!python/tuple [5,4]





We get the following list of test cases:







    pytest /test_script.py --collect-only
    ======================== test session starts ========================
    platform linux -- Python 3.7.4, pytest-5.2.1, py-1.8.0, pluggy-0.13.0
    rootdir: /pytest_habr
    collected 6 items
    <Module test_script.py>
      <Function test_multiplication[1-2]>
      <Function test_multiplication[1-3]>
      <Function test_multiplication[1-5]>
      <Function test_multiplication[2-4]>
      <Function test_multiplication[3-4]>
      <Function test_multiplication[5-4]>

    ======================== no tests ran in 0.04s ========================





And running the tests gives this result: 4 failed, 2 passed, 1 warnings in 0.11s (only the (1, 2) and (2, 4) cases satisfy test_input * 2 == expected_result).












Additional tasks



The article could end here, but to make it more interesting I will add more convenient test-case identifiers, a different way of parsing the data, and the ability to mark each individual test case.







So, right away, the code:







    # conftest.py
    import os
    import yaml
    import pytest


    def pytest_generate_tests(metafunc):

        def generate_id(input_data, level):
            level += 1

            # Separator and wrapping brackets for each nesting level
            INDENTS = {
                # level: (levelmark, additional_indent)
                1: ('_', ['', '']),
                2: ('-', ['[', ']'])
            }
            COMMON_INDENT = ('-', ['[', ']'])
            levelmark, additional_indent = INDENTS.get(level, COMMON_INDENT)

            # Too deeply nested - just show the type name
            if level > 3:
                return additional_indent[0] + type(input_data).__name__ + additional_indent[1]
            # Scalars are used as-is
            elif isinstance(input_data, (str, bool, float, int)):
                return str(input_data)
            # Sequences
            elif isinstance(input_data, (list, set, tuple)):
                # Build the id recursively, element by element
                list_repr = levelmark.join(
                    [generate_id(input_value, level=level) for input_value in input_data])
                return additional_indent[0] + list_repr + additional_indent[1]
            # Dictionaries are represented by their keys
            elif isinstance(input_data, dict):
                return '{' + levelmark.join(input_data.keys()) + '}'
            # Anything else gets no id
            else:
                return None

        # Skip test functions that do not request the test_input parameter
        if 'test_input' not in metafunc.fixturenames:
            return

        # Directory of the module that contains the test function
        dir_path = os.path.dirname(os.path.abspath(metafunc.module.__file__))
        # The data file is named after the test function
        file_path = os.path.join(dir_path, metafunc.function.__name__ + '.yaml')

        # Load the raw test cases
        with open(file_path) as f:
            raw_test_cases = yaml.full_load(f)

        # Make sure something was actually loaded
        if not raw_test_cases:
            raise ValueError("Test cases not loaded")

        # The final list of test cases
        test_cases = []

        # Turn every raw case into a pytest.param
        for case_id, test_case in enumerate(raw_test_cases):
            # Collect the marks listed for this case
            marks = [getattr(pytest.mark, name) for name in test_case.get("marks", [])]
            # Use the explicit id if given, otherwise generate one from the data
            case_id = test_case.get("id", generate_id(test_case["test_data"], level=0))
            # Pack the data, marks and id into pytest.param
            test_cases.append(pytest.param(*test_case["test_data"], marks=marks, id=case_id))

        return metafunc.parametrize("test_input, expected_result", test_cases)
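To make the id generation more tangible, here are a few illustrative calls to generate_id and the identifiers they would produce (in the hook it is a nested helper, so these calls are only for demonstration):

    # Illustrative only: generate_id is nested inside the hook above,
    # these calls just show the shape of the ids it builds.
    assert generate_id([1, 3], level=0) == "1_3"                    # matches the ids in the output below
    assert generate_id("abc", level=0) == "abc"                     # scalars are used as-is
    assert generate_id([{"a": 1}, [2, 3]], level=0) == "{a}_[2-3]"  # nested structures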





Accordingly, the YAML file changes as well:







    # test_multiplication.yaml
    - test_data: [1, 2]
      id: 'one_two'
    - test_data: [1,3]
      marks: ['xfail']
    - test_data: [1,5]
      marks: ['skip']
    - test_data: [2,4]
      id: "it's good"
      marks: ['xfail']
    - test_data: [3,4]
      marks: ['negative']
    - test_data: [5,4]
      marks: ['more_than']





Then the collected test list changes to:







    <Module test_script.py>
      <Function test_multiplication[one_two]>
      <Function test_multiplication[1_3]>
      <Function test_multiplication[1_5]>
      <Function test_multiplication[it's good]>
      <Function test_multiplication[3_4]>
      <Function test_multiplication[5_4]>





And the run now gives: 2 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 2 warnings in 0.12s









P.S.: the warnings appear because the custom markers are not registered in pytest.ini.
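To get rid of them, the markers can be registered, for example like this (a minimal sketch; the descriptions are made up):

    # pytest.ini
    [pytest]
    markers =
        yaml: test takes its parameters from a YAML file
        negative: negative test case
        more_than: expected result is larger than the input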







Further development of the topic



I am happy to discuss related questions in the comments.









