Tracking Failure Origins¶

The question of "Where does this value come from?" is fundamental for debugging. Which earlier variables could possibly have influenced the current erroneous state? And how did their values come to be?

When programmers read code during debugging, they scan it for potential origins of given values. This can be a tedious experience, notably if the origins are spread across multiple separate locations, possibly even in different modules. In this chapter, we thus investigate means to determine such origins automatically – by collecting data and control dependencies during program execution.

Prerequisites

  • You should have read the Introduction to Debugging.
  • To understand how to compute dependencies automatically (the second half of this chapter), you will need
    • advanced knowledge of Python semantics
    • knowledge of how to instrument and transform code
    • knowledge of how an interpreter works
In [5]:
# ignore
from typing import Set, List, Tuple, Any, Callable, Dict, Optional
from typing import Union, Type, Generator, cast

Dependencies¶

In the Introduction to debugging, we have seen how faults in a program state propagate to eventually become visible as failures. This induces a debugging strategy called tracking origins:

  1. We start with a single faulty state f – the failure.
  2. We determine f's origins – the parts of earlier states that could have caused the faulty state f.
  3. For each of these origins e, we determine whether they are faulty or not.
  4. For each of the faulty origins, we in turn determine their origins.
  5. If we find a part of the state that is faulty, yet has only correct origins, we have found the defect.

In all generality, a "part of the state" can be anything that can influence the program – some configuration setting, some database content, or the state of a device. Almost always, though, it is through individual variables that a part of the state manifests itself.

The good news is that variables do not take arbitrary values at arbitrary times – instead, they are set and accessed at precise moments in time, as determined by the program's semantics. This allows us to determine their origins by reading program code.

Let us assume you have a piece of code that reads as follows. The middle() function is supposed to return the "middle" number of three values x, y, and z – that is, the one number that neither is the minimum nor the maximum.

In [6]:
def middle(x, y, z):  # type: ignore
    if y < z:
        if x < y:
            return y
        elif x < z:
            return y
    else:
        if x > y:
            return y
        elif x > z:
            return x
    return z

In most cases, middle() runs just fine:

In [7]:
m = middle(1, 2, 3)
m
Out[7]:
2

In others, however, it returns the wrong value:

In [8]:
m = middle(2, 1, 3)
m
Out[8]:
1

This is a typical debugging situation: You see a value that is erroneous; and you want to find out where it came from.

  • In our case, we see that the erroneous value was returned from middle(), so we identify the five return statements in middle() that the value could have come from.
  • The value returned is the value of y, and none of x, y, and z is altered during the execution of middle(). Hence, it must be one of the three return y statements that is the origin of m. But which one?

For our small example, we can fire up an interactive debugger and simply step through the function; this reveals the conditions evaluated and the return statement executed.

In [10]:
# ignore
next_inputs(["step", "step", "step", "step", "quit"]);
In [11]:
with Debugger.Debugger():
    middle(2, 1, 3)
Calling middle(x = 2, y = 1, z = 3)
(debugger) step
2     if y < z:
(debugger) step
3         if x < y:
(debugger) step
5         elif x < z:
(debugger) step
6             return y
(debugger) quit

We now see that it was the second return statement that returned the incorrect value. But why was it executed after all? To this end, we can go back to the middle() source code and look at the conditions that caused the return y statement to be executed. Indeed, the evaluated conditions y < z, x < y, and finally x < z are origins of the returned value – and in turn have x, y, and z as origins.

In our above reasoning about origins, we have encountered two kinds of origins:

  • earlier data values (such as the value of y being returned) and
  • earlier control conditions (such as the if conditions governing the return y statement).

The later parts of the state that can be influenced by such origins are said to be dependent on these origins. Speaking of variables, a variable $x$ depends on the value of a variable $y$ (written as $x \leftarrow y$) if a change in $y$ could affect the value of $x$.

We distinguish two kinds of dependencies $x \leftarrow y$, aligned with the two kinds of origins as outlined above:

  • Data dependency: $x$ is assigned a value computed from $y$. In our example, m is data dependent on the return value of middle().
  • Control dependency: A statement involving $x$ is executed only because a condition involving $y$ was evaluated, influencing the execution path. In our example, the value returned by return y is control dependent on the several conditions along its path, which involve x, y, and z.

Let us examine these dependencies in more detail.

Data Dependencies¶

Here is an example of a data dependency in our middle() program. The value y returned by middle() comes from the value y as originally passed as argument. We use arrows $x \leftarrow y$ to indicate that a variable $x$ depends on an earlier variable $y$:

In [32]:
# ignore
middle_deps().backward_slice('<middle() return value>', mode='d')  # type: ignore
Out[32]:
[Figure: data dependency of the middle() return value on the argument y]

Here, we can see that the value y in the return statement is data dependent on the value of y as passed to middle(). An alternate interpretation of this graph is a data flow: The value of y in the upper node flows into the value of y in the lower node.

Since we consider the values of variables at specific locations in the program, such data dependencies can also be interpreted as dependencies between statements – the above return statement thus is data dependent on the initialization of y in the upper node.

Control Dependencies¶

Here is an example of a control dependency. The execution of the above return statement is controlled by the earlier test x < z. We use gray dashed lines to indicate control dependencies:

In [33]:
# ignore
middle_deps().backward_slice('<middle() return value>', mode='c', depth=1)  # type: ignore
Out[33]:
[Figure: direct control dependency of the return value on the x < z test]

This test in turn is controlled by earlier tests, so the full chain of control dependencies looks like this:

In [34]:
# ignore
middle_deps().backward_slice('<middle() return value>', mode='c')  # type: ignore
Out[34]:
[Figure: full chain of control dependencies for the middle() return value]

Dependency Graphs¶

The above <test> values (and their statements) are in turn also dependent on earlier data, namely the x, y, and z values as originally passed. We can draw all data and control dependencies in a single graph, called a program dependency graph:

In [35]:
# ignore
middle_deps()
Out[35]:
[Figure: program dependency graph for the middle(2, 1, 3) run]

This graph now gives us an idea on how to proceed to track the origins of the middle() return value at the bottom. Its value can come from any of the origins – namely the initialization of y at the function call, or from the <test> that controls it. This test in turn depends on x and z and their associated statements, which we now can check one after the other.

Note that all these dependencies in the graph are dynamic dependencies – that is, they refer to statements actually evaluated in the run at hand, as well as the decisions made in that very run. There also are static dependency graphs coming from static analysis of the code; but for debugging, dynamic dependencies specific to the failing run are more useful.

Showing Dependencies with Code¶

While a graph gives us a representation of which possible data and control flows to track, integrating dependencies with the actual program code results in a compact representation that is easy to reason about.

The following listing shows such an integration. For each executed line (*), we see its data (<=) and control (<-) dependencies, listing the associated variables and line numbers. The comment

# <= y (1); <- <test> (5)

for Line 6, for instance, states that the return value is data dependent on the value of y in Line 1, and control dependent on the test in Line 5.

Again, one can easily follow these dependencies back to track where a value came from (data dependencies) and why a statement was executed (control dependencies).

In [42]:
# ignore
middle_deps().code()  # type: ignore
*    1 def middle(x, y, z):  # type: ignore
*    2     if y < z:  # <= z (1), y (1)
*    3         if x < y:  # <= x (1), y (1); <- <test> (2)
     4             return y
*    5         elif x < z:  # <= z (1), x (1); <- <test> (3)
*    6             return y  # <= y (1); <- <test> (5)
     7     else:
     8         if x > y:
     9             return y
    10         elif x > z:
    11             return x
    12     return z

One important aspect of dependencies is that they not only point to specific sources and causes of failures – but that they also rule out parts of the program and state as potential failure causes.

  • In the above code, Lines 8 and later have no influence on the output, simply because they were not executed.
  • Furthermore, we see that we can start our investigation with Line 6, because that is the last one executed.
  • The data dependencies tell us that no statement has interfered with the value of y between the function call and its return.
  • Hence, the error must be in the conditions or the final return statement.

With this in mind, recall that our original invocation was middle(2, 1, 3). Why and how is the above code wrong?

In [43]:
quiz("Which of the following `middle()` code lines should be fixed?",
    [
        "Line 2: `if y < z:`",
        "Line 3: `if x < y:`",
        "Line 5: `elif x < z:`",
        "Line 6: `return y`",
    ], '(1 ** 0 + 1 ** 1) ** (1 ** 2 + 1 ** 3)')
Out[43]:

Quiz

Which of the following middle() code lines should be fixed?





Indeed, from the controlling conditions, we see that y < z, x >= y, and x < z all hold. Hence, y <= x < z holds, and it is x, not y, that should be returned.
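Following this diagnosis, a sketch of the fix (changing the second return y to return x; the name middle_fixed is ours, for illustration) would read:

```python
def middle_fixed(x, y, z):
    if y < z:
        if x < y:
            return y
        elif x < z:
            return x  # the fix: here y <= x < z holds, so x is the middle value
    else:
        if x > y:
            return y
        elif x > z:
            return x
    return z
```

With this change, middle_fixed(2, 1, 3) returns 2, as expected.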

Slices¶

Given a dependency graph for a particular variable, we can identify the subset of the program that could have influenced it – the so-called slice. In the above code listing, these code locations are highlighted with * characters. Only these locations are part of the slice.

Slices are central to debugging for two reasons:

  • First, they rule out those locations of the program that could not have an effect on the failure. Hence, these locations need not be investigated when it comes to searching for the defect. Nor do they need to be considered for a fix, as any change outside the program slice by construction cannot affect the failure.
  • Second, they bring together possible origins that may be scattered across the code. Many dependencies in program code are non-local, with references to functions, classes, and modules defined in other locations, files, or libraries. A slice brings together all those locations in a single whole.
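Conceptually, computing a (backward) slice from a dependency graph is a transitive reachability computation. Here is a minimal sketch, assuming dependencies are given as a plain mapping from nodes to their sets of origins:

```python
def backward_slice(criterion, deps):
    """Return all nodes that `criterion` transitively depends on,
    given `deps`: a mapping node -> set of origin nodes."""
    seen = set()
    todo = [criterion]
    while todo:
        node = todo.pop()
        if node not in seen:
            seen.add(node)
            todo.extend(deps.get(node, set()))  # follow origins transitively
    return seen

# Toy dependency mapping: m depends on a test and on y; the test on x and z
deps = {'m': {'<test>', 'y'}, '<test>': {'x', 'z'}}
backward_slice('m', deps)  # {'m', '<test>', 'y', 'x', 'z'}
```

Everything not in the returned set is, by construction, outside the slice – and thus irrelevant for the failure at hand.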

Here is an example of a slice – this time for our well-known remove_html_markup() function from the introduction to debugging:

In [45]:
print_content(inspect.getsource(remove_html_markup), '.py')
def remove_html_markup(s):  # type: ignore
    tag = False
    quote = False
    out = ""

    for c in s:
        assert tag or not quote

        if c == '<' and not quote:
            tag = True
        elif c == '>' and not quote:
            tag = False
        elif (c == '"' or c == "'") and tag:
            quote = not quote
        elif not tag:
            out = out + c

    return out

When we invoke remove_html_markup() as follows...

In [46]:
remove_html_markup('<foo>bar</foo>')
Out[46]:
'bar'

... we obtain the following dependencies:

In [47]:
# ignore
def remove_html_markup_deps() -> Dependencies:
    return Dependencies({('s', (remove_html_markup, 136)): set(), ('tag', (remove_html_markup, 137)): set(), ('quote', (remove_html_markup, 138)): set(), ('out', (remove_html_markup, 139)): set(), ('c', (remove_html_markup, 141)): {('s', (remove_html_markup, 136))}, ('<test>', (remove_html_markup, 144)): {('quote', (remove_html_markup, 138)), ('c', (remove_html_markup, 141))}, ('tag', (remove_html_markup, 145)): set(), ('<test>', (remove_html_markup, 146)): {('quote', (remove_html_markup, 138)), ('c', (remove_html_markup, 141))}, ('<test>', (remove_html_markup, 148)): {('c', (remove_html_markup, 141))}, ('<test>', (remove_html_markup, 150)): {('tag', (remove_html_markup, 147)), ('tag', (remove_html_markup, 145))}, ('tag', (remove_html_markup, 147)): set(), ('out', (remove_html_markup, 151)): {('out', (remove_html_markup, 151)), ('c', (remove_html_markup, 141)), ('out', (remove_html_markup, 139))}, ('<remove_html_markup() return value>', (remove_html_markup, 153)): {('<test>', (remove_html_markup, 146)), ('out', (remove_html_markup, 151))}}, {('s', (remove_html_markup, 136)): set(), ('tag', (remove_html_markup, 137)): set(), ('quote', (remove_html_markup, 138)): set(), ('out', (remove_html_markup, 139)): set(), ('c', (remove_html_markup, 141)): set(), ('<test>', (remove_html_markup, 144)): set(), ('tag', (remove_html_markup, 145)): {('<test>', (remove_html_markup, 144))}, ('<test>', (remove_html_markup, 146)): {('<test>', (remove_html_markup, 144))}, ('<test>', (remove_html_markup, 148)): {('<test>', (remove_html_markup, 146))}, ('<test>', (remove_html_markup, 150)): {('<test>', (remove_html_markup, 148))}, ('tag', (remove_html_markup, 147)): {('<test>', (remove_html_markup, 146))}, ('out', (remove_html_markup, 151)): {('<test>', (remove_html_markup, 150))}, ('<remove_html_markup() return value>', (remove_html_markup, 153)): set()})
In [48]:
# ignore
remove_html_markup_deps().graph()
Out[48]:
[Figure: dependency graph for the remove_html_markup('<foo>bar</foo>') run]

Again, we can read such a graph forward (starting from, say, s) or backward (starting from the return value). Starting forward, we see how the passed string s flows into the for loop, breaking s into individual characters c that are then checked on various occasions, before flowing into the out return value. We also see how the various if conditions are all influenced by c, tag, and quote.

In [49]:
quiz("Why does the first line `tag = False` not influence anything?",
    [
        "Because the input contains only tags",
        "Because `tag` is set to True with the first character",
        "Because `tag` is not read by any variable",
        "Because the input contains no tags",
    ], '(1 << 1 + 1 >> 1)')
Out[49]:

Quiz

Why does the first line tag = False not influence anything?





Which are the locations that set tag to True? To this end, we compute the slice of tag at tag = True:

In [50]:
# ignore
tag_deps = Dependencies({('tag', (remove_html_markup, 145)): set(), ('<test>', (remove_html_markup, 144)): {('quote', (remove_html_markup, 138)), ('c', (remove_html_markup, 141))}, ('quote', (remove_html_markup, 138)): set(), ('c', (remove_html_markup, 141)): {('s', (remove_html_markup, 136))}, ('s', (remove_html_markup, 136)): set()}, {('tag', (remove_html_markup, 145)): {('<test>', (remove_html_markup, 144))}, ('<test>', (remove_html_markup, 144)): set(), ('quote', (remove_html_markup, 138)): set(), ('c', (remove_html_markup, 141)): set(), ('s', (remove_html_markup, 136)): set()})
tag_deps
Out[50]:
[Figure: backward slice of tag at tag = True]

We see where the value of tag comes from: from the characters c in s as well as quote, which all cause it to be set. Again, we can combine these dependencies and the listing in a single, compact view. Note, again, that there are no other locations in the code that could possibly have affected tag in our run.

In [51]:
# ignore
tag_deps.code()
   238 def remove_html_markup(s):  # type: ignore
   239     tag = False
   240     quote = False
   241     out = ""
   242 
   243     for c in s:
   244         assert tag or not quote
   245 
   246         if c == '<' and not quote:
   247             tag = True
   248         elif c == '>' and not quote:
   249             tag = False
   250         elif (c == '"' or c == "'") and tag:
   251             quote = not quote
   252         elif not tag:
   253             out = out + c
   254 
   255     return out
In [52]:
quiz("How does the slice of `tag = True` change "
     "for a different value of `s`?",
    [
        "Not at all",
        "If `s` contains a quote, the `quote` slice is included, too",
        "If `s` contains no HTML tag, the slice will be empty"
    ], '[1, 2, 3][1:]')
Out[52]:

Quiz

How does the slice of tag = True change for a different value of s?




Indeed, our dynamic slices reflect dependencies as they occurred within a single execution. As the execution changes, so do the dependencies.

Tracking Techniques¶

For the remainder of this chapter, let us investigate means to determine such dependencies automatically – by collecting them during program execution. The idea is that with a single Python call, we can collect the dependencies for some computation, and present them to programmers – as graphs or as code annotations, as shown above.

To track dependencies, for every variable, we need to keep track of its origins – where it obtained its value, and which tests controlled its assignments. There are two ways to do so:

  • Wrapping Data Objects
  • Wrapping Data Accesses

Wrapping Data Objects¶

One way to track origins is to wrap each value in a class that stores both a value and the origin of the value. If a variable x is initialized to zero in Line 3, for instance, we could store it as

x = (value=0, origin=<Line 3>)

and if it is copied in, say, Line 5 to another variable y, we could store this as

y = (value=0, origin=<Line 3, Line 5>)

Such a scheme would allow us to track origins and dependencies right within the variable.
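A minimal sketch of this idea, using a hypothetical Tracked pair (our own illustration, not part of any library) in place of plain values:

```python
from collections import namedtuple

# Hypothetical wrapper: a value together with the locations it passed through
Tracked = namedtuple('Tracked', ['value', 'origins'])

x = Tracked(0, ('Line 3',))                    # x = 0, set in Line 3
y = Tracked(x.value, x.origins + ('Line 5',))  # y = x, copied in Line 5
y  # Tracked(value=0, origins=('Line 3', 'Line 5'))
```

Of course, this only works if every copy and computation diligently extends the origins – which is precisely where the drawbacks discussed below come in.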

In a language like Python, it is actually possible to subclass from basic types. Here's how we create a MyInt subclass of int:

In [53]:
class MyInt(int):
    def __new__(cls: Type, value: Any, *args: Any, **kwargs: Any) -> Any:
        return super(cls, cls).__new__(cls, value)

    def __repr__(self) -> str:
        return f"{int(self)}"
In [54]:
n: MyInt = MyInt(5)

We can access n just like any integer:

In [55]:
n, n + 1
Out[55]:
(5, 6)

However, we can also add extra attributes to it:

In [56]:
n.origin = "Line 5"  # type: ignore
In [57]:
n.origin  # type: ignore
Out[57]:
'Line 5'

Such a "wrapping" scheme has the advantage of leaving program code untouched – simply pass "wrapped" objects instead of the original values. However, it also has a number of drawbacks.

  • First, we must make sure that the "wrapper" objects are still compatible with the original values – notably by converting them back whenever needed. (What happens if an internal Python function expects an int and gets a MyInt instead?)
  • Second, we have to make sure that origins do not get lost during computations – which involves overloading operators such as +, -, *, and so on. (Right now, MyInt(1) + 1 gives us an int object, not a MyInt.)
  • Third, we have to do this for all data types of a language, which is pretty tedious.
  • Fourth and last, we want to track whenever a value is assigned to another variable. Python provides no hook for plain assignments, and thus our dependencies would necessarily be incomplete.
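The second drawback – origins getting lost during computations – could be addressed by overloading operators. Here is a sketch for + only, on a hypothetical OriginInt class of our own making:

```python
class OriginInt(int):
    """Sketch: an int that carries the origins of its value."""
    def __new__(cls, value, origins=frozenset()):
        obj = super().__new__(cls, value)
        obj.origins = frozenset(origins)
        return obj

    def __add__(self, other):
        # Merge origins; plain ints contribute no origins
        other_origins = getattr(other, 'origins', frozenset())
        return OriginInt(int(self) + int(other),
                         self.origins | other_origins)

n = OriginInt(1, {'Line 3'}) + OriginInt(2, {'Line 5'})
int(n), n.origins  # (3, frozenset({'Line 3', 'Line 5'}))
```

A full implementation would have to do the same for __radd__, __sub__, __mul__, and every other operator – and for every data type – which is exactly the tedium noted above.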

Wrapping Data Accesses¶

An alternate way of tracking origins is to instrument the source code such that all data read and write operations are tracked. That is, the original data stays unchanged, but we change the code instead.

In essence, for every occurrence of a variable x being read, we replace it with

_data.get('x', x)  # returns x

and for every occurrence of a value being written to x, we replace the value with

_data.set('x', value)  # returns value

and let the _data object track these reads and writes.

Hence, an assignment such as

a = b + c

would get rewritten to

a = _data.set('a', _data.get('b', b) + _data.get('c', c))

and with every access to _data, we would track

  1. the current location in the code, and
  2. whether the respective variable was read or written.

For the above statement, we could deduce that b and c were read, and a was written – which makes a data dependent on b and c.

The advantage of such instrumentation is that it works with arbitrary objects (in Python, that is) – we do not care whether a, b, and c would be integers, floats, strings, lists or any other type for which + would be defined. Also, the code semantics remain entirely unchanged.

The disadvantage, however, is that it takes a bit of effort to exactly separate reads and writes into individual groups, and that a number of language features have to be handled separately. This is what we do in the remainder of this chapter.

A Data Tracker¶

To implement _data accesses as shown above, we introduce the DataTracker class. As its name suggests, it keeps track of variables being read and written, and provides methods to determine the code location where this took place.

In [58]:
class DataTracker(StackInspector):
    """Track data accesses during execution"""

    def __init__(self, log: bool = False) -> None:
        """Constructor. If `log` is set, turn on logging."""
        self.log = log

set() is invoked when a variable is set, as in

pi = _data.set('pi', 3.1415)

By default, we simply log the access using name and value. (loads will be used later.)

In [59]:
class DataTracker(DataTracker):
    def set(self, name: str, value: Any, loads: Optional[Set[str]] = None) -> Any:
        """Track setting `name` to `value`."""
        if self.log:
            caller_func, lineno = self.caller_location()
            print(f"{caller_func.__name__}:{lineno}: setting {name}")

        return value

get() is invoked when a variable is retrieved, as in

print(_data.get('pi', pi))

By default, we simply log the access.

In [60]:
class DataTracker(DataTracker):
    def get(self, name: str, value: Any) -> Any:
        """Track getting `value` from `name`."""

        if self.log:
            caller_func, lineno = self.caller_location()
            print(f"{caller_func.__name__}:{lineno}: getting {name}")

        return value

Here's an example of a logging DataTracker:

In [61]:
_test_data = DataTracker(log=True)
x = _test_data.set('x', 1)
<module>:2: setting x
In [62]:
_test_data.get('x', x)
<module>:1: getting x
Out[62]:
1

Instrumenting Source Code¶

How do we transform source code such that read and write accesses to variables are automatically rewritten? To this end, we inspect the internal representation of source code, namely the abstract syntax trees (ASTs). An AST represents the code as a tree, with specific node types for each syntactical element.

Here is the tree representation for our middle() function. It starts with a FunctionDef node at the top (with the name "middle" and the three arguments x, y, z as children), followed by a subtree for each of the If statements, each of which contains a branch for when their condition evaluates to True and a branch for when their condition evaluates to False.

In [65]:
middle_tree = ast.parse(inspect.getsource(middle))
show_ast(middle_tree)
[Figure: abstract syntax tree of the middle() function]

At the very bottom of the tree, you can see a number of Name nodes, referring to individual variables. These are the ones we want to transform.

Tracking Variable Accesses¶

Our goal is to traverse the tree, identify all Name nodes, and convert them to respective _data accesses. To this end, we manipulate the AST through the Python ast module. The official Python ast reference is complete, but a bit brief; the documentation "Green Tree Snakes - the missing Python AST docs" provides an excellent introduction.

The Python ast module provides a class NodeTransformer that allows such transformations. By subclassing from it, we provide a method visit_Name() that will be invoked for all Name nodes – and replaces them with a new subtree produced by make_get_data():

In [68]:
DATA_TRACKER = '_data'
In [69]:
def is_internal(id: str) -> bool:
    """Return True if `id` is a built-in function or type"""
    return (id in dir(__builtins__) or id in dir(typing))
In [70]:
assert is_internal('int')
assert is_internal('None')
assert is_internal('Tuple')
In [71]:
class TrackGetTransformer(NodeTransformer):
    def visit_Name(self, node: Name) -> AST:
        self.generic_visit(node)

        if is_internal(node.id):
            # Do not change built-in names and types
            return node

        if node.id == DATA_TRACKER:
            # Do not change own accesses
            return node

        if not isinstance(node.ctx, Load):
            # Only change loads (not stores, not deletions)
            return node

        new_node = make_get_data(node.id)
        ast.copy_location(new_node, node)
        return new_node

Our function make_get_data(id, method) returns a new subtree equivalent to the Python code _data.method('id', id).

In [73]:
# Starting with Python 3.8, these will become Constant.
# from ast import Num, Str, NameConstant
# Use `ast.Num`, `ast.Str`, and `ast.NameConstant` for compatibility
In [74]:
def make_get_data(id: str, method: str = 'get') -> Call:
    return Call(func=Attribute(value=Name(id=DATA_TRACKER, ctx=Load()), 
                               attr=method, ctx=Load()),
                args=[ast.Str(id), Name(id=id, ctx=Load())],
                keywords=[])

This is the tree that make_get_data() produces:

In [75]:
show_ast(Module(body=[make_get_data("x")], type_ignores=[]))  # type: ignore
[Figure: AST produced by make_get_data("x")]

How do we know that this is a correct subtree? We can carefully read the official Python ast reference and then proceed by trial and error (and apply delta debugging to determine error causes). Or – pro tip! – we can simply take a piece of Python code, parse it and use ast.dump() to print out how to construct the resulting AST:

In [76]:
print(ast.dump(ast.parse("_data.get('x', x)")))
Module(body=[Expr(value=Call(func=Attribute(value=Name(id='_data', ctx=Load()), attr='get', ctx=Load()), args=[Constant(value='x'), Name(id='x', ctx=Load())], keywords=[]))], type_ignores=[])

If you compare the above output with the code of make_get_data(), above, you will see where the structure of make_get_data() comes from.

Let us put TrackGetTransformer into action. Its visit() method calls visit_Name(), which in turn transforms the Name nodes as intended. The transformation happens in place.

In [77]:
TrackGetTransformer().visit(middle_tree);

To see the effect of our transformations, we introduce a method dump_tree() which outputs the tree – and also compiles it to check for any inconsistencies.

In [78]:
def dump_tree(tree: AST) -> None:
    print_content(ast.unparse(tree), '.py')
    ast.fix_missing_locations(tree)  # Must run this before compiling
    _ = compile(cast(ast.Module, tree), '<dump_tree>', 'exec')

We see that our transformer has properly replaced all variable accesses:

In [79]:
dump_tree(middle_tree)
def middle(x, y, z):
    if _data.get('y', y) < _data.get('z', z):
        if _data.get('x', x) < _data.get('y', y):
            return _data.get('y', y)
        elif _data.get('x', x) < _data.get('z', z):
            return _data.get('y', y)
    elif _data.get('x', x) > _data.get('y', y):
        return _data.get('y', y)
    elif _data.get('x', x) > _data.get('z', z):
        return _data.get('x', x)
    return _data.get('z', z)

Let us now execute this code together with the DataTracker() class we previously introduced. The class DataTrackerTester() takes a (transformed) tree and a function. Using it as

with DataTrackerTester(tree, func):
    func(...)

first executes the code in tree (possibly instrumenting func) and then the with body. At the end, func is restored to its previous (non-instrumented) version.

In [81]:
class DataTrackerTester:
    def __init__(self, tree: AST, func: Callable, log: bool = True) -> None:
        """Constructor. Execute the code in `tree` while instrumenting `func`."""
        # We pass the source file of `func` such that we can retrieve it
        # when accessing the location of the new compiled code
        source = cast(str, inspect.getsourcefile(func))
        self.code = compile(cast(ast.Module, tree), source, 'exec')
        self.func = func
        self.log = log

    def make_data_tracker(self) -> Any:
        return DataTracker(log=self.log)

    def __enter__(self) -> Any:
        """Rewrite function"""
        tracker = self.make_data_tracker()
        globals()[DATA_TRACKER] = tracker
        exec(self.code, globals())
        return tracker

    def __exit__(self, exc_type: Type, exc_value: BaseException,
                 traceback: TracebackType) -> Optional[bool]:
        """Restore function"""
        globals()[self.func.__name__] = self.func
        del globals()[DATA_TRACKER]
        return None

Here is our middle() function:

In [82]:
print_content(inspect.getsource(middle), '.py', start_line_number=1)
 1  def middle(x, y, z):  # type: ignore
 2      if y < z:
 3          if x < y:
 4              return y
 5          elif x < z:
 6              return y
 7      else:
 8          if x > y:
 9              return y
10          elif x > z:
11              return x
12      return z

And here is our instrumented middle_tree executed with a DataTracker object. We see how the middle() tests access one argument after another.

In [83]:
with DataTrackerTester(middle_tree, middle):
    middle(2, 1, 3)
middle:2: getting y
middle:2: getting z
middle:3: getting x
middle:3: getting y
middle:5: getting x
middle:5: getting z
middle:6: getting y

After DataTrackerTester is done, middle is reverted to its non-instrumented version:

In [84]:
middle(2, 1, 3)
Out[84]:
1

For a complete picture of what happens during executions, we implement a number of additional code transformers.

For each assignment statement x = y, we change it to x = _data.set('x', y). This allows tracking assignments.

Each return statement return x is transformed to return _data.set('<return_value>', x). This allows tracking return values.

To track control dependencies, for every block controlled by an if, while, or for:

  1. We wrap their tests in a _data.test() wrapper. This allows us to assign pseudo-variables like <test> which hold the conditions.
  2. We wrap their controlled blocks in a with statement. This allows us to track the variables read right before the with (= the controlling variables), and to restore the current controlling variables when the block is left.

A statement

if cond:
    body

thus becomes

if _data.test(cond):
    with _data:
        body

We also want to be able to track calls across multiple functions. To this end, we wrap each call

func(arg1, arg2, ...)

into

_data.ret(_data.call(func)(_data.arg(arg1), _data.arg(arg2), ...))

each of which simply passes through its given argument, while allowing us to track the beginning of calls (call()), the computation of arguments (arg()), and the return of the call (ret()), respectively.

On the receiving end, for each function argument x, we insert a call _data.param('x', x, [position info]) to initialize x. This is useful for tracking parameters across function calls.

What do we obtain after we have applied all these transformers on middle()? We see that the code now contains quite a load of instrumentation.

In [140]:
dump_tree(middle_tree)
def middle(x, y, z):
    _data.param('x', x, pos=1)
    _data.param('y', y, pos=2)
    _data.param('z', z, pos=3, last=True)
    if _data.test(_data.get('y', y) < _data.get('z', z)):
        with _data:
            if _data.test(_data.get('x', x) < _data.get('y', y)):
                with _data:
                    return _data.set('<middle() return value>', _data.get('y', y))
            else:
                with _data:
                    if _data.test(_data.get('x', x) < _data.get('z', z)):
                        with _data:
                            return _data.set('<middle() return value>', _data.get('y', y))
    else:
        with _data:
            if _data.test(_data.get('x', x) > _data.get('y', y)):
                with _data:
                    return _data.set('<middle() return value>', _data.get('y', y))
            else:
                with _data:
                    if _data.test(_data.get('x', x) > _data.get('z', z)):
                        with _data:
                            return _data.set('<middle() return value>', _data.get('x', x))
    return _data.set('<middle() return value>', _data.get('z', z))

And when we execute this code, we see that we can track quite a number of events, while the code semantics stay unchanged.

In [141]:
with DataTrackerTester(middle_tree, middle):
    m = middle(2, 1, 3)
m
middle:12: initializing x #1
middle:12: setting x
middle:12: initializing y #2
middle:12: setting y
middle:12: initializing z #3
middle:12: setting z
middle:2: getting y
middle:2: getting z
middle:2: testing condition
middle:3: entering block
middle:3: getting x
middle:3: getting y
middle:3: testing condition
middle:5: entering block
middle:5: getting x
middle:5: getting z
middle:5: testing condition
middle:6: entering block
middle:6: getting y
middle:6: setting <middle() return value>
middle:6: exiting block
middle:5: exiting block
middle:3: exiting block
Out[141]:
1

Our next step will now be not only to log these events, but to actually construct dependencies from them.

Tracking Dependencies¶

To construct dependencies from variable accesses, we subclass DataTracker into DependencyTracker – a class that actually keeps track of all these dependencies. Its constructor initializes a number of variables we will discuss below.

In [144]:
class DependencyTracker(DataTracker):
    """Track dependencies during execution"""

    def __init__(self, *args: Any, **kwargs: Any) -> None:
        """Constructor. Arguments are passed to DataTracker.__init__()"""
        super().__init__(*args, **kwargs)

        self.origins: Dict[str, Location] = {}  # Where current variables were last set
        self.data_dependencies: Dependency = {}  # As with Dependencies, above
        self.control_dependencies: Dependency = {}

        self.last_read: List[str] = []  # List of last read variables
        self.last_checked_location = (StackInspector.unknown, 1)
        self._ignore_location_change = False

        self.data: List[List[str]] = [[]]  # Data stack
        self.control: List[List[str]] = [[]]  # Control stack

        self.frames: List[Dict[Union[int, str], Any]] = [{}]  # Argument stack
        self.args: Dict[Union[int, str], Any] = {}  # Current args

Data Dependencies¶

The first job of our DependencyTracker is to construct dependencies between read and written variables.

Reading Variables¶

As in DataTracker, the key method of DependencyTracker again is get(), invoked as _data.get('x', x) whenever a variable x is read. First and foremost, it appends the name of the read variable to the list last_read.

In [145]:
class DependencyTracker(DependencyTracker):
    def get(self, name: str, value: Any) -> Any:
        """Track a read access for variable `name` with value `value`"""
        self.check_location()
        self.last_read.append(name)
        return super().get(name, value)

    def check_location(self) -> None:
        pass  # More on that below
In [146]:
x = 5
y = 3
In [147]:
_test_data = DependencyTracker(log=True)
_test_data.get('x', x) + _test_data.get('y', y)
<module>:2: getting x
<module>:2: getting y
Out[147]:
8
In [148]:
_test_data.last_read
Out[148]:
['x', 'y']

Checking Locations¶

However, before appending the read variable to last_read, _data.get() does one more thing. By invoking check_location(), it clears the last_read list if we have reached a new line in the execution. This avoids situations such as

x
y
z = a + b

where x and y are, well, read, but do not affect the last line. Therefore, with every new line, the list of last read variables is cleared.

In [149]:
class DependencyTracker(DependencyTracker):
    def clear_read(self) -> None:
        """Clear set of read variables"""
        if self.log:
            direct_caller = inspect.currentframe().f_back.f_code.co_name  # type: ignore
            caller_func, lineno = self.caller_location()
            print(f"{caller_func.__name__}:{lineno}: "
                  f"clearing read variables {self.last_read} "
                  f"(from {direct_caller})")

        self.last_read = []

    def check_location(self) -> None:
        """If we are in a new location, clear set of read variables"""
        location = self.caller_location()
        func, lineno = location
        last_func, last_lineno = self.last_checked_location

        if self.last_checked_location != location:
            if self._ignore_location_change:
                self._ignore_location_change = False
            elif func.__name__.startswith('<'):
                # Entering list comprehension, eval(), exec(), ...
                pass
            elif last_func.__name__.startswith('<'):
                # Exiting list comprehension, eval(), exec(), ...
                pass
            else:
                # Standard case
                self.clear_read()

        self.last_checked_location = location

Two methods can suppress this reset of the last_read list:

  • ignore_next_location_change() suppresses the reset for the next line. This is useful when returning from a function, when the return value is still in the list of "read" variables.
  • ignore_location_change() suppresses the reset for the current line. This is useful if we already have returned from a function call.
In [150]:
class DependencyTracker(DependencyTracker):
    def ignore_next_location_change(self) -> None:
        self._ignore_location_change = True

    def ignore_location_change(self) -> None:
        self.last_checked_location = self.caller_location()

Watch what happens to last_read as we execute new lines. (Note that at the top level of a Jupyter notebook, the caller's name is <module>, which starts with '<'; check_location() therefore skips the reset, and last_read keeps accumulating.)

In [151]:
_test_data = DependencyTracker()
In [152]:
_test_data.get('x', x) + _test_data.get('y', y)
Out[152]:
8
In [153]:
_test_data.last_read
Out[153]:
['x', 'y']
In [154]:
a = 42
b = -1
_test_data.get('a', a) + _test_data.get('b', b)
Out[154]:
41
In [155]:
_test_data.last_read
Out[155]:
['x', 'y', 'a', 'b']

Setting Variables¶

The method set() creates dependencies. It is invoked as _data.set('x', value) whenever a variable x is set.

First and foremost, it takes the list of last read variables last_read, and for each variable $v$ in it, it takes $v$'s origin $o$ (the place where $v$ was last set) and adds the pair ($v$, $o$) to the set of data dependencies of the variable being set. It then does the same for control dependencies (more on these below), and finally records in self.origins the current location as the new origin of the set variable.

In [157]:
class DependencyTracker(DependencyTracker):
    TEST = '<test>'  # Name of pseudo-variables for testing conditions

    def set(self, name: str, value: Any, loads: Optional[Set[str]] = None) -> Any:
        """Add a dependency for `name` = `value`"""

        def add_dependencies(dependencies: Set[Node], 
                             vars_read: List[str], tp: str) -> None:
            """Add origins of `vars_read` to `dependencies`."""
            for var_read in vars_read:
                if var_read in self.origins:
                    if var_read == self.TEST and tp == "data":
                        # Can't have data dependencies on conditions
                        continue

                    origin = self.origins[var_read]
                    dependencies.add((var_read, origin))

                    if self.log:
                        origin_func, origin_lineno = origin
                        caller_func, lineno = self.caller_location()
                        print(f"{caller_func.__name__}:{lineno}: "
                              f"new {tp} dependency: "
                              f"{name} <= {var_read} "
                              f"({origin_func.__name__}:{origin_lineno})")

        self.check_location()
        ret = super().set(name, value)
        location = self.caller_location()

        add_dependencies(self.data_dependencies.setdefault
                         ((name, location), set()),
                         self.last_read, tp="data")
        add_dependencies(self.control_dependencies.setdefault
                         ((name, location), set()),
                         cast(List[str], itertools.chain.from_iterable(self.control)),
                         tp="control")

        self.origins[name] = location

        # Reset read info for next line
        self.last_read = [name]

        # Next line is a new location
        self._ignore_location_change = False

        return ret

    def dependencies(self) -> Dependencies:
        """Return dependencies"""
        return Dependencies(self.data_dependencies,
                            self.control_dependencies)

Let us illustrate set() by example. Here's a set of variables read and written:

In [158]:
_test_data = DependencyTracker()
x = _test_data.set('x', 1)
y = _test_data.set('y', _test_data.get('x', x))
z = _test_data.set('z', _test_data.get('x', x) + _test_data.get('y', y))

The attribute origins saves for each variable where it was last written:

In [159]:
_test_data.origins
Out[159]:
{'x': (<function __main__.<module>()>, 2),
 'y': (<function __main__.<module>()>, 3),
 'z': (<function __main__.<module>()>, 4)}

The attribute data_dependencies tracks for each variable the variables its value was read from:

In [160]:
_test_data.data_dependencies
Out[160]:
{('x', (<function __main__.<module>()>, 2)): set(),
 ('y',
  (<function __main__.<module>()>, 3)): {('x',
   (<function __main__.<module>()>, 2))},
 ('z',
  (<function __main__.<module>()>, 4)): {('x',
   (<function __main__.<module>()>, 2)), ('y',
   (<function __main__.<module>()>, 3))}}

Hence, the above code already gives us a small dependency graph:

In [161]:
# ignore
_test_data.dependencies().graph()
Out[161]:
(dependency graph; image not shown)

In the remainder of this section, we define methods to

  • track control dependencies (test(), __enter__(), __exit__())
  • track function calls and returns (call(), ret())
  • track function arguments (arg(), param())
  • check the validity of our dependencies (validate()).

Like our get() and set() methods above, these work by refining the appropriate methods defined in the DataTracker class, building on our NodeTransformer transformations.

At this point, DependencyTracker is complete; we have everything in place to track even complex dependencies in instrumented code.

Slicing Code¶

Let us now put all these pieces together. We have a means to instrument the source code (our various NodeTransformer classes) and a means to track dependencies (the DependencyTracker class). Now comes the time to put all these things together in a single tool, which we call Slicer.

The basic idea of Slicer is that you can use it as follows:

with Slicer(func_1, func_2, ...) as slicer:
    func(...)

which first instruments the functions given in the constructor (i.e., replaces their definitions with instrumented counterparts), and then runs the code in the body, calling instrumented functions, and allowing the slicer to collect dependencies. When the body returns, the original definition of the instrumented functions is restored.

An Instrumenter Base Class¶

The basic functionality of instrumenting a number of functions (and restoring them at the end of the with block) comes in an Instrumenter base class. It invokes instrument() on all items to instrument; this method is to be overloaded in subclasses.

In [184]:
class Instrumenter(StackInspector):
    """Instrument functions for dynamic tracking"""

    def __init__(self, *items_to_instrument: Callable,
                 globals: Optional[Dict[str, Any]] = None,
                 log: Union[bool, int] = False) -> None:
        """
        Create an instrumenter.
        `items_to_instrument` is a list of items to instrument.
        `globals` is a namespace to use (default: caller's globals())
        """

        self.log = log
        self.items_to_instrument: List[Callable] = list(items_to_instrument)
        self.instrumented_items: Set[Any] = set()

        if globals is None:
            globals = self.caller_globals()
        self.globals = globals

    def __enter__(self) -> Any:
        """Instrument sources"""
        items = self.items_to_instrument
        if not items:
            items = self.default_items_to_instrument()

        for item in items:
            self.instrument(item)

        return self

    def default_items_to_instrument(self) -> List[Callable]:
        return []

    def instrument(self, item: Any) -> Any:
        """Instrument `item`. To be overloaded in subclasses."""
        if self.log:
            print("Instrumenting", item)
        self.instrumented_items.add(item)
        return item

At the end of the with block, we restore the given functions.

In [185]:
class Instrumenter(Instrumenter):
    def __exit__(self, exc_type: Type, exc_value: BaseException,
                 traceback: TracebackType) -> Optional[bool]:
        """Restore sources"""
        self.restore()
        return None

    def restore(self) -> None:
        for item in self.instrumented_items:
            self.globals[item.__name__] = item
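
This restore step works because redefining a function merely rebinds its name in a namespace dictionary; the original function object is untouched. A minimal sketch of the save-and-restore pattern, using an explicit dictionary in place of the caller's globals() (hypothetical names):

```python
# A namespace standing in for the caller's globals()
namespace = {}

def greet():
    return "original"

namespace['greet'] = greet         # the "original definition"
saved = namespace['greet']         # save it, as Instrumenter does

def logged_greet():
    print("greet() called")
    return saved()

namespace['greet'] = logged_greet  # instrument: rebind the name
assert namespace['greet']() == "original"  # wrapper runs, same result

namespace['greet'] = saved         # restore: rebind back
assert namespace['greet'] is greet
```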

By default, an Instrumenter simply outputs a log message:

In [186]:
with Instrumenter(middle, log=True) as ins:
    pass
Instrumenting <function middle at 0x11d0c1080>

The Slicer Class¶

The Slicer class comes as a subclass of Instrumenter. It sets its own dependency tracker (which can be overwritten by setting the dependency_tracker keyword argument).

In [187]:
class Slicer(Instrumenter):
    """Track dependencies in an execution"""

    def __init__(self, *items_to_instrument: Any,
                 dependency_tracker: Optional[DependencyTracker] = None,
                 globals: Optional[Dict[str, Any]] = None,
                 log: Union[bool, int] = False):
        """Create a slicer.
        `items_to_instrument` are Python functions or modules with source code.
        `dependency_tracker` is the tracker to be used (default: DependencyTracker).
        `globals` is the namespace to be used (default: caller's `globals()`)
        `log`=True or `log` > 0 turns on logging
        """
        super().__init__(*items_to_instrument, globals=globals, log=log)

        if dependency_tracker is None:
            dependency_tracker = DependencyTracker(log=(log > 1))
        self.dependency_tracker = dependency_tracker

        self.saved_dependencies = None

    def default_items_to_instrument(self) -> List[Callable]:
        raise ValueError("Need one or more items to instrument")

The parse() method parses a given item, returning its AST.

In [188]:
class Slicer(Slicer):
    def parse(self, item: Any) -> AST:
        """Parse `item`, returning its AST"""
        source_lines, lineno = inspect.getsourcelines(item)
        source = "".join(source_lines)

        if self.log >= 2:
            print_content(source, '.py', start_line_number=lineno)
            print()
            print()

        tree = ast.parse(source)
        ast.increment_lineno(tree, lineno - 1)
        return tree
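
The line-number adjustment matters: ast.parse() numbers lines from 1, whereas the function may start deep inside its file. A small self-contained sketch (parsing from a string rather than via inspect, so it runs anywhere):

```python
import ast

source = "def sample(a, b):\n    return a + b\n"
start_line = 10  # pretend the function starts at line 10 of its file

tree = ast.parse(source)            # lineno of `def` is 1 here
ast.increment_lineno(tree, start_line - 1)

func_def = tree.body[0]
assert func_def.lineno == 10        # now matches the (pretend) file
assert func_def.body[0].lineno == 11
```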

The transform() method applies the list of transformers defined earlier in this chapter.

In [189]:
class Slicer(Slicer):
    def transformers(self) -> List[NodeTransformer]:
        """List of transformers to apply. To be extended in subclasses."""
        return [
            TrackCallTransformer(),
            TrackSetTransformer(),
            TrackGetTransformer(),
            TrackControlTransformer(),
            TrackReturnTransformer(),
            TrackParamsTransformer()
        ]

    def transform(self, tree: AST) -> AST:
        """Apply transformers on `tree`. May be extended in subclasses."""
        # Apply transformers
        for transformer in self.transformers():
            if self.log >= 3:
                print(transformer.__class__.__name__ + ':')

            transformer.visit(tree)
            ast.fix_missing_locations(tree)
            if self.log >= 3:
                print_content(ast.unparse(tree), '.py')
                print()
                print()

        if 0 < self.log < 3:
            print_content(ast.unparse(tree), '.py')
            print()
            print()

        return tree

The execute() method executes the transformed tree (such that we get the new definitions). We also make the dependency tracker available for the code in the with block.

In [190]:
class Slicer(Slicer):
    def execute(self, tree: AST, item: Any) -> None:
        """Compile and execute `tree`. May be extended in subclasses."""

        # We pass the source file of `item` such that we can retrieve it
        # when accessing the location of the new compiled code
        source = cast(str, inspect.getsourcefile(item))
        code = compile(cast(ast.Module, tree), source, 'exec')

        # Enable dependency tracker
        self.globals[DATA_TRACKER] = self.dependency_tracker

        # Execute the code, resulting in a redefinition of item
        exec(code, self.globals)
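
Compiling an AST and exec'ing it into a namespace is how the new definitions come to life. A stripped-down sketch of this mechanism (hypothetical names):

```python
import ast

source = "def double(n):\n    return 2 * n\n"
tree = ast.parse(source)

namespace = {}
code = compile(tree, "<sketch>", "exec")  # an ast.Module compiles directly
exec(code, namespace)                     # defines double() in `namespace`

assert namespace["double"](21) == 42
```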

The instrument() method puts all these together, first parsing the item into a tree, then transforming and executing the tree.

In [191]:
class Slicer(Slicer):
    def instrument(self, item: Any) -> Any:
        """Instrument `item`, transforming its source code, and re-defining it."""
        if is_internal(item.__name__):
            return item  # Do not instrument `print()` and the like

        if inspect.isbuiltin(item):
            return item  # No source code

        item = super().instrument(item)
        tree = self.parse(item)
        tree = self.transform(tree)
        self.execute(tree, item)

        new_item = self.globals[item.__name__]
        return new_item

When we restore the original definition (after the with block), we save the dependency tracker again.

In [192]:
class Slicer(Slicer):
    def restore(self) -> None:
        """Restore original code."""
        if DATA_TRACKER in self.globals:
            self.saved_dependencies = self.globals[DATA_TRACKER]
            del self.globals[DATA_TRACKER]

        super().restore()

Three convenience functions allow us to see the dependencies as (well) dependencies, as code, and as graph. These simply invoke the respective functions on the saved dependencies.

In [193]:
class Slicer(Slicer):
    def dependencies(self) -> Dependencies:
        """Return collected dependencies."""
        if self.saved_dependencies is None:
            return Dependencies({}, {})
        return self.saved_dependencies.dependencies()

    def code(self, *args: Any, **kwargs: Any) -> None:
        """Show code of instrumented items, annotated with dependencies."""
        first = True
        for item in self.instrumented_items:
            if not first:
                print()
            self.dependencies().code(item, *args, **kwargs)  # type: ignore
            first = False

    def graph(self, *args: Any, **kwargs: Any) -> Digraph:
        """Show dependency graph."""
        return self.dependencies().graph(*args, **kwargs)  # type: ignore

    def _repr_mimebundle_(self, include: Any = None, exclude: Any = None) -> Any:
        """If the object is output in Jupyter, render dependencies as a SVG graph"""
        return self.graph()._repr_mimebundle_(include, exclude)

Let us put Slicer into action. We track our middle() function:

In [194]:
with Slicer(middle) as slicer:
    m = middle(2, 1, 3)
m
Out[194]:
1

These are the dependencies in string form (used when printed):

In [195]:
print(slicer.dependencies())
middle():
    <test> (2) <= y (12), z (12)
    <test> (3) <= y (12), x (12); <- <test> (2)
    <test> (5) <= z (12), x (12); <- <test> (3)
    <middle() return value> (6) <= y (12); <- <test> (5)

This is the code form:

In [196]:
slicer.code()
     1 def middle(x, y, z):  # type: ignore
*    2     if y < z:  # <= y (12), z (12)
*    3         if x < y:  # <= y (12), x (12); <- <test> (2)
     4             return y
*    5         elif x < z:  # <= z (12), x (12); <- <test> (3)
*    6             return y  # <= y (12); <- <test> (5)
     7     else:
     8         if x > y:
     9             return y
    10         elif x > z:
    11             return x
*   12     return z

And this is the graph form:

In [197]:
slicer
Out[197]:
(dependency graph; image not shown)

You can also access the raw repr() form, which allows you to reconstruct dependencies at any time. (This is how we showed off dependencies at the beginning of this chapter, before even introducing the code that computes them.)

In [198]:
print(repr(slicer.dependencies()))
Dependencies(
    data={
        ('x', (middle, 12)): set(),
        ('y', (middle, 12)): set(),
        ('z', (middle, 12)): set(),
        ('<test>', (middle, 2)): {('y', (middle, 12)), ('z', (middle, 12))},
        ('<test>', (middle, 3)): {('y', (middle, 12)), ('x', (middle, 12))},
        ('<test>', (middle, 5)): {('z', (middle, 12)), ('x', (middle, 12))},
        ('<middle() return value>', (middle, 6)): {('y', (middle, 12))}},
 control={
        ('x', (middle, 12)): set(),
        ('y', (middle, 12)): set(),
        ('z', (middle, 12)): set(),
        ('<test>', (middle, 2)): set(),
        ('<test>', (middle, 3)): {('<test>', (middle, 2))},
        ('<test>', (middle, 5)): {('<test>', (middle, 3))},
        ('<middle() return value>', (middle, 6)): {('<test>', (middle, 5))}})

Diagnostics¶

The Slicer constructor accepts a log argument (default: False), which can be set to show various intermediate results:

  • log=True (or log=1): Show instrumented source code
  • log=2: Also log execution
  • log=3: Also log individual transformer steps
  • log=4: Also log source line numbers

More Examples¶

Let us demonstrate our Slicer class on a few more examples.

Square Root¶

The square_root() function from the chapter on assertions demonstrates a nice interplay between data and control dependencies.

Here is the original source code:

In [201]:
print_content(inspect.getsource(square_root), '.py')
def square_root(x):  # type: ignore
    assert x >= 0  # precondition

    approx = None
    guess = x / 2
    while approx != guess:
        approx = guess
        guess = (approx + x / approx) / 2

    assert math.isclose(approx * approx, x)
    return approx

Turning on logging shows the instrumented version:

In [202]:
with Slicer(square_root, log=True) as root_slicer:
    y = square_root(2.0)
Instrumenting <function square_root at 0x11d6c9bc0>
def square_root(x):
    _data.param('x', x, pos=1, last=True)
    assert _data.set('<assertion>', _data.get('x', x) >= 0, loads=(_data.get('x', x),))
    approx = _data.set('approx', None)
    guess = _data.set('guess', _data.get('x', x) / 2)
    while _data.test(_data.get('approx', approx) != _data.get('guess', guess)):
        with _data:
            approx = _data.set('approx', _data.get('guess', guess))
            guess = _data.set('guess', (_data.get('approx', approx) + _data.get('x', x) / _data.get('approx', approx)) / 2)
    assert _data.set('<assertion>', _data.ret(_data.call(_data.get('math', math).isclose)(_data.arg(_data.get('approx', approx) * _data.get('approx', approx), pos=1), _data.arg(_data.get('x', x), pos=2))), loads=(_data, _data.get('math', math), _data.get('x', x), _data.get('approx', approx)))
    return _data.set('<square_root() return value>', _data.get('approx', approx))


The dependency graph shows how guess and approx flow into each other until they are the same.

In [203]:
root_slicer
Out[203]:
(dependency graph; image not shown)

Again, we can show the code annotated with dependencies:

In [204]:
root_slicer.code()
    54 def square_root(x):  # type: ignore
*   55     assert x >= 0  # precondition  # <= x (64)
    56 
*   57     approx = None
*   58     guess = x / 2  # <= x (64)
*   59     while approx != guess:  # <= guess (61), guess (58), approx (60), approx (57)
*   60         approx = guess  # <= guess (61), guess (58); <- <test> (59)
*   61         guess = (approx + x / approx) / 2  # <= approx (60), x (64); <- <test> (59)
    62 
*   63     assert math.isclose(approx * approx, x)  # <= approx (60), x (64)
*   64     return approx  # <= approx (60)

The astute reader may find that a statement assert p does not control the following code, although it would be equivalent to if not p: raise Exception. Why is that?

In [205]:
quiz("Why don't `assert` statements induce control dependencies?",
     [
         "We have no special handling of `raise` statements",
         "We have no special handling of exceptions",
         "Assertions are not supposed to act as controlling mechanisms",
         "All of the above",
     ], '(1 * 1 << 1 * 1 << 1 * 1)')
Out[205]:

Quiz

Why don't assert statements induce control dependencies?

Indeed: we treat assertions as "neutral" in the sense that they do not affect the remainder of the program – if they are turned off, they have no effect; and if they are turned on, the remaining program logic should not depend on them. (Our instrumentation also has no special treatment of raise or even return statements; they should be handled by our with blocks, though.)
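
The equivalence hinted at above can be spelled out: with assertions enabled, assert p behaves like a conditional raise, which is why one could also regard it as a control structure:

```python
def checked(p):
    # `assert p` is (roughly) equivalent to:
    if not p:
        raise AssertionError
    return "passed"

assert checked(True) == "passed"   # like `assert True`: no effect

try:
    checked(False)                 # like `assert False`: raises
    outcome = "no exception"
except AssertionError:
    outcome = "assertion failed"

print(outcome)  # assertion failed
```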

In [206]:
# print(repr(root_slicer))

Removing HTML Markup¶

Let us now turn to our ongoing example, remove_html_markup(), and track its dependencies:

In [207]:
with Slicer(remove_html_markup) as rhm_slicer:
    s = remove_html_markup("<foo>bar</foo>")

The graph is as discussed in the introduction to this chapter:

In [208]:
rhm_slicer
Out[208]:
(dependency graph; image not shown)
In [209]:
# print(repr(rhm_slicer.dependencies()))
In [210]:
rhm_slicer.code()
   238 def remove_html_markup(s):  # type: ignore
*  239     tag = False
*  240     quote = False
*  241     out = ""
   242 
*  243     for c in s:  # <= s (255)
*  244         assert tag or not quote  # <= tag (239), quote (240), tag (247), tag (249)
   245 
*  246         if c == '<' and not quote:  # <= quote (240), c (243)
*  247             tag = True  # <- <test> (246)
*  248         elif c == '>' and not quote:  # <= quote (240), c (243); <- <test> (246)
*  249             tag = False  # <- <test> (248)
*  250         elif (c == '"' or c == "'") and tag:  # <= c (243); <- <test> (248)
   251             quote = not quote
*  252         elif not tag:  # <= tag (247), tag (249); <- <test> (250)
*  253             out = out + c  # <= c (243), out (241), out (253); <- <test> (252)
   254 
*  255     return out  # <= out (253)

We can also compute slices over the dependencies:

In [211]:
_, start_remove_html_markup = inspect.getsourcelines(remove_html_markup)
start_remove_html_markup
Out[211]:
238
In [212]:
slicing_criterion = ('tag', (remove_html_markup,
                             start_remove_html_markup + 9))
tag_deps = rhm_slicer.dependencies().backward_slice(slicing_criterion)  # type: ignore
tag_deps
Out[212]:
(backward slice graph; image not shown)
In [213]:
# repr(tag_deps)

Calls and Augmented Assign¶

Our last example covers augmented assigns and data flow across function calls. We introduce two simple functions add_to() and mul_with():

In [214]:
def add_to(n, m):  # type: ignore
    n += m
    return n
In [215]:
def mul_with(x, y):  # type: ignore
    x *= y
    return x

And we put these two together in a single call:

In [216]:
def test_math() -> None:
    return mul_with(1, add_to(2, 3))
In [217]:
with Slicer(add_to, mul_with, test_math) as math_slicer:
    test_math()

The resulting dependence graph nicely captures the data flow between these calls, notably arguments and parameters:

In [218]:
math_slicer
/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_94980/3262393251.py:21: UserWarning: Warning: y (mul_with:3) depends on <add_to() return value> (add_to:3), but 'return x' does not seem to have a call
  warnings.warn(f"Warning: {self.format_var(var)} "
Out[218]:
(Dependency graph for test_math(), add_to(), and mul_with())

These are also reflected in the code view:

In [219]:
math_slicer.code()
     1 def test_math() -> None:
*    2     return mul_with(1, add_to(2, 3))  # <= <mul_with() return value> (mul_with:3)

     1 def add_to(n, m):  # type: ignore
*    2     n += m  # <= n (3), m (3)
*    3     return n  # <= n (2)

     1 def mul_with(x, y):  # type: ignore
*    2     x *= y  # <= y (3), x (3)
*    3     return x  # <= <add_to() return value> (add_to:3), x (2)

Dynamic Instrumentation¶

When initializing Slicer(), one has to provide the set of functions to be instrumented. This is because the instrumentation has to take place before the code in the with block is executed. Can we determine this list on the fly – while Slicer() is executed?

The answer is: Yes, but the solution is a bit hackish – even more so than what we have seen above. In essence, we proceed in two steps:

  1. When DynamicSlicer.__init__() is called:
    • Use the inspect module to determine the source code of the call
    • Analyze the enclosed with block for function calls
    • Instrument these functions
  2. Whenever a function is about to be called (DataTracker.call())
    • Create an instrumented version of that function
    • Have the call() method return the instrumented function instead

Both these hacks are effective, as shown in the following example. We use the Slicer() constructor without arguments; it automatically identifies fun_2() as a function in the with block. As the instrumented fun_2() is invoked, its _data.call() method instruments the call to fun_1() (and ensures the instrumented version is called).

In [226]:
def fun_1(x: int) -> int:
    return x
In [227]:
def fun_2(x: int) -> int:
    return fun_1(x)
In [228]:
with Slicer(log=True) as slicer:
    fun_2(10)
Instrumenting <function fun_2 at 0x11d322840>
def fun_2(x: int) -> int:
    _data.param('x', x, pos=1, last=True)
    return _data.set('<fun_2() return value>', _data.ret(_data.call(_data.get('fun_1', fun_1))(_data.arg(_data.get('x', x), pos=1))))

Instrumenting <function fun_1 at 0x11d322160>
def fun_1(x: int) -> int:
    _data.param('x', x, pos=1, last=True)
    return _data.set('<fun_1() return value>', _data.get('x', x))

In [229]:
slicer
Out[229]:
(Dependency graph for fun_1() and fun_2())

More Applications¶

The main use of dynamic slices is for debugging tools, where they show the origins of individual values. However, beyond facilitating debugging, tracking information flows has a number of additional applications, some of which we briefly sketch here.

Verifying Information Flows¶

Using dynamic slices, we can check all the locations where (potentially sensitive) information is used. As an example, consider the following function password_checker(), which requests a password from the user and returns True if it is the correct one:

In [232]:
SECRET_HASH_DIGEST = '59f2da35bcc39525b87932b4cc1f3d68'
In [233]:
def password_checker() -> bool:
    """Request a password. Return True if correct."""
    secret_password = input("Enter secret password: ")
    password_digest = hashlib.md5(secret_password.encode('utf-8')).hexdigest()

    if password_digest == SECRET_HASH_DIGEST:
        return True
    else:
        return False

(Note that this is a very naive implementation: A true password checker would use the Python getpass module to read in a password without echoing it in the clear, and possibly also use a more sophisticated hash function than md5.)
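
A slightly hardened variant might look as follows. This is a sketch, not part of the chapter's code: the names check_password() and SECRET_SHA256_DIGEST are hypothetical, and for demonstration we derive the stored digest from a known password (a real system would store a salted digest and never the password itself).

```python
import getpass
import hashlib

# Hypothetical stored digest, derived here for demonstration only
SECRET_SHA256_DIGEST = hashlib.sha256(b'secret123').hexdigest()

def check_password(candidate: str) -> bool:
    """Return True if `candidate` hashes to the stored digest."""
    digest = hashlib.sha256(candidate.encode('utf-8')).hexdigest()
    return digest == SECRET_SHA256_DIGEST

def password_checker_hardened() -> bool:
    """Request a password without echoing it; return True if correct."""
    # getpass.getpass() reads input without echoing it in the clear
    return check_password(getpass.getpass("Enter secret password: "))
```
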

From a security perspective, the interesting question we can ask using slicing is: Is the entered password stored in the clear somewhere? For this, we can simply run our slicer to see where the inputs are going:

In [234]:
# ignore
next_inputs(['secret123'])
Out[234]:
['secret123']
In [235]:
with Slicer() as slicer:
    valid_pwd = password_checker()
Enter secret password: secret123
In [236]:
slicer
Out[236]:
(Dependency graph for password_checker())

We see that the password only flows into password_digest, where it is already hashed. If the password were flowing into some other function or variable, we would see this in our slice.

(Note that an attacker may still be able to find out which password was entered, for instance, by checking memory contents.)

In [237]:
# ignore
secret_answers = [
    'automated',
    'debugging',
    'is',
    'fun'
]

quiz("What is the secret password, actually?", 
     [f"`{repr(s)}`" for s in secret_answers],
     min([i + 1 for i, ans in enumerate(secret_answers) 
          if hashlib.md5(ans.encode('utf-8')).hexdigest() == 
              SECRET_HASH_DIGEST])
    )
Out[237]:

Quiz: What is the secret password, actually?
(interactive quiz; answer options are rendered as widgets)

Assessing Test Quality¶

Another interesting usage of dynamic slices is to assess test quality. With our square_root() function, we have seen that the included assertions thoroughly check both the arguments and the result for correctness:

In [238]:
# ignore
_, start_square_root = inspect.getsourcelines(square_root)
In [239]:
# ignore
print_content(inspect.getsource(square_root), '.py',
              start_line_number=start_square_root)
54  def square_root(x):  # type: ignore
55      assert x >= 0  # precondition
56  
57      approx = None
58      guess = x / 2
59      while approx != guess:
60          approx = guess
61          guess = (approx + x / approx) / 2
62  
63      assert math.isclose(approx * approx, x)
64      return approx

However, a lazy programmer could also omit these tests – or worse yet, include tests that always pass:

In [240]:
def square_root_unchecked(x):  # type: ignore
    assert True  # <-- new "precondition"

    approx = None
    guess = x / 2
    while approx != guess:
        approx = guess
        guess = (approx + x / approx) / 2

    assert True  # <-- new "postcondition"
    return approx

How can one check that the tests supplied actually are effective? This is a problem of "Who watches the watchmen" – we need to find a way to ensure that the tests do their job.

The "classical" way of testing tests is so-called mutation testing – that is, introducing artificial errors into the code to see whether the tests catch them. Mutation testing is effective: The above "weak" tests would not catch any change to the square_root() computation code, and hence would quickly be identified as ineffective. However, mutation testing is also costly, as the tests have to be run again and again for every small code mutation.
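
To illustrate, here is a (hypothetical) mutant of square_root() in which the Newton iteration has been deleted. With the "weak" assert True checks, the wrong result passes unnoticed; the original assertions detect the mutation immediately. Both function names below are ours, invented for this sketch.

```python
import math

def square_root_weak_mutant(x):  # type: ignore
    assert True                  # weak "precondition": always passes
    approx = x / 2               # MUTATION: Newton iteration deleted
    assert True                  # weak "postcondition": always passes
    return approx

def square_root_checked_mutant(x):  # type: ignore
    assert x >= 0                # precondition
    approx = x / 2               # same MUTATION
    assert math.isclose(approx * approx, x)  # postcondition catches it
    return approx

square_root_weak_mutant(9)       # returns 4.5 -- wrong, but no assertion fires
try:
    square_root_checked_mutant(9)
except AssertionError:
    print("Mutation detected by the postcondition")
```
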

Slices offer a cost-effective alternative for determining the quality of tests. The idea is that if there are statements in the code whose result does not flow into an assertion, then any errors in these statements will go unnoticed. Consequently, the larger the backward slice of an assertion, the higher its ability to catch errors.

We can easily validate this assumption using the two examples above. Here is the backward slice for the "full" postcondition in square_root(). We see that the entire computation code flows into the final postcondition:

In [241]:
postcondition_lineno = start_square_root + 9
postcondition_lineno
Out[241]:
63
In [242]:
with Slicer() as slicer:
    y = square_root(4)

slicer.dependencies().backward_slice((square_root, postcondition_lineno))
Out[242]:
(Backward slice for the square_root() postcondition)

In contrast, the "lazy" assertion in square_root_unchecked() has an empty backward slice, showing that it depends on no other value at all:

In [243]:
with Slicer() as slicer:
    y = square_root_unchecked(4)

slicer.dependencies().backward_slice((square_root, postcondition_lineno))
Out[243]:
(Empty backward slice for the "lazy" assertion)

In \cite{Schuler2011}, Schuler et al. tried out this technique and found their "checked coverage" to be a reliable indicator for the quality of the checks in tests. Using our dynamic slices, you may wish to try this out on Python code.
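
The metric itself is easy to sketch. Assume we already have the set of executed lines and the backward slices of all assertions; the function checked_coverage() and the concrete line sets below are hypothetical, loosely based on the square_root() listing above.

```python
def checked_coverage(executed_lines: set, assertion_slices: list) -> float:
    """Fraction of executed lines that flow into at least one assertion."""
    checked = set().union(*assertion_slices) if assertion_slices else set()
    return len(checked & executed_lines) / len(executed_lines)

# Hypothetical executed lines of square_root() (cf. the listing above)
executed = {54, 55, 57, 58, 59, 60, 61, 63, 64}

# Strong postcondition: (almost) all computation flows into an assertion
checked_coverage(executed, [{55}, {54, 57, 58, 59, 60, 61, 63}])  # high

# "Lazy" assert True checks: empty backward slices
checked_coverage(executed, [set(), set()])                        # 0.0
```
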

Use in Statistical Debugging¶

Collecting dynamic slices over several runs allows for correlating dependencies with other execution features, notably failures: "The program fails whenever the value of weekday comes from calendar()." We will revisit this idea in the chapter on statistical debugging.

Synopsis¶

This chapter provides a Slicer class to automatically determine and visualize dynamic flows and dependencies. When we say that a variable $x$ depends on a variable $y$ (and that $y$ flows into $x$), we distinguish two kinds of dependencies:

  • Data dependency: $x$ is assigned a value computed from $y$.
  • Control dependency: A statement involving $x$ is executed only because a condition involving $y$ was evaluated, influencing the execution path.

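As a minimal (made-up) example of both kinds, consider the function scale() below, which is not part of the chapter's code:

```python
def scale(y: int) -> int:
    x = y * 2      # data dependency: x is computed from y
    if y > 0:      # the <test> involves y
        x = x + 1  # control dependency: runs only because y > 0 held
    return x       # data dependency: the return value comes from x
```

Here, the returned value has a data dependency on the earlier assignments to x, while the statement x = x + 1 has a control dependency on the y > 0 test.
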
Such dependencies are crucial for debugging, as they allow determining the origins of individual values (and notably incorrect values).

To determine dynamic dependencies in a function func, use

with Slicer() as slicer:
    <Some call to func()>

and then slicer.graph() or slicer.code() to examine dependencies.

You can also explicitly specify the functions to be instrumented, as in

with Slicer(func, func_1, func_2) as slicer:
    <Some call to func()>

Here is an example. The demo() function computes some number from x:

In [244]:
def demo(x: int) -> int:
    z = x
    while x <= z <= 64:
        z *= 2
    return z

By using with Slicer(), we first instrument demo() and then execute it:

In [245]:
with Slicer() as slicer:
    demo(10)

After execution is complete, you can output slicer to visualize the dependencies and flows as a graph. Data dependencies are shown as black solid edges; control dependencies are shown as grey dashed edges. The arrows indicate influence: If $y$ depends on $x$ (and thus $x$ flows into $y$), then we have an arrow $x \rightarrow y$. We see how the parameter x flows into z, which is returned after some computation that is control dependent on a <test> involving z.

In [246]:
slicer
Out[246]:
(Dependency graph for demo())

An alternate representation is slicer.code(), annotating the instrumented source code with (backward) dependencies. Data dependencies are shown with <=, control dependencies with <-; locations (lines) are shown in parentheses.

In [247]:
slicer.code()
     1 def demo(x: int) -> int:
*    2     z = x  # <= x (5)
*    3     while x <= z <= 64:  # <= x (5), z (2), z (4)
*    4         z *= 2  # <= z (2), z (4); <- <test> (3)
*    5     return z  # <= z (4)

Dependencies can also be retrieved programmatically. The dependencies() method returns a Dependencies object encapsulating the dependency graph.

The method all_vars() returns all variables in the dependency graph. Each variable is encoded as a pair (name, location) where location is a pair (codename, lineno).

In [248]:
slicer.dependencies().all_vars()
Out[248]:
{('<demo() return value>', (<function __main__.demo(x: int) -> int>, 5)),
 ('<test>', (<function __main__.demo(x: int) -> int>, 3)),
 ('x', (<function __main__.demo(x: int) -> int>, 5)),
 ('z', (<function __main__.demo(x: int) -> int>, 2)),
 ('z', (<function __main__.demo(x: int) -> int>, 4))}

The code() and graph() methods can also be applied to dependencies. The method backward_slice(var) returns a backward slice for the given variable (again given as a pair (name, location)). To retrieve where z in Line 2 came from, use:

In [249]:
_, start_demo = inspect.getsourcelines(demo)
start_demo
Out[249]:
1
In [250]:
slicer.dependencies().backward_slice(('z', (demo, start_demo + 1))).graph()  # type: ignore
Out[250]:
(Backward slice for z in Line 2)

Here are the classes defined in this chapter. A Slicer instruments a program, using a DependencyTracker at run time to collect Dependencies.

In [251]:
# ignore
from ClassDiagram import display_class_hierarchy, class_tree
In [252]:
# ignore
assert class_tree(Slicer)[0][0] == Slicer
In [253]:
# ignore
display_class_hierarchy([Slicer, DependencyTracker, 
                         StackInspector, Dependencies],
                        abstract_classes=[
                            StackInspector,
                            Instrumenter
                        ],
                        public_methods=[
                            StackInspector.caller_frame,
                            StackInspector.caller_function,
                            StackInspector.caller_globals,
                            StackInspector.caller_locals,
                            StackInspector.caller_location,
                            StackInspector.search_frame,
                            StackInspector.search_func,
                            Instrumenter.__init__,
                            Instrumenter.__enter__,
                            Instrumenter.__exit__,
                            Instrumenter.instrument,
                            Slicer.__init__,
                            Slicer.code,
                            Slicer.dependencies,
                            Slicer.graph,
                            Slicer._repr_mimebundle_,
                            DataTracker.__init__,
                            DataTracker.__enter__,
                            DataTracker.__exit__,
                            DataTracker.arg,
                            DataTracker.augment,
                            DataTracker.call,
                            DataTracker.get,
                            DataTracker.param,
                            DataTracker.ret,
                            DataTracker.set,
                            DataTracker.test,
                            DataTracker.__repr__,
                            DependencyTracker.__init__,
                            DependencyTracker.__enter__,
                            DependencyTracker.__exit__,
                            DependencyTracker.arg,
                            # DependencyTracker.augment,
                            DependencyTracker.call,
                            DependencyTracker.get,
                            DependencyTracker.param,
                            DependencyTracker.ret,
                            DependencyTracker.set,
                            DependencyTracker.test,
                            DependencyTracker.__repr__,
                            Dependencies.__init__,
                            Dependencies.__repr__,
                            Dependencies.__str__,
                            Dependencies._repr_mimebundle_,
                            Dependencies.code,
                            Dependencies.graph,
                            Dependencies.backward_slice,
                            Dependencies.all_functions,
                            Dependencies.all_vars,
                        ],
                        project='debuggingbook')
Out[253]:
(Class hierarchy diagram: Slicer → Instrumenter → StackInspector; DependencyTracker → DataTracker → StackInspector; Dependencies → StackInspector; public, private, and overloaded methods are listed per class)

Things that do not Work¶

Our slicer (and especially the underlying dependency tracker) is still a proof of concept. A number of Python features are unsupported or only partially supported, and hardly tested:

  • Exceptions can lead to missing or erroneous dependencies. The code assumes that for every call(), there is a matching ret(); when exceptions break this, dependencies across function calls and arguments may be missing or be assigned incorrectly.
  • Multiple definitions on a single line as in x = y; x = 1 can lead to missing or erroneous dependencies. Our implementation assumes that there is one statement per line.
  • If-Expressions (y = 1 if x else 0) do not create control dependencies, as there are no statements to control. Neither do if clauses in comprehensions.
  • Asynchronous functions (async, await) are not tested.

In these cases, the instrumentation and the underlying dependency tracker may fail to identify control and/or data flows. The semantics of the code, however, should always stay unchanged.

Lessons Learned¶

  • To track the origin of some incorrect value, follow back its dependencies:
    • Data dependencies indicate where the value came from.
    • Control dependencies show why a statement was executed.
  • A slice is a subset of the code that could have influenced a specific value. It can be computed by transitively following all dependencies.
  • Instrument code to automatically determine and visualize dependencies.

Next Steps¶

In the next chapter, we will explore how to make use of multiple passing and failing executions.

Background¶

Slicing as computing a subset of a program by means of data and control dependencies was invented by Mark Weiser \cite{Weiser1981}. In his seminal work "Programmers use Slices when Debugging", \cite{Weiser1982}, Weiser demonstrated how such dependencies are crucial for systematic debugging:

When debugging unfamiliar programs programmers use program pieces called slices which are sets of statements related by their flow of data. The statements in a slice are not necessarily textually contiguous, but may be scattered through a program.

Weiser's slices (and dependencies) were determined statically from program code. Both Korel and Laski \cite{Korel1988} as well as Agrawal and Horgan \cite{Agrawal1990} introduced dynamic program slicing, building on dynamic dependencies, which would be more specific to a given (failing) run. (The Slicer we implement in this chapter is a dynamic slicer.) Tip \cite{Tip1995} gives a survey on program slicing techniques. Chen et al. \cite{Chen2014} describe and evaluate the first dynamic slicer for Python programs (which is independent of our implementation).

One exemplary application of program slices is the Whyline by Ko and Myers \cite{Ko2004}. The Whyline is a debugging interface for asking questions about program behavior. It allows querying interactively where a particular variable came from (a data dependency) and why or why not specific things took place (control dependencies).

In \cite{Soremekun2021}, Soremekun et al. evaluated the performance of slicing as a fault localization mechanism and found that following dependencies was one of the most successful strategies for determining fault locations. Notably, if programmers first examine at most the top five most suspicious locations from statistical debugging and then switch to dynamic slices, they will, on average, need to examine only 15% (12 lines) of the code.

Exercises¶

Exercise 1: Control Slices¶

Augment the Slicer class with two keyword arguments, include and exclude, each taking a list of functions to instrument or not to instrument, respectively. These can be helpful when using "automatic" instrumentation.

Exercise 2: Incremental Exploration¶

This is more of a programming project than a simple exercise. Rather than showing all dependencies as a whole, as we do, build a system that allows the user to explore dependencies interactively.

Exercise 3: Forward Slicing¶

Extend Dependencies with a variant of backward_slice() named forward_slice() that, instead of computing the dependencies that go into a location, computes the dependencies that go out of a location.

Exercise 4: Code with Forward Dependencies¶

Create a variant of Dependencies.code() that, for each statement s, instead of showing a "passive" view (which variables and locations influenced s?), shows an "active" view (which variables and locations were influenced by s?). For middle(), for instance, the first line should show which lines are influenced by x, y, and z, respectively. Use -> for control flows and => for data flows.

Exercise 5: Flow Assertions¶

In line with Verifying Flows at Runtime, above, implement a function assert_flow(target, source) that checks at runtime that the data flowing into target only comes from the variables named in source.

In [254]:
def assert_flow(target: Any, source: List[Any]) -> bool:
    """
    Raise an `AssertionError` if the dependencies of `target`
    are not equal to `source`.
    """
    ...
    return True

assert_flow() would be used in conjunction with Slicer() as follows:

In [255]:
def demo4() -> int:
    x = 25
    y = 26
    assert_flow(y, [x])  # ensures that `y` depends on `x` only
    return y
In [256]:
with Slicer() as slicer:
    demo4()

To check dependencies, have assert_flow() check the contents of the _data dependency collector as set up by the slicer.

Exercise 6: Checked Coverage¶

Implement checked coverage, as sketched in Assessing Test Quality above. For every assert statement encountered during a run, produce the number of statements it depends upon.