Tracking Failure Origins¶

The question of "Where does this value come from?" is fundamental for debugging. Which earlier variables could possibly have influenced the current erroneous state? And how did their values come to be?

When programmers read code during debugging, they scan it for potential origins of given values. This can be a tedious experience, notably if the origins are spread across multiple locations, possibly even in different modules. In this chapter, we thus investigate means to determine such origins automatically – by collecting data and control dependencies during program execution.

Prerequisites

  • You should have read the Introduction to Debugging.
  • To understand how to compute dependencies automatically (the second half of this chapter), you will need
    • advanced knowledge of Python semantics
    • knowledge on how to instrument and transform code
    • knowledge on how an interpreter works
In [5]:
# ignore
from typing import Set, List, Tuple, Any, Callable, Dict, Optional
from typing import Union, Type, Generator, cast

Dependencies¶

In the Introduction to debugging, we have seen how faults in a program state propagate to eventually become visible as failures. This induces a debugging strategy called tracking origins:

  1. We start with a single faulty state f – the failure.
  2. We determine f's origins – the parts of earlier states that could have caused the faulty state f.
  3. For each of these origins e, we determine whether they are faulty or not.
  4. For each of the faulty origins, we in turn determine their origins.
  5. If we find a part of the state that is faulty, yet has only correct origins, we have found the defect.

In all generality, a "part of the state" can be anything that can influence the program – some configuration setting, some database content, or the state of a device. Almost always, though, it is through individual variables that a part of the state manifests itself.

The good news is that variables do not take arbitrary values at arbitrary times – instead, they are set and accessed at precise moments in time, as determined by the program's semantics. This allows us to determine their origins by reading program code.

Let us assume you have a piece of code that reads as follows. The middle() function is supposed to return the "middle" number of three values x, y, and z – that is, the one number that neither is the minimum nor the maximum.

In [6]:
def middle(x, y, z):  # type: ignore
    if y < z:
        if x < y:
            return y
        elif x < z:
            return y
    else:
        if x > y:
            return y
        elif x > z:
            return x
    return z

In most cases, middle() runs just fine:

In [7]:
m = middle(1, 2, 3)
m
Out[7]:
2

In others, however, it returns the wrong value:

In [8]:
m = middle(2, 1, 3)
m
Out[8]:
1

This is a typical debugging situation: You see a value that is erroneous; and you want to find out where it came from.

  • In our case, we see that the erroneous value was returned from middle(), so we identify the five return statements in middle() that the value could have come from.
  • The value returned is the value of y, and none of x, y, and z is altered during the execution of middle(). Hence, it must be one of the three return y statements that is the origin of m. But which one?

For our small example, we can fire up an interactive debugger and simply step through the function; this reveals the conditions evaluated and the return statement executed.

In [10]:
# ignore
next_inputs(["step", "step", "step", "step", "quit"]);
In [11]:
with Debugger.Debugger():
    middle(2, 1, 3)
Calling middle(x = 2, y = 1, z = 3)
(debugger) step
2     if y < z:
(debugger) step
3         if x < y:
(debugger) step
5         elif x < z:
(debugger) step
6             return y
(debugger) quit

We now see that it was the second return statement that returned the incorrect value. But why was it executed in the first place? To find out, we can go back to the middle() source code and look at the conditions that caused this return y statement to be executed. Indeed, the conditions y < z, x < y, and finally x < z are origins of the returned value – and in turn have x, y, and z as origins.

In our above reasoning about origins, we have encountered two kinds of origins:

  • earlier data values (such as the value of y being returned) and
  • earlier control conditions (such as the if conditions governing the return y statement).

The later parts of the state that can be influenced by such origins are said to be dependent on these origins. Speaking of variables, a variable $x$ depends on the value of a variable $y$ (written as $x \leftarrow y$) if a change in $y$ could affect the value of $x$.

We distinguish two kinds of dependencies $x \leftarrow y$, aligned with the two kinds of origins as outlined above:

  • Data dependency: $x$ is assigned a value computed from $y$. In our example, m is data dependent on the return value of middle().
  • Control dependency: A statement involving $x$ is executed only because a condition involving $y$ was evaluated, influencing the execution path. In our example, the value returned by return y is control dependent on the several conditions along its path, which involve x, y, and z.

Let us examine these dependencies in more detail.

Data Dependencies¶

Here is an example of a data dependency in our middle() program. The value y returned by middle() comes from the value y as originally passed as argument. We use arrows $x \leftarrow y$ to indicate that a variable $x$ depends on an earlier variable $y$:

In [32]:
# ignore
middle_deps().backward_slice('<middle() return value>', mode='d')  # type: ignore
Out[32]:
[Dependency graph: y (Line 1, def middle(x, y, z)) → <middle() return value> (Line 6, return y)]

Here, we can see that the value y in the return statement is data dependent on the value of y as passed to middle(). An alternate interpretation of this graph is a data flow: The value of y in the upper node flows into the value of y in the lower node.

Since we consider the values of variables at specific locations in the program, such data dependencies can also be interpreted as dependencies between statements – the above return statement thus is data dependent on the initialization of y in the upper node.

Control Dependencies¶

Here is an example of a control dependency. The execution of the above return statement is controlled by the earlier test x < z. We use gray dashed lines to indicate control dependencies:

In [33]:
# ignore
middle_deps().backward_slice('<middle() return value>', mode='c', depth=1)  # type: ignore
Out[33]:
[Dependency graph: <test> (Line 5, elif x < z:) → <middle() return value> (Line 6, return y)]

This test in turn is controlled by earlier tests, so the full chain of control dependencies looks like this:

In [34]:
# ignore
middle_deps().backward_slice('<middle() return value>', mode='c')  # type: ignore
Out[34]:
[Dependency graph: <test> (Line 2, if y < z:) → <test> (Line 3, if x < y:) → <test> (Line 5, elif x < z:) → <middle() return value> (Line 6, return y)]

Dependency Graphs¶

The above <test> values (and their statements) are in turn also dependent on earlier data, namely the x, y, and z values as originally passed. We can draw all data and control dependencies in a single graph, called a program dependency graph:

In [35]:
# ignore
middle_deps()
Out[35]:
[Program dependency graph: y and z (Line 1) feed the test in Line 2 (if y < z:); x and y feed the test in Line 3 (if x < y:); x and z feed the test in Line 5 (elif x < z:); the tests control one another in that order; y (Line 1) and the test in Line 5 feed <middle() return value> (Line 6, return y)]

This graph now gives us an idea on how to proceed to track the origins of the middle() return value at the bottom. Its value can come from any of the origins – namely the initialization of y at the function call, or from the <test> that controls it. This test in turn depends on x and z and their associated statements, which we now can check one after the other.

Note that all these dependencies in the graph are dynamic dependencies – that is, they refer to statements actually evaluated in the run at hand, as well as the decisions made in that very run. There also are static dependency graphs coming from static analysis of the code; but for debugging, dynamic dependencies specific to the failing run are more useful.

Showing Dependencies with Code¶

While a graph gives us a representation of which data and control flows to track, integrating dependencies with the actual program code results in a compact representation that is easy to reason about.

The following listing shows such an integration. For each executed line (*), we see its data (<=) and control (<-) dependencies, listing the associated variables and line numbers. The comment

# <= y (1); <- <test> (5)

for Line 6, for instance, states that the return value is data dependent on the value of y in Line 1, and control dependent on the test in Line 5.

Again, one can easily follow these dependencies back to track where a value came from (data dependencies) and why a statement was executed (control dependencies).

In [42]:
# ignore
middle_deps().code()  # type: ignore
*    1 def middle(x, y, z):  # type: ignore
*    2     if y < z:  # <= z (1), y (1)
*    3         if x < y:  # <= x (1), y (1); <- <test> (2)
     4             return y
*    5         elif x < z:  # <= x (1), z (1); <- <test> (3)
*    6             return y  # <= y (1); <- <test> (5)
     7     else:
     8         if x > y:
     9             return y
    10         elif x > z:
    11             return x
    12     return z

One important aspect of dependencies is that they not only point to specific sources and causes of failures – they also rule out parts of the program and state as possible causes.

  • In the above code, Lines 8 and later have no influence on the output, simply because they were not executed.
  • Furthermore, we see that we can start our investigation with Line 6, because that is the last one executed.
  • The data dependencies tell us that no statement has interfered with the value of y between the function call and its return.
  • Hence, the error must be in the conditions or the final return statement.

With this in mind, recall that our original invocation was middle(2, 1, 3). Why and how is the above code wrong?

In [43]:
quiz("Which of the following `middle()` code lines should be fixed?",
    [
        "Line 2: `if y < z:`",
        "Line 3: `if x < y:`",
        "Line 5: `elif x < z:`",
        "Line 6: `return y`",
    ], '(1 ** 0 + 1 ** 1) ** (1 ** 2 + 1 ** 3)')
Out[43]:

Quiz

Which of the following middle() code lines should be fixed?

Indeed, from the controlling conditions, we see that y < z, x >= y, and x < z all hold. Hence, y <= x < z holds, and it is x, not y, that should be returned.
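
A possible fix is thus to change Line 6 to return x. Here is a hypothetical middle_fixed() variant with just that change applied; the chapter keeps working with the original, faulty middle().

def middle_fixed(x, y, z):  # type: ignore
    if y < z:
        if x < y:
            return y
        elif x < z:
            return x  # was: return y
    else:
        if x > y:
            return y
        elif x > z:
            return x
    return z

middle_fixed(2, 1, 3) now returns 2 – the actual middle value.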

Slices¶

Given a dependency graph for a particular variable, we can identify the subset of the program that could have influenced it – the so-called slice. In the above code listing, these code locations are highlighted with * characters. Only these locations are part of the slice.

Slices are central to debugging for two reasons:

  • First, they rule out those locations of the program that could not have an effect on the failure. Hence, these locations need not be investigated when searching for the defect. Nor do they need to be considered for a fix, as any change outside the program slice by construction cannot affect the failure.
  • Second, they bring together possible origins that may be scattered across the code. Many dependencies in program code are non-local, with references to functions, classes, and modules defined in other locations, files, or libraries. A slice brings together all those locations in a single whole.

Here is an example of a slice – this time for our well-known remove_html_markup() function from the introduction to debugging:

In [45]:
print_content(inspect.getsource(remove_html_markup), '.py')
def remove_html_markup(s):  # type: ignore
    tag = False
    quote = False
    out = ""

    for c in s:
        assert tag or not quote

        if c == '<' and not quote:
            tag = True
        elif c == '>' and not quote:
            tag = False
        elif (c == '"' or c == "'") and tag:
            quote = not quote
        elif not tag:
            out = out + c

    return out

When we invoke remove_html_markup() as follows...

In [46]:
remove_html_markup('<foo>bar</foo>')
Out[46]:
'bar'

... we obtain the following dependencies:

In [47]:
# ignore
def remove_html_markup_deps() -> Dependencies:
    return Dependencies({('s', (remove_html_markup, 136)): set(), ('tag', (remove_html_markup, 137)): set(), ('quote', (remove_html_markup, 138)): set(), ('out', (remove_html_markup, 139)): set(), ('c', (remove_html_markup, 141)): {('s', (remove_html_markup, 136))}, ('<test>', (remove_html_markup, 144)): {('quote', (remove_html_markup, 138)), ('c', (remove_html_markup, 141))}, ('tag', (remove_html_markup, 145)): set(), ('<test>', (remove_html_markup, 146)): {('quote', (remove_html_markup, 138)), ('c', (remove_html_markup, 141))}, ('<test>', (remove_html_markup, 148)): {('c', (remove_html_markup, 141))}, ('<test>', (remove_html_markup, 150)): {('tag', (remove_html_markup, 147)), ('tag', (remove_html_markup, 145))}, ('tag', (remove_html_markup, 147)): set(), ('out', (remove_html_markup, 151)): {('out', (remove_html_markup, 151)), ('c', (remove_html_markup, 141)), ('out', (remove_html_markup, 139))}, ('<remove_html_markup() return value>', (remove_html_markup, 153)): {('<test>', (remove_html_markup, 146)), ('out', (remove_html_markup, 151))}}, {('s', (remove_html_markup, 136)): set(), ('tag', (remove_html_markup, 137)): set(), ('quote', (remove_html_markup, 138)): set(), ('out', (remove_html_markup, 139)): set(), ('c', (remove_html_markup, 141)): set(), ('<test>', (remove_html_markup, 144)): set(), ('tag', (remove_html_markup, 145)): {('<test>', (remove_html_markup, 144))}, ('<test>', (remove_html_markup, 146)): {('<test>', (remove_html_markup, 144))}, ('<test>', (remove_html_markup, 148)): {('<test>', (remove_html_markup, 146))}, ('<test>', (remove_html_markup, 150)): {('<test>', (remove_html_markup, 148))}, ('tag', (remove_html_markup, 147)): {('<test>', (remove_html_markup, 146))}, ('out', (remove_html_markup, 151)): {('<test>', (remove_html_markup, 150))}, ('<remove_html_markup() return value>', (remove_html_markup, 153)): set()})
In [48]:
# ignore
remove_html_markup_deps().graph()
Out[48]:
[Dependency graph: s feeds c; c and quote feed the if/elif tests; the tests control one another as well as tag and out; out accumulates from out and c; out and the '>' test feed <remove_html_markup() return value>]

Again, we can read such a graph forward (starting from, say, s) or backward (starting from the return value). Starting forward, we see how the passed string s flows into the for loop, breaking s into individual characters c that are then checked on various occasions, before flowing into the out return value. We also see how the various if conditions are all influenced by c, tag, and quote.

In [49]:
quiz("Why does the first line `tag = False` not influence anything?",
    [
        "Because the input contains only tags",
        "Because `tag` is set to True with the first character",
        "Because `tag` is not read by any variable",
        "Because the input contains no tags",
    ], '(1 << 1 + 1 >> 1)')
Out[49]:

Quiz

Why does the first line tag = False not influence anything?

Which are the locations that set tag to True? To this end, we compute the slice of tag at tag = True:

In [50]:
# ignore
tag_deps = Dependencies({('tag', (remove_html_markup, 145)): set(), ('<test>', (remove_html_markup, 144)): {('quote', (remove_html_markup, 138)), ('c', (remove_html_markup, 141))}, ('quote', (remove_html_markup, 138)): set(), ('c', (remove_html_markup, 141)): {('s', (remove_html_markup, 136))}, ('s', (remove_html_markup, 136)): set()}, {('tag', (remove_html_markup, 145)): {('<test>', (remove_html_markup, 144))}, ('<test>', (remove_html_markup, 144)): set(), ('quote', (remove_html_markup, 138)): set(), ('c', (remove_html_markup, 141)): set(), ('s', (remove_html_markup, 136)): set()})
tag_deps
Out[50]:
[Dependency graph: s feeds c; c and quote feed <test> (if c == '<' and not quote:); the test controls tag = True]

We see where the value of tag comes from: from the characters c in s as well as quote, which all cause it to be set. Again, we can combine these dependencies and the listing in a single, compact view. Note, again, that there are no other locations in the code that could possibly have affected tag in our run.

In [51]:
# ignore
tag_deps.code()
   238 def remove_html_markup(s):  # type: ignore
   239     tag = False
   240     quote = False
   241     out = ""
   242 
   243     for c in s:
   244         assert tag or not quote
   245 
   246         if c == '<' and not quote:
   247             tag = True
   248         elif c == '>' and not quote:
   249             tag = False
   250         elif (c == '"' or c == "'") and tag:
   251             quote = not quote
   252         elif not tag:
   253             out = out + c
   254 
   255     return out
In [52]:
quiz("How does the slice of `tag = True` change "
     "for a different value of `s`?",
    [
        "Not at all",
        "If `s` contains a quote, the `quote` slice is included, too",
        "If `s` contains no HTML tag, the slice will be empty"
    ], '[1, 2, 3][1:]')
Out[52]:

Quiz

How does the slice of tag = True change for a different value of s?

Indeed, our dynamic slices reflect dependencies as they occurred within a single execution. As the execution changes, so do the dependencies.

Tracking Techniques¶

For the remainder of this chapter, let us investigate means to determine such dependencies automatically – by collecting them during program execution. The idea is that with a single Python call, we can collect the dependencies for some computation, and present them to programmers – as graphs or as code annotations, as shown above.

To track dependencies, for every variable, we need to keep track of its origins – where it obtained its value, and which tests controlled its assignments. There are two ways to do so:

  • Wrapping Data Objects
  • Wrapping Data Accesses

Wrapping Data Objects¶

One way to track origins is to wrap each value in a class that stores both a value and the origin of the value. If a variable x is initialized to zero in Line 3, for instance, we could store it as

x = (value=0, origin=<Line 3>)

and if it is copied in, say, Line 5 to another variable y, we could store this as

y = (value=0, origin=<Line 3, Line 5>)

Such a scheme would allow us to track origins and dependencies right within the variable.
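
As a rough sketch of this idea, such records could look as follows. (The names TrackedValue, value, and origin are hypothetical and chosen only for illustration; this is not the representation used later in this chapter.)

from dataclasses import dataclass, field
from typing import Any, List

@dataclass
class TrackedValue:
    """A value bundled with the locations it passed through (illustration only)."""
    value: Any
    origin: List[str] = field(default_factory=list)

x = TrackedValue(0, ["Line 3"])                   # x = 0  (Line 3)
y = TrackedValue(x.value, x.origin + ["Line 5"])  # y = x  (Line 5)
y   # TrackedValue(value=0, origin=['Line 3', 'Line 5'])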

In a language like Python, it is actually possible to subclass from basic types. Here's how we create a MyInt subclass of int:

In [53]:
class MyInt(int):
    def __new__(cls: Type, value: Any, *args: Any, **kwargs: Any) -> Any:
        return super(cls, cls).__new__(cls, value)

    def __repr__(self) -> str:
        return f"{int(self)}"
In [54]:
n: MyInt = MyInt(5)

We can access n just like any integer:

In [55]:
n, n + 1
Out[55]:
(5, 6)

However, we can also add extra attributes to it:

In [56]:
n.origin = "Line 5"  # type: ignore
In [57]:
n.origin  # type: ignore
Out[57]:
'Line 5'

Such a "wrapping" scheme has the advantage of leaving program code untouched – simply pass "wrapped" objects instead of the original values. However, it also has a number of drawbacks.

  • First, we must make sure that the "wrapper" objects remain compatible with the original values – notably by converting them back whenever needed. (What happens if an internal Python function expects an int and gets a MyInt instead?)
  • Second, we have to make sure that origins do not get lost during computations – which involves overloading operators such as +, -, *, and so on; see the sketch after this list. (Right now, MyInt(1) + 1 gives us an int object, not a MyInt.)
  • Third, we have to do this for all data types of a language, which is pretty tedious.
  • Fourth and last, we want to track whenever a value is assigned to another variable; Python offers no hook for plain assignments, so our dependencies would necessarily be incomplete.
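
To illustrate the second drawback (and a possible remedy), here is a hypothetical OriginInt sketch that overloads + to propagate origins. Only + is handled; a complete solution would have to cover all operators and all types, which is exactly the tedium described above.

from typing import Any, Type

class OriginInt(int):
    """A hypothetical int that carries a set of origins along (sketch only)."""

    def __new__(cls: Type, value: Any, origins: Any = frozenset()) -> 'OriginInt':
        obj = super().__new__(cls, value)
        obj.origins = set(origins)
        return obj

    def __add__(self, other: Any) -> 'OriginInt':
        # Combine the origins of both operands into the result
        other_origins = getattr(other, 'origins', set())
        return OriginInt(int(self) + int(other), self.origins | other_origins)

x = OriginInt(0, {'Line 3'})
y = x + 1       # a plain MyInt would yield an int here and lose the origin
y, y.origins    # (1, {'Line 3'})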

Wrapping Data Accesses¶

An alternate way of tracking origins is to instrument the source code such that all data read and write operations are tracked. That is, the original data stays unchanged, but we change the code instead.

In essence, for every occurrence of a variable x being read, we replace it with

_data.get('x', x)  # returns x

and for every occurrence of a value being written to x, we replace the value with

_data.set('x', value)  # returns value

and let the _data object track these reads and writes.

Hence, an assignment such as

a = b + c

would get rewritten to

a = _data.set('a', _data.get('b', b) + _data.get('c', c))

and with every access to _data, we would track

  1. the current location in the code, and
  2. whether the respective variable was read or written.

For the above statement, we could deduce that b and c were read, and a was written – which makes a data dependent on b and c.
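
To see this deduction at work, here is a minimal, hypothetical stand-in for _data that merely records which names are read and written, while passing all values through. (The actual DataTracker class is introduced below.)

class MiniData:
    """A hypothetical minimal tracker, used only for this small demo."""

    def __init__(self) -> None:
        self.reads: List[str] = []
        self.writes: List[str] = []

    def get(self, name: str, value: Any) -> Any:
        self.reads.append(name)
        return value   # pass the value through unchanged

    def set(self, name: str, value: Any) -> Any:
        self.writes.append(name)
        return value   # pass the value through unchanged

_data = MiniData()
b, c = 2, 3
a = _data.set('a', _data.get('b', b) + _data.get('c', c))
a, _data.reads, _data.writes   # (5, ['b', 'c'], ['a'])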

The advantage of such instrumentation is that it works with arbitrary objects (in Python, that is) – we do not care whether a, b, and c would be integers, floats, strings, lists or any other type for which + would be defined. Also, the code semantics remain entirely unchanged.

The disadvantage, however, is that it takes a bit of effort to correctly separate reads from writes, and that a number of language features have to be handled separately. This is what we do in the remainder of this chapter.

A Data Tracker¶

To implement _data accesses as shown above, we introduce the DataTracker class. As its name suggests, it keeps track of variables being read and written, and provides methods to determine the code location where this took place.

In [58]:
class DataTracker(StackInspector):
    """Track data accesses during execution"""

    def __init__(self, log: bool = False) -> None:
        """Constructor. If `log` is set, turn on logging."""
        self.log = log

set() is invoked when a variable is set, as in

pi = _data.set('pi', 3.1415)

By default, we simply log the access using name and value. (loads will be used later.)

In [59]:
class DataTracker(DataTracker):
    def set(self, name: str, value: Any, loads: Optional[Set[str]] = None) -> Any:
        """Track setting `name` to `value`."""
        if self.log:
            caller_func, lineno = self.caller_location()
            print(f"{caller_func.__name__}:{lineno}: setting {name}")

        return value

get() is invoked when a variable is retrieved, as in

print(_data.get('pi', pi))

By default, we simply log the access.

In [60]:
class DataTracker(DataTracker):
    def get(self, name: str, value: Any) -> Any:
        """Track getting `value` from `name`."""

        if self.log:
            caller_func, lineno = self.caller_location()
            print(f"{caller_func.__name__}:{lineno}: getting {name}")

        return value

Here's an example of a logging DataTracker:

In [61]:
_test_data = DataTracker(log=True)
x = _test_data.set('x', 1)
<cell line: 2>:2: setting x
In [62]:
_test_data.get('x', x)
<cell line: 1>:1: getting x
Out[62]:
1

Instrumenting Source Code¶

How do we transform source code such that read and write accesses to variables are automatically rewritten? To this end, we inspect the internal representation of source code, namely the abstract syntax tree (AST). An AST represents the code as a tree, with specific node types for each syntactical element.
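
As a quick way to peek at such a tree, we can parse a tiny snippet and print its structure with ast.dump(); note that the indent argument requires Python 3.9 or later.

import ast

tree = ast.parse("x = y + 1")
print(ast.dump(tree, indent=4))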

Here is the tree representation for our middle() function. It starts with a FunctionDef node at the top (with the name "middle" and the three arguments x, y, z as children), followed by a subtree for each of the If statements, each of which contains a branch for when its condition evaluates to True and a branch for when it evaluates to False.

In [65]:
middle_tree = ast.parse(inspect.getsource(middle))
show_ast(middle_tree)
[AST visualization: a FunctionDef node "middle" with arguments x, y, z; nested If nodes with Compare conditions (y < z, x < y, x < z, x > y, x > z); and Return nodes for y, x, and z, with Name/Load leaves for the variables]

At the very bottom of the tree, you can see a number of Name nodes, referring to individual variables. These are the ones we want to transform.

Tracking Variable Accesses¶

Our goal is to traverse the tree, identify all Name nodes, and convert them to respective _data accesses. To this end, we manipulate the AST through the Python ast module. The official Python ast reference is complete, but a bit brief; the documentation "Green Tree Snakes - the missing Python AST docs" provides an excellent introduction.

The Python ast module provides a class NodeTransformer that allows such transformations. By subclassing from it, we provide a method visit_Name() that will be invoked for all Name nodes – and replaces each of them with a new subtree obtained from make_get_data():

In [68]:
DATA_TRACKER = '_data'
In [69]:
def is_internal(id: str) -> bool:
    """Return True if `id` is a built-in function or type"""
    return (id in dir(__builtins__) or id in dir(typing))
In [70]:
assert is_internal('int')
assert is_internal('None')
assert is_internal('Tuple')
In [71]:
class TrackGetTransformer(NodeTransformer):
    def visit_Name(self, node: Name) -> AST:
        self.generic_visit(node)

        if is_internal(node.id):
            # Do not change built-in names and types
            return node

        if node.id == DATA_TRACKER:
            # Do not change own accesses
            return node

        if not isinstance(node.ctx, Load):
            # Only change loads (not stores, not deletions)
            return node

        new_node = make_get_data(node.id)
        ast.copy_location(new_node, node)
        return new_node

Our function make_get_data(id, method) returns a new subtree equivalent to the Python code _data.method('id', id).

In [73]:
# Starting with Python 3.8, `Num`, `Str`, and `NameConstant` are deprecated
# in favor of `Constant`. We keep using `ast.Str` below for compatibility
# with older Python versions.
In [74]:
def make_get_data(id: str, method: str = 'get') -> Call:
    return Call(func=Attribute(value=Name(id=DATA_TRACKER, ctx=Load()), 
                               attr=method, ctx=Load()),
                args=[ast.Str(s=id), Name(id=id, ctx=Load())],
                keywords=[])

This is the tree that make_get_data() produces:

In [75]:
show_ast(Module(body=[make_get_data("x")]))
[AST visualization: a Call node with an Attribute child (Name "_data", attr "get"), a Constant "x", and a Name "x" (Load)]

How do we know that this is a correct subtree? We can carefully read the official Python ast reference and then proceed by trial and error (and apply delta debugging to determine error causes). Or – pro tip! – we can simply take a piece of Python code, parse it and use ast.dump() to print out how to construct the resulting AST:

In [76]:
print(ast.dump(ast.parse("_data.get('x', x)")))
Module(body=[Expr(value=Call(func=Attribute(value=Name(id='_data', ctx=Load()), attr='get', ctx=Load()), args=[Constant(value='x'), Name(id='x', ctx=Load())], keywords=[]))], type_ignores=[])

If you compare the above output with the code of make_get_data(), above, you will find out where the source of make_get_data() comes from.

Let us put TrackGetTransformer into action. Its visit() method calls visit_Name(), which in turn transforms the Name nodes as we want. This happens in place.

In [77]:
TrackGetTransformer().visit(middle_tree);

To see the effect of our transformations, we introduce a method dump_tree() which outputs the tree – and also compiles it to check for any inconsistencies.

In [78]:
def dump_tree(tree: AST) -> None:
    print_content(ast.unparse(tree), '.py')
    ast.fix_missing_locations(tree)  # Must run this before compiling
    _ = compile(cast(ast.Module, tree), '<dump_tree>', 'exec')

We see that our transformer has properly replaced all variable accesses:

In [79]:
dump_tree(middle_tree)
def middle(x, y, z):
    if _data.get('y', y) < _data.get('z', z):
        if _data.get('x', x) < _data.get('y', y):
            return _data.get('y', y)
        elif _data.get('x', x) < _data.get('z', z):
            return _data.get('y', y)
    elif _data.get('x', x) > _data.get('y', y):
        return _data.get('y', y)
    elif _data.get('x', x) > _data.get('z', z):
        return _data.get('x', x)
    return _data.get('z', z)

Let us now execute this code together with the DataTracker() class we previously introduced. The class DataTrackerTester() takes a (transformed) tree and a function. Using it as

with DataTrackerTester(tree, func):
    func(...)

first executes the code in tree (possibly instrumenting func) and then the with body. At the end, func is restored to its previous (non-instrumented) version.

In [81]:
class DataTrackerTester:
    def __init__(self, tree: AST, func: Callable, log: bool = True) -> None:
        """Constructor. Execute the code in `tree` while instrumenting `func`."""
        # We pass the source file of `func` such that we can retrieve it
        # when accessing the location of the new compiled code
        source = cast(str, inspect.getsourcefile(func))
        self.code = compile(cast(ast.Module, tree), source, 'exec')
        self.func = func
        self.log = log

    def make_data_tracker(self) -> Any:
        return DataTracker(log=self.log)

    def __enter__(self) -> Any:
        """Rewrite function"""
        tracker = self.make_data_tracker()
        globals()[DATA_TRACKER] = tracker
        exec(self.code, globals())
        return tracker

    def __exit__(self, exc_type: Type, exc_value: BaseException,
                 traceback: TracebackType) -> Optional[bool]:
        """Restore function"""
        globals()[self.func.__name__] = self.func
        del globals()[DATA_TRACKER]
        return None

Here is our middle() function:

In [82]:
print_content(inspect.getsource(middle), '.py', start_line_number=1)
 1  def middle(x, y, z):  # type: ignore
 2      if y < z:
 3          if x < y:
 4              return y
 5          elif x < z:
 6              return y
 7      else:
 8          if x > y:
 9              return y
10          elif x > z:
11              return x
12      return z

And here is our instrumented middle_tree executed with a DataTracker object. We see how the middle() tests access one argument after another.

In [83]:
with DataTrackerTester(middle_tree, middle):
    middle(2, 1, 3)
middle:2: getting y
middle:2: getting z
middle:3: getting x
middle:3: getting y
middle:5: getting x
middle:5: getting z
middle:6: getting y

After DataTrackerTester is done, middle is reverted to its non-instrumented version:

In [84]:
middle(2, 1, 3)
Out[84]:
1

For a complete picture of what happens during executions, we implement a number of additional code transformers.

For each assignment statement x = y, we change it to x = _data.set('x', y). This allows us to track assignments.

Each return statement return x is transformed to return _data.set('<return_value>', x). This allows us to track return values.
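
Neither transformer is shown here; as a rough idea of what the assignment case involves, here is a simplified, hypothetical SimpleSetTransformer that handles only plain x = ... assignments. The chapter's actual TrackSetTransformer covers more cases, and return statements are handled analogously.

import ast
from ast import NodeTransformer, AST, Name, Call, Attribute, Load

class SimpleSetTransformer(NodeTransformer):
    """Simplified sketch: wrap plain `x = ...` assignments in `_data.set()`."""

    def visit_Assign(self, node: ast.Assign) -> AST:
        self.generic_visit(node)  # transform children first
        if len(node.targets) == 1 and isinstance(node.targets[0], Name):
            name = node.targets[0].id
            node.value = Call(
                func=Attribute(value=Name(id=DATA_TRACKER, ctx=Load()),
                               attr='set', ctx=Load()),
                args=[ast.Constant(value=name), node.value],
                keywords=[])
        return node

tree = ast.parse("x = y + z")
SimpleSetTransformer().visit(tree)
ast.fix_missing_locations(tree)
print(ast.unparse(tree))  # x = _data.set('x', y + z)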

To track control dependencies, for every block controlled by an if, while, or for:

  1. We wrap their tests in a _data.test() wrapper. This allows us to assign pseudo-variables like <test> which hold the conditions.
  2. We wrap their controlled blocks in a with statement. This allows us to track the variables read right before the with (= the controlling variables), and to restore the current controlling variables when the block is left.

A statement

if cond:
    body

thus becomes

if _data.test(cond):
    with _data:
        body
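
A minimal sketch of the matching test(), __enter__(), and __exit__() hooks, assuming they merely log and pass values through (consistent with the trace shown further below), could look like this. We use a hypothetical DataTrackerSketch subclass here; the actual DataTracker provides more functionality.

class DataTrackerSketch(DataTracker):
    def test(self, cond: Any) -> Any:
        """Track a condition test; return `cond` unchanged."""
        if self.log:
            caller_func, lineno = self.caller_location()
            print(f"{caller_func.__name__}:{lineno}: testing condition")
        return cond

    def __enter__(self) -> Any:
        """Track entering a controlled block."""
        if self.log:
            caller_func, lineno = self.caller_location()
            print(f"{caller_func.__name__}:{lineno}: entering block")
        return self

    def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> Optional[bool]:
        """Track exiting a controlled block."""
        if self.log:
            caller_func, lineno = self.caller_location()
            print(f"{caller_func.__name__}:{lineno}: exiting block")
        return None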

We also want to be able to track calls across multiple functions. To this end, we wrap each call

func(arg1, arg2, ...)

into

_data.ret(_data.call(func)(_data.arg(arg1), _data.arg(arg2), ...))

each of which simply passes through its given argument, but allows us to track the beginning of the call (call()), the computation of its arguments (arg()), and the return of the call (ret()), respectively.

On the receiving end, for each function argument x, we insert a call _data.param('x', x, [position info]) to initialize x. This is useful for tracking parameters across function calls.
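
Assuming these hooks are simple pass-throughs at the data-tracking level, a minimal sketch (again extending the hypothetical DataTrackerSketch from above; the actual methods also feed the dependency tracking) could look like this:

class DataTrackerSketch(DataTrackerSketch):
    def call(self, func: Callable) -> Callable:
        """Track the beginning of a call; return `func` unchanged."""
        return func

    def arg(self, value: Any, *args: Any, **kwargs: Any) -> Any:
        """Track the computation of an argument; return `value` unchanged."""
        return value

    def ret(self, value: Any) -> Any:
        """Track the return of a call; return `value` unchanged."""
        return value

    def param(self, name: str, value: Any,
              pos: Optional[int] = None, last: bool = False) -> Any:
        """Track parameter `name` being initialized to `value`."""
        if self.log:
            caller_func, lineno = self.caller_location()
            print(f"{caller_func.__name__}:{lineno}: initializing {name} #{pos}")
        return self.set(name, value)  # a parameter initialization is also a write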

What do we obtain after we have applied all these transformers to middle()? We see that the code now contains quite a lot of instrumentation.

In [140]:
dump_tree(middle_tree)
def middle(x, y, z):
    _data.param('x', x, pos=1)
    _data.param('y', y, pos=2)
    _data.param('z', z, pos=3, last=True)
    if _data.test(_data.get('y', y) < _data.get('z', z)):
        with _data:
            if _data.test(_data.get('x', x) < _data.get('y', y)):
                with _data:
                    return _data.set('<middle() return value>', _data.get('y', y))
            else:
                with _data:
                    if _data.test(_data.get('x', x) < _data.get('z', z)):
                        with _data:
                            return _data.set('<middle() return value>', _data.get('y', y))
    else:
        with _data:
            if _data.test(_data.get('x', x) > _data.get('y', y)):
                with _data:
                    return _data.set('<middle() return value>', _data.get('y', y))
            else:
                with _data:
                    if _data.test(_data.get('x', x) > _data.get('z', z)):
                        with _data:
                            return _data.set('<middle() return value>', _data.get('x', x))
    return _data.set('<middle() return value>', _data.get('z', z))

And when we execute this code, we see that we can track quite a number of events, while the code semantics stay unchanged.

In [141]:
with DataTrackerTester(middle_tree, middle):
    m = middle(2, 1, 3)
m
middle:1: initializing x #1
middle:1: setting x
middle:1: initializing y #2
middle:1: setting y
middle:1: initializing z #3
middle:1: setting z
middle:2: getting y
middle:2: getting z
middle:2: testing condition
middle:3: entering block
middle:3: getting x
middle:3: getting y
middle:3: testing condition
middle:5: entering block
middle:5: getting x
middle:5: getting z
middle:5: testing condition
middle:6: entering block
middle:6: getting y
middle:6: setting <middle() return value>
middle:6: exiting block
middle:5: exiting block
middle:3: exiting block
Out[141]:
1

Our next step is not only to log these events, but to actually construct dependencies from them.

Tracking Dependencies¶

To construct dependencies from variable accesses, we subclass DataTracker into DependencyTracker – a class that actually keeps track of all these dependencies. Its constructor initializes a number of variables we will discuss below.

In [144]:
class DependencyTracker(DataTracker):
    """Track dependencies during execution"""

    def __init__(self, *args: Any, **kwargs: Any) -> None:
        """Constructor. Arguments are passed to DataTracker.__init__()"""
        super().__init__(*args, **kwargs)

        self.origins: Dict[str, Location] = {}  # Where current variables were last set
        self.data_dependencies: Dependency = {}  # As with Dependencies, above
        self.control_dependencies: Dependency = {}

        self.last_read: List[str] = []  # List of last read variables
        self.last_checked_location = (StackInspector.unknown, 1)
        self._ignore_location_change = False

        self.data: List[List[str]] = [[]]  # Data stack
        self.control: List[List[str]] = [[]]  # Control stack

        self.frames: List[Dict[Union[int, str], Any]] = [{}]  # Argument stack
        self.args: Dict[Union[int, str], Any] = {}  # Current args

Data Dependencies¶

The first job of our DependencyTracker is to construct dependencies between read and written variables.

Reading Variables¶

As in DataTracker, the key method of DependencyTracker again is get(), invoked as _data.get('x', x) whenever a variable x is read. First and foremost, it appends the name of the read variable to the list last_read.

In [145]:
class DependencyTracker(DependencyTracker):
    def get(self, name: str, value: Any) -> Any:
        """Track a read access for variable `name` with value `value`"""
        self.check_location()
        self.last_read.append(name)
        return super().get(name, value)

    def check_location(self) -> None:
        pass  # More on that below
In [146]:
x = 5
y = 3
In [147]:
_test_data = DependencyTracker(log=True)
_test_data.get('x', x) + _test_data.get('y', y)
<cell line: 2>:2: getting x
<cell line: 2>:2: getting y
Out[147]:
8
In [148]:
_test_data.last_read
Out[148]:
['x', 'y']

Checking Locations¶

However, before appending the read variable to last_read, _data.get() does one more thing. By invoking check_location(), it clears the last_read list if we have reached a new line in the execution. This avoids situations such as

x
y
z = a + b

where x and y are, well, read, but do not affect the last line. Therefore, with every new line, the list of last read variables is cleared.

In [149]:
class DependencyTracker(DependencyTracker):
    def clear_read(self) -> None:
        """Clear set of read variables"""
        if self.log:
            direct_caller = inspect.currentframe().f_back.f_code.co_name  # type: ignore
            caller_func, lineno = self.caller_location()
            print(f"{caller_func.__name__}:{lineno}: "
                  f"clearing read variables {self.last_read} "
                  f"(from {direct_caller})")

        self.last_read = []

    def check_location(self) -> None:
        """If we are in a new location, clear set of read variables"""
        location = self.caller_location()
        func, lineno = location
        last_func, last_lineno = self.last_checked_location

        if self.last_checked_location != location:
            if self._ignore_location_change:
                self._ignore_location_change = False
            elif func.__name__.startswith('<'):
                # Entering list comprehension, eval(), exec(), ...
                pass
            elif last_func.__name__.startswith('<'):
                # Exiting list comprehension, eval(), exec(), ...
                pass
            else:
                # Standard case
                self.clear_read()

        self.last_checked_location = location

Two methods can suppress this reset of the last_read list:

  • ignore_next_location_change() suppresses the reset for the next line. This is useful when returning from a function, when the return value is still in the list of "read" variables.
  • ignore_location_change() suppresses the reset for the current line. This is useful if we have already returned from a function call.
In [150]:
class DependencyTracker(DependencyTracker):
    def ignore_next_location_change(self) -> None:
        self._ignore_location_change = True

    def ignore_location_change(self) -> None:
        self.last_checked_location = self.caller_location()

Watch how DependencyTracker maintains last_read across executions. (Note that in a notebook, each cell runs as a pseudo-function whose name starts with <, so check_location() does not clear the list between cells; within an instrumented function, reaching a new line does clear it.)

In [151]:
_test_data = DependencyTracker()
In [152]:
_test_data.get('x', x) + _test_data.get('y', y)
Out[152]:
8
In [153]:
_test_data.last_read
Out[153]:
['x', 'y']
In [154]:
a = 42
b = -1
_test_data.get('a', a) + _test_data.get('b', b)
Out[154]:
41
In [155]:
_test_data.last_read
Out[155]:
['x', 'y', 'a', 'b']

Setting Variables¶

The method set() creates dependencies. It is invoked as _data.set('x', value) whenever a variable x is set.

First and foremost, it takes the list last_read of variables read, and for each variable $v$ in it, takes its origin $o$ (the place where it was last set) and appends the pair ($v$, $o$) to the list of data dependencies. It then does a similar thing with control dependencies (more on these below), and finally records in self.origins the current location as the place where name was last set.

In [157]:
class DependencyTracker(DependencyTracker):
    TEST = '<test>'  # Name of pseudo-variables for testing conditions

    def set(self, name: str, value: Any, loads: Optional[Set[str]] = None) -> Any:
        """Add a dependency for `name` = `value`"""

        def add_dependencies(dependencies: Set[Node], 
                             vars_read: List[str], tp: str) -> None:
            """Add origins of `vars_read` to `dependencies`."""
            for var_read in vars_read:
                if var_read in self.origins:
                    if var_read == self.TEST and tp == "data":
                        # Can't have data dependencies on conditions
                        continue

                    origin = self.origins[var_read]
                    dependencies.add((var_read, origin))

                    if self.log:
                        origin_func, origin_lineno = origin
                        caller_func, lineno = self.caller_location()
                        print(f"{caller_func.__name__}:{lineno}: "
                              f"new {tp} dependency: "
                              f"{name} <= {var_read} "
                              f"({origin_func.__name__}:{origin_lineno})")

        self.check_location()
        ret = super().set(name, value)
        location = self.caller_location()

        add_dependencies(self.data_dependencies.setdefault
                         ((name, location), set()),
                         self.last_read, tp="data")
        add_dependencies(self.control_dependencies.setdefault
                         ((name, location), set()),
                         cast(List[str], itertools.chain.from_iterable(self.control)),
                         tp="control")

        self.origins[name] = location

        # Reset read info for next line
        self.last_read = [name]

        # Next line is a new location
        self._ignore_location_change = False

        return ret

    def dependencies(self) -> Dependencies:
        """Return dependencies"""
        return Dependencies(self.data_dependencies,
                            self.control_dependencies)

Let us illustrate set() by example. Here's a set of variables read and written:

In [158]:
_test_data = DependencyTracker()
x = _test_data.set('x', 1)
y = _test_data.set('y', _test_data.get('x', x))
z = _test_data.set('z', _test_data.get('x', x) + _test_data.get('y', y))

The attribute origins saves for each variable where it was last written:

In [159]:
_test_data.origins
Out[159]:
{'x': (<function __main__.<cell line: 2>()>, 2),
 'y': (<function __main__.<cell line: 3>()>, 3),
 'z': (<function __main__.<cell line: 4>()>, 4)}

The attribute data_dependencies tracks for each variable the variables its value was computed from:

In [160]:
_test_data.data_dependencies
Out[160]:
{('x', (<function __main__.<cell line: 2>()>, 2)): set(),
 ('y',
  (<function __main__.<cell line: 3>()>, 3)): {('x',
   (<function __main__.<cell line: 2>()>, 2))},
 ('z',
  (<function __main__.<cell line: 4>()>, 4)): {('x',
   (<function __main__.<cell line: 2>()>, 2)), ('y',
   (<function __main__.<cell line: 3>()>, 3))}}

Hence, the above code already gives us a small dependency graph:

In [161]:
# ignore
_test_data.dependencies().graph()
Out[161]:
[Dependency graph: x (cell 2) → y (cell 3); x (cell 2) → z (cell 4); y (cell 3) → z (cell 4)]

In the remainder of this section, we define methods to

  • track control dependencies (test(), __enter__(), __exit__())
  • track function calls and returns (call(), ret())
  • track function arguments (arg(), param())
  • check the validity of our dependencies (validate()).

Like our get() and set() methods above, these work by refining the appropriate methods defined in the DataTracker class, building on our NodeTransformer transformations.
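
As an illustration of the control-dependency part, here is a plausible sketch (not the chapter's actual code) of how test(), __enter__(), and __exit__() could maintain the control stack set up in the constructor: the test becomes a pseudo-variable <test> whose origins are the variables just read; entering a block pushes these controlling variables onto self.control, where set() picks them up as control dependencies; leaving the block pops them again.

class DependencyTrackerSketch(DependencyTracker):
    def test(self, cond: Any) -> Any:
        """Track a condition test as pseudo-variable `<test>`."""
        self.set(self.TEST, cond)  # <test> becomes data dependent on the variables read
        return cond                # after set(), last_read is ['<test>']

    def __enter__(self) -> Any:
        """Entering a controlled block: remember the controlling variables."""
        self.control.append(self.last_read)
        self.clear_read()
        return self

    def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> Optional[bool]:
        """Leaving a controlled block: restore the previous controlling variables."""
        self.clear_read()
        self.last_read = self.control.pop()
        return None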

At this point, DependencyTracker is complete; we have everything in place to track even complex dependencies in instrumented code.

Slicing Code¶

Let us now put all these pieces together. We have a means to instrument the source code (our various NodeTransformer classes) and a means to track dependencies (the DependencyTracker class). Now comes the time to put all these things together in a single tool, which we call Slicer.

The basic idea of Slicer is that you can use it as follows:

with Slicer(func_1, func_2, ...) as slicer:
    func(...)

which first instruments the functions given in the constructor (i.e., replaces their definitions with instrumented counterparts), and then runs the code in the body, calling instrumented functions and allowing the slicer to collect dependencies. When the body returns, the original definitions of the instrumented functions are restored.

An Instrumenter Base Class¶

The basic functionality of instrumenting a number of functions (and restoring them at the end of the with block) comes in an Instrumenter base class. It invokes instrument() on all items to instrument; this is to be overloaded in subclasses.

In [184]:
class Instrumenter(StackInspector):
    """Instrument functions for dynamic tracking"""

    def __init__(self, *items_to_instrument: Callable,
                 globals: Optional[Dict[str, Any]] = None,
                 log: Union[bool, int] = False) -> None:
        """
        Create an instrumenter.
        `items_to_instrument` is a list of items to instrument.
        `globals` is a namespace to use (default: caller's globals())
        """

        self.log = log
        self.items_to_instrument: List[Callable] = list(items_to_instrument)
        self.instrumented_items: Set[Any] = set()

        if globals is None:
            globals = self.caller_globals()
        self.globals = globals

    def __enter__(self) -> Any:
        """Instrument sources"""
        items = self.items_to_instrument
        if not items:
            items = self.default_items_to_instrument()

        for item in items:
            self.instrument(item)

        return self

    def default_items_to_instrument(self) -> List[Callable]:
        return []

    def instrument(self, item: Any) -> Any:
        """Instrument `item`. To be overloaded in subclasses."""
        if self.log:
            print("Instrumenting", item)
        self.instrumented_items.add(item)
        return item

At the end of the with block, we restore the given functions.

In [185]:
class Instrumenter(Instrumenter):
    def __exit__(self, exc_type: Type, exc_value: BaseException,
                 traceback: TracebackType) -> Optional[bool]:
        """Restore sources"""
        self.restore()
        return None

    def restore(self) -> None:
        for item in self.instrumented_items:
            self.globals[item.__name__] = item

By default, an Instrumenter simply outputs a log message:

In [186]:
with Instrumenter(middle, log=True) as ins:
    pass
Instrumenting <function middle at 0x105b94e50>

The Slicer Class¶

The Slicer class comes as a subclass of Instrumenter. It sets up its own dependency tracker (which can be overridden by setting the dependency_tracker keyword argument).

In [187]:
class Slicer(Instrumenter):
    """Track dependencies in an execution"""

    def __init__(self, *items_to_instrument: Any,
                 dependency_tracker: Optional[DependencyTracker] = None,
                 globals: Optional[Dict[str, Any]] = None,
                 log: Union[bool, int] = False):
        """Create a slicer.
        `items_to_instrument` are Python functions or modules with source code.
        `dependency_tracker` is the tracker to be used (default: DependencyTracker).
        `globals` is the namespace to be used (default: caller's `globals()`)
        `log`=True or `log` > 0 turns on logging
        """
        super().__init__(*items_to_instrument, globals=globals, log=log)

        if dependency_tracker is None:
            dependency_tracker = DependencyTracker(log=(log > 1))
        self.dependency_tracker = dependency_tracker

        self.saved_dependencies = None

    def default_items_to_instrument(self) -> List[Callable]:
        raise ValueError("Need one or more items to instrument")

The parse() method parses a given item, returning its AST.

In [188]:
class Slicer(Slicer):
    def parse(self, item: Any) -> AST:
        """Parse `item`, returning its AST"""
        source_lines, lineno = inspect.getsourcelines(item)
        source = "".join(source_lines)

        if self.log >= 2:
            print_content(source, '.py', start_line_number=lineno)
            print()
            print()

        tree = ast.parse(source)
        ast.increment_lineno(tree, lineno - 1)
        return tree

The transform() method applies the list of transformers defined earlier in this chapter.

In [189]:
class Slicer(Slicer):
    def transformers(self) -> List[NodeTransformer]:
        """List of transformers to apply. To be extended in subclasses."""
        return [
            TrackCallTransformer(),
            TrackSetTransformer(),
            TrackGetTransformer(),
            TrackControlTransformer(),
            TrackReturnTransformer(),
            TrackParamsTransformer()
        ]

    def transform(self, tree: AST) -> AST:
        """Apply transformers on `tree`. May be extended in subclasses."""
        # Apply transformers
        for transformer in self.transformers():
            if self.log >= 3:
                print(transformer.__class__.__name__ + ':')

            transformer.visit(tree)
            ast.fix_missing_locations(tree)
            if self.log >= 3:
                print_content(ast.unparse(tree), '.py')
                print()
                print()

        if 0 < self.log < 3:
            print_content(ast.unparse(tree), '.py')
            print()
            print()

        return tree

The execute() method executes the transformed tree (such that we get the new definitions). We also make the dependency tracker available for the code in the with block.

In [190]:
class Slicer(Slicer):
    def execute(self, tree: AST, item: Any) -> None:
        """Compile and execute `tree`. May be extended in subclasses."""

        # We pass the source file of `item` such that we can retrieve it
        # when accessing the location of the new compiled code
        source = cast(str, inspect.getsourcefile(item))
        code = compile(cast(ast.Module, tree), source, 'exec')

        # Enable dependency tracker
        self.globals[DATA_TRACKER] = self.dependency_tracker

        # Execute the code, resulting in a redefinition of item
        exec(code, self.globals)

The instrument() method puts all these together, first parsing the item into a tree, then transforming and executing the tree.

In [191]:
class Slicer(Slicer):
    def instrument(self, item: Any) -> Any:
        """Instrument `item`, transforming its source code, and re-defining it."""
        if is_internal(item.__name__):
            return item  # Do not instrument `print()` and the like

        if inspect.isbuiltin(item):
            return item  # No source code

        item = super().instrument(item)
        tree = self.parse(item)
        tree = self.transform(tree)
        self.execute(tree, item)

        new_item = self.globals[item.__name__]
        return new_item

When we restore the original definition (after the with block), we also save the dependency tracker, so that its collected dependencies remain accessible.

In [192]:
class Slicer(Slicer):
    def restore(self) -> None:
        """Restore original code."""
        if DATA_TRACKER in self.globals:
            self.saved_dependencies = self.globals[DATA_TRACKER]
            del self.globals[DATA_TRACKER]

        super().restore()

Three convenience functions allow us to see the dependencies as (well) dependencies, as annotated code, and as a graph. These simply invoke the respective functions on the saved dependencies.

In [193]:
class Slicer(Slicer):
    def dependencies(self) -> Dependencies:
        """Return collected dependencies."""
        if self.saved_dependencies is None:
            return Dependencies({}, {})
        return self.saved_dependencies.dependencies()

    def code(self, *args: Any, **kwargs: Any) -> None:
        """Show code of instrumented items, annotated with dependencies."""
        first = True
        for item in self.instrumented_items:
            if not first:
                print()
            self.dependencies().code(item, *args, **kwargs)  # type: ignore
            first = False

    def graph(self, *args: Any, **kwargs: Any) -> Digraph:
        """Show dependency graph."""
        return self.dependencies().graph(*args, **kwargs)  # type: ignore

    def _repr_mimebundle_(self, include: Any = None, exclude: Any = None) -> Any:
        """If the object is output in Jupyter, render dependencies as a SVG graph"""
        return self.graph()._repr_mimebundle_(include, exclude)

Let us put Slicer into action. We track our middle() function:

In [194]:
with Slicer(middle) as slicer:
    m = middle(2, 1, 3)
m
Out[194]:
1

These are the dependencies in string form (used when printed):

In [195]:
print(slicer.dependencies())
middle():
    <test> (2) <= z (1), y (1)
    <test> (3) <= x (1), y (1); <- <test> (2)
    <test> (5) <= x (1), z (1); <- <test> (3)
    <middle() return value> (6) <= y (1); <- <test> (5)

This is the code form:

In [196]:
slicer.code()
*    1 def middle(x, y, z):  # type: ignore
*    2     if y < z:  # <= z (1), y (1)
*    3         if x < y:  # <= x (1), y (1); <- <test> (2)
     4             return y
*    5         elif x < z:  # <= x (1), z (1); <- <test> (3)
*    6             return y  # <= y (1); <- <test> (5)
     7     else:
     8         if x > y:
     9             return y
    10         elif x > z:
    11             return x
    12     return z

And this is the graph form:

In [197]:
slicer
Out[197]:
(Dependency graph: x, y, and z from line 1 feed the <test> nodes at lines 2, 3, and 5; each <test> controls the next, the <test> at line 5 controls the <middle() return value> at line 6, and y flows into that return value.)

You can also access the raw repr() form, which allows you to reconstruct dependencies at any time. (This is how we showed off dependencies at the beginning of this chapter, before even introducing the code that computes them.)

In [198]:
print(repr(slicer.dependencies()))
Dependencies(
    data={
        ('x', (middle, 1)): set(),
        ('y', (middle, 1)): set(),
        ('z', (middle, 1)): set(),
        ('<test>', (middle, 2)): {('z', (middle, 1)), ('y', (middle, 1))},
        ('<test>', (middle, 3)): {('x', (middle, 1)), ('y', (middle, 1))},
        ('<test>', (middle, 5)): {('x', (middle, 1)), ('z', (middle, 1))},
        ('<middle() return value>', (middle, 6)): {('y', (middle, 1))}},
 control={
        ('x', (middle, 1)): set(),
        ('y', (middle, 1)): set(),
        ('z', (middle, 1)): set(),
        ('<test>', (middle, 2)): set(),
        ('<test>', (middle, 3)): {('<test>', (middle, 2))},
        ('<test>', (middle, 5)): {('<test>', (middle, 3))},
        ('<middle() return value>', (middle, 6)): {('<test>', (middle, 5))}})
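
Since repr() lists the data and control dictionaries explicitly, one way to reconstruct the dependencies later is to evaluate that string again. A sketch, assuming Dependencies and the referenced functions (here middle) are still defined in the evaluating namespace:

# Sketch: rebuilding a Dependencies object from its repr() string
saved_repr = repr(slicer.dependencies())
restored_deps = eval(saved_repr)
print(restored_deps)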

Diagnostics¶

The Slicer constructor accepts a log argument (default: False), which can be set to show various intermediate results:

  • log=True (or log=1): Show instrumented source code
  • log=2: Also log execution
  • log=3: Also log individual transformer steps
  • log=4: Also log source line numbers
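
For instance, to watch each transformer rewrite the tree in turn, one could run the slicer with log=3 (a usage sketch; the lengthy output is omitted here):

# Sketch: log=3 shows the instrumented source plus each transformer step
with Slicer(middle, log=3) as verbose_slicer:
    middle(2, 1, 3)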

More Examples¶

Let us demonstrate our Slicer class on a few more examples.

Square Root¶

The square_root() function from the chapter on assertions demonstrates a nice interplay between data and control dependencies.

Here is the original source code:

In [201]:
print_content(inspect.getsource(square_root), '.py')
def square_root(x):  # type: ignore
    assert x >= 0  # precondition

    approx = None
    guess = x / 2
    while approx != guess:
        approx = guess
        guess = (approx + x / approx) / 2

    assert math.isclose(approx * approx, x)
    return approx

Turning on logging shows the instrumented version:

In [202]:
with Slicer(square_root, log=True) as root_slicer:
    y = square_root(2.0)
Instrumenting <function square_root at 0x10688cdc0>
def square_root(x):
    _data.param('x', x, pos=1, last=True)
    assert _data.set('<assertion>', _data.get('x', x) >= 0, loads=(_data.get('x', x),))
    approx = _data.set('approx', None)
    guess = _data.set('guess', _data.get('x', x) / 2)
    while _data.test(_data.get('approx', approx) != _data.get('guess', guess)):
        with _data:
            approx = _data.set('approx', _data.get('guess', guess))
            guess = _data.set('guess', (_data.get('approx', approx) + _data.get('x', x) / _data.get('approx', approx)) / 2)
    assert _data.set('<assertion>', _data.ret(_data.call(_data.get('math', math).isclose)(_data.arg(_data.get('approx', approx) * _data.get('approx', approx), pos=1), _data.arg(_data.get('x', x), pos=2))), loads=(_data, _data.get('approx', approx), _data.get('x', x), _data.get('math', math)))
    return _data.set('<square_root() return value>', _data.get('approx', approx))

The dependency graph shows how guess and approx flow into each other until they are the same.

In [203]:
root_slicer
Out[203]:
(Dependency graph: x from line 54 feeds guess (lines 58, 61) and both assertions; guess and approx feed each other and the while <test> at line 59, which in turn controls the updates at lines 60 and 61; approx (line 60) finally flows into the <square_root() return value> at line 64.)

Again, we can show the code annotated with dependencies:

In [204]:
root_slicer.code()
*   54 def square_root(x):  # type: ignore
*   55     assert x >= 0  # precondition  # <= x (54)
    56 
*   57     approx = None
*   58     guess = x / 2  # <= x (54)
*   59     while approx != guess:  # <= guess (61), guess (58), approx (60), approx (57)
*   60         approx = guess  # <= guess (58), guess (61); <- <test> (59)
*   61         guess = (approx + x / approx) / 2  # <= approx (60), x (54); <- <test> (59)
    62 
*   63     assert math.isclose(approx * approx, x)  # <= approx (60), x (54)
*   64     return approx  # <= approx (60)

The astute reader may notice that a statement assert p does not control the following code, even though it would be equivalent to if not p: raise Exception. Why is that?

In [205]:
quiz("Why don't `assert` statements induce control dependencies?",
     [
         "We have no special handling of `raise` statements",
         "We have no special handling of exceptions",
         "Assertions are not supposed to act as controlling mechanisms",
         "All of the above",
     ], '(1 * 1 << 1 * 1 << 1 * 1)')
Out[205]:

Quiz

Why don't assert statements induce control dependencies?

Indeed: we treat assertions as "neutral" in the sense that they do not affect the remainder of the program – if they are turned off, they have no effect; and if they are turned on, the remaining program logic should not depend on them. (Our instrumentation also has no special treatment of raise or even return statements; they should be handled by our with blocks, though.)

In [206]:
# print(repr(root_slicer))

Removing HTML Markup¶

Let us return to our ongoing example, remove_html_markup(), and track its dependencies:

In [207]:
with Slicer(remove_html_markup) as rhm_slicer:
    s = remove_html_markup("<foo>bar</foo>")

The graph is as discussed in the introduction to this chapter:

In [208]:
rhm_slicer
Out[208]:
(Dependency graph: s (line 238) feeds the loop variable c (line 243); c, quote, and tag feed the chained <test> conditions at lines 246 to 252, which control the tag assignments (lines 247, 249) and the out update (line 253); tag and quote also feed the assertion at line 244, and out flows into the <remove_html_markup() return value> at line 255.)
In [209]:
# print(repr(rhm_slicer.dependencies()))
In [210]:
rhm_slicer.code()
*  238 def remove_html_markup(s):  # type: ignore
*  239     tag = False
*  240     quote = False
*  241     out = ""
   242 
*  243     for c in s:  # <= s (238)
*  244         assert tag or not quote  # <= tag (239), tag (247), quote (240), tag (249)
   245 
*  246         if c == '<' and not quote:  # <= c (243), quote (240)
*  247             tag = True  # <- <test> (246)
*  248         elif c == '>' and not quote:  # <= c (243), quote (240); <- <test> (246)
*  249             tag = False  # <- <test> (248)
*  250         elif (c == '"' or c == "'") and tag:  # <= c (243); <- <test> (248)
   251             quote = not quote
*  252         elif not tag:  # <= tag (247), tag (249); <- <test> (250)
*  253             out = out + c  # <= c (243), out (253), out (241); <- <test> (252)
   254 
*  255     return out  # <= out (253)

We can also compute slices over the dependencies:

In [211]:
_, start_remove_html_markup = inspect.getsourcelines(remove_html_markup)
start_remove_html_markup
Out[211]:
238
In [212]:
slicing_criterion = ('tag', (remove_html_markup,
                             start_remove_html_markup + 9))
tag_deps = rhm_slicer.dependencies().backward_slice(slicing_criterion)  # type: ignore
tag_deps
Out[212]:
(Backward slice: tag = True (line 247) is controlled by the <test> at line 246, which depends on c (line 243) and quote (line 240); c in turn depends on s (line 238).)
In [213]:
# repr(tag_deps)
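
Since backward_slice() again returns a Dependencies object, the slice can also be inspected in the textual form shown earlier. A short usage sketch:

# Sketch: the backward slice supports the same string form as the
# full set of dependencies collected above
print(tag_deps)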

Calls and Augmented Assign¶

Our last example covers augmented assignments and data flow across function calls. We introduce two simple functions add_to() and mul_with():

In [214]:
def add_to(n, m):  # type: ignore
    n += m
    return n
In [215]:
def mul_with(x, y):  # type: ignore
    x *= y
    return x

And we put these two together in a single call:

In [216]:
def test_math() -> None:
    return mul_with(1, add_to(2, 3))
In [217]:
with Slicer(add_to, mul_with, test_math) as math_slicer:
    test_math()

The resulting dependency graph nicely captures the data flow between these calls, notably from arguments into parameters:

In [218]:
math_slicer
Out[218]:
(Dependency graph: the parameters of add_to() and mul_with() feed the augmented assignments n += m and x *= y, and the <add_to() return value> flows on into mul_with(), capturing the data flow across the two calls.)