Lab 2: Part 3: Testing with Optimism
As you were working on the previous part, you probably didn't get things working perfectly right away. In fact, there were probably some versions of your program that looked like they worked the first time you tried them, but stopped working when you tried a different input. This is an unavoidable part of programming: just as you're bound to make some spelling and grammar mistakes when writing an essay, you're bound to make mistakes in code. So with code, just like with an essay, we need an editing process. One core component of the process of editing code is testing, which you've been doing already: we gave you some examples of what your program should do, and you tried to get your program to do that. In fact, without those concrete examples, it would have been much harder to know whether your program was correct. This suggests a key strategy for programming, which research shows is quite effective: test-driven development.
In test-driven development, the process starts by defining what the "correct" behavior of the program will be. Either you'll be provided with some examples of this, or you'll have to come up with your own examples based on a verbal or written description of the goals of the program. In this class, we'll focus almost exclusively on the first scenario, where you're provided with detailed examples. Once examples are defined, they are turned into automatic tests, which the computer can run on command. Finally, the actual writing of code can begin, and when a first draft of the code is done, the automatic tests are run to see whether it meets the specifications or not. If the code fails some of the tests, it will be revised until it passes all of them.
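To make that cycle concrete, here is a tiny plain-Python sketch of the idea (a made-up example, not part of the lab files): the expected answer is written down first as an automatic check, and the code is revised until the check passes.

```python
# The "specification": for the input 3, the answer should be 6.
number = 3
doubled = number * 2   # first draft of the code being tested

# An automatic check the computer can re-run every time the code changes:
assert doubled == 6
print("test passed")
```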
In this part of the lab, you'll learn how to set up automatic tests so that you can apply this test-driven development approach to your own code. We'll start by developing tests for the text in boxes code from the previous part of the lab.
The Optimism testing library
The cornerstone of the test-driven development approach is automatic
tests, because they allow tests to be defined once and repeated again and
again as your code changes. To allow you to define automatic tests, we
have provided a library called
optimism. Here is a brief explanation of
the functions in that library:
- provideInput lets you specify what input should be provided during the next test case. The code being tested will behave as if this input were typed by the user.
- captureOutput lets you specify that the printed output of the next test case should be captured. Nothing that gets printed will show up as it usually would, but in return, you can create expectations about that printed output using expectOutputContains.
- testCase defines a test case. This is the code that will be run as a test. If you want to test the behavior of an entire program, just use runFile as the code to test.
- runFile re-runs the current file, just like what happens when you press the run button. It won't execute any testing code while running the file, so that your tests don't trip themselves up.
- expectOutputContains establishes an expectation for the most recent test case, specifying that whatever is printed by the code in the test case must contain a certain sequence of characters. You can use this multiple times with the same test case to test for multiple character sequences.
- expectResult establishes an expectation for the result of the most recent test case, or in other words, specifies what the test case expression should evaluate to.
There are two other functions in the optimism module which will also be useful. The trace function works like print, except that it also returns the value it prints, so it can be inserted into the middle of an expression (we'll use it for debugging below). The detailLevel function allows you to control how much detail optimism uses when it reports on expectations. If you want more detail than the default, you can write detailLevel(2). Conversely, you can write detailLevel(-1) to get less information than the default. Using detailLevel(0) will set it back to the default. The current detail level applies whenever a new expectation is defined.
Task A: Testing Text in Boxes
To test our text in boxes file, we'll need to use
expectOutputContains. Go to
the end of the file where you wrote your text in boxes code, and add the
following pieces of code:
- First, before anything else, we need to make sure that we can use functions from the optimism library. Add the following line of code:

  from optimism import *
If you get a "module not found" error, that probably means that you don't have a copy of optimism.py in the same folder as your code. There should be a copy included with the starter code for this lab.
- Next, before we define our test case, we need to set things up. Since we want to test what gets printed, and because the code that we're testing would normally need to ask the user for input, we'll do the following:

  provideInput("ABC")
  captureOutput()
(Remember that when you want to use a function, you always need parentheses after its name to "call" it, even if there's no additional information that you need to give to the function inside the parentheses.) Note that if you finished the last task in the previous part of the lab, you'll need to provide two input values, since your code will ask for two pieces of text to put in a single box. When we use provideInput, we can use a triple-quoted string to specify multiple lines of input, which will get used one at a time when the code being tested calls the input function. That would look like this:
provideInput("""
ABC
DEF
""")
captureOutput()
If your code uses more than two inputs, you'll need to add additional lines of input here.
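Under the hood, the effect is as if input() read successive lines from that string instead of from the keyboard. Here is a rough plain-Python sketch of that behavior (an illustration only, not how optimism is actually implemented; fake_input and lines are made-up names):

```python
# Each line of the provided text is handed out one at a time,
# the way provideInput feeds lines to input() during a test.
lines = iter("""
ABC
DEF
""".strip().splitlines())

def fake_input(prompt=""):
    # stand-in for input(): returns the next provided line instead of waiting
    return next(lines)

first = fake_input("Enter your first string ==> ")
second = fake_input("Enter your second string ==> ")
print(first)   # ABC
print(second)  # DEF
```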
- Next, we'll define a test case that runs the whole file:

  testCase(runFile())
- Finally, we'll establish some expectations for our test: the output should include a line of asterisks that's just the right length, and it should also include the value we entered surrounded by spaces and asterisks. Note that these expectations don't guarantee that the output is exactly correct, but they're enough to catch most kinds of mistakes:

  expectOutputContains("***************")
  expectOutputContains("* ABC *")
Note that the ordering of the expectations doesn't matter: each expected sequence of characters is searched for individually within the whole output of the test case. Also, if your code puts two lines of text in the box, you'll want an additional expectation, like this:
expectOutputContains("* DEF *")
Once you have added all three blocks of code to the end of your file and adjusted them if necessary, run it again. Note that it will still ask you for input and print a box, because adding tests to the file doesn't change its initial behavior. Afterwards, however, it should print two lines that look like this (although the line numbers you have will probably be different, and if you used three expectations, you'll see a third line of output):
✓ textInBoxes.py:72
✓ textInBoxes.py:73
Each line of output shows a check mark (because an expectation was met), followed by the name of the file, a colon, and the line number where the expectation was established. If you see check marks for each expectation, then you know your tests are passing. Note that in Thonny, these messages will appear in red, like an error message, but they're indicating that things are working, not that they're broken!
On the other hand, if an expectation is not met, you'll see a message like this (try changing one of your expectations to see for yourself):
✗ textInBoxes.py:73
Fragment "* GHI *" was NOT present in the recorded output
"""Enter your first string ==> ABC
Enter your second string ==> DEF...""".
In expression runFile(), values were:
  runFile = <function runFile at 0x10...
If you see an x-mark and an error message, you'll know that your test has failed. Don't worry about the "In expression..." part; that will come into play later when we test result values instead of output.
Once you get this basic test working with your own code, it's time to define your own test case.
Task B: Your own cases
Now that you've got one test case working, add another two test cases,
which test using different inputs. In particular, it probably makes sense
to include a test where the first string is longer than the second one,
and another where the second string is longer, to make sure things are
really working. To define these tests, you'll need to repeat much of the
code from the previous part, but change the input(s) provided and the
expectations established. Note that you won't have to call
captureOutput again, because once you've called it, it stays in effect
until you cancel it.
If you're defining a test case that you're pretty sure is right, but the expectation keeps failing, remember that it's possible you've found a bug in your code that you need to fix!
Task C: Testing expressions and tracing
In addition to testing an entire file using inputs and outputs,
optimism can also be used to test the result of a specific expression.
This will prove more useful next week when we start to cover custom
functions, but for now, we can still use it to do some automatic testing
of the values of variables.
For this task, open the quadratic.py starter file. In
quadratic.py, there is code that is supposed to compute the value of
the quadratic formula, but it isn't working correctly. For the numbers in
that file (2, -2, and -3), the correct results should be:
The first root is: 1.5
The second root is: -0.5
However, the output we see is:
The first root is: 6.0
The second root is: -2.0
Clearly, somewhere in the complicated equation our math is wrong (note:
raising an expression to the 0.5 power is a correct way to take a square
root). But where? Our goal for this task is to do some incremental
testing with optimism to figure that out. What we want to do is
copy-paste parts of the expressions on lines 32 and 33 to create
test-cases, work out using a calculator (or perhaps calculator program)
what their correct values should be to create expectations, and then pay
attention to which parts of the expression are working correctly and
which aren't. As a trivial example, the first part of the expression is
-b, so we can create a test case like this (note that we're not
capturing output or providing input, and we're testing the value, not the
printed output):

testCase(-b)
expectResult(2)

If for some reason that part of the equation contained an error, this test would fail. Assuming that case succeeds, you might next define a test case for the denominator of the expression, like this:

testCase(2*a)
expectResult(4)
Your job is to continue defining test cases like these (define at least 3
more) which test different parts of the equation, up to the entire
equation (for which you can use the correct output specified above as
your expectation). For example, the next expression you might want to
test could be the part inside the square root:
b**2 - 2*a*c, which
should have a value of 2 squared minus 2 times 2 times -3, which is 4
minus -12, or (positive) 16. Since there could be a typo or unexpected
result in any part of the equation, we want to test larger and larger
sub-expressions until a test fails. Each time you add a test case, if it
succeeds, you'll know that that part of the equation is error-free. If it
fails, then whatever part of the equation is in that test, but not in any
previous test, must contain an error.
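To see how the expected values for such tests get worked out, here is the arithmetic in plain Python (without optimism; the variable names here are just for illustration):

```python
a, b, c = 2, -2, -3   # the numbers from the starter file

neg_b = -b                   # first piece of the numerator
inside_root = b**2 - 2*a*c   # the part inside the square root
root = inside_root**0.5
denominator = 2*a

print(neg_b)         # 2
print(inside_root)   # 16
print(root)          # 4.0
print(denominator)   # 4
print((neg_b + root) / denominator)   # expected first root: 1.5
```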
When you figure out where the errors are, you can fix the equations in the file (and your test cases), so that all of your expectations are met.
Note that for debugging purposes, the
trace function defined in the
optimism library can also be useful, and it can be added directly into
an equation, like this:
root1 = -b + trace(b**2 - 2*a*c)**0.5 / 2*a
Just be careful to only add the
trace function in places where
parentheses already exist (or are implied) because otherwise it might
change the meaning of what you're testing.
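To see why placement matters, here's a small plain-Python illustration using a made-up fake_trace function that, like trace, simply returns its argument:

```python
def fake_trace(x):
    # stand-in for optimism's trace: trace would also print x, and returns x
    return x

# Wrapping a sub-expression that was already grouped doesn't change anything:
print((2 + 3) * 4)            # 20
print(fake_trace(2 + 3) * 4)  # 20

# But wrapping a span that was NOT already grouped regroups the math:
print(10 - 3 + 2)             # 9, evaluated as (10 - 3) + 2
print(10 - fake_trace(3 + 2)) # 5, the call's parentheses changed the meaning
```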
If you're really stuck finding the issue, feel free to ask for help, and you could also try using the debugger to step through the code and watch what Python does with it.
OPTIONAL Task D: Testing functions
This task looks ahead in the class a bit to deal with functions. Only work on it if you have extra time.
You may have noticed that our tests for the quadratic formula were all
based on the specific values of a, b, and c defined in the file.
Shouldn't we test whether the formula still works with other values? We
should, but there's no easy way to do that, since to change the values,
we have to edit the file, and our testCase can't do that. However, if
the formula were defined as part of a "function," we could test it with
many different values.
A function is a way of creating code that will give a different result
for different input values, which can be re-used easily. We'll talk about
them in lecture soon, and will have a lab on them next week. You've
already been using built-in functions, but we can also define custom
functions. The starter file
quadraticFunctions.py contains two custom
functions, one for each root of the quadratic formula. Just like the
other functions you've been using, we can call one of these functions by
using parentheses and supplying values in between (in this case, 3 values
for a, b, and c). We can even call the function multiple times with
different input values to have it compute different results.
Your job is to add at least 3 tests for each of the two functions in the
quadraticFunctions.py file. For each test, use testCase to set up the
test, and use a call to quadraticRoot1 or quadraticRoot2 as the expression to
test, supplying values. That will look like this:

testCase(quadraticRoot1(2, -2, -3))

Remember to use expectResult for each testCase you define, and use a
calculator (or calculator program) to figure out what the correct result
should be in each case. Run the file to make sure your expectations are met.
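The overall pattern (call the same function with several different inputs, and check each hand-computed result) can be sketched in plain Python, using a made-up rectangle_area function so the arithmetic stays simple:

```python
def rectangle_area(width, height):
    # a simple stand-in function, just to show the testing pattern
    return width * height

# One check per combination of inputs, each worked out by hand first:
assert rectangle_area(3, 4) == 12
assert rectangle_area(5, 5) == 25
assert rectangle_area(2, 10) == 20
print("all rectangle_area checks passed")
```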