@extends('template')

@section('title')
Lab 8: Part 2. Testing with Optimism
@stop

@section('content')

# Lab 8: Part 2. Testing with Optimism

A **test case** is a piece of code that we will run while observing what happens (including what it prints and/or what it evaluates to). Often, this will be a single function call, and we'll be interested in the result value.

An **expectation** is an idea about what the correct result value (or printed output) should be when we run a test case. We can simply hold expectations in our heads and check them by observing a test case ourselves, but we can also dictate them to the computer and have it check them for us.

## Using `optimism`

Back in [lab 2](/labs/lab02) we covered the use of `optimism` ([the `optimism` reference is here](/reference/optimism)), but that was before we had even covered custom functions. We're now revisiting this library because it offers a way to do automatic testing, which can improve the quality of our code as well as reduce the time it takes us to solve problems.

**The provided file `tracing.py`**, which you've just used to test out the debugger, includes three broken definitions of `hasCGBlock` in addition to the correct one. Your job in this part of the lab is to define test cases that distinguish the correct version from the broken versions: the correct version should pass all of your tests, while each broken version should fail at least one of them.

To define a test case in `optimism`, we use the `testCase` function. First, we need to import the library, like this:

```py
from optimism import *
```

Now, we can do something like:

```py
testCase(hasCGBlock('CGAGGGCCUG'))
```

For every test case we establish, we also need one or more expectations about what it should do. If we're interested in a function's return value (or the value of a test expression), we can use `expectResult`. If we're interested in the printed output instead, we can use `expectOutputContains`, although we'd also have to use `captureOutput` before the test and eventually `restoreOutput` after it. For this lab, only `expectResult` is needed. We can use it like this:

```py
expectResult(True)
```

Note that if our test cases crash, forget to restore output, or cause other problems, those problems will affect the entire file when it runs. To isolate them, we can put them in a `test` function and just call that function from the shell. That way, we can test whenever we want to, but any issues with the tests won't interfere with the normal operation of the other things in the file. The `tracing.py` file already has a `test` function in it, and that's where you should add your tests.

Putting that all together, we should have a test function that looks like this (the import could also go at the top of your file if you prefer):

```py
from optimism import *


def test():
    """
    This function is designed to be used to set up and run tests.
    If you put them here, they won't interfere with anything else
    you might want to do in the rest of the file, until you call
    this function.
    """
    # Put your test cases and expectations here
    testCase(hasCGBlock('CGAGGGCCUG'))
    expectResult(True)
```
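Each broken version of `hasCGBlock` only needs to fail one of your tests, so you will almost certainly want several test cases with different expected results. Here is a sketch of what adding a second case might look like. Note that the input `'AUAUAUAUAU'` and its expected result of `False` are assumptions based on the function's name (a sequence with no Cs or Gs at all presumably can't contain a CG block); check them against the correct definition in `tracing.py` before relying on them.

```py
from optimism import *


def test():
    """
    Sets up and runs tests; call this function from the shell.
    """
    # A sequence that does contain a block of Cs and Gs, so we
    # expect True (this case comes from the lab text)
    testCase(hasCGBlock('CGAGGGCCUG'))
    expectResult(True)

    # A sequence with no Cs or Gs at all, which presumably cannot
    # contain a CG block, so we expect False (an assumption about
    # hasCGBlock's behavior; verify it against the correct version)
    testCase(hasCGBlock('AUAUAUAUAU'))
    expectResult(False)
```

Notice that each expectation is written immediately after the test case it applies to, so cases and expectations appear in pairs.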
If you run the file, nothing should happen, although your functions will be defined. If you then call `test` in the shell, the results should look like this:

```
>>> %Run tracing.py
>>> test()
✓ tracing.py:100
```

The check mark indicates that your test passed, and it reports the line number where the expectation was established. The red color is because it's part of the error log rather than normal printed output (although in this case, it's not actually an error).

### More testing