Week 5: Test Editor Interface Walkthrough

  • Jun 28, 2012
  • 1 Comment

 

Fictitious mentor: Sam

Fictitious mentee: Kate

Authoring Process

One day, Sam notices that Kate’s world includes redundant code. Each of Kate’s twelve characters says “Hello, nice weather we’re having today,” and Kate has written twelve say statements, one for each character. Sam, the domain expert, knows that a single say statement inside a loop would do the same work. He decides to author a test that detects cases like this so that other Looking Glass users can learn to avoid the redundancy.
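To make the redundancy concrete, here is a minimal sketch in plain Java rather than Looking Glass blocks; the SceneCharacter stand-in and the character names are illustrative assumptions, not Kate’s actual world.

    import java.util.List;

    // Hypothetical stand-in for a Looking Glass character; say() just prints here.
    class SceneCharacter {
        final String name;
        SceneCharacter(String name) { this.name = name; }
        void say(String line) { System.out.println(name + ": " + line); }
    }

    class GreetingScene {
        public static void main(String[] args) {
            String line = "Hello, nice weather we're having today.";
            List<SceneCharacter> characters = List.of(
                new SceneCharacter("alien"),
                new SceneCharacter("robot"),
                new SceneCharacter("wizard")); // ...and nine more in Kate's world

            // Kate's version repeats one say statement per character:
            // alien.say(line); robot.say(line); ... twelve nearly identical lines.

            // The rewrite Sam has in mind: one say statement inside a loop.
            for (SceneCharacter each : characters) {
                each.say(line);
            }
        }
    }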

Sam, a novice test script author, opens the Looking Glass IDE; the welcome perspective includes an “Author a test” button. He clicks it, and the IDE switches to the test editor perspective.

Sam first notices the side panel that provides ordered steps for authoring a test script. He clicks on the first step. It drops down and instructs him to choose the construct he will be working with, or otherwise choose “statement.” A drop-down menu listing the constructs is provided under this instruction. Sam chooses “statement,” since his script will deal with statements.

Sam is then directed to the second step in the process, which instructs him to author the script. This step provides Sam with example scripts and a link to the API documentation on the community site. Sam chooses to view the example scripts, and the community returns several test scripts for “statement.” He then clicks on the API documentation link, where he is able to search for specific parts of the API or browse by construct/statement.

Sam is now ready to author his script. He intends for it to look for blocks of code in which each line includes the same procedure. With the API, he is able to compare one procedure to another. Sam’s script determines that if there are at least five lines in a row with the same procedure, the user should probably use some sort of loop (and an array, if necessary). The script informs the user of this option. Sam is content with his work and moves on to the next step.
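A rough sketch of the check Sam’s script performs, assuming a simple Statement model with a procedure name; the real Looking Glass test API is not shown in this post, so the interface and the five-line threshold constant below are stand-ins.

    import java.util.List;

    // Hypothetical model of a statement; the real test API may expose this differently.
    interface Statement {
        String getProcedureName();
    }

    class RepeatedProcedureCheck {
        static final int THRESHOLD = 5; // at least five lines in a row triggers the suggestion

        // Length of the longest run of consecutive statements calling the same procedure.
        static int longestRun(List<Statement> statements) {
            int longest = 0;
            int current = 0;
            String previous = null;
            for (Statement s : statements) {
                String name = s.getProcedureName();
                current = name.equals(previous) ? current + 1 : 1;
                previous = name;
                longest = Math.max(longest, current);
            }
            return longest;
        }

        // Suggests a loop when the threshold is reached.
        static void run(List<Statement> statements) {
            if (longestRun(statements) >= THRESHOLD) {
                System.out.println("These lines repeat the same procedure; "
                    + "consider a loop (and an array, if necessary).");
            }
        }
    }

Note that this first version compares only procedure names, which is exactly the weakness Sam uncovers in step three below.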

Step three informs Sam that it is time to test the script. Sam is able to run the script on Kate’s worlds. First, he runs it on the world that inspired the script, and it works, correctly returning the block of twelve say statements in the passing constructs box. Sam then runs the script on some of Kate’s other worlds to make sure it generalizes. Unfortunately, he finds a bug: the script returns blocks of repeated procedures even when their details differ greatly. He goes back to step two and edits the script so that it checks for matching procedures with matching details. He realizes that he could make the script account for repeated procedures with varying details, but decides that functionality of that sort is best left for another script.
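The fix could look something like this sketch, which refines the comparison from the previous one so that two statements only count as repeats when both the procedure name and its details match; getArguments() is an assumed accessor, not a documented API.

    import java.util.List;

    // Hypothetical statement model extended with "details" (the procedure's arguments).
    interface Statement {
        String getProcedureName();
        List<Object> getArguments();
    }

    class SameCallCheck {
        // Repeats now require matching procedures *and* matching details.
        static boolean sameCall(Statement a, Statement b) {
            return a.getProcedureName().equals(b.getProcedureName())
                && a.getArguments().equals(b.getArguments());
        }
    }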

He then goes back to step three, where he runs the script on Kate’s original redundant code world, as well as several of Kate’s other worlds, and finds that the script works.

Step four informs Sam that it is time to test his script on ten community worlds. The community returns five worlds that pass Sam’s script and five that fail it. He is instructed to look at the code for each world, via a drop-down box, and check whether the test result for each world is correct. Sam is also able to view his own script for reference. If the results are not as expected and his script has another bug, he simply returns to step two, fixes the bug, and repeats steps three and four. However, Sam does not need to do this, since the script runs as expected on the worlds provided by the community.
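The bookkeeping behind step four might be pictured as follows, where the expected results stand in for Sam’s own judgment after reading each world’s code; the World and TestScript interfaces are assumptions, since the post does not name the actual types.

    import java.util.Map;

    // Hypothetical interfaces; the actual Looking Glass types are not described in the post.
    interface World { String getTitle(); }
    interface TestScript { boolean passes(World world); }

    class CommunityWorldCheck {
        // Compares the script's result on each community world with the result Sam
        // expects after reading that world's code; a mismatch means back to step two.
        static void review(TestScript script, Map<World, Boolean> expectedResults) {
            for (Map.Entry<World, Boolean> entry : expectedResults.entrySet()) {
                boolean actual = script.passes(entry.getKey());
                if (actual != entry.getValue()) {
                    System.out.println("Unexpected result on \"" + entry.getKey().getTitle()
                        + "\": return to step two and fix the script.");
                }
            }
        }
    }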

Next, Sam moves to step five, where he is informed that it is time to submit the test for community approval. He is told to give the test a title and provide a description of what the test does. He does so and clicks the submit button.

Community Validation

The next step in the lifecycle of Sam’s script is to receive validation from the community. The script must be tested by five other documented mentors before it can be implemented.

The mentors asked to validate the test script will be chosen randomly. They will be expected to test the script on a variety of worlds provided by the community, not unlike step four in Sam’s authoring process. If the script tests well and the mentor feels the script is useful and will benefit the Looking Glass community, he or she may validate it. If not, the mentor can anonymously send the script author a message outlining problems the script may have.

Once the five mentors have validated the test, it will be implemented. If a mentor deems the test unfit or lacking in some way, the script author will have the opportunity to edit and resubmit the test, which will need to be validated by a new random selection of mentors.

Implementation

When a user is authoring a world, she should be able to click a button to have tests run on her world. Ideally, running the tests produces helpful suggestions that improve both the user’s world and her programming acumen.

When a test makes a suggestion, the user will be asked if this suggestion was helpful in any way. If users overwhelmingly find a certain script unhelpful, that script should be removed from the list of implemented scripts. Duplicate scripts should be detected and removed. There may even come a time when certain scripts cease to be of any use to the Looking Glass community. Sometime years from now, every single Looking Glass user may know when to use a DoTogether, thus making the DoTogether scripts useless. In this case, these scripts should be removed from the list as well.
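As a rough sketch of that retirement policy, a simple helpful-vote ratio per script could drive removal; the post does not define how “overwhelmingly unhelpful” would be measured, so the ratio and the cutoff below are assumptions.

    import java.util.Iterator;
    import java.util.List;

    // Per-script tally of "was this suggestion helpful?" responses.
    class ScriptFeedback {
        final String title;
        int helpfulVotes;
        int totalVotes;

        ScriptFeedback(String title) { this.title = title; }

        double helpfulRatio() {
            return totalVotes == 0 ? 1.0 : (double) helpfulVotes / totalVotes;
        }
    }

    class ScriptRetirement {
        // Retires implemented scripts whose helpful ratio has dropped below the cutoff
        // (e.g. 0.2), mirroring the removal of overwhelmingly unhelpful scripts above.
        static void retireUnhelpful(List<ScriptFeedback> implemented, double cutoff) {
            Iterator<ScriptFeedback> it = implemented.iterator();
            while (it.hasNext()) {
                if (it.next().helpfulRatio() < cutoff) {
                    it.remove();
                }
            }
        }
    }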

Comments

  • caitlin said:

    Thanks Patrick, this is really useful. I know you're excited to start building and we can start to do that in parallel. But really thinking this through is incredibly valuable.

    Posted on Jun 29, 2012
