Week 6: Test Authoring Process

  • Jul 05, 2012
  • 2 Comments

Problem

Kate’s new world includes redundant code. Each of Kate’s twelve characters says “Hello, nice weather we’re having today.” Kate has twelve say procedures, one for each character.
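
To make the redundancy concrete, here is a minimal Python analogy (Looking Glass itself is block-based, so the Character class and its say method are purely illustrative): twelve copies of one procedure versus a single parameterized one.

```python
class Character:
    def __init__(self, name):
        self.name = name

    def say(self, line):
        print(f"{self.name}: {line}")

characters = [Character(n) for n in ["Alice", "Bunny", "Knight"]]  # ... twelve in Kate's world

# Before: one near-identical procedure per character, twelve in all.
def alice_say():
    characters[0].say("Hello, nice weather we're having today.")
# ... eleven more copies, one per character ...

# After: a single parameterized procedure replaces all twelve.
def character_say(character):
    character.say("Hello, nice weather we're having today.")

for c in characters:
    character_say(c)
```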

Solution

Sam, Kate’s mentor, decides to author a test to detect similar cases so that other Looking Glass users may learn to avoid this redundancy.

Authoring the Script

The test code editor provides a simple, step-based test authoring process. In step one, Sam chooses whether the test will examine constructs or statements. This script concerns statements rather than constructs, so he chooses “statements.”

In step two, Sam authors the script itself, aided by example scripts and a link to the API documentation. In step three, he tests it on Kate’s world.
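
The post does not show Sam’s script or the test API itself, so the following is only a rough Python sketch of what a statement-level duplicate-detection script might check, with a world reduced to plain data:

```python
from collections import defaultdict

def find_duplicate_says(procedures):
    """procedures maps a procedure name to its statements, each a
    (kind, text) pair. Returns lines said verbatim by more than one
    procedure."""
    lines = defaultdict(list)
    for name, statements in procedures.items():
        for kind, text in statements:
            if kind == "say":
                lines[text].append(name)
    return {text: procs for text, procs in lines.items() if len(procs) > 1}

# Kate's world, reduced to data: twelve procedures, one identical line.
GREETING = "Hello, nice weather we're having today."
kates_world = {f"character{i}_say": [("say", GREETING)] for i in range(1, 13)}
print(find_duplicate_says(kates_world))  # flags all twelve procedures
```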

In step four, Sam tests his script on community worlds. If, during this process, he finds that the script returns poor suggestions, he simply returns to step two and edits it.

Once Sam completes the testing process, he is ready to submit his script to the community. In step five, he gives the test a title and a description.

Submission

Rules map naturally onto badges. The tests that enforce rules use the badge hierarchy to help users earn their badges more efficiently and learn valuable concepts.

We would like to organize the tests into specific groups so that we can provide Looking Glass users with the suggestions most relevant to them. Each group comprises tests whose suggestions may lead a user to a specific badge. For example, tests that may help users achieve a silver doInOrder badge are grouped together and run on the worlds of users who have earned the bronze doInOrder badge. This hierarchical grouping reduces the number of tests run on any one user’s world and ensures that users receive more appropriate suggestions.
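
A minimal sketch of this grouping, assuming badges are identified by (concept, level) pairs and that a linear bronze/silver/gold ordering holds (both assumptions; the post names only bronze and silver):

```python
BADGE_LEVELS = ["bronze", "silver", "gold"]  # assumed ordering

# Tests grouped by the badge their suggestions lead toward.
tests_by_badge = {
    ("doInOrder", "silver"): ["merge_duplicate_procedures"],  # illustrative
}

def tests_to_run(earned_badges):
    """earned_badges: set of (concept, level) pairs the user holds."""
    runnable = []
    for concept, level in earned_badges:
        next_index = BADGE_LEVELS.index(level) + 1
        if next_index < len(BADGE_LEVELS):
            next_badge = (concept, BADGE_LEVELS[next_index])
            runnable.extend(tests_by_badge.get(next_badge, []))
    return runnable

print(tests_to_run({("doInOrder", "bronze")}))  # -> ['merge_duplicate_procedures']
```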

Once a test is submitted, it initially runs on a selection of community worlds. For each world in which the test finds its target pattern, all badge tests are run to determine which badges the world earns. The world is then altered to follow the test’s suggestion, and the badge tests are run again. The badges earned by the altered worlds but not by the originals mark the place(s) in the hierarchy where the test belongs.
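
The placement step amounts to a set difference over badge results. A minimal Python sketch, in which matches, apply_suggestion, and the badge predicates are all assumed interfaces:

```python
def place_in_hierarchy(matches, apply_suggestion, badge_tests, worlds):
    """matches(world) -> bool; apply_suggestion(world) -> altered world;
    badge_tests: {badge_name: predicate(world) -> bool}."""
    placements = set()
    for world in worlds:
        if not matches(world):        # test's target pattern not found
            continue
        before = {b for b, passes in badge_tests.items() if passes(world)}
        altered = apply_suggestion(world)
        after = {b for b, passes in badge_tests.items() if passes(altered)}
        placements |= after - before  # badges the suggestion unlocked
    return placements

# Toy run: a "world" is just its procedure count; merging duplicates
# drops it to the threshold for the silver doInOrder badge.
badges = {"doInOrder-silver": lambda world: world <= 3}
print(place_in_hierarchy(lambda w: w > 3, lambda w: 3, badges, [12]))
```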

Test Script Validation

Tests that are deemed acceptable by a selection of mentors are introduced to the larger Looking Glass community. Mentors are chosen based on the level of their mentees’ progress. If a test script is meant to introduce a novice-level DoTogether, mentors whose mentees are about to reach this stage are asked to validate the test script. This ensures that mentors are motivated to validate scripts, and it also helps to discourage the submission of duplicate scripts.
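
A minimal sketch of this routing rule, under the assumption that each mentee’s progress can be summarized as a per-concept level number:

```python
def select_validators(mentors, concept, target_level):
    """mentors maps a mentor's name to their mentees' progress, where
    each mentee is a dict of concept -> highest level reached
    (an assumed data shape)."""
    return [name for name, mentees in mentors.items()
            if any(m.get(concept, 0) == target_level - 1 for m in mentees)]

# A script introducing level-1 ("novice") DoTogether is routed to
# mentors whose mentees are one step away from that stage.
mentors = {"Sam": [{"DoTogether": 0}], "Pat": [{"DoTogether": 2}]}
print(select_validators(mentors, "DoTogether", 1))  # -> ['Sam']
```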

When a child authors a new world, their mentor is able to view the world as well as the tests that have matched it. The mentor can then choose one of these tests for their mentee. This, in essence, is the validation process: the mentor chooses which test, if any, will suggest a better programming practice to the child. After a test has been chosen a certain number of times in this way, it is introduced to the larger Looking Glass community.
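
A minimal sketch of the promotion rule; the post only says “a certain number of times,” so the threshold below is an invented placeholder:

```python
PROMOTION_THRESHOLD = 5  # assumed; the post gives no number

class CandidateTest:
    def __init__(self, name):
        self.name = name
        self.times_chosen = 0

    def record_mentor_choice(self):
        """Count one mentor choosing this test for a mentee; return
        True once the test should graduate to the whole community."""
        self.times_chosen += 1
        return self.times_chosen >= PROMOTION_THRESHOLD
```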

Mentors can also choose to validate a script without suggesting it for their mentee, or they can reject a script that is unfit to be introduced to the Looking Glass community. In the latter case, the mentor can send the test script’s author a message detailing the problems with the script.

In this process, scripts that deal with rarely occurring cases will likely not be validated very quickly. The authors of these orphan scripts are notified if their test has not been validated after a certain amount of time, and they then have the opportunity to resubmit a more generalized version of the script.

Validation is a continuous process; no script is ever completely validated. Rather, scripts continue to be re-validated to ensure that those maintained by the community stay up-to-date and continue to benefit Looking Glass users.

Implementation

Validated tests are introduced to the larger Looking Glass community. When users author worlds, only the tests concerning concepts just beyond their level of experience are run on their worlds. This ensures that a user does not receive hundreds of suggestions, many of which may be completely foreign to them.

The user receives a suggestion in the form of a tutorial, which introduces the new concept to be added to their world. The user can choose between their original world and the revised one, and can also provide feedback on the suggestion.

If the user finds the suggestion helpful and implements the suggested concept, the test script’s author is notified. Conversely, if users consistently respond that a certain test’s suggestions were not helpful, that test should be removed from the set of active tests.
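
A minimal sketch of such a retirement rule; the post gives no numbers, so both thresholds below are invented placeholders:

```python
MIN_RESPONSES = 10        # assumed: don't judge a test on too little feedback
MAX_UNHELPFUL_RATE = 0.8  # assumed: retire above this rejection rate

def should_deactivate(helpful, unhelpful):
    """Retire a test only once enough users have consistently
    reported its suggestions as unhelpful."""
    total = helpful + unhelpful
    if total < MIN_RESPONSES:
        return False
    return unhelpful / total >= MAX_UNHELPFUL_RATE

print(should_deactivate(helpful=1, unhelpful=9))  # True: consistently unhelpful
print(should_deactivate(helpful=0, unhelpful=4))  # False: too few responses
```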

Comments

  • nevelsp said:

    This is an ongoing process. I'm not sure I've even included everything I originally meant to...

    Posted on Jul 05, 2012

  • caitlin said:

    Heh. It's always an ongoing process. You've captured a lot here. It'll be interesting to see how this continues to evolve.

    Posted on Jul 06, 2012
