Towards a Rule Authoring API
- Jul 13, 2012
- 1 Comment
Michelle has recently joined us (yay!) and we've started putting together a formative study to explore how potential Looking Glass mentors (CS domain experts working with one specific young user) can write rules that deliver their suggestions to other kids who don't have a mentor. Patrick has been working hard to design the interface to support this, but the API for actually doing rule authoring is currently a big question mark. We built a basic infrastructure that allows users to write "rules" in Python that analyze the abstract syntax tree (AST). But the internal API for that is far from ideal for mentors. When writing rules, I find myself opening a bogus Java file so I can use autocomplete to find what I want and then reformatting it for Python. Not ideal.
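To make that concrete, here's roughly the shape a rule takes, sketched against a stand-in representation rather than the real internal classes (the Statement tuple, the flat statement list, and the zero-duration style check below are all invented for illustration):

```python
# Minimal sketch only: Statement and the flat statement list are stand-ins
# for the real internal AST classes, and the zero-duration check is an
# invented example of a style rule.
from collections import namedtuple

Statement = namedtuple("Statement", ["kind", "subject", "args"])

def zero_duration_rule(statements):
    """Flag animation calls that run instantaneously (duration of 0)."""
    return [s for s in statements
            if s.kind in ("move", "turn") and s.args.get("duration") == 0]

program = [
    Statement("say", "bunny", {"text": "Here we go!"}),
    Statement("move", "bunny", {"direction": "FORWARD", "amount": 2, "duration": 0}),
]
print(zero_duration_rule(program))  # flags the instantaneous move
```

The real rules walk the actual AST rather than a flat list, which is exactly where the autocomplete-in-a-bogus-Java-file pain comes from.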
With the kind help of the St. Louis Academy of Science, we've identified some local programmers willing to help us in this effort. We're starting to bring them in, give them a quick introduction to Looking Glass, and then ask them both to make a suggested modification in a world (a series of changes in Looking Glass that we can use to generate a tutorial) and to write a rule that identifies other Looking Glass programs that would benefit from that suggestion. There's a whole interesting problem in developing analogies between the original world and those identified through the rule in order to generate a suggestion. That will be a fun problem for another day.
In the meantime, we're starting to get a sense of how best to gather this data. Our first participant began by writing English descriptions of the rules, and then we asked them to translate those into pseudocode in whatever language style was most comfortable. This helps us understand the potential space of suggestions that mentors might see - a lot of the first participant's thoughts on suggestions focused on streamlining the code in various ways and on correcting mistakes or poor style. It will be interesting to see whether that holds true as we get more participants. In some ways that's a great space for suggestions because it's something we can't really capture through tutorials based on remixes. And we can use their suggestions and rules to potentially clean up code that gets remixed.
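For a flavor of that pairing (this is an invented illustration, not the participant's actual rule): take an English description like "two back-to-back statements that move the same character in opposite directions by the same amount cancel out and can be removed," and a translation into rough Python over a made-up statement representation might read:

```python
# Invented illustration, not an actual participant rule: a streamlining check
# over a made-up dict-based statement representation.
OPPOSITES = {"UP": "DOWN", "DOWN": "UP", "LEFT": "RIGHT", "RIGHT": "LEFT",
             "FORWARD": "BACKWARD", "BACKWARD": "FORWARD"}

def cancelling_moves(statements):
    """Return index pairs of adjacent moves that undo each other."""
    pairs = []
    for i in range(len(statements) - 1):
        a, b = statements[i], statements[i + 1]
        if (a["kind"] == "move" and b["kind"] == "move"
                and a["subject"] == b["subject"]
                and a["amount"] == b["amount"]
                and OPPOSITES.get(a["direction"]) == b["direction"]):
            pairs.append((i, i + 1))
    return pairs
```

Even toy pairs like this hint at the vocabulary (characters, directions, adjacency) a friendlier rule-authoring API would need to expose.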
One of the struggles we've seen thus far is an initial desire to just fix everything in the program. For the suggestion/rule pairs to work, mentors need to make cohesive changes. For example: I found one place where a while loop is a good choice, so I'm going to remove the current code, replace it with a while loop, and then author a rule that looks for code similar to the code I just removed. Our participant seemed to get the hang of it pretty quickly, but we're going to need to think about how to communicate this right off the bat.
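One simple reading of "code similar to the code I just removed" is a run of identical statements long enough that a loop is the better choice. A made-up sketch of that rule (again over a stand-in statement list, not the real AST) might look like:

```python
# Made-up sketch of a rule matching the loop suggestion above: find runs of
# identical consecutive statements that a loop could replace. The statement
# list is a stand-in for the real AST.
def repeated_run_rule(statements, min_repeats=3):
    """Return (start, end) index ranges of runs of identical statements."""
    runs = []
    start = 0
    for i in range(1, len(statements) + 1):
        if i == len(statements) or statements[i] != statements[start]:
            if i - start >= min_repeats:
                runs.append((start, i))
            start = i
    return runs

# e.g. a character hopping four times via copy-paste would match as one run
hops = ["hop", "hop", "hop", "hop", "say hi"]
print(repeated_run_rule(hops))  # [(0, 4)]
```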
Two more today, another on Monday. And then I need to get on scheduling the next round.
Comments
jordana said:
Hmm. If their first impulse is to change and fix code, do you have them immediately express why they want to change it first? Is it a personal design choice, or a rule of thumb that they can identify? I would make them record that rule of thumb, make it into a general case, and *then* apply it to the offending code as a reward. Then move on to the next code piece. But, perhaps you two are already doing that. :)