Automatically Generated Tutorials Preliminary Results

  • Sep 13, 2012
  • 1 Comment


I’ve just finished running participants through my automatically generated tutorials user study, which tests whether the tutorials improve information transfer to a task similar to the one presented in the tutorial. I was fairly worried early on, but now that all the data is in I’m pretty excited about the results. I have not yet had time to analyze them with Caitlin, so I don’t have anything too exciting right now. But for my control condition, the mean scores on a near-transfer task were 62%, 25%, and 9% for the Do Together, Count, and For Each constructs, respectively. For the experimental condition (automatically generated tutorials), the scores were 78%, 54%, and 25%!

I’m still working on writing the paper; it’s in bad shape for where it should be, considering the deadline is this upcoming Wednesday. However, I have a fairly simple and short (less than a page) explanation of Croquet that I think just might work for this paper. There are still a few holes in the explanation, but I think it’s progress. I managed to smash Croquet into three essential components: models, widgets, and transactions. With just these pieces I’m able to explain how we generate tutorials. I am currently struggling with how to talk about the internal representation of Looking Glass programs for our CHI audience. Abstract syntax trees aren’t really important for this paper, and I think they’re a bit of a distraction.
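
Roughly, the way I think about those three pieces fitting together looks something like the sketch below. This is purely illustrative Python, not actual Croquet or Looking Glass code; the names (Model, Transaction, Widget, generate_tutorial) are made up for the example.

```python
# Hypothetical sketch of the three-part decomposition: models hold program
# state, transactions record changes to that state, and widgets present each
# recorded change as a tutorial step. Not the real system's API.
from dataclasses import dataclass, field


@dataclass
class Model:
    """A piece of program state the user can edit (hypothetical)."""
    name: str
    properties: dict = field(default_factory=dict)


@dataclass
class Transaction:
    """A recorded change to a model, e.g. captured from a demonstration."""
    model: Model
    property: str
    old_value: object
    new_value: object


@dataclass
class Widget:
    """A UI element that can display one tutorial step (hypothetical)."""
    label: str

    def render_step(self, transaction: Transaction) -> str:
        return (f"{self.label}: set {transaction.model.name}."
                f"{transaction.property} from {transaction.old_value} "
                f"to {transaction.new_value}")


def generate_tutorial(transactions: list, widget: Widget) -> list:
    """Turn a recorded transaction log into ordered tutorial steps."""
    return [widget.render_step(t) for t in transactions]


if __name__ == "__main__":
    bunny = Model("bunny", {"height": 1.0})
    log = [Transaction(bunny, "height", 1.0, 2.0)]
    for step in generate_tutorial(log, Widget("Step")):
        print(step)
```

The point of the sketch is just that, once every user-visible change is recorded as a transaction against a model, replaying that log through widgets is enough to produce tutorial steps, which is the framing I’m hoping will carry the explanation in the paper.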

Comments

  • jordana said:

    You can do it!

    Posted on Sep 14, 2012
