Paper Writing

  • Sep 06, 2012

After finishing up user studies for the Mentoring rule-writing API, I've been working on writing the paper. The two parts I have struggled with most are framing the study and its contribution and analyzing the data, so I'm going to attempt to describe my thoughts on each.


Basis of the Mentoring system:

We would like adults with programming experience who personally know students using Looking Glass to be able to edit those students' worlds with improvements. Since there are not enough of these mentors for every student, a mentor in this system would then write a "rule" that checks the code of other kids' worlds to see whether the same suggestion would be appropriate.
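To make the idea concrete, here is a minimal sketch of what such a mentor-written rule might look like. This is entirely hypothetical: the names (`World`, `Statement`, `applies_to`, `suggestion`) and the simplified world model are illustrative assumptions, not the actual Looking Glass API, and the example condition (repeated statements that could become a loop) is just one plausible suggestion.

```python
# Hypothetical sketch of a mentor-written rule. The world model here
# (a flat list of statements) is a simplifying assumption for
# illustration, not the real Looking Glass representation.

from dataclasses import dataclass, field


@dataclass
class Statement:
    action: str          # e.g. "move", "turn"
    target: str          # which character the statement acts on
    repeated: int = 1    # how many times it appears consecutively


@dataclass
class World:
    statements: list = field(default_factory=list)


def applies_to(world):
    """Return True if the mentor's suggestion fits this world: here,
    'collapse consecutive duplicate statements into a loop'."""
    return any(s.repeated >= 3 for s in world.statements)


suggestion = "These repeated statements could be replaced with a loop."

# Usage: run the rule against another student's world.
world = World([Statement("move", "bunny", repeated=4)])
if applies_to(world):
    print(suggestion)
```

The point of the sketch is the shape of a rule: a predicate over a student's code plus a suggestion to deliver when the predicate holds.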

The Questions of this study:

1. Can adults with programming experience make meaningful changes and identify which worlds have code that could be changed the same way? 

2. If so, what information and infrastructure do mentors need to be able to code these rules so that students receive meaningful suggestions?

Analyzing the Data

The resulting data from this study are the rules mentors wrote, both in English and in pseudo-code. We have chosen to use Grounded Theory to qualitatively analyze the meanings of the coded rules. Grounded Theory works by comparing pieces of data against each other to find patterns and meanings. One website that gives a good description of Grounded Theory and how to go about it states that it is really about "looking for groups of patterns of behavior" and developing a theory based around these. I have been working on developing these patterns by looking at the code and finding common structures, terms, ideas, and assumptions between rules. From these recurring concepts, I have started to connect different themes and hunches about how participants approach writing rules.

However, Grounded Theory also calls for identifying a core category and a main concern. I definitely need to look into this part more to determine how we can use it in our analysis, and, possibly more importantly, what our core category is...

In between my attempts at this Grounded Theory analysis, I have also been outlining my thoughts about the data and the studies, trying to distinguish the important from the unimportant. For example, it is hard to find patterns in the rules that I can label as "laziness" (though maybe I should attempt this), but laziness did seem to be a factor in the correctness of the rules. Many of these sessions were run after the participants' work days, and many participants talked about having children, so their lives are most likely very busy. If mentors are prone to laziness, they will probably want to skip the same code writing our participants wanted to skip. This means we either need to motivate correctness or build the API in a way that lets mentors be both correct and lazy. This type of observation could affect our design, but it comes from unquantifiable data (maybe I should have noted the number of times participants raised their eyebrows and asked, "you want me to code all of that??" "yes, please").
