User Model: Tracking Understanding of Methods

  • Nov 01, 2012
  • 1 Comment

I have begun to think about how to track a user's understanding of methods.  The first conclusion I have reached is that there is no good automatic or manual way to measure "understanding."  Looking Glass prevents users from creating compile-time and runtime errors, so the only "incorrect" code is code that fails to accomplish its intended function.  Since users do not build programs against any spec and neither a human nor a computer can read minds, there is no way to tell whether something is a bug or by design (we might be able to recommend a more efficient way of accomplishing the same thing using Aaron's code tests and the mentoring system, but that does not mean the original code was "incorrect").  The best we can do is come up with a heuristic that approximates understanding as closely as possible.

A good heuristic is a count of how many times a user has called a method.  We can assume that if a user has never called a method, they do not understand how to use it.  If a user calls a method a lot, they probably understand it; if they have only called it a few times, they probably do not understand it well.  This heuristic rests partly on the fact that people learn from experience: the more experience a user has with a method, the more likely they are to understand it, all else being equal.  The second basis is operant conditioning (reinforcement learning): users keep using methods that give them the desired output (reward) and avoid methods that do not (punishment).  The heuristic is handicapped by the fact that users might find some methods intrinsically uninteresting and would not use them much even if they understood them, but this can be worked around.
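
As a rough sketch of what this bookkeeping might look like (the class, method names, and explicit `record_call` API are hypothetical illustrations, not Looking Glass internals, which would presumably log calls as the user's program runs):

```python
from collections import Counter

class MethodUsageTracker:
    """Tracks how many times a user has called each method (sketch only)."""

    def __init__(self):
        self.call_counts = Counter()

    def record_call(self, method_name):
        self.call_counts[method_name] += 1

    def understanding_estimate(self, method_name):
        # Heuristic: more calls -> more likely the user understands the method.
        return self.call_counts[method_name]


tracker = MethodUsageTracker()
tracker.record_call("move")
tracker.record_call("move")
tracker.record_call("turn")
print(tracker.understanding_estimate("move"))  # 2
print(tracker.understanding_estimate("say"))   # 0 -> probably not understood yet
```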

We could pick new methods to expose to users by picking the methods that the user has called the least.  We would then find a remix from a world that uses a selected method to present to the user as a tutorial.  To ensure that we are not presenting the user with uninteresting methods, we can either explicitly exclude certain methods if we know in advance that users do not find them interesting, or we can create a rule where we only give users tutorials taken from worlds with a minimum rating in the community.  If there are no remixes containing a method or none of the remixes are from worlds with a sufficiently high rating, we can decide that the method is uninteresting and move on to the next least used method.
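
A sketch of how that selection might work, assuming we already have per-user call counts, a set of methods we know in advance are uninteresting, a lookup from method to remixable worlds, and a community rating threshold (all of these names and values are illustrative assumptions):

```python
MIN_RATING = 4.0          # assumed minimum community rating for a tutorial world
EXCLUDED = {"pause"}      # methods we know in advance users do not find interesting

def pick_tutorial(call_counts, all_methods, worlds_using_method):
    """Return (method, world) for the least-used method that has a highly
    rated remixable world, or None if nothing qualifies."""
    candidates = sorted(
        (m for m in all_methods if m not in EXCLUDED),
        key=lambda m: call_counts.get(m, 0),
    )
    for method in candidates:
        rated = [w for w in worlds_using_method(method) if w.rating >= MIN_RATING]
        if rated:
            return method, max(rated, key=lambda w: w.rating)
        # No remix from a sufficiently rated world: treat the method as
        # uninteresting and move on to the next least-used method.
    return None
```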

We also need to consider how to handle overloaded methods.  My recommendation is to keep two counts for each overloaded method: a count for each individual overload and a combined count across all overloads.  We would treat overloads like separate methods and assign each a call score that is a linear combination of its own count and the combined count for the method.  This scheme accounts for the fact that overloaded methods are in some ways the same method and in other ways completely different methods.
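
With an assumed blending weight w between 0 and 1 (the weight is a tuning parameter, not a value chosen in this post), the effective count for a particular overload might look something like this sketch:

```python
def overload_score(own_count, combined_count, w=0.7):
    """Blend an overload's own call count with the combined count across
    all overloads of the method. w is an assumed tuning parameter."""
    return w * own_count + (1 - w) * combined_count

# A user who has called move(FORWARD) 10 times but never move(LEFT)
# still gets partial credit for the LEFT overload via the combined count.
print(overload_score(own_count=0, combined_count=10))   # 3.0
print(overload_score(own_count=10, combined_count=10))  # 10.0
```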

The two-count approach to overloaded methods is preferable to keeping a single count shared by all overloads of a method while also keeping counts for enums used as parameters (e.g. Left, Right).  The enums are not particularly useful to track because Move Left and Turn Left are not clearly related (try Move Left and Turn Left from the camera's perspective and you will see how unintuitive the relationship is).

We can extend the tracking of single methods to tracking sets of methods called in a specific order.  We could keep counts for every ordered sequence of methods of length X, where X is the largest value that is computationally feasible.
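
One computationally cheap way to approximate this is to count contiguous runs of calls (n-grams) over the user's call log rather than all possible orderings; the sketch below assumes X is small, say 2 or 3, since the number of distinct sequences grows quickly:

```python
from collections import Counter

def count_sequences(call_log, max_len=3):
    """Count every contiguous run of method calls of length 2..max_len.

    max_len plays the role of X; larger values quickly become expensive
    because the number of distinct sequences explodes.
    """
    counts = Counter()
    for n in range(2, max_len + 1):
        for i in range(len(call_log) - n + 1):
            counts[tuple(call_log[i:i + n])] += 1
    return counts

log = ["move", "turn", "move", "turn", "say"]
print(count_sequences(log)[("move", "turn")])  # 2
```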

In general, the approach to tracking method use is to see what methods the user has called and try to expose them to things they either have not tried or have avoided due to difficulty or not knowing any interesting uses.

Comments

  • kyle

    kyle said:

    Your heuristic sounds good but you might also want to consider frequency. So for example have they not used a method in several months? Then maybe they don't really understand it.

    Posted on Nov 02, 2012
