Some Thoughts on Rules for Offering Suggestions

  • Jul 06, 2012

Sometimes experienced users can glance at what you're doing and see an opportunity that you had no idea existed before their suggestion. Currently, this is something that computing environments struggle to capture. We have tutorials; if you know what you want to learn and can find a tutorial that is close enough to that goal, you can be successful. We have recommender systems that compare your history with others' histories to identify things you haven't yet found that may be useful or appealing. My current sense is that these still live largely in the media-recommendations space. So, they don't necessarily attempt to build a specific skill path; instead, they try to narrow the set of choices down to things that are more likely to be acceptable, relevant, or appealing.

We envision a slightly different basic path. A domain expert looking at a world created by a young learner will make a specific suggestion, in this case captured as a change to the code. In our system, we can then present suggestions as tutorials that reconstruct the modified code, but that doesn't necessarily have to be the case. From the recommender-system perspective, the question then becomes: we have a set of code suggestions; how do we determine which ones to offer to a particular user? And this is where rule authoring comes in. The probability that another user will have exactly the same code as the world the domain expert originally looked at seems low. So, we need a way for domain experts to generalize their rule so that we can identify other contexts in which it applies. We have the first few participants scheduled in a study to explore how to enable experts to generalize.
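To make that a little more concrete, here's a minimal sketch of what the matching step might look like, assuming we represent a learner's code as nested tuples and an expert's generalized rule as a pattern in which some elements are wildcards. The names (`WILDCARD`, `matches`, `suggestions_for`) and the tuple encoding are purely illustrative assumptions, not anything we've actually built.

```python
# A minimal sketch: a "rule" pairs a generalized code pattern with a suggestion.
# Code is represented as nested tuples; '*' marks an element the expert decided
# does not need to match exactly. All of these names are hypothetical.

WILDCARD = "*"

def matches(pattern, code):
    """Return True if the learner's code fits the generalized pattern."""
    if pattern == WILDCARD:
        return True
    if isinstance(pattern, tuple) and isinstance(code, tuple):
        return (len(pattern) == len(code) and
                all(matches(p, c) for p, c in zip(pattern, code)))
    return pattern == code

def suggestions_for(code, rules):
    """Collect every authored suggestion whose pattern matches this code."""
    return [rule["suggestion"] for rule in rules if matches(rule["pattern"], code)]

# Example: a rule generalized from "move the bunny forward 1" so that it fires
# for any character and any distance, and suggests adding a 'say' afterwards.
rules = [
    {"pattern": ("move", WILDCARD, "forward", WILDCARD),
     "suggestion": "Try having the character say something after it moves."},
]

learner_code = ("move", "dragon", "forward", 3)
print(suggestions_for(learner_code, rules))
```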

Once we have a rule, we intend to run the badge tests on the pre- and post-modification worlds to identify which badge skill the suggestion introduces. Then, rules that offer suggestions not corresponding to a current badge skill can be used to drive the development of new badge trails. I find it interesting to think about what kinds of things we'll capture in suggestions. I see several different potential clusters:

1) identifying different opportunities to use a given skill. The same basic programming ideas often appear in many different contexts. While experienced programmers may recognize these contexts as opportunities to use a given idea, inexperienced programmers may not. So, even after a user has used a given skill once or twice, it may be interesting to point out places where they can use an existing skill in their toolset to do something new.

2) different action domains. This is probably the most similar to traditional recommender systems. Kids who write stories about soccer probably want different kinds of actions than kids who write stories about pirates. It remains to be seen how much users stay in a similar domain across stories, but my guess is that they will to at least some extent. So, there's value in having two suggestions that introduce the same content but within different domain styles.

3) identifying error conditions. When learning a new programming skill, users are likely to create ill-formed code in a variety of ways. The rules could also look for these fledgling attempts to use a new skill and demonstrate how to go from incorrect to correct. This is also interesting from the perspective of capturing the real mistakes novices make. The number of times a given error-correction suggestion matches code, versus the number of times it doesn't, provides a quantitative measure of how common that error is in learning the new skill (a rough sketch of that counting idea appears after this list). I'm not yet sure what we'll do with that information. But perhaps, as we think about some of the more advanced levels of personalized tutorials, slightly incorrect code that needs to be fixed is something we can offer to more advanced users.
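Here's that counting idea as a small sketch, reusing a pattern matcher like the one earlier; the rule structure and every name here are again hypothetical rather than anything already in the system.

```python
from collections import Counter

def tally_error_rules(worlds, error_rules, matches):
    """Count, per error-correction rule, how often its pattern matches learner code.

    `worlds` is an iterable of code snippets, `error_rules` is a list of
    {"name": ..., "pattern": ...} dicts, and `matches` is a pattern matcher
    like the earlier sketch. All of these shapes are assumptions.
    """
    hits, checks = Counter(), Counter()
    for code in worlds:
        for rule in error_rules:
            checks[rule["name"]] += 1
            if matches(rule["pattern"], code):
                hits[rule["name"]] += 1
    # The hit rate is a crude estimate of how common each mistake is.
    return {name: hits[name] / checks[name] for name in checks}
```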

While I'm typing this, it occurs to me that perhaps the code itself, with structured options, might be an interesting interface to explore for rule authoring. I'm not entirely sure how this should play out, but if we gave domain experts the original code they selected to modify, perhaps they could indicate how to generalize it by marking which elements need to match exactly and which ones could have other options. I wonder if we should do that as part of the pseudocode process: ask them to draw out on the code itself what can be generalized and what can't, and then write pseudocode that captures that.
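A minimal sketch of how those markings could turn into a generalized pattern, assuming the same tuple encoding and wildcard convention as before; `generalize` and `keep_exact` are made-up names for illustration.

```python
WILDCARD = "*"

def generalize(original_code, keep_exact):
    """Turn the expert's markings into a pattern: unmarked elements become wildcards."""
    return tuple(
        element if index in keep_exact else WILDCARD
        for index, element in enumerate(original_code)
    )

# The expert selected this line in the learner's world...
original = ("move", "bunny", "forward", 1)
# ...and indicated that only the action and direction must match exactly.
print(generalize(original, keep_exact={0, 2}))  # ('move', '*', 'forward', '*')
```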

At the core, it seems like our rule authoring to offer coding suggestions should be seen as a new kind of recommender system, one in which the rules themselves are crowdsourced. My sense is that traditional recommender systems develop suggestions primarily through statistics over usage data: if the fact that you liked this book and that book suggests a high probability that you will like some new book, that new book should appear as a recommendation. On some level, perhaps we should maintain some of the characteristics of that space. It seems reasonable that a given author might find a subset of some kinds of actions and animations more appealing than others. Traditional recommender-system ideas might help us capture that. That suggests we should also start a thorough literature search on recommender systems.
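As a hedged sketch of what borrowing from that space might look like: among the suggestions whose rules match, we could rank by how often users with similar acceptance histories took each one. The similarity measure and data shapes below are assumptions for illustration only, not a worked-out design.

```python
def rank_matching_suggestions(matching, user_history, all_histories):
    """Order matched suggestions by acceptances among users with similar histories.

    `user_history` and each entry of `all_histories` are sets of suggestion ids
    the user previously accepted; these shapes are illustrative assumptions.
    """
    def similarity(other):
        # Jaccard similarity between the two sets of accepted suggestions.
        union = user_history | other
        return len(user_history & other) / len(union) if union else 0.0

    scores = {
        suggestion: sum(similarity(other) for other in all_histories if suggestion in other)
        for suggestion in matching
    }
    return sorted(matching, key=scores.get, reverse=True)
```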
