Evaluating Data

  • Jan 31, 2013

Over the break, I finished running participants in my challenges study, which looks at how challenges affect first-time users' overall enjoyment of and engagement with the system. I'm examining what users tend to do during their first hour with Looking Glass, based on whether they started with a challenge or were required to build a custom scene.


I'm trying to look at this from every angle, including how distracting fiddling with the scene was (which both groups were able to do), which other parts of the interface were explored, and the quality of the worlds created. The hope is that, with more time to dive right into the coding portion, users in the challenges group took that opportunity and poured more thought and detail into their Looking Glass stories. I also hope they had a great time doing it, even though they couldn't pick their own story, or perhaps even because they didn't have to think one up on their own.


I'm leaving the data out of this post, but I'm starting to uncover some trends and make sense of it. I'm eager to pinpoint the most interesting findings, but this stage of the study has been a huge learning experience, so I'm treading carefully. I'm hoping that by next week I'll have captured enough to start writing it up for submission to an upcoming conference.
