Interfaces In Looking Glass

  • Nov 02, 2012
  • 1 Comment

This week I sketched out how I think Interfaces could fit into Looking Glass.  I refer to interfaces as "Roles" in the sketches and the rest of this blog post.  I'm not in love with that word, and I think it might collide with the language used for remixing, but I think it conveys what an interface does in a user-friendly way.  I also don't want to call it an interface at the moment since I am still wrestling with whether Roles should provide a means of sharing functionality.

Defining Roles

I thought we could define Roles in much the same way we define event listeners at the world level.  Here's a sketch of how that might look:

I'm imagining using the normal procedure and function creation dialog, just without the method bodies.
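To make that a little more concrete, here's a rough sketch of what a Role could amount to under the hood, assuming Roles map onto something like plain Java interfaces.  The Role name ("BadGuy") and its members are hypothetical examples, not anything that exists in Looking Glass today.

```java
// Hypothetical sketch: a "Bad Guy" Role declared as a plain Java interface.
// The procedures and functions it lists have no bodies here; each object
// that plays the Role supplies its own implementations.
public interface BadGuy {
    // A procedure the Role requires.
    void takeDamage(double amount);

    // A function the Role requires.
    boolean isDefeated();
}
```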


Adding Roles

Here's where we could put the button to make a particular object play a role.  I wrestled with the language here and provided a couple options for what that button could say.  I like "Play the Role of" because I think it captures the behavior in a storytelling way.
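Continuing the hypothetical sketch from above, pressing that button on a particular object could amount to making the object's class implement the Role, with the object supplying a body for everything the Role declares.  Dragon and its health field are made-up stand-ins, not part of Looking Glass.

```java
// Hypothetical sketch: a dragon that has been told to "Play the Role of" Bad Guy.
public class Dragon implements BadGuy {
    private double health = 100.0;

    @Override
    public void takeDamage(double amount) {
        // The dragon supplies its own body for each procedure the Role declares.
        health = Math.max(0.0, health - amount);
    }

    @Override
    public boolean isDefeated() {
        return health <= 0.0;
    }
}
```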

 

Event Handlers with Roles

Event listeners could work with Roles by having them appear in the drop-down of options for one of the groups being listened for.  In this sketch, the object playing the "Bad Guy" role that fired the event is assigned to a variable called "baddie".
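As a rough sketch of what that could give the user, assuming the drop-down narrows one of the groups to objects playing the "Bad Guy" Role, the firing object could arrive in the event body already typed as the Role.  The listener interface below is a made-up stand-in for the real Looking Glass event API, and it reuses the hypothetical BadGuy and Dragon types from the earlier sketches.

```java
// Hypothetical sketch: the collider from the Role-filtered group arrives
// already typed as BadGuy, so the user can call Role members on it directly.
public class BadGuyCollisionExample {

    interface BadGuyCollisionListener {
        void collisionOccurred(BadGuy baddie);
    }

    public static void main(String[] args) {
        // The body the user would write: "baddie" is the object playing the
        // Bad Guy Role that fired the event.
        BadGuyCollisionListener listener = baddie -> {
            baddie.takeDamage(25.0);
            System.out.println("Defeated yet? " + baddie.isDefeated());
        };

        // Simulate the runtime firing the event with a dragon as the collider.
        listener.collisionOccurred(new Dragon());
    }
}
```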

 

Individual Event Listeners vs. Groups

In a previous blog post I talked about how I liked the old event handling language.  The Therapy IDE had a model where each object would have its own "collidesWith [other object]".  This made more sense in terms of presenting the user with an easy-to-read model, but after reading the Therapy IDE paper, I have a better understanding of why the change was made to the new lists-colliding-with-lists model.  Efficiency was a key concern of that study since therapists don't have a lot of time to devote to creating the games.  In the old model, you would need to manually create a "collidesWith" for each object.  One participant even said something along the lines of "Oh, you mean I have to do that for all of them?".  This made me appreciate the new model more, and it is why I included the "groupOne"/"groupTwo" collision model in these sketches.  I think interfaces and one other change could make the new model much easier to work with and think about.
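To spell out the efficiency difference, here's a rough sketch of the two models side by side.  Every name in it is a hypothetical stand-in rather than the real Looking Glass or Therapy IDE API; the point is only how many registrations the author has to make.

```java
import java.util.List;

// Hypothetical sketch contrasting per-object "collidesWith" listeners with
// a single group-on-group listener.
public class CollisionModels {

    interface CollisionHandler {
        void onCollision(String first, String second);
    }

    public static void main(String[] args) {
        List<String> heroes = List.of("knight", "wizard");
        List<String> monsters = List.of("dragon", "troll", "ogre");
        CollisionHandler handler =
                (a, b) -> System.out.println(a + " collided with " + b);

        // Old model: the author wires up collidesWith by hand for every
        // pairing they care about -- 2 x 3 = 6 separate registrations here.
        for (String hero : heroes) {
            for (String monster : monsters) {
                registerCollidesWith(hero, monster, handler);
            }
        }

        // New model: one registration covers every cross-group pairing.
        registerGroupCollision(heroes, monsters, handler);
    }

    static void registerCollidesWith(String a, String b, CollisionHandler h) {
        System.out.println("registered: " + a + " collidesWith " + b);
    }

    static void registerGroupCollision(List<String> groupOne, List<String> groupTwo,
                                       CollisionHandler h) {
        System.out.println("registered one listener for groupOne x groupTwo");
    }
}
```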

Accessing The Colliding Objects

The other change I'm referring to is being able to access the objects which triggered the event.  Currently you can get them with the "getModels" method provided to all collision listeners.  A problem with this is that the "Models" are actually MovableTurnables, not Models.  If you are trying to assign the result of getModels to a variable, it's not clear at all that the type should be MovableTurnable.  Adding to the confusion, groupOne and groupTwo are arrays of SThing, so you might assume that the models would be Things.  They're also not in a consistent order: the first object in the list could be part of groupOne or groupTwo.  I think being able to assign specific types to groupOne and groupTwo would be awesome, and then you could get to the colliders through variables that are preassigned to them.
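Here's a rough sketch of what that proposal could look like, assuming groupOne and groupTwo could carry specific types and the two colliders arrived as preassigned, correctly typed variables.  Only getModels, SThing, and MovableTurnable above are the current API; the generic listener and the Hero/Monster types below are hypothetical.

```java
import java.util.List;

// Hypothetical sketch: a collision listener where the first parameter always
// comes from groupOne and the second from groupTwo, with no casting of
// getModels results and no guessing about order.
public class TypedCollisionSketch {

    interface CollisionListener<A, B> {
        void collisionStarted(A fromGroupOne, B fromGroupTwo);
    }

    record Hero(String name) {}
    record Monster(String name) {}

    public static void main(String[] args) {
        List<Hero> groupOne = List.of(new Hero("knight"));
        List<Monster> groupTwo = List.of(new Monster("dragon"));

        CollisionListener<Hero, Monster> listener = (hero, monster) ->
                System.out.println(hero.name() + " ran into " + monster.name());

        // Simulate the runtime firing the event for one cross-group pair.
        listener.collisionStarted(groupOne.get(0), groupTwo.get(0));
    }
}
```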

 

Publish and Subscribe

I also played around with Scratch this week to see if I could draw any inspiration from their events model.  They follow the same sort of collision model as the old IDE, where individuals have their own code.  However, they provide the ability to publish and subscribe to messages, which makes for some cool possibilities.  You can broadcast a message when something happens, and then have another object somewhere listening for it and reacting when it hears the message.  This could replace a lot of busy while loops that continuously check a condition.  It could also reduce the need for global variables to store the state of something.  I need to think about whether it fits enough needs to be worth pursuing, but I think it has potential.
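As a thought experiment, here's a minimal publish/subscribe sketch, assuming messages are just named events with reactions attached.  The Broadcaster class is hypothetical; it isn't part of Looking Glass or Scratch, it just shows the shape of the idea.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of broadcast/receive messages.
public class Broadcaster {
    private final Map<String, List<Runnable>> subscribers = new HashMap<>();

    // An object "listens" for a message instead of polling in a busy loop.
    public void subscribe(String message, Runnable reaction) {
        subscribers.computeIfAbsent(message, key -> new ArrayList<>()).add(reaction);
    }

    // Broadcasting a message runs every reaction registered for it.
    public void broadcast(String message) {
        for (Runnable reaction : subscribers.getOrDefault(message, List.of())) {
            reaction.run();
        }
    }

    public static void main(String[] args) {
        Broadcaster events = new Broadcaster();

        // Instead of a while loop asking "has the princess been rescued yet?",
        // the castle just reacts when it hears the message.
        events.subscribe("princessRescued",
                () -> System.out.println("Castle: raise the victory flag!"));

        // Somewhere else in the story, the hero broadcasts the message.
        events.broadcast("princessRescued");
    }
}
```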

 

Comments

    caitlin said:

    Full interfaces come with a lot of conceptual and UI mechanics. Which leads me to wonder how much we could get from implicit interfaces. Say that when you make a list of objects that gets used in an event or a list-based loop, we look for the shared calls and allow you to use any of them. Are there cases where that breaks down? Or is that enough a lot of the time?

    Posted on Nov 02, 2012
