Evolving Interface Thoughts

Posted on Oct 28, 2013 in Physical Computing

Has learning to build tangible interfaces changed your view of what constitutes good physical interaction, or has it strengthened your initial ideas?

As I’ve been working on my very explicit, tangible film-audio player, I probably have more in common with my initial definition (the Gilbreths’ methods) than I’d like to admit. If they cribbed various hoists and lifting techniques, I feel like I’ve fallen into the array of interactions associated with LP play, relying on the knob-and-plate interface. To reiterate the device interface: the large potentiometer ‘LP’ plate controls the motor speed (not that I’ve managed to overcome the stall amperage my motor needs or to get the belt system actually working… alas). The light necessary for the audio-photocell sensor system is tied to the on-switch, acting as a ‘power’ light while also serving an internal function. The motor-direction momentary button is a bit more ambiguous, but it is somewhat audible when the motor turns, a sort of click for viewer recognition. The volume potentiometer is unmarked but sits directly adjacent to the speaker for visual effect.
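To make the pot-to-motor relationship concrete, here is a minimal sketch of that mapping, with hypothetical names and assuming an Arduino-style 10-bit ADC and 8-bit PWM. The stall floor is my guess at a workaround for the stall-amperage problem mentioned above: below a certain duty cycle a small DC motor just hums, so the sketch never commands a value in that dead zone.

```cpp
#include <cassert>

// Hypothetical sketch: rescale a 10-bit potentiometer reading (0..1023)
// to an 8-bit PWM motor-speed value (0..255), with a floor so the motor
// is never commanded into its stall region.
const int kStallFloor = 60; // assumed minimum PWM that keeps the motor turning

int potToMotorSpeed(int potReading /* 0..1023 */) {
    if (potReading <= 0) return 0;            // pot fully off: motor off
    int pwm = (int)((potReading * 255L) / 1023); // 10-bit -> 8-bit rescale
    return pwm < kStallFloor ? kStallFloor : pwm; // skip the stall dead zone
}
```

On the real board this value would be written out with something like `analogWrite()` each loop; the point here is just the mapping.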

It makes sense: borrowing those LP gestures is a pretty straightforward example of relying on cultural ‘models,’ in Norman’s sense, and mapping a slightly altered process behind them. Think of the original iPods, where a swirling finger skimmed through songs or adjusted volume, or before that, the minimalist Braun CD players that foregrounded the revolving disc not as interface but as visual feedback. The model I’ve fallen into is far more ubiquitous in audio culture than one might suspect in the age of MP3s. It is as though the LP as medium, surface, and visual indicator is still the dominant series of interfaces we anticipate with a musical experience.

That said, even if I’m borrowing an array of cultural ‘models,’ very familiar tangible controls with audible and visible feedback, I haven’t tapped the larger environmental constellation of gestures I’d really appreciated in the Gilbreths’ bricklaying machine and Victor’s rant. Given the amount of simple hard wiring and sensor-mapping difficulty involved in producing a simple, discrete, analog reader ‘box,’ perhaps this is to be expected. Part of the issue is that those original notions of interaction were built on several parameters of input. Conceptually, I can imagine writing an analogRead routine that grabs max and min values to feed back into a larger, coordinated program: a series of sensors that could, via code, self-adjust to environment and users. But perhaps that raises the question of whether that mix of more ambient, implicit interaction and explicit interaction is desirable or good. What’s the end task, the end action we want to accomplish, and how much re-mapping of our internal models is necessary? For the Gilbreths and the brick wall, that larger gestural mapping made sense for the mechanical installation of the human hand into a production stream. For Victor’s push to move beyond ‘pictures under glass,’ I’m still curious what new actions we’re mapping onto existing cultural models of interaction, behavior, and gesture.
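The self-adjusting sensor idea can be sketched roughly as follows. This is a hypothetical, standalone version (on an Arduino the raw value would come from `analogRead()`): each sensor tracks the minimum and maximum raw values it has seen and rescales new readings against that observed range, so the same code adapts to different rooms, lighting, and users.

```cpp
#include <cassert>

// Hypothetical self-calibrating sensor: remembers the min/max raw values
// observed so far and normalizes each new reading into 0..255 against
// that range. Assumes a 10-bit ADC (raw values 0..1023).
struct AutoRangeSensor {
    int lo = 1023; // lowest raw reading seen so far
    int hi = 0;    // highest raw reading seen so far

    // Feed one raw reading; returns it rescaled to 0..255
    // relative to the range observed so far.
    int read(int raw) {
        if (raw < lo) lo = raw;
        if (raw > hi) hi = raw;
        if (hi == lo) return 0;      // no range observed yet
        return (int)(((long)(raw - lo) * 255) / (hi - lo));
    }
};
```

In use, one such struct per sensor would let a coordinated program compare photocell, pot, and other inputs on a common 0–255 scale regardless of ambient conditions, which is the “self-adjust to environment” part of the idea.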

If anything, play-testing showed me the importance of feedback and of registering action, far more than the specifics of any particular tactile form, in confirming the conceptual ‘model’ at work. In some sense (given my DC motor issues), I could have gone for a music-box mechanism, with a hand-crank feel to the feeder, almost as easily as I went for the LP plate system. Really, it was the link of either sound differentiation or potential visual cues that most testers responded to in playing with the set-up. Responsiveness and feedback, when successful, seem to do much of the work of strengthening intuitive responses or subtly reprogramming learned behavior. Given the importance of those cues in user testing (and in the interactions generally), it seems like the goal for an everyday interaction would be to close the gap, or lower the threshold, between implicit and explicit interaction: to find a form and visible response that, if not immediately matching people’s models, requires very few uses for habituation.