

Iteration 1





My objective for Iteration 1 was to connect both Haplys to one program and use their positions to control pitch (left Haply) and volume (right Haply), making this data audible through Pure Data, a music synthesis software. I was responsible for connecting the Haplys and producing sound from them, while my teammates focused on haptic rendering. My motivation was to fully connect the components and set up a foundation our team could build on for the next iterations.

My approach was to clean up the Processing code (based on Hello Wall) for our purposes, set up both Haplys, send position data to Pure Data, map this data to sound, and then experiment with different interactions/mappings of movement to sound properties (pitch and volume). The image on the left shows the Pure Data patch receiving OSC data from Processing: Processing reads position data from both Haplys and sends it as OSC messages to Pure Data, which then generates sound from it.

Link to the code
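
As a rough illustration of this data flow, the sketch below shows how end effector positions could be sent from Processing to Pure Data as OSC messages. It assumes the oscP5/netP5 libraries; the address patterns, port number, and the posLeft/posRight variables are placeholders rather than the exact names in our code.

```
// Minimal sketch (not our exact code) of the Processing -> Pure Data OSC link.
// Assumes the oscP5/netP5 libraries; posLeft/posRight stand in for the
// end effector positions read from each Haply.
import oscP5.*;
import netP5.*;

OscP5 osc;
NetAddress pd;

PVector posLeft  = new PVector();  // would be updated from the left Haply
PVector posRight = new PVector();  // would be updated from the right Haply

void setup() {
  osc = new OscP5(this, 12000);             // local listening port (unused here)
  pd  = new NetAddress("127.0.0.1", 8000);  // port Pure Data listens on (assumed)
}

void draw() {
  OscMessage left = new OscMessage("/haply/left");
  left.add(posLeft.x);
  left.add(posLeft.y);
  osc.send(left, pd);

  OscMessage right = new OscMessage("/haply/right");
  right.add(posRight.x);
  right.add(posRight.y);
  osc.send(right, pd);
}
```

On the Pure Data side, the incoming messages can then be routed by address and used to drive the synthesis.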


The video on the left shows both Haplys being used to play "music". The left Haply controls pitch on the y axis, and the right Haply controls volume on the x axis and vibrato on the y axis.

Right Haply: Volume + Vibrato

The video on the left demonstrates how the user controls volume and vibrato with the right Haply. Volume is mapped to the x axis of the end effector. The volume increases towards the right, with no volume to the left of the x midpoint. Shaking the end effector along the y axis modulates the pitch; doing so rapidly produces a vibrato.
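
A rough sketch of this mapping is below; the workspace range, constants, and function names are assumptions rather than the exact values in our code.

```
// Illustrative mapping for the right Haply.
// x to the right of the midpoint raises volume; fast y motion (shaking)
// increases vibrato depth.
float mapVolume(float x) {
  // silent left of the x midpoint (x = 0), full volume at the right edge
  return constrain(map(x, 0.0f, 0.1f, 0.0f, 1.0f), 0.0f, 1.0f);
}

float prevY = 0.0f;

float vibratoDepth(float y) {
  float speed = abs(y - prevY) * frameRate;    // rough y velocity per second
  prevY = y;
  return constrain(speed / 0.5f, 0.0f, 1.0f);  // faster shaking -> deeper vibrato
}
```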

Left Haply: Pitch

The video on the left demonstrates how the user controls pitch with the left Haply. The frequency of the tone is mapped to the y axis of the end effector. The pitch increases upwards (closer to the y origin).
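
A sketch of this mapping, with assumed workspace bounds and an assumed two-octave frequency range:

```
// Illustrative pitch mapping for the left Haply: smaller y (closer to the
// y origin, i.e. higher in the workspace) gives a higher frequency.
// The bounds and frequency range are assumptions, not our exact values.
float mapPitch(float y) {
  float t = constrain(map(y, 0.02f, 0.12f, 1.0f, 0.0f), 0.0f, 1.0f);
  return lerp(220.0f, 880.0f, t);  // 220 Hz (low) to 880 Hz (high)
}
```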

Reflection

From this iteration, I learned the importance of being able to test and physically use the Haply for this project. As a team, we discussed the desired interactions and behavior of the system and had an idea of what it would feel like and how the user should interact with it. But by actually setting up both Haplys and using them, I noticed much more. For example, I found it difficult and uncomfortable to have both Haplys mapped to the same axes. Changing volume and pitch exclusively along the x or y axis was much harder than using primarily the y axis for pitch and the x axis for volume (different axes).

Reflecting on the haptic experience itself, I found that the mapping of sound properties to end effector movements needed refinement. Making the volume loud then soft (and vice versa) took too much movement; the mapping should require subtler motion so that individual notes can be distinguished instead of a continuously varying tone. In addition, it was difficult to move the pitch to exact whole notes (it sounded off pitch); I think the addition of force feedback in our next iteration will help by "pulling" or "snapping" the effector to notes. I found the use of vibrato with the right hand satisfying; it added a natural and responsive feel to the Haply and the resulting sound. On a technical note, I encountered some delay between Processing and Pure Data, which I believe can be resolved in the code in the next iteration.
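
As a sketch of how that "snapping" could work (not yet implemented), a spring force could pull the end effector toward the nearest note position each frame; the note spacing and stiffness below are placeholder values that would need tuning by feel.

```
// One possible way to realize note "snapping" in the next iteration:
// a spring force pulling the end effector toward the nearest note position.
// Spacing, stiffness, and names are placeholders, not decided values.
float noteSpacing = 0.01f;  // metres between note "detents" along y (assumed)
float kSnap = 200.0f;       // spring stiffness in N/m (assumed)

PVector snapForce(PVector posEE) {
  float nearestNoteY = round(posEE.y / noteSpacing) * noteSpacing;
  float fy = kSnap * (nearestNoteY - posEE.y);  // pull toward the nearest note
  return new PVector(0, fy);
}
```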

Next Iteration

The goal for my next iteration is to incorporate the haptic rendering developed by my teammates into this system. I then want to refine the interactions based on feedback and observations from this iteration. I am interested in the effect that adding force feedback will have on the playing experience and the sounds produced.
