Percussionists can strategically use visual information to manipulate audience perception of note duration. Videos of long (L) and short (S) notes performed by a world-renowned percussionist were separated into visual (Lv, Sv) and auditory (La, Sa) components. Visual components contained only the gesture used to perform the note; auditory components contained only the acoustic note itself. Audio and visual components were then crossed to create realistic musical stimuli.
The role of causality
Our first step was to split the long and short gestures at the moment of impact and recombine them to create two new "hybrid" gestures:
The short-long gesture (motion from the short gesture up to the moment of impact, followed by motion from the long gesture) and the long-short gesture (motion from the long gesture up to the moment of impact, followed by motion from the short gesture).
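The splice described above can be illustrated as a frame-level concatenation. The following is a minimal sketch, assuming each gesture is stored as a NumPy array of 2-D joint positions per frame and that the impact frame index is known (the array shapes and the `impact_frame` value here are hypothetical):

```python
import numpy as np

def splice_at_impact(pre_gesture, post_gesture, impact_frame):
    """Combine the pre-impact frames of one gesture with the
    post-impact frames of another (arrays: frames x joints x 2)."""
    return np.concatenate(
        [pre_gesture[:impact_frame], post_gesture[impact_frame:]], axis=0
    )

# Toy example: 10-frame "long" and "short" gestures, 3 joints, (x, y) coords
long_g = np.zeros((10, 3, 2))
short_g = np.ones((10, 3, 2))

# Short motion up to impact (frame 5), long motion afterward, and vice versa
short_long = splice_at_impact(short_g, long_g, impact_frame=5)
long_short = splice_at_impact(long_g, short_g, impact_frame=5)
```

In the real stimuli the splice was of course applied to video, but the logic is the same: everything before the impact frame comes from one recording, everything after it from the other.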
What if the sound and video are not aligned in time? How far can the sound be offset from the moment of impact, earlier or later, before the video no longer has any influence on the perceived duration of the sound? We designed this study to answer these questions.
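One way to implement such offsets is to shift the audio track relative to the video before recombining them. A minimal sketch, assuming the audio is a NumPy array of samples; the function name, the `offset_ms` parameter, and the sign convention (positive delays the sound past the impact, negative advances it) are assumptions for illustration:

```python
import numpy as np

def offset_audio(audio, offset_ms, sr=44100):
    """Shift audio relative to the visual impact point.
    Positive offset_ms: delay the sound by prepending silence.
    Negative offset_ms: advance the sound by dropping leading samples."""
    n = int(round(abs(offset_ms) / 1000 * sr))
    if offset_ms >= 0:
        return np.concatenate([np.zeros(n), audio])
    return audio[n:]
```

Crossing a range of positive and negative offsets with the long and short videos would then yield the full stimulus set for this question.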
Which part of the gesture influences the perceived duration of the note: the portion before impact, the portion after impact, or some mix of the two? This experiment addresses that question.
Could the responses we have seen so far reflect what we call "response bias"? That is, perhaps participants do not actually hear the note as longer or shorter; they simply report it as longer or shorter because the video looked long or short. If so, the differences in perceived note length that participants report would not mean what we think they mean: that participants genuinely perceive the note differently depending on the accompanying video. They may merely say they do.
Single dots recorded from key joint locations can accurately convey biological motion (Johansson, 1973). Consequently, we created pseudo point-light abstractions of the original gestures to allow further testing of this illusion. These animations represented only the mallet head, hand, elbow, and shoulder, and accurately reproduced the original illusion.
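A point-light stimulus of this kind can be derived by keeping only the tracked positions of those four markers. A minimal sketch, assuming the motion data is a frames × markers × 2 array; the marker names and their ordering here are hypothetical:

```python
import numpy as np

# Hypothetical ordering of tracked markers in the motion data
MARKERS = ["mallet_head", "hand", "elbow", "shoulder", "hip", "knee"]
KEEP = ["mallet_head", "hand", "elbow", "shoulder"]

def to_point_light(motion, markers=MARKERS, keep=KEEP):
    """Reduce a full motion track (frames x markers x 2) to the
    four dots rendered in the point-light animation."""
    idx = [markers.index(m) for m in keep]
    return motion[:, idx, :]

# Toy example: 8 frames of 6 tracked markers
motion = np.zeros((8, len(MARKERS), 2))
dots = to_point_light(motion)
```

Each frame of the reduced array would then be drawn as four dots on a dark background, discarding all other visual detail of the performer.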
Deconstructing Motion Paths - What are the salient properties of the gesture?
Given that this illusion is both useful for musicians and informative for psychologists, we were interested in learning which aspects of the motion in the videos account for it. This question has both theoretical and practical implications, as answering it can (1) help explain this unusual pattern of sensory integration, and (2) provide useful guidance for performing musicians interested in incorporating such gestures into their performances.
To this end, we designed four experiments that together demonstrate the illusion is driven primarily by the timing of the post-impact motion. In the first experiment we found that only the portion of the strike after impact matters. In the second we found that the horizontal motion of the strike is not necessary, but that a rebound must occur. In the third we found that, of velocity, distance, and time, only time matters. Finally, in the fourth we found that acceleration and its derivative (jerk) do not affect duration ratings.