New paper in Cognition: Modelling processing of rhythmic patterns

When you listen to a rhythm, you can predict the timing of upcoming sounds by learning the pattern of longer and shorter sounds (like Morse code). But how do we represent a rhythmic pattern in our minds? We used a probabilistic model of musical predictions to find out. Across three different tasks, we found that listeners primarily rely on abstract and imprecise representations: instead of “this interval was 400 ms and the next is 350 ms”, we seem to represent rhythmic patterns more like “long, a bit longer, a bit shorter, shortish” – mostly, we rely on ratios and contour!

The full paper in Cognition can be found here; all code and data are here.
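As a rough illustration (not the paper's actual model), here is a minimal sketch of what a ratio-and-contour representation of a rhythm could look like: raw inter-onset intervals in milliseconds are reduced to ratios between successive intervals and to a coarse “longer/shorter/same” contour. The function names and the 5% tolerance are hypothetical choices for this sketch.

```python
def ratios(intervals):
    """Ratio of each interval to the one before it."""
    return [b / a for a, b in zip(intervals, intervals[1:])]

def contour(intervals, tolerance=0.05):
    """Coarse contour: is each interval longer, shorter, or about
    the same as the previous one? Tolerance is an arbitrary 5%."""
    labels = []
    for a, b in zip(intervals, intervals[1:]):
        r = b / a
        if r > 1 + tolerance:
            labels.append("longer")
        elif r < 1 - tolerance:
            labels.append("shorter")
        else:
            labels.append("same")
    return labels

pattern = [400, 450, 350, 360]  # hypothetical intervals in ms
print(ratios(pattern))
print(contour(pattern))  # ['longer', 'shorter', 'same']
```

The point of the sketch is the loss of precision: two patterns with different absolute durations but the same ratios (or the same contour) map onto the same abstract representation.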