School Seminar: Modeling Melodic Dictation


Despite its ubiquity in music conservatory curricula, research on topics pertaining to aural skills is limited at best. While anthologies of materials for sight singing and melodic dictation exist, how people commit melodies to memory is not well understood. This problem is difficult to tackle given the number of factors that may contribute to the process, such as the complexity of the melody, the degree of exposure needed to commit a melody to long-term memory, and individual differences in cognitive ability that have been shown to contribute to an individual's performance on musical tasks.

In this talk I present findings from an experiment (n = 39) modeling performance on melodic dictation using both individual and musical features. Results from the experiment suggest that computationally abstracted musical features can be used to predict task performance. While these results are useful as a descriptive model, I additionally propose the basic framework for a cognitive, computational model meant to explain how an individual takes melodic dictation. The model draws on both cognitive psychology and computational musicology, with the aim of predicting how individuals perform on melodic dictation exercises.

David John Baker is a PhD candidate in Music Theory with a minor in Cognitive and Brain Sciences at Louisiana State University. He is a member of the Music Cognition and Computation Lab, where he works under Dr. Daniel Shanahan and Dr. Emily Elliott investigating how musical perception can be influenced by musical structures, as well as by individual differences. His work has been funded by the UK's Arts and Humanities Research Council, as well as the School of Music, Department of Psychology, and Graduate School at Louisiana State University.