Encoding musical syntax
Music is a braid of multilayered information, ranging from acoustic features to higher-level syntactic structures, which are intertwined to ultimately carry musical meaning. However, how this multilayered information is encoded and can be tracked in the brain remains largely unanswered. In speech, brain signals have been shown to track the energy fluctuations (i.e., the envelope) of auditory inputs (Ding & Simon, 2014).
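For illustration only (this is not the study's pipeline), the broadband envelope referred to here is commonly extracted as the magnitude of the analytic signal and then downsampled to the EEG sampling rate; a minimal Python sketch, where the sampling rates and function name are illustrative assumptions:

```python
import numpy as np
from scipy.signal import hilbert, resample_poly

def broadband_envelope(audio, sr_audio, sr_eeg=128):
    """Extract the broadband amplitude envelope of an audio signal
    and downsample it to the EEG sampling rate.

    One common recipe (assumed here, not stated in the abstract):
    envelope = |analytic signal| via the Hilbert transform, then
    polyphase resampling to match the EEG rate.
    """
    envelope = np.abs(hilbert(audio))            # instantaneous amplitude
    # Downsample from the audio rate (e.g., 44100 Hz) to the EEG rate
    envelope = resample_poly(envelope, sr_eeg, sr_audio)
    return envelope
```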
In an ongoing study, we collected EEG signals while musician and non-musician listeners were exposed to excerpts of monophonic Bach pieces. A system identification technique was then used to compute the channel-specific mapping, or temporal response function (TRF), between the music envelope and the recorded EEG data.
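Although the abstract does not specify the estimator, forward-model TRFs of this kind are typically obtained by regularized linear regression between time-lagged copies of the stimulus envelope and each EEG channel; the sketch below assumes that approach, with an illustrative lag window and ridge parameter:

```python
import numpy as np

def estimate_trf(envelope, eeg, sr=128, tmin=-0.1, tmax=0.4, lam=1e2):
    """Forward-model TRF: map the stimulus envelope onto each EEG channel.

    envelope : (n_times,) stimulus envelope at the EEG sampling rate
    eeg      : (n_times, n_channels) recorded EEG
    Returns  : (n_lags, n_channels) TRF weights and the lag times in seconds.
    The lag window and regularization strength are illustrative choices.
    """
    lags = np.arange(int(tmin * sr), int(tmax * sr) + 1)
    # Time-lagged design matrix: one shifted copy of the envelope per lag
    X = np.zeros((len(envelope), len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = envelope[:len(envelope) - lag]
        else:
            X[:lag, j] = envelope[-lag:]
    # Ridge regression, solved for all channels at once:
    # w = (X'X + lam*I)^-1 X'y
    XtX = X.T @ X + lam * np.eye(len(lags))
    trf = np.linalg.solve(XtX, X.T @ eeg)
    return trf, lags / sr
```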
Preliminary results indicate that, for both musicians and non-musicians, TRFs have magnitudes significantly larger than zero for components typical of envelope responses (P1, N1, and P2 in Fig. 2, left panel). EEG data can then be predicted from the acoustic envelope using these TRF models. The best-predicted electrodes emerged in a broad centro-parietal area of the scalp and, importantly, prediction accuracy was significantly higher for musicians than for non-musicians (Fig. 2, right panel), in line with a previous MEG study (Doelling & Poeppel, 2015). Further analyses will examine whether higher-level information, such as musical syntax, can improve EEG predictions (Di Liberto et al., in preparation).
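Prediction accuracy in this setting is usually quantified as the per-electrode Pearson correlation between TRF-predicted and recorded EEG on held-out data; a sketch under that assumption, reusing the lagged design from estimate_trf above (the train/test split is left to the caller):

```python
import numpy as np

def prediction_accuracy(envelope, eeg, trf, lags_s, sr=128):
    """Correlate TRF-predicted EEG with recorded EEG, per channel.

    In practice this would be computed on held-out (cross-validated)
    data; the single-split version here is for illustration.
    """
    lags = np.round(np.asarray(lags_s) * sr).astype(int)
    # Rebuild the time-lagged design matrix for the test stimulus
    X = np.zeros((len(envelope), len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = envelope[:len(envelope) - lag]
        else:
            X[:lag, j] = envelope[-lag:]
    pred = X @ trf                                # (n_times, n_channels)
    # Pearson r per electrode between predicted and recorded EEG
    pz = (pred - pred.mean(0)) / pred.std(0)
    ez = (eeg - eeg.mean(0)) / eeg.std(0)
    return (pz * ez).mean(0)
```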