State-dependent auditory-reward network connectivity predicts degree of pleasure to music

DOI: https://doi.org/10.21203/rs.3.rs-2725123/v1

Abstract

Music can evoke pleasurable and rewarding experiences. Past studies that examined task-related brain activity revealed individual differences in musical reward sensitivity traits, and linked them to interactions between the auditory and reward systems. However, state-dependent fluctuations in spontaneous neural activity in relation to music-driven rewarding experiences have not been studied. Here, we used functional MRI (N=49) to examine whether the coupling of auditory-reward networks during a silent period immediately before music listening can predict the degree of musical rewarding experience. We used machine learning models and showed that the functional connectivity between auditory and reward networks, but not others, could robustly predict subjective, physiological, and neurobiological aspects of the strong musical reward of chills. Specifically, the right auditory cortex-striatum/orbitofrontal connections were related to neural positive arousal responses, whereas the auditory-amygdala connection was associated with physiological arousal. Moreover, the predictive model of auditory-reward network derived from one sample of individuals replicated in an independent dataset using different music samples. The current study reveals the role of pre-task brain state in efficiently connecting sensory and reward systems leading to an intensely rewarding experience.

Introduction

Music listening is an important part of everyday life for the vast majority of society (1). One of the main reasons for music listening is to experience pleasure and reward (2, 3). Previous studies have consistently reported that when people experience pleasure in music, the auditory cortices and the mesolimbic reward circuitry, especially the nucleus accumbens (NAcc), are activated (4–8). Furthermore, the functional interaction between these systems increases as a function of the hedonic value of music (9). Stable individual differences in music reward sensitivity traits are also linked to interactions between auditory and reward networks (10–12). However, music may not always elicit the same emotional response in the same individual at different times, depending on their state (13). Such state-dependent fluctuations have not been studied. Specifically, an important unknown is the association between the pre-listening brain state and the musical reward experience.

The resting brain network before music listening could be a determinant of musical pleasure, based on related literature. In the past decade, many studies have reported that 5–10 minutes of intrinsic neural network activity, that is, resting-state functional connectivity (RSFC), can predict various human abilities and traits (14–17). However, questions have also been raised about the sources of this predictive ability (18, 19). On the other hand, recent studies indicated that the short (several seconds to a few dozen seconds) resting brain network preceding an experimental task could predict cognitive task performance (20–22), mind wandering (23), and value-based decision-making (24). These studies suggest that state-endogenous fluctuations of the brain immediately before a task influence subsequent neural activity, and hence behavior. Because the functional link between auditory and reward brain regions during music listening positively correlates with reward strength (7, 9, 10, 25), the emotional response to music may be facilitated when the auditory-reward brain network is in a relatively higher state of interaction prior to the musical experience.

It is possible that anticipation of an upcoming piece at rest would modulate the connection between auditory and reward brain regions. Past studies reported that the anticipation of future rewards activates reward-related brain regions, especially vmPFC/OFC (26–30), as well as NAcc and amygdala (31, 32). These activations likely reflect the fact that people enjoy the moments leading up to a reward (33). Moreover, anticipation of the next musical item may affect auditory brain activity at rest, probably because of imagination of, and memory for, music (34–36). Therefore, the synchronization of auditory and reward brain regions could be altered by anticipatory brain activity during the resting period, resulting in an enhanced emotional response to music.

In the present study, we aim to reveal whether the resting brain state immediately before music listening is specifically associated with the rewarding experience. To examine this question, we recruited 49 participants for fMRI experiments with simultaneous psychophysiological measurements of autonomic nervous system activity (Experiment 1: 38, Experiment 2: 11). Participants listened to their favorite and to experimenter-selected music. Listeners reported their emotional state behaviorally in real time by indicating whether they were experiencing neutral, pleasure, chills (defined by goosebumps or shivers), or tears (defined by weeping or a lump in the throat) responses (37, 38). Participants were instructed that they would listen to their favorite music or other music in the experiment. Therefore, they knew that they would listen to highly rewarding music, but they did not know when the music would play (Fig. 1 and Supplemental Fig. 1). In line with recent technical advances in neuroimaging research that identify FC patterns associated with complex cognitive functions (22, 23, 39), we evaluated RSFC predictability with machine learning models.

Results

Prediction of emotional responses to music from RSFC immediately before music listening

We applied machine learning analysis to identify whether the RSFC between auditory cortex seeds (defined independently from 40) and 12 separate resting networks (Fig. 2A) could predict the duration of self-reported emotional responses during music listening. To evaluate prediction performance, we applied least absolute shrinkage and selection operator (LASSO) machine learning models (40) to data from held-out subjects using leave-one-participant-out cross-validation (LOPOCV) (see Method). Only the auditory-reward RSFC model accurately predicted subjective durations of chills during music listening following the resting-state period. Specifically, the correlation between predicted and actual chills duration was Pearson’s r = 0.53 (False Discovery Rate (FDR) correction, permutation test, p < .001), whereas other auditory-seed networks did not show significant correlations (Fig. 2B). No auditory-seed RSFC network significantly predicted durations of reported neutral, pleasure, or tears responses (Supplemental Fig. 3). Note that the pleasure ratings obtained after the end of the excerpt (range 1–4) were high both for self-selected (M = 3.17, SD = 0.74) and experimenter-selected (M = 3.00, SD = 0.77) music. Participants felt high pleasure during music listening; therefore, the musical chills reflected a pleasurable experience (as opposed to some other arousal-related emotion).
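
The prediction scheme described above (a LASSO model evaluated with leave-one-participant-out cross-validation, then correlating predicted with actual durations) can be sketched as follows. This is a minimal illustration on synthetic data; the feature count, regularization strength (alpha), and variable names are assumptions for demonstration, not the study's actual parameters.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_subj, n_feat = 38, 24                  # hypothetical: auditory-seed x reward-ROI edges
X = rng.standard_normal((n_subj, n_feat))              # RSFC features per participant
y = 0.8 * X[:, 0] + 0.5 * rng.standard_normal(n_subj)  # simulated chills durations

preds = np.empty(n_subj)
for i in range(n_subj):                  # leave-one-participant-out CV
    train = np.delete(np.arange(n_subj), i)
    model = Lasso(alpha=0.1).fit(X[train], y[train])
    preds[i] = model.predict(X[i:i + 1])[0]

r = np.corrcoef(preds, y)[0, 1]          # predicted-vs-actual correlation
```

In the study, the significance of this correlation was then assessed with a permutation test and FDR correction across candidate networks.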

Whole-brain RSFC was not able to predict durations of reported chills (Fig. 2C), nor of neutral, pleasure, or tears responses (Supplemental Fig. 3). Furthermore, as an exploratory approach, we examined all 78 possible network combinations derived from the 13 networks to predict the subjective duration of chills (Fig. 2D). The auditory-reward network had the best predictive ability compared with the 77 other pairs of networks.

Trial-by-trial chills experience prediction by auditory-reward RSFC with different time periods

As a complement to the preceding per-participant analysis, we carried out a trial-by-trial machine learning analysis of whether the auditory-reward RSFC predicts the duration of chills. To reveal the specificity of the FC predictive ability for the subjective chills experience, we built three machine learning models using as features the RSFC preceding music, the FC during music listening, and the RSFC following music. Using leave-one-trial-out cross-validation (LOTOCV), we found that the auditory-reward RSFC preceding music significantly predicted the duration of chills on a per-trial basis, r = 0.23 (FDR correction, permutation test, p < .001) (Fig. 2E-left). However, neither the FC during music listening nor the RSFC following music listening could predict the duration of chills (Fig. 2E-middle, right). These results indicate that the auditory-reward brain network in the resting state prior to music is important in predicting subsequent emotional responses. In addition, the correlation between the chills duration predicted by the auditory-reward RSFC preceding music and music type (coded self = 1, experimenter = 2) did not reach significance (r = -0.06). Thus, the predictive score did not reflect whether music was self- or experimenter-selected.

Validation of subjective chills prediction by auditory-reward RSFC

To further assess the sensitivity of the auditory-reward RSFC for predicting the duration of subjective chills, we examined predictive performance using different cumulative RSFC durations before music listening. When the duration of RSFC was more than 26 s, the auditory-reward RSFC significantly predicted the duration of chills in both the LOPOCV and LOTOCV analyses (uncorrected ps < .05, Fig. 2F-left). The prediction accuracy increased linearly from 20 to 40 s (LOPOCV: r2 = 0.88, p < .001; LOTOCV: r2 = 0.68, p < .001). The minimal RSFC duration of 26 s corresponds well with a previous report that FC states computed within windows as short as 22.5 s can track ongoing cognition (22). Using this minimum significant duration, we then predicted the duration of subjective chills with successive 26-s sliding windows from 0 to 40 s, to determine the optimal time frame for prediction. The prediction accuracy was always significant (uncorrected ps < .05, Fig. 2F-right) except from 4 to 30 s for LOTOCV (p = .077). These findings validate the robust ability of the auditory-reward RSFC immediately before music listening to predict the duration of chills.
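
The duration analyses above amount to computing functional connectivity within cumulative and sliding time windows of the pre-listening rest period. A minimal sketch on synthetic time courses (the TR, window lengths, and signals below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
tr, rest_s = 2.0, 40.0                   # hypothetical TR (s) and 40-s rest period
n_vol = int(rest_s / tr)
aud = rng.standard_normal(n_vol)         # auditory-seed time course
rew = 0.6 * aud + 0.8 * rng.standard_normal(n_vol)  # reward-ROI time course

def window_fc(x, y, start_s, dur_s, tr):
    """Pearson FC within a window specified in seconds."""
    i0, i1 = int(start_s / tr), int((start_s + dur_s) / tr)
    return np.corrcoef(x[i0:i1], y[i0:i1])[0, 1]

# cumulative durations (20-40 s) anchored at rest onset
cum_fc = {d: window_fc(aud, rew, 0, d, tr) for d in range(20, 42, 2)}
# successive 26-s sliding windows across the rest period
slide_fc = [window_fc(aud, rew, s, 26, tr) for s in range(0, 15, 2)]
```

Each windowed FC value would then serve as one feature for the predictive model evaluated at that duration or window position.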

Prediction of physiological and neural activity during chills experience by auditory-reward RSFC

Subjective chills experiences are typically accompanied by physiological arousal and brain activity in reward regions (4, 5, 38, 43). Skin conductance response (SCR) is a robust physiological marker of the arousal effect of chills (44), and several studies suggested that the mesolimbic pathway, especially right NAcc activity, is critical for the brain activity underlying chills (5, 9, 10). We investigated whether the auditory-reward RSFC immediately before music listening can predict these neurophysiological aspects of chills by building LASSO-LOPOCV predictive models. We attempted to predict the cumulative intensity of SCR, heart rate, respiration rate, and the BOLD signal during the musical chills experience. The BOLD signal was extracted from each of 140 ROIs obtained from the AAL3 atlas (45). As shown in Fig. 3A, several neurophysiological activities during the chills experience could be predicted well. Specifically, the auditory-reward RSFC significantly predicted SCR (r = 0.51, p = .002) (Fig. 3B), but not heart or respiration rate. The same network also predicted BOLD activity in the right NAcc (r = 0.61, p = .002), left insula (r = 0.51, p = .002) (Fig. 3C, D), left pgACC (r = 0.56, p = .002), and dmPFC (r = 0.51, p = .004) (all p values computed with FDR correction, permutation test). These results indicate that not only the subjective chills experience, but also its physiological and neural aspects, could be predicted by the auditory-reward RSFC. The right NAcc activity during chills positively correlated with the left insula activity during chills (r = 0.56, p < .001); therefore, the auditory-reward RSFC predicts co-activation of NAcc and insula. Note that we show the left pgACC and dmPFC prediction plots in Supplemental Fig. 4 since these predictions did not replicate in Experiment 2 (see the generalization section below).
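
The significance procedure used throughout (a permutation test on the predicted-vs-actual correlation, with FDR correction across outcome measures) can be sketched as follows. The data and permutation count are synthetic assumptions; `fdr_bh` is a standard Benjamini-Hochberg step-up implementation, not code from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def perm_pval(pred, actual, n_perm=2000):
    """One-sided permutation p for the predicted-vs-actual correlation."""
    r_obs = np.corrcoef(pred, actual)[0, 1]
    null = np.array([np.corrcoef(pred, rng.permutation(actual))[0, 1]
                     for _ in range(n_perm)])
    return (np.sum(null >= r_obs) + 1) / (n_perm + 1)

def fdr_bh(p):
    """Benjamini-Hochberg adjusted p-values across outcome measures."""
    p = np.asarray(p, dtype=float)
    order = np.argsort(p)
    adj = p[order] * len(p) / (np.arange(len(p)) + 1)
    adj = np.minimum.accumulate(adj[::-1])[::-1]   # enforce monotonicity
    out = np.empty_like(adj)
    out[order] = np.minimum(adj, 1.0)
    return out

# simulated model predictions for two outcome measures
actual = rng.standard_normal(38)
good = actual + 0.3 * rng.standard_normal(38)   # informative predictions
bad = rng.standard_normal(38)                   # uninformative predictions
adj = fdr_bh([perm_pval(good, actual), perm_pval(bad, actual)])
```

Only outcomes whose adjusted p-value survives the FDR threshold would be reported as significantly predicted.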

As with the subjective chills prediction, we examined the predictive performance of different RSFC durations to validate the predictive ability of the auditory-reward RSFC model for physiological and neural measures. When the RSFC duration exceeded 34 s, the auditory-reward RSFC significantly predicted SCR as well as right NAcc and left insula activity (uncorrected ps < .05, Fig. 3E-left), again indicating the robust predictive ability of the auditory-reward RSFC. The prediction accuracy increased linearly from 20 to 40 s for SCR (r2 = 0.95, p < .001) and right NAcc (r2 = 0.88, p < .001), but the linear effect was weaker for the left insula (r2 = 0.48, p = .018), which showed a sudden increase at 34 s.

Finally, we investigated which auditory regions are most important for the prediction of physiological and neural responses by distinguishing early, primary-like cortical regions from secondary auditory regions using HCP ROIs (46). We performed a machine-learning analysis similar to the one above. We found that the RSFC between the bilateral primary auditory region and reward regions predicted SCR activity, whereas the right secondary auditory-reward RSFC predicted right NAcc activity (see Supplemental Fig. 5).

Asymmetry of the auditory-reward RSFC and behavioral relevance

Many previous studies indicated that emotional responses to music are related to functional interactions between reward regions in the right hemisphere more than the left hemisphere (7, 9, 10, 25). To test for this asymmetric effect in the auditory-reward RSFC, we performed a machine learning analysis of whether the RSFC within each hemisphere has meaningful relationships to the chills experience. RSFC included 45 features separately for the left and right hemispheres. We confirmed that the auditory-reward RSFC from the right hemisphere, but not the left, could predict the subjective duration of chills (r = 0.48, p < .001) (Fig. 5-A), the intensity of SCR (r = 0.55, p < .001) (Fig. 5-B), and right NAcc activity (r = 0.48, p < .001) (Fig. 5-C) (permutation test with FDR correction). Neither hemisphere significantly predicted left insula activity. To test the statistical difference in predictability between hemispheres, we performed a bootstrap test (see Method). Although the difference in prediction accuracy for subjective chills and SCR failed to reach significance (bootstrap CI [-0.11, 0.66] and [-0.11, 0.59], respectively, ps > .05), the difference for the right NAcc BOLD signal was significant (bootstrap CI [0.00, 0.60], p = .05), indicating higher predictive ability for the right hemisphere.
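
The bootstrap comparison of hemispheres described above can be sketched as resampling participants and recomputing the difference between the two models' prediction accuracies. The predictions below are synthetic; sample size, bootstrap count, and the simulated effect are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 38
actual = rng.standard_normal(n)
pred_r = actual + 0.8 * rng.standard_normal(n)  # right-hemisphere model predictions
pred_l = rng.standard_normal(n)                 # left-hemisphere model predictions

def boot_diff_ci(pa, pb, y, n_boot=2000, alpha=0.05):
    """Bootstrap CI for r(pa, y) - r(pb, y) over resampled participants."""
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(y), len(y))   # resample with replacement
        diffs[b] = (np.corrcoef(pa[idx], y[idx])[0, 1]
                    - np.corrcoef(pb[idx], y[idx])[0, 1])
    return np.quantile(diffs, [alpha / 2, 1 - alpha / 2])

lo, hi = boot_diff_ci(pred_r, pred_l, actual)
# a CI excluding zero indicates a significant hemisphere difference
```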

The predictive weight of auditory-reward RSFC for the subjective, physiological, and neural responses of chills

With the applied machine-learning pipeline, non-zero LASSO regression weights delineate the predictive network. Each weight can be interpreted as the relative importance of that connection in the prediction. Positive (negative) weights mean that stronger interregional functional connectivity predicts a longer (shorter) duration of the chills experience. Figure 4 depicts the predictive networks for the subjective chills duration (Fig. 4-A), SCR intensity (Fig. 4-B), right NAcc intensity (Fig. 4-C), and left insula intensity (Fig. 4-D) of chills.

For the subjective chills duration, the strongest positive predictive connection was found between one of the right auditory regions and the left amygdala. However, as shown in Fig. 4-A, the cumulative positive weight (circle size) was also high in the amygdala, NAcc, and mOFC. Both positive and negative cumulative weights were high in the auditory region. These results indicate that the auditory-reward network as a whole is important for predicting the subjective chills experience.

The predictive weights for physiological and neural activity during chills showed the importance of specific reward regions. The highest positive predictive weight for SCR was found between the bilateral auditory region and the left amygdala. The cumulative positive weight indicated the importance of the left amygdala, whereas both positive and negative cumulative weights were high in the auditory region. For the right NAcc and the left insula activities, the right auditory regions showed higher positive predictive weights with the right and left NAcc, respectively. Moreover, for the right NAcc and the left insula, the left auditory regions showed a higher positive predictive weight with the left aOFC. In contrast, the negative predictive weights for the right NAcc were found mainly between the auditory regions and a/mOFC, and those for the left insula between the auditory regions and the right mOFC. These findings indicate that the RSFC between the auditory areas and amygdala is critical for predicting the physiological arousal of chills, whereas the RSFC between the auditory areas and NAcc/OFC is important for predicting the rewarding brain activity of chills (see the similar connectivity weights for the right-hemisphere prediction in Supplemental Fig. 6).
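
The cumulative positive and negative node weights discussed above (the "circle sizes") can be derived from the fitted LASSO coefficients by summing each edge's weight into both of its endpoint regions. In this sketch the ROI names, edge count, and data are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
aud = [f"aud_{i}" for i in range(6)]                 # hypothetical auditory seeds
rew = ["NAcc_L", "NAcc_R", "Amyg_L", "Amyg_R", "mOFC_L", "mOFC_R"]
pairs = [(a, r) for a in aud for r in rew]           # 36 auditory-reward edges
X = rng.standard_normal((38, len(pairs)))            # synthetic RSFC features
y = 0.7 * X[:, 0] - 0.5 * X[:, 5] + 0.5 * rng.standard_normal(38)

w = Lasso(alpha=0.05).fit(X, y).coef_                # sparse edge weights

# accumulate each edge's weight into both endpoint nodes
pos, neg = {}, {}
for (a, r), wi in zip(pairs, w):
    for node in (a, r):
        pos[node] = pos.get(node, 0.0) + max(wi, 0.0)
        neg[node] = neg.get(node, 0.0) + min(wi, 0.0)
```

Nodes with large cumulative positive (or negative) weight are those whose connections most strongly drive longer (or shorter) predicted responses.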

Generalization of the predictive model for independent experimental data

To examine generalizability through independent validation, Experiment 2 was performed with a different MRI scanner, a different subject pool, and a different stimulus set, but with the same model parameters derived from Experiment 1. Furthermore, to reveal whether the auditory-reward RSFC immediately before music listening is an intrinsic, stable brain network or a pre-listening-dependent brain network (i.e., more trait-like or more state-like), we also measured a traditional 10-minute resting state before the music listening experiment. By feeding the auditory-reward RSFC scores into the machine learning models obtained from Experiment 1, we obtained prediction scores of chills-related variables both for the pre-task and the intrinsic rest. For a fair comparison, we calculated the same duration of auditory-reward RSFC from the pre-listening state and the intrinsic resting state using a random sampling strategy (see Method).
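
The no-refit transfer from Experiment 1 to Experiment 2 amounts to freezing the trained model and only predicting on the new sample. A synthetic sketch (sample sizes, feature count, and alpha are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
n1, n2, n_feat = 38, 11, 24

# Experiment 1: fit once, then freeze the model
X1 = rng.standard_normal((n1, n_feat))
y1 = X1[:, 0] + 0.5 * rng.standard_normal(n1)
model = Lasso(alpha=0.1).fit(X1, y1)

# Experiment 2: predict only, no refitting on the new data
X2 = rng.standard_normal((n2, n_feat))
y2 = X2[:, 0] + 0.5 * rng.standard_normal(n2)
pred2 = model.predict(X2)
r_gen = np.corrcoef(pred2, y2)[0, 1]    # out-of-sample generalization
```

Because neither the weights nor the hyperparameters are re-estimated, a significant `r_gen` reflects genuine out-of-sample generalization rather than refitting to the new dataset.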

This validation experiment revealed considerable generalizability of the predictive model based on the auditory-reward RSFC immediately before music listening: the correlation between actual and model-predicted responses was significant for the duration of chills (r = 0.62, p = .043, Fig. 6-A), the intensity of SCR (r = 0.75, p = .015, Fig. 6-B), right NAcc activity (r = 0.76, p = .015, Fig. 6-C), and left insula activity (r = 0.66, p = .037, Fig. 6-D) with FDR correction. In contrast, model-predicted responses using conventional intrinsic RSFC scores obtained before the experiment started did not significantly correlate with actual responses (Fig. 6). These findings suggest that the current predictive model generalizes only for pre-listening resting brain states. Note that the pleasure ratings obtained after the end of the excerpt were high both for self-selected (M = 3.41, SD = 0.73) and experimenter-selected (M = 2.86, SD = 0.82) music, as in the first study.

Further univariate tests with a linear mixed model showed that two auditory-reward RSFC networks (left aOFC-right aOFC and left auditory_6-right auditory_5) significantly differed between pre-task rest and intrinsic rest (p < .05 with FDR correction, Fig. 6-E, F). The results support the idea that the auditory-reward network in pre-task rest differs from the intrinsic brain network and may be a state-dependent brain network.

Discussion

In the present study, we investigated whether the auditory-reward RSFC immediately before music listening could predict emotional responses to music using machine learning. We found robust predictive ability of the auditory-reward RSFC for behavioral indicators of strong musical pleasure, i.e., emotional chills. The auditory-reward RSFC could also robustly predict physiological arousal (SCR) and neural positive-arousal activity (NAcc and insula) during the chills experience. In particular, the right-hemisphere, rather than the left-hemisphere, auditory-reward RSFC predicted these chills indices. Regarding specific brain connections, the right auditory cortex-striatum/orbitofrontal connections were related to neural positive arousal responses, whereas the auditory-amygdala connection was associated with physiological arousal. Importantly, the predictive model of the auditory-reward RSFC developed from one sample generalized to new experimental data collected in an independent sample, with different stimuli, and tested with another fMRI scanner. These results show that the model captures relevant aspects of state-related brain activity fluctuations that predict subsequent responses.

Predicting musical rewarding experiences from pre-listening auditory-reward brain networks

We first found that the duration of the subjective chills experience, but not neutral, pleasure, or tears responses, could be predicted by the auditory-reward RSFC immediately before listening (Fig. 2B). Musical chills represent clear and discrete events, and they provide a reliable, objective indication of hedonic reactions to music (44). Importantly, the auditory-reward network achieved the best predictive performance compared with the other 77 network combinations and with whole-brain networks (Fig. 2C, D). Past studies confirmed that musical chills are accompanied by strong pleasure and arousal (4, 5, 43), whereas musical tears are associated with complex emotions of mixed sadness and pleasure (43, 48). The current findings indicate that the auditory-reward RSFC is related to rewarding experiences such as chills, but not to the full range of musical emotion. Intense emotion may be embedded in a specific subnetwork of the resting brain rather than in broadly distributed regions. Our results extend recent findings that spontaneous brain states between tasks contain ample information to predict various aspects of human behavior (20–24, 49) to strong pleasurable responses.

The psychophysiological activity of SCR, and the neural activity in the right NAcc and left insula during the chills experience, could also be predicted by the auditory-reward RSFC in a replicable manner (Fig. 3 and see also below). SCR is a physiological arousal index dependent on activation of the sympathetic nervous system (50), while meta-analyses indicated consistent signal increases in the NAcc in response to musical pleasure (51, 52). Past studies have also indicated that psychophysiological arousal and dopaminergic signals in NAcc accompany chills (44). Moreover, meta-analytic results have shown that the NAcc and insula co-activate when people feel positive arousal (53). Given that right NAcc activity positively correlated with left insula activity during chills, the pre-listening auditory-reward network could determine the degree of neurophysiological positive arousal during the chills experience. In addition, the top five predicted brain regions in the current study (see Fig. 3A) were included in the “liking” circuit described by Berridge and Kringelbach (54), although the predictive models for some of these regions could not be replicated in Experiment 2. The auditory-reward RSFC could therefore also modulate the neural liking level during strong emotional responses. The specific pattern of information exchange between auditory and reward brain systems during pre-listening may facilitate subsequent transmission of auditory information to a liking/reward circuit (9).

Right lateralization for the predictive ability of auditory-reward brain networks

We found that the right, but not the left, auditory-reward network significantly predicted subjective and neurophysiological responses associated with chills (Fig. 4). These results agree with past evidence that although reward-system responses to music are mostly bilateral (51), the link between auditory and reward responses to music tends to be stronger on the right than on the left (7, 9, 10, 25), probably because of the right-hemisphere dominance for melodic and harmonic aspects of music processing (55–57). The new contribution here is that this lateralization of brain connections emerged in broad reward-related brain regions even before the music-listening task. The lateralization result emphasizes that the pre-established connection between music processing and reward-related brain activity facilitates the evocation of musical chills. Another possibility is that musical imagery (34, 35) starts before listening, and this imagination enhances the emotional response to the real music.

Predictive network specificity for subjective, physiological, and neural reward responses

The predictive network partly differed across the four reliably predicted variables. The key nodes of the subjective chills predictor included the auditory regions and all the reward-network regions (amygdala, NAcc, and OFC) (Fig. 5A). The musical chills experience thus appears to be modulated by the resting auditory-reward network as a whole rather than by any one specific reward region, in keeping with the idea that these structures work in tandem as part of a functional network. Past studies have emphasized the importance of the right NAcc-auditory cortex connection during highly pleasurable music experiences (9, 10, 25); the current study supports this conclusion and extends it to the broad auditory-reward neural network at rest.

As a specific effect for each reward region, the pre-listening auditory-amygdala and auditory-NAcc/OFC networks were related to chills-related physiological arousal and rewarding brain activity, respectively (Fig. 5B, C, D). Previous research confirmed the association and causality between the amygdala and physiological arousal, and between NAcc/OFC and rewarding experience (25, 58–60). Moreover, the critical auditory region may be the primary sensory area for physiological arousal, whereas the right secondary auditory region was important for right NAcc activity (Supplemental Fig. 5). This anatomical dissociation suggests that arousal may be linked to lower-level acoustical features (abrupt onsets, changes in harmonicity, etc.) likely to be processed at relatively early levels of the auditory hierarchy, whereas recruitment of the NAcc may be related to more complex cognitive operations, including predictive and mnemonic functions that require more in-depth analysis of auditory patterns, and hence pertain to secondary auditory regions in the ventral stream of the temporal lobe (61). These findings indicate that the physiological arousal of chills has a distinctive neural predictor compared with the rewarding neural activity of chills. Indeed, SCR intensity during chills was not associated with NAcc activity during chills (r = -0.18). Our results therefore support the dissociation between hedonic and arousal responses of musical reward suggested by recent pharmacological studies (62, 63). Predictive networks for physiological arousal and neural reward responses could therefore depend upon partially dissociable mechanisms.

For physiological arousal, amygdala function may provide the interpretation. The amygdala receives substantial sensory information from the cortex and supports vigilance for salient stimuli (64). Consistent with this, a strong connection between the primary auditory region and the amygdala may allow the amygdala to process auditory sensory information readily. Such a pre-listening auditory-amygdala connection may promote chills-related physiological arousal by effectively detecting motivationally salient auditory stimuli. Past research indicated that musical chills are often experienced following sudden dynamic changes triggered by unexpected harmonies or rhythmic uncertainty (48, 65, 66), which would evoke physiological arousal. The pre-listening auditory-amygdala connection may facilitate detection of these auditory cues.

The main prediction nodes for NAcc and insula activity during the chills experience were the auditory-NAcc/OFC connections. NAcc and OFC are known to be core structures in the mesolimbic reward circuit (47, 59). Past studies also indicated that when people feel a strong reward from music, the functional connection between NAcc and the auditory cortex strengthens (9, 10, 25). The spontaneous auditory-NAcc/OFC connection may promote neural positive-arousal experiences represented by NAcc and insula intensity (53), akin to how spontaneous neural replay promotes memory task performance (49, 67). That is, if the auditory-reward connection is already upregulated in the pre-listening period, it may make it easier to engage a strong neural reward response to the subsequent music. As NAcc flexibly encodes the reward dimension that is currently relevant for behavior (68), NAcc rather than OFC may represent both spontaneous neural activity and the relevant rewarding outcomes. In addition, in line with reports of right-hemisphere dominance (9, 10, 25), the highest positive weight in the right NAcc predictive model was the right auditory-NAcc connectivity (Fig. 5C).

Generalization of predictability for state sensory-reward neural connection

We used an independent-sample validation/replication design to examine the generalizability of our predictive models. The applied validation procedure establishes the robustness of the auditory-reward predictive model, even with different brains and different music. Although the sample size of Experiment 2 is quite small, the fact that we obtained significant prediction of chills using the same parameters from Experiment 1 indicates that the model is robust enough to replicate even in small samples. Since generalization was possible for RSFC before music listening but not for the 10-minute intrinsic RSFC before the start of the experiment (Fig. 6A, B, C, D), our predictive models appear specific to the resting period immediately before music listening. We found significant differences in several auditory-reward networks between the two RSFCs (Fig. 6E, F), consistent with suggestions from past studies that RSFC is changeable to some degree (39, 69). These results, together with the trial-by-trial prediction (Fig. 2E), indicate that the pre-listening auditory-reward network reflects a temporary state network rather than individual differences in the intrinsic network. If the pre-listening state plays a critical role in modulating intense emotion for music, then this could partly explain why music elicits different emotional responses even in the same individual at different times. The current methodology could open new horizons in reward neuroscience. If the current results transfer to different modalities, the RSFC between visual or gustatory areas and reward regions might predict strong pleasurable responses to movie viewing or food consumption. Through such examination, the general context effect of the sensory-reward brain network for pleasurable stimuli may be revealed.

Although in the present data we emphasize state-dependent responses, past studies have also indicated that stable, trait-related musical reward sensitivity differs across individuals and also predicts the degree of pleasurable responses (10, 11, 70). We speculate that state-dependent fluctuations of brain activity may interact with stable, trait-related brain structural and functional patterns. High musical-reward responders may more easily enter a mode in which they are likely to experience chills compared with low musical-reward responders, perhaps because high-reward responders have better-developed anatomical connections between the auditory and reward regions (11, 12).

In conclusion, our results demonstrate that the connectivity of the auditory-reward network, but not of other network combinations, immediately before music listening can robustly predict subjective, physiological, and neurobiological aspects of musical chills. Specifically, the right secondary auditory-NAcc/OFC connections were related to neural positive arousal responses, whereas the peri-primary auditory-amygdala connections were associated with physiological arousal. In addition, the brain decoder based on the auditory-reward network could predict the chills experience in an independent dataset from the pre-listening, but not from the intrinsic, resting brain network. Taken together, the findings suggest that the pre-listening auditory-reward network contains information about responses to strong musical rewards before they are experienced. A key factor in why music becomes a strong reward may be a pre-listening state that efficiently exchanges information between auditory and reward systems. If this is a general rule for reward experience (not only music but also, for example, movie/sports viewing and food), the rewarding experience could depend on the pre-connection state between sensory and reward brain regions, which may reflect the anticipation of how the next sensory experience will unfold in a pleasurable manner.

Materials And Methods

Participants

In the sampling phase, we recruited participants through a web advertisement. Using a web-based survey, we asked them to complete two questionnaires assessing how frequently they experienced chills and tears while listening to music. The prevalence of intense emotional responses to music was assessed with the Barcelona Music Reward Questionnaire (BMRQ) (70) and the Aesthetic Experience Scale in Music (AES-M) (12). In experiment 1, forty-three participants who sometimes experienced chills and tears were recruited. Four participants were removed because they did not report any chills or tears during the experiment, and one was removed because of abnormal autonomic nervous system activity. In experiment 2, twelve participants were recruited; one was removed for misunderstanding the online emotion ratings (many button presses under 1 s). The final sample was 38 in experiment 1 (14 women; age: M = 21.8, SD = 1.4; BMRQ: M = 77.6, SD = 10.0, Max = 96, Min = 54; AES-M: M = 50.9, SD = 10.9, Max = 74, Min = 20) and 11 in experiment 2 (2 women; age: M = 21.9, SD = 1.6; BMRQ: M = 83.3, SD = 10.4, Max = 99, Min = 69; AES-M: M = 67.8, SD = 17.1, Max = 96, Min = 41). From these questionnaire scores, we confirmed that the sample did not include individuals with strong musical anhedonia (10). For experiment 1, we verified that the sample size was appropriate using a learning curve analysis (see Supplemental Figure 2). Participants were instructed to abstain from alcoholic beverages the night before the experiment and to avoid coffee and cigarettes for three hours before the experiment. The study was approved by the human ethics research committee of the National Institute of Information and Communications Technology (NICT), and written consent was obtained from all participants. Participants were compensated for their participation in the study.

Experimental procedure

Before the experiment, participants were instructed to select 4 songs: 2 strongly pleasurable chill-evoking and 2 tear-evoking songs (Supplemental sheet). To keep the music style uniform across participants, they were asked to select music from pop/rock genres. In experiment 1, the experimenter additionally selected 4 popular songs based on the previous year's hit chart in Japan (Supplemental Table 1). These songs most commonly included vocals, guitar, bass, and drums, similar to the self-selected songs. In experiment 2, to generalize the machine learning model across experimental paradigms, experimenter-selected songs were drawn from the self-selected music of other participants (Supplemental sheet).

In the fMRI experiment, participants first engaged in a training session to familiarize themselves with the experimental procedure. After that, participants performed two fMRI sessions. Each session consisted of one run in which participants listened to 2 self-selected and 2 experimenter-selected songs. The presentation order was pseudo-counterbalanced, alternating between self-selected and experimenter-selected songs to maximize the uncertainty of the upcoming song across trials (see Supplemental Figure 1). Each trial started with a 5 s instruction, followed by a musical excerpt with a maximum duration of 270 s in experiment 1 (M = 257.5 s, SD = 10.6 s) or 360 s in experiment 2 (M = 281.1 s, SD = 12.6 s). Participants indicated their emotional responses in real time while listening by pressing one of four buttons on an MRI-compatible response pad (1 = neutral, 2 = pleasure, 3(4) = chill, 4(3) = tear). Chills were defined as “goosebumps” or “shivers down the spine,” and tears as “weeping” or “a lump in the throat” (4, 5, 38, 43). The positions of the chill and tear buttons were counterbalanced across participants to avoid an ordering bias for these responses. Participants were instructed to hold down a button for as long as they experienced the corresponding emotion. Button signals were recorded at a frequency of 100 Hz. Note that excerpt duration was not correlated with the duration of any of the four emotional responses (ps > .45). At the end of each excerpt, participants rated sixteen items (from 1 = not at all to 4 = very strong) describing what they felt in response to the excerpt, with 5 s to respond to each item. After the experiment, participants rated their familiarity (from 1 = unfamiliar to 4 = very familiar) with the experimenter-selected music.

To measure RSFC immediately before music listening, participants relaxed for 50 s in experiment 1 and 40 s in experiment 2 after each song was played. We intended the 80 s spent on the sixteen rating items to reduce the carry-over effect of music listening on the RSFC. To avoid residual effects of the rating task, a further 10 s was excluded from the start of the task-rest period.

Quantifying online behavioral measurement

Onset time and duration were recorded for each of the four emotional responses. Button presses shorter than one second were excluded as likely accidental presses. When a press used the same button as the preceding one, the two were merged and their durations concatenated; because participants were asked to keep one button pressed throughout music listening, we assumed such consecutive presses reflected a single sustained response. The durations of each of the four button types were then summed over each song.
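As an illustration, the cleaning steps above can be sketched as follows. This is a minimal sketch: the tuple format is ours, and we assume that "concatenating" the durations of two merged presses means spanning the gap between them, which the text does not specify.

```python
def summed_durations(presses, min_dur=1.0):
    """presses: time-ordered (onset_s, duration_s, button) tuples for one song.

    Drops presses shorter than `min_dur` s (assumed accidental), merges
    consecutive presses of the same button into one sustained response
    (spanning the gap between them -- our interpretation), and returns
    the total duration per button.
    """
    kept = [p for p in presses if p[1] >= min_dur]
    merged = []
    for onset, dur, btn in kept:
        if merged and merged[-1][2] == btn:
            # Same button as the previous press: treat as one sustained response.
            prev_onset, _, _ = merged[-1]
            merged[-1] = (prev_onset, onset + dur - prev_onset, btn)
        else:
            merged.append((onset, dur, btn))
    totals = {}
    for _, dur, btn in merged:
        totals[btn] = totals.get(btn, 0.0) + dur
    return totals
```

Applied to, e.g., two adjacent "pleasure" presses and one sub-second "chill" press, the short press is dropped and the two pleasure presses count as one span.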

Brain image and physiological acquisition

            In experiment 1, data were acquired using a 3T Siemens Trio scanner equipped with a 32-channel head coil. We recorded two experimental runs corresponding to the two blocks of trials in the main experimental task. Functional volumes were acquired using a T2*-weighted gradient-echo EPI sequence (62 interleaved slices, gap: 0.3 mm, voxel size: 3 × 3 × 3 mm, matrix size: 64 × 64, FOV: 192 mm, TE: 30 ms, TR: 2000 ms, flip angle: 75˚, multiband factor: 2). Additionally, a high-resolution anatomical volume was acquired during the rest period in the middle of the experimental session using a T1-weighted sequence (208 slices, no gap, voxel size: 1 × 1 × 1 mm, matrix size: 256 × 256, FOV: 256 mm, TE: 3.37 ms, TR: 1900 ms, flip angle: 9˚), which served as the anatomical reference for the functional scans.

In experiment 2, neuroimaging data were acquired on a 3T Siemens Prisma scanner with a 64-channel head coil. Both functional and anatomical volumes were acquired with the same parameters as in experiment 1. Before the experiment, a 10 min resting-state fMRI session (600 volumes) was acquired with the following T2*-weighted EPI parameters: 72 interleaved slices, gap: 0.16 mm, voxel size: 2 × 2 × 2 mm, matrix size: 100 × 100, FOV: 200 mm, TE: 30 ms, TR: 1000 ms, flip angle: 60˚, multiband factor: 6. These acquisition parameters were chosen for a different purpose.

Concurrently with functional imaging, physiological recordings were acquired using an MR-compatible BIOPAC MP150 polygraph (BIOPAC Systems Inc., USA) in experiments 1 and 2. Cardiovascular, respiration, and electrodermal signals were sampled at 1000 Hz. Respiratory activity was assessed by a strain-gauge transducer incorporated in a belt tied around the chest. To obtain a cardiac signal from a photoplethysmogram, an optical pulse sensor was attached to the proximal phalanx of the pinky finger of the subject's left hand. Skin conductance was measured continuously with Ag/AgCl electrodes placed on the index and middle fingers of the left hand.

Brain image and physiological data pre-processing

            Functional MRI data were pre-processed using Statistical Parametric Mapping software (SPM12; http://www.fil.ion.ucl.ac.uk/spm). Distortion correction was applied using field maps. Functional images were realigned to the mean image of the series, slice-time corrected, motion corrected, co-registered to the structural image, normalized to MNI space, and spatially smoothed with a 6 mm FWHM Gaussian kernel. Afterward, using the CONN toolbox (https://web.conn-toolbox.org/home), common nuisance variables were regressed out, including subject-specific white matter and cerebrospinal fluid signals (five PCA parameters, CompCor) and 12 movement regressors (six head-motion parameters and their first-order temporal derivatives). Physiological noise regressors were also included as nuisance variables. The Tapas PhysIO Toolbox (71) was used to calculate HR and RR variables, including the cardiac response function, respiration response function, and RETROICOR, at the same temporal resolution as the fMRI time series. Because a previous study indicated a relationship between resting physiological arousal and musical chills (72), removing physiological noise is important for isolating the link between resting brain state and musical emotion. Linear trends were removed from the time courses, and band-pass filtering (0.008–0.09 Hz) was applied to the time series of each voxel to reduce the effects of low-frequency drifts and high-frequency physiological noise.

RSFC feature extraction

Regions of interest (ROIs) were defined using a functional brain atlas derived from resting-state fMRI data based on InfoMap and a winner-take-all parcellation algorithm (41, 42). The atlas includes 288 ROIs spanning the whole brain, including the cerebellum and brainstem. Each spherical ROI was 8 mm in diameter, and each ROI was assigned to one of 13 functional network communities (see Figure 2A). For each participant, a time course was computed for each ROI by averaging the BOLD signal of its constituent voxels at each time point. Functional connectivity between each pair of ROIs was then calculated as the partial correlation between their time courses, yielding one correlation matrix per music stimulus per participant. Partial rather than simple correlations were used to rule out indirect connectivity effects (73). Fisher's r-to-z transformation was then applied to improve the normality of the correlation coefficients, resulting in 288 × 288 symmetric connectivity matrices representing the set of connections in each RSFC profile immediately before music listening. The upper triangle of these matrices was used as the feature space for machine-learning-based predictive modelling. Note that four trials of RSFC from different individuals were removed because the Tapas PhysIO Toolbox could not compute physiological denoising variables for them.
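A minimal sketch of this feature extraction is shown below, under the assumption that the ROI set is small enough (as in the network-seed analyses) for the sample covariance to be invertible; the paper does not report its matrix-level implementation. Partial correlations are read off the precision (inverse covariance) matrix:

```python
import numpy as np

def partial_corr_features(ts):
    """Partial-correlation connectivity features from ROI time courses.

    ts : (T, R) array of T time points for R ROIs; requires T > R.
    Returns the Fisher r-to-z transformed upper triangle, R*(R-1)/2 values.
    """
    # The precision matrix gives partial correlations directly:
    # r_ij = -P_ij / sqrt(P_ii * P_jj), controlling for all other ROIs.
    prec = np.linalg.inv(np.cov(ts, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    # Fisher r-to-z improves the normality of the coefficients.
    z = np.arctanh(np.clip(pcorr, -0.999999, 0.999999))
    iu = np.triu_indices_from(z, k=1)
    return z[iu]
```

With more ROIs than time points (e.g., the full 288-ROI matrix on a short rest window), a regularized precision estimator would be needed instead of the plain inverse.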

For predictive modelling, we used network seed-based functional connectivity (each network comprising several ROIs) to examine our auditory-reward network hypothesis alongside other possibilities involving the auditory network. We treated each of the 12 auditory-seed network pairs as a feature set (e.g., Auditory-Reward, Auditory-Default mode), calculating the partial correlation matrix between the 12 auditory ROIs (6 in each hemisphere) and the ROIs of each other network (e.g., Reward: 8 ROIs, Default mode: 65 ROIs). We also examined whole-brain functional connectivity, in line with distributed models of the emotional brain (74, 75). In addition, we exploratorily investigated the 66 other possible combinations of two networks (e.g., Visual-Reward, Salience-FrontoParietal).

Predictive modelling of emotional responses using RSFC and statistical analysis

We used LASSO regression to decode the duration of each emotional response (neutral, pleasure, chill, tear) from RSFC immediately before music listening. The analysis was conducted using scikit-learn 1.0.2 in Python 3.8.13. First, RSFC features were standardized. Next, a LASSO decoder was trained, in which an L1 regularization parameter λ was used for shrinkage and tuned to control overfitting. After averaging the eight RSFCs per participant, nested LOPOCV was applied, with the outer LOPOCV estimating the model's generalizability and the inner LOPOCV determining the optimal parameter λ. In the outer LOPOCV, 37 participants were used as the training set and the remaining participant as the testing set. Using the optimal λ (see next paragraph), a model was trained on all subjects in the training set and then used to predict the outcome of the held-out subject. This procedure was repeated for all 38 participants.

Within each loop of the outer LOPOCV, inner LOPOCVs were applied to determine the optimal λ. The 37-participant training set of each outer loop was further partitioned, analogously to the outer loop. For a given λ in the range [0.001, 0.01, 0.1, 1, 10, 100, 1000], 36 participants were used to train the model and the remaining participant to test it. This procedure was repeated 37 times so that each participant was used once as the testing dataset, resulting in 37 inner LOPOCV loops. For each λ value, the accuracy between actual and predicted outcomes was calculated for each inner loop and averaged across the 37 loops. This mean accuracy was defined as the inner prediction accuracy, and the λ with the highest inner prediction accuracy was selected as the optimal λ for the outer LOPOCV.
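In scikit-learn terms, the nested scheme described above can be sketched as follows. The λ grid follows the text (scikit-learn calls the penalty `alpha`); the scoring metric and `max_iter` setting are our assumptions, as the paper does not report them.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV, LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def nested_lopocv_lasso(X, y, alphas=(0.001, 0.01, 0.1, 1, 10, 100, 1000)):
    """Nested leave-one-participant-out CV for a LASSO decoder.

    X : (n_participants, n_features) RSFC features, y : outcome per participant.
    The inner LOPOCV (GridSearchCV) tunes the L1 penalty; the outer LOPOCV
    (cross_val_predict) yields one out-of-sample prediction per participant.
    """
    model = make_pipeline(StandardScaler(), Lasso(max_iter=10000))
    inner = GridSearchCV(model, {"lasso__alpha": list(alphas)},
                         cv=LeaveOneOut(),
                         scoring="neg_mean_absolute_error")
    # Each outer fold refits the whole inner search on the remaining subjects.
    return cross_val_predict(inner, X, y, cv=LeaveOneOut())
```

Prediction performance would then be summarized as the correlation between the returned predictions and the actual outcomes.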

Parametric and non-parametric statistical tests were used to evaluate the decoding results for the duration of musical emotion. To evaluate whether prediction performance was significantly better than chance, we performed a correlation test. We then conducted a permutation test only for significantly correlated variables, to reduce computational time. For the permutation test, the outer LOPOCV was repeated 10,000 times, each time with the duration of musical emotion permuted without replacement across the training data. We calculated the p-value as the probability of obtaining a mean correlation in the null distribution equal to or higher than the true correlation. FDR correction (Benjamini–Hochberg) was applied to correct for multiple comparisons.
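A simplified sketch of the permutation logic is given below. For brevity it shuffles the outcome against fixed predictions rather than refitting the full nested model on permuted training labels as the text describes, so it is cheaper and only illustrative of how the null distribution and one-sided p-value are formed.

```python
import numpy as np

def permutation_pvalue(y_true, y_pred, n_perm=10000, seed=0):
    """One-sided permutation p-value for a prediction correlation.

    Counts how often a shuffled outcome yields a correlation at least as
    large as the observed one; the +1 terms give the standard
    add-one permutation p-value.
    """
    rng = np.random.default_rng(seed)
    observed = np.corrcoef(y_true, y_pred)[0, 1]
    null = np.array([np.corrcoef(rng.permutation(y_true), y_pred)[0, 1]
                     for _ in range(n_perm)])
    return (np.sum(null >= observed) + 1) / (n_perm + 1)
```

The resulting p-values across outcomes would then be passed to a Benjamini–Hochberg FDR procedure.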

Trial-by-trial predictive modelling using the three periods of the auditory-reward network

             To complement the main analysis, we investigated whether trial-by-trial RSFC can predict the subjective duration of chills. We attempted to predict the duration of musical chills from the preceding auditory-reward RSFC, as in the LOPOCV analysis. We also attempted to predict the ongoing duration of chills from the auditory-reward FC during music listening, because task FC may predict behavior better than RSFC (76, 77). In addition, we tried to predict the duration of chills from the following auditory-reward RSFC: the chills experienced during the preceding music listening may influence the subsequent auditory-reward RSFC, and if so, that RSFC should predict the preceding (backward) chills experience.

As with the LOPOCV-LASSO model, we constructed a nested LOTOCV-LASSO model. First, we removed two outlier trials above three standard deviations, leaving 298 trials for analysis; the analysis yielded the same results even when outliers were not removed. In the inner loop, 296 trials were used to train the model and the remaining trial to test it, to determine the optimal λ. This procedure was repeated 297 times so that each trial was used once as the testing dataset, resulting in 297 inner LOTOCV loops. For each λ value, the accuracy between actual and predicted outcomes was calculated for each inner loop and averaged across the 297 loops. In the outer LOTOCV, 297 trials were used as the training set and the remaining trial as the testing set. Using the optimal λ, the model was trained on all trials in the training set and then used to predict the outcome of the held-out trial, repeating for all 298 trials. Note that because we did not have the post-listening RSFC for the last trial of each of the two sessions, we used 222 trials for the LOTOCV-LASSO model predicting the backward chills experience. To test the prediction of the chills experience for significance, we conducted a correlation test and 1,000 permutation tests on actual versus predicted chills duration.

Validation of RSFC predictive modelling

The prediction results of the auditory-reward network for both the LOPOCV- and LOTOCV-LASSO models were further validated over a range of RSFC time windows, varied in steps of one TR. Because prior evidence suggests that 22.5 s of functional connectivity can distinguish distinct cognitive tasks (22), we set RSFC time windows ranging from 20 to 40 s (20, 22, …, 40 s). Furthermore, using the minimum significant time window of 26 s (see Figure 2F-left), a sliding window analysis was performed over successive 26 s RSFC windows (24 s overlap) from 0 to 40 s (0 to 26, 2 to 28, …, 14 to 40 s). We conducted a correlation test on actual versus predicted chills duration.
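The sliding-window scheme can be sketched as a small index generator, assuming the 2 s TR of experiment 1 (so a 40 s rest period spans 20 volumes and a 26 s window spans 13):

```python
def sliding_windows(n_vols, win_s, step_s=2, tr_s=2):
    """(start, stop) volume indices for overlapping RSFC windows.

    E.g., 26 s windows stepped by 2 s (24 s overlap) over a 40 s rest
    period give windows 0-26 s, 2-28 s, ..., 14-40 s.
    """
    win, step = int(win_s // tr_s), int(step_s // tr_s)
    return [(s, s + win) for s in range(0, n_vols - win + 1, step)]
```

Each (start, stop) pair would index the volumes fed into the connectivity feature extraction for that window.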

Physiological and neural intensity of the emotional chills

To investigate whether RSFC can predict not only the subjective duration but also the physiological arousal and neural activation during musical chills, we examined the intensity of physiological and neural responses while participants experienced chills.

HR and RR were quantified after band-pass filtering the raw signals (HR: low-pass 35 Hz, high-pass 1 Hz; RR: low-pass 1 Hz, high-pass 0.05 Hz). HR was calculated from the inverse of the instantaneous inter-beat intervals (in ms) of the photoplethysmogram, using a peak detection algorithm implemented in SciPy 1.9.0 to detect successive pulse peaks. Careful visual inspection of the cardiac signal ensured that the automatic peak detection had been performed correctly. We then performed cubic spline interpolation (SciPy 1.9.0) between successive HR values to obtain 10 Hz time-series HR data. RR was calculated with the same peak detection and cubic spline interpolation procedure. SCR data were analyzed using Ledalab (Version 3.4.9, MATLAB). Data were low-pass filtered (1 Hz) and downsampled to 10 Hz. Continuous decomposition analysis (CDA) was performed to extract phasic SCRs from the electrodermal activity (50). CDA yields the phasic driver underlying skin conductance data as a continuous measure of SCR that is robust to common artifacts. We examined only SCR responses above 0.05 μS, which indicate an unambiguous increase in skin conductance (72). SCR scores were log-transformed to correct for distributional skew (50).
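The HR pipeline (peak detection, inter-beat intervals, cubic spline interpolation to 10 Hz) might look like the sketch below; the `find_peaks` distance threshold is our assumption, since the paper does not report its detection settings.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.interpolate import CubicSpline

def instantaneous_hr(pulse, fs=1000.0, out_fs=10.0):
    """Instantaneous heart rate (bpm) resampled to a regular 10 Hz grid.

    pulse : filtered photoplethysmogram sampled at `fs` Hz.
    """
    # Refractory distance of 0.4 s between peaks (assumed setting).
    peaks, _ = find_peaks(pulse, distance=int(0.4 * fs))
    t = peaks / fs                 # peak times in seconds
    ibi = np.diff(t)               # inter-beat intervals in seconds
    hr = 60.0 / ibi                # bpm, assigned to the later beat of each pair
    cs = CubicSpline(t[1:], hr)    # spline through successive HR values
    grid = np.arange(t[1], t[-1], 1.0 / out_fs)
    return grid, cs(grid)
```

RR would follow the same pattern on the respiration belt signal, with a longer refractory distance.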

We calculated HR, RR, and SCR scores during the chills experience after synchronizing them with the subjective responses. For every subjective chills experience, the average physiological response in the 1 s window before onset was taken as a baseline. Scores exceeding three standard deviations from the mean were removed as outliers. The sum of each physiological index while experiencing chills was taken as the score for each of the three physiological activities.

Neural responses during the chills experience were quantified from the BOLD signal of each of 140 ROIs from AAL3.1 (45), excluding cerebellar regions. Pre-processing was the same as for the RSFC data except for filtering: the time-series BOLD signals were high-pass filtered with a cut-off of 0.008 Hz (125 s), and no low-pass filter was applied, to avoid removing task-related neural activity. Percent signal change was calculated relative to the 2 volumes spanning one volume before and one volume after chills onset, to correct for differences in starting baseline. Time courses were shifted by 4 s to account for the hemodynamic delay of the fMRI signal. Data deviating from the average by more than three standard deviations were removed for quality assurance (53). The sum of percent signal change during the chills response served as each of the 140 neural activity scores. We conducted correlation and 10,000 permutation tests on actual versus predicted physiological and neural activities, with FDR correction for multiple comparisons.

Asymmetry analysis of the auditory-reward RSFC

We tested for rightward asymmetry of emotional responses (7, 9, 10, 25) by performing machine learning predictions of the subjective duration, SCR intensity, right-NAcc activity, and left-insula activity of chills from the right versus the left hemisphere ROIs, using the 40 s auditory-reward RSFC. We used the 45 connections of a 10 × 10 partial correlation matrix (six auditory and four reward ROIs) from each hemisphere to build the LOPOCV-LASSO model. To test the hypothesis that the right but not the left hemisphere relates to neuropsychological responses to chills, we conducted correlation and 10,000 permutation tests on actual versus predicted chills experience, with FDR correction for multiple comparisons. Furthermore, to test the difference in prediction accuracy (Pearson's r) between the right and left hemispheres, we performed 10,000 bootstrap tests (78).

Generalization of predictive model

In experiment 2, participants were presented with 8 songs (4 self-selected and 4 experimenter-selected). The prediction target was the mean duration of subjective chills responses over the 8 songs for each participant. The auditory-reward RSFC immediately before music listening (task rest) was calculated for each song and then averaged within subject over the 8 songs, yielding one RSFC per subject. The RSFC duration was varied from 6 to 30 s. By entering the auditory-reward RSFC scores into the machine learning model from experiment 1, we obtained predicted scores for the subjective, physiological, and neural aspects of the chills experience at each RSFC duration. Furthermore, we calculated the auditory-reward RSFC from the traditional 10 min resting state (intrinsic rest) to examine whether the intrinsic brain network differs from the RSFC immediately before music listening. To compare the two types of RSFC under equal conditions, we sampled 8 short BOLD-signal segments from the 10 min resting state; this random sampling was repeated 10,000 times to avoid biased sampling. The auditory-reward RSFC scores, averaged over the 10,000 samplings, were entered into the machine learning model from experiment 1 to obtain predicted scores for the intrinsic resting brain state. Because the summed prediction accuracy over subjective duration, SCR, right-NAcc, and left-insula activity was highest for the 18 s auditory-reward RSFC (see Supplemental Figure 7), we used 18 s for both task and intrinsic rest. Prediction accuracy for task and intrinsic rest was evaluated with one-sided correlation tests, with FDR correction for multiple comparisons.
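The window-matching between intrinsic and task rest can be sketched as follows. This is an illustrative sketch: the feature extractor is passed in as a callable, and the function and parameter names are ours.

```python
import numpy as np

def sampled_rest_features(ts, extract, n_win=8, win=9, n_rep=10000, seed=0):
    """Match intrinsic rest to task rest by repeated random window sampling.

    ts : (T, R) resting-state ROI time courses.
    extract : callable mapping a (win, R) segment to a feature vector.
    Each repetition draws `n_win` random windows of `win` volumes (mirroring
    the 8 task-rest segments), averages their features, and the final
    feature is the average over `n_rep` repetitions.
    """
    rng = np.random.default_rng(seed)
    T = ts.shape[0]
    reps = []
    for _ in range(n_rep):
        starts = rng.integers(0, T - win + 1, size=n_win)
        reps.append(np.mean([extract(ts[s:s + win]) for s in starts], axis=0))
    return np.mean(reps, axis=0)
```

The averaged feature vector would then be fed to the trained decoder exactly as the task-rest RSFC is.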

            In addition, as a complement to the machine learning approach, we examined a univariate analysis of the difference in the 18 s auditory-reward RSFC scores between task and intrinsic rest. Using each of the eight auditory-reward network pairs for each participant, we fitted a linear mixed model, with significance assessed via Satterthwaite's approximation in lme4 (79).

References

  1. S. A. Mehr, M. Singh, D. Knox, D. M. Ketter, D. Pickens-Jones, S. Atwood, C. Lucas, N. Jacoby, A. A. Egner, E. J. Hopkins, R. M. Howard, J. K. Hartshorne, M. V. Jennings, J. Simson, C. M. Bainbridge, S. Pinker, T. J. O’Donnell, M. M. Krasnow, L. Glowacki, Universality and diversity in human song. Science. 366, 1–17 (2019).
  2. P. N. Juslin, P. Laukka, Expression, Perception, and Induction of Musical Emotions: A Review and a Questionnaire Study of Everyday Listening. J. New Music Res. 33, 217–238 (2004).
  3. L. Dubé, J. L. Le Bel, The content and structure of laypeople’s concept of pleasure. Cogn. Emot. 17, 263–295 (2003).
  4. A. J. Blood, R. J. Zatorre, Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion. Proc. Natl. Acad. Sci. U. S. A. 98, 11818–11823 (2001).
  5. V. N. Salimpoor, M. Benovoy, K. Larcher, A. Dagher, R. J. Zatorre, Anatomically distinct dopamine release during anticipation and experience of peak emotion to music. Nat. Neurosci. 14, 257–262 (2011).
  6. O. Shany, N. Singer, B. P. Gold, N. Jacoby, R. Tarrasch, T. Hendler, R. Granot, Surprise-related activation in the nucleus accumbens interacts with music-induced pleasantness. Soc. Cogn. Affect. Neurosci. 14, 459–470 (2019).
  7. T. P. Freeman, R. A. Pope, M. B. Wall, J. A. Bisby, M. Luijten, C. Hindocha, C. Mokrysz, W. Lawn, A. Moss, M. A. P. Bloomfield, C. J. A. Morgan, D. J. Nutt, H. V. Curran, Cannabis Dampens the Effects of Music in Brain Regions Sensitive to Reward and Emotion. Int. J. Neuropsychopharmacol. 21, 21–32 (2018).
  8. B. P. Gold, E. Mas-Herrero, Y. Zeighami, M. Benovoy, A. Dagher, R. J. Zatorre, Musical reward prediction errors engage the nucleus accumbens and motivate learning. Proc. Natl. Acad. Sci. 116 (2019), doi:10.1073/PNAS.1809855116.
  9. V. N. Salimpoor, I. van den Bosch, N. Kovacevic, A. R. McIntosh, A. Dagher, R. J. Zatorre, Interactions between the nucleus accumbens and auditory cortices predict music reward value. Science. 340, 216–9 (2013).
  10. N. Martínez-Molina, E. Mas-Herrero, A. Rodríguez-Fornells, R. J. Zatorre, J. Marco-Pallarés, Neural correlates of specific musical anhedonia. Proc. Natl. Acad. Sci. 113, E7337–E7345 (2016).
  11. N. Martínez-Molina, E. Mas-Herrero, A. Rodríguez-Fornells, R. J. Zatorre, J. Marco-Pallarés, White matter microstructure reflects individual differences in music reward sensitivity. J. Neurosci. 39, 5018–5027 (2019).
  12. M. E. Sachs, R. J. Ellis, G. Schlaug, P. Loui, Brain connectivity reflects human aesthetic responses to music. Soc. Cogn. Affect. Neurosci. 11, 884–891 (2016).
  13. R. Adolphs, How should neuroscience study emotions? By distinguishing emotion states, concepts, and experiences. Soc. Cogn. Affect. Neurosci. 12, 24–31 (2017).
  14. E. S. Finn, X. Shen, D. Scheinost, M. D. Rosenberg, J. Huang, M. M. Chun, X. Papademetris, R. Todd Constable, Functional connectome fingerprinting: identifying individuals using patterns of brain connectivity. Nat. Neurosci. 18, 1–11 (2015).
  15. A. T. Drysdale, L. Grosenick, J. Downar, K. Dunlop, F. Mansouri, Y. Meng, R. N. Fetcho, B. Zebley, D. J. Oathes, A. Etkin, A. F. Schatzberg, K. Sudheimer, J. Keller, H. S. Mayberg, F. M. Gunning, G. S. Alexopoulos, M. D. Fox, A. Pascual-Leone, H. U. Voss, B. Casey, M. J. Dubin, C. Liston, Resting-state connectivity biomarkers define neurophysiological subtypes of depression. Nat. Med. 23 (2016), doi:10.1038/nm.4246.
  16. R. E. Beaty, Y. N. Kenett, A. P. Christensen, M. D. Rosenberg, M. Benedek, Q. Chen, A. Fink, J. Qiu, T. R. Kwapil, M. J. Kane, P. J. Silvia, Robust prediction of individual creative ability from brain functional connectivity. Proc. Natl. Acad. Sci., 201713532 (2018).
  17. W. T. Hsu, M. D. Rosenberg, D. Scheinost, R. T. Constable, M. M. Chun, Resting-state functional connectivity predicts neuroticism and extraversion in novel individuals. Soc. Cogn. Affect. Neurosci. 13, 224–232 (2018).
  18. S. Marek, B. Tervo-clemmens, F. J. Calabro, D. F. Montez, B. P. Kay, A. S. Hatoum, M. R. Donohue, W. Foran, R. L. Miller, T. J. Hendrickson, S. M. Malone, S. Kandala, Reproducible brain-wide association studies require thousands of individuals. Nature (2022), doi:10.1038/s41586-022-04492-9.
  19. E. S. Finn, Is it time to put rest to rest? Trends Cogn. Sci., 1–12 (2021).
  20. I. Momennejad, A. R. Otto, N. D. Daw, K. A. Norman, Offline replay supports planning in human reinforcement learning. Elife. 7, 1–25 (2018).
  21. S. Sadaghiani, J. B. Poline, A. Kleinschmidtc, M. D’Esposito, Ongoing dynamics in large-scale functional connectivity predict perception. Proc. Natl. Acad. Sci. U. S. A. 112, 8463–8468 (2015).
  22. J. Gonzalez-Castillo, C. W. Hoy, D. A. Handwerker, M. E. Robinson, L. C. Buchanan, Z. S. Saad, P. A. Bandettini, Tracking ongoing cognition in individuals using brief, whole-brain functional connectivity patterns. Proc. Natl. Acad. Sci. U. S. A. 112, 8762–8767 (2015).
  23. A. Kucyi, M. Esterman, J. Capella, A. Green, M. Uchida, J. Biederman, J. D. E. Gabrieli, E. M. Valera, S. Whitfield-Gabrieli, Prediction of stimulus-independent and task-unrelated thought from functional brain networks. Nat. Commun. 12 (2021), doi:10.1038/s41467-021-22027-0.
  24. B. Chew, T. U. Hauser, M. Papoutsi, J. Magerkurth, R. J. Dolan, R. B. Rutledge, Endogenous fluctuations in the dopaminergic midbrain drive behavioral choice variability. Proc. Natl. Acad. Sci. U. S. A. 116, 18732–18737 (2019).
  25. E. Mas-Herrero, A. Dagher, M. Farrés-Franch, R. J. Zatorre, Unraveling the temporal dynamics of reward signals in music-induced pleasure with TMS. J. Neurosci. 41, 3889–3899 (2021).
  26. K. Iigaya, T. U. Hauser, Z. Kurth-Nelson, J. P. O’Doherty, P. Dayan, R. J. Dolan, The value of what’s to come: Neural mechanisms coupling prediction error and reward anticipation. Sci. Adv. (2020), doi:10.1101/588699.
  27. T. Kahnt, J. Heinzle, S. Q. Park, J. D. Haynes, The neural code of reward anticipation in human orbitofrontal cortex. Proc. Natl. Acad. Sci. U. S. A. 107, 6010–6015 (2010).
  28. F. Filimon, J. D. Nelson, T. J. Sejnowski, M. I. Sereno, G. W. Cottrell, The ventral striatum dissociates information expectation, reward anticipation, and reward receipt. Proc. Natl. Acad. Sci. U. S. A. 117, 15200–15208 (2020).
  29. S. Bray, S. Shimojo, J. P. O’Doherty, Human medial orbitofrontal cortex is recruited during experience of imagined and real rewards. J. Neurophysiol. 103, 2506–2512 (2010).
  30. J. D. Howard, J. A. Gottfried, P. N. Tobler, T. Kahnt, Identity-specific coding of future rewards in the human orbitofrontal cortex. Proc. Natl. Acad. Sci. U. S. A. 112, 5195–5200 (2015).
  31. S. Oldham, C. Murawski, A. Fornito, G. Youssef, M. Yücel, V. Lorenzetti, The anticipation and outcome phases of reward and loss processing: A neuroimaging meta-analysis of the monetary incentive delay task. Hum. Brain Mapp. 39, 3398–3418 (2018).
  32. B. Knutson, S. M. Greer, Anticipatory affect: Neural correlates and consequences for choice. Philos. Trans. R. Soc. B Biol. Sci. 363, 3771–3786 (2008).
  33. G. Loewenstein, Anticipation and the Valuation of Delayed Consumption. Econ. J. 97, 666 (1987).
  34. G. Marion, G. M. Di Liberto, S. A. Shamma, The Music of Silence. Part I: Responses to Musical Imagery Accurately Encode Melodic Expectations and Acoustics. J. Neurosci. 41, 7435–7448 (2021).
  35. M. Regev, A. R. Halpern, A. M. Owen, A. D. Patel, R. J. Zatorre, Mapping Specific Mental Content during Musical Imagery. Cereb. Cortex, 1–19 (2021).
  36. M. Groussard, G. Rauchs, B. Landeau, F. Viader, B. Desgranges, F. Eustache, H. Platel, The neural substrates of musical memory revealed by fMRI and two semantic tasks. Neuroimage. 53, 1301–1309 (2010).
  37. Z. Deng, R. Navarathna, P. Carr, S. Mandt, Y. Yue, I. Matthews, G. Mori, in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017).
  38. K. Mori, M. Iwanaga, Being emotionally moved is associated with phasic physiological calming during tonic physiological arousal from pleasant tears. Int. J. Psychophysiol. 159, 47–59 (2021).
  39. J. N. van der Meer, M. Breakspear, L. J. Chang, S. Sonkusare, L. Cocchi, Movie viewing elicits rich and reliable brain state dynamics. Nat. Commun. 11, 1–14 (2020).
  40. R. Tibshirani, Regression Shrinkage and Selection via the Lasso. J. R. Stat. Soc. Ser. B. 58, 267–288 (1996).
  41. B. A. Seitzman, C. Gratton, S. Marek, R. V. Raut, N. U. F. Dosenbach, B. L. Schlaggar, S. E. Petersen, D. J. Greene, A set of functionally-defined brain regions with improved representation of the subcortex and cerebellum. Neuroimage. 206, 116290 (2020).
  42. J. D. Power, A. L. Cohen, S. M. Nelson, G. S. Wig, K. A. Barnes, J. A. Church, A. C. Vogel, T. O. Laumann, F. M. Miezin, B. L. Schlaggar, S. E. Petersen, Functional Network Organization of the Human Brain. Neuron. 72, 665–678 (2011).
  43. K. Mori, M. Iwanaga, Two types of peak emotional responses to music: The psychophysiology of chills and tears. Sci. Rep. 7, 46063 (2017).
  44. R. de Fleurian, M. T. Pearce, Chills in music: A systematic review. Psychol. Bull. 147, 890–920 (2021).
  45. E. T. Rolls, C. C. Huang, C. P. Lin, J. Feng, M. Joliot, Automated anatomical labelling atlas 3. Neuroimage. 206, 116189 (2020).
  46. M. F. Glasser, T. S. Coalson, E. C. Robinson, C. D. Hacker, J. Harwell, E. Yacoub, K. Ugurbil, J. Andersson, C. F. Beckmann, M. Jenkinson, S. M. Smith, D. C. Van Essen, A multi-modal parcellation of human cerebral cortex. Nature (2016), doi:10.1038/nature18933.
  47. S. N. Haber, B. Knutson, The reward circuit: Linking primate anatomy and human imaging. Neuropsychopharmacology. 35, 4–26 (2010).
  48. K. Mori, Decoding peak emotional responses to music from computational acoustic and lyrical features. Cognition. 222, 105010 (2022).
  49. Y. Liu, M. M. Nour, N. W. Schuck, T. E. Behrens, R. J. Dolan, Decoding cognition from spontaneous neural activity. Nat. Rev. Neurosci. (2022), doi:10.1038/s41583-022-00570-z.
  50. M. Benedek, C. Kaernbach, Decomposition of skin conductance data by means of nonnegative deconvolution. Psychophysiology. 47, 647–658 (2010).
  51. E. Mas-Herrero, L. Maini, G. Sescousse, R. J. Zatorre, Common and distinct neural correlates of music and food-induced pleasure: A coordinate-based meta-analysis of neuroimaging studies. Neurosci. Biobehav. Rev. 123, 61–71 (2021).
  52. S. Koelsch, A coordinate-based meta-analysis of music-evoked emotions. Neuroimage. 223, 117350 (2020).
  53. B. Knutson, K. Katovich, G. Suri, Inferring affect from fMRI data. Trends Cogn. Sci. 18, 422–428 (2014).
  54. K. C. Berridge, M. L. Kringelbach, Pleasure Systems in the Brain. Neuron. 86, 646–664 (2015).
  55. P. Schneider, V. Sluming, N. Roberts, M. Scherg, R. Goebel, H. J. Specht, H. G. Dosch, S. Bleeck, C. Stippich, A. Rupp, Structural and functional asymmetry of lateral Heschl’s gyrus reflects pitch perception preference. Nat. Neurosci. 8, 1241–1247 (2005).
  56. I. S. Johnsrude, V. B. Penhune, R. J. Zatorre, Functional specificity in the right human auditory cortex for perceiving pitch direction. Brain. 123, 155–163 (2000).
  57. P. Albouy, L. Benjamin, B. Morillon, R. J. Zatorre, Distinct sensitivity to spectrotemporal modulation supports brain asymmetry for speech and melody. Science. 367, 1043–1047 (2020).
  58. F. Beissner, K. Meissner, K. J. Bär, V. Napadow, The autonomic brain: An activation likelihood estimation meta-analysis for central processing of autonomic function. J. Neurosci. 33, 10503–10511 (2013).
  59. O. Bartra, J. T. McGuire, J. W. Kable, The valuation system: A coordinate-based meta-analysis of BOLD fMRI experiments examining neural correlates of subjective value. Neuroimage. 76, 412–427 (2013).
  60. C. S. Inman, K. R. Bijanki, D. I. Bass, R. E. Gross, S. Hamann, J. T. Willie, Human amygdala stimulation effects on emotion physiology and emotional experience. Neuropsychologia. 145, 106722 (2020).
  61. R. J. Zatorre, From perception to pleasure: the neuroscience of music and why we love it (Oxford University Press, 2023).
  62. L. Ferreri, E. Mas-Herrero, R. J. Zatorre, P. Ripollés, A. Gomez-Andres, H. Alicart, G. Olivé, J. Marco-Pallarés, R. M. Antonijoan, M. Valle, J. Riba, A. Rodriguez-Fornells, Dopamine modulates the reward experiences elicited by music. Proc. Natl. Acad. Sci. U. S. A. 116, 3793–3798 (2019).
  63. E. Mas-Herrero, F. Pla-Juncà, L. Ferreri, G. Cardona, J. Riba, R. J. Zatorre, M. Valle, R. M. Antonijoan, A. Rodriguez-Fornells, The role of opioid transmission in music-induced pleasure. Ann. N. Y. Acad. Sci., 1–10 (2022).
  64. L. Pessoa, Emotion and cognition and the amygdala: From “what is it?” to “what’s to be done?” Neuropsychologia. 48, 3416–3429 (2010).
  65. J. A. Sloboda, Music structure and emotional response: Some empirical findings. Psychol. Music. 19, 110–120 (1991).
  66. F. Nagel, R. Kopiez, O. Grewe, E. Altenmüller, Psychoacoustical correlates of musically induced chills. Music. Sci. 12, 101–113 (2008).
  67. N. W. Schuck, Y. Niv, Sequential replay of non-spatial task states in the human hippocampus. Science. 364, eaaw5181 (2019).
  68. S. C. Weber, T. Kahnt, B. B. Quednow, P. N. Tobler, Fronto-striatal pathways gate processing of behaviorally relevant reward dimensions. PLoS Biol. 16, e2005722 (2018).
  69. M. W. Cole, D. S. Bassett, J. D. Power, T. S. Braver, S. E. Petersen, Intrinsic and task-evoked network architectures of the human brain. Neuron. 83, 238–251 (2014).
  70. E. Mas-Herrero, J. Marco-Pallarés, U. Lorenzo-Seva, R. J. Zatorre, A. Rodriguez-Fornells, Individual differences in music reward experiences. Music Percept. 31, 118–138 (2013).
  71. L. Kasper, S. Bollmann, A. O. Diaconescu, C. Hutton, J. Heinzle, S. Iglesias, T. U. Hauser, M. Sebold, Z. M. Manjaly, K. P. Pruessmann, K. E. Stephan, The PhysIO Toolbox for Modeling Physiological Noise in fMRI Data. J. Neurosci. Methods. 276, 56–72 (2017).
  72. K. Mori, M. Iwanaga, Resting physiological arousal is associated with the experience of music-induced chills. Int. J. Psychophysiol. 93, 1–7 (2014).
  73. S. M. Smith, K. L. Miller, G. Salimi-Khorshidi, M. Webster, C. F. Beckmann, T. E. Nichols, J. D. Ramsey, M. W. Woolrich, Network modelling methods for FMRI. Neuroimage. 54, 875–891 (2011).
  74. P. A. Kragel, K. S. LaBar, Decoding the Nature of Emotion in the Brain. Trends Cogn. Sci. 20, 444–455 (2016).
  75. P. A. Kragel, L. Koban, L. F. Barrett, T. D. Wager, Representation, Pattern Information, and Brain Signatures: From Neurons to Neuroimaging. Neuron. 99, 257–273 (2018).
  76. A. S. Greene, S. Gao, D. Scheinost, R. T. Constable, Task-induced brain state manipulation improves prediction of individual traits. Nat. Commun. 9 (2018), doi:10.1038/s41467-018-04920-3.
  77. E. S. Finn, P. A. Bandettini, Movie-watching outperforms rest for functional connectivity-based prediction of behavior. Neuroimage. 235, 117963 (2021).
  78. W. H. Beasley, L. DeShea, L. E. Toothaker, J. L. Mendoza, D. E. Bard, J. L. Rodgers, Bootstrapping to Test for Nonzero Population Correlation Coefficients Using Univariate Sampling. Psychol. Methods. 12, 414–433 (2007).
  79. D. Bates, M. Mächler, B. M. Bolker, S. C. Walker, Fitting Linear Mixed-Effects Models Using lme4. J. Stat. Softw. (2015), doi:10.18637/jss.v067.i01.