README

Music Genre fMRI Dataset
by Tomoya Nakai, Naoko Koide-Majima, and Shinji Nishimoto

Reference:
Nakai, Koide-Majima, and Nishimoto (2021). Correspondence of categorical and feature-based representations of music in the human brain. Brain and Behavior, 11(1), e01936. https://doi.org/10.1002/brb3.1936

We measured brain activity using functional MRI while five subjects ("sub-001", ..., "sub-005") listened to music stimuli from 10 different genres.

The dataset consists of subject-wise subfolders ("sub-001", ...). Each subject's folder contains the following subfolders:
1) anat: T1-weighted structural images
2) func: functional signals (multi-band echo-planar images)

Each subject performed 18 runs: 12 training runs and 6 test runs. The training and test data are named as follows:
Training data: sub-00*_task-Training_run-**_bold.json
Test data: sub-00*_task-Test_run-**_bold.json

Each *_events.tsv file contains the following columns (a loading sketch is given at the end of this README):
onset: stimulus onset (s)
duration: stimulus duration (s)
genre: genre type (one of the 10 genres)
track: index identifying the original track
start: onset of the excerpt within the original track (s)
end: offset of the excerpt within the original track (s)

All stimuli are 15 s long. For each clip, 2-s fade-in and fade-out effects were applied, and the overall signal intensity was normalized by its root mean square (RMS).

For the training runs, the 1st stimulus (0-15 s) is the same as the last stimulus of the previous run (600-615 s). For the test runs, the 1st stimulus (0-15 s) is the same as the last stimulus of the same run (600-615 s).

The original music stimuli (GTZAN dataset) are available here: http://marsyas.info/downloads/datasets.html

***Caution***
This dataset can be used for research purposes only. The data were anonymized, and users shall not perform analyses that re-identify individual subjects.
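
***Example code (unofficial sketches)***
The snippets below are not part of the original dataset release; they are minimal Python sketches under stated assumptions, not definitive implementations.

Reading one run's event file, assuming pandas is installed; the path is illustrative and should be adjusted to wherever the dataset is stored:

import pandas as pd

# Load the tab-separated events file for one (hypothetical) training run.
events = pd.read_csv(
    "sub-001/func/sub-001_task-Training_run-01_events.tsv",
    sep="\t",
)

# Columns described above: onset, duration, genre, track, start, end.
print(events[["onset", "duration", "genre", "track", "start", "end"]].head())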
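
Because the first 15-s clip of each training run repeats the last clip of the preceding run, one simple convention when pooling training runs is to drop events with onset below 15 s (whether to keep the opening clip of the very first run is an analysis choice). A minimal sketch, reusing the events table loaded above:

# Drop the duplicated opening stimulus (onset 0-15 s) of a training run.
# Assumes `events` was loaded as in the previous sketch.
deduped = events[events["onset"] >= 15.0].reset_index(drop=True)
print(len(events), "events before,", len(deduped), "after dropping the repeat")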
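
The stimulus preprocessing described above (2-s fade-in/fade-out and RMS normalization) could look roughly like the following numpy sketch. The fade shape (linear here) and the target RMS level are assumptions for illustration, not values taken from the paper:

import numpy as np

def apply_fades(waveform: np.ndarray, sr: int, fade_s: float = 2.0) -> np.ndarray:
    # Linear fade-in and fade-out of fade_s seconds each (2 s per the README).
    n = int(sr * fade_s)
    ramp = np.linspace(0.0, 1.0, n)
    out = waveform.astype(float).copy()
    out[:n] *= ramp
    out[-n:] *= ramp[::-1]
    return out

def rms_normalize(waveform: np.ndarray, target_rms: float = 0.1) -> np.ndarray:
    # Scale the waveform so its root-mean-square amplitude equals target_rms
    # (target_rms is an illustrative parameter, not from the paper).
    rms = np.sqrt(np.mean(waveform ** 2))
    return waveform * (target_rms / rms)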