# Deep Image Reconstruction

## Original paper

Shen, Horikawa, Majima, and Kamitani (2019) Deep image reconstruction from human brain activity. PLOS Computational Biology.

## Dataset

### MRI data

This dataset contains fMRI data from three subjects ('sub-01', 'sub-02', and 'sub-03'). Each subject's data comprise five types of MRI data collected over multiple scanning sessions.

- 'ses-perceptionNaturalImageTraining': fMRI data from the training natural image sessions in the image presentation experiment (120 runs; 15 sessions)
- 'ses-perceptionNaturalImageTest': fMRI data from the test natural image sessions in the image presentation experiment (24 runs; 3 sessions)
- 'ses-perceptionArtificialImage': fMRI data from the geometric shape sessions in the image presentation experiment (20 runs; 2-3 sessions)
- 'ses-perceptionLetterImage': fMRI data from the alphabetical letter sessions in the image presentation experiment (12 runs; 1 session)
- 'ses-imagery': fMRI data from the imagery experiment (20 runs; 3-4 sessions)

Each scanning session consisted of functional (EPI) and anatomical (inplane T2) images. The functional EPI images covered the entire brain (TR, 2000 ms; TE, 43 ms; flip angle, 80°; voxel size, 2 × 2 × 2 mm; FOV, 192 × 192 mm; number of slices, 76; slice gap, 0 mm; multiband factor, 4), and the inplane T2-weighted anatomical images were acquired with the same slices used for the EPI (TR, 11,000 ms; TE, 59 ms; flip angle, 160°; voxel size, 0.75 × 0.75 × 2.0 mm; FOV, 192 × 192 mm). The dataset also includes a T1-weighted anatomical reference image for each subject (TR, 2250 ms; TE, 3.06 ms; TI, 900 ms; flip angle, 9°; voxel size, 1.0 × 1.0 × 1.0 mm; FOV, 256 × 256 mm). The T1-weighted images were scanned only once for each subject in a separate scanning session and are stored in 'ses-anatomy' directories. The T1-weighted images were defaced by pydeface. All DICOM files were converted to NIfTI-1 files with mri_convert in FreeSurfer. In addition, the dataset contains mask images of manually defined ROIs for each subject in the 'sourcedata' directory (see 'README' in 'sourcedata' for more details).

### Task event files

Task event files ('sub-\*_ses-\*_task-\*_run-\*_events.tsv') contain the events (stimulus presentation, subject responses, etc.) recorded during the fMRI runs.

In the task event files for the image presentation experiments ('task-perception'), each column represents:

- 'onset': onset time of an event (sec)
- 'duration': duration of the event (sec)
- 'event_type': type of the event (1: stimulus presentation block; 2: repetition block; -1, -2, and -3: initial, inter-trial, and post rest blocks without visual stimulus)
- 'stimulus_id': stimulus ID of the image presented in a stimulus block ('n/a' in rest blocks)
- 'stimulus_name': stimulus file name of the image presented in a stimulus block ('n/a' in rest blocks)
- 'response_time': time of the button press in the block, elapsed time from the beginning of each run (sec; 'n/a' when the subject did not press the button in the block)
- Additional columns 'category_index' and 'image_index' are for internal use.
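
The following minimal sketch illustrates how such a perception event file can be read and filtered with pandas. The file path is only an example of the naming pattern described above (the session labels in the released dataset may carry session numbers), and the use of pandas is an assumption, not part of the dataset itself.

```python
import pandas as pd

# Illustrative path following the naming pattern above; the exact session
# label in the released dataset may differ (assumption).
events_path = ('sub-01/ses-perceptionNaturalImageTraining01/func/'
               'sub-01_ses-perceptionNaturalImageTraining01_task-perception_run-01_events.tsv')

# The event files are tab-separated and use 'n/a' for missing values
# (rest blocks, blocks without a button press).
events = pd.read_csv(events_path, sep='\t', na_values='n/a')

# Keep only stimulus presentation blocks (event_type == 1) and list the
# presented images with their onsets.
stim_blocks = events[events['event_type'] == 1]
print(stim_blocks[['onset', 'duration', 'stimulus_id', 'stimulus_name']])
```
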
In the task event files for the imagery experiments ('task-imagery'), each column represents:

- 'onset': onset time of an event (sec)
- 'duration': duration of the event (sec)
- 'trial_no': trial (block) number of the event
- 'event_type': type of the event (-1: rest block; 1: cue presentation period; 2: imagery period; 3: evaluation of imagery quality period; 4: inter-trial and post rest period)
- 'category_id': ImageNet/WordNet synset ID of the category which the subject was instructed to imagine in the block ('n/a' in rest blocks)
- 'category_name': ImageNet/WordNet synset (category) which the subject was instructed to imagine in the block ('n/a' in rest blocks)
- 'response_time': time of the button press for the imagery quality evaluation in the block, elapsed time from the beginning of each run (sec; 'n/a' when the subject did not press the button in the block)
- 'evaluation': vividness of the mental imagery as evaluated by the subject (5: very vivid, 4: fairly vivid, 3: rather vivid, 2: not vivid, 1: cannot recognize the target image)
- Additional column 'category_index' is for internal use.

#### Image/category labels

In the natural image presentation experiments, the stimulus images are named like 'n03626115_19498', where 'n03626115' is the ImageNet/WordNet ID of a synset (category) and '19498' is the image ID.

In the geometric shape presentation experiments, the stimuli consisted of a total of 40 combinations of 5 shapes (filled square, small opened square, large opened square, +, and X) and 8 colors (red, green, blue, cyan, magenta, yellow, white, and black). The stimuli were named as 'colorExpStim<ID>_<color>_<shape>' (e.g., 'colorExpStim01_red_square').

In the alphabetical letter presentation experiments, 'A', 'C', 'E', 'I', 'N', 'O', 'R', 'S', 'T', and 'U' were used as visual stimuli. The stimuli were named as 'letter_black_img<ID>_<letter>' (e.g., 'letter_black_img011_A').

In the imagery experiments, the categories are named by their ImageNet/WordNet synset ID (e.g., 'n03626115').

The stimulus and category names are included in the task event files as 'stimulus_name' and 'category_name', respectively. For use in analysis code, the task event files also contain 'stimulus_id' and 'category_id', which are float or integer numbers generated from the stimulus or category names (e.g., '1518878.005958' represents image 'n01518878_5958'); a conversion sketch follows the list below.

The mapping between stimulus names and IDs (the first and second columns from the left in each file are 'stimulus_name' and 'stimulus_id', respectively):

- [stimulus_NaturalImageTraining.tsv](https://ndownloader.figshare.com/files/14876741)
- [stimulus_NaturalImageTest.tsv](https://ndownloader.figshare.com/files/14876738)
- [stimulus_LetterImage.tsv](https://ndownloader.figshare.com/files/14876732)
- [stimulus_ArtificialImage.tsv](https://ndownloader.figshare.com/files/14876798)
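
If the numeric encoding suggested by the example above holds (the integer part is the WordNet synset number and the six fractional digits encode the image number), the IDs can be converted to and from stimulus names with a small helper like the one sketched below; the mapping TSV files above remain the authoritative reference.

```python
def stimulus_id_to_name(stimulus_id):
    """Convert a numeric stimulus ID (e.g. 1518878.005958) to the
    ImageNet-style stimulus name (e.g. 'n01518878_5958').

    Assumes the integer part is the (8-digit, zero-padded) WordNet synset
    number and the six fractional digits encode the image number.
    """
    synset, image = '{:.6f}'.format(float(stimulus_id)).split('.')
    return 'n{:0>8}_{}'.format(synset, int(image))


def stimulus_name_to_id(stimulus_name):
    """Inverse mapping: 'n01518878_5958' -> 1518878.005958."""
    synset, image = stimulus_name.lstrip('n').split('_')
    return float('{}.{:0>6}'.format(int(synset), image))


print(stimulus_id_to_name(1518878.005958))     # n01518878_5958
print(stimulus_name_to_id('n03626115_19498'))  # 3626115.019498
```
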
### Stimulus images

Due to license issues, the dataset does not include the stimulus images used in the natural image presentation experiments. A script for downloading the stimulus images from ImageNet is available at . The stimulus images used in the geometric shape and alphabetical letter presentation experiments are available at [figshare](https://doi.org/10.6084/m9.figshare.7033577).

- [Geometrical shape images](https://ndownloader.figshare.com/files/14876801) (ses-perceptionArtificialImage)
- [Alphabetical letter images](https://ndownloader.figshare.com/files/14876735) (ses-perceptionLetterImage)

## Contact

- Email:
- We also accept inquiries at [issues on GitHub/KamitaniLab/OpenData](https://github.com/KamitaniLab/OpenData/issues).