Emotion Category and Face Perception Task Optimized for Multivariate Pattern Analysis

uploaded by Isaac David on 2021-02-25
last modified on 2021-04-08
authored by Isaac David, Victor Olalde-Mathieu, Ana Y. Martínez, Lluviana Rodríguez-Vidal, Fernando A. Barrios

OpenNeuro Accession Number: ds003548
Files: 268, Size: 3.51 GB, Subjects: 16, Sessions: 1
Available Tasks: Emotional face perception; Resting state, awake, closed eyes
Available Modalities: T1w, T2w, bold

README

Sample

We used a cross-sectional group of 16 volunteers (8 female, 8 male), with an average age of 25 years, recruited at UNAM campus Juriquilla between October 2019 and June 2020. Participants were briefly interviewed to exclude anyone previously diagnosed with a neurological or psychiatric condition. With the exception of one male subject, all participants reported being right-handed. Prior to the study, subjects gave informed consent to participate after being told of its aims, risks and procedures, in accordance with the 1964 Declaration of Helsinki.

Image acquisition

Images were acquired on a 3-Tesla General Electric Discovery MR750 scanner at the MR Unit of UNAM's Institute of Neurobiology, during a single session per participant. The protocol included 5 echo-planar imaging (EPI) blood-oxygen-level-dependent (BOLD) sequences for fMRI, with 185 volumes each. A T1-weighted anatomical scan of the head was also acquired, in addition to a 10-minute resting-state fMRI run before the task. Sequence parameters are listed in the table below. Echoes were recorded with a head-mounted 32-channel coil.

Parameter         | EPI BOLD | T1w FSPGR         | T2w FSE
Slice orientation | Axial    | Axial or sagittal | Axial
Slices            | 35       | 176               | 35
Field of view     | 64×64    | 256×256           | 512×512
Voxel size        | (4 mm)³  | (1 mm)³           | 0.5 × 0.5 × 0.4 mm³
Flip angle        | π/2      | 3π/45             | 7π/9
TR (ms)           | 2000     | 8.18              | 6255
TE (ms)           | 30       | 3.19              | 101.4
TInv (ms)         | n/a      | 450               | n/a
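
To cross-check these parameters against the files in the dataset, one can read a functional run's NIfTI header and BIDS JSON sidecar. The snippet below is a minimal sketch, not part of the original analysis; the sidecar path is an example and the presence of the EchoTime field is an assumption (RepetitionTime is required by BIDS for BOLD data).

```python
# Sketch: cross-check acquisition parameters against the BIDS files.
# The path below is an example; adjust it to the subject/run of interest.
import json
from pathlib import Path

import nibabel as nib  # pip install nibabel

bold_nii = Path("sub-01/func/sub-01_task-emotionalfaces_run-1_bold.nii.gz")
bold_json = bold_nii.with_suffix("").with_suffix(".json")  # strip .nii.gz, add .json

img = nib.load(str(bold_nii))
print("matrix x volumes:", img.shape)              # table above: 64 x 64, 35 slices, 185 volumes
print("voxel size + TR:", img.header.get_zooms())  # table above: ~4 mm voxels, TR = 2 s

if bold_json.exists():
    meta = json.loads(bold_json.read_text())
    print("RepetitionTime (s):", meta.get("RepetitionTime"))  # table above: TR = 2000 ms
    print("EchoTime (s):", meta.get("EchoTime"))              # table above: TE = 30 ms
```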

Stimuli and task

Each of the 5 fMRI sequences was temporally coupled to a block-design psychological task implemented in PsychoPy 3.0.1. All 5 tasks were identical, except for the pseudo-random order in which their 30 s blocks were administered. Six block classes were used: happy faces, sad faces, angry faces, neutral faces, pseudo (scrambled) faces and low-stimulation (dim).

Pseudo-face and dim blocks were included to validate and diagnose the analysis pipeline through simpler contrasts (such as pseudo-faces vs. low-stimulation and faces vs. pseudo-faces).

Each block comprises 10 randomly presented images from its class, each shown for about 3 seconds and never repeated within the same block. Each block class occurs twice per sequence, yielding 12 blocks in total (360 s = 6 min). After the last block, participants waited 10 seconds before the sequence ended, in order to capture the hemodynamic response (HR) elicited by the final stimuli. Ten grayscale photographs of frontal human faces (male and female) per category served as stimuli, chosen from the classic "Pictures of Facial Affect" database (Ekman, 1976). During the low-stimulation ("dim") blocks, a small but visible fixation cross jumped to a randomly chosen quadrant every 3 seconds.
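
The block and timing structure just described can be summarized in a few lines of Python. The following is only an illustrative reconstruction of the design (it is not the authors' PsychoPy task code); it also checks that 12 blocks of 30 s plus the 10 s tail match the 185 volumes acquired at TR = 2 s.

```python
# Illustrative sketch of the block schedule described above (not the original task script).
import random

BLOCK_CLASSES = ["happy", "sad", "angry", "neutral", "pseudo", "dim"]
IMAGES_PER_BLOCK = 10
STIM_DURATION_S = 3.0   # ~3 s per image -> 30 s blocks
FINAL_WAIT_S = 10.0     # tail to capture the last hemodynamic response
TR_S = 2.0
VOLUMES_PER_RUN = 185

def make_run_schedule(seed=None):
    """Pseudo-random ordering of the 12 blocks (each class appears twice)."""
    rng = random.Random(seed)
    blocks = BLOCK_CLASSES * 2
    rng.shuffle(blocks)
    # Within each block, the 10 class images are shown once each, in random order.
    return [(cls, rng.sample(range(IMAGES_PER_BLOCK), IMAGES_PER_BLOCK)) for cls in blocks]

schedule = make_run_schedule(seed=1)
task_s = len(schedule) * IMAGES_PER_BLOCK * STIM_DURATION_S  # 12 blocks x 30 s = 360 s
total_s = task_s + FINAL_WAIT_S                              # 370 s
assert total_s == VOLUMES_PER_RUN * TR_S                     # 185 volumes x 2 s TR = 370 s
print(f"{len(schedule)} blocks, {total_s:.0f} s per run")
```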

Additionally, behavioral responses were recorded throughout the task in order to measure performance and thus assess the suitability of the physiological data for further analysis. At the beginning of every sequence, participants were instructed to indicate, as soon as each face was perceived, whether it belonged to a man or a woman. Responses were submitted by pressing one of two buttons, one held in each hand. Analogously, for scrambled and dim blocks (when no faces should have been perceived), the instruction was simply to report each image change, alternating between buttons.

Task files and responses available here.
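
Once the per-trial responses are in hand, performance can be scored with a few lines of pandas. Everything in the sketch below (the events file name and all column names) is a hypothetical placeholder; consult the task files linked above for the actual format.

```python
# Hypothetical scoring sketch; the file name and column names are placeholders only.
import pandas as pd

events = pd.read_csv("sub-01_task-emotionalfaces_run-1_events.tsv", sep="\t")

# Assume one row per stimulus, with the participant's categorization and the correct answer.
faces = events[events["trial_type"].isin(["happy", "sad", "angry", "neutral"])]
accuracy = (faces["response"] == faces["expected_response"]).mean()
print(f"gender-judgment accuracy: {accuracy:.1%}")
```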

Authors

  • Isaac David
  • Victor Olalde-Mathieu
  • Ana Y. Martínez
  • Lluviana Rodríguez-Vidal
  • Fernando A. Barrios

Dataset DOI

10.18112/openneuro.ds003548.v1.0.1

License

CC0

Acknowledgements

Special thanks to Erick H. Pasaye and Leopoldo González-Santos

How to Acknowledge

Funding

  • Institute of Neurobiology, UNAM
  • CONACyT (CVU 891935)

How To Cite

Isaac David and Victor Olalde-Mathieu and Ana Y. Martínez and Lluviana Rodríguez-Vidal and Fernando A. Barrios (2021). Emotion Category and Face Perception Task Optimized for Multivariate Pattern Analysis. OpenNeuro. [Dataset] doi: 10.18112/openneuro.ds003548.v1.0.1

Dataset File Tree

Git Hash: 3fdef3c 

BIDS Validation

We found 1 Warning in your dataset. You are not required to fix warnings, but doing so will make your dataset more BIDS compliant.

/sub-01/anat/sub-01_T1w.nii.gz

The most common set of dimensions is: 176,256,256 (voxels), This file has the dimensions: 512,512,296 (voxels). The most common resolution is: 1.00mm x 1.00mm x 1.00mm, This file has the resolution: 0.48mm x 0.48mm x 0.50mm.

/sub-02/anat/sub-02_T1w.nii.gz

The most common set of dimensions is: 176,256,256 (voxels), This file has the dimensions: 512,512,296 (voxels). The most common resolution is: 1.00mm x 1.00mm x 1.00mm, This file has the resolution: 0.48mm x 0.48mm x 0.50mm.

/sub-03/anat/sub-03_T1w.nii.gz

The most common set of dimensions is: 176,256,256 (voxels), This file has the dimensions: 512,512,296 (voxels). The most common resolution is: 1.00mm x 1.00mm x 1.00mm, This file has the resolution: 0.48mm x 0.48mm x 0.50mm.

/sub-04/anat/sub-04_T1w.nii.gz

The most common set of dimensions is: 176,256,256 (voxels), This file has the dimensions: 512,512,296 (voxels). The most common resolution is: 1.00mm x 1.00mm x 1.00mm, This file has the resolution: 0.48mm x 0.48mm x 0.50mm.

/sub-04/func/sub-04_task-emotionalfaces_run-1_bold.nii.gz

The most common resolution is: 4.00mm x 4.00mm x 4.00mm x 2.00s, This file has the resolution: 3.75mm x 3.75mm x 4.00mm x 2.00s.

/sub-04/func/sub-04_task-emotionalfaces_run-2_bold.nii.gz

The most common resolution is: 4.00mm x 4.00mm x 4.00mm x 2.00s, This file has the resolution: 3.75mm x 3.75mm x 4.00mm x 2.00s.

/sub-04/func/sub-04_task-emotionalfaces_run-3_bold.nii.gz

The most common resolution is: 4.00mm x 4.00mm x 4.00mm x 2.00s, This file has the resolution: 3.75mm x 3.75mm x 4.00mm x 2.00s.

/sub-04/func/sub-04_task-emotionalfaces_run-4_bold.nii.gz

The most common resolution is: 4.00mm x 4.00mm x 4.00mm x 2.00s, This file has the resolution: 3.75mm x 3.75mm x 4.00mm x 2.00s.

/sub-04/func/sub-04_task-emotionalfaces_run-5_bold.nii.gz

The most common resolution is: 4.00mm x 4.00mm x 4.00mm x 2.00s, This file has the resolution: 3.75mm x 3.75mm x 4.00mm x 2.00s.

/sub-05/anat/sub-05_T1w.nii.gz

The most common set of dimensions is: 176,256,256 (voxels), This file has the dimensions: 296,512,512 (voxels). The most common resolution is: 1.00mm x 1.00mm x 1.00mm, This file has the resolution: 0.50mm x 0.48mm x 0.48mm.

and 10 more files
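
The warning above concerns header consistency only: some T1w scans were exported at 0.48 × 0.48 × 0.50 mm and sub-04's BOLD runs at 3.75 × 3.75 × 4.00 mm, which differ from the most common geometry in the dataset. The sketch below shows one way to inspect these headers with nibabel; the file paths come from the warning list, and nothing here modifies the data.

```python
# Minimal sketch: inspect the geometry flagged by the BIDS warning above.
import nibabel as nib  # pip install nibabel

files = [
    "sub-01/anat/sub-01_T1w.nii.gz",
    "sub-04/func/sub-04_task-emotionalfaces_run-1_bold.nii.gz",
]

for path in files:
    img = nib.load(path)
    zooms = img.header.get_zooms()  # voxel size in mm (plus TR in s for 4D BOLD)
    print(f"{path}: dims={img.shape}, zooms={tuple(round(float(z), 2) for z in zooms)}")
```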
