Chinese Character-Sound Integration MEG Dataset

uploaded by Weiyong Xu on 2021-04-10
last modified on 2021-04-28
authored by Weiyong Xu, Orsolya Kolozsvari, Robert Oostenveld, Paavo Leppänen, Jarmo Hämäläinen

OpenNeuro Accession Number: ds003608
Files: 205, Size: 36.82 GB, Subjects: 25, Sessions: 1
Available Tasks: audiovisual
Available Modalities: meg, coordsystem, channels, events

README

Dataset description: Magnetoencephalography (MEG) dataset on Chinese Character-Sound Integration.

This MEG dataset was prepared in the Brain Imaging Data Structure (MEG-BIDS, Niso et al. 2018) format using MNE-BIDS (Appelhoff et al. 2019).

In total, we measured MEG data from 19 native Chinese speakers and 18 Finnish speakers. Of these, seven were excluded from the Chinese group and five from the Finnish group, for the following reasons: four subjects due to poor visual acuity even with magnetically neutral glasses for vision correction, two due to excessive head movement or a low head position in the MEG helmet, one due to technical problems during the recording, and five due to strong noise interference and poor signal quality. The final dataset thus included 12 Chinese and 13 Finnish participants.

EXPERIMENT

The stimuli consisted of six Chinese characters (Simplified Chinese) and their corresponding flat-tone speech sounds (1. 步: bu; 2. 都: du; 3. 谷: gu; 4. 酷: ku; 5. 普: pu; 6. 兔: tu). The characters were all common characters familiar to the native Chinese speakers and had the following meanings: 1. steps/walking; 2. both/capital; 3. grain; 4. cool; 5. common; and 6. rabbit. The mean duration of the auditory stimuli was 447.2 ms (SD: 32.7 ms). The duration of the visual stimuli was 1,000 ms. Four stimulus types, auditory only (A), visual only (V), audiovisual congruent (AVC), and audiovisual incongruent (AVI), were presented in random order, with 108 trials per type. Each trial started with a fixation cross at the center of the screen for 1,000 ms.

Trigger codes: A = 1, V = 2, AVC = 3, AVI = 4
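These trigger codes label the four conditions when epoching the data; the actual trial order for each subject is stored in that subject's `*_events.tsv` file. As a minimal, self-contained sketch of the design described above (the function name and random seed are illustrative, not part of the dataset), the randomized trial structure can be reproduced and checked like this:

```python
import random
from collections import Counter

# Trigger codes as documented in this README
TRIGGER_CODES = {"A": 1, "V": 2, "AVC": 3, "AVI": 4}
TRIALS_PER_CONDITION = 108

def make_trial_sequence(seed=0):
    """Return a randomized sequence of trigger codes,
    108 trials per condition (432 trials in total)."""
    rng = random.Random(seed)
    sequence = [code
                for code in TRIGGER_CODES.values()
                for _ in range(TRIALS_PER_CONDITION)]
    rng.shuffle(sequence)
    return sequence

# Each of the four trigger codes occurs exactly 108 times.
counts = Counter(make_trial_sequence())
```

This only illustrates the condition counts and coding scheme; it does not reproduce the exact stimulus orders used in the recordings.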

MEG

Three anatomical landmarks were used to define the MEG head coordinate system: Nasion, LPA, and RPA.

The position of the HPI coils and the head shape (>100 points evenly distributed over the scalp) were digitized using the Polhemus Isotrak digital tracker system (Polhemus, Colchester, VT, United States).

MEG was recorded using the Elekta Neuromag TRIUX system (Elekta AB, Stockholm, Sweden) at the Centre for Interdisciplinary Brain Research, University of Jyväskylä.

Data were acquired from 306 MEG channels and 2 EOG channels with a sampling rate of 1000 Hz, an online band-pass filter of 0.1-330 Hz, and a 68° upright gantry position.

Maxfilter version 3.0.17 was used for movement compensation using temporal signal-space separation (tSSS).

Bad MEG channels were identified manually and were interpolated by Maxfilter.

MRI

Individual structural MRIs were not acquired. For source reconstruction, it is recommended to use a template, for example the "fsaverage" brain from FreeSurfer (https://surfer.nmr.mgh.harvard.edu/), and to scale the template head model and source space to the shape and size of each participant's head (as obtained from the head-shape points).
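The essential idea behind this scaling, a size ratio estimated from the digitized head-shape points, can be sketched as follows. This is illustrative only: `uniform_scale_factor` is a hypothetical helper, and MNE-Python's coregistration tools perform the real procedure properly, including per-axis scaling and iterative alignment to the template scalp surface.

```python
import math

def uniform_scale_factor(subject_pts, template_pts):
    """Estimate a single scale factor mapping a template scalp surface to a
    subject's digitized head-shape points: the ratio of RMS distances from
    each point cloud's centroid."""
    def rms_radius(pts):
        n = len(pts)
        cx = sum(p[0] for p in pts) / n
        cy = sum(p[1] for p in pts) / n
        cz = sum(p[2] for p in pts) / n
        return math.sqrt(sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2
                             + (p[2] - cz) ** 2 for p in pts) / n)
    return rms_radius(subject_pts) / rms_radius(template_pts)

# Synthetic check: a head uniformly 1.2x the template size yields a factor of ~1.2.
template = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
subject = [(1.2 * x, 1.2 * y, 1.2 * z) for x, y, z in template]
```

With the >100 digitized scalp points in this dataset, such a ratio gives a reasonable first estimate of overall head size relative to fsaverage.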

REFERENCES

Xu, W., Kolozsvári, O. B., Oostenveld, R., Leppänen, P. H. T., & Hämäläinen, J. A. (2019). Audiovisual processing of Chinese characters elicits suppression and congruency effects in MEG. Frontiers in Human Neuroscience, 13, 18. https://doi.org/10.3389/fnhum.2019.00018

Appelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896

Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5, 180110. https://doi.org/10.1038/sdata.2018.110

Authors

  • Weiyong Xu
  • Orsolya Kolozsvari
  • Robert Oostenveld
  • Paavo Leppänen
  • Jarmo Hämäläinen

Dataset DOI

10.18112/openneuro.ds003608.v1.0.1

License

CC0

Funding

  • ChildBrain (Marie Curie Innovative Training Networks, no. 641652)
  • Predictable (Marie Curie Innovative Training Networks, no. 641858)
  • Academy of Finland (MultiLeTe #292466)

How To Cite

Weiyong Xu and Orsolya Kolozsvari and Robert Oostenveld and Paavo Leppänen and Jarmo Hämäläinen (2021). Chinese Character-Sound Integration MEG Dataset. OpenNeuro. [Dataset] doi: 10.18112/openneuro.ds003608.v1.0.1

Dataset File Tree

Git Hash: c422f8f 
