Inner Speech Dataset

Author: Nicolas Nieto
Code available at https://github.com/N-Nieto/Inner_Speech_Dataset
Preprint available at https://www.biorxiv.org/content/10.1101/2021.04.19.440473v1

Abstract: Surface electroencephalography is a standard and noninvasive way to measure electrical brain activity. Recent advances in artificial intelligence have led to significant improvements in the automatic detection of brain patterns, enabling increasingly fast, reliable, and accessible Brain-Computer Interfaces. Different paradigms have been used to enable human-machine interaction, and the last few years have brought a marked increase in interest in interpreting and characterizing the "inner voice" phenomenon. This paradigm, called inner speech, raises the possibility of executing an order just by thinking about it, allowing a "natural" way of controlling external devices. Unfortunately, the lack of publicly available electroencephalography datasets restricts the development of new techniques for inner speech recognition. A ten-subject dataset, acquired under this and two other related paradigms with a 136-channel acquisition system, is presented. The main purpose of this work is to provide the scientific community with an open-access, multiclass electroencephalography database of inner speech commands that could be used to better understand the related brain mechanisms.

Conditions = Inner Speech, Pronounced Speech, Visualized Condition
Classes = "Arriba/Up", "Abajo/Down", "Derecha/Right", "Izquierda/Left"
Total Trials = 5640

Please contact us at this e-mail address if you have any doubts: nnieto@sinc.unl.edu.ar
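
As a quick orientation, the sketch below shows one way trials might be filtered by condition and class once the per-subject recordings have been exported to NumPy arrays. It is only an illustrative example: the file names, array shapes, event-column positions, and label codes used here are assumptions, not the dataset's actual layout, so please check the repository's processing scripts and the preprint for the real conventions.

```python
# Minimal sketch: select Inner Speech trials of one class from exported arrays.
# All file names, column indices, and label codes below are hypothetical.
import numpy as np

# Hypothetical files produced by a prior extraction step.
data = np.load("sub-01_eeg_data.npy")    # assumed shape: (n_trials, n_channels, n_samples)
events = np.load("sub-01_events.npy")    # assumed shape: (n_trials, n_columns)

# Illustrative label codes (consult the dataset documentation for the real ones).
INNER_SPEECH = 1   # condition: Inner Speech
CLASS_UP = 0       # class: "Arriba/Up"

CLASS_COL, COND_COL = 1, 2  # assumed column positions in the events array

# Keep only trials matching the chosen condition and class.
mask = (events[:, COND_COL] == INNER_SPEECH) & (events[:, CLASS_COL] == CLASS_UP)
inner_up_trials = data[mask]

print(f"Selected {inner_up_trials.shape[0]} Inner Speech 'Arriba/Up' trials")
```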