Page 58 - Fister jr., Iztok, Andrej Brodnik, Matjaž Krnc and Iztok Fister (eds.). StuCoSReC. Proceedings of the 2019 6th Student Computer Science Research Conference. Koper: University of Primorska Press, 2019
icator for us to know which type of image the user was looking at when specific values from the channel were recorded. In the end, our recorded dataset had the following structure:

• ID,
• TIMESTAMP,
• RAW_CQ (raw value of the signal quality),
• values from the electrodes in mV for each location on the head (Af3, T7, P7, T8, Af4) (semanticscholar, 2019),
• EXPOSURE (0 = nature images, 1 = food images).

First, we created the desktop version of the application, which served as a bridge between the device connected to the PC and the presentation medium displaying the images to the user. The application's basic workflow was that we first established the connection between the device and the PC, then chose the type of image dataset and the interval of the presentation.

3. IMPLEMENTING THE EEG ANALYSIS FRAMEWORK

With all that set, we ran the recording and first displayed a blank screen to the user, to calibrate the data while the user had closed/open eyes and was watching the monitor.

Figure 1. The experimental setup.

After 30 seconds of each, the pictures (Figure 2) from the selected dataset started to switch at the selected interval. Our recordings lasted a maximum of 30 minutes, depending on the calmness and relaxation of the user. It was really hard to stay concentrated for so long, just looking at the pictures, without thinking or moving much.

We repeated the recordings on the same user for multiple hours for each type of dataset. Our recording room was isolated from outside noise, with constant lighting and a minimal number of objects in the room. We knew we had to eliminate as many distractions as possible to get reliable recordings from the consumer-grade device. All data was saved in CSV format, one file per recording, at an interval of 100 ms (the interval of each read of data made by the device).

ID  Timestamp  Raw_value  Af3       T7        P7        T8        Af4       Marker
0   115        500        4279.487  4250.256  4254.872  4225.641  4291.795  0
1   230        500        4252.821  4209.231  4249.231  4165.641  4246.667  0
2   345        1023       4267.692  4235.897  4192.308  4167.179  4249.231  0
3   460        500        4253.333  4229.744  4276.41   4169.744  4240.513  0
4   575        1023       4262.564  4203.077  4183.077  4165.641  4246.154  0

Table 1. Collected dataset of EEG signals from the Emotiv Insight device.

The collected data was a mixture of the recorded values from the different datasets. Our entire dataset was about 50,000 rows long, the result of almost 2 hours of recording the brainwaves. We split the list into two groups, to have data for the testing and the learning process in ML; the ratio was 80:20 in favor of learning. With all that ready, we continued our work by creating the artificial neural network model.
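The per-recording CSV layout and the 80:20 split described above can be sketched as follows. This is a minimal illustration using the Python standard library; the sample rows are taken from Table 1, and the function names (`load_rows`, `split_80_20`) are our own, not from the paper's application.

```python
import csv
import io
import random

# Sample rows in the recording format described above (in practice this
# would be one CSV file written by the recording application per session).
SAMPLE_CSV = """ID,Timestamp,Raw_value,Af3,T7,P7,T8,Af4,Marker
0,115,500,4279.487,4250.256,4254.872,4225.641,4291.795,0
1,230,500,4252.821,4209.231,4249.231,4165.641,4246.667,0
2,345,1023,4267.692,4235.897,4192.308,4167.179,4249.231,0
3,460,500,4253.333,4229.744,4276.41,4169.744,4240.513,0
4,575,1023,4262.564,4203.077,4183.077,4165.641,4246.154,0
"""

def load_rows(text):
    """Parse one recording's CSV into a list of dicts keyed by column name."""
    return list(csv.DictReader(io.StringIO(text)))

def split_80_20(rows, seed=42):
    """Shuffle the rows and split them 80:20 into learning and testing sets."""
    rows = rows[:]
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * 0.8)
    return rows[:cut], rows[cut:]

rows = load_rows(SAMPLE_CSV)
train, test = split_80_20(rows)
```

On the full ~50,000-row dataset the same split yields roughly 40,000 learning rows and 10,000 testing rows.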

3.1 Analysis with Neural Network

The Python framework Keras helped us create the neural model. The type of model we created is called a Sequential model, which allows you to stack multiple layers one after another. It is limited in that it does not allow models that share layers or have multiple inputs or outputs.

In the first layer it is required to define the dimension of the input data, because at the beginning the model cannot know what data will arrive at its input. The input data in our case were the values from the electrodes (Af3, T7, P7, T8, Af4).
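The stacking behaviour of such a three-layer model can be sketched with a plain NumPy forward pass. The paper does not state the layer widths, so the sizes below (16 and 8 hidden units) are illustrative assumptions; the structure mirrors the description in the text: a 5-value electrode input, normally initialized weights, ReLU in the first two layers, and a sigmoid output.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(n_in, n_out):
    # Weights drawn from a normal distribution, as described in the text.
    return rng.normal(0.0, 0.1, size=(n_in, n_out)), np.zeros(n_out)

relu = lambda x: np.maximum(0.0, x)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Three stacked layers; input dimension 5 (Af3, T7, P7, T8, Af4).
# Hidden widths 16 and 8 are assumptions for illustration.
layers = [dense(5, 16), dense(16, 8), dense(8, 1)]

def forward(x):
    """Sequential forward pass: ReLU, ReLU, then a sigmoid output in (0, 1)."""
    (w1, b1), (w2, b2), (w3, b3) = layers
    h = relu(x @ w1 + b1)
    h = relu(h @ w2 + b2)
    return sigmoid(h @ w3 + b3)

# Electrode values scaled to roughly unit range (raw mV magnitudes
# would saturate the sigmoid otherwise).
x = np.array([[4279.487, 4250.256, 4254.872, 4225.641, 4291.795]]) / 5000.0
y = forward(x)
```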

Figure 2. The picture sets for the experiment.

Figure 3. Construction of the neural network model with three layers.

The weights were initialized by drawing from a normal distribution. The activation parameter was defined with the well-established and relatively simple ReLU function in the input and hidden layers of the model. For the output layer, however, we had to define a sigmoid function, which allows us to get a result between 0 and 1.
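The two activation functions named above behave as follows; this small standalone sketch shows why the sigmoid suits a binary output (it maps any real value into the open interval (0, 1)) while ReLU simply clamps negatives to zero.

```python
import math

def sigmoid(z):
    # Squashes any real value into the open interval (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def relu(z):
    # Passes positive values through, clamps negatives to zero.
    return max(0.0, z)

outputs = [round(sigmoid(z), 4) for z in (-5.0, 0.0, 5.0)]
# Large negative inputs approach 0, sigmoid(0) is exactly 0.5,
# and large positive inputs approach 1.
```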

We also added so-called dropout layers between the individual layers, which ensure that randomly selected neurons are omitted during the learning phase. This means that the outputs of the forward-pass activation function are removed, as well as any weight
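The dropout mechanism described above can be sketched as a random mask applied to the activations during training. This is a minimal illustration of inverted dropout (the variant used by Keras), not the paper's own code; the dropout rate of 0.5 is an assumption for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def dropout(activations, rate=0.5, training=True):
    """Randomly zero a fraction `rate` of activations during the learning
    phase; survivors are scaled by 1/(1-rate) so the expected magnitude
    of the layer's output is unchanged (inverted dropout)."""
    if not training:
        return activations  # at inference time the layer is a no-op
    keep = rng.random(activations.shape) >= rate
    return activations * keep / (1.0 - rate)

a = np.ones(1000)
dropped = dropout(a, rate=0.5)  # roughly half the activations become 0
```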
