Rendering Seizures via EEG-to-Audio in Python: A How-To Guide

When I minimise the phase of the combined EEG signals, I get a convulsant effect.

Doing so accounts for the inverse solution: the actual location in the brain from which a frequency emanates is the point to which the channels' various frequencies and amplitudes correspond when they add up in phase.
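One way to make "minimising the phase of the combined signals" concrete is to measure how phase-aligned the channels are at each instant, e.g. with the across-channel phase-locking value. A minimal sketch, assuming a hypothetical array `eeg` of shape (n_channels, n_samples) such as the output of `raw.get_data()`:

import numpy as np
from scipy.signal import hilbert

# Instantaneous phase of each channel via the analytic signal
analytic = hilbert(eeg, axis=1)
phases = np.angle(analytic)

# Across-channel phase-locking value per sample: 1.0 means all channels are in phase
plv = np.abs(np.mean(np.exp(1j * phases), axis=0))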

The pipeline, in outline (a sketch of the centroid/bandwidth differencing follows this list):

  • Compute the spectral centroid and spectral bandwidth of a seizure EEG recording.
  • Compute the same features for the resting state of the epileptic subject.
  • Compute the same features for the resting state of a non-epileptic subject.
  • Subtract and plot the differences.
  • Upscale.
  • FM-synthesise two tracks, where the difference in frequency and phase between L and R comes from modulating the carrier with the centroid difference; subtract and plot.
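As a concrete starting point for the centroid/bandwidth steps above, here is a minimal sketch. It assumes librosa is installed, the 128 Hz sampling rate used elsewhere in this notebook, and two hypothetical 1-D single-channel arrays, seizure_sig and resting_sig:

import numpy as np
import librosa
import matplotlib.pyplot as plt

sr = 128  # sampling rate of the Guinea-Bissau recordings used below
cent_seiz = librosa.feature.spectral_centroid(y=seizure_sig.astype(float), sr=sr)[0]
cent_rest = librosa.feature.spectral_centroid(y=resting_sig.astype(float), sr=sr)[0]
bw_seiz = librosa.feature.spectral_bandwidth(y=seizure_sig.astype(float), sr=sr)[0]
bw_rest = librosa.feature.spectral_bandwidth(y=resting_sig.astype(float), sr=sr)[0]

# Subtract frame-by-frame and plot the differences
n = min(len(cent_seiz), len(cent_rest))
plt.plot(cent_seiz[:n] - cent_rest[:n], label='centroid difference')
plt.plot(bw_seiz[:n] - bw_rest[:n], label='bandwidth difference')
plt.xlabel('Frame')
plt.ylabel('Hz')
plt.legend()
plt.show()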

Independent Component Analysis for EEG


jimi working Guinea-Bissau-Copy 4 of EEG_visualize_ICA binaural beat.ipynb



In this notebook, we will look into some specific aspects of EEG artifact removal via Independent Component Analysis. We look at specific noise patterns that typically occur during EEG recording and see whether we can automatically cluster these components for easier detection.



Imports


First we have to install MNE-Python on Colab’s runtime…


[1]

7s

!pip install mne -q

     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.7/7.7 MB 88.4 MB/s eta 0:00:00

… and then import all the libraries that we’ll need for this session.


[2]

2s

import os

import numpy as np

import mne

from mne.time_frequency import psd_array_welch

from mne.preprocessing import (ICA, create_eog_epochs, create_ecg_epochs,

                               corrmap)

from sklearn.cluster import AgglomerativeClustering, KMeans, DBSCAN, OPTICS, SpectralClustering

from sklearn.preprocessing import StandardScaler

from sklearn.decomposition import PCA

import matplotlib.pyplot as plt


Investigating different types of noise

Along with some fellow students (who did all the hard work – managing the experiment and the recording), we gathered data that allows a more systematic look at EEG noise: we set a simple visual task whose recordings were deliberately contaminated with specific types of noise:

In all trials, participants were asked to look at an image for 15 seconds. But besides a few control trials, they were also instructed to induce the following noise while looking at the images:

  • Moving their head
  • Moving their jaw
  • Blinking
  • Swallowing
  • Clenching their teeth

If we now look only at recorded Epochs with one of these specific noise types, we can investigate and select typical Independent Components that nicely represent the corresponding type of noise.
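Before walking through the cells below, here is the general shape of that idea as a minimal sketch. It assumes a hypothetical epochs_list (one mne.Epochs object per noise condition) and clusters the ICA spatial maps with KMeans, matching the imports loaded above:

import numpy as np
from mne.preprocessing import ICA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

maps = []
for ep in epochs_list:  # hypothetical: one Epochs object per noise condition
    ica = ICA(n_components=14, random_state=42)
    ica.fit(ep)
    maps.append(ica.get_components().T)  # spatial maps, shape (n_components, n_channels)

# Standardize the stacked maps and cluster them; ideally one cluster per noise type
X = StandardScaler().fit_transform(np.vstack(maps))
labels = KMeans(n_clusters=5, n_init=10).fit_predict(X)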


As a first step, we will use pydrive to load the recorded data from Google Drive.


[3]

0s

# to connect with Google drive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials

Copy the files from Google Drive into Google Colab’s runtime.

Don’t worry:

None of this will be saved to your personal Google Drive or personal computer. All of it exists only temporarily in Google Colab’s hosted runtime.


[4]

5m

from google.colab import drive
import pandas as pd

# Mount Google Drive
drive.mount('/content/drive')
Mounted at /content/drive

[5]

0s

# authenticate your google account to get access to google cloud
#auth.authenticate_user()
##gauth = GoogleAuth()
#gauth.credentials = GoogleCredentials.get_application_default()
#drive = GoogleDrive(gauth)

# identify all the files in our dataset folder
#folder_id = "1w34G5mhUDebZ_Wi7wKAWo78mLuDumCAB"
#file_list = drive.ListFile({'q': "'{}' in parents and trashed=false".format(folder_id)}).GetList()

# load the data into Google Colab
#for i, drive_file in enumerate(sorted(file_list, key = lambda x: x['title']), start=1):
#  if i == 1:  # only use the 1st recording to save time
##    file_name = 'sub_{}.fif'.format(i)
 #   print('Downloading {} from GDrive ({}/{})'.format(file_name, i, len(file_list)))
 #   drive_file.GetContentFile(file_name)

    # Note: Sub 2 and 3 have something super weird going on. All components are
    # largely overshadowed by something from the back left shoulder.
    # Broken electrode/ground?

Guinea-Bissau


[6]

0s

#!wget https://zenodo.org/record/1252141/files/EEGs_Guinea-Bissau.zip -P /content/drive/MyDrive/Colab-Notebooks/EEG-data


[7]

0s

#!wget https://zenodo.org/record/1252141/files/metadata_guineabissau.csv 

Extract


[8]

0s

#import os
#import zipfile

# Specify the path to the downloaded zip file
#zip_file = "/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/EEGs_Guinea-Bissau.zip"

# Specify the directory to extract the files
#extract_dir = "/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau"

# Extract the contents of the zip file
#with zipfile.ZipFile(zip_file, 'r') as zip_ref:
 #   zip_ref.extractall(extract_dir)

# Get the path to the extracted folder
#extracted_folder = os.path.join(extract_dir, "EEGs_Guinea-Bissau")

# Continue with the rest of your analysis using the extracted files

[9]

0s

import pandas as pd
import numpy as np
from matplotlib import pyplot as plt

Folder Location


[10]

0s

#cd /content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau

[11]

3s

cd /content/drive/MyDrive/Colab-Notebooks/EEG-data
/content/drive/MyDrive/Colab-Notebooks/EEG-data

[12]

0s

meta_df=pd.read_csv('/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/metadata_guineabissau.csv')
meta_df.head()


Now I need to separate Epilepsy vs Control subjects.

[13]

0s

EP_sub=meta_df['subject.id'][meta_df['Group']=='Epilepsy']
CT_sub=meta_df['subject.id'][meta_df['Group']=='Control']

[14]

51s

#read csv files
Epilepsy=[pd.read_csv('EEGs_Guinea-Bissau/signal-{}.csv.gz'.format(i), compression='gzip') for i in  EP_sub]
Control=[pd.read_csv('EEGs_Guinea-Bissau/signal-{}.csv.gz'.format(i), compression='gzip') for i in  CT_sub]

[15]

0s

Epilepsy[0].head()



[16]

0s

meta_df=pd.read_csv('/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/metadata_guineabissau.csv')
meta_df.head()



[17]

0s

#remove non eeg channels
Epilepsy=[i.iloc[:,1:15] for i in  Epilepsy]
Control=[i.iloc[:,1:15] for i in  Control]

[18]

3s

import pandas as pd
import mne

EP_sub = [1, 2, 3]  # Replace with your desired subject IDs
columns_to_drop = ['Unnamed: 0', 'GYROY', 'GYROX', 'COUNTER', 'INTERPOLATED', 'RAW_CQ', 'CQ_CMS', 'CQ_F7', 'CQ_T7', 'CQ_O2', 'CQ_FC6', 'CQ_AF4', 'CQ_F3', 'CQ_P7', 'CQ_P8', 'CQ_F4', 'CQ_AF3', 'CQ_FC5', 'CQ_O1', 'CQ_T8', 'CQ_F8', 'CQ_DRL']

# Load and plot raw EEG signal for each subject
for subject_id in EP_sub:
    filename = 'EEGs_Guinea-Bissau/signal-{}.csv.gz'.format(subject_id)
    data = pd.read_csv(filename, compression='gzip')
    
    # Drop specified columns
    data = data.drop(columns_to_drop, axis=1)
    
    # Get the column names (channel names)
    channel_names = data.columns.tolist()
    
    # Convert DataFrame to MNE Raw object
    info = mne.create_info(ch_names=channel_names, sfreq=250, ch_types='eeg')
    raw = mne.io.RawArray(data.T, info)
    
    # Plot raw EEG signal
    raw.plot()



Convert to MNE object


[19]

0s

import mne
def convertDF2MNE(sub):
    info = mne.create_info(list(sub.columns), ch_types=['eeg'] * len(sub.columns), sfreq=128)
    info.set_montage('standard_1020')
    data=mne.io.RawArray(sub.T, info)
    data.set_eeg_reference()
    data.filter(l_freq=0.1,h_freq=45)
    epochs=mne.make_fixed_length_epochs(data,duration=5,overlap=1)
    epochs=epochs.drop_bad()
    
    return epochs

Convert to MNE object (change for click sound)


[20]

0s

import mne
def convertDF2MNE(sub):
    info = mne.create_info(list(sub.columns), ch_types=['eeg'] * len(sub.columns), sfreq=128)
    info.set_montage('standard_1020')
    data=mne.io.RawArray(sub.T, info)
    data.set_eeg_reference()
    data.filter(l_freq=0.1,h_freq=45)
    epochs=mne.make_fixed_length_epochs(data,duration=5,overlap=1)
    epochs=epochs.drop_bad()
    
    return epochs

[21]

7s

%%capture
#Convert each dataframe to mne object
Epilepsy=[convertDF2MNE(i) for i in  Epilepsy]
Control=[convertDF2MNE(i) for i in  Control]

[22]

1s

%%capture
#concatenate the epochs
Epilepsy_epochs=mne.concatenate_epochs(Epilepsy)
Control_epochs=mne.concatenate_epochs(Control)

[23]

0s

print(Epilepsy_epochs)
<EpochsArray |  3995 events (all good), 0 – 4.99219 s, baseline off, ~273.1 MB, data loaded,
 '1': 3995>

[24]

17s

import matplotlib.pyplot as plt

# Plot the image visualization for all channels and epochs
Epilepsy_epochs.plot_image(combine='std')

# Show the plot
plt.show()



Create label groups


[25]

0s

Epilepsy_group=np.concatenate([[i]*len(Epilepsy[i]) for i in range(len(Epilepsy))]) # a list of lists where each sublist holds the subject number

Control_group=np.concatenate([[i]*len(Control[i]) for i in range(len(Control))]) # a list of lists where each sublist holds the subject number

Epilepsy_label=np.concatenate([[0]*len(Epilepsy[i]) for i in range(len(Epilepsy))])

Control_label=np.concatenate([[1]*len(Control[i]) for i in range(len(Control))])


[26]

0s

Epilepsy_group.shape,Control_group.shape,Epilepsy_label.shape,Control_label.shape
((3995,), (3461,), (3995,), (3461,))

[27]

0s

#combine data
data=mne.concatenate_epochs([Epilepsy_epochs,Control_epochs])
group=np.concatenate((Epilepsy_group,Control_group))
label=np.concatenate((Epilepsy_label,Control_label))
print(len(data),len(group),len(label))
Not setting metadata
7456 matching events found
No baseline correction applied
7456 7456 7456

Get the number of channels from the shape of the data array


[28]

0s

import pandas as pd

# Load the CSV file using Pandas
csv_file = '/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/EEGs_Guinea-Bissau/signal-1.csv.gz'
data_frame = pd.read_csv(csv_file)

# Extract the data from the DataFrame
data = data_frame.values

# Get the number of channels from the shape of the data array
num_channels = data.shape[1]

print("Number of channels:", num_channels)
Number of channels: 36

Get the column names (channel names)


[29]

0s

import pandas as pd

# Load the CSV file using Pandas
csv_file = '/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/EEGs_Guinea-Bissau/signal-1.csv.gz'
data_frame = pd.read_csv(csv_file)

# Get the column names (channel names)
channel_names = data_frame.columns.tolist()

# Print the channel names
print(channel_names)
['Unnamed: 0', 'AF3', 'AF4', 'F3', 'F4', 'F7', 'F8', 'FC5', 'FC6', 'O1', 'O2', 'P7', 'P8', 'T7', 'T8', 'COUNTER', 'INTERPOLATED', 'GYROX', 'GYROY', 'RAW_CQ', 'CQ_CMS', 'CQ_F7', 'CQ_T7', 'CQ_O2', 'CQ_FC6', 'CQ_AF4', 'CQ_F3', 'CQ_P7', 'CQ_P8', 'CQ_F4', 'CQ_AF3', 'CQ_FC5', 'CQ_O1', 'CQ_T8', 'CQ_F8', 'CQ_DRL']

Drop the non-EEG channels: the ‘Unnamed: 0’ index column plus the gyro and contact-quality columns in columns_to_drop


[30]

2s

import pandas as pd
import matplotlib.pyplot as plt

# Load the CSV file using Pandas
csv_file = '/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/EEGs_Guinea-Bissau/signal-1.csv.gz'
data_frame = pd.read_csv(csv_file)

# Drop the non-EEG columns (the 'Unnamed: 0' index plus gyro and contact-quality columns)
columns_to_drop = ['Unnamed: 0', 'GYROY', 'GYROX', 'COUNTER', 'INTERPOLATED', 'RAW_CQ', 'CQ_CMS', 'CQ_F7', 'CQ_T7', 'CQ_O2', 'CQ_FC6', 'CQ_AF4', 'CQ_F3', 'CQ_P7', 'CQ_P8', 'CQ_F4', 'CQ_AF3', 'CQ_FC5', 'CQ_O1', 'CQ_T8', 'CQ_F8', 'CQ_DRL']
data_frame = data_frame.drop(columns_to_drop, axis=1)

# Get the column names (channel names)
channel_names = data_frame.columns.tolist()

# Plot the EEG data
data_frame.plot(figsize=(12, 6))
plt.xlabel('Time')
plt.ylabel('Amplitude')
plt.legend(channel_names)
plt.show()



[31]

1s

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

# Load the CSV file using Pandas
csv_file = '/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/EEGs_Guinea-Bissau/signal-1.csv.gz'
data_frame = pd.read_csv(csv_file)

# Drop the index and gyro columns
columns_to_drop = ['Unnamed: 0', 'GYROY', 'GYROX']
data_frame = data_frame.drop(columns_to_drop, axis=1)

# Get the column names (channel names)
channel_names = data_frame.columns.tolist()

# Transpose the DataFrame to make each column represent a channel
transposed_df = data_frame.transpose()

# Get the column names (channel names)
channel_names = transposed_df.columns.tolist()

# Extract the values from the DataFrame
data = transposed_df.values

# Create a meshgrid: after the transpose, rows are channels and columns are time samples
channels = np.arange(data.shape[0])
samples = np.arange(data.shape[1])
sample_mesh, channel_mesh = np.meshgrid(samples, channels)

# Create a figure and axes for the 3D plot
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')

# Plot the surface
surf = ax.plot_surface(channel_mesh, sample_mesh, data, cmap='viridis')

# Set labels for the axes
ax.set_xlabel('Channel')
ax.set_ylabel('Time (samples)')
ax.set_zlabel('Amplitude')

plt.show()



[32]

2s

import mne

# Load the raw data
raw = mne.io.read_raw("/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/Guinea-Bissaufif/signal-3_raw.fif.gz")


# Drop the 'Unnamed: 0' column
data_frame = pd.read_csv(csv_file)
data_frame = data_frame.drop('Unnamed: 0', axis=1)

# Plot the EEG data
raw.plot(duration=100, scalings='auto')



Render as audio (appears to work)


[77]

0s

import mne
import numpy as np
import soundfile as sf

# Load the raw data with preload=True
raw = mne.io.read_raw("/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/Guinea-Bissaufif/signal-3_raw.fif.gz", preload=True)

# Apply high-pass filter
highpass_freq = 1.0  # Adjust as needed
raw.filter(l_freq=highpass_freq, h_freq=None)

# Get the audio signal from the raw data
audio_signal = raw.get_data().flatten()

# Set the parameters for FM synthesis
carrier_frequency = 220  # Adjust the carrier frequency as needed
modulation_index = 4  # Adjust the modulation index as needed

# Generate the FM audio signal
time = np.arange(len(audio_signal)) / raw.info['sfreq']
modulation_signal = modulation_index * np.sin(2 * np.pi * time)
audio_signal_fm = np.sin(2 * np.pi * carrier_frequency * time + modulation_signal)

# Normalize the audio signal
normalized_audio = audio_signal_fm / np.max(np.abs(audio_signal_fm))

# Rescale the audio signal to match the duration of the EEG recording
duration = raw.times[-1]  # Duration of the EEG recording in seconds
normalized_audio = normalized_audio[:int(duration * raw.info['sfreq'])]

# Save the audio signal to a WAV file
sf.write("output_audio.wav", normalized_audio, int(raw.info['sfreq']))

# Play the audio
import IPython.display as ipd
ipd.Audio("output_audio.wav")

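Note that in the cell above the EEG only determines the length of the audio: the modulator is a fixed sine (np.sin(2 * np.pi * time), i.e. 1 Hz), so the EEG itself is never heard. A minimal sketch of letting the EEG drive the modulation instead, assuming `raw` is loaded and filtered as above:

import numpy as np

eeg = raw.get_data()[0]              # first EEG channel
eeg = eeg / np.max(np.abs(eeg))      # normalize to [-1, 1]
t = np.arange(len(eeg)) / raw.info['sfreq']

# Phase-modulate a 220 Hz carrier directly with the EEG
audio_fm = np.sin(2 * np.pi * 220 * t + 4 * eeg)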


[78]

3s

import mne
import numpy as np
import soundfile as sf
import pandas as pd
import matplotlib.pyplot as plt

# Drop the non-EEG columns from the DataFrame
columns_to_drop = ['Unnamed: 0', 'GYROY', 'GYROX', 'COUNTER', 'INTERPOLATED', 'RAW_CQ', 'CQ_CMS', 'CQ_F7', 'CQ_T7', 'CQ_O2', 'CQ_FC6', 'CQ_AF4', 'CQ_F3', 'CQ_P7', 'CQ_P8', 'CQ_F4', 'CQ_AF3', 'CQ_FC5', 'CQ_O1', 'CQ_T8', 'CQ_F8', 'CQ_DRL']
data_frame = pd.read_csv(csv_file)
data_frame = data_frame.drop(columns_to_drop, axis=1)

# Get the column names (channel names)
channel_names = data_frame.columns.tolist()

# Plot the EEG data
data_frame.plot(figsize=(12, 6))
plt.xlabel('Time')
plt.ylabel('Amplitude')
plt.legend(channel_names)
plt.show()

# Load the raw data with preload=True
raw = mne.io.read_raw("/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/Guinea-Bissaufif/signal-3_raw.fif.gz", preload=True)

# Apply high-pass filter
highpass_freq = 1.0  # Adjust as needed
raw.filter(l_freq=highpass_freq, h_freq=None)

# Get the audio signal from the raw data
audio_signal = raw.get_data().flatten()

# Set the parameters for FM synthesis
carrier_frequency = 220  # Adjust the carrier frequency as needed
modulation_index = 4  # Adjust the modulation index as needed

# Generate the FM audio signal
time = np.arange(len(audio_signal)) / raw.info['sfreq']
modulation_signal = modulation_index * np.sin(2 * np.pi * time)
audio_signal_fm = np.sin(2 * np.pi * carrier_frequency * time + modulation_signal)

# Normalize the audio signal
normalized_audio = audio_signal_fm / np.max(np.abs(audio_signal_fm))

# Rescale the audio signal to match the duration of the EEG recording
duration = raw.times[-1]  # Duration of the EEG recording in seconds
normalized_audio = normalized_audio[:int(duration * raw.info['sfreq'])]

# Save the audio signal to a WAV file
sf.write("output_audio.wav", normalized_audio, int(raw.info['sfreq']))

# Play the audio
import IPython.display as ipd
ipd.Audio("output_audio.wav")



[79]

1s

import mne
import numpy as np
import soundfile as sf
import pandas as pd
import matplotlib.pyplot as plt

# Drop the unnecessary columns from the DataFrame
columns_to_drop = ['Unnamed: 0', 'GYROY', 'GYROX', 'COUNTER', 'INTERPOLATED', 'RAW_CQ', 'CQ_CMS', 'CQ_F7', 'CQ_T7', 'CQ_O2', 'CQ_FC6', 'CQ_AF4', 'CQ_F3', 'CQ_P7', 'CQ_P8', 'CQ_F4', 'CQ_AF3', 'CQ_FC5', 'CQ_O1', 'CQ_T8', 'CQ_F8', 'CQ_DRL']
data_frame = pd.read_csv(csv_file)
data_frame = data_frame.drop(columns_to_drop, axis=1)

# Load the raw data with preload=True
raw = mne.io.read_raw("/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/Guinea-Bissaufif/signal-3_raw.fif.gz", preload=True)

# Apply high-pass filter
highpass_freq = 1.0  # Adjust as needed
raw.filter(l_freq=highpass_freq, h_freq=None)

# Get the EEG data
eeg_data = raw.get_data()

# Compute the spectrogram
sfreq = raw.info['sfreq']
n_fft = int(sfreq * 2)  # Adjust the window length as needed
n_overlap = int(n_fft / 2)  # Adjust the overlap as needed
spectrogram, frequencies, times, _ = plt.specgram(eeg_data[0], NFFT=n_fft, Fs=sfreq, noverlap=n_overlap)  # specgram returns (spectrum, freqs, t, image)

# Plot the spectrogram
plt.imshow(10 * np.log10(spectrogram), aspect='auto', cmap='inferno', origin='lower')
plt.xlabel('Time')
plt.ylabel('Frequency')
plt.title('EEG Spectrogram')
plt.colorbar(label='Power Spectral Density (dB/Hz)')
plt.show()

# Save the spectrogram as an image file
plt.imsave('spectrogram.png', 10 * np.log10(spectrogram), cmap='inferno')

# Convert the EEG channels to an audio signal by summing them (not an actual spectrogram inversion)
audio_signal = np.sum(eeg_data, axis=0)

# Normalize the audio signal
normalized_audio = audio_signal / np.max(np.abs(audio_signal))

# Save the audio signal to a WAV file
sf.write("output_audio.wav", normalized_audio, int(raw.info['sfreq']))

# Play the audio
import IPython.display as ipd
ipd.Audio("output_audio.wav")

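The "convert the spectrogram to an audio signal" step above actually just sums the raw channels; the spectrogram itself is only plotted. If you wanted audio reconstructed from the spectrogram, Griffin-Lim is one option. A rough sketch, assuming librosa is installed and treating the plt.specgram output as a power spectrogram:

import numpy as np
import librosa

magnitude = np.sqrt(spectrogram)   # power -> magnitude
hop = n_fft - n_overlap            # hop length implied by the overlap above
reconstructed = librosa.griffinlim(magnitude, hop_length=hop, win_length=n_fft)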


THIS WORKS: RENDERS THE AUDIO PERFECTLY


[83]

1s

import mne
import numpy as np
import soundfile as sf
import pandas as pd
import matplotlib.pyplot as plt

# Load the raw data with preload=True
raw = mne.io.read_raw("/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/Guinea-Bissaufif/signal-3_raw.fif.gz", preload=True)

# Apply high-pass filter
highpass_freq = 6.5  # Adjust as needed
raw.filter(l_freq=highpass_freq, h_freq=None)

# Get the EEG data
eeg_data = raw.get_data()

# Compute the spectrogram
sfreq = raw.info['sfreq']
n_fft = int(sfreq * 2)  # Adjust the window length as needed
n_overlap = int(n_fft / 2)  # Adjust the overlap as needed
spectrogram, frequencies, times, _ = plt.specgram(eeg_data[0], NFFT=n_fft, Fs=sfreq, noverlap=n_overlap)

# Plot the spectrogram
plt.imshow(10 * np.log10(spectrogram), aspect='auto', cmap='inferno', origin='lower')
plt.xlabel('Time')
plt.ylabel('Frequency')
plt.title('EEG Spectrogram')
plt.colorbar(label='Power Spectral Density (dB/Hz)')
plt.show()

# Save the spectrogram as an image file
plt.imsave('spectrogram.png', 10 * np.log10(spectrogram), cmap='inferno')

# Convert the EEG channels to an audio signal by summing them
audio_signal = np.sum(eeg_data, axis=0)

# Normalize the audio signal
normalized_audio = audio_signal / np.max(np.abs(audio_signal))

# Save the audio signal to a WAV file
sf.write("output_audio.wav", normalized_audio, int(raw.info['sfreq']))

# Drop the non-EEG columns from the DataFrame (csv_file is defined in an earlier cell)
columns_to_drop = ['Unnamed: 0', 'GYROY', 'GYROX', 'COUNTER', 'INTERPOLATED', 'RAW_CQ', 'CQ_CMS', 'CQ_F7', 'CQ_T7', 'CQ_O2', 'CQ_FC6', 'CQ_AF4', 'CQ_F3', 'CQ_P7', 'CQ_P8', 'CQ_F4', 'CQ_AF3', 'CQ_FC5', 'CQ_O1', 'CQ_T8', 'CQ_F8', 'CQ_DRL']
data_frame = pd.read_csv(csv_file)
data_frame = data_frame.drop(columns_to_drop, axis=1)

# Play the audio
import IPython.display as ipd
ipd.Audio("output_audio.wav")



[33]

0s

#import mne
#import pandas as pd

# Define the path to the locs file
#locs_file = '/content/Standard-10-10-Cap33.locs'

# Read the locs file into a DataFrame
#locs_data = pd.read_csv(locs_file, sep='\t', skiprows=1, usecols=[1, 2, 3], names=['x', 'y', 'label'])

# Create a dictionary of electrode positions
#electrode_positions = {ch: [x, y] for ch, x, y in zip(locs_data['label'], locs_data['x'], locs_data['y'])}

# Print the electrode positions
#for ch, pos in electrode_positions.items():
#    print(f'{ch}: {pos}')

[34]

10m

import numpy as np
import matplotlib.pyplot as plt

import mne
from mne.minimum_norm import apply_inverse_epochs, read_inverse_operator
from mne.datasets import sample
from pathlib import Path

print(__doc__)

# Load the CSV file using Pandas
csv_file = '/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/EEGs_Guinea-Bissau/signal-1.csv.gz'
data_frame = pd.read_csv(csv_file)

# Drop the 'Unnamed: 0' column and other unnecessary columns
columns_to_drop = ['Unnamed: 0', 'GYROY', 'GYROX', 'COUNTER', 'INTERPOLATED', 'RAW_CQ', 'CQ_CMS', 'CQ_F7', 'CQ_T7', 'CQ_O2', 'CQ_FC6', 'CQ_AF4', 'CQ_F3', 'CQ_P7', 'CQ_P8', 'CQ_F4', 'CQ_AF3', 'CQ_FC5', 'CQ_O1', 'CQ_T8', 'CQ_F8', 'CQ_DRL']
data_frame = data_frame.drop(columns_to_drop, axis=1)

# Convert the data to numpy array
data = data_frame.values.T  # Transpose the data to have shape (n_channels, n_samples)

# Create an MNE Info object
info = mne.create_info(ch_names=data_frame.columns.tolist(), sfreq=250, ch_types='eeg')

# Create an MNE Raw object
raw = mne.io.RawArray(data, info)

# Set up the paths and file names
data_path = sample.data_path()
meg_path = data_path / "MEG" / "sample"
fname_inv = meg_path / "sample_audvis-meg-oct-6-meg-inv.fif"
label_name = "Aud-lh"
fname_label = meg_path / "labels" / f"{label_name}.label"

event_id, tmin, tmax = 1, -0.2, 0.5

# Using the same inverse operator when inspecting single trials Vs. evoked
snr = 3.0  # Standard assumption for average data but using it for single trial
lambda2 = 1.0 / snr**2

method = "dSPM"  # use dSPM method (could also be MNE or sLORETA)

# Load inverse operator and label
inverse_operator = read_inverse_operator(fname_inv)
label = mne.read_label(fname_label)

# Create epochs from the raw data
epochs = mne.Epochs(
    raw,
    events=np.array([[0, 0, event_id]]),
    event_id=event_id,
    tmin=tmin,
    tmax=tmax,
    baseline=None,
    preload=True,
)

# Compute inverse solution and stcs for each epoch
stcs = apply_inverse_epochs(
    epochs,
    inverse_operator,
    lambda2,
    method,
    label,
    pick_ori="normal",
    nave=1,
)

# Mean across trials but not across vertices in label
mean_stc = sum(stcs) / len(stcs)

# Compute sign flip to avoid signal cancellation when averaging signed values
flip = mne.label_sign_flip(label, inverse_operator["src"])

label_mean = np.mean(flip[:, np.newaxis] * mean_stc.data, axis=0)



Create epochs


[35]

2s

import pandas as pd
import mne
import matplotlib.pyplot as plt

# Load the compressed CSV file using Pandas
csv_file = '/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/EEGs_Guinea-Bissau/signal-1.csv.gz'
data_frame = pd.read_csv(csv_file, compression='gzip')

# Drop the unnecessary columns
columns_to_drop = ['Unnamed: 0', 'GYROY', 'GYROX']
data_frame = data_frame.drop(columns_to_drop, axis=1)



# Get the column names (channel names)
channel_names = data_frame.columns.tolist()

# Convert the data to numpy array
data = data_frame.values.T  # Transpose the data to have shape (n_channels, n_samples)

# Create an MNE Info object
sfreq = 1000  # Sampling frequency in Hz
info = mne.create_info(channel_names, sfreq, ch_types='eeg')

# Create MNE RawArray object
raw = mne.io.RawArray(data, info)

# Set up events (assuming single continuous recording)
events = mne.make_fixed_length_events(raw, duration=1.0)  # 1 second duration epochs

# Create MNE Epochs object
epochs = mne.Epochs(raw, events, tmin=0, tmax=1.0, baseline=None, preload=True)

# Plot the epochs
epochs.plot(scalings='auto', n_epochs=30)

# Show the plot
plt.show()



Epoch for just one recording


[36]

0s

import pandas as pd
import mne
import numpy as np
import matplotlib.pyplot as plt

# Select the EEG data from the first file in the Epilepsy object
data_frame = Epilepsy[0]

# Select a single column (keyed '1' here; the epoch5AF3 naming below suggests AF3 was intended)
channel_name = '1'
channel_data = data_frame[channel_name].values

# Create an MNE Info object for a single channel
sfreq = 1000  # Sampling frequency in Hz
info = mne.create_info([channel_name], sfreq, ch_types='eeg')

# Create an MNE RawArray object for the selected channel data
raw = mne.io.RawArray([channel_data], info)

# Set up events (assuming single continuous recording)
event_id = 1  # Event ID for the epochs
event_duration = 1.0  # Duration of each epoch in seconds
events = mne.make_fixed_length_events(raw, duration=event_duration, event_id=event_id)

# Create MNE Epochs object for the selected channel data
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=0, tmax=event_duration, baseline=None, preload=True)

# Select epoch 5 from the Epochs object
epoch_index = 4  # Zero-based index (epoch 5 corresponds to index 4)
epoch5AF3 = epochs[epoch_index]

# Plot the epoch
epoch5AF3.plot()

# Show the plot
plt.show()



[37]

0s

import pandas as pd
import mne
import numpy as np
import matplotlib.pyplot as plt

# Select the EEG data from the first file in the Epilepsy object
data_frame = Epilepsy[0]

# Convert the data to numpy array
data = data_frame.values.T  # Transpose the data to have shape (n_channels, n_samples)

# Create an MNE Info object
sfreq = 1000  # Sampling frequency in Hz
info = mne.create_info(ch_names=data.shape[0], sfreq=sfreq, ch_types='eeg')  # an int ch_names auto-generates channel names

# Create MNE RawArray object
raw = mne.io.RawArray(data, info)

# Set up events (assuming single continuous recording)
event_id = 1  # Event ID for the epochs
event_duration = 1.0  # Duration of each epoch in seconds
events = mne.make_fixed_length_events(raw, duration=event_duration, event_id=event_id)

# Create MNE Epochs object
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=0, tmax=event_duration, baseline=None, preload=True)

# Plot the epochs
epochs



[38]

0s

import pandas as pd
import mne
import numpy as np
import matplotlib.pyplot as plt

# Select the EEG data from the first file in the Epilepsy object
data_frame = Epilepsy[0]

# Get the column names (channel names)
channel_names = data_frame.info.ch_names

# Convert the epochs to a numpy array of shape (n_epochs, n_channels, n_samples)
data = data_frame.get_data()

# Combine the data from all epochs into one continuous array of shape (n_channels, n_epochs * n_samples)
combined_data = np.hstack(data)

# Create an MNE Info object
sfreq = data_frame.info['sfreq']
info = mne.create_info(channel_names, sfreq, ch_types='eeg')

# Create MNE RawArray object
raw = mne.io.RawArray(combined_data, info)

# Set up events (assuming single continuous recording)
event_id = 1  # Event ID for the epochs
event_duration = 1.0  # Duration of each epoch in seconds
events = mne.make_fixed_length_events(raw, duration=event_duration, event_id=event_id)

# Create MNE Epochs object
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=0, tmax=event_duration, baseline=None, preload=True)

# Plot the epochs
epochs.plot(scalings='auto', n_epochs=30)

# Show the plot
plt.show()



[39]

1s

epochs.plot_psd(picks='F4')



[40]

0s

epochs.plot_image(picks='T8')



[41]

1s

epochs_for_tfr = mne.Epochs(raw, events, tmin=-0.5, tmax=0.5, baseline=None, preload=True)
epochs_for_tfr['1'].plot_image(picks='AF3')



[42]

0s

import pandas as pd
import mne
import matplotlib.pyplot as plt

# Select the EEG data from the first file in the Epilepsy object
data_frame = Epilepsy[0]

# Create an MNE Info object
sfreq = 1000  # Sampling frequency in Hz
info = mne.create_info(data_frame.columns.tolist(), sfreq, ch_types='eeg')

# Create MNE RawArray object
raw = mne.io.RawArray(data_frame.values.T, info)

# Set up events (assuming single continuous recording)
event_duration = 1.0  # Duration of each epoch in seconds
events = mne.make_fixed_length_events(raw, duration=event_duration)

# Create MNE Epochs object
epochs = mne.Epochs(raw, events, tmin=0, tmax=event_duration, baseline=None, preload=True)

# Plot the image visualization for all channels and epochs
epochs.plot_image(combine='mean')

# Show the plot
plt.show()



[43]

0s

import pandas as pd
import matplotlib.pyplot as plt

# Load the CSV file using Pandas
csv_file = '/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/EEGs_Guinea-Bissau/signal-1.csv.gz'
data_frame = pd.read_csv(csv_file)

# Drop the non-EEG columns
columns_to_drop = ['Unnamed: 0', 'GYROY', 'GYROX', 'COUNTER', 'INTERPOLATED', 'RAW_CQ', 'CQ_CMS', 'CQ_F7', 'CQ_T7', 'CQ_O2', 'CQ_FC6', 'CQ_AF4', 'CQ_F3', 'CQ_P7', 'CQ_P8', 'CQ_F4', 'CQ_AF3', 'CQ_FC5', 'CQ_O1', 'CQ_T8', 'CQ_F8', 'CQ_DRL']
data_frame = data_frame.drop(columns_to_drop, axis=1)

data_frame_1d = data_frame.values.ravel()

# Get the column names (channel names)
channel_names = data_frame.columns.tolist()

# Plot the EEG data
#data_frame.plot(figsize=(12, 6))
#plt.xlabel('Time')
#plt.ylabel('Amplitude')
#plt.legend(channel_names)
#plt.show()

#epochs.plot_image(data_frame_1d)

[61]

4s

import pandas as pd

# Step 1: Get the index of all channels
channel_indices = [epochs.ch_names.index(ch) for ch in epochs.ch_names]

# Step 2: Extract data for all channels
data = epochs.get_data()[:, channel_indices, :]

# Step 3: Convert to DataFrame
df = pd.DataFrame(data.transpose(0, 2, 1).reshape(-1, len(channel_indices)), columns=epochs.ch_names)

# Step 4: Drop columns (nothing to drop here: the non-EEG columns were removed upstream)
#columns_to_drop = ['Unnamed: 0', 'GYROY', 'GYROX', 'COUNTER', 'INTERPOLATED', 'RAW_CQ', 'CQ_CMS', 'CQ_F7', 'CQ_T7', 'CQ_O2', 'CQ_FC6', 'CQ_AF4', 'CQ_F3', 'CQ_P7', 'CQ_P8', 'CQ_F4', 'CQ_AF3', 'CQ_FC5', 'CQ_O1', 'CQ_T8', 'CQ_F8', 'CQ_DRL']
#df = df.drop(columns_to_drop, axis=1)

# Plot the EEG data (data_frame comes from the earlier CSV-loading cell)
data_frame.plot(figsize=(12, 6))

# Step 5: Plot the image (plot_image is an MNE Epochs method, not a pandas one)
epochs.plot_image(combine='mean')



[45]

0s

import pandas as pd

# Step 1: Get the index of 'AF3' channel
channel_index = epochs.ch_names.index('AF3')

# Step 2: Extract 'AF3' data
af3_data = epochs.get_data()[:, channel_index, :]

# Step 3: Convert to DataFrame
af3_dataframe = pd.DataFrame(af3_data.T, columns=epochs.events[:, 2])

# Step 4: Nothing to drop here: the non-EEG columns were removed upstream, and
# this DataFrame's columns are epoch event codes rather than channel names

# Step 5: Plot the image (plot_image is an MNE Epochs method, not a pandas one)
epochs.plot_image(picks='AF3')



[46]

0s

# MNE functions
from mne import Epochs,find_events, concatenate_raws
from mne.time_frequency import tfr_morlet

# EEG-Notebooks functions
#from eegnb.analysis.utils import load_data,plot_conditions
#from eegnb.datasets import fetch_dataset

import mne

# Assuming you have the 'epochs' object and want to select channel 'AF3'
epochs = mne.Epochs(raw, events, tmin=0, tmax=1.0, baseline=None, preload=True)  # Your 'epochs' object
channel_name = 'AF3'

# Select the channel index
channel_idx = epochs.ch_names.index(channel_name)

# Define the frequency range for the Morlet wavelets
freqs = [4, 8, 12, 16, 20]  # Frequency range in Hz

# Define the number of cycles for each frequency in the range
n_cycles = [freq / 2.0 for freq in freqs]  # Adjust the division factor as needed

# Compute the time-frequency representation (TFR) using Morlet wavelets
tfr = mne.time_frequency.tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles, picks=channel_idx, average=True)
Not setting metadata
38 matching events found
No baseline correction applied
0 projection items activated
Using data from preloaded Raw for 38 events and 1001 original time points ...
0 bad epochs dropped
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done   1 out of   1 | elapsed:    0.1s remaining:    0.0s
[Parallel(n_jobs=1)]: Done   1 out of   1 | elapsed:    0.1s finished

[47]

0s

import numpy as np
from matplotlib import pyplot as plt

from mne import create_info, Epochs
from mne.baseline import rescale
from mne.io import RawArray
from mne.time_frequency import (
    tfr_multitaper,
    tfr_stockwell,
    tfr_morlet,
    tfr_array_morlet,
    AverageTFR,
)
from mne.viz import centers_to_edges

print(__doc__)
Automatically created module for IPython interactive environment

Morlet transform


[48]

2s

n_cycles = [freq / 2.0 for freq in freqs]
power = tfr_morlet(
    epochs, freqs=freqs, n_cycles=n_cycles, return_itc=False, average=False
)
print(type(power))
avgpower = power.average()
avgpower.plot(
    [0],
    baseline=(0.0, 0.1),
    mode="mean",
    vmin=None,
    vmax=None,
    title="Using Morlet wavelets and EpochsTFR",
    show=False,
)
plt.show()



Define and fit the ICA


[ ]

0s

# Define and fit the ICA
#ica = ICA(random_state=30042020)
#ica.fit(raw)

# Get the ICA-transformed data
#ica_data = ica._transform_raw(raw, 0, len(raw.times))

# Obtain the power spectrum using psd_array_welch
#psd, freqs = psd_array_welch(ica_data, raw.info["sfreq"], n_fft=64)

[66]

0s

ica = ICA(n_components=14)
ica.fit(raw)

# Get the ICA-transformed data
ica_data = ica._transform_raw(raw, 0, len(raw.times))

# Obtain the power spectrum using psd_array_welch
psd, freqs = psd_array_welch(ica_data, raw.info["sfreq"], n_fft=64)

#normalized_psd = psd / np.max(psd)
#audio_signal = normalized_psd.ravel()
Fitting ICA to data using 33 channels (please be patient, this may take a while)
Selecting by number: 14 components
<ipython-input-66-da386618e51d>:2: RuntimeWarning: The data has not been high-pass filtered. For good ICA performance, it should be high-pass filtered (e.g., with a 1.0 Hz lower bound) before fitting ICA.
  ica.fit(raw)
Fitting ICA took 2.0s.
Effective window size : 0.064 (s)
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done   1 out of   1 | elapsed:    0.0s remaining:    0.0s
[Parallel(n_jobs=1)]: Done   1 out of   1 | elapsed:    0.0s finished

Normalize and reshape power spectrum as audio signal


[67]

0s

# Normalize and reshape power spectrum as audio signal
normalized_psd = psd / np.max(psd)
audio_signal = normalized_psd.ravel()

[68]

0s

import numpy as np
import matplotlib.pyplot as plt

# Assuming you have the variable 'psd' that holds the power spectrum
# Calculate the normalized power spectral density
normalized_psd = psd / np.max(psd)

# Flatten the normalized power spectral density into a 1D array
audio_signal = normalized_psd.ravel()

# Plot the normalized power spectral density
plt.figure(figsize=(10, 6))
plt.plot(normalized_psd)
plt.title('Normalized Power Spectral Density')
plt.xlabel('Frequency Bins')
plt.ylabel('Normalized Power')
plt.show()



Convert audio signal to appropriate data type for writing to WAV
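soundfile performs this conversion itself when you pass 'PCM_16', but the explicit version is a one-liner. A minimal sketch, assuming `normalized_audio` is a float array scaled to [-1, 1]:

import numpy as np
import soundfile as sf

# Explicit float -> 16-bit PCM conversion before writing
pcm16 = (np.clip(normalized_audio, -1.0, 1.0) * 32767).astype(np.int16)
sf.write("output_int16.wav", pcm16, 44100)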


As a single channel (AF3)


[69]

1s

import numpy as np
import matplotlib.pyplot as plt
import librosa
import soundfile as sf
from IPython.display import Audio
from mne.preprocessing import ICA
from mne.time_frequency import psd_array_welch

# Set the channel to process
channel_name = 'AF3'

# Fit ICA on the raw data
ica = ICA(n_components=14)
ica.fit(raw)

# Get the index of the selected channel
channel_index = raw.ch_names.index(channel_name)

# Get the ICA-transformed data for the selected channel (_transform_raw is a private MNE API)
ica_data = ica._transform_raw(raw, channel_index, len(raw.times))

# Obtain the power spectrum using psd_array_welch
psd, freqs = psd_array_welch(ica_data, raw.info["sfreq"], n_fft=64)

# Normalize the power spectrum
normalized_psd = psd / np.max(psd)

# Flatten the normalized power spectrum into a 1D array
audio_signal = normalized_psd.ravel()

# Plot the normalized power spectrum for the selected channel
plt.figure(figsize=(10, 6))
plt.plot(freqs, normalized_psd)
plt.title('Normalized Power Spectral Density - Channel: ' + channel_name)
plt.xlabel('Frequency (Hz)')
plt.ylabel('Normalized Power')
plt.show()

# Set the desired frequencies and duration
base_frequency = 220  # Adjust the base frequency as needed
beat_frequency = 4  # Adjust the beat frequency as needed
duration = 10  # Adjust the duration (in seconds) as needed

# Rescale the audio signal to match the desired duration
audio_signal *= 0.5  # Adjust the volume as needed
sample_rate = 44100  # Adjust as needed
duration = 10  # Duration of the audio signal in seconds
audio_signal = np.resize(audio_signal, int(sample_rate * duration))

# Generate the binaural beats (generate_binaural_beats is the helper defined at the end of this notebook)
binaural_beats, sample_rate = generate_binaural_beats(base_frequency, beat_frequency, duration, audio_signal)

# Save the binaural beats to a file
sf.write("binaural_beats.wav", binaural_beats, sample_rate, 'PCM_16')

# Play the binaural beats in Colab
Audio("binaural_beats.wav")



As FM, no PSD


[52]

5s

import numpy as np
import matplotlib.pyplot as plt
import librosa
import soundfile as sf
from IPython.display import Audio
from mne.preprocessing import ICA

# Set the channel to process
channel_name = 'AF3'

# Fit ICA on the raw data
ica = ICA(n_components=14)
ica.fit(raw)

# Get the index of the selected channel
channel_index = raw.ch_names.index(channel_name)

# Get the ICA-transformed data for the selected channel
ica_data = ica._transform_raw(raw, channel_index, len(raw.times))

# Set the parameters for FM synthesis
carrier_frequency = 440  # Adjust the carrier frequency as needed
modulation_index = 4  # Adjust the modulation index as needed

# Generate the FM audio signal
audio_signal = np.sin(2 * np.pi * carrier_frequency * raw.times + modulation_index * ica_data)

# Normalize the audio signal
normalized_audio = audio_signal / np.max(np.abs(audio_signal))

# Rescale the audio signal to match the desired duration
normalized_audio *= 0.5  # Adjust the volume as needed
sample_rate = 44100  # Adjust as needed
duration = raw.times[-1]  # Duration of the EEG recording in seconds
audio_signal = np.resize(normalized_audio, int(sample_rate * duration))

# Save the FM audio signal to a file
sf.write("fm_audio.wav", audio_signal, sample_rate, 'PCM_16')


# Play the FM audio signal in Colab
Audio("fm_audio.wav")



Try it with the Raw object


[57]

0s

# Generate FM audio for each channel
sample_rate = 44100  # Adjust as needed
normalized_audio = np.zeros((len(channel_names), len(raw.times)))  # one row per channel, spanning the full recording

for i, channel_index in enumerate(channel_indices):
    ica_data = ica._transform_raw(raw, channel_index, len(raw.times))

    # Set the parameters for FM synthesis
    carrier_frequency = 440  # Adjust the carrier frequency as needed
    modulation_index = 4  # Adjust the modulation index as needed

    # Generate the FM audio signal
    audio_signal = np.sin(2 * np.pi * carrier_frequency * raw.times + modulation_index * ica_data)

    # Normalize the audio signal
    normalized_audio[i] = audio_signal[i] / np.max(np.abs(audio_signal[i]))



[58]

2s

import numpy as np
import matplotlib.pyplot as plt
import librosa
import soundfile as sf
from IPython.display import Audio
from mne.preprocessing import ICA

# Set the channels to process
channel_names = epochs.ch_names

# Get the indices of all channels
channel_indices = [epochs.ch_names.index(ch) for ch in channel_names]

# Extract data for all channels
data = epochs.get_data()[:, channel_indices, :]

# Transpose the data to have shape (n_channels, n_samples, n_epochs)
data = data.transpose(1, 2, 0)

# Reshape the data to have shape (n_channels * n_samples, n_epochs)
data = data.reshape(-1, data.shape[2])

# Fit ICA on the raw data
ica = ICA(n_components=14)
ica.fit(raw)

# Generate FM audio for each channel
sample_rate = 44100  # Adjust as needed
normalized_audio = np.zeros((len(channel_names), len(raw.times)))  # one row per channel, spanning the full recording

for i, channel_index in enumerate(channel_indices):
    ica_data = ica._transform_raw(raw, channel_index, len(raw.times))

    # Set the parameters for FM synthesis
    carrier_frequency = 440  # Adjust the carrier frequency as needed
    modulation_index = 4  # Adjust the modulation index as needed

    # Generate the FM audio signal
    audio_signal = np.sin(2 * np.pi * carrier_frequency * raw.times + modulation_index * ica_data)

    # Normalize the audio signal
    normalized_audio[i] = audio_signal[i] / np.max(np.abs(audio_signal[i]))

# Combine the normalized audio signals for all channels
combined_audio = np.sum(normalized_audio, axis=0)

# Normalize the combined audio signal
combined_audio /= np.max(np.abs(combined_audio))

# Rescale the combined audio signal to match the desired duration
duration = raw.times[-1]  # Duration of the EEG recording in seconds
combined_audio = np.resize(combined_audio, int(sample_rate * duration))

# Save the FM audio signal to a file
sf.write("fm_audio.wav", combined_audio, sample_rate, 'PCM_16')

# Play the FM audio signal in Colab
Audio("fm_audio.wav")



Drop columns, play what remains


[ ]

0s

import pandas as pd

# List of file paths
Epilepsy = ['EEGs_Guinea-Bissau/signal-1.csv.gz', 'EEGs_Guinea-Bissau/signal-2.csv.gz', 'EEGs_Guinea-Bissau/signal-3.csv.gz']

# Drop the specified columns and save the modified data as a new object
columns_to_drop = ['Unnamed: 0', 'GYROY', 'GYROX', 'COUNTER', 'INTERPOLATED', 'RAW_CQ', 'CQ_CMS', 'CQ_F7', 'CQ_T7', 'CQ_O2', 'CQ_FC6', 'CQ_AF4', 'CQ_F3', 'CQ_P7', 'CQ_P8', 'CQ_F4', 'CQ_AF3', 'CQ_FC5', 'CQ_O1', 'CQ_T8', 'CQ_F8', 'CQ_DRL']

epilepsy = []  # List to store modified data

# Process each file
for file_path in Epilepsy:
    # Load the CSV file using Pandas
    data_frame = pd.read_csv(file_path, compression='gzip')
    
    # Drop the specified columns
    data_frame = data_frame.drop(columns_to_drop, axis=1)
    
    # Append the modified data to the epilepsy list
    epilepsy.append(data_frame)

# If you want to concatenate all the modified data into a single DataFrame, you can use the following line:
    #epilepsy = pd.concat(epilepsy, ignore_index=True)

[ ]

0s

import matplotlib.pyplot as plt

# Plot the data in the epilepsy object
for i, data_frame in enumerate(epilepsy):
    # Get the column names (channel names)
    channel_names = data_frame.columns.tolist()

    # Plot the EEG data
    plt.figure()
    data_frame.plot(figsize=(12, 6))
    plt.xlabel('Time')
    plt.ylabel('Amplitude')
    plt.legend(channel_names)
    plt.title(f'Epilepsy {i+1}')
    plt.show()

In this modified code, we select the EEG data from the first file in the epilepsy object by accessing epilepsy[0]. We then iterate over each channel of the EEG data and apply FM synthesis to sonify it. The resulting audio signals for each channel are combined, normalized, and saved to a WAV file. Finally, the audio signal is played using IPython.display.Audio.

Please note that you may need to adjust the carrier_frequency, modulation_index, and sample_rate parameters according to your requirements.


[ ]

0s

import numpy as np
import soundfile as sf
from scipy.signal import chirp
from IPython.display import Audio

# Select the EEG data from the first file in the epilepsy object
eeg_data = Epilepsy[0]

# Set the parameters for FM synthesis
carrier_frequency = 440  # Adjust the carrier frequency as needed
modulation_index = 4  # Adjust the modulation index as needed
sample_rate = 44100  # Adjust as needed

# Sonify each channel of the EEG data
audio_signals = []
for channel_name in eeg_data.columns:
    channel_data = eeg_data[channel_name].values

    # Normalize the channel data
    normalized_data = (channel_data - np.min(channel_data)) / (np.max(channel_data) - np.min(channel_data))

    # Apply FM synthesis
    t = np.arange(len(channel_data)) / sample_rate
    modulation = normalized_data * modulation_index
    audio_signal = chirp(t, f0=carrier_frequency, f1=carrier_frequency, t1=t[-1], method='linear', phi=90, vertex_zero=False) * modulation

    # Append the audio signal to the list
    audio_signals.append(audio_signal)

# Combine the audio signals from all channels
combined_audio = np.sum(audio_signals, axis=0)

# Normalize the combined audio signal
combined_audio /= np.max(np.abs(combined_audio))

# Save the combined audio signal to a file
sf.write("eeg_audio.wav", combined_audio, sample_rate, 'PCM_16')

# Play the combined audio signal
Audio("eeg_audio.wav")

[ ]

0s

import numpy as np
import soundfile as sf
from IPython.display import Audio

# Select the EEG data from the first file in the epilepsy object
eeg_data = epilepsy[0]

# Specify the channels to process
channels_to_process = ['AF3', 'AF4']  # Add or remove channel names as needed

# Set the parameters for FM synthesis
carrier_frequency = 440  # Adjust the carrier frequency as needed
modulation_index = 4  # Adjust the modulation index as needed
sample_rate = 44100  # Adjust as needed

# Create an empty list to store the audio signals for each channel
audio_signals = []

# Generate the FM audio signal for each selected channel
for channel_name in channels_to_process:
    # Get the index of the selected channel
    channel_index = eeg_data.columns.get_loc(channel_name)

    # Get the selected channel data
    channel_data = eeg_data.iloc[:, channel_index]

    # Determine the duration of the EEG recording
    duration = channel_data.shape[0] / sample_rate

    # Generate the FM audio signal
    time = np.linspace(0, duration, channel_data.shape[0])
    audio_signal = np.sin(2 * np.pi * carrier_frequency * time + modulation_index * channel_data.values)

    # Normalize the audio signal
    normalized_audio = audio_signal / np.max(np.abs(audio_signal))

    # Append the audio signal to the list
    audio_signals.append(normalized_audio)

# Combine the audio signals from all channels
combined_audio = np.sum(audio_signals, axis=0)

# Normalize the combined audio signal
normalized_combined_audio = combined_audio / np.max(np.abs(combined_audio))

# Save the combined audio signal to a file
sf.write("eeg_audio.wav", normalized_combined_audio, sample_rate, 'PCM_16')

# Play the combined audio signal
Audio("eeg_audio.wav")



[ ]

0s

import numpy as np
import matplotlib.pyplot as plt
import librosa
import soundfile as sf
from IPython.display import Audio
from mne.preprocessing import ICA
from sklearn.decomposition import FastICA

#Epilepsy=[pd.read_csv('EEGs_Guinea-Bissau/signal-{}.csv.gz'.format(i), compression='gzip') for i in  EP_sub]

# Load the CSV files using Pandas (one DataFrame per Epilepsy subject)
data_frames = [pd.read_csv('EEGs_Guinea-Bissau/signal-{}.csv.gz'.format(i), compression='gzip') for i in EP_sub]

# Drop the specified columns from each DataFrame
columns_to_drop = ['Unnamed: 0', 'GYROY', 'GYROX', 'COUNTER', 'INTERPOLATED', 'RAW_CQ', 'CQ_CMS', 'CQ_F7', 'CQ_T7', 'CQ_O2', 'CQ_FC6', 'CQ_AF4', 'CQ_F3', 'CQ_P7', 'CQ_P8', 'CQ_F4', 'CQ_AF3', 'CQ_FC5', 'CQ_O1', 'CQ_T8', 'CQ_F8', 'CQ_DRL']
data_frames = [df.drop(columns_to_drop, axis=1) for df in data_frames]

# Get the column names (channel names) from the first recording
channel_names = data_frames[0].columns.tolist()

# Fit scikit-learn's FastICA on the first recording
ica_sk = FastICA(n_components=len(channel_names))
ica_sk.fit(data_frames[0])

# Fit MNE's ICA on the raw data
ica = ICA(n_components=33)
ica.fit(raw)

# Get the index of the selected channel (channel_name is set to 'AF3' in an earlier cell)
channel_index = raw.ch_names.index(channel_name)

# Get the ICA-transformed data for the selected channel
ica_data = ica._transform_raw(raw, channel_index, len(raw.times))

# Set the parameters for FM synthesis
carrier_frequency = 440  # Adjust the carrier frequency as needed
modulation_index = 4  # Adjust the modulation index as needed

# Generate the FM audio signal
audio_signal = np.sin(2 * np.pi * carrier_frequency * raw.times + modulation_index * ica_data)

# Normalize the audio signal
normalized_audio = audio_signal / np.max(np.abs(audio_signal))

# Rescale the audio signal to match the desired duration
normalized_audio *= 0.5  # Adjust the volume as needed
sample_rate = 44100  # Adjust as needed
duration = raw.times[-1]  # Duration of the EEG recording in seconds
audio_signal = np.resize(normalized_audio, int(sample_rate * duration))

# Save the FM audio signal to a file
sf.write("fm_audio.wav", audio_signal, sample_rate, 'PCM_16')


# Play the FM audio signal in Colab
Audio("fm_audio.wav")

FFT


[70]

6s

import numpy as np
import matplotlib.pyplot as plt
import librosa
import soundfile as sf
from IPython.display import Audio
from mne.preprocessing import ICA

#Set the channel to process
channel_name = 'AF3'

#Fit ICA on the raw data
ica = ICA(n_components=33)
ica.fit(raw)

#Get the index of the selected channel
channel_index = raw.ch_names.index(channel_name)

#Get the ICA-transformed data for the selected channel
ica_data = ica._transform_raw(raw, channel_index, len(raw.times))

#Set the parameters for FM synthesis
carrier_frequency = 880 # Adjust the carrier frequency as needed
modulation_index = 4 # Adjust the modulation index as needed
sample_rate = 44100 # Adjust as needed (defined here, before its first use below)

#Reshape the ica_data to match the number of samples
n_samples = len(raw.times)
ica_data = np.resize(ica_data, (n_samples,))

#Compute the audio signal using FFT
exponential_array = np.exp(1j * modulation_index * 2 * np.pi * carrier_frequency * np.arange(n_samples) / sample_rate)
exponential_array = np.resize(exponential_array, (n_samples//2+1,)) # match the rfft output length
audio_signal = np.fft.irfft(np.fft.rfft(ica_data) * exponential_array)

#Normalize the audio signal
normalized_audio = audio_signal / np.max(np.abs(audio_signal))

#Rescale the audio signal to match the desired duration
normalized_audio *= 0.5 # Adjust the volume as needed
duration = raw.times[-1] # Duration of the EEG recording in seconds
audio_signal = np.resize(normalized_audio, int(sample_rate * duration))

#Save the FM audio signal to a file
sf.write("fm_audio.wav", audio_signal, sample_rate, 'PCM_16')

#Play the FM audio signal in Colab
Audio("fm_audio.wav")



[ ]

0s

import numpy as np
import matplotlib.pyplot as plt
import librosa
import soundfile as sf
from IPython.display import Audio
from mne.preprocessing import ICA

#Set the channel to process
channel_name = 'AF3'

#Fit ICA on the raw data
ica = ICA(n_components=33)
ica.fit(raw)

#Get the index of the selected channel
channel_index = raw.ch_names.index(channel_name)

##Get the ICA-transformed data for the selected channel
ica_data = ica._transform_raw(raw, channel_index, len(raw.times))

#Set the parameters for FM synthesis
carrier_frequency = 440 # Adjust the carrier frequency as needed
modulation_index = 4 # Adjust the modulation index as needed
sample_rate = 44100 # Adjust as needed (defined here, before its first use below)

#Reshape the ica_data to match the number of samples
n_samples = len(raw.times)
ica_data = np.resize(ica_data, (n_samples,))

#Compute the audio signal using FFT
exponential_array = np.exp(1j * modulation_index * 2 * np.pi * carrier_frequency * np.arange(n_samples) / sample_rate)
exponential_array = np.resize(exponential_array, (n_samples//2+1,)) # match the rfft output length
audio_signal = np.fft.irfft(np.fft.rfft(ica_data) * exponential_array)

#Normalize the audio signal
normalized_audio = audio_signal / np.max(np.abs(audio_signal))

#Rescale the audio signal to match the desired duration
normalized_audio *= 0.5 # Adjust the volume as needed
duration = raw.times[-1] # Duration of the EEG recording in seconds
audio_signal = np.resize(normalized_audio, int(sample_rate * duration))

#Save the FM audio signal to a file
sf.write("fm_audio.wav", audio_signal, sample_rate, 'PCM_16')

#Play the FM audio signal in Colab
Audio("fm_audio.wav")

[ ]

0s

import numpy as np
import matplotlib.pyplot as plt
import librosa
import soundfile as sf
from IPython.display import Audio
from mne.preprocessing import ICA

# Set the channel to process
channel_name = 'AF3'

# Fit ICA on the raw data
ica = ICA(n_components=33)
ica.fit(raw)

# Get the index of the selected channel
channel_index = raw.ch_names.index(channel_name)

# Get the ICA-transformed data for the selected channel
ica_data = ica._transform_raw(raw, channel_index, len(raw.times))[:, :min(ica._transform_raw(raw, channel_index, len(raw.times)).shape[1], len(raw.times[:-1]))]

# Set the parameters for FM synthesis
carrier_frequency = 220  # Adjust the carrier frequency as needed
modulation_index = 4  # Adjust the modulation index as needed

# Generate the FM audio signal
audio_signal = np.sin(2 * np.pi * carrier_frequency * raw.times[:-1][:len(ica_data[0])] + modulation_index * ica_data[0])

# Calculate the amplitude envelope
amplitude_envelope = np.abs(audio_signal)

# Normalize the audio signal and the amplitude envelope
normalized_audio = audio_signal / np.max(np.abs(audio_signal))
normalized_envelope = amplitude_envelope / np.max(amplitude_envelope)

# Rescale the audio signal to match the desired duration
normalized_audio *= 0.5  # Adjust the volume as needed
sample_rate = 44100  # Adjust as needed
duration = raw.times[-2]  # Duration of the EEG recording in seconds (use -2 to match the trimmed length)
audio_signal = np.resize(normalized_audio, int(sample_rate * duration))

# Save the FM audio signal to a file
sf.write("fm_audio.wav", audio_signal, sample_rate, 'PCM_16')

# Plot the amplitude envelope
plt.figure(figsize=(10, 6))
plt.plot(raw.times[:-1][:len(ica_data[0])], normalized_envelope)
plt.title('Amplitude Envelope - Channel: ' + channel_name)
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
plt.show()

# Play the FM audio signal in Colab
Audio(audio_signal, rate=sample_rate)

high-pass filter


[72]

10s

import mne

# Define the cutoff frequency for the high-pass filter
highpass_freq = 1.0  # Adjust as needed

# Apply the high-pass filter to the raw data
raw.filter(l_freq=highpass_freq, h_freq=None)

# Continue with ICA fitting and processing
ica = mne.preprocessing.ICA(n_components=33)
ica.fit(raw)

# Rest of your code...



Zero crossing


[73]

7s

import numpy as np
import matplotlib.pyplot as plt
import librosa
import soundfile as sf
from IPython.display import Audio
from mne.preprocessing import ICA

# Set the channel to process
channel_name = 'AF3'

# Fit ICA on the raw data
ica = ICA(n_components=33)
ica.fit(raw)

# Get the index of the selected channel
channel_index = raw.ch_names.index(channel_name)

# Get the ICA-transformed data for the selected channel (trimmed to the raw length)
ica_data = ica._transform_raw(raw, channel_index, len(raw.times))
ica_data = ica_data[:, :min(ica_data.shape[1], len(raw.times[:-1]))]

# Set the parameters for FM synthesis
carrier_frequency = 220  # Adjust the carrier frequency as needed
modulation_index = 4  # Adjust the modulation index as needed

# Generate the FM audio signal
audio_signal = np.sin(2 * np.pi * carrier_frequency * raw.times[:-1][:len(ica_data[0])] + modulation_index * ica_data[0])

# Calculate the amplitude envelope
amplitude_envelope = np.abs(audio_signal)

# Normalize the audio signal and the amplitude envelope
normalized_audio = audio_signal / np.max(np.abs(audio_signal))
normalized_envelope = amplitude_envelope / np.max(amplitude_envelope)

# Find zero-crossing indices
zero_crossings = np.where(np.diff(np.sign(normalized_audio)))[0]

# Adjust the waveform at zero crossings
for index in zero_crossings:
    if normalized_audio[index] > 0:
        normalized_audio[index:] -= 2 * normalized_audio[index]
    else:
        normalized_audio[index:] += 2 * np.abs(normalized_audio[index])

# Rescale the audio signal to match the desired duration
normalized_audio *= 0.5  # Adjust the volume as needed
sample_rate = 44100  # Adjust as needed
duration = raw.times[-2]  # Duration of the EEG recording in seconds (use -2 to match the trimmed length)
audio_signal = np.resize(normalized_audio, int(sample_rate * duration))  # this resize was missing in the original cell; without it, the zero-crossing processing never reaches the saved file

# Save the FM audio signal to a file
sf.write("fm_audio.wav", audio_signal, sample_rate, 'PCM_16')

# Play the FM audio signal in Colab
Audio(audio_signal, rate=sample_rate)
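
Since librosa is already imported, the zero-crossing behaviour of the rendered audio can also be measured directly. A minimal sketch (the frame parameters are librosa's defaults, not from the original notebook):

# fraction of sign changes per analysis frame of the rendered audio
zcr = librosa.feature.zero_crossing_rate(audio_signal)
print("mean zero-crossing rate:", zcr.mean())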



[ ]

0s

import numpy as np
import librosa
import soundfile as sf
from IPython.display import Audio

def generate_binaural_beats(base_frequency, beat_frequency, duration, audio_signal, volume=0.5, sample_rate=22050):
    t = np.linspace(0, duration, int(duration * sample_rate), endpoint=False)

    # Instead of sine waves, use audio_signal as the waveform (duplicated to both ears,
    # as in the variant of this function further below; the original 2D resize here
    # would have allocated an enormous square array)
    audio_signal = np.resize(audio_signal, int(sample_rate * duration))
    left_channel = audio_signal
    right_channel = audio_signal

    stereo_audio = np.vstack((left_channel, right_channel)).T
    stereo_audio = librosa.util.normalize(stereo_audio) * volume

    return stereo_audio, sample_rate

# Set the desired frequencies and duration
base_frequency = 220  # Adjust the base frequency as needed
beat_frequency = 20  # Adjust the beat frequency as needed
duration = 10  # Adjust the duration (in seconds) as needed
sample_rate = 22050  # Must match the generator's default; needed for the resize below

# Normalize the power spectrum
normalized_psd = psd / np.max(psd)  # Normalize the power spectrum

# Reshape the epoch into a 1D array for each channel
audio_signals = [normalized_psd[channel_index].ravel() for channel_index in range(normalized_psd.shape[0])]

# Rescale the audio signals to match the desired duration
audio_signals = [audio_signal * 0.5 for audio_signal in audio_signals]  # Adjust the volume as needed
audio_signals = [np.resize(audio_signal, int(sample_rate * duration)) for audio_signal in audio_signals]

# Generate the binaural beats for each channel
binaural_beats = []
for audio_signal in audio_signals:
    channel_binaural_beats, sample_rate = generate_binaural_beats(base_frequency, beat_frequency, duration, audio_signal)
    binaural_beats.append(channel_binaural_beats)

# Combine the binaural beats for all channels
stereo_audio = np.concatenate(binaural_beats, axis=1)

# Save the binaural beats to a file
sf.write("binaural_beats.wav", stereo_audio, sample_rate, 'PCM_16')

# Play the binaural beats in Colab
Audio("binaural_beats.wav")

FM & AMP


[ ]

0s

import numpy as np
import matplotlib.pyplot as plt
import librosa
import soundfile as sf
from IPython.display import Audio
from mne.preprocessing import ICA

# Set the channel to process
channel_name = 'F3'

# Fit ICA on the raw data
ica = ICA(n_components=33)
ica.fit(raw)

# Get the index of the selected channel
channel_index = raw.ch_names.index(channel_name)

# Get the ICA-transformed data for the selected channel
ica_data = ica._transform_raw(raw, channel_index, len(raw.times))

# Set the parameters for FM synthesis
carrier_frequency = 220  # Adjust the carrier frequency as needed
modulation_index = 4  # Adjust the modulation index as needed

# Generate the FM audio signal
audio_signal = np.sin(2 * np.pi * carrier_frequency * raw.times + modulation_index * ica_data)

# Normalize the audio signal
normalized_audio = audio_signal / np.max(np.abs(audio_signal))

# Rescale the audio signal to match the desired duration
normalized_audio *= 0.5  # Adjust the volume as needed
sample_rate = 44100  # Adjust as needed
duration = raw.times[-1]  # Duration of the EEG recording in seconds
audio_signal = np.resize(normalized_audio, int(sample_rate * duration))

# Save the FM audio signal to a file
sf.write("fm_audio.wav", audio_signal, sample_rate, 'PCM_16')


# Play the FM audio signal in Colab
Audio("fm_audio.wav")

[ ]

0s

import numpy as np
import librosa
import soundfile as sf
from IPython.display import Audio
from mne.preprocessing import ICA

def generate_binaural_beats(base_frequency, beat_frequency, duration, audio_signal, volume=0.5, sample_rate=22050):
    t = np.linspace(0, duration, int(duration * sample_rate), endpoint=False)

    # Instead of sine waves, use audio_signal as the waveform
    audio_signal = np.resize(audio_signal, int(sample_rate * duration))

    left_channel = audio_signal
    right_channel = audio_signal

    stereo_audio = np.vstack((left_channel, right_channel)).T
    stereo_audio = librosa.util.normalize(stereo_audio) * volume

    return stereo_audio, sample_rate


# Set the channel to process
channel_name = 'AF3'

# Fit ICA on the raw data
ica = ICA(n_components=33)
ica.fit(raw)

# Get the index of the selected channel
channel_index = raw.ch_names.index(channel_name)

# Get the ICA-transformed data for the selected channel
ica_data = ica._transform_raw(raw, channel_index, len(raw.times))

# Set the parameters for FM synthesis
carrier_frequency = 220  # Adjust the carrier frequency as needed
modulation_index = 4  # Adjust the modulation index as needed

# Generate the FM audio signal
audio_signal = np.sin(2 * np.pi * carrier_frequency * raw.times + modulation_index * ica_data)

# Normalize the audio signal
normalized_audio = audio_signal / np.max(np.abs(audio_signal))

# Rescale the audio signal to match the desired duration
normalized_audio *= 0.5  # Adjust the volume as needed
sample_rate = 44100  # Adjust as needed
duration = raw.times[-1]  # Duration of the EEG recording in seconds
audio_signal = np.resize(normalized_audio, int(sample_rate * duration))

# Generate the binaural beats
base_frequency = 220  # Adjust the base frequency as needed
beat_frequency = 100  # Adjust the beat frequency as needed
duration = raw.times[-1]  # Duration of the EEG recording in seconds
binaural_beats, sample_rate = generate_binaural_beats(base_frequency, beat_frequency, duration, audio_signal)

# Save the binaural beats to a file
sf.write("binaural_beats.wav", binaural_beats, sample_rate, 'PCM_16')

# Play the binaural beats in Colab
Audio("binaural_beats.wav")


[ ]

0s

import numpy as np
import matplotlib.pyplot as plt
import librosa
import soundfile as sf
from IPython.display import Audio
from mne.preprocessing import ICA
from mne.time_frequency import psd_array_welch

# Set the channel to process
channel_name = 'AF3'

# Fit ICA on the raw data
ica = ICA(n_components=33)
ica.fit(raw)

# Get the index of the selected channel
channel_index = raw.ch_names.index(channel_name)

# Get the ICA-transformed data for the selected channel
ica_data = ica._transform_raw(raw, channel_index, len(raw.times))

# Obtain the power spectrum using psd_array_welch
psd, freqs = psd_array_welch(ica_data, raw.info["sfreq"], n_fft=64)

# Normalize the power spectrum
normalized_psd = psd / np.max(psd)

# Calculate the frequency modulation (FM) signal
fm_signal = np.sin(2 * np.pi * freqs * normalized_psd)

# Rescale the FM signal to match the desired duration
fm_signal *= 0.5  # Adjust the volume as needed
sample_rate = 44100  # Adjust as needed
duration = 10  # Duration of the audio signal in seconds
fm_signal = np.resize(fm_signal, int(sample_rate * duration))

# Generate the binaural beats
base_frequency = 220  # Adjust the base frequency as needed
beat_frequency = 4  # Adjust the beat frequency as needed
binaural_beats, sample_rate = generate_binaural_beats(base_frequency, beat_frequency, duration, fm_signal)

# Save the binaural beats to a file
sf.write("binaural_beats.wav", binaural_beats, sample_rate, 'PCM_16')

# Play the binaural beats in Colab
Audio("binaural_beats.wav")

entire length


[ ]

0s

import numpy as np
import matplotlib.pyplot as plt
import librosa
import soundfile as sf
from IPython.display import Audio
from mne.preprocessing import ICA
from mne.time_frequency import psd_array_welch

# Set the channel to process
channel_name = 'AF3'

# Fit ICA on the raw data
ica = ICA(n_components=33)
ica.fit(raw)

# Get the index of the selected channel
channel_index = raw.ch_names.index(channel_name)

# Get the ICA-transformed data for the selected channel
ica_data = ica._transform_raw(raw, channel_index, len(raw.times))

# Obtain the power spectrum using psd_array_welch
psd, freqs = psd_array_welch(ica_data, raw.info["sfreq"], n_fft=64)

# Normalize the power spectrum
normalized_psd = psd / np.max(psd)

# Flatten the normalized power spectrum into a 1D array
audio_signal = normalized_psd.ravel()

# Set the desired frequencies and duration
base_frequency = 220  # Adjust the base frequency as needed
beat_frequency = 4  # Adjust the beat frequency as needed
duration = raw.times[-1]  # Duration of the EEG recording in seconds

# Rescale the audio signal to match the desired duration
audio_signal *= 0.5  # Adjust the volume as needed
sample_rate = 44100  # Adjust as needed
audio_signal = np.resize(audio_signal, int(sample_rate * duration))

# Generate the binaural beats for the entire EEG recording
binaural_beats, sample_rate = generate_binaural_beats(base_frequency, beat_frequency, duration, audio_signal)

# Save the binaural beats to a file
sf.write("binaural_beats.wav", binaural_beats, sample_rate, 'PCM_16')

# Play the binaural beats in Colab
Audio("binaural_beats.wav")

[ ]

0s

#!pip install librosa

[ ]

0s

import numpy as np
import librosa
import soundfile as sf
from IPython.display import Audio

def generate_binaural_beats(base_frequency, beat_frequency, duration, audio_signal, volume=0.5, sample_rate=22050):
    t = np.linspace(0, duration, int(duration * sample_rate), endpoint=False)

    # Instead of sine waves, use audio_signal as the waveform
    audio_signal = np.resize(audio_signal, int(sample_rate * duration))

    left_channel = audio_signal
    right_channel = audio_signal

    stereo_audio = np.vstack((left_channel, right_channel)).T
    stereo_audio = librosa.util.normalize(stereo_audio) * volume

    return stereo_audio, sample_rate

# Set the desired frequencies and duration
base_frequency = 220  # Adjust the base frequency as needed
beat_frequency = 20  # Adjust the beat frequency as needed
duration = 10  # Adjust the duration (in seconds) as needed
sample_rate = 22050  # Must match the generator's default; needed for the resize below

# Normalize the power spectrum
normalized_psd = psd / np.max(psd)  # Normalize the power spectrum

# Reshape the epoch into a 1D array
audio_signal = normalized_psd.ravel()

# Rescale the audio signal to match the desired duration
audio_signal *= 0.5  # Adjust the volume as needed
audio_signal = np.resize(audio_signal, int(sample_rate * duration))

# Generate the binaural beats
binaural_beats, sample_rate = generate_binaural_beats(base_frequency, beat_frequency, duration, audio_signal)

# Save the binaural beats to a file
sf.write("binaural_beats.wav", binaural_beats, sample_rate, 'PCM_16')

# Play the binaural beats in Colab
Audio("binaural_beats.wav")


[ ]

0s

!pip install soundfile
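
The two Morlet-wavelet cells below assume variables `power` (wavelet coefficients) and `sfreq` that are not defined anywhere in this notebook. A minimal sketch of how they could be obtained with MNE (the frequency grid is an assumption, and `epochs` is the object created earlier):

import numpy as np
from mne.time_frequency import tfr_morlet

freqs = np.arange(2., 40., 2.)  # frequencies of interest in Hz
n_cycles = freqs / 2.           # a common choice of cycles per frequency
power = tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles,
                   return_itc=False, average=False)  # EpochsTFR with .data and .times
sfreq = epochs.info["sfreq"]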

[ ]

0s

import numpy as np
import sounddevice as sd
import matplotlib.pyplot as plt

# Assume you have the Morlet wavelet coefficients stored in the variable 'power'
# and the sampling frequency in the variable 'sfreq'
# and the carrier frequency is 440 Hz

# Generate the carrier signal
duration = len(power.times) / sfreq
t = np.linspace(0, duration, len(power.times))
carrier = np.sin(2 * np.pi * 440 * t)

# Modulate the carrier signal with the Morlet wavelet coefficients
# (broadcast the carrier along the trailing time axis of power.data)
modulated_signal = power.data * carrier

# Sum across every axis except time to obtain a mono audio waveform
audio = np.sum(modulated_signal, axis=tuple(range(modulated_signal.ndim - 1)))

# Normalize the audio waveform
audio /= np.max(np.abs(audio))

# Plot the average power
power.plot([0], baseline=(0.0, 0.1), mode="mean", vmin=None, vmax=None, title="Using Morlet wavelets and EpochsTFR", show=False)

# Show the plot
plt.show()

# Play the audio
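# (note: sounddevice plays through a local audio device, which Colab's hosted
# runtime does not provide; the soundfile + IPython Audio approach in the next
# cell is the more reliable option there)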
sd.play(audio, sfreq)
sd.wait()

[ ]

0s

import numpy as np
import soundfile as sf
import matplotlib.pyplot as plt

# Assume you have the Morlet wavelet coefficients stored in the variable 'power'
# and the sampling frequency in the variable 'sfreq'
# and the carrier frequency is 440 Hz

# Flatten the trailing axes so each epoch keeps shape (n_channels, n_freqs * n_times)
power_data = power.data.reshape(power.data.shape[0], power.data.shape[1], -1)

# Sum across the second axis to obtain one waveform per epoch
audio = np.sum(power_data, axis=1)

# Normalize the audio waveform
audio /= np.max(np.abs(audio))

# Save the audio waveform as a WAV file using soundfile
# (one channel per epoch; the sample rate must be an integer for soundfile)
sf.write('eeg_audio.wav', audio.T, int(sfreq))

# Plot the average power
#power.plot([0], baseline=(0.0, 0.1), mode="mean", vmin=None, vmax=None, title="Using Morlet wavelets and EpochsTFR", show=False)

# Show the plot
plt.show()

[ ]

0s

#Power density distribution

[ ]

0s

# Assuming you have the PSD data stored in a variable named 'psd'
psd2 = psd

[ ]

0s

normalized_psd = psd / np.max(psd)

works for audio


[ ]

0s

import numpy as np
import matplotlib.pyplot as plt
import soundfile as sf

# Assuming you have the PSD data stored in a variable named 'psd'
# Normalize the PSD
normalized_psd = psd2 / np.max(psd2)

# Define the duration and sample rate for the audio signal
duration = 10  # Duration of the audio signal in seconds
sample_rate = 44100  # Sample rate of the audio signal

# Reshape the normalized PSD into a 1D array
audio_signal = normalized_psd.ravel()

# Rescale the audio signal to match the desired duration
audio_signal *= 0.5  # Adjust the volume as needed
audio_signal = np.resize(audio_signal, int(sample_rate * duration))

# Save the audio signal to a file
sf.write("psd_audio2.wav", audio_signal, sample_rate, 'PCM_16')

# Plot the PSD
plt.plot(audio_signal)
plt.xlabel('Time')
plt.ylabel('Amplitude')
plt.title('PSD Audio Signal')
plt.show()

[ ]

0s

import numpy as np
import librosa
import soundfile as sf

# Set the desired frequencies and duration
base_frequency = 220  # Adjust the base frequency as needed
beat_frequency = 20  # Adjust the beat frequency as needed
duration = 10  # Adjust the duration (in seconds) as needed

# Generate the binaural beats using the PSD as the audio signal
audio_signal = psd

# Generate the binaural beats
binaural_beats, sample_rate = generate_binaural_beats(base_frequency, beat_frequency, duration, audio_signal)

# Save the binaural beats to a file
sf.write("binaural_beats.wav", binaural_beats, sample_rate, 'PCM_16')

# Play the binaural beats
Audio("binaural_beats.wav")

[ ]

0s


import mne
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt  # needed for plt.show() below

# Define the electrode names
ch_pos=['AF3', 'AF4', 'F7', 'F3', 'F4', 'F8', 'FC5', 'FC6', 'T7', 'T8', 'P7', 'P8', 'O1', 'O2']

# Load the electrode positions from the EEGlab repository
loc_file = 'https://raw.githubusercontent.com/sccn/eeglab/develop/sample_locs/Standard-10-20-Cap81.locs'
montage = mne.channels.read_custom_montage(loc_file)

# Get the electrode positions from the montage (a dict of channel name -> xyz)
positions = montage.get_positions()['ch_pos']

# Create a dictionary mapping our electrode names to positions
electrode_positions = {ch: positions[ch] for ch in ch_pos if ch in positions}

# Create a montage using the electrode positions
montage = mne.channels.make_dig_montage(ch_pos=electrode_positions, coord_frame='head')

# Plot the electrode positions on the head
montage.plot()
plt.show()



Load and preprocess the data, extract events and create Epochs.


[ ]

0s

import mne

# Load raw data
raw = mne.io.read_raw("/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/Guinea-Bissaufif/signal-3_raw.fif")

# Downsample the data to 256 Hz
raw.resample(256.)

# Apply high-pass filter
raw.filter(l_freq=2., h_freq=None)

# Set the reference to average reference
raw.set_eeg_reference("average")

# Pick EEG channels only
picks = mne.pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False)
raw = raw.pick(picks)

# Set the channel locations
montage = mne.channels.make_standard_montage('standard_1005')
raw.set_montage(montage)

# Get the channel names from the raw data
channel_names = raw.info['ch_names']

# Print the original channel names
print("Original channel names:")
print(channel_names)

# Define the number of desired channels
num_desired_channels = 35  # Specify the number of desired channels

# Define the new channel names
new_channel_names = ['Channel {}'.format(i+1) for i in range(num_desired_channels)]

# Rename channels in the raw data
channel_mapping = {channel_names[i]: new_channel_names[i] for i in range(num_desired_channels)}
raw.rename_channels(channel_mapping)

# Get the updated channel names
updated_channel_names = raw.info['ch_names']

# Print the updated channel names
print("Updated channel names:")
print(updated_channel_names)

# Create a custom montage based on the updated channel names
ch_pos = {ch_name: raw.info['chs'][i]['loc'][:3] for i, ch_name in enumerate(updated_channel_names)}
montage = mne.channels.make_dig_montage(ch_pos=ch_pos)

raw.set_montage(montage)

# Continue with your analysis using the updated data

# Save the updated data
# raw.save('/path/to/save/updated_data.fif', overwrite=True)

[ ]

17m

fname = "/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/Guinea-Bissaufif/signal-1_raw.fif" 

[ ]

17m

epochs = mne.read_epochs(fname)



[ ]

17m

import mne

# Load raw data
raw = mne.io.read_raw("/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/Guinea-Bissaufif/signal-1_raw.fif.gz")

# Get the channel names from the raw data
channel_names = raw.info['ch_names']

# Print the channel names
print("Original channel names:")
print(channel_names)

# Define the number of and name of the desired channels
num_desired_channels = 36  # Specify the number of desired channels
desired_channel_names = [f"Desired Channel {i+1}" for i in range(num_desired_channels)]  # Generate the desired channel names

# Select the desired channels based on the number
selected_channels = channel_names[:num_desired_channels]

# Create a dictionary for channel name mapping
channel_mapping = dict(zip(selected_channels, desired_channel_names))

# Rename channels in the raw data
raw.rename_channels(channel_mapping)

# Get the updated channel names
updated_channel_names = raw.info['ch_names']

# Print the updated channel names
print("Updated channel names:")
print(updated_channel_names)

# Create a custom montage based on the updated channel names:
# a DigMontage has no mutable ch_names/pos attributes, so rebuild it by
# re-keying positions from the standard montage onto the new names
standard = mne.channels.make_standard_montage('standard_1005')
std_pos = standard.get_positions()['ch_pos']
ch_pos = {name: pos for name, pos in
          zip(updated_channel_names, list(std_pos.values())[:num_desired_channels])}
montage = mne.channels.make_dig_montage(ch_pos=ch_pos, coord_frame='head')

raw.set_montage(montage)

# Save the updated raw data file
output_raw_file = "/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/EEGs_Guinea-Bissau-FIF/updated_signal-1.fif.gz"
raw.save(output_raw_file, overwrite=True)

# Save the montage file
output_montage_file = "/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/EEGs_Guinea-Bissau-FIF/updated_montage.fif"
montage.save(output_montage_file)

# Continue with the rest of your analysis
# Perform macro analysis, micro analysis, and mid analysis as required

change channel names


First of all, we should inspect our epochs and have a look at the data. As you will see, the swallowing condition contains disproportionately more noise than the control condition.


[ ]

17m

scalings = {"eeg":2e-5}
epochs[("1", "2")].plot(title="Condition: Control",
                        scalings=scalings, n_epochs=1)
epochs[("41", "42")].plot(title="Condition: Swallow",
                          scalings=scalings, n_epochs=1);

Now we can fit an ICA to each type of noise and check out what each type of noise looks like. This will give us a list of all the different conditions with the 5 largest components displayed in detail. After inspection, you might find that for all conditions, not all important components are made up of the respective noise condition; often EOG/blink noise is still the most prevalent (because of course people also blinked and moved their eyes during the other conditions). However, you can definitely find some really distinctive components for each type of noise.


[ ]

17m

# initialize ICA
ica = ICA(n_components=20) # We only look at few components to save time

# define the names of our condition and the according trials:
names = ["control conditon", "move your head", "move your jaw",
         "blink ", "swallow", "clench your teeth"]
s_events = [("1", "2"), ("11", "12"), ("21", "22"),
          ("31", "32"), ("41", "42"), ("51", "52")]

plot_dict = {}
for name, event in zip(names, s_events):

  # we fit the ICA on trials for one noise condition specifically
  ica.fit(epochs[event], verbose=0)
  
  # investigate the first 5 ICs from each noise condition
  print("\n\n\nCONDITION: " + name + "\n\n\n")
  plot_dict[name] = ica.plot_properties(epochs[event], np.arange(5), verbose=0);

Let’s take a look at these interesting ICs above. Note that for most conditions, EOG movements/Blinks are still the most important/disruptive components. But usually, the specific noise conditions are clearly distinguishable in the first 4 components.

Below, I’ve collected some components (from different subjects) that I found to be very typical of a specific noise pattern:


Typical Alternating Current/ Line noise:


Typical EOG/Eye Movement


Typical Blinking:


Extensive Blinking:


Extensive Head Movement:


Extensive Jaw Movement:


Extensive Swallowing:


Extensive Tooth clenching:
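
(The component images collected in the original notebook are not included in this export; only their captions remain above. For reference, a minimal sketch of how such components can be displayed with MNE, assuming an `ica` fitted on the epochs as above; the component indices are placeholders:)

# show the topography and detailed properties of a few suspect components
ica.plot_components(picks=[0, 1, 2])
ica.plot_properties(epochs, picks=[0, 1, 2], verbose=0)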


PCA on ICA

Here, we use one of MNE-Python's example datasets to further investigate ICA (and to fool around a little). Can PCA and clustering on the ICA's component topology or power spectrum help us identify and select specific components for exclusion?

The following code is purely experimental and is not part of a conventional ICA analysis.


We start by loading the data.


[ ]

17m

# load the sample data from MNE datasets
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
                                    'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)

# crop data to 1 minute of recording time
raw.crop(tmax=60.)

# pick EEG channels only
picks = mne.pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False)
raw = raw.pick(picks)

# load the data and apply a highpass filter above 1 Hz
raw.load_data().filter(l_freq=2., h_freq=None)

Now we fit an ICA to the data. Two important features for spotting ICs are the topology and the frequency spectrum of a specific component. We can later use them to investigate features in component space.


[ ]

17m

# define and fit the ICA
ica = ICA(random_state=30042021)
ica.fit(raw)

# get the power spectrum of each component
ica_data = ica._transform_raw(raw, 0, len(raw.times))
psd, freqs = psd_array_welch(ica_data, raw.info["sfreq"], n_fft=64)

# get the topology of each component
topo = ica.get_components().T

Now let's see how the ICs can be differentiated based on either their power spectrum or their topology. For this, we apply a PCA over the extracted ICA features, allowing us to spatially separate and plot the ICs along the 2 most important axes:


[ ]

17m

fig, axes = plt.subplots(1, 2, figsize=(20, 10))
for name, data, ax in [("PSD", psd, axes[0]), ("Topology", topo, axes[1])]:
  # extract the main axes of our data for plotting purposes
  vis_pca = PCA(n_components=2)
  scatter_data = vis_pca.fit_transform(StandardScaler().fit_transform(data))

  # plot the ICs across the main separation axes
  ax.scatter(scatter_data[:, 0], scatter_data[:, 1])  # , c=[scatter_data[:, 2]])
  ax.set_xlabel("1st component")
  ax.set_ylabel("2nd component")

  # plot their labels to recognize each IC
  for idx, coords in enumerate(scatter_data[:, :2]):
    ax.annotate(ica._ica_names[idx], coords)

  ax.set_title("Spatially separating ICA components based on " + name)

#plt.colorbar(label="3rd component")

The pattern on the left looks really interesting. Theoretically, this would be something that we are looking for, due to its clear clustering. However, if we inspect the clusters and their components (shown below), we can't find any cluster which consists purely of noisy components. There are some noisy IC clusters visible, but they aren't very clearly outlined. It looks like noise/not-noise is not among the most important differentiation factors of our ICs' topologies and frequency spectra. Also, both of these patterns will look completely different if we fit the ICA on the raw data from above instead of the MNE example data, meaning that even if there was something interesting to find here, these results would probably not generalize to a different EEG recording.

We can still combine the two features and see whether we can fit meaningful clusters on top of these ICs.


[ ]

17m

# select the input for the cluster
cluster_input = np.hstack([psd, topo])
#cluster_input = psd
#cluster_input = ica_data

# set the number of clusters
n_clusters = 3

# concatenate the input and fit the cluster
cluster = KMeans(n_clusters).fit(StandardScaler().fit_transform(cluster_input))
#cluster = OPTICS().fit(StandardScaler().fit_transform(cluster_input))

Now let's do a PCA again to spatially separate them and plot the different types of ICs our clustering algorithm has found.


[ ]

17m

# extract the main axes of our data for plotting purposes
vis_pca = PCA(n_components=2)
scatter_data = vis_pca.fit_transform(StandardScaler().fit_transform(cluster_input))

# Number of clusters in labels
labels = cluster.labels_

# Define the colors for our clusters
unique_labels = set(labels)
colors = [plt.cm.Spectral(each)
          for each in np.linspace(0, 1, len(unique_labels))]

fig = plt.figure(figsize=(10,10))
# plot all cluster classes in a different color
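# (the remainder of this cell was cut off in the export; a minimal
# reconstruction of the per-cluster scatter plot, assuming the variables
# defined above)
for k, col in zip(unique_labels, colors):
    class_mask = labels == k
    xy = scatter_data[class_mask]
    plt.scatter(xy[:, 0], xy[:, 1], color=tuple(col), edgecolors="k", s=100)

    # label each IC with its name, as in the PCA plots above
    for idx in np.where(class_mask)[0]:
        plt.annotate(ica._ica_names[idx], scatter_data[idx, :2])

plt.xlabel("1st component")
plt.ylabel("2nd component")
plt.title("Clustering ICA components (k = {})".format(n_clusters))
plt.show()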
Independent Component Analysis for EEG
In this notebook, we will look into some specific aspects of EEG artifical removel via Independent Component Analysis. We look at specific noise patterns typically ocurring during EEG recording and see if we can automatically cluster these components for easier detection.

Imports
First we have to install MNE-Python on Colab's runtime...

[1]
7s
!pip install mne -q
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.7/7.7 MB 88.4 MB/s eta 0:00:00
... and then import all the libraries that we'll need for this session.

[2]
2s
import os

import numpy as np

import mne
from mne.time_frequency import psd_array_welch
from mne.preprocessing import (ICA, create_eog_epochs, create_ecg_epochs,
                               corrmap)

from sklearn.cluster import AgglomerativeClustering, KMeans, DBSCAN, OPTICS, SpectralClustering

Investigating different types of noise
Along with some fellow students (who did all the hard work - managing the experiment and recording), we gathered some data that should allow us to have a more systematic look at EEG noise: We gave a simple visual task, which was heavily distorted with specific types of noise:

In all trials, participants were asked to look at an image for 15 seconds. But beside a few control trials, they were also instructed to induce the following noise while looking at the images:

Moving their head
Moving their jaw
Blinking
Swallowing
Clenching their teeth
If we now only look at recorded Epochs with one of these specific noise types, we can investigate and select typical Independent Components, nicely representing the corresponding type of noise.

As a first step, we will use pydrive to load the recorded data from Google Drive.

[3]
0s
# to connect with Google drive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
Copy the files from Google Drive into Google Colab's runtime.

Dont't worry:

Nothing of this will be saved to your personal Google Drive or Personal Computer. All of this will only temporarily exist in Google Colab's hosted runtime.

[4]
5m
from google.colab import drive
import pandas as pd

# Mount Google Drive
drive.mount('/content/drive')
Mounted at /content/drive
[5]
0s
# authenticate your google account to get access to google cloud
#auth.authenticate_user()
##gauth = GoogleAuth()
#gauth.credentials = GoogleCredentials.get_application_default()
#drive = GoogleDrive(gauth)

# identify all the files in our dataset folder
#folder_id = "1w34G5mhUDebZ_Wi7wKAWo78mLuDumCAB"
#file_list = drive.ListFile({'q': "'{}' in parents and trashed=false".format(folder_id)}).GetList()

# load the data into Google Colab
#for i, drive_file in enumerate(sorted(file_list, key = lambda x: x['title']), start=1):
#  if i == 1:  # only use the 1st recording to save time
##    file_name = 'sub_{}.fif'.format(i)
 #   print('Downloading {} from GDrive ({}/{})'.format(file_name, i, len(file_list)))
 #   drive_file.GetContentFile(file_name)

    # Note: Sub 2 and 3 have something super weird going on. All components are
    # largely overshadowed by something from the back left shoulder.
    # Broken electrode/ground?
Guinea bissau

[6]
0s
#!wget https://zenodo.org/record/1252141/files/EEGs_Guinea-Bissau.zip -P /content/drive/MyDrive/Colab Notebooks/EEG\ data


[7]
0s
#!wget https://zenodo.org/record/1252141/files/metadata_guineabissau.csv 
extract

[8]
0s
#import os
#import zipfile

# Specify the path to the downloaded zip file
#zip_file = "/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/EEGs_Guinea-Bissau.zip"

# Specify the directory to extract the files
#extract_dir = "/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau"

# Extract the contents of the zip file
#with zipfile.ZipFile(zip_file, 'r') as zip_ref:
 #   zip_ref.extractall(extract_dir)

# Get the path to the extracted folder
#extracted_folder = os.path.join(extract_dir, "EEGs_Guinea-Bissau")

# Continue with the rest of your analysis using the extracted files

[9]
0s
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
Folder Location
[10]
0s
#cd /content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau
[11]
3s
cd /content/drive/MyDrive/Colab-Notebooks/EEG-data
/content/drive/MyDrive/Colab-Notebooks/EEG-data
[12]
0s
meta_df=pd.read_csv('/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/metadata_guineabissau.csv')
meta_df.head()

[13]
0s
#now i need to seprate Epilepsy vs Control subjects
EP_sub=meta_df['subject.id'][meta_df['Group']=='Epilepsy']
CT_sub=meta_df['subject.id'][meta_df['Group']=='Control']
[14]
51s
#read csv files
Epilepsy=[pd.read_csv('EEGs_Guinea-Bissau/signal-{}.csv.gz'.format(i), compression='gzip') for i in  EP_sub]
Control=[pd.read_csv('EEGs_Guinea-Bissau/signal-{}.csv.gz'.format(i), compression='gzip') for i in  CT_sub]
[15]
0s
Epilepsy[0].head()

[16]
0s
meta_df=pd.read_csv('/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/metadata_guineabissau.csv')
meta_df.head()

[17]
0s
#remove non eeg channels
Epilepsy=[i.iloc[:,1:15] for i in  Epilepsy]
Control=[i.iloc[:,1:15] for i in  Control]
[18]
3s
import pandas as pd
import mne

EP_sub = [1, 2, 3]  # Replace with your desired subject IDs
columns_to_drop = ['Unnamed: 0', 'GYROY', 'GYROX', 'COUNTER', 'INTERPOLATED', 'GYROX', 'GYROY', 'RAW_CQ', 'CQ_CMS', 'CQ_F7', 'CQ_T7', 'CQ_O2', 'CQ_FC6', 'CQ_AF4', 'CQ_F3', 'CQ_P7', 'CQ_P8', 'CQ_F4', 'CQ_AF3', 'CQ_FC5', 'CQ_O1', 'CQ_T8', 'CQ_F8', 'CQ_DRL']

# Load and plot raw EEG signal for each subject
for subject_id in EP_sub:
    filename = 'EEGs_Guinea-Bissau/signal-{}.csv.gz'.format(subject_id)
    data = pd.read_csv(filename, compression='gzip')
    
    # Drop specified columns
    data = data.drop(columns_to_drop, axis=1)
    
    # Get the column names (channel names)
    channel_names = data.columns.tolist()
    
    # Convert DataFrame to MNE Raw object
    info = mne.create_info(ch_names=channel_names, sfreq=250, ch_types='eeg')
    raw = mne.io.RawArray(data.T, info)
    
    # Plot raw EEG signal
    raw.plot()


convert to mne object

[19]
0s
import mne
def convertDF2MNE(sub):
    info = mne.create_info(list(sub.columns), ch_types=['eeg'] * len(sub.columns), sfreq=128)
    info.set_montage('standard_1020')
    data=mne.io.RawArray(sub.T, info)
    data.set_eeg_reference()
    data.filter(l_freq=0.1,h_freq=45)
    epochs=mne.make_fixed_length_epochs(data,duration=5,overlap=1)
    epochs=epochs.drop_bad()
    
    return epochs
convert to mne object change for lick sound

[20]
0s
import mne
def convertDF2MNE(sub):
    info = mne.create_info(list(sub.columns), ch_types=['eeg'] * len(sub.columns), sfreq=128)
    info.set_montage('standard_1020')
    data=mne.io.RawArray(sub.T, info)
    data.set_eeg_reference()
    data.filter(l_freq=0.1,h_freq=45)
    epochs=mne.make_fixed_length_epochs(data,duration=5,overlap=1)
    epochs=epochs.drop_bad()
    
    return epochs
[21]
7s
%%capture
#Convert each dataframe to mne object
Epilepsy=[convertDF2MNE(i) for i in  Epilepsy]
Control=[convertDF2MNE(i) for i in  Control]
[22]
1s
%%capture
#concatenate the epochs
Epilepsy_epochs=mne.concatenate_epochs(Epilepsy)
Control_epochs=mne.concatenate_epochs(Control)
[23]
0s
print(Epilepsy_epochs)
<EpochsArray |  3995 events (all good), 0 – 4.99219 s, baseline off, ~273.1 MB, data loaded,
 '1': 3995>
[24]
17s
import matplotlib.pyplot as plt

# Plot the image visualization for all channels and epochs
Epilepsy_epochs.plot_image(combine='std')

# Show the plot
plt.show()


create labels groups

[25]
0s
Epilepsy_group=np.concatenate([[i]*len(Epilepsy[i]) for i in range(len(Epilepsy))])#create a list of list where each sub list corresponds to subject_no
Control_group=np.concatenate([[i]*len(Control[i]) for i in range(len(Control))])#create a list of list where each sub list corresponds to subject_no

Epilepsy_label=np.concatenate([[0]*len(Epilepsy[i]) for i in range(len(Epilepsy))])
Control_label=np.concatenate([[1]*len(Control[i]) for i in range(len(Control))])
[26]
0s
Epilepsy_group.shape,Control_group.shape,Epilepsy_label.shape,Control_label.shape
((3995,), (3461,), (3995,), (3461,))
[27]
0s
#combine data
data=mne.concatenate_epochs([Epilepsy_epochs,Control_epochs])
group=np.concatenate((Epilepsy_group,Control_group))
label=np.concatenate((Epilepsy_label,Control_label))
print(len(data),len(group),len(label))
Not setting metadata
7456 matching events found
No baseline correction applied
7456 7456 7456
Get the number of channels from the shape of the data array

[28]
0s
import pandas as pd

# Load the CSV file using Pandas
csv_file = '/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/EEGs_Guinea-Bissau/signal-1.csv.gz'
data_frame = pd.read_csv(csv_file)

# Extract the data from the DataFrame
data = data_frame.values

# Get the number of channels from the shape of the data array
num_channels = data.shape[1]

print("Number of channels:", num_channels)

Number of channels: 36
Get the column names (channel names)

[29]
0s
import pandas as pd

# Load the CSV file using Pandas
csv_file = '/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/EEGs_Guinea-Bissau/signal-1.csv.gz'
data_frame = pd.read_csv(csv_file)

# Get the column names (channel names)
channel_names = data_frame.columns.tolist()

# Print the channel names
print(channel_names)

['Unnamed: 0', 'AF3', 'AF4', 'F3', 'F4', 'F7', 'F8', 'FC5', 'FC6', 'O1', 'O2', 'P7', 'P8', 'T7', 'T8', 'COUNTER', 'INTERPOLATED', 'GYROX', 'GYROY', 'RAW_CQ', 'CQ_CMS', 'CQ_F7', 'CQ_T7', 'CQ_O2', 'CQ_FC6', 'CQ_AF4', 'CQ_F3', 'CQ_P7', 'CQ_P8', 'CQ_F4', 'CQ_AF3', 'CQ_FC5', 'CQ_O1', 'CQ_T8', 'CQ_F8', 'CQ_DRL']
Drop the 'Unnamed: 0' column & plot
[30]
2s
import pandas as pd
import matplotlib.pyplot as plt

# Load the CSV file using Pandas
csv_file = '/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/EEGs_Guinea-Bissau/signal-1.csv.gz'
data_frame = pd.read_csv(csv_file)

# Drop the 'Unnamed: 0' column
columns_to_drop = ['Unnamed: 0', 'GYROY', 'GYROX', 'COUNTER', 'INTERPOLATED', 'GYROX', 'GYROY', 'RAW_CQ', 'CQ_CMS', 'CQ_F7', 'CQ_T7', 'CQ_O2', 'CQ_FC6', 'CQ_AF4', 'CQ_F3', 'CQ_P7', 'CQ_P8', 'CQ_F4', 'CQ_AF3', 'CQ_FC5', 'CQ_O1', 'CQ_T8', 'CQ_F8', 'CQ_DRL']
data_frame = data_frame.drop(columns_to_drop, axis=1)

# Get the column names (channel names)
channel_names = data_frame.columns.tolist()

# Plot the EEG data
data_frame.plot(figsize=(12, 6))
plt.xlabel('Time')
plt.ylabel('Amplitude')
plt.legend(channel_names)
plt.show()


[31]
1s
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

# Load the CSV file using Pandas
csv_file = '/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/EEGs_Guinea-Bissau/signal-1.csv.gz'
data_frame = pd.read_csv(csv_file)

# Drop the 'Unnamed: 0' column
columns_to_drop = ['Unnamed: 0', 'GYROY', 'GYROX']
data_frame = data_frame.drop(columns_to_drop, axis=1)

# Get the column names (channel names)
channel_names = data_frame.columns.tolist()

# Transpose the DataFrame to make each column represent a channel
transposed_df = data_frame.transpose()

# Get the column names (channel names)
channel_names = transposed_df.columns.tolist()

# Extract the values from the DataFrame
data = transposed_df.values

# Create a meshgrid for the x and y coordinates
time = np.arange(data.shape[0])
amplitude = np.arange(data.shape[1])
amp_mesh, time_mesh = np.meshgrid(amplitude, time)

# Create a figure and axes for the 3D plot
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')

# Plot the surface
surf = ax.plot_surface(time_mesh, amp_mesh, data, cmap='viridis')

# Set labels for the axes
ax.set_xlabel('Time')
ax.set_ylabel('Channel ')
ax.set_zlabel('Amplitude')

plt.show()



[32]
2s
import mne

# Load the raw data
raw = mne.io.read_raw("/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/Guinea-Bissaufif/signal-3_raw.fif.gz")


# Drop the 'Unnamed: 0' column
data_frame = pd.read_csv(csv_file)
data_frame = data_frame.drop('Unnamed: 0', axis=1)

# Plot the EEG data
raw.plot(duration=100, scalings='auto')


render as audio ASAP appears to work

[77]
0s
import mne
import numpy as np
import soundfile as sf

# Load the raw data with preload=True
raw = mne.io.read_raw("/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/Guinea-Bissaufif/signal-3_raw.fif.gz", preload=True)

# Apply high-pass filter
highpass_freq = 1.0  # Adjust as needed
raw.filter(l_freq=highpass_freq, h_freq=None)

# Get the audio signal from the raw data
audio_signal = raw.get_data().flatten()

# Set the parameters for FM synthesis
carrier_frequency = 220  # Adjust the carrier frequency as needed
modulation_index = 4  # Adjust the modulation index as needed

# Generate the FM audio signal
time = np.arange(len(audio_signal)) / raw.info['sfreq']
modulation_signal = modulation_index * np.sin(2 * np.pi * time)
audio_signal_fm = np.sin(2 * np.pi * carrier_frequency * time + modulation_signal)

# Normalize the audio signal
normalized_audio = audio_signal_fm / np.max(np.abs(audio_signal_fm))

# Rescale the audio signal to match the duration of the EEG recording
duration = raw.times[-1]  # Duration of the EEG recording in seconds
normalized_audio = normalized_audio[:int(duration * raw.info['sfreq'])]

# Save the audio signal to a WAV file
sf.write("output_audio.wav", normalized_audio, int(raw.info['sfreq']))

# Play the audio
import IPython.display as ipd
ipd.Audio("output_audio.wav")


[78]
3s
import mne
import numpy as np
import soundfile as sf
import pandas as pd
import matplotlib.pyplot as plt

# Drop the 'Unnamed: 0' column from the DataFrame
columns_to_drop = ['Unnamed: 0', 'GYROY', 'GYROX', 'COUNTER', 'INTERPOLATED', 'GYROX', 'GYROY', 'RAW_CQ', 'CQ_CMS', 'CQ_F7', 'CQ_T7', 'CQ_O2', 'CQ_FC6', 'CQ_AF4', 'CQ_F3', 'CQ_P7', 'CQ_P8', 'CQ_F4', 'CQ_AF3', 'CQ_FC5', 'CQ_O1', 'CQ_T8', 'CQ_F8', 'CQ_DRL']
data_frame = pd.read_csv(csv_file)
data_frame = data_frame.drop(columns_to_drop, axis=1)

# Get the column names (channel names)
channel_names = data_frame.columns.tolist()

# Plot the EEG data
data_frame.plot(figsize=(12, 6))
plt.xlabel('Time')
plt.ylabel('Amplitude')
plt.legend(channel_names)
plt.show()

# Load the raw data with preload=True
raw = mne.io.read_raw("/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/Guinea-Bissaufif/signal-3_raw.fif.gz", preload=True)

# Apply high-pass filter
highpass_freq = 1.0  # Adjust as needed
raw.filter(l_freq=highpass_freq, h_freq=None)

# Get the audio signal from the raw data
audio_signal = raw.get_data().flatten()

# Set the parameters for FM synthesis
carrier_frequency = 220  # Adjust the carrier frequency as needed
modulation_index = 4  # Adjust the modulation index as needed

# Generate the FM audio signal
time = np.arange(len(audio_signal)) / raw.info['sfreq']
modulation_signal = modulation_index * np.sin(2 * np.pi * time)
audio_signal_fm = np.sin(2 * np.pi * carrier_frequency * time + modulation_signal)

# Normalize the audio signal
normalized_audio = audio_signal_fm / np.max(np.abs(audio_signal_fm))

# Rescale the audio signal to match the duration of the EEG recording
duration = raw.times[-1]  # Duration of the EEG recording in seconds
normalized_audio = normalized_audio[:int(duration * raw.info['sfreq'])]

# Save the audio signal to a WAV file
sf.write("output_audio.wav", normalized_audio, int(raw.info['sfreq']))

# Play the audio
import IPython.display as ipd
ipd.Audio("output_audio.wav")


[79]
1s
import mne
import numpy as np
import soundfile as sf
import pandas as pd
import matplotlib.pyplot as plt

# Drop the unnecessary columns from the DataFrame
columns_to_drop = ['Unnamed: 0', 'GYROY', 'GYROX', 'COUNTER', 'INTERPOLATED', 'GYROX', 'GYROY', 'RAW_CQ', 'CQ_CMS', 'CQ_F7', 'CQ_T7', 'CQ_O2', 'CQ_FC6', 'CQ_AF4', 'CQ_F3', 'CQ_P7', 'CQ_P8', 'CQ_F4', 'CQ_AF3', 'CQ_FC5', 'CQ_O1', 'CQ_T8', 'CQ_F8', 'CQ_DRL']
data_frame = pd.read_csv(csv_file)
data_frame = data_frame.drop(columns_to_drop, axis=1)

# Load the raw data with preload=True
raw = mne.io.read_raw("/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/Guinea-Bissaufif/signal-3_raw.fif.gz", preload=True)

# Apply high-pass filter
highpass_freq = 1.0  # Adjust as needed
raw.filter(l_freq=highpass_freq, h_freq=None)

# Get the EEG data
eeg_data = raw.get_data()

# Compute the spectrogram
sfreq = raw.info['sfreq']
n_fft = int(sfreq * 2)  # Adjust the window length as needed
n_overlap = int(n_fft / 2)  # Adjust the overlap as needed
frequencies, times, spectrogram = plt.specgram(eeg_data[0], NFFT=n_fft, Fs=sfreq, noverlap=n_overlap)

# Plot the spectrogram
plt.imshow(10 * np.log10(spectrogram), aspect='auto', cmap='inferno', origin='lower')
plt.xlabel('Time')
plt.ylabel('Frequency')
plt.title('EEG Spectrogram')
plt.colorbar(label='Power Spectral Density (dB/Hz)')
plt.show()

# Save the spectrogram as an image file
plt.imsave('spectrogram.png', 10 * np.log10(spectrogram), cmap='inferno')

# Convert the spectrogram to an audio signal
audio_signal = np.sum(eeg_data, axis=0)

# Normalize the audio signal
normalized_audio = audio_signal / np.max(np.abs(audio_signal))

# Save the audio signal to a WAV file
sf.write("output_audio.wav", normalized_audio, int(raw.info['sfreq']))

# Play the audio
import IPython.display as ipd
ipd.Audio("output_audio.wav")


THIS WORKS RENDERS THE AUDIO PERFECTLY
[83]
1s

# Drop the unnecessary columns from the DataFrame
columns_to_drop = ['Unnamed: 0', 'GYROY', 'GYROX', 'COUNTER', 'INTERPOLATED', 'GYROX', 'GYROY', 'RAW_CQ', 'CQ_CMS', 'CQ_F7', 'CQ_T7', 'CQ_O2', 'CQ_FC6', 'CQ_AF4', 'CQ_F3', 'CQ_P7', 'CQ_P8', 'CQ_F4', 'CQ_AF3', 'CQ_FC5', 'CQ_O1', 'CQ_T8', 'CQ_F8', 'CQ_DRL']
data_frame = pd.read_csv(csv_file)
data_frame = data_frame.drop(columns_to_drop, axis=1)

# Apply high-pass filter
highpass_freq = 6.5  # Adjust as needed
raw.filter(l_freq=highpass_freq, h_freq=None)



[33]
0s
#import mne
#import pandas as pd

# Define the path to the locs file
#locs_file = '/content/Standard-10-10-Cap33.locs'

# Read the locs file into a DataFrame
#locs_data = pd.read_csv(locs_file, sep='\t', skiprows=1, usecols=[1, 2, 3], names=['x', 'y', 'label'])

# Create a dictionary of electrode positions
#electrode_positions = {ch: [x, y] for ch, x, y in zip(locs_data['label'], locs_data['x'], locs_data['y'])}

# Print the electrode positions
#for ch, pos in electrode_positions.items():
#    print(f'{ch}: {pos}')

[34]
10m
import numpy as np
import matplotlib.pyplot as plt

import mne
from mne.minimum_norm import apply_inverse_epochs, read_inverse_operator
from mne.datasets import sample
from pathlib import Path

print(__doc__)

# Load the CSV file using Pandas
csv_file = '/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/EEGs_Guinea-Bissau/signal-1.csv.gz'
data_frame = pd.read_csv(csv_file)

# Drop the 'Unnamed: 0' column and other unnecessary columns
columns_to_drop = ['Unnamed: 0', 'GYROY', 'GYROX', 'COUNTER', 'INTERPOLATED', 'GYROX', 'GYROY', 'RAW_CQ', 'CQ_CMS', 'CQ_F7', 'CQ_T7', 'CQ_O2', 'CQ_FC6', 'CQ_AF4', 'CQ_F3', 'CQ_P7', 'CQ_P8', 'CQ_F4', 'CQ_AF3', 'CQ_FC5', 'CQ_O1', 'CQ_T8', 'CQ_F8', 'CQ_DRL']
data_frame = data_frame.drop(columns_to_drop, axis=1)

# Convert the data to numpy array
data = data_frame.values.T  # Transpose the data to have shape (n_channels, n_samples)

# Create an MNE Info object
info = mne.create_info(ch_names=data_frame.columns.tolist(), sfreq=250, ch_types='eeg')

# Create an MNE Raw object
raw = mne.io.RawArray(data, info)

# Set up the paths and file names
data_path = sample.data_path()
meg_path = data_path / "MEG" / "sample"
fname_inv = meg_path / "sample_audvis-meg-oct-6-meg-inv.fif"
label_name = "Aud-lh"
fname_label = meg_path / "labels" / f"{label_name}.label"

event_id, tmin, tmax = 1, -0.2, 0.5

# Using the same inverse operator when inspecting single trials Vs. evoked
snr = 3.0  # Standard assumption for average data but using it for single trial
lambda2 = 1.0 / snr**2

method = "dSPM"  # use dSPM method (could also be MNE or sLORETA)

# Load inverse operator and label
inverse_operator = read_inverse_operator(fname_inv)
label = mne.read_label(fname_label)

# Create epochs from the raw data
epochs = mne.Epochs(
    raw,
    events=np.array([[0, 0, event_id]]),
    event_id=event_id,
    tmin=tmin,
    tmax=tmax,
    baseline=None,
    preload=True,
)

# Compute inverse solution and stcs for each epoch
stcs = apply_inverse_epochs(
    epochs,
    inverse_operator,
    lambda2,
    method,
    label,
    pick_ori="normal",
    nave=1,
)

# Mean across trials but not across vertices in label
mean_stc = sum(stcs) / len(stcs)

# compute sign flip to avoid signal cancellation when averaging signed values
flip = mne.label_sign_flip(label, inverse_operator["src"])

label_mean = np.mean(mean_stc.data, axis=0)


Creates epochs

[35]
2s
import pandas as pd
import mne
import matplotlib.pyplot as plt

# Load the compressed CSV file using Pandas
csv_file = '/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/EEGs_Guinea-Bissau/signal-1.csv.gz'
data_frame = pd.read_csv(csv_file, compression='gzip')

# Drop the unnecessary columns
columns_to_drop = ['Unnamed: 0', 'GYROY', 'GYROX']
data_frame = data_frame.drop(columns_to_drop, axis=1)



# Get the column names (channel names)
channel_names = data_frame.columns.tolist()

# Convert the data to numpy array
data = data_frame.values.T  # Transpose the data to have shape (n_channels, n_samples)

# Create an MNE Info object
sfreq = 1000  # Sampling frequency in Hz
info = mne.create_info(channel_names, sfreq, ch_types='eeg')

# Create MNE RawArray object
raw = mne.io.RawArray(data, info)

# Set up events (assuming single continuous recording)
events = mne.make_fixed_length_events(raw, duration=1.0)  # 1 second duration epochs

# Create MNE Epochs object
epochs = mne.Epochs(raw, events, tmin=0, tmax=1.0, baseline=None, preload=True)

# Plot the epochs
epochs.plot(scalings='auto', n_epochs=30)

# Show the plot
plt.show()


[ ]
0s

epoch for just on recording
[36]
0s
import pandas as pd
import mne
import numpy as np
import matplotlib.pyplot as plt

# Select the EEG data from the first file in the Epilepsy object
data_frame = Epilepsy[0]

# Select channel AF3
channel_name = '1'
channel_data = data_frame[channel_name].values

# Create an MNE Info object for a single channel
sfreq = 1000  # Sampling frequency in Hz
info = mne.create_info([channel_name], sfreq, ch_types='eeg')

# Create an MNE RawArray object for the selected channel data
raw = mne.io.RawArray([channel_data], info)

# Set up events (assuming single continuous recording)
event_id = 1  # Event ID for the epochs
event_duration = 1.0  # Duration of each epoch in seconds
events = mne.make_fixed_length_events(raw, duration=event_duration, event_id=event_id)

# Create MNE Epochs object for the selected channel data
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=0, tmax=event_duration, baseline=None, preload=True)

# Select epoch 5 from the Epochs object
epoch_index = 4  # Zero-based index (epoch 5 corresponds to index 4)
epoch5AF3 = epochs[epoch_index]

# Plot the epoch
epoch5AF3.plot()

# Show the plot
plt.show()


[37]
0s
import pandas as pd
import mne
import numpy as np
import matplotlib.pyplot as plt

# Select the EEG data from the first file in the Epilepsy object
data_frame = Epilepsy[0]

# Convert the data to numpy array
data = data_frame.values.T  # Transpose the data to have shape (n_channels, n_samples)

# Create an MNE Info object
sfreq = 1000  # Sampling frequency in Hz
info = mne.create_info(n_channels=data.shape[0], sfreq=sfreq, ch_types='eeg')

# Create MNE RawArray object
raw = mne.io.RawArray(data, info)

# Set up events (assuming single continuous recording)
event_id = 1  # Event ID for the epochs
event_duration = 1.0  # Duration of each epoch in seconds
events = mne.make_fixed_length_events(raw, id=event_id, duration=event_duration)  # the parameter is named id, not event_id

# Create MNE Epochs object
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=0, tmax=event_duration, baseline=None, preload=True)

# Inspect the Epochs object (just displaying it does not plot; see the next cells for plots)
epochs


[38]
0s
import pandas as pd
import mne
import numpy as np
import matplotlib.pyplot as plt

# Despite the original comment, this cell needs an MNE object rather than a
# DataFrame: reuse the Epochs object created above
source_epochs = epochs

# Get the channel names
channel_names = source_epochs.info['ch_names']

# Get the data as a numpy array with shape (n_epochs, n_channels, n_times)
data = source_epochs.get_data()

# Concatenate the epochs along time to get shape (n_channels, n_epochs * n_times)
combined_data = np.concatenate(data, axis=-1)

# Create an MNE Info object
sfreq = source_epochs.info['sfreq']
info = mne.create_info(channel_names, sfreq, ch_types='eeg')

# Create MNE RawArray object
raw = mne.io.RawArray(combined_data, info)

# Set up events (assuming single continuous recording)
event_id = 1  # Event ID for the epochs
event_duration = 1.0  # Duration of each epoch in seconds
events = mne.make_fixed_length_events(raw, id=event_id, duration=event_duration)  # the parameter is named id, not event_id

# Create MNE Epochs object
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=0, tmax=event_duration, baseline=None, preload=True)

# Plot the epochs
epochs.plot(scalings='auto', n_epochs=30)

# Show the plot
plt.show()


[39]
1s
epochs.plot_psd(picks='F4')

[40]
0s
epochs.plot_image(picks='T8')

[41]
1s
epochs_for_tfr = mne.Epochs(raw, events, tmin=-0.5, tmax=0.5, baseline=None, preload=True)
epochs_for_tfr['1'].plot_image(picks='AF3')


[42]
0s
import pandas as pd
import mne
import matplotlib.pyplot as plt

# Select the EEG data from the first file in the Epilepsy object
data_frame = Epilepsy[0]

# Create an MNE Info object
sfreq = 1000  # Sampling frequency in Hz
info = mne.create_info(data_frame.columns.tolist(), sfreq, ch_types='eeg')

# Create MNE RawArray object
raw = mne.io.RawArray(data_frame.values.T, info)

# Set up events (assuming single continuous recording)
event_duration = 1.0  # Duration of each epoch in seconds
events = mne.make_fixed_length_events(raw, duration=event_duration)

# Create MNE Epochs object
epochs = mne.Epochs(raw, events, tmin=0, tmax=event_duration, baseline=None, preload=True)

# Plot the image visualization for all channels and epochs
epochs.plot_image(combine='mean')

# Show the plot
plt.show()


[43]
0s
import pandas as pd
import matplotlib.pyplot as plt

# Load the CSV file using Pandas
csv_file = '/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/EEGs_Guinea-Bissau/signal-1.csv.gz'
data_frame = pd.read_csv(csv_file)

# Drop the index, gyro and contact-quality columns
columns_to_drop = ['Unnamed: 0', 'GYROY', 'GYROX', 'COUNTER', 'INTERPOLATED', 'RAW_CQ', 'CQ_CMS', 'CQ_F7', 'CQ_T7', 'CQ_O2', 'CQ_FC6', 'CQ_AF4', 'CQ_F3', 'CQ_P7', 'CQ_P8', 'CQ_F4', 'CQ_AF3', 'CQ_FC5', 'CQ_O1', 'CQ_T8', 'CQ_F8', 'CQ_DRL']
data_frame = data_frame.drop(columns_to_drop, axis=1)

data_frame_1d = data_frame.values.ravel()

# Get the column names (channel names)
channel_names = data_frame.columns.tolist()

# Plot the EEG data
#data_frame.plot(figsize=(12, 6))
#plt.xlabel('Time')
#plt.ylabel('Amplitude')
#plt.legend(channel_names)
#plt.show()

#epochs.plot_image(data_frame_1d)

[61]
4s
import pandas as pd

# Step 1: Get the index of all channels
channel_indices = [epochs.ch_names.index(ch) for ch in epochs.ch_names]

# Step 2: Extract data for all channels
data = epochs.get_data()[:, channel_indices, :]

# Step 3: Convert to DataFrame
df = pd.DataFrame(data.transpose(0, 2, 1).reshape(-1, len(channel_indices)), columns=epochs.ch_names)

# Step 4: Drop columns
#columns_to_drop = ['Unnamed: 0', 'GYROY', 'GYROX', 'COUNTER', 'INTERPOLATED', 'GYROX', 'GYROY', 'RAW_CQ', 'CQ_CMS', 'CQ_F7', 'CQ_T7', 'CQ_O2', 'CQ_FC6', 'CQ_AF4', 'CQ_F3', 'CQ_P7', 'CQ_P8', 'CQ_F4', 'CQ_AF3', 'CQ_FC5', 'CQ_O1', 'CQ_T8', 'CQ_F8', 'CQ_DRL']
#df = df.drop(columns_to_drop, axis=1)

# Step 5: Plot the data as an image (pandas DataFrames have no plot_image
# method, so use matplotlib directly)
import matplotlib.pyplot as plt

plt.imshow(df.T.values, aspect='auto', origin='lower')
plt.xlabel('Sample')
plt.ylabel('Channel')
plt.show()


[45]
0s
import pandas as pd

# Step 1: Get the index of 'AF3' channel
channel_index = epochs.ch_names.index('AF3')

# Step 2: Extract 'AF3' data
af3_data = epochs.get_data()[:, channel_index, :]

# Step 3: Convert to DataFrame
af3_dataframe = pd.DataFrame(af3_data.T, columns=epochs.events[:, 2])

# Step 4: No columns to drop here: the gyro/contact-quality columns were
# removed upstream, and this DataFrame's columns are event codes

# Step 5: Plot the AF3 epochs (DataFrames have no plot_image method)
import matplotlib.pyplot as plt

af3_dataframe.plot(figsize=(12, 6), legend=False)
plt.xlabel('Sample')
plt.ylabel('Amplitude')
plt.show()


[46]
0s
# MNE functions
from mne import Epochs, find_events, concatenate_raws
from mne.time_frequency import tfr_morlet

# EEG-Notebooks functions
#from eegnb.analysis.utils import load_data,plot_conditions
#from eegnb.datasets import fetch_dataset

import mne

# Assuming you have the 'epochs' object and want to select channel 'AF3'
epochs = mne.Epochs(raw, events, tmin=0, tmax=1.0, baseline=None, preload=True)  # Your 'epochs' object
channel_name = 'AF3'

# Select the channel index
channel_idx = epochs.ch_names.index(channel_name)

# Define the frequency range for the Morlet wavelets
freqs = [4, 8, 12, 16, 20]  # Frequency range in Hz

# Define the number of cycles for each frequency in the range
n_cycles = [freq / 2.0 for freq in freqs]  # Adjust the division factor as needed

# Compute the time-frequency representation (TFR) using Morlet wavelets
tfr = mne.time_frequency.tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles, picks=channel_idx, average=True)

Not setting metadata
38 matching events found
No baseline correction applied
0 projection items activated
Using data from preloaded Raw for 38 events and 1001 original time points ...
0 bad epochs dropped
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done   1 out of   1 | elapsed:    0.1s remaining:    0.0s
[Parallel(n_jobs=1)]: Done   1 out of   1 | elapsed:    0.1s finished
[47]
0s
import numpy as np
from matplotlib import pyplot as plt

from mne import create_info, Epochs
from mne.baseline import rescale
from mne.io import RawArray
from mne.time_frequency import (
    tfr_multitaper,
    tfr_stockwell,
    tfr_morlet,
    tfr_array_morlet,
    AverageTFR,
)
from mne.viz import centers_to_edges

print(__doc__)
Automatically created module for IPython interactive environment
Morlet transform

[48]
2s
n_cycles = [freq / 2.0 for freq in freqs]
power = tfr_morlet(
    epochs, freqs=freqs, n_cycles=n_cycles, return_itc=False, average=False
)
print(type(power))
avgpower = power.average()
avgpower.plot(
    [0],
    baseline=(0.0, 0.1),
    mode="mean",
    vmin=None,
    vmax=None,
    title="Using Morlet wavelets and EpochsTFR",
    show=False,
)
plt.show()


Define and fit the ICA
[ ]
0s
# Define and fit the ICA
#ica = ICA(random_state=30042020)
#ica.fit(raw)

# Get the ICA-transformed data
#ica_data = ica._transform_raw(raw, 0, len(raw.times))

# Obtain the power spectrum using psd_array_welch
#psd, freqs = psd_array_welch(ica_data, raw.info["sfreq"], n_fft=64)
[66]
0s
ica = ICA(n_components=14)
ica.fit(raw)

# Get the ICA source time courses (the arguments are start/stop samples;
# the public equivalent is ica.get_sources(raw).get_data())
ica_data = ica._transform_raw(raw, 0, len(raw.times))

# Obtain the power spectrum using psd_array_welch
psd, freqs = psd_array_welch(ica_data, raw.info["sfreq"], n_fft=64)

#normalized_psd = psd / np.max(psd)
#audio_signal = normalized_psd.ravel()
Fitting ICA to data using 33 channels (please be patient, this may take a while)
Selecting by number: 14 components
<ipython-input-66-da386618e51d>:2: RuntimeWarning: The data has not been high-pass filtered. For good ICA performance, it should be high-pass filtered (e.g., with a 1.0 Hz lower bound) before fitting ICA.
  ica.fit(raw)
Fitting ICA took 2.0s.
Effective window size : 0.064 (s)
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done   1 out of   1 | elapsed:    0.0s remaining:    0.0s
[Parallel(n_jobs=1)]: Done   1 out of   1 | elapsed:    0.0s finished
Normalize and reshape power spectrum as audio signal
[67]
0s
# Normalize and reshape power spectrum as audio signal
normalized_psd = psd / np.max(psd)
audio_signal = normalized_psd.ravel()
[68]
0s
import numpy as np
import matplotlib.pyplot as plt

# Assuming you have the variable 'psd' that holds the power spectrum
# Calculate the normalized power spectral density
normalized_psd = psd / np.max(psd)

# Flatten the normalized power spectral density into a 1D array
audio_signal = normalized_psd.ravel()

# Plot the normalized power spectral density
plt.figure(figsize=(10, 6))
plt.plot(normalized_psd)
plt.title('Normalized Power Spectral Density')
plt.xlabel('Frequency Bins')
plt.ylabel('Normalized Power')
plt.show()

Convert the audio signal to an appropriate data type for writing to WAV,
using the single channel AF3
[69]
1s
import numpy as np
import matplotlib.pyplot as plt
import librosa
import soundfile as sf
from IPython.display import Audio
from mne.preprocessing import ICA
from mne.time_frequency import psd_array_welch

# Set the channel to process
channel_name = 'AF3'

# Fit ICA on the raw data
ica = ICA(n_components=14)
ica.fit(raw)

# Get the index of the selected channel
channel_index = raw.ch_names.index(channel_name)

# Get the ICA source time courses via the public API (the private
# _transform_raw takes start/stop samples, not a channel index)
ica_data = ica.get_sources(raw).get_data()

# Obtain the power spectrum using psd_array_welch
psd, freqs = psd_array_welch(ica_data, raw.info["sfreq"], n_fft=64)

# Normalize the power spectrum
normalized_psd = psd / np.max(psd)

# Flatten the normalized power spectrum into a 1D array
audio_signal = normalized_psd.ravel()

# Plot the normalized power spectrum for the selected channel
plt.figure(figsize=(10, 6))
plt.plot(freqs, normalized_psd.T)  # transpose so each ICA component is one line
plt.title('Normalized Power Spectral Density - Channel: ' + channel_name)
plt.xlabel('Frequency (Hz)')
plt.ylabel('Normalized Power')
plt.show()

# Set the desired frequencies and duration
base_frequency = 220  # Adjust the base frequency as needed
beat_frequency = 4  # Adjust the beat frequency as needed
duration = 10  # Adjust the duration (in seconds) as needed

# Rescale the audio signal to match the desired duration
audio_signal *= 0.5  # Adjust the volume as needed
sample_rate = 44100  # Adjust as needed
audio_signal = np.resize(audio_signal, int(sample_rate * duration))

# Generate the binaural beats (generate_binaural_beats is only defined in a
# later cell; a matching sketch follows below)
binaural_beats, sample_rate = generate_binaural_beats(base_frequency, beat_frequency, duration, audio_signal)

# Save the binaural beats to a file
sf.write("binaural_beats.wav", binaural_beats, sample_rate, 'PCM_16')

# Play the binaural beats in Colab
Audio("binaural_beats.wav")


As FM, without the PSD step
[52]
5s
import numpy as np
import matplotlib.pyplot as plt
import librosa
import soundfile as sf
from IPython.display import Audio
from mne.preprocessing import ICA

# Set the channel to process
channel_name = 'AF3'

# Fit ICA on the raw data
ica = ICA(n_components=14)
ica.fit(raw)

# Get the index of the selected channel
channel_index = raw.ch_names.index(channel_name)

# Get the ICA source time courses via the public API (the private
# _transform_raw takes start/stop samples, not a channel index)
ica_data = ica.get_sources(raw).get_data()

# Set the parameters for FM synthesis
carrier_frequency = 440  # Adjust the carrier frequency as needed
modulation_index = 4  # Adjust the modulation index as needed

# Generate the FM audio signal
audio_signal = np.sin(2 * np.pi * carrier_frequency * raw.times + modulation_index * ica_data)

# Normalize the audio signal
normalized_audio = audio_signal / np.max(np.abs(audio_signal))

# Rescale the audio signal to match the desired duration
normalized_audio *= 0.5  # Adjust the volume as needed
sample_rate = 44100  # Adjust as needed
duration = raw.times[-1]  # Duration of the EEG recording in seconds
audio_signal = np.resize(normalized_audio, int(sample_rate * duration))

# Save the FM audio signal to a file
sf.write("fm_audio.wav", audio_signal, sample_rate, 'PCM_16')


# Play the FM audio signal in Colab
Audio("fm_audio.wav")


Try it with the raw data

[57]
0s
# Generate FM audio for each ICA source (this cell also relies on ica and raw
# from the surrounding cells)
sample_rate = 44100  # Adjust as needed
sources = ica.get_sources(raw).get_data()  # shape (n_components, n_times)
normalized_audio = np.zeros((sources.shape[0], len(raw.times)))

# Set the parameters for FM synthesis
carrier_frequency = 440  # Adjust the carrier frequency as needed
modulation_index = 4  # Adjust the modulation index as needed

for i, source in enumerate(sources):
    # Generate the FM audio signal for this source
    audio_signal = np.sin(2 * np.pi * carrier_frequency * raw.times + modulation_index * source)

    # Normalize the audio signal
    normalized_audio[i] = audio_signal / np.max(np.abs(audio_signal))



[58]
2s
import numpy as np
import matplotlib.pyplot as plt
import librosa
import soundfile as sf
from IPython.display import Audio
from mne.preprocessing import ICA

# Set the channels to process
channel_names = epochs.ch_names

# Get the indices of all channels
channel_indices = [epochs.ch_names.index(ch) for ch in channel_names]

# Extract data for all channels
data = epochs.get_data()[:, channel_indices, :]

# Transpose the data to have shape (n_channels, n_samples, n_epochs)
data = data.transpose(1, 2, 0)

# Reshape the data to have shape (n_channels * n_samples, n_epochs)
data = data.reshape(-1, data.shape[2])

# Fit ICA on the raw data
ica = ICA(n_components=14)
ica.fit(raw)

# Generate FM audio for each ICA source
sample_rate = 44100  # Adjust as needed
sources = ica.get_sources(raw).get_data()  # shape (n_components, n_times)
normalized_audio = np.zeros((sources.shape[0], len(raw.times)))

for i, source in enumerate(sources):
    # Set the parameters for FM synthesis
    carrier_frequency = 440  # Adjust the carrier frequency as needed
    modulation_index = 4  # Adjust the modulation index as needed

    # Generate the FM audio signal for this source
    audio_signal = np.sin(2 * np.pi * carrier_frequency * raw.times + modulation_index * source)

    # Normalize the audio signal
    normalized_audio[i] = audio_signal / np.max(np.abs(audio_signal))

# Combine the normalized audio signals for all channels
combined_audio = np.sum(normalized_audio, axis=0)

# Normalize the combined audio signal
combined_audio /= np.max(np.abs(combined_audio))

# Rescale the combined audio signal to match the desired duration
duration = raw.times[-1]  # Duration of the EEG recording in seconds
combined_audio = np.resize(combined_audio, int(sample_rate * duration))

# Save the FM audio signal to a file
sf.write("fm_audio.wav", combined_audio, sample_rate, 'PCM_16')

# Play the FM audio signal in Colab
Audio("fm_audio.wav")


Drop columns, play the channels that remain
[ ]
0s
import pandas as pd

# List of file paths
Epilepsy = ['EEGs_Guinea-Bissau/signal-1.csv.gz', 'EEGs_Guinea-Bissau/signal-2.csv.gz', 'EEGs_Guinea-Bissau/signal-3.csv.gz']

# Drop the specified columns and save the modified data as a new object
columns_to_drop = ['Unnamed: 0', 'GYROY', 'GYROX', 'COUNTER', 'INTERPOLATED', 'RAW_CQ', 'CQ_CMS', 'CQ_F7', 'CQ_T7', 'CQ_O2', 'CQ_FC6', 'CQ_AF4', 'CQ_F3', 'CQ_P7', 'CQ_P8', 'CQ_F4', 'CQ_AF3', 'CQ_FC5', 'CQ_O1', 'CQ_T8', 'CQ_F8', 'CQ_DRL']

epilepsy = []  # List to store modified data

# Process each file
for file_path in Epilepsy:
    # Load the CSV file using Pandas
    data_frame = pd.read_csv(file_path, compression='gzip')
    
    # Drop the specified columns
    data_frame = data_frame.drop(columns_to_drop, axis=1)
    
    # Append the modified data to the epilepsy list
    epilepsy.append(data_frame)

# To concatenate all the modified data into a single DataFrame:
# epilepsy = pd.concat(epilepsy, ignore_index=True)

[ ]
0s
import matplotlib.pyplot as plt

# Plot the data in the epilepsy object
for i, data_frame in enumerate(epilepsy):
    # Get the column names (channel names)
    channel_names = data_frame.columns.tolist()

    # Plot the EEG data (DataFrame.plot creates its own figure)
    data_frame.plot(figsize=(12, 6))
    plt.xlabel('Time')
    plt.ylabel('Amplitude')
    plt.legend(channel_names)
    plt.title(f'Epilepsy {i+1}')
    plt.show()

In this modified code, we select the EEG data from the first file in the epilepsy object by accessing epilepsy[0]. We then iterate over each channel of the EEG data and apply FM synthesis to sonify it. The resulting audio signals for each channel are combined, normalized, and saved to a WAV file. Finally, the audio signal is played using IPython.display.Audio.

Please note that you may need to adjust the carrier_frequency, modulation_index, and sample_rate parameters according to your requirements.

[ ]
0s
import numpy as np
import soundfile as sf
from scipy.signal import chirp
from IPython.display import Audio

# Select the EEG data from the first file in the epilepsy object
# (the lowercase list of DataFrames built above, not the list of file paths)
eeg_data = epilepsy[0]

# Set the parameters for FM synthesis
carrier_frequency = 440  # Adjust the carrier frequency as needed
modulation_index = 4  # Adjust the modulation index as needed
sample_rate = 44100  # Adjust as needed

# Sonify each channel of the EEG data
audio_signals = []
for channel_name in eeg_data.columns:
    channel_data = eeg_data[channel_name].values

    # Normalize the channel data
    normalized_data = (channel_data - np.min(channel_data)) / (np.max(channel_data) - np.min(channel_data))

    # Modulate a constant-frequency tone (chirp with f0 == f1 is a plain sine,
    # so this is amplitude modulation rather than true FM)
    t = np.arange(len(channel_data)) / sample_rate
    modulation = normalized_data * modulation_index
    audio_signal = chirp(t, f0=carrier_frequency, f1=carrier_frequency, t1=t[-1], method='linear', phi=90, vertex_zero=False) * modulation

    # Append the audio signal to the list
    audio_signals.append(audio_signal)

# Combine the audio signals from all channels
combined_audio = np.sum(audio_signals, axis=0)

# Normalize the combined audio signal
combined_audio /= np.max(np.abs(combined_audio))

# Save the combined audio signal to a file
sf.write("eeg_audio.wav", combined_audio, sample_rate, 'PCM_16')

# Play the combined audio signal
Audio("eeg_audio.wav")

[ ]
0s
import numpy as np
import soundfile as sf
from IPython.display import Audio

# Select the EEG data from the first file in the epilepsy object
eeg_data = epilepsy[0]

# Specify the channels to process
channels_to_process = ['AF3', 'AF4']  # Add or remove channel names as needed

# Set the parameters for FM synthesis
carrier_frequency = 440  # Adjust the carrier frequency as needed
modulation_index = 4  # Adjust the modulation index as needed
sample_rate = 44100  # Adjust as needed

# Create an empty list to store the audio signals for each channel
audio_signals = []

# Generate the FM audio signal for each selected channel
for channel_name in channels_to_process:
    # Get the index of the selected channel
    channel_index = eeg_data.columns.get_loc(channel_name)

    # Get the selected channel data
    channel_data = eeg_data.iloc[:, channel_index]

    # Determine the duration of the EEG recording
    duration = channel_data.shape[0] / sample_rate

    # Generate the FM audio signal
    time = np.linspace(0, duration, channel_data.shape[0])
    audio_signal = np.sin(2 * np.pi * carrier_frequency * time + modulation_index * channel_data.values)

    # Normalize the audio signal
    normalized_audio = audio_signal / np.max(np.abs(audio_signal))

    # Append the audio signal to the list
    audio_signals.append(normalized_audio)

# Combine the audio signals from all channels
combined_audio = np.sum(audio_signals, axis=0)

# Normalize the combined audio signal
normalized_combined_audio = combined_audio / np.max(np.abs(combined_audio))

# Save the combined audio signal to a file
sf.write("eeg_audio.wav", normalized_combined_audio, sample_rate, 'PCM_16')

# Play the combined audio signal
Audio("eeg_audio.wav")


[ ]
0s
import numpy as np
import matplotlib.pyplot as plt
import librosa
import soundfile as sf
from IPython.display import Audio
from mne.preprocessing import ICA

#Epilepsy=[pd.read_csv('EEGs_Guinea-Bissau/signal-{}.csv.gz'.format(i), compression='gzip') for i in  EP_sub]

import pandas as pd

# Load the CSV files using Pandas (EP_sub is assumed to be defined earlier)
data_frames = [pd.read_csv('EEGs_Guinea-Bissau/signal-{}.csv.gz'.format(i), compression='gzip') for i in EP_sub]

# Drop the specified columns from each DataFrame (a plain list has no .drop method)
columns_to_drop = ['Unnamed: 0', 'GYROY', 'GYROX', 'COUNTER', 'INTERPOLATED', 'RAW_CQ', 'CQ_CMS', 'CQ_F7', 'CQ_T7', 'CQ_O2', 'CQ_FC6', 'CQ_AF4', 'CQ_F3', 'CQ_P7', 'CQ_P8', 'CQ_F4', 'CQ_AF3', 'CQ_FC5', 'CQ_O1', 'CQ_T8', 'CQ_F8', 'CQ_DRL']
data_frames = [df.drop(columns_to_drop, axis=1) for df in data_frames]
data_frame = data_frames[0]

# Get the column names (channel names)
channel_names = data_frame.columns.tolist()

# An sklearn FastICA fit on the DataFrame would also be possible:
# from sklearn.decomposition import FastICA
# FastICA(n_components=len(channel_names)).fit(data_frame)

# Fit MNE's ICA on the raw data
ica = ICA(n_components=33)
ica.fit(raw)

# Set the channel to process and get its index
channel_name = 'AF3'
channel_index = raw.ch_names.index(channel_name)

# Get the ICA source time courses via the public API
ica_data = ica.get_sources(raw).get_data()

# Set the parameters for FM synthesis
carrier_frequency = 440  # Adjust the carrier frequency as needed
modulation_index = 4  # Adjust the modulation index as needed

# Generate the FM audio signal
audio_signal = np.sin(2 * np.pi * carrier_frequency * raw.times + modulation_index * ica_data)

# Normalize the audio signal
normalized_audio = audio_signal / np.max(np.abs(audio_signal))

# Rescale the audio signal to match the desired duration
normalized_audio *= 0.5  # Adjust the volume as needed
sample_rate = 44100  # Adjust as needed
duration = raw.times[-1]  # Duration of the EEG recording in seconds
audio_signal = np.resize(normalized_audio, int(sample_rate * duration))

# Save the FM audio signal to a file
sf.write("fm_audio.wav", audio_signal, sample_rate, 'PCM_16')


# Play the FM audio signal in Colab
Audio("fm_audio.wav")

FFT
[70]
6s
import numpy as np
import matplotlib.pyplot as plt
import librosa
import soundfile as sf
from IPython.display import Audio
from mne.preprocessing import ICA

#Set the channel to process
channel_name = 'AF3'

#Fit ICA on the raw data
ica = ICA(n_components=33)
ica.fit(raw)

#Get the index of the selected channel
channel_index = raw.ch_names.index(channel_name)

#Get the ICA source time courses via the public API
ica_data = ica.get_sources(raw).get_data()

#Set the parameters for FM synthesis
carrier_frequency = 880 # Adjust the carrier frequency as needed
modulation_index = 4 # Adjust the modulation index as needed
sample_rate = 44100 # Adjust as needed (must be set before it is used below)

#Reshape the ica_data to match the number of samples
n_samples = len(raw.times)
ica_data = np.resize(ica_data, (n_samples,))

#Compute the audio signal by rotating the phase of each rfft bin
exponential_array = np.exp(1j * modulation_index * 2 * np.pi * carrier_frequency * np.arange(n_samples) / sample_rate)
exponential_array = np.resize(exponential_array, (n_samples//2+1,))
audio_signal = np.fft.irfft(np.fft.rfft(ica_data) * exponential_array)

#Normalize the audio signal
normalized_audio = audio_signal / np.max(np.abs(audio_signal))

#Rescale the audio signal to match the desired duration
normalized_audio *= 0.5 # Adjust the volume as needed
duration = raw.times[-1] # Duration of the EEG recording in seconds
audio_signal = np.resize(normalized_audio, int(sample_rate * duration))

#Save the FM audio signal to a file
sf.write("fm_audio.wav", audio_signal, sample_rate, 'PCM_16')

#Play the FM audio signal in Colab
Audio("fm_audio.wav")

[ ]
0s
import numpy as np
import matplotlib.pyplot as plt
import librosa
import soundfile as sf
from IPython.display import Audio
from mne.preprocessing import ICA

#Set the channel to process
channel_name = 'AF3'

#Fit ICA on the raw data
ica = ICA(n_components=33)
ica.fit(raw)

#Get the index of the selected channel
channel_index = raw.ch_names.index(channel_name)

##Get the ICA source time courses via the public API
ica_data = ica.get_sources(raw).get_data()

#Set the parameters for FM synthesis
carrier_frequency = 440 # Adjust the carrier frequency as needed
modulation_index = 4 # Adjust the modulation index as needed
sample_rate = 44100 # Adjust as needed (must be set before it is used below)

#Reshape the ica_data to match the number of samples
n_samples = len(raw.times)
ica_data = np.resize(ica_data, (n_samples,))

#Compute the audio signal by rotating the phase of each rfft bin
exponential_array = np.exp(1j * modulation_index * 2 * np.pi * carrier_frequency * np.arange(n_samples) / sample_rate)
exponential_array = np.resize(exponential_array, (n_samples//2+1,))  # match the rfft output length
audio_signal = np.fft.irfft(np.fft.rfft(ica_data) * exponential_array)

#Normalize the audio signal
normalized_audio = audio_signal / np.max(np.abs(audio_signal))

#Rescale the audio signal to match the desired duration
normalized_audio *= 0.5 # Adjust the volume as needed
duration = raw.times[-1] # Duration of the EEG recording in seconds
audio_signal = np.resize(normalized_audio, int(sample_rate * duration))

#Save the FM audio signal to a file
sf.write("fm_audio.wav", audio_signal, sample_rate, 'PCM_16')

#Play the FM audio signal in Colab
Audio("fm_audio.wav")
[ ]
0s
import numpy as np
import matplotlib.pyplot as plt
import librosa
import soundfile as sf
from IPython.display import Audio
from mne.preprocessing import ICA

# Set the channel to process
channel_name = 'AF3'

# Fit ICA on the raw data
ica = ICA(n_components=33)
ica.fit(raw)

# Get the index of the selected channel
channel_index = raw.ch_names.index(channel_name)

# Get the ICA source time courses, trimmed to one sample less than the raw
# length (the public get_sources replaces the private _transform_raw call)
ica_data = ica.get_sources(raw).get_data()[:, :len(raw.times) - 1]

# Set the parameters for FM synthesis
carrier_frequency = 220  # Adjust the carrier frequency as needed
modulation_index = 4  # Adjust the modulation index as needed

# Generate the FM audio signal
audio_signal = np.sin(2 * np.pi * carrier_frequency * raw.times[:-1][:len(ica_data[0])] + modulation_index * ica_data[0])

# Calculate the amplitude envelope
amplitude_envelope = np.abs(audio_signal)

# Normalize the audio signal and the amplitude envelope
normalized_audio = audio_signal / np.max(np.abs(audio_signal))
normalized_envelope = amplitude_envelope / np.max(amplitude_envelope)

# Rescale the audio signal to match the desired duration
normalized_audio *= 0.5  # Adjust the volume as needed
sample_rate = 44100  # Adjust as needed
duration = raw.times[-2]  # Duration of the EEG recording in seconds (use -2 to match the trimmed length)
audio_signal = np.resize(normalized_audio, int(sample_rate * duration))

# Save the FM audio signal to a file
sf.write("fm_audio.wav", audio_signal, sample_rate, 'PCM_16')

# Plot the amplitude envelope
plt.figure(figsize=(10, 6))
plt.plot(raw.times[:-1][:len(ica_data[0])], normalized_envelope)
plt.title('Amplitude Envelope - Channel: ' + channel_name)
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
plt.show()

# Play the FM audio signal in Colab
Audio(audio_signal, rate=sample_rate)

High-pass filter

[72]
10s
import mne

# Define the cutoff frequency for the high-pass filter
highpass_freq = 1.0  # Adjust as needed

# Apply the high-pass filter to the raw data
raw.filter(l_freq=highpass_freq, h_freq=None)

# Continue with ICA fitting and processing
ica = mne.preprocessing.ICA(n_components=33)
ica.fit(raw)

# Rest of your code...
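
To verify the filter did what we expect, we can look at the power spectrum after filtering. A minimal sketch using psd_array_welch, which is already imported above:

[ ]
0s
import matplotlib.pyplot as plt
from mne.time_frequency import psd_array_welch

# Compute the PSD of the (now high-pass filtered) raw data
psd_check, freqs_check = psd_array_welch(raw.get_data(), raw.info['sfreq'], fmax=40)

# On a log scale, power below the 1 Hz cutoff should now be suppressed
plt.semilogy(freqs_check, psd_check.T)
plt.axvline(highpass_freq, color='k', linestyle='--', label='cutoff')
plt.xlabel('Frequency (Hz)')
plt.ylabel('PSD')
plt.legend()
plt.show()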


Zero crossing

[73]
7s
import numpy as np
import matplotlib.pyplot as plt
import librosa
import soundfile as sf
from IPython.display import Audio
from mne.preprocessing import ICA

# Set the channel to process
channel_name = 'AF3'
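
The cell above stops after selecting the channel. A minimal sketch of a zero-crossing count for that channel (the sign-change approach is an assumption about the intended analysis, not code from the original run):

[ ]
0s
import numpy as np
import matplotlib.pyplot as plt

# Extract the selected channel from the raw data
channel_index = raw.ch_names.index(channel_name)
signal = raw.get_data()[channel_index]

# A zero crossing is a sign change between consecutive samples
crossings = np.nonzero(np.diff(np.signbit(signal).astype(int)))[0]
print(f"{len(crossings)} zero crossings (~{len(crossings) / raw.times[-1]:.1f} per second)")

# Mark the crossings on the waveform
plt.plot(raw.times, signal)
plt.plot(raw.times[crossings], signal[crossings], 'r.', markersize=2)
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
plt.show()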



[ ]
0s
import numpy as np
import librosa
import soundfile as sf
from IPython.display import Audio

def generate_binaural_beats(base_frequency, beat_frequency, duration, audio_signal, volume=0.5, sample_rate=22050):
    t = np.linspace(0, duration, int(duration * sample_rate), endpoint=False)

    # Instead of sine waves, use audio_signal as the waveform for both ears
    # (the original resized to an enormous 2D array by mistake)
    audio_signal = np.resize(audio_signal, int(sample_rate * duration))

    stereo_audio = np.vstack((audio_signal, audio_signal)).T
    stereo_audio = librosa.util.normalize(stereo_audio) * volume

    return stereo_audio, sample_rate

# Set the desired frequencies and duration
base_frequency = 220  # Adjust the base frequency as needed
beat_frequency = 20  # Adjust the beat frequency as needed
duration = 10  # Adjust the duration (in seconds) as needed

# Normalize the power spectrum
normalized_psd = psd / np.max(psd)  # Normalize the power spectrum

# Reshape the epoch into a 1D array for each channel
audio_signals = [normalized_psd[channel_index].ravel() for channel_index in range(normalized_psd.shape[0])]

# Rescale the audio signals to match the desired duration
sample_rate = 44100  # Adjust as needed (previously this relied on a leftover global)
audio_signals = [audio_signal * 0.5 for audio_signal in audio_signals]  # Adjust the volume as needed
audio_signals = [np.resize(audio_signal, int(sample_rate * duration)) for audio_signal in audio_signals]

# Generate the binaural beats for each channel
binaural_beats = []
for audio_signal in audio_signals:
    channel_binaural_beats, sample_rate = generate_binaural_beats(base_frequency, beat_frequency, duration, audio_signal)
    binaural_beats.append(channel_binaural_beats)

# Combine the binaural beats for all channels
stereo_audio = np.concatenate(binaural_beats, axis=1)

# Save the binaural beats to a file
sf.write("binaural_beats.wav", stereo_audio, sample_rate, 'PCM_16')

# Play the binaural beats in Colab
Audio("binaural_beats.wav")

FM & AMP
[ ]
0s
import numpy as np
import matplotlib.pyplot as plt
import librosa
import soundfile as sf
from IPython.display import Audio
from mne.preprocessing import ICA

# Set the channel to process
channel_name = 'F3'

# Fit ICA on the raw data
ica = ICA(n_components=33)
ica.fit(raw)

# Get the index of the selected channel
channel_index = raw.ch_names.index(channel_name)

# Get the ICA source time courses via the public API (the private
# _transform_raw takes start/stop samples, not a channel index)
ica_data = ica.get_sources(raw).get_data()

# Set the parameters for FM synthesis
carrier_frequency = 220  # Adjust the carrier frequency as needed
modulation_index = 4  # Adjust the modulation index as needed

# Generate the FM audio signal
audio_signal = np.sin(2 * np.pi * carrier_frequency * raw.times + modulation_index * ica_data)

# Normalize the audio signal
normalized_audio = audio_signal / np.max(np.abs(audio_signal))

# Rescale the audio signal to match the desired duration
normalized_audio *= 0.5  # Adjust the volume as needed
sample_rate = 44100  # Adjust as needed
duration = raw.times[-1]  # Duration of the EEG recording in seconds
audio_signal = np.resize(normalized_audio, int(sample_rate * duration))

# Save the FM audio signal to a file
sf.write("fm_audio.wav", audio_signal, sample_rate, 'PCM_16')


# Play the FM audio signal in Colab
Audio("fm_audio.wav")

[ ]
0s
import numpy as np
import librosa
import soundfile as sf
from IPython.display import Audio
from mne.preprocessing import ICA

def generate_binaural_beats(base_frequency, beat_frequency, duration, audio_signal, volume=0.5, sample_rate=22050):
    t = np.linspace(0, duration, int(duration * sample_rate), endpoint=False)

    # Instead of sine waves, use audio_signal as the waveform
    audio_signal = np.resize(audio_signal, int(sample_rate * duration))

    left_channel = audio_signal
    right_channel = audio_signal

    stereo_audio = np.vstack((left_channel, right_channel)).T
    stereo_audio = librosa.util.normalize(stereo_audio) * volume

    return stereo_audio, sample_rate


# Set the channel to process
channel_name = 'AF3'

# Fit ICA on the raw data
ica = ICA(n_components=33)
ica.fit(raw)

# Get the index of the selected channel
channel_index = raw.ch_names.index(channel_name)

# Get the ICA source time courses via the public API (the private
# _transform_raw takes start/stop samples, not a channel index)
ica_data = ica.get_sources(raw).get_data()

# Set the parameters for FM synthesis
carrier_frequency = 220  # Adjust the carrier frequency as needed
modulation_index = 4  # Adjust the modulation index as needed

# Generate the FM audio signal
audio_signal = np.sin(2 * np.pi * carrier_frequency * raw.times + modulation_index * ica_data)

# Normalize the audio signal
normalized_audio = audio_signal / np.max(np.abs(audio_signal))

# Rescale the audio signal to match the desired duration
normalized_audio *= 0.5  # Adjust the volume as needed
sample_rate = 44100  # Adjust as needed
duration = raw.times[-1]  # Duration of the EEG recording in seconds
audio_signal = np.resize(normalized_audio, int(sample_rate * duration))

# Generate the binaural beats
base_frequency = 220  # Adjust the base frequency as needed
beat_frequency = 100  # Adjust the beat frequency as needed
duration = raw.times[-1]  # Duration of the EEG recording in seconds
binaural_beats, sample_rate = generate_binaural_beats(base_frequency, beat_frequency, duration, audio_signal)

# Save the binaural beats to a file
sf.write("binaural_beats.wav", binaural_beats, sample_rate, 'PCM_16')

# Play the binaural beats in Colab
Audio("binaural_beats.wav")

[ ]
0s
import numpy as np
import matplotlib.pyplot as plt
import librosa
import soundfile as sf
from IPython.display import Audio
from mne.preprocessing import ICA
from mne.time_frequency import psd_array_welch

# Set the channel to process
channel_name = 'AF3'

# Fit ICA on the raw data
ica = ICA(n_components=33)
ica.fit(raw)

# Get the index of the selected channel
channel_index = raw.ch_names.index(channel_name)

# Get the ICA source time courses via the public API (the private
# _transform_raw takes start/stop samples, not a channel index)
ica_data = ica.get_sources(raw).get_data()

# Obtain the power spectrum using psd_array_welch
psd, freqs = psd_array_welch(ica_data, raw.info["sfreq"], n_fft=64)

# Normalize the power spectrum
normalized_psd = psd / np.max(psd)

# Map the normalized power at each frequency onto a sine phase (a static
# mapping, not true time-domain frequency modulation)
fm_signal = np.sin(2 * np.pi * freqs * normalized_psd)

# Rescale the FM signal to match the desired duration
fm_signal *= 0.5  # Adjust the volume as needed
sample_rate = 44100  # Adjust as needed
duration = 10  # Duration of the audio signal in seconds
fm_signal = np.resize(fm_signal, int(sample_rate * duration))

# Generate the binaural beats
base_frequency = 220  # Adjust the base frequency as needed
beat_frequency = 4  # Adjust the beat frequency as needed
binaural_beats, sample_rate = generate_binaural_beats(base_frequency, beat_frequency, duration, fm_signal)

# Save the binaural beats to a file
sf.write("binaural_beats.wav", binaural_beats, sample_rate, 'PCM_16')

# Play the binaural beats in Colab
Audio("binaural_beats.wav")

Entire length

[ ]
0s
import numpy as np
import matplotlib.pyplot as plt
import librosa
import soundfile as sf
from IPython.display import Audio
from mne.preprocessing import ICA
from mne.time_frequency import psd_array_welch

# Set the channel to process
channel_name = 'AF3'

# Fit ICA on the raw data
ica = ICA(n_components=33)
ica.fit(raw)

# Get the index of the selected channel
channel_index = raw.ch_names.index(channel_name)

# Get the ICA source time courses via the public API (the private
# _transform_raw takes start/stop samples, not a channel index)
ica_data = ica.get_sources(raw).get_data()

# Obtain the power spectrum using psd_array_welch
psd, freqs = psd_array_welch(ica_data, raw.info["sfreq"], n_fft=64)

# Normalize the power spectrum
normalized_psd = psd / np.max(psd)

# Flatten the normalized power spectrum into a 1D array
audio_signal = normalized_psd.ravel()

# Set the desired frequencies and duration
base_frequency = 220  # Adjust the base frequency as needed
beat_frequency = 4  # Adjust the beat frequency as needed
duration = raw.times[-1]  # Duration of the EEG recording in seconds

# Rescale the audio signal to match the desired duration
audio_signal *= 0.5  # Adjust the volume as needed
sample_rate = 44100  # Adjust as needed
audio_signal = np.resize(audio_signal, int(sample_rate * duration))

# Generate the binaural beats for the entire EEG recording
binaural_beats, sample_rate = generate_binaural_beats(base_frequency, beat_frequency, duration, audio_signal)

# Save the binaural beats to a file
sf.write("binaural_beats.wav", binaural_beats, sample_rate, 'PCM_16')

# Play the binaural beats in Colab
Audio("binaural_beats.wav")

[ ]
0s
#!pip install librosa
[ ]
0s
import numpy as np
import librosa
import soundfile as sf
from IPython.display import Audio

def generate_binaural_beats(base_frequency, beat_frequency, duration, audio_signal, volume=0.5, sample_rate=22050):
    t = np.linspace(0, duration, int(duration * sample_rate), endpoint=False)

    # Instead of sine waves, use audio_signal as the waveform
    audio_signal = np.resize(audio_signal, int(sample_rate * duration))

    left_channel = audio_signal
    right_channel = audio_signal

    stereo_audio = np.vstack((left_channel, right_channel)).T
    stereo_audio = librosa.util.normalize(stereo_audio) * volume

    return stereo_audio, sample_rate

# Set the desired frequencies and duration
base_frequency = 220  # Adjust the base frequency as needed
beat_frequency = 20  # Adjust the beat frequency as needed
duration = 10  # Adjust the duration (in seconds) as needed

# Normalize the power spectrum
normalized_psd = psd / np.max(psd)  # Normalize the power spectrum

# Reshape the epoch into a 1D array
audio_signal = normalized_psd.ravel()

# Rescale the audio signal to match the desired duration
sample_rate = 44100  # Adjust as needed (previously this relied on a leftover global)
audio_signal *= 0.5  # Adjust the volume as needed
audio_signal = np.resize(audio_signal, int(sample_rate * duration))

# Generate the binaural beats
binaural_beats, sample_rate = generate_binaural_beats(base_frequency, beat_frequency, duration, audio_signal)

# Save the binaural beats to a file
sf.write("binaural_beats.wav", binaural_beats, sample_rate, 'PCM_16')

# Play the binaural beats in Colab
Audio("binaural_beats.wav")

[ ]
0s
!pip install soundfile

[ ]
0s
import numpy as np
import sounddevice as sd
import matplotlib.pyplot as plt

# Assume you have the Morlet wavelet coefficients stored in the variable 'power'
# and the sampling frequency in the variable 'sfreq'
# and the carrier frequency is 440 Hz

# Generate the carrier signal
duration = len(power.times) / sfreq
t = np.linspace(0, duration, len(power.times))
carrier = np.sin(2 * np.pi * 440 * t)

# Modulate the carrier with the wavelet coefficients (the carrier broadcasts
# over the trailing time axis; for an AverageTFR, power.data has shape
# (n_channels, n_freqs, n_times))
modulated_signal = power.data * carrier

# Sum across the frequency axis to obtain one waveform per channel
audio = np.sum(modulated_signal, axis=-2)

# Normalize the audio waveform
audio /= np.max(np.abs(audio))

# Plot the average power
power.plot([0], baseline=(0.0, 0.1), mode="mean", vmin=None, vmax=None, title="Using Morlet wavelets and EpochsTFR", show=False)

# Show the plot
plt.show()

# Play the audio (sounddevice expects (n_samples, n_channels) and needs a
# local audio device; in Colab, use IPython.display.Audio instead)
sd.play(audio.T, sfreq)
sd.wait()

[ ]
0s
import numpy as np
import soundfile as sf
import matplotlib.pyplot as plt

# Assume you have the Morlet wavelet coefficients stored in the variable 'power'
# and the sampling frequency in the variable 'sfreq'
# and the carrier frequency is 440 Hz

# Collapse any leading epoch/channel axes, keeping (n_freqs, n_times)
power_data = power.data.reshape(-1, power.data.shape[-2], power.data.shape[-1])

# Sum the power across frequencies to obtain one waveform per trace
audio = np.sum(power_data, axis=1)

# Normalize the audio waveform
audio /= np.max(np.abs(audio))

# Save the audio waveform as a WAV file using soundfile (the sample rate must be an int)
sf.write('eeg_audio.wav', audio.T, int(sfreq))

# Plot the average power
#power.plot([0], baseline=(0.0, 0.1), mode="mean", vmin=None, vmax=None, title="Using Morlet wavelets and EpochsTFR", show=False)

# Show the plot
plt.show()

[ ]
0s
#Power density distribution
[ ]
0s
# Assuming you have the PSD data stored in a variable named 'psd'
psd2 = psd

[ ]
0s
normalized_psd = psd / np.max(psd)
This normalization works for audio.
[ ]
0s
import numpy as np
import matplotlib.pyplot as plt
import soundfile as sf

# Assuming you have the PSD data stored in a variable named 'psd'
# Normalize the PSD
normalized_psd = psd2 / np.max(psd2)

# Define the duration and sample rate for the audio signal
duration = 10  # Duration of the audio signal in seconds
sample_rate = 44100  # Sample rate of the audio signal

# Reshape the normalized PSD into a 1D array
audio_signal = normalized_psd.ravel()

# Rescale the audio signal to match the desired duration
audio_signal *= 0.5  # Adjust the volume as needed
audio_signal = np.resize(audio_signal, int(sample_rate * duration))

# Save the audio signal to a file
sf.write("psd_audio2.wav", audio_signal, sample_rate, 'PCM_16')

# Plot the PSD
plt.plot(audio_signal)
plt.xlabel('Time')
plt.ylabel('Amplitude')
plt.title('PSD Audio Signal')
plt.show()

[ ]
0s
import numpy as np
import librosa
import soundfile as sf
from IPython.display import Audio

# Set the desired frequencies and duration
base_frequency = 220  # Adjust the base frequency as needed
beat_frequency = 20  # Adjust the beat frequency as needed
duration = 10  # Adjust the duration (in seconds) as needed

# Generate the binaural beats using the PSD as the audio signal
audio_signal = psd

# Generate the binaural beats
binaural_beats, sample_rate = generate_binaural_beats(base_frequency, beat_frequency, duration, audio_signal)

# Save the binaural beats to a file
sf.write("binaural_beats.wav", binaural_beats, sample_rate, 'PCM_16')

# Play the binaural beats
Audio("binaural_beats.wav")

[ ]
0s

import mne
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Define the electrode names
ch_pos=['AF3', 'AF4', 'F7', 'F3', 'F4', 'F8', 'FC5', 'FC6', 'T7', 'T8', 'P7', 'P8', 'O1', 'O2']

# Load the electrode positions from the EEGlab repository
# (read_custom_montage expects a local file, so the .locs file may first need
# to be downloaded, e.g. with !wget)
loc_file = 'https://raw.githubusercontent.com/sccn/eeglab/develop/sample_locs/Standard-10-20-Cap81.locs'
montage = mne.channels.read_custom_montage(loc_file)

# Get the electrode positions from the montage (a dict under the 'ch_pos' key)
positions = montage.get_positions()['ch_pos']

# Create a dictionary mapping our electrode names to their positions
electrode_positions = {ch: positions[ch] for ch in ch_pos if ch in positions}

# Create a montage using the electrode positions
montage = mne.channels.make_dig_montage(ch_pos=electrode_positions, coord_frame='head')

# Plot the electrode positions on the head
montage.plot()
plt.show()


Load and preprocess the data, extract events and create Epochs.

[ ]
0s
import mne

# Load raw data
raw = mne.io.read_raw("/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/Guinea-Bissaufif/signal-3_raw.fif")

# Downsample the data to 256 Hz
raw.resample(256.)

# Apply high-pass filter
raw.filter(l_freq=2., h_freq=None)

# Set the reference to average reference
raw.set_eeg_reference("average")

# Pick EEG channels only
picks = mne.pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False)
raw = raw.pick(picks)

# Set the channel locations
montage = mne.channels.make_standard_montage('standard_1005')
raw.set_montage(montage)

# Get the channel names from the raw data
channel_names = raw.info['ch_names']

# Print the original channel names
print("Original channel names:")
print(channel_names)

# Define the number of desired channels (this assumes the recording actually
# has at least that many channels; lower it if not)
num_desired_channels = 35

# Define the new channel names
new_channel_names = ['Channel {}'.format(i+1) for i in range(num_desired_channels)]

# Rename channels in the raw data
channel_mapping = {channel_names[i]: new_channel_names[i] for i in range(num_desired_channels)}
raw.rename_channels(channel_mapping)

# Get the updated channel names
updated_channel_names = raw.info['ch_names']

# Print the updated channel names
print("Updated channel names:")
print(updated_channel_names)

# Create a custom montage based on the updated channel names
ch_pos = {ch_name: raw.info['chs'][i]['loc'][:3] for i, ch_name in enumerate(updated_channel_names)}
montage = mne.channels.make_dig_montage(ch_pos=ch_pos)

# Set the montage (once is enough)
raw.set_montage(montage)

# Continue with your analysis using the updated data

# Save the updated data
# raw.save('/path/to/save/updated_data.fif', overwrite=True)
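
The cell above never reaches the events/Epochs step promised by the heading. A minimal sketch, reusing the fixed-length-event approach from the earlier cells:

[ ]
0s
# Fixed-length events and 1-second epochs, as in the earlier cells
events = mne.make_fixed_length_events(raw, duration=1.0)
epochs = mne.Epochs(raw, events, tmin=0, tmax=1.0, baseline=None, preload=True)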

[ ]
17m
fname = "/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/Guinea-Bissaufif/signal-1_raw.fif" 
[ ]
17m
epochs = mne.read_epochs(fname)  # note: read_epochs expects an epochs file (e.g. *-epo.fif), not a *_raw.fif

[ ]
17m
import mne

# Load raw data
raw = mne.io.read_raw("/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/Guinea-Bissaufif/signal-1_raw.fif.gz")

# Get the channel names from the raw data
channel_names = raw.info['ch_names']

# Print the channel names
print("Original channel names:")
print(channel_names)

# Define the number of and name of the desired channels
num_desired_channels = 36  # Specify the number of desired channels
desired_channel_names = [f"Desired Channel {i+1}" for i in range(num_desired_channels)]  # Generate the desired channel names

# Select the desired channels based on the number
selected_channels = channel_names[:num_desired_channels]

# Create a dictionary for channel name mapping
channel_mapping = dict(zip(selected_channels, desired_channel_names))

# Rename channels in the raw data
raw.rename_channels(channel_mapping)

# Get the updated channel names
updated_channel_names = raw.info['ch_names']

# Print the updated channel names
print("Updated channel names:")
print(updated_channel_names)

# Create a custom montage based on the updated channel names: a DigMontage has
# no settable ch_names/pos attributes, so instead map standard positions onto
# the new names
standard = mne.channels.make_standard_montage('standard_1005')
standard_pos = standard.get_positions()['ch_pos']
ch_pos = {new: standard_pos[old]
          for old, new in zip(standard.ch_names, desired_channel_names)}
montage = mne.channels.make_dig_montage(ch_pos=ch_pos, coord_frame='head')

raw.set_montage(montage)

# Save the updated raw data file
output_raw_file = "/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/EEGs_Guinea-Bissau-FIF/updated_signal-1.fif.gz"
raw.save(output_raw_file, overwrite=True)

# Save the montage file (DigMontage.save requires a reasonably recent MNE version)
output_montage_file = "/content/drive/MyDrive/Colab-Notebooks/EEG-data/Guinea_bissau/EEGs_Guinea-Bissau-FIF/updated_montage.fif"
montage.save(output_montage_file)

# Continue with the rest of your analysis
# Perform macro analysis, micro analysis, and mid analysis as required

Change channel names

First of all, we should inspect our epochs and have a look at the data. As you will see the swallowing condition contains disproportionately more noise than the control condition.

[ ]
17m
scalings = {"eeg":2e-5}
epochs[("1", "2")].plot(title="Condition: Control",
                        scalings=scalings, n_epochs=1)
epochs[("41", "42")].plot(title="Condition: Swallow",
                          scalings=scalings, n_epochs=1);
Now we can fit an ICA to every type of noise and check out what each type of noise looks like. This will simply output a list of all the different conditions, with the 5 largest components displayed in detail. After inspection, you might find that for all conditions, not every important component is made up of the respective noise condition; often EOG/blink noise is still the most prevalent (because of course people also blinked and moved their eyes during the other conditions). However, you can definitely find some really distinctive components for each type of noise.

[ ]
17m
# initialize ICA
ica = ICA(n_components=20) # We only look at few components to save time

# define the names of our condition and the according trials:
names = ["control conditon", "move your head", "move your jaw",
         "blink ", "swallow", "clench your teeth"]
s_events = [("1", "2"), ("11", "12"), ("21", "22"),
          ("31", "32"), ("41", "42"), ("51", "52")]

plot_dict = {}
for name, event in zip(names, s_events):

  # we fit the ICA on trials for one noise condition specifically
  ica.fit(epochs[event], verbose=0)
  
  # investigate the first 5 ICs from each noise condition
  print("\n\n\nCONDITION: " + name + "\n\n\n")
  plot_dict[name] = ica.plot_properties(epochs[event], np.arange(5), verbose=0);
Let's take a look at these interesting ICs above. Note that for most conditions, EOG movements/Blinks are still the most important/disruptive components. But usually, the specific noise conditions are clearly distinguishable in the first 4 components.

Below, I've collected some components (from different subjects) that I found to be very typical of a specific noise pattern:

Typical Alternating Current / Line noise
Typical EOG / Eye Movement
Typical Blinking
Extensive Blinking
Extensive Head Movement
Extensive Jaw Movement
Extensive Swallowing
Extensive Tooth Clenching

(The example component images are not reproduced here.)

PCA on ICA
Here, we use one of MNE-Python's example datasets to further investigate ICA (and to fool around a little). Can PCA and clustering on the ICA's component topology or power spectrum help us identify and select specific components for exclusion?

The following code is purely experimental and is not part of a conventional ICA analysis.

We start by loading the data.

[ ]
17m
# load the sample data from MNE datasets
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
                                    'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)

# crop data to 1 minute of recording time
raw.crop(tmax=60.)

# pick EEG channels only
picks = mne.pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False)
raw = raw.pick(picks)

# load the data and apply a highpass filter above 2 Hz
raw.load_data().filter(l_freq=2., h_freq=None)
Now we fit an ICA to the data. Two important features for spotting ICs are the topology and the frequency spectrum of a specific component. We can later use them to investigate features in component space.

[ ]
17m
# define and fit the ICA
ica = ICA(random_state=30042021)
ica.fit(raw)

# get the power spectrum of each component
ica_data = ica._transform_raw(raw, 0, len(raw.times))
psd, freqs = psd_array_welch(ica_data, raw.info["sfreq"], n_fft=64)

# get the topology of each component
topo = ica.get_components().T
Now let's see how the ICs can be differentiated based on either their Power Spectrum or their Topology. For this, we apply a PCA over the extracted ICA features, allowing us to spatially separate and plot the ICs along the 2 most important axes:

[ ]
17m
fig, axes = plt.subplots(1, 2, figsize=(20, 10))
for name, data, ax in [("PSD", psd, axes[0]), ("Topology", topo, axes[1])]:
  # extract the main axes of our data for plotting purposes
  vis_pca = PCA(n_components=2)
  scatter_data = vis_pca.fit_transform(StandardScaler().fit_transform(data))

  # plot the ICs across the main separation axes
  ax.scatter(scatter_data[:, 0], scatter_data[:, 1])  # , c=[scatter_data[:, 2]])
  ax.set_xlabel("1st component")
  ax.set_ylabel("2nd component")

  # plot their labels to recognize each IC
  for idx, coords in enumerate(scatter_data[:, :2]):
    ax.annotate(ica._ica_names[idx], coords)

  ax.set_title("Spatially separating ICA components based on " + name)

#plt.colorbar(label="3rd component")
The pattern on the left looks really interesting. Theoretically, this would be something that we are looking for, due to its clear clustering. However, if we inspect the clusters and their components (shown below), we can't find any cluster which would purely consist of noisy components. There are some noisy IC clusters visible, but they aren't very clearly outlined. It looks like noise/not noise is not among the most important differentiation factors of our ICs' topologies and frequency spectra. Also, both of these patterns will look completely different if we fit the ICA on the raw data from above instead of the MNE example data, meaning that even if there was something interesting to find here, these results would probably not generalize to a different EEG recording.

We can still combine the two features and see whether we can fit meaningful clusters on top of these ICs.

[ ]
17m
# select the input for the cluster
cluster_input = np.hstack([psd, topo])
#cluster_input = psd
#cluster_input = ica_data

# set the number of clusters
n_clusters = 3

# standardize the input and fit the clustering
cluster = KMeans(n_clusters).fit(StandardScaler().fit_transform(cluster_input))
#cluster = OPTICS().fit(StandardScaler().fit_transform(cluster_input))
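
Setting n_clusters = 3 above is an arbitrary choice. As a quick sanity check (a sketch added here, not part of the original analysis), we could scan a few cluster counts and compare their silhouette scores; higher scores indicate more compact, better-separated clusters:

[ ]
from sklearn.metrics import silhouette_score

scaled = StandardScaler().fit_transform(cluster_input)

# try a few cluster counts and report the silhouette score for each
for k in range(2, 8):
  labels_k = KMeans(n_clusters=k, n_init=10).fit_predict(scaled)
  print(f"k={k}: silhouette={silhouette_score(scaled, labels_k):.3f}")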
Now let's run a PCA again to spatially separate the ICs and plot which types of components our clustering algorithm has found.

[ ]
17m
# extract the main axes of our data for plotting purposes
vis_pca = PCA(n_components=2)
scatter_data = vis_pca.fit_transform(StandardScaler().fit_transform(cluster_input))

# cluster labels assigned to each IC
labels = cluster.labels_

# define the colors for our clusters
unique_labels = set(labels)
colors = [plt.cm.Spectral(each)
          for each in np.linspace(0, 1, len(unique_labels))]

fig = plt.figure(figsize=(10,10))
# plot each cluster class in a different color
for k, col in zip(unique_labels, colors):

    class_member_mask = (labels == k)

    # plot the location and colors of our ICs
    xy = scatter_data[class_member_mask]
    plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col),
             markeredgecolor='k', markersize=8)

# plot the IC labels
for idx, coords in enumerate(scatter_data):
  plt.annotate(ica._ica_names[idx], coords)

plt.title("ICA components marked via clustering")
plt.xlabel("Main Clustering Axis 1")
plt.ylabel("Main Clustering Axis 2")
plt.legend(["Cluster " + str(i) for i in unique_labels])
To see if the clustering makes any sense, we can pick some of the components and see how they are related:

[ ]
17m
ica.plot_properties(raw, picks=[0, 2, 7], verbose=0)
Similar to our PSD/topography plots, the clustering might help us select some of the noisier components; however, it is not reliable enough for practical use.

Sad :(((

Instead, we should probably stick to existing helper libraries like ICLabel, a plugin for MATLAB's EEGLAB (a Python port, mne-icalabel, is in progress).
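
For reference, here is a minimal sketch of what that could look like once the port is usable; it assumes the mne-icalabel package and its label_components function, whose exact API may still change:

[ ]
# sketch assuming the in-progress mne-icalabel package (pip install mne-icalabel);
# ICLabel expects 1-100 Hz filtered, average-referenced data
from mne_icalabel import label_components

labels = label_components(raw, ica, method="iclabel")
print(labels["labels"])        # predicted class per IC (brain, eye blink, muscle artifact, ...)
print(labels["y_pred_proba"])  # confidence of the winning class per IC

# exclude everything that is confidently not brain activity
ica.exclude = [idx for idx, lab in enumerate(labels["labels"])
               if lab not in ("brain", "other")]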

[ ]
17m
!pip install numpy scipy sounddevice

PSD as audio


[ ]
17m
import sounddevice as sd  # sounddevice was already installed by the previous cell
[ ]
17m
import numpy as np
from scipy.io import wavfile

# Define the audio parameters
sample_rate = 44100  # Adjust as needed
duration = 5  # Duration of the audio signal in seconds

# Normalize the power spectrum
normalized_psd = psd / np.max(psd)  # Normalize the power spectrum

# Reshape the normalized power spectrum into a 1D array
audio_signal = normalized_psd.ravel()

# Reduce the volume, then tile/truncate the signal to the desired duration
# (np.resize repeats the data as needed to fill the target length)
audio_signal *= 0.5  # Adjust the volume as needed
audio_signal = np.resize(audio_signal, int(sample_rate * duration))

# Convert the audio signal to the appropriate data type for writing to WAV
audio_signal = np.int16(audio_signal * 32767)

# Write the audio signal to a WAV file
wavfile.write('audio_signal3.wav', sample_rate, audio_signal)

# Play the audio signal (note: sounddevice needs a local audio device,
# so this will not produce sound on Colab's hosted runtime)
import sounddevice as sd
sd.play(audio_signal, sample_rate)
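
Note that the cell above plays the PSD values back as raw samples, so the result does not really sound like the spectrum it came from. A more spectrum-faithful alternative (my own sketch, not part of the original notebook) treats the averaged PSD as the magnitude spectrum of a short frame, assigns random phases, and inverts it with an inverse FFT; the EEG frequency bins are effectively upscaled onto the audio band:

[ ]
# sketch: synthesize audio whose magnitude spectrum follows the averaged PSD
rng = np.random.default_rng(0)

mag = np.sqrt(psd.mean(axis=0))  # average over components -> amplitude spectrum
phases = rng.uniform(0, 2 * np.pi, mag.size)
frame = np.fft.irfft(mag * np.exp(1j * phases))  # one short real-valued frame

# tile the frame to the desired duration and normalize to avoid clipping
signal = np.resize(frame, int(sample_rate * duration))
signal = signal / np.max(np.abs(signal)) * 0.5
wavfile.write('psd_ifft.wav', sample_rate, np.int16(signal * 32767))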


PSD as binaural beats

[ ]
17m
!pip install librosa

import numpy as np
import librosa
import soundfile as sf
from IPython.display import Audio

def generate_binaural_beats(base_frequency, beat_frequency, duration, audio_signal, volume=0.5, sample_rate=22050):
    t = np.linspace(0, duration, int(duration * sample_rate), endpoint=False)

    # Instead of sine waves, use audio_signal as the waveform.
    # Note: base_frequency, beat_frequency and t are never used here, and both
    # channels receive the identical signal, so no binaural beat is produced.
    audio_signal = np.resize(audio_signal, int(sample_rate * duration))

    left_channel = audio_signal
    right_channel = audio_signal

    stereo_audio = np.vstack((left_channel, right_channel)).T
    stereo_audio = librosa.util.normalize(stereo_audio) * volume

    return stereo_audio, sample_rate

# Set the desired frequencies and duration
base_frequency = 220  # Adjust the base frequency as needed
beat_frequency = 20  # Adjust the beat frequency as needed
duration = 10  # Adjust the duration (in seconds) as needed

# Rebuild the audio signal from the PSD (nothing is loaded from a file;
# duration was already set above)
sample_rate = 44100  # Adjust as needed

# Normalize the power spectrum
normalized_psd = psd / np.max(psd)  # Normalize the power spectrum

# Reshape the normalized power spectrum into a 1D array
audio_signal = normalized_psd.ravel()

# Rescale the audio signal to match the desired duration
audio_signal *= 0.5  # Adjust the volume as needed
audio_signal = np.resize(audio_signal, int(sample_rate * duration))

# Generate the binaural beats (pass sample_rate explicitly, otherwise the
# function's 22050 default would mismatch the 44100 Hz signal built above)
binaural_beats, sample_rate = generate_binaural_beats(base_frequency, beat_frequency, duration, audio_signal, sample_rate=sample_rate)

# Save the binaural beats to a file
sf.write("binaural_beats.wav", binaural_beats, sample_rate, 'PCM_16')

# Play the binaural beats in Colab
Audio("binaural_beats.wav")




Next, let's alter left_channel and right_channel so that subtracting left_channel from right_channel recovers the PSD:

[ ]
17m
import numpy as np
import soundfile as sf
from IPython.display import Audio

def generate_binaural_beats(base_frequency, beat_frequency, duration, audio_signal, volume=0.5, sample_rate=22050):
    t = np.linspace(0, duration, int(duration * sample_rate), endpoint=False)

    # Instead of sine waves, use audio_signal as the waveform.
    # Caveat: right - left is then just a constant (the negative mean),
    # not the PSD, so the stated goal is not met (see the sketch below).
    left_channel = audio_signal
    right_channel = audio_signal - audio_signal.mean()  # subtract the mean of audio_signal

    stereo_audio = np.vstack((left_channel, right_channel)).T
    stereo_audio = stereo_audio * volume

    return stereo_audio, sample_rate

# Set the desired frequencies and duration
base_frequency = 220  # Adjust the base frequency as needed
beat_frequency = 20  # Adjust the beat frequency as needed
duration = 10  # Adjust the duration (in seconds) as needed

# Reshape the power spectrum into a 1D array (note: the PSD is not
# normalized here, so large values may clip when written as PCM_16)
audio_signal = psd.ravel()

# Reduce the volume, then tile/truncate the signal to the desired duration
# (sample_rate carries over from the previous cell)
audio_signal *= 0.5  # Adjust the volume as needed
audio_signal = np.resize(audio_signal, int(sample_rate * duration))

# Generate the binaural beats
binaural_beats, sample_rate = generate_binaural_beats(base_frequency, beat_frequency, duration, audio_signal, volume=0.5, sample_rate=sample_rate)

# Save the binaural beats to a file
sf.write("binaural_beats.wav", binaural_beats, sample_rate, 'PCM_16')

# Play the binaural beats
Audio("binaural_beats.wav")




Next step: use dance-diffusion to interpolate between the epileptic and control-group audio: https://colab.research.google.com/drive/1vhT-qxAZTM8fpDCUWdIzyByrXXVRwdkx?usp=sharing
