Question about OSC streaming and Seeking Advice on Data Handling Approaches


Post by ojjjasdkpdk »

Hello! I'm currently working on a project similar to the one discussed here: viewtopic.php?t=1708. My project involves:

1. Streaming absolute values from all frequency bands.
2. Once 10 values from each frequency band have accumulated, I extract features from each column (a quick sanity check of steps 2 and 3 is sketched just after this list).
3. These extracted features are then fed into a pre-prepared Random Forest classifier. If the prediction is 1 (Mind Wandering), a sound is immediately played for 0.2 seconds.
4. This prediction process repeats with every new set of the latest 10 rows.
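For reference, each window of 10 rows and 20 columns turns into an 80-dimensional feature vector (mean, std, max and min for each column). A quick offline sanity check of steps 2 and 3, assuming the extract_features function and the already-trained classifier from my code further down, could look like this:

Code: Select all

import numpy as np

# 10 rows x 20 band/channel columns of dummy band-power values
dummy_window = np.random.rand(10, 20)

features = extract_features(dummy_window)  # mean, std, max, min per column
print(features.shape)                      # (80,)

prediction = classifier.predict([features])
print(prediction)                          # e.g. [1] -> Mind Wandering, play sound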

Here is my question:
Currently, following the approach in the forum's "OSC Receiver.py", I'm recording the data into a CSV file, from which I extract the latest 10 rows for feature extraction, prediction, and feedback.

However, I'm considering an alternative approach based on "OSC Receiver Audio Feedback.py". In that method, once each plot_data[wave] holds more than 10 elements, I would use plot_data[wave] = plot_data[wave][-plot_val_count:] to keep only the latest 10 data points per frequency band for feature extraction and prediction. This seems more straightforward and possibly faster. Could I get some advice on which approach might be better?
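To make it concrete, here is a rough sketch of how I imagine the in-memory version. plot_data and the band names come from "OSC Receiver Audio Feedback.py"; extract_features, classifier and the sound playback are from my code further down; everything else is just my assumption, with a deque(maxlen=...) standing in for the plot_data[wave] = plot_data[wave][-plot_val_count:] slicing:

Code: Select all

from collections import deque
import os

window_size = 10  # rows per prediction window
bands = ['Delta', 'Theta', 'Alpha', 'Beta', 'Gamma']

# One bounded buffer per band; each entry is a tuple of 4 channel values (TP9, AF7, AF8, TP10)
plot_data = {band: deque(maxlen=window_size) for band in bands}

def on_band_values(band, values):
    """Called whenever a new set of four channel values arrives for one band."""
    plot_data[band].append(values)

    # Predict only when every band has a full window of 10 rows
    if all(len(plot_data[b]) == window_size for b in bands):
        # Rebuild the 10 x 20 window in the same column order used for training
        window = [[v for b in bands for v in plot_data[b][i]]
                  for i in range(window_size)]
        features = extract_features(window)
        prediction = classifier.predict([features])
        if prediction[0] == 1:
            os.system('afplay -t 0.2 B.mp3')  # play feedback sound for 0.2 s

One thing I notice with this sketch: the window slides by one row on every new sample, so it predicts much more often than the CSV version, which handles non-overlapping blocks of 10 rows. If I want the block behaviour, I would clear the buffers after each prediction instead.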

Also, if you have any additional advice on my code, please let me know. I still haven't been able to run it because of a router issue.

Here's my code:

Code: Select all

import os
import time
import threading
from datetime import datetime

import numpy as np
import pandas as pd
from pythonosc import dispatcher, osc_server
from sklearn.ensemble import RandomForestClassifier

# Function to extract features (mean, std, max, min per column) from a window of rows
def extract_features(data):
    df = pd.DataFrame(data, columns=['Delta_TP9', 'Delta_AF7', 'Delta_AF8', 'Delta_TP10',
                                     'Theta_TP9', 'Theta_AF7', 'Theta_AF8', 'Theta_TP10',
                                     'Alpha_TP9', 'Alpha_AF7', 'Alpha_AF8', 'Alpha_TP10',
                                     'Beta_TP9', 'Beta_AF7', 'Beta_AF8', 'Beta_TP10',
                                     'Gamma_TP9', 'Gamma_AF7', 'Gamma_AF8', 'Gamma_TP10'])
    mean_signal = df.mean(axis=0)
    std_signal = df.std(axis=0)
    max_signal = df.max(axis=0)
    min_signal = df.min(axis=0)
    return np.concatenate((mean_signal, std_signal, max_signal, min_signal))

# Setting path for the CSV file
csv_file_path = 'CSV-file.csv'
f = open(csv_file_path, 'w+')

# Variable to hold the index of the last processed line
last_processed_index = -1

# Boolean variable to control the start and stop of data recording
recording = True

# Flags marking which bands have arrived for the current row
deltaReceived = thetaReceived = alphaReceived = betaReceived = gammaReceived = False

# Function to write column names as CSV file header
def writeFileHeader():
    headerString = "TimeStamp,Delta_TP9,Delta_AF7,Delta_AF8,Delta_TP10," \
                 "Theta_TP9,Theta_AF7,Theta_AF8,Theta_TP10," \
                 "Alpha_TP9,Alpha_AF7,Alpha_AF8,Alpha_TP10," \
                 "Beta_TP9,Beta_AF7,Beta_AF8,Beta_TP10," \
                 "Gamma_TP9,Gamma_AF7,Gamma_AF8,Gamma_TP10\n"
    f.write(headerString)
    f.flush()

# Write the column header once, before any data arrives
writeFileHeader()

# Buffer for the most recent values of each band (0=delta, 1=theta, 2=alpha, 3=beta, 4=gamma)
band_values = [None] * 5

# Function called upon receiving data; the band index is the extra argument given in dispatcher.map,
# which python-osc delivers to the handler as a one-element list
def eeg_handler(address: str, band_index, *args):
    global deltaReceived, thetaReceived, alphaReceived, betaReceived, gammaReceived

    if not recording:
        return

    band = band_index[0]
    band_values[band] = args  # the four channel values: TP9, AF7, AF8, TP10

    # Mark this band as received
    if band == 0:
        deltaReceived = True
    elif band == 1:
        thetaReceived = True
    elif band == 2:
        alphaReceived = True
    elif band == 3:
        betaReceived = True
    elif band == 4:
        gammaReceived = True

    # Check if all waveform data has been received; if so, write one complete row to the CSV
    if deltaReceived and thetaReceived and alphaReceived and betaReceived and gammaReceived:
        timestampStr = datetime.now().strftime("%Y-%m-%d %H:%M:%S.%f")
        row = [v for band_data in band_values for v in band_data]
        fileString = timestampStr + "," + ",".join(map(str, row)) + "\n"
        f.write(fileString)
        f.flush()
        # Reset flags
        deltaReceived = thetaReceived = alphaReceived = betaReceived = gammaReceived = False

# Setting up and training the model (X and y are the pre-prepared training features and labels)
classifier = RandomForestClassifier(random_state=42)
classifier.fit(X, y)

# Configuring the OSC server
dispatcher = dispatcher.Dispatcher()
#dispatcher.map("/muse/eeg", handle_eeg, "EEG")
dispatcher.map("/muse/elements/delta_absolute", eeg_handler, 0)
dispatcher.map("/muse/elements/theta_absolute", eeg_handler, 1)
dispatcher.map("/muse/elements/alpha_absolute", eeg_handler, 2)
dispatcher.map("/muse/elements/beta_absolute", eeg_handler, 3)
dispatcher.map("/muse/elements/gamma_absolute", eeg_handler, 4)

server = osc_server.ThreadingOSCUDPServer(("0.0.0.0", 5000), dispatcher)

# Running the OSC server in a separate thread so the monitoring loop below can run in the main thread
server_thread = threading.Thread(target=server.serve_forever)
server_thread.daemon = True
server_thread.start()

# Infinite loop to monitor the CSV file
while recording:
    # Read the CSV file
    df = pd.read_csv(csv_file_path)

    # Check if 10 or more unprocessed rows have accumulated
    if len(df) - last_processed_index > 10:
        # Get the next 10 unprocessed rows
        new_data = df.iloc[last_processed_index + 1:last_processed_index + 11]

        # Feature extraction and prediction
        features = extract_features(new_data)
        prediction = classifier.predict([features])
        print("Prediction:", prediction)

        if prediction[0] == 1:
            os.system('afplay -t 0.2 B.mp3')

        # Update the index of the last processed row
        last_processed_index += 10

    # Wait for half a second before checking again
    time.sleep(0.5)
Thank you so much