
EEG-based emotion decoding (an LSTM on PSD features) paired with Neural Style Transfer using Adaptive Instance Normalization (AdaIN) to generate emotion-driven, stylized artwork. Built on the EmoEEG-MC dataset.


Brain Emotional Decoder: EEG-Driven Artistic Style Transfer

🧠 Project Overview

The Brain Emotional Decoder is an interdisciplinary project that integrates Affective Computing (decoding emotions from brain signals) with Computational Creativity (generating art).

The pipeline connects these two domains:

  1. A Long Short-Term Memory (LSTM) network classifies a dominant emotion from pre-processed EEG data.
  2. The resulting emotion dictates the Content Image for a subsequent Neural Style Transfer (NST) process.
  3. The style of a user-selected Painter's Artwork is then transferred to the emotion-matched content image using an NST implementation based on Adaptive Instance Normalization (AdaIN).
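The three stages above can be sketched as a single orchestration function. This is a minimal illustration of the pipeline's data flow, not the repository's actual API: the function names, stubbed classifier, and file paths are all hypothetical.

```python
# Hypothetical sketch of the three-stage pipeline; names and paths are
# illustrative, not the repository's actual code.

EMOTIONS = ["Joy", "Inspiration", "Tenderness", "Fear", "Disgust", "Sadness", "Neutral"]

def decode_emotion(psd_features):
    """Stage 1: an LSTM classifier would map PSD features to an emotion label.
    Stubbed here with a fixed prediction for illustration."""
    return "Disgust"

def content_image_for(emotion):
    """Stage 2: file-based lookup of the emotion-matched content image."""
    return f"content_images/{emotion.lower()}.jpg"

def stylize(content_path, style_path):
    """Stage 3: an AdaIN-based NST model would blend the two images.
    Stubbed as a string for illustration."""
    return f"stylized({content_path}, {style_path})"

def run_pipeline(psd_features, style_path):
    emotion = decode_emotion(psd_features)
    content = content_image_for(emotion)
    return emotion, stylize(content, style_path)
```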

💾 Dataset

This project is built on EmoEEG-MC: A Multi-Context Emotional EEG Dataset for Cross-Context Emotion Decoding.

  • Source: OpenNeuro
  • Accession Number: ds005540
  • Original Publication: Please cite the corresponding article:

    X. Xu, X. Shen, X. Chen, Q. Zhang, S. Wang, Y. Li, Z. Li, D. Zhang, M. Zhang, and Q. Liu, “A Multi-Context Emotional EEG Dataset for Cross-Context Emotion Decoding,” Scientific Data, 2024. (or the version corresponding to the time of use)

  • Key Features:
    • 64-channel EEG and peripheral physiological data from 60 participants.
    • Seven Emotional Categories: Joy, Inspiration, Tenderness, Fear, Disgust, Sadness, and Neutral Emotion.
    • Emotions are elicited in two contexts: video-induced and imagery-induced.

⚙️ Methodology

1. EEG Emotion Classification (LSTM on PSD Features)

Due to resource limitations for local EEG signal processing, this project operates on pre-extracted features.

  • Input: Pre-computed Power Spectral Density (PSD) features extracted from the EEG signals, loaded from file (e.g., a .psd file).
  • Model: A Long Short-Term Memory (LSTM) network is used. LSTMs are effective for modeling the temporal dependencies and sequences inherent in EEG features.
  • Classification Strategy (Experimental):
    • Each trial (approximately 30 seconds) is processed as a sequence.
    • The model predicts an emotion distribution for the sequence.
    • The final predicted label is the most frequently predicted emotion across the sequence (the 'dominant emotion'). This is an experimental framing to simplify the problem from frame-tagging to a single summary classification per trial.
  • Output: The single dominant emotional label (e.g., 'Disgust', as seen in the example).
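The classification strategy above can be sketched in PyTorch. This is a minimal sketch, assuming a per-timestep LSTM classifier followed by a majority vote; the layer sizes and the feature dimension are assumptions, not the repository's actual hyperparameters.

```python
import torch
import torch.nn as nn
from collections import Counter

class EmotionLSTM(nn.Module):
    """Sketch of an LSTM emotion classifier over PSD feature sequences.
    n_features, hidden size, and layer count are assumed values."""
    def __init__(self, n_features=320, hidden=128, n_classes=7):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):          # x: (batch, time, n_features)
        out, _ = self.lstm(x)      # per-timestep hidden states
        return self.head(out)      # per-timestep class logits

def dominant_emotion(logits):
    """Majority vote over per-timestep predictions: the most frequently
    predicted class across the sequence is the 'dominant emotion'."""
    preds = logits.argmax(dim=-1).flatten().tolist()
    return Counter(preds).most_common(1)[0][0]
```

A trial of roughly 30 timesteps would be passed through the model once, and the vote collapses the per-timestep predictions into the single summary label described above.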

2. Emotional Artistic Style Transfer

The classified dominant emotion drives the final artistic output.

  • Content Image Selection: A pre-selected image corresponding to the predicted dominant emotion is retrieved from the file system.
    • (Note: While generating emotion-specific images with models like Diffusion Models is ideal, a simpler, file-based lookup is used here due to hardware constraints.)
  • Style Image Selection: The user specifies a painting (e.g., a Van Gogh) to provide the artistic texture and colors.
  • Neural Style Transfer (NST): The content and style images are merged using an NST model.
    • Core Technique: The model utilizes Adaptive Instance Normalization (AdaIN), which is known to accelerate and improve the quality of style transfer by normalizing the feature statistics of the content image to match those of the style image.
  • Output: The final stylized image.
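The core AdaIN operation mentioned above is simple enough to state directly: the content feature map is normalized per channel, then rescaled and shifted by the style feature statistics. A minimal sketch (operating on encoder feature maps of shape (N, C, H, W); the full pipeline also needs a VGG encoder and a trained decoder, omitted here):

```python
import torch

def adain(content_feat, style_feat, eps=1e-5):
    """Adaptive Instance Normalization: align the per-channel mean and
    std of the content features with those of the style features."""
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content_feat - c_mean) / c_std + s_mean
```

Because the operation is a closed-form re-normalization rather than an iterative optimization, a single forward pass suffices, which is why AdaIN-based transfer is fast compared to the original Gatys-style approach.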

✨ Project Visualization

Experience the Brain Emotional Decoder directly on Hugging Face Spaces!

🔗 Live Demo: https://huggingface.co/spaces/Ihssane123/Brain_Emotion_Decoder

Application Screenshots

Step 1: File Analysis and Style Selection. Screenshot of the Brain Emotion Decoder interface, showing PSD file selection, the emotion distribution (Disgust is dominant), and selection of Van Gogh's The Starry Night.

Step 2: Generated Artwork. Screenshot of the generated artwork output, showing the content image corresponding to 'Disgust' and the blended style image after applying the style of The Starry Night.

🤝 Credits and Acknowledgements

We acknowledge and are deeply grateful for the foundational work that made this project possible.

  • Dataset & Article: The EmoEEG-MC Research Team (OpenNeuro accession ds005540).
  • Neural Style Transfer Code: the Instance Normalization-based style transfer was adapted from the publicly available pytorch-AdaIN implementation.

🚀 Installation and Usage

  1. Clone the repository:

    git clone https://github.com/Ihssane5/brain-emotional-decoder.git
    cd brain-emotional-decoder
  2. Install dependencies:

    pip install -r requirements.txt
  3. Run the script:

    gradio app.py
  4. Explore the interface:

    • Upload your EEG PSD data file
    • Select a style image from famous artists
    • View the emotion analysis and resulting artistic output

📝 Conclusion

This project demonstrates the intersection of neuroscience, affective computing, and computational art. By bridging brain activity patterns with creative expression, we hope to inspire further exploration of how our internal emotional states can be visualized through AI-assisted artistic representation.

For questions, contributions, or feedback, please open an issue or submit a pull request.
