EmoSynth Real Time Emotion-Driven Sound Texture Synthesis via Brain-Computer Interface

ACM WeBIUM24 - 1st Workshop on Wearable Devices and Brain-Computer Interfaces for User Modelling - In conjunction with the 32nd ACM Conference on User Modeling, Adaptation and Personalization (UMAP 2024) - 2024

Authors

Colafiglio Tommaso, Lofù Domenico, Sorino Paolo, Lombardi Angela, Narducci Fedelucio, Festa Fabrizio, Di Noia Tommaso

Download: EmoSynth24_Preprint.pdf

DOI

https://doi.org/10.1007/978-3-030-85607-6_39

BibTeX reference

@InProceedings{CLSLNFD24,
  author       = "Colafiglio, Tommaso and Lof\`u, Domenico and Sorino, Paolo and Lombardi, Angela and Narducci, Fedelucio and Festa, Fabrizio and Di Noia, Tommaso",
  title        = "EmoSynth Real Time Emotion-Driven Sound Texture Synthesis via Brain-Computer Interface",
  booktitle    = "ACM WeBIUM24 - 1st Workshop on Wearable Devices and Brain-Computer Interfaces for User Modelling - In conjunction with the 32nd ACM Conference on User Modeling, Adaptation and Personalization (UMAP 2024)",
  year         = "2024",
  url          = "http://sisinflab.poliba.it/Publications/2024/CLSLNFD24"
}