Publications

Selected Journal Publications since 2018

  • M. Littmann, K. Selig, L. Cohen, Y. Frank, P. Honigschmid, E. Kataka, A. Mosch, K. Qian, A. Ron, S. Schmid, A. Sorbie, L. Szlak, A. Dagan-Wiener, N. Ben-Tal, M. Y. Niv, D. Razansky, B. W. Schuller, D. Ankerst, T. Hertz, and B. Rost, “Validity of machine learning in biology and medicine increased through collaborations across fields of expertise,” Nature Machine Intelligence, vol. 2, 2020. 12 pages.
  • B. Schuller, “Micro-Expressions – A Chance for Computers to Beat Humans at Revealing Hidden Emotions?,” IEEE Computer Magazine, vol. 52, February 2019. 2 pages, to appear (IF: 1.940, 5-year IF: 2.113 (2017))
  • B. Schuller, “Responding to Uncertainty in Emotion Recognition,” Journal of Information, Communication & Ethics in Society, vol. 17, no. 2, 2019. 4 pages, invited contribution, to appear
  • D. Schuller and B. Schuller, “Speech Emotion Recognition: Three Recent Major Changes in Computational Modelling,” Emotion Review, Special Issue on Emotions and the Voice, vol. 11, 2019. 10 pages, invited contribution, to appear (IF: 3.780, 5-year IF: 5.129 (2017))
  • S. Amiriparian, N. Cummins, M. Gerczuk, S. Pugachevskiy, S. Ottl, and B. Schuller, ““Are You Playing a Shooter Again?!” Deep Representation Learning for Audio-based Video Game Genre Recognition,” IEEE Transactions on Games, vol. 11, 2019. 11 pages, to appear
  • S. Amiriparian, J. Han, M. Schmitt, A. Baird, A. Mallol-Ragolta, M. Milling, M. Gerczuk, and B. Schuller, “Synchronisation in Interpersonal Speech,” Frontiers in Robotics and AI, section Humanoid Robotics, Special Issue on Computational Approaches for Human-Human and Human-Robot Social Interactions, vol. 6, 2019. 16 pages, Manuscript ID: 457845, to appear
  • F. Dong, K. Qian, Z. Ren, A. Baird, X. Li, Z. Dai, B. Dong, F. Metze, Y. Yamamoto, and B. Schuller, “Machine Listening for Heart Status Monitoring: Introducing and Benchmarking HSS – the Heart Sounds Shenzhen Corpus,” IEEE Journal of Biomedical and Health Informatics, vol. 23, 2019. 11 pages, to appear (IF: 4.217 (2018))
  • J. Han, Z. Zhang, N. Cummins, and B. Schuller, “Adversarial Training in Affective Computing and Sentiment Analysis: Recent Advances and Perspectives,” IEEE Computational Intelligence Magazine, Special Issue on Computational Intelligence for Affective Computing and Sentiment Analysis, vol. 14, pp. 68–81, May 2019. (IF: 6.611 (2017))
  • J. Han, Z. Zhang, Z. Ren, and B. Schuller, “Exploring Perception Uncertainty for Emotion Recognition in Dyadic Conversation and Music Listening,” Cognitive Computation, Special Issue on Affect Recognition in Multimodal Language, vol. 11, 2019. 10 pages, to appear (IF: 4.287 (2018))
  • J. Han, Z. Zhang, Z. Ren, and B. Schuller, “EmoBed: Strengthening Monomodal Emotion Recognition via Training with Crossmodal Emotion Embeddings,” IEEE Transactions on Affective Computing, vol. 10, 2019. 12 pages, to appear (IF: 6.288 (2018))
  • G. Keren, S. Sabato, and B. Schuller, “Analysis of Loss Functions for Fast Single-Class Classification,” Knowledge and Information Systems, vol. 59, 2019. 12 pages, invited as one of the best papers from ICDM 2018, to appear (IF: 2.247 (2017))
  • D. Kollias, P. Tzirakis, M. A. Nicolaou, A. Papaioannou, G. Zhao, B. Schuller, I. Kotsia, and S. Zafeiriou, “Deep Affect Prediction in-the-Wild: Aff-Wild Database and Challenge, Deep Architectures, and Beyond,” International Journal of Computer Vision, vol. 127, pp. 907–929, June 2019. (IF: 11.541 (2017))
  • J. Kossaifi, R. Walecki, Y. Panagakis, J. Shen, M. Schmitt, F. Ringeval, J. Han, V. Pandit, B. Schuller, K. Star, E. Hajiyev, and M. Pantic, “SEWA DB: A Rich Database for Audio-Visual Emotion and Sentiment Research in the Wild,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, 2019. 17 pages, to appear (IF: 17.730 (2018))
  • E. Parada-Cabaleiro, G. Costantini, A. Batliner, M. Schmitt, and B. W. Schuller, “DEMoS – An Italian Emotional Speech Corpus – Elicitation methods, machine learning, and perception,” Language Resources and Evaluation, vol. 53, 2019. 43 pages, to appear (IF: 0.656 (2017))
  • K. Qian, M. Schmitt, C. Janott, Z. Zhang, C. Heiser, W. Hohenhorst, M. Herzog, W. Hemmert, and B. Schuller, “A Bag of Wavelet Features for Snore Sound Classification,” Annals of Biomedical Engineering, 2019. 16 pages, to appear (IF: 3.405 (2017))
  • Y. Xie, R. Liang, Z. Liang, C. Huang, C. Zou, and B. Schuller, “Speech Emotion Classification Using Attention-based LSTM,” IEEE/ACM Transactions on Audio, Speech and Language Processing, vol. 27, pp. 1675–1685, November 2019. (IF: 3.531 (2018))
  • X. Xu, J. Deng, E. Coutinho, C. Wu, L. Zhao, and B. Schuller, “Connecting Subspace Learning and Extreme Learning Machine in Speech Emotion Recognition,” IEEE Transactions on Multimedia, vol. 21, pp. 795–808, March 2019. (IF: 3.509 (2016))
  • Y. Zhang, F. Weninger, A. Michi, J. Wagner, E. Andre, and B. Schuller, “A Generic Human-Machine Annotation Framework Using Dynamic Cooperative Learning with a Deep Learning-based Confidence Measure,” IEEE Transactions on Cybernetics, 2019. 11 pages, to appear (IF: 10.387 (2018))
  • Z. Zhang, J. Han, K. Qian, C. Janott, Y. Guo, and B. Schuller, “Snore-GANs: Improving Automatic Snore Sound Classification with Synthesized Data,” IEEE Journal of Biomedical and Health Informatics, vol. 23, 2019. 11 pages, to appear (IF: 4.217 (2018))
  • Z. Zhao, Z. Bao, Z. Zhang, J. Deng, N. Cummins, H. Wang, J. Tao, and B. Schuller, “Automatic Assessment of Depression from Speech via a Hierarchical Attention Transfer Network and Attention Autoencoders,” IEEE Journal of Selected Topics in Signal Processing, Special Issue on Automatic Assessment of Health Disorders Based on Voice, Speech and Language Processing, vol. 13, 2019. 11 pages, to appear (IF: 6.688 (2018))
  • Z. Zhao, Z. Bao, Y. Zhao, Z. Zhang, N. Cummins, Z. Ren, and B. Schuller, “Exploring Deep Spectrum Representations via Attention-based Recurrent and Convolutional Neural Networks for Speech Emotion Recognition,” IEEE Access, vol. 7, pp. 97515–97525, July 2019. (IF: 4.098 (2018))
  • B. Schuller, Y. Zhang, and F. Weninger, “Three Recent Trends in Paralinguistics on the Way to Omniscient Machine Intelligence,” Journal on Multimodal User Interfaces, Special Issue on Speech Communication, vol. 12, pp. 273–283, 2018. (IF: 1.140, 5-year IF: 0.872 (2017))
  • B. Schuller, F. Weninger, Y. Zhang, F. Ringeval, A. Batliner, S. Steidl, F. Eyben, E. Marchi, A. Vinciarelli, K. Scherer, M. Chetouani, and M. Mortillaro, “Affective and Behavioural Computing: Lessons Learnt from the First Computational Paralinguistics Challenge,” Computer Speech and Language, vol. 53, pp. 156–180, January 2019. (IF: 1.900, 5-year IF: 1.938 (2016))
  • B. Schuller, “Speech Emotion Recognition: Two Decades in a Nutshell, Benchmarks, and Ongoing Trends,” Communications of the ACM, vol. 61, pp. 90–99, May 2018. Feature Article (IF: 4.027, 5-year IF: 6.469 (2016))
  • B. Schuller, “What Affective Computing Reveals about Autistic Children’s Facial Expressions of Joy or Fear,” IEEE Computer Magazine, vol. 51, pp. 40–41, June 2018. (IF: 1.940, 5-year IF: 2.113 (2017))
  • A. Baird, S. H. Jorgensen, E. Parada-Cabaleiro, S. Hantke, N. Cummins, and B. Schuller, “Listener Perception of Vocal Traits in Synthesized Voices: Age, Gender, and Human-Likeness,” Journal of the Audio Engineering Society, Special Issue on Augmented and Participatory Sound and Music Interaction using Semantic Audio, vol. 66, 2018. 8 pages, to appear (IF: 0.707, 5-year IF: 0.832 (2016))
  • E. Coutinho, K. Gentsch, J. van Peer, K. R. Scherer, and B. Schuller, “Evidence of Emotion-Antecedent Appraisal Checks in Electroencephalography and Facial Electromyography,” PLoS ONE, vol. 13, pp. 1–19, January 2018. (IF: 2.806 (2016))
  • N. Cummins, B. W. Schuller, and A. Baird, “Speech analysis for health: Current state-of-the-art and the increasing impact of deep learning,” Methods, Special Issue on Translational data analytics and health informatics, 2018. 25 pages, to appear (IF: 3.998, 5-year IF: 3.936 (2017))
  • J. Deng, X. Xu, Z. Zhang, S. Frühholz, and B. Schuller, “Semi-Supervised Autoencoders for Speech Emotion Recognition,” IEEE/ACM Transactions on Audio, Speech and Language Processing, vol. 26, no. 1, pp. 31–43, 2018. (IF: 2.950, 5-year IF: 3.253 (2017))
  • M. Freitag, S. Amiriparian, S. Pugachevskiy, N. Cummins, and B. Schuller, “auDeep: Unsupervised Learning of Representations from Audio with Deep Recurrent Neural Networks,” Journal of Machine Learning Research, vol. 18, pp. 1–5, April 2018. (IF: 5.000, 5-year IF: 7.649 (2016))
  • K. Grabowski, A. Rynkiewicz, A. Lassalle, S. Baron-Cohen, B. Schuller, N. Cummins, A. E. Baird, J. Podgorska-Bednarz, A. Pieniazek, and I. Lucka, “Emotional expression in psychiatric conditions – new technology for clinicians,” Psychiatry and Clinical Neurosciences, vol. 1, 2018. 17 pages, to appear (IF: 3.199 (2017))
  • J. Han, Z. Zhang, G. Keren, and B. Schuller, “Emotion Recognition in Speech with Latent Discriminative Representations Learning,” Acta Acustica united with Acustica, vol. 104, pp. 737–740, September 2018. (IF: 1.119, 5-year IF: 0.971 (2016))
  • S. Hantke, T. Olenyi, C. Hausner, and B. Schuller, “Large-Scale Data Collection and Analysis via a Gamified Intelligent Crowdsourcing Platform,” International Journal of Automation and Computing, vol. 15, 2018. 10 pages, invited as one of the 8 % best papers of ACII Asia 2018, to appear
  • S. Hantke, A. Abstreiter, N. Cummins, and B. Schuller, “Trustability-based Dynamic Active Learning for Crowdsourced Labelling of Emotional Audio Data,” IEEE Access, vol. 6, July 2018. (IF: 3.557, 5-year IF: 4.199 (2017))
  • C. Janott, M. Schmitt, Y. Zhang, K. Qian, V. Pandit, Z. Zhang, C. Heiser, W. Hohenhorst, M. Herzog, W. Hemmert, and B. Schuller, “Snoring Classified: The Munich Passau Snore Sound Corpus,” Computers in Biology and Medicine, vol. 94, pp. 106–118, March 2018. (IF: 2.115, 5-year IF: 2.168 (2017))
  • S. Jing, X. Mao, L. Chen, M. C. Comes, A. Mencattini, G. Raguso, F. Ringeval, B. Schuller, C. D. Natale, and E. Martinelli, “A closed-form solution to the graph total variation problem for continuous emotion profiling in noisy environment,” Speech Communication, vol. 104, pp. 66–72, November 2018. (acceptance rate: 38 %, IF: 1.585, 5-year IF: 1.660 (2017))
  • G. Keren, N. Cummins, and B. Schuller, “Calibrated Prediction Intervals for Neural Network Regressors,” IEEE Access, vol. 6, 2018. 9 pages, to appear (IF: 3.557, 5-year IF: 4.199 (2017))
  • F. Lingenfelser, J. Wagner, J. Deng, R. Brueckner, B. Schuller, and E. Andre, “Asynchronous and Event-based Fusion Systems for Affect Recognition on Naturalistic Data in Comparison to Conventional Approaches,” IEEE Transactions on Affective Computing, vol. 9, pp. 410–423, October – December 2018. (IF: 4.585, 5-year IF: 5.977 (2017))
  • E. Marchi, B. Schuller, A. Baird, S. Baron-Cohen, A. Lassalle, H. O’Reilly, D. Pigat, P. Robinson, I. Davies, T. Baltrusaitis, O. Golan, S. Fridenson-Hayo, S. Tal, S. Newman, N. Meir-Goren, A. Camurri, S. Piana, S. Bolte, M. Sezgin, N. Alyuz, A. Rynkiewicz, and A. Baranger, “The ASC-Inclusion Perceptual Serious Gaming Platform for Autistic Children,” IEEE Transactions on Computational Intelligence and AI in Games, Special Issue on Computational Intelligence in Serious Digital Games, 2018. 12 pages, to appear (IF: 1.113, 5-year IF: 2.165 (2016))
  • A. Mencattini, F. Mosciano, M. Colomba Comes, T. De Gregorio, G. Raguso, E. Daprati, F. Ringeval, B. Schuller, and E. Martinelli, “An emotional modulation model as signature for the identification of children developmental disorders,” Nature Scientific Reports, vol. 8, Article 14487, pp. 1–12, 2018. (IF: 4.122 (2017))
  • F. B. Pokorny, K. D. Bartl-Pokorny, C. Einspieler, D. Zhang, R. Vollmann, S. Bolte, H. Tager-Flusberg, M. Gugatschka, B. W. Schuller, and P. B. Marschik, “Typical vs. atypical: Combining auditory Gestalt perception and acoustic analysis of early vocalisations in Rett syndrome,” Research in Developmental Disabilities, vol. 82, pp. 109–119, November 2018. (IF: 1.820, 5-year IF: 2.344 (2017))
  • K. Qian, C. Janott, Z. Zhang, J. Deng, A. Baird, C. Heiser, W. Hohenhorst, M. Herzog, W. Hemmert, and B. Schuller, “Teaching Machines on Snoring: A Benchmark on Computer Audition for Snore Sound Excitation Localisation,” Archives of Acoustics, vol. 43, no. 3, pp. 465–475, 2018. (IF: 0.917, 5-year IF: 0.819 (2017))
  • Z. Ren, K. Qian, Z. Zhang, V. Pandit, A. Baird, and B. Schuller, “Deep Scalogram Representations for Acoustic Scene Classification,” IEEE/CAA Journal of Automatica Sinica, vol. 5, no. 3, pp. 662–669, 2018. invited contribution
  • L. Roche, D. Zhang, F. B. Pokorny, B. W. Schuller, G. Esposito, S. Bolte, H. Roeyers, L. Poustka, K. D. Bartl-Pokorny, M. Gugatschka, H. Waddington, R. Vollmann, C. Einspieler, and P. B. Marschik, “Early Vocal Development in Autism Spectrum Disorders, Rett Syndrome, and Fragile X Syndrome: Insights from Studies using Retrospective Video Analysis,” Advances in Neurodevelopmental Disorders, vol. 2, pp. 49–61, March 2018
  • O. Rudovic, J. Lee, M. Dai, B. Schuller, and R. W. Picard, “Personalized machine learning for robot perception of affect and engagement in autism therapy,” Science Robotics, vol. 3, June 2018. 12 pages
  • D. Schuller and B. Schuller, “Speech Emotion Recognition – An Overview on Recent Trends and Future Avenues,” International Journal of Automation and Computing, vol. 15, 2018. 10 pages, invited contribution, to appear
  • D. Schuller and B. Schuller, “The Age of Artificial Emotional Intelligence,” IEEE Computer Magazine, Special Issue on The Future of Artificial Intelligence, vol. 51, pp. 38–46, September 2018. cover feature (IF: 1.940, 5-year IF: 2.113 (2017))
  • G. Trigeorgis, M. A. Nicolaou, B. Schuller, and S. Zafeiriou, “Deep Canonical Time Warping for simultaneous alignment and representation learning of sequences,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, pp. 1128–1138, May 2018. (IF: 9.455, 5-year IF: 13.229 (2017))
  • Z. Zhang, J. Han, E. Coutinho, and B. Schuller, “Dynamic Difficulty Awareness Training for Continuous Emotion Prediction,” IEEE Transactions on Multimedia, vol. 20, 2018. 14 pages, to appear (IF: 3.509, 5-year IF: 4.103 (2016))
  • Z. Zhang, J. T. Geiger, J. Pohjalainen, A. E. Mousa, W. Jin, and B. Schuller, “Deep Learning for Environmentally Robust Speech Recognition: An Overview of Recent Developments,” ACM Transactions on Intelligent Systems and Technology, vol. 9, no. 5, Article No. 49, 2018. 14 pages (IF: 2.973, 5-year IF: 3.381 (2017))
  • Z. Zhang, J. Han, J. Deng, X. Xu, F. Ringeval, and B. Schuller, “Leveraging Unlabelled Data for Emotion Recognition with Enhanced Collaborative Semi-Supervised Learning,” IEEE Access, vol. 6, pp. 22196–22209, April 2018. (IF: 3.557, 5-year IF: 4.199 (2017))

Selected Earlier Publications

  • K. Qian, Z. Zhang, A. Baird, and B. Schuller, “Active Learning for Bird Sound Classification via a Kernel-based Extreme Learning Machine,” Journal of the Acoustical Society of America, vol. 134, 2017. 12 pages, to appear
  • P. Tzirakis, S. Zafeiriou, and B. Schuller, “Real-world automatic continuous affect recognition from audiovisual signals,” in Multi-modal Behavior Analysis in the Wild: Advances and Challenges (X. Alameda-Pineda, E. Ricci, and N. Sebe, eds.), Elsevier, 2017
  • L. Carstens and F. Toni, “Using Argumentation to improve classification in Natural Language problems,” ACM Transactions on Internet Technology, ACM
  • O. Rudovic, J. Lee, L. Mascarell-Maricic, B. W. Schuller, and R. Picard, “Measuring Engagement in Autism Therapy with Social Robots: a Cross-cultural Study,” Frontiers in Robotics and AI, section Humanoid Robotics, Special Issue on Affective and Social Signals for HRI, 2017.
  • O. Cocarascu and F. Toni, “Identifying attack and support argumentative relations using deep learning,” in Proc. Conference on Empirical Methods in Natural Language Processing (EMNLP 2017), Copenhagen, Denmark, September 2017.
  • Y. Zhang, H. S. O. Stromfelt, and B. Schuller, “Emotion-Augmented Machine Learning: Overview of an Emerging Domain,” in Proc. 7th biannual Conference on Affective Computing and Intelligent Interaction (ACII 2017), (San Antonio, TX), AAAC, IEEE, October 2017. 8 pages.
  • S. Amiriparian, S. Pugachevskiy, N. Cummins, S. Hantke, J. Pohjalainen, G. Keren, and B. Schuller, “CAST a database: Rapid targeted large-scale big data acquisition via small-world modelling of social media platforms,” in Proc. 7th biannual Conference on Affective Computing and Intelligent Interaction (ACII 2017), (San Antonio, TX), AAAC, IEEE, October 2017. 6 pages.
  • H. Sagha, J. Deng, and B. Schuller, “The effect of personality trait, age, and gender on the performance of automatic speech valence recognition,” in Proc. 7th biannual Conference on Affective Computing and Intelligent Interaction (ACII 2017), (San Antonio, TX), AAAC, IEEE, October 2017. 6 pages.
  • N. Cummins, S. Amiriparian, G. Hagerer, A. Batliner, S. Steidl, and B. Schuller, “An Image-based Deep Spectrum Feature Representation for the Recognition of Emotional Speech,” in Proceedings of the 25th ACM International Conference on Multimedia, MM 2017, (Mountain View, CA), ACM, October 2017. 7 pages. (oral acceptance rate: 7.5 %)
  • J. Han, Z. Zhang, M. Schmitt, M. Pantic, and B. Schuller, “From Hard to Soft: Towards more Human-like Emotion Recognition by Modelling the Perception Uncertainty,” in Proceedings of the 25th ACM International Conference on Multimedia, MM 2017, (Mountain View, CA), ACM, October 2017. 8 pages. (acceptance rate: 28 %)
  • Y. Zhang, W. McGehee, M. Schmitt, F. Eyben, and B. Schuller, “A Paralinguistic Approach To Holistic Speaker Diarisation – Using Age, Gender, Voice Likability and Personality Traits,” in Proceedings of the 25th ACM International Conference on Multimedia, MM 2017, (Mountain View, CA), ACM, October 2017. 6 pages. (acceptance rate: 28 %)
  • E. Parada-Cabaleiro, A. Batliner, A. E. Baird, and B. Schuller, “The SEILS dataset: Symbolically Encoded Scores in Modern-Ancient Notation for Computational Musicology,” in Proceedings of the 18th International Society for Music Information Retrieval Conference, ISMIR 2017, (Suzhou, P. R. China), ISMIR, October 2017. 5 pages.