Publications

Selected Journal Publications since 2018

  • S. Amiriparian, N. Cummins, M. Gerczuk, S. Pugachevskiy, S. Ottl, and B. Schuller, ““Are You Playing a Shooter Again?!” Deep Representation Learning for Audio-based Video Game Genre Recognition,” IEEE Transactions on Games, vol. 12, pp. 145–154, June 2020. (IF: 1.886 (2019))
  • A. Baird and B. Schuller, “Considerations for a More Ethical Approach to Data in AI: on Data Representation and Infrastructure,” Frontiers in Big Data, section Machine Learning and Artificial Intelligence, Special Issue on Ethical Machine Learning and Artificial Intelligence (AI), 2020. 15 pages, Manuscript ID: 527486, to appear
  • A. Batliner, S. Hantke, and B. Schuller, “Ethics and Good Practice in Computational Paralinguistics,” IEEE Transactions on Affective Computing, vol. 11, 2020. 19 pages, to appear (IF: 7.512 (2019))
  • N. Cummins and B. Schuller, “Five Crucial Challenges in Digital Health,” Frontiers in Digital Health, vol. 1, 2020. 6 pages, to appear
  • J. Deng, B. Schuller, F. Eyben, D. Schuller, Z. Zhang, H. Francois, and E. Oh, “Exploiting time-frequency patterns with LSTM RNNs for low-bitrate audio restoration,” Neural Computing and Applications, Special Issue on Deep Learning for Music and Audio, vol. 32, no. 4, pp. 1095–1107, 2019. (IF: 4.774 (2019))
  • F. Dong, K. Qian, Z. Ren, A. Baird, X. Li, Z. Dai, B. Dong, F. Metze, Y. Yamamoto, and B. Schuller, “Machine Listening for Heart Status Monitoring: Introducing and Benchmarking HSS – the Heart Sounds Shenzhen Corpus,” IEEE Journal of Biomedical and Health Informatics, vol. 24, pp. 2082–2093, July 2020. (IF: 5.223 (2019))
  • J. Han, Z. Zhang, M. Pantic, and B. Schuller, “Internet of Emotional People: Towards Continual Affective Computing cross Cultures via Audiovisual Signals,” Future Generation Computer Systems, no. 1, 2020. 14 pages, to appear (IF: 6.125 (2019))
  • G. Keren, S. Sabato, and B. Schuller, “Analysis of Loss Functions for Fast Single-Class Classification,” Knowledge and Information Systems, vol. 62, pp. 337–358, 2020. invited as one of best papers from ICDM 2018 (IF: 2.936 (2019))
  • S. Latif, R. Rana, S. Khalifa, R. Jurdak, J. Epps, and B. W. Schuller, “Multi-Task Semi-Supervised Adversarial Autoencoding for Speech Emotion Recognition,” IEEE Transactions on Affective Computing, vol. 11, 2020. 14 pages, to appear (IF: 7.512 (2019))
  • E. Parada-Cabaleiro, G. Costantini, A. Batliner, M. Schmitt, and B. W. Schuller, “DEMoS – An Italian Emotional Speech Corpus – Elicitation methods, machine learning, and perception,” Language Resources and Evaluation, vol. 54, pp. 341–383, February 2020. (IF: 1.014 (2019))
  • E. Parada-Cabaleiro, A. Batliner, and B. Schuller, “The effect of music in anxiety reduction: A psychological and physiological assessment,” Psychology of Music, 2020. 10 pages, to appear (IF: 1.712 (2019))
  • E. Parada-Cabaleiro, A. Batliner, A. Baird, and B. W. Schuller, “The Perception of Emotional Cues by Children in Artificial Background Noise,” International Journal of Speech Technology, vol. 23, pp. 169–182, January 2020
  • V. Pandit, M. Schmitt, N. Cummins, and B. W. Schuller, “I see it in your eyes: Training the shallowest-possible CNN to recognise emotions and pain from muted web- assisted in-the-wild video-chats in real-time,” Information Processing and Management, vol. 57, 2020. 36 pages, to appear (IF: 3.892 (2018))
  • K. Qian, X. Li, H. Li, S. Li, W. Li, Z. Ning, S. Yu, L. Hou, G. Tang, J. Lu, F. Li, S. Duan, C. Du, Y. Cheng, Y. Wang, L. Gan, Y. Yamamoto, and B. W. Schuller, “Computer Audition for Healthcare: Opportunities and Challenges,” Frontiers in Digital Health, vol. 2, pp. 1–4, June 2020. Article ID 5
  • M. Littmann, K. Selig, L. Cohen, Y. Frank, P. Honigschmid, E. Kataka, A. Mosch, K. Qian, A. Ron, S. Schmid, A. Sorbie, L. Szlak, A. Dagan-Wiener, N. Ben-Tal, M. Y. Niv, D. Razansky, B. W. Schuller, D. Ankerst, T. Hertz, and B. Rost, “Validity of machine learning in biology and medicine increased through collaborations across fields of expertise,” Nature Machine Intelligence, vol. 2, 2020. 12 pages.
  • B. Schuller, “Micro-Expressions – A Chance for Computers to Beat Humans at Revealing Hidden Emotions?,” IEEE Computer Magazine, vol. 52, February 2019. 2 pages, to appear (IF: 1.940, 5-year IF: 2.113 (2017))
  • B. Schuller, “Responding to Uncertainty in Emotion Recognition,” Journal of Information, Communication & Ethics in Society, vol. 17, no. 2, 2019. 4 pages, invited contribution, to appear
  • D. Schuller and B. Schuller, “Speech Emotion Recognition: Three Recent Major Changes in Computational Modelling,” Emotion Review, Special Issue on Emotions and the Voice, vol. 11, 2019. 10 pages, invited contribution, to appear (IF: 3.780, 5-year IF: 5.129 (2017))
  • S. Amiriparian, J. Han, M. Schmitt, A. Baird, A. Mallol-Ragolta, M. Milling, M. Gerczuk, and B. Schuller, “Synchronisation in Interpersonal Speech,” Frontiers in Robotics and AI, section Humanoid Robotics, Special Issue on Computational Approaches for Human-Human and Human-Robot Social Interactions, vol. 6, 2019. 16 pages, Manuscript ID: 457845, to appear
  • J. Han, Z. Zhang, N. Cummins, and B. Schuller, “Adversarial Training in Affective Computing and Sentiment Analysis: Recent Advances and Perspectives,” IEEE Computational Intelligence Magazine, Special Issue on Computational Intelligence for Affective Computing and Sentiment Analysis, vol. 14, pp. 68–81, May 2019. (IF: 6.611 (2017))
  • J. Han, Z. Zhang, Z. Ren, and B. Schuller, “Exploring Perception Uncertainty for Emotion Recognition in Dyadic Conversation and Music Listening,” Cognitive Computation, Special Issue on Affect Recognition in Multimodal Language, vol. 11, 2019. 10 pages, to appear (IF: 4.287 (2018))
  • J. Han, Z. Zhang, Z. Ren, and B. Schuller, “EmoBed: Strengthening Monomodal Emotion Recognition via Training with Crossmodal Emotion Embeddings,” IEEE Transactions on Affective Computing, vol. 10, 2019. 12 pages, to appear (IF: 6.288 (2018))
  • D. Kollias, P. Tzirakis, M. A. Nicolaou, A. Papaioannou, G. Zhao, B. Schuller, I. Kotsia, and S. Zafeiriou, “Deep Affect Prediction in-the-Wild: Aff-Wild Database and Challenge, Deep Architectures, and Beyond,” International Journal of Computer Vision, vol. 127, pp. 907–929, June 2019. (IF: 11.541 (2017))
  • J. Kossaifi, R. Walecki, Y. Panagakis, J. Shen, M. Schmitt, F. Ringeval, J. Han, V. Pandit, B. Schuller, K. Star, E. Hajiyev, and M. Pantic, “SEWA DB: A Rich Database for Audio-Visual Emotion and Sentiment Research in the Wild,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, 2019. 17 pages, to appear (IF: 17.730 (2018))
  • K. Qian, M. Schmitt, C. Janott, Z. Zhang, C. Heiser, W. Hohenhorst, M. Herzog, W. Hemmert, and B. Schuller, “A Bag of Wavelet Features for Snore Sound Classification,” Annals of Biomedical Engineering, 2019. 16 pages, to appear (IF: 3.405 (2017))
  • Y. Xie, R. Liang, Z. Liang, C. Huang, C. Zou, and B. Schuller, “Speech Emotion Classification Using Attention-based LSTM,” IEEE/ACM Transactions on Audio, Speech and Language Processing, vol. 27, pp. 1675–1685, November 2019. (IF: 3.531 (2018))
  • X. Xu, J. Deng, E. Coutinho, C. Wu, L. Zhao, and B. Schuller, “Connecting Subspace Learning and Extreme Learning Machine in Speech Emotion Recognition,” IEEE Transactions on Multimedia, vol. 21, pp. 795–808, March 2019. (IF: 3.509 (2016))
  • Y. Zhang, F. Weninger, A. Michi, J. Wagner, E. André, and B. Schuller, “A Generic Human-Machine Annotation Framework Using Dynamic Cooperative Learning with a Deep Learning-based Confidence Measure,” IEEE Transactions on Cybernetics, 2019. 11 pages, to appear (IF: 10.387 (2018))
  • Z. Zhang, J. Han, K. Qian, C. Janott, Y. Guo, and B. Schuller, “Snore-GANs: Improving Automatic Snore Sound Classification with Synthesized Data,” IEEE Journal of Biomedical and Health Informatics, vol. 23, 2019. 11 pages, to appear (IF: 4.217 (2018))
  • Z. Zhao, Z. Bao, Z. Zhang, J. Deng, N. Cummins, H. Wang, J. Tao, and B. Schuller, “Automatic Assessment of Depression from Speech via a Hierarchical Attention Transfer Network and Attention Autoencoders,” IEEE Journal of Selected Topics in Signal Processing, Special Issue on Automatic Assessment of Health Disorders Based on Voice, Speech and Language Processing, vol. 13, 2019. 11 pages, to appear (IF: 6.688 (2018))
  • Z. Zhao, Z. Bao, Y. Zhao, Z. Zhang, N. Cummins, Z. Ren, and B. Schuller, “Exploring Deep Spectrum Representations via Attention-based Recurrent and Convolutional Neural Networks for Speech Emotion Recognition,” IEEE Access, vol. 7, pp. 97515–97525, July 2019. (IF: 4.098 (2018))
  • B. Schuller, Y. Zhang, and F. Weninger, “Three Recent Trends in Paralinguistics on the Way to Omniscient Machine Intelligence,” Journal on Multimodal User Interfaces, Special Issue on Speech Communication, vol. 12, pp. 273–283, 2018. (IF: 1.140, 5-year IF: 0.872 (2017))
  • B. Schuller, F. Weninger, Y. Zhang, F. Ringeval, A. Batliner, S. Steidl, F. Eyben, E. Marchi, A. Vinciarelli, K. Scherer, M. Chetouani, and M. Mortillaro, “Affective and Behavioural Computing: Lessons Learnt from the First Computational Paralinguistics Challenge,” Computer Speech and Language, vol. 53, pp. 156–180, January 2019. (IF: 1.900, 5-year IF: 1.938 (2016))
  • B. Schuller, “Speech Emotion Recognition: Two Decades in a Nutshell, Benchmarks, and Ongoing Trends,” Communications of the ACM, vol. 61, pp. 90–99, May 2018. Feature Article (IF: 4.027, 5-year IF: 6.469 (2016))
  • B. Schuller, “What Affective Computing Reveals about Autistic Children’s Facial Expressions of Joy or Fear,” IEEE Computer Magazine, vol. 51, pp. 40–41, June 2018. (IF: 1.940, 5-year IF: 2.113 (2017))
  • A. Baird, S. H. Jorgensen, E. Parada-Cabaleiro, S. Hantke, N. Cummins, and B. Schuller, “Listener Perception of Vocal Traits in Synthesized Voices: Age, Gender, and Human-Likeness,” Journal of the Audio Engineering Society, Special Issue on Augmented and Participatory Sound and Music Interaction using Semantic Audio, vol. 66, 2018. 8 pages, to appear (IF: 0.707, 5-year IF: 0.832 (2016))
  • E. Coutinho, K. Gentsch, J. van Peer, K. R. Scherer, and B. Schuller, “Evidence of Emotion-Antecedent Appraisal Checks in Electroencephalography and Facial Electromyography,” PLoS ONE, vol. 13, pp. 1–19, January 2018. (IF: 2.806 (2016))
  • N. Cummins, B. W. Schuller, and A. Baird, “Speech analysis for health: Current state-of-the-art and the increasing impact of deep learning,” Methods, Special Issue on Translational data analytics and health informatics, 2018. 25 pages, to appear (IF: 3.998, 5-year IF: 3.936 (2017))
  • J. Deng, X. Xu, Z. Zhang, S. Frühholz, and B. Schuller, “Semi-Supervised Autoencoders for Speech Emotion Recognition,” IEEE/ACM Transactions on Audio, Speech and Language Processing, vol. 26, no. 1, pp. 31–43, 2018. (IF: 2.950, 5-year IF: 3.253 (2017))
  • M. Freitag, S. Amiriparian, S. Pugachevskiy, N. Cummins, and B. Schuller, “auDeep: Unsupervised Learning of Representations from Audio with Deep Recurrent Neural Networks,” Journal of Machine Learning Research, vol. 18, pp. 1–5, April 2018. (IF: 5.000, 5-year IF: 7.649 (2016))
  • K. Grabowski, A. Rynkiewicz, A. Lassalle, S. Baron-Cohen, B. Schuller, N. Cummins, A. E. Baird, J. Podgorska-Bednarz, A. Pieniazek, and I. Lucka, “Emotional expression in psychiatric conditions – new technology for clinicians,” Psychiatry and Clinical Neurosciences, vol. 1, 2018. 17 pages, to appear (IF: 3.199 (2017))
  • J. Han, Z. Zhang, G. Keren, and B. Schuller, “Emotion Recognition in Speech with Latent Discriminative Representations Learning,” Acta Acustica united with Acustica, vol. 104, pp. 737–740, September 2018. (IF: 1.119, 5-year IF: 0.971 (2016))
  • S. Hantke, T. Olenyi, C. Hausner, and B. Schuller, “Large-Scale Data Collection and Analysis via a Gamified Intelligent Crowdsourcing Platform,” International Journal of Automation and Computing, vol. 15, 2018. 10 pages, invited as one of 8 % best papers of ACII Asia 2018, to appear
  • S. Hantke, A. Abstreiter, N. Cummins, and B. Schuller, “Trustability-based Dynamic Active Learning for Crowdsourced Labelling of Emotional Audio Data,” IEEE Access, vol. 6, July 2018. (IF: 3.557, 5-year IF: 4.199 (2017))
  • C. Janott, M. Schmitt, Y. Zhang, K. Qian, V. Pandit, Z. Zhang, C. Heiser, W. Hohenhorst, M. Herzog, W. Hemmert, and B. Schuller, “Snoring Classified: The Munich Passau Snore Sound Corpus,” Computers in Biology and Medicine, vol. 94, pp. 106–118, March 2018. (IF: 2.115, 5-year IF: 2.168 (2017))
  • S. Jing, X. Mao, L. Chen, M. C. Comes, A. Mencattini, G. Raguso, F. Ringeval, B. Schuller, C. D. Natale, and E. Martinelli, “A closed-form solution to the graph total variation problem for continuous emotion profiling in noisy environment,” Speech Communication, vol. 104, pp. 66–72, November 2018. (acceptance rate: 38 %, IF: 1.585, 5-year IF: 1.660 (2017))
  • G. Keren, N. Cummins, and B. Schuller, “Calibrated Prediction Intervals for Neural Network Regressors,” IEEE Access, vol. 6, 2018. 9 pages, to appear (IF: 3.557, 5-year IF: 4.199 (2017))
  • F. Lingenfelser, J. Wagner, J. Deng, R. Brueckner, B. Schuller, and E. André, “Asynchronous and Event-based Fusion Systems for Affect Recognition on Naturalistic Data in Comparison to Conventional Approaches,” IEEE Transactions on Affective Computing, vol. 9, pp. 410–423, October – December 2018. (IF: 4.585, 5-year IF: 5.977 (2017))
  • E. Marchi, B. Schuller, A. Baird, S. Baron-Cohen, A. Lassalle, H. O’Reilly, D. Pigat, P. Robinson, I. Davies, T. Baltrusaitis, O. Golan, S. Fridenson-Hayo, S. Tal, S. Newman, N. Meir-Goren, A. Camurri, S. Piana, S. Bölte, M. Sezgin, N. Alyuz, A. Rynkiewicz, and A. Baranger, “The ASC-Inclusion Perceptual Serious Gaming Platform for Autistic Children,” IEEE Transactions on Computational Intelligence and AI in Games, Special Issue on Computational Intelligence in Serious Digital Games, 2018. 12 pages, to appear (IF: 1.113, 5-year IF: 2.165 (2016))
  • A. Mencattini, F. Mosciano, M. Colomba Comes, T. De Gregorio, G. Raguso, E. Daprati, F. Ringeval, B. Schuller, and E. Martinelli, “An emotional modulation model as signature for the identification of children developmental disorders,” Nature Scientific Reports, vol. 8, Article ID 14487, pp. 1–12, 2018. (IF: 4.122 (2017))
  • F. B. Pokorny, K. D. Bartl-Pokorny, C. Einspieler, D. Zhang, R. Vollmann, S. Bölte, H. Tager-Flusberg, M. Gugatschka, B. W. Schuller, and P. B. Marschik, “Typical vs. atypical: Combining auditory Gestalt perception and acoustic analysis of early vocalisations in Rett syndrome,” Research in Developmental Disabilities, vol. 82, pp. 109–119, November 2018. (IF: 1.820, 5-year IF: 2.344 (2017))
  • K. Qian, C. Janott, Z. Zhang, J. Deng, A. Baird, C. Heiser, W. Hohenhorst, M. Herzog, W. Hemmert, and B. Schuller, “Teaching Machines on Snoring: A Benchmark on Computer Audition for Snore Sound Excitation Localisation,” Archives of Acoustics, vol. 43, no. 3, pp. 465–475, 2018. (IF: 0.917, 5-year IF: 0.819 (2017))
  • Z. Ren, K. Qian, Z. Zhang, V. Pandit, A. Baird, and B. Schuller, “Deep Scalogram Representations for Acoustic Scene Classification,” IEEE/CAA Journal of Automatica Sinica, vol. 5, no. 3, pp. 662–669, 2018. invited contribution
  • L. Roche, D. Zhang, F. B. Pokorny, B. W. Schuller, G. Esposito, S. Bölte, H. Roeyers, L. Poustka, K. D. Bartl-Pokorny, M. Gugatschka, H. Waddington, R. Vollmann, C. Einspieler, and P. B. Marschik, “Early Vocal Development in Autism Spectrum Disorders, Rett Syndrome, and Fragile X Syndrome: Insights from Studies using Retrospective Video Analysis,” Advances in Neurodevelopmental Disorders, vol. 2, pp. 49–61, March 2018
  • O. Rudovic, J. Lee, M. Dai, B. Schuller, and R. W. Picard, “Personalized machine learning for robot perception of affect and engagement in autism therapy,” Science Robotics, vol. 3, June 2018. 12 pages
  • D. Schuller and B. Schuller, “Speech Emotion Recognition – An Overview on Recent Trends and Future Avenues,” International Journal of Automation and Computing, vol. 15, 2018. 10 pages, invited contribution, to appear
  • D. Schuller and B. Schuller, “The Age of Artificial Emotional Intelligence,” IEEE Computer Magazine, Special Issue on The Future of Artificial Intelligence, vol. 51, pp. 38–46, September 2018. cover feature (IF: 1.940, 5-year IF: 2.113 (2017))
  • G. Trigeorgis, M. A. Nicolaou, B. Schuller, and S. Zafeiriou, “Deep Canonical Time Warping for simultaneous alignment and representation learning of sequences,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, pp. 1128–1138, May 2018. (IF: 9.455, 5-year IF: 13.229 (2017))
  • Z. Zhang, J. Han, E. Coutinho, and B. Schuller, “Dynamic Difficulty Awareness Training for Continuous Emotion Prediction,” IEEE Transactions on Multimedia, vol. 20, 2018. 14 pages, to appear (IF: 3.509, 5-year IF: 4.103 (2016))
  • Z. Zhang, J. T. Geiger, J. Pohjalainen, A. E. Mousa, W. Jin, and B. Schuller, “Deep Learning for Environmentally Robust Speech Recognition: An Overview of Recent Developments,” ACM Transactions on Intelligent Systems and Technology, vol. 9, no. 5, Article No. 49, 2018. 14 pages (IF: 2.973, 5-year IF: 3.381 (2017))
  • Z. Zhang, J. Han, J. Deng, X. Xu, F. Ringeval, and B. Schuller, “Leveraging Unlabelled Data for Emotion Recognition with Enhanced Collaborative Semi-Supervised Learning,” IEEE Access, vol. 6, pp. 22196–22209, April 2018. (IF: 3.557, 5-year IF: 4.199 (2017))