SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.


Journal Article

Citation

Zhao H, Zhou X, Xiao Y. International Journal of Computational Science and Engineering 2019; 19(2): 169-176.

Copyright

(Copyright © 2019, Inderscience Enterprises Ltd.)

DOI

10.1504/IJCSE.2019.100237

PMID

unavailable

Abstract

With the development of networks and social media, audio and video have become increasingly popular ways to communicate. Such audio and video can spread information that produces negative effects, e.g., negative sentiment indicating suicidal tendency, or threatening messages that cause panic. To keep the network environment safe, it is necessary to recognise emotion in dialogues. To improve recognition of continuous emotion in dialogues, we propose combining disfluency and non-verbal vocalisation (DIS-NV) features with a bidirectional long short-term memory (BLSTM) model to predict continuous emotion. DIS-NV features are effective emotion features covering filled pauses, fillers, stutters, laughter, and breath. A BLSTM can learn from past information while also making use of future information. State-of-the-art recognition attains 62% accuracy; our experimental method increases accuracy to 76%. © 2019 Inderscience Enterprises Ltd.
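
For illustration only, the sketch below shows one way the pipeline described in the abstract could be set up: per-frame DIS-NV feature vectors (filled pauses, fillers, stutters, laughter, breath) fed to a bidirectional LSTM that regresses a continuous emotion value at each time step. This is not the authors' implementation; the framework (PyTorch), feature dimensionality, hidden size, and other hyperparameters are assumptions.

import torch
import torch.nn as nn

class DisNvBlstmRegressor(nn.Module):
    def __init__(self, num_features=5, hidden_size=64):
        # num_features=5 assumes one value per DIS-NV cue:
        # filled pauses, fillers, stutters, laughter, breath (illustrative only).
        super().__init__()
        self.blstm = nn.LSTM(
            input_size=num_features,
            hidden_size=hidden_size,
            batch_first=True,
            bidirectional=True,  # forward pass covers past context, backward pass covers future context
        )
        # Map the concatenated forward/backward hidden states to one
        # continuous emotion value per frame.
        self.head = nn.Linear(2 * hidden_size, 1)

    def forward(self, x):
        # x: (batch, time_steps, num_features)
        out, _ = self.blstm(x)              # (batch, time_steps, 2 * hidden_size)
        return self.head(out).squeeze(-1)   # (batch, time_steps)

# Example: predict a continuous emotion trace for a batch of 4 dialogues,
# each with 100 frames of hypothetical DIS-NV features.
model = DisNvBlstmRegressor()
features = torch.randn(4, 100, 5)
predictions = model(features)  # shape: (4, 100)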


Language: en

Keywords

Brain; Dialogue; AVEC2012; Bidirectional long short-term memory; BLSTM; Continuous emotion; DiS-NV; Discretisation; Disfluencies; DiSfluencies and non-verbal vocalisation; Knowledge-inspired features; LLD; Long short-term memory; Low level descriptors; Low-level descriptors; Safe network; Safe network environment; Speech emotion recognition; Speech recognition
