Show simple item record

dc.contributor.author: Mahmud, Altaf
dc.contributor.author: Ahmed, Kazi Zubair
dc.contributor.author: Khan, Mumit
dc.date.accessioned: 2011-01-17T05:10:50Z
dc.date.available: 2011-01-17T05:10:50Z
dc.date.copyright: 2008
dc.date.issued: 2008-12
dc.identifier.uri: http://hdl.handle.net/10361/714
dc.description: Includes bibliographical references (page 10).
dc.description.abstract: While the internet has become the leading source of information, it has also become the medium for flames, insults, and other forms of abusive language, which add nothing to the quality of the information available. A human reader can easily distinguish between what is information and what is a flame or another form of abuse; it is, however, much more difficult for a language processor to do this automatically. This paper describes a new approach for an automated system to distinguish between information and personal attacks containing insulting or abusive expressions in a given document. In linguistics, insulting or abusive messages are viewed as an extreme subset of subjective language because of their extreme nature. We create a set of rules that extract the semantic information of a given sentence from its general semantic structure in order to separate information from abusive language. [en_US]
dc.description.statementofresponsibility: Altaf Mahmud
dc.description.statementofresponsibility: Kazi Zubair Ahmed
dc.description.statementofresponsibility: Mumit Khan
dc.format.extent: 10 pages
dc.language.iso: en [en_US]
dc.publisher: BRAC University [en_US]
dc.subject: Language processing
dc.title: Detecting flames and insults in text [en_US]
dc.type: Article [en_US]
dc.contributor.department: Center for Research on Bangla Language Processing (CRBLP), BRAC University
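
The abstract above describes the approach only at a high level. As a rough illustration (not the authors' system), the minimal Python sketch below flags a sentence as a likely flame when an abusive term co-occurs with a second-person target; the word lists and the single rule are hypothetical stand-ins for the semantic rules described in the paper.

# Minimal illustrative sketch of rule-based flame/insult detection.
# The lexicons and the rule below are hypothetical; the paper's actual rules
# operate on the general semantic structure of the sentence.

ABUSIVE_TERMS = {"idiot", "moron", "stupid", "loser"}   # hypothetical abuse lexicon
SECOND_PERSON = {"you", "your", "you're", "u"}          # crude cue for a personal target

def is_flame(sentence: str) -> bool:
    """Return True if the sentence looks like a personal attack rather than information."""
    tokens = [t.strip(".,!?;:\"'").lower() for t in sentence.split()]
    has_abuse = any(t in ABUSIVE_TERMS for t in tokens)
    has_target = any(t in SECOND_PERSON for t in tokens)
    # Rule: an abusive term aimed at a second-person target => likely insult;
    # otherwise treat the sentence as ordinary information.
    return has_abuse and has_target

print(is_flame("You are such an idiot"))                          # True  (personal attack)
print(is_flame("The survey covers abusive language detection"))   # False (information)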

