AI systems could fight cyberbullying
"I have been bullied my entire
life. About how I look like a whale and how im not pretty enough. I cant
get boyfriends because i refuse to have sex until I am married. I just
dont know what to do anymore...:\" - Samantha, 16
Pleas for help like this one appear on
social media and internet forums every day, written by desperate
teenagers who live their entire lives online. Knowing you're not alone
can help. That's the idea behind new software that matches up such
messages with similar posts from other worried teenagers, letting them
know that what they're experiencing isn't unusual. It might also be
possible to spot bullying behaviour as it happens online.
Recent high-profile cases have made cyberbullying front-page news. In January, 15-year-old Amanda Diane Cummings
died after jumping in front of a bus on Staten Island, New York. She'd
been subjected to a campaign of bullying on Facebook by other pupils at
her school. Last September, Jamey Rodemeyer,
a 15-year-old boy from Buffalo, New York, killed himself after being
teased online about his sexuality. The cases prompted lawmakers to push
through legislation, passed by the New York state senate last week, that makes cyberbullying a crime.
To help tackle one part of the problem, Karthik Dinakar
at the Massachusetts Institute of Technology and colleagues have been
working on a project that analyses the posts written by teenagers on A
Thin Line, a website run by MTV. The site encourages teenagers to post
their problems anonymously and other teenagers leave comments giving
advice. Many of the posts concern bullying and worries about sex.
Each of the website's 5500 posts was
fed through an algorithm trained to recognise certain clusters of words
and then categorise each post according to one or more of 30 themes,
ranging from "duration of a relationship" to "using naked pictures of
girlfriend". The words "boyf", "trust", "cheat", "break" and "upset" in the
same story might indicate the post was about a relationship ending, for
example. Once a label was assigned, the algorithm picked another story
on the site that covered the same themes.
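A minimal Python sketch of this kind of theme labelling and matching is below. The theme names and word clusters are illustrative placeholders, not the 30 themes the MIT system actually learned, and simple keyword overlap stands in for the trained model.

# Toy sketch: label posts by theme, then match a new post to the
# archived story sharing the most themes. All theme definitions here
# are invented for illustration.

THEMES = {
    "relationship breakup": {"boyf", "trust", "cheat", "break", "upset"},
    "appearance": {"ugly", "fat", "pretty", "looks", "whale"},
}

def label_post(text, min_hits=2):
    # Assign every theme whose word cluster overlaps the post enough.
    words = set(text.lower().split())
    return {t for t, cluster in THEMES.items()
            if len(words & cluster) >= min_hits}

def best_match(new_post, archive):
    # Return the archived story that shares the most theme labels.
    labels = label_post(new_post)
    return max(archive, key=lambda story: len(label_post(story) & labels))

archive = [
    "my boyf broke up with me and i cant trust anyone im so upset",
    "people at school keep calling me fat and ugly",
]
print(best_match("he cheated on me and now we break up im upset", archive))

Run on the sample archive, the new post is labelled "relationship breakup" and matched to the first story, which carries the same label.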
"All these teenagers are still growing
emotionally, and there's a tendency to think that their experience is
singular to themselves," says Dinakar. "It can let them know that they
are not alone in their plight."
The software was tested using a set of
new stories written by volunteers, which it analysed and matched with
stories from the website. The volunteers rated the system very
positively. They felt that the stories picked using the thematic
algorithm were always a much closer match than those chosen using a
basic algorithm that just matched keywords. The system was presented at a
conference on social media in Dublin, Ireland, earlier this month. MTV
now plans to start using it to match stories live on the site, so
teenagers can read about those in a similar plight.
Can artificial intelligence also stop
cyberbullying at its source? After Amanda Cummings died, her memorial
Facebook page was filled with offensive comments, leaving her parents
understandably distraught. So Dinakar is also developing software that
will help spot online bullying as it happens.
Facebook has taken steps to stop cyberbullying, but it primarily relies on users flagging up comments as inappropriate.
To find less-obvious forms of abuse, Dinakar built software that compares online posts to an open-source database called ConceptNet.
This is a network of phrases and words and the relationships between
them that lets computers understand what humans are talking about. This
way the system can work out what might be a bullying comment, even
though it contains no abusive words. For example, it would know that:
"Put on a wig and lipstick and be who you really are" aimed at a boy
might be a negative comment on his sexuality, because ConceptNet knows
that girls usually wear make-up, while boys do not.
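As a rough illustration, here is a toy Python sketch that uses ConceptNet's public web API (api.conceptnet.io) to score how strongly a comment's content words associate with "girl" versus "boy". The skew heuristic, stopword list and threshold are assumptions made for this example, not Dinakar's actual method.

# Toy sketch: flag comments whose words skew strongly towards one
# gender, even when no abusive word appears. Heuristic and threshold
# are illustrative assumptions.
import requests

STOPWORDS = {"put", "on", "a", "and", "be", "who", "you", "really",
             "are", "the", "to", "of", "in", "it"}

def relatedness(a, b):
    # Semantic relatedness between two English concepts, roughly in
    # [-1, 1], via ConceptNet's public API.
    resp = requests.get("http://api.conceptnet.io/relatedness",
                        params={"node1": f"/c/en/{a}",
                                "node2": f"/c/en/{b}"})
    return resp.json()["value"]

def gender_skew(comment):
    # Positive score: the content words associate more with "girl"
    # than "boy" -- a possible red flag when the target is a boy.
    words = [w.strip(".,!?").lower() for w in comment.split()]
    content = [w for w in words if w not in STOPWORDS]
    if not content:
        return 0.0
    return sum(relatedness(w, "girl") - relatedness(w, "boy")
               for w in content) / len(content)

comment = "Put on a wig and lipstick and be who you really are"
if gender_skew(comment) > 0.05:  # threshold is an arbitrary assumption
    print("possible veiled comment about the target's gender or sexuality")

Here the surviving content words are "wig" and "lipstick", both of which ConceptNet associates more strongly with women than with men, so the comment is flagged despite containing no abusive vocabulary.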
The idea is that software like this
could be integrated into a social network. If it spots patterns of
bullying behaviour, it could flash up a box warning the bully, block
offending posts, or offer help and advice to the victim. Dinakar wants
to combine his two projects to create a detector that can pick up even
the subtlest of attacks, such as "liking" a negative Facebook status to
make a nasty point. The research is due to appear in the
journal ACM Transactions on Interactive Intelligent Systems in July.
Danah Boyd
of Microsoft Research in Cambridge, Massachusetts, says that although
this kind of work won't solve the problem of online bullying, it will
help to improve our understanding of what happens online.
"I'm glad that these researchers are
working to identify different types of meanness and cruelty," she says.
"I am very hopeful that these kinds of techniques will lead to a more
holistic understanding of the problem."