Had the opportunity tonight to hear an excellent local talk by:
- Dacher Keltner: Professor of Psychology at UC Berkeley and director of the Greater Good Science Center
- Arturo Bejar: Engineering Director at Facebook
Keltner and Bejar are working together to make Facebook tools such as removal of unwanted photo tags, bullying indicators, and crisis support more effective by using emotional language in prompts and dialog boxes, fine-tuned by age range, gender, and other indicators. By using human-ese rather than engineer-ese, their experiments are making various Facebook tools more effective at resolving intended and unintended human conflicts. Here are a few scattered notes from tonight’s talk. Many of the points below were supported by charts and graphs not shown here.
People are exchanging info/pictures/emotions on a scale that’s never been seen before – how can we tap into the wisdom of the ages? Facebook is like the rest of humanity – all the good, the bad, the complexities that humans are prone to. What constitutes kind speech?
Facebook is now the biggest photo provider in the world – 219 billion photos as of 12/12. Tagging (of other people) was the one thing that made Facebook different from other photo sharing services. But people sometimes don’t want to be tagged, for various reasons. How can we let tag-ees remove tags and simultaneously communicate why to the tagger? And what if people want to politely ask that these photos be removed entirely?
Same problem with Reporting (harassment, drug use, nudity, etc.). Interestingly, a fraction of the incoming reports are over completely innocent things.
When Facebook changed the language on the Untag feature to use emotional words like “It’s embarrassing / It makes me sad, etc.,” usage of the untag feature increased 28%. Facebook arrived at these words by studying the words people were using in their actual removal requests. By moving these words directly into their dialog boxes, un-tagging effectiveness shot up dramatically.
In addition, 65% of people receiving “Please also remove this photo” messages now have a POSITIVE reaction to the requesting person – only 10% negative. People don’t intend to embarrass each other (they’re supposedly friends, after all) and almost everyone appreciates open, honest dialog.
However, hyper-polite language did not work in Israel – they want people to get to the point!
Aside: After the age of 55, people no longer get embarrassed on Facebook! They’re less concerned about their social image, about perception of themselves in “unflattering” photos, etc.
Facebook’s role is to provide the tools to enforce the rules of your own community, whether Amish or Swedish (different rules for different contexts).
Facebook is trying to become more artistic, and has been doing work to “reinvent the emoticon” – great slideshow of sketches for dozens of new ones.
“We’re immigrants to online life.”
When reporting harassment/bullying, the most urgent thing is to be able to talk to a trusted person. Developers assumed this would be an adult but it turns out that younger users trust older teenagers the most. So now it’s possible for younger people to have a trusted teen as an advocate on Facebook.
“Report” is a bad word – teens don’t want to click that – it doesn’t work. So they changed it to “This post is a problem.” We went from “S/he is harassing me” to “S/he said mean things about me.” Huge uptick of success. But it’s age-dependent – for 15/16 year olds, they display “S/he disrespected me” (Facebook shows different text in the same spot for users of different age ranges and genders, all based on data analysis of effectiveness across millions of users).
So lying about your age is a really bad idea – you’ll get the wrong UI, wrong language.
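The age-banded text swap described above could be sketched roughly like this. This is purely a hypothetical illustration: the function name and the exact age cutoffs are my assumptions; only the phrases themselves come from the talk.

```python
# Hypothetical sketch of age-banded prompt selection, as described in the
# talk. The phrases come from the talk; the age cutoffs are assumptions.

def harassment_prompt(age: int) -> str:
    """Return the reporting-dialog phrasing shown to a user of a given age."""
    if age <= 14:
        return "S/he said mean things about me"
    elif age <= 16:
        return "S/he disrespected me"
    else:
        return "S/he is harassing me"
```

This also makes the preceding point concrete: a 13-year-old claiming to be 20 would be shown the adult phrasing, which the data suggests works far less well for teens.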
Many bullying reports are difficult to parse because of context. “Hey you look really great today” can be misinterpreted if you feel like you look crappy today. So you have to trust the problem reporter – we can’t just auto-parse the language.
Many reports are more about annoyance than actual harassment, but Facebook provides tools for responding to these. Language in the dialogs: If you say something made you feel afraid, the next dialog will ask “How afraid are you?” (Extremely, slightly, etc.) We tap a range of human emotions here to make the communication more effective, which gives better results when the message is received by the original poster.
During problem report, the dialog says things like “It makes sense that you are feeling afraid.” Facebook validates your experience.
Emotion intensity is strongly correlated with messaging, especially for girls.
In almost all cases, the person uploading the embarrassing photo did not intend any harm – almost always accidental.
“Cyber bullying” is a much more nuanced problem than public perception of it.
If you get the language right, you can trust teens to use the tools correctly.
If you think someone wants to hurt themselves, Facebook provides a flow for bringing active support networks to that person where possible. Facebook provides tools that bring community support to people in distress.
Reporting: The mere act of putting these emotions down and sending them off to your friend makes the reporter feel better about the “offending” friend. By the end of the process, 50% feel positive about the friend and 25% neutral (this is all measured and extracted from millions of data samples!)
Audience comment: “It’s interesting that Facebook goes to such extreme lengths to work on subtleties of language, but has these big crude buckets like “Friend” (could be anyone) or “Like” (which could mean anything).”
Conclusion: Language really matters. We struggle in the home to get kids to use Please and Thank you, and Facebook is doing similar work. Everything you’ve learned in your life about how to navigate issues is completely applicable online.
Best advice for unraveling conflict: Re-read the whole thread but take the social network out of the picture – imagine it happened in a real-world conversation. It makes a difference.