
Illustration: Peter Strain

Brittan Heller doesn't quite know what caused it.

Maybe she turned a man down for a date too quickly, bruising his pride. Maybe she just bothered him in some way.

Whatever it was, a decade ago Heller inadvertently unleashed waves of attacks from a fellow Yale law student.

Back then Facebook didn’t have the reach it currently has. So Heller’s tormentor raised an online mob on AutoAdmit.com, a message board for law students and lawyers. Soon, posts appeared accusing her of using drugs and of trading sexual favors for admission to the elite school.

That sucked her into a larger maelstrom raging on the message board. Other female students at Yale were being accused of sleeping with professors to get better grades. Behind pseudonyms, some posters said they hoped the women would be raped.

Often, this is where the story ends. The women, harassed and degraded, close their accounts or drop out of school, anything to put distance between themselves and the anonymous hatred.

Heller, now a lawyer for the Anti-Defamation League, and her peers chose to fight, suing AutoAdmit to reveal the names of their harassers. They eventually settled. The terms of the settlement are confidential, Heller says, but the experience set her on the path toward a career fighting hate speech.

“My work would be a success if no one ever needed me,” Heller says. But so far, it’s the opposite. “We’re in a growth industry.”

Hate is everywhere these days. It’s hurled at people of different skin colors, religions and sexual orientations. It isn’t limited by political view; it’s not hard to find hateful words and acts on the left and the right. And it takes place everywhere: airports, shopping malls and, of course, on the internet.

Hate groups have taken up residence online. The hateful meet up with like-minded gangs on sites like Reddit, Voat and 4Chan, terrorizing people they don’t like or agree with. Because much of the internet is public, the medium magnifies the hateful messages as it distributes them. 

The ADL, a civil rights group, found that about 1,600 online accounts were responsible for 68 percent of the roughly 19,000 anti-Semitic tweets targeting Jewish journalists between August 2015 and July 2016. During the same period, 2.6 million anti-Jewish tweets may have been viewed as many as 10 billion times, the ADL says.

It would be bad enough if digital hate stayed locked up online. But it doesn't. It feeds real-world violence. In May, a University of Maryland student who reportedly belonged to a Facebook page where white supremacists shared memes was arrested in the stabbing death of a black Army lieutenant. A few days later, a man who had reportedly posted Nazi imagery and white nationalist ideology to his Facebook page went on a stabbing spree in Portland, Oregon, after threatening two women, one of whom was wearing a Muslim headdress. Two Good Samaritans were killed. The man who opened fire on a Republican representatives' baseball practice was reportedly a member of Facebook groups with names such as "The Road to Hell Is Paved with Republicans" and "Terminate the Republican Party."

And that doesn't count the garden-variety taunts people get because of how they look, or the bomb threats or vandalized cemeteries.

The legal response has varied from place to place. In the US, where freedom of speech includes the expression of hate, activists are pushing lawmakers to draw a line at harassment, and treat it the same whether it’s in real life or over the internet.
