Studying the impact of innovation on business and society

Bullying, Hate, Conspiracies, and Fake News Negatively Impact User Experiences and Advertising Value

“Social media has made too many of us comfortable with disrespecting people and not getting punched in the mouth for it.” – Ice-T

In real life…

Free speech is not free from consequence.

Spreading misinformation, whether to sow discord or because you actually believe it, is not free from consequence.

Trading conflict for views, popularity, and financial gain is not free from consequence.

Inflicting emotional or psychological pain and anguish on another human being, for any reason, is not free from consequence.

Online, however, it seems that most of what’s referenced above occurs freely and largely without reasonable accountability.

For the sake of humanity, we, meaning you and me, have to change. None of this is good for any of us in the short or long term. None of it scales society in any way that’s healthy, productive, and meaningful. None of it is really any good for business, partners, or supporting ecosystems.

Yet we’re emboldened, entitled, and self-centered, and we can’t even see it.

We point fingers as if we are certain everything we know is absolute.

We take stances and build walls as if mirrors lost every ability to reflect our true self-image.

We don’t see the world as it is, we see the world as we are.

Someone has to take the first step.

That’s just what happened recently when I joined Two Hat Security CEO Chris Priebe and VentureBeat’s Stewart Rogers for an eye-opening (at least to me) conversation on the importance and commercial viability of putting the social back in social media. It’s available on-demand here.

Following the conversation, VentureBeat published a compelling story on the topic. It serves as a mouthwatering amuse-bouche for the webcast.

Nasty online rhetoric hurts brands and business, not just our sense of niceness

Efficient moderation and positive reinforcement boost online community retention and growth. Catch up on this talk featuring analyst and author Brian Solis and Two Hat Security CEO Chris Priebe about the changing landscape of online conversations, and how artificial intelligence paired with human interaction is solving current content moderation challenges.

“I want to quote the great philosopher Ice-T, who said recently, social media has made too many of us comfortable with disrespecting people and not getting punched in the mouth for it,” says Brian Solis, principal digital analyst at Altimeter and the author of Lifescale. “Somehow this behavior has just become the new normal.”

It seems like hate speech, abuse, and extremism are the cost of being online today, but the problem goes back to the dawn of the internet, says Chris Priebe, CEO and founder at Two Hat Security. Anyone can add content to the internet, and what that was supposed to offer the world was cool things like Wikipedia: everyone contributing their thoughts in a great knowledge share that makes us stronger. But that’s not what we got.

“Instead we ended up learning, don’t read the comments,” Priebe says. “The dream of what we could do didn’t become reality. We just came to accept in the 90s that this is the cost of being online. It’s something that happens as a side effect of the benefits of the internet.”

And from the beginning, it’s been building on itself, Solis says, as social media and other online communities have given more people more places to interact online, and more people emboldened to say and do things they would never do in the real world.

“It’s also being subsidized by some of the most popular brands and advertisers out there, without necessarily realizing that this is what they’re subsidizing,” he adds. “We’re creating this online society, these online norms and behaviors, that are being reinforced in the worst possible way without any kind of consequences or regulation or management. I think it’s just gone on way too long, without having this conversation.”

Common sense used to tell us to be the best person online that we are in the real world, he continues, but something happened along the way and this just became the new normal: people don’t even care about the consequence of losing friendships and family members, or destroying relationships, because they feel that the need to express whatever’s on their mind, whatever they feel, is more important than anything else.

“That’s the effect of having platforms with zero guidelines or consequences or policies that reinforce positive behavior and punish negative behavior,” Solis says. “We wanted that freedom of speech. We wanted that ability to say and do anything. These platforms needed us to talk and interact with one another, because that’s how they monetize those platforms. But at the end of the day, this conversation is important.”

“We reward people for the most outrageous content,” Priebe agrees. “You want to get more views, more likes, those kinds of things. If you can write the most incredible insult to someone, and really burn them, that kind of thing can get more eyeballs. Unfortunately, the products are designed in a way where if they get more eyeballs, they get more advertising dollars.”

Moderation isn’t about whitewashing the internet — it’s about allowing real, meaningful conversations to actually happen without constant derailment.

“We don’t actually have free speech on the internet right now,” says Priebe. “The people who are destroying it are all these toxic trolls. They’re not allowing us to share our true thoughts. We’re not getting the engagement that we really need from the internet.”

Two Hat studies have found that people who have a positive social experience are three times more likely to come back on day two, and three times more likely again to come back on day seven. People stay longer if they find community and a sense of belonging. Other studies have shown that users who run into a bunch of toxic and hateful content are 320 percent more likely to leave.

“We have to stop trading short-term wins,” Priebe adds. “When someone adds content, just because a whole bunch of people engage with it because it’s hateful and creates a bunch of ‘I can’t believe this is happening’ responses, that’s not actually good eyeballs or good advertising spend. We have to find the content that causes people to engage deeper.”

“The communities themselves have to be accountable for the type of interaction and the content that is shared on those networks, to bring out the best in society,” Solis says. “It has to come down to the platforms to say, what kind of community do we want to have? And advertisers to say, what kind of communities do we want to support? That’s a good place to start, at least.”

There are three lines of defense for online communities. The first is a filter, backed by known libraries of specifically damaging content keywords. The second narrows that filter using the reputation of your users, making it more restrictive for known harassers. The third is asking users to report content, which is becoming a legal requirement across multiple jurisdictions, with community owners required to act on those reports. A minimal sketch of how these tiers fit together follows below.
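To make those three tiers concrete, here is a minimal Python sketch of such a pipeline. Everything in it, the blocklist contents, the threshold values, and the names passes_filter and report_content, is an illustrative assumption and not Two Hat’s actual product; production systems rely on far richer keyword libraries and machine learning rather than simple word matching.

```python
from collections import defaultdict

# Line 1: stand-in for a known library of damaging keywords.
BLOCKLIST = {"exampleslur", "exampleinsult"}
BASE_THRESHOLD = 1  # ambiguous hits tolerated for users in good standing

reputation = defaultdict(int)  # Line 2: prior violations per user
report_queue = []              # Line 3: user reports awaiting human review


def passes_filter(user_id: str, message: str) -> bool:
    """Lines 1 and 2: keyword filter, tightened for known harassers."""
    hits = sum(1 for word in message.lower().split() if word in BLOCKLIST)
    # A user with prior violations gets a zero-tolerance threshold.
    threshold = 0 if reputation[user_id] > 0 else BASE_THRESHOLD
    if hits > threshold:
        reputation[user_id] += 1
        return False
    return True


def report_content(reporter_id: str, message_id: str, reason: str) -> None:
    """Line 3: queue a user report for human moderators to act on."""
    report_queue.append(
        {"reporter": reporter_id, "message": message_id, "reason": reason}
    )
```

The reputation tier is visible in the threshold line: the same message that slips past the filter for a user in good standing is blocked outright for a repeat offender, which is the “more restrictive for known harassers” behavior described above.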

“The way I would tackle it or add to it would be on the human side of it,” Solis adds. “We have to reward the type of behaviors that we want, the type of engagement that we want. The value to users has to take incredible priority, but also to the right users. What kind of users do you want? You can’t just go after the market for everyone anymore. I don’t think that’s good enough. Also, bringing quality engagement and understanding that the numbers might be lower, but they’re more valuable to advertisers, so that advertisers want to reinforce that type of engagement. It really starts with having an introspective conversation about the community itself, and then taking the steps to reinforce that behavior.”

To learn more about the role that AI and machine learning are playing in accurate, effective content moderation, the challenges platforms from Facebook to YouTube to LinkedIn are facing on- and offline, and the ROI of safe communities, catch up now on this VB Live event.

Photo Credit: Jeremy Yap, Unsplash

Brian Solis

Brian Solis is principal analyst and futurist at Altimeter, the digital analyst group at Prophet, a world-renowned keynote speaker, and an 8x best-selling author. In his new book, Lifescale: How to live a more creative, productive and happy life, Brian tackles the struggles of living in a world rife with constant digital distractions. His model for “Lifescaling” helps readers overcome the unforeseen consequences of living a digital life to break away from diversions, focus on what’s important, spark newfound creativity, and unlock new possibilities. His previous book, X: The Experience When Business Meets Design, explores the future of brand and customer engagement through experience design.

Please, invite him to speak at your event or bring him in to inspire colleagues and fellow executives/boards.

Follow Brian Solis!

Twitter: @briansolis
Facebook: TheBrianSolis
LinkedIn: BrianSolis
Instagram: BrianSolis
Pinterest: BrianSolis
YouTube: BrianSolisTV
Newsletter: Please Subscribe
Speaking Inquiries: Contact Him Directly Here 

____________________________

Follow Lifescale!

Main Newsletter: Please Subscribe
Coaches Newsletter: Please Subscribe
Twitter: @LifescaleU
Instagram: @LifescaleU
Facebook: Lifescale University
