TikTok updates its Safety Center resources following study on harmful challenges and hoaxes.
TikTok has developed a bad reputation for harboring dangerous viral “challenges” on its app, which in the worst cases have resulted in serious injury or death, as with the blackout challenge that prompted Italian regulators to take action against the social network and demand it remove underage users.
More recently, the app has made headlines for challenges that have encouraged students to beat up teachers and destroy school property. As the potential threat of further regulation looms, TikTok today shares the results of its research on viral challenges and hoaxes, as well as how it is taking action.
To date, TikTok has often tried to minimize its role in viral challenges.
In October, for example, TikTok denied that the “slap a teacher” challenge was a TikTok trend.
After a child died while attempting the blackout challenge, TikTok released a statement saying the company found no evidence of any choking-related challenge on its platform.
The company reiterated this claim during a recent Senate hearing on the safety of minors on social platforms. But Sen. Blackburn (R-TN) told the TikTok representative that her staff had found the videos in question, along with other disturbing content.
Today, TikTok is publishing the results of the research it commissioned on harmful hoaxes and challenges.
The company said it launched a global project on this topic a few months ago, which included a survey of more than 10,000 teenagers, parents and teachers from Argentina, Australia, Brazil, Germany, Italy, Indonesia, Mexico, the United Kingdom, the United States and Vietnam.
It also commissioned an independent safeguarding agency, Praesidio Safeguarding, to write a report detailing the findings and its recommendations. Additionally, a panel of 12 leading teen safety experts was asked to review the report and provide their own comments. Finally, TikTok partnered with Dr. Richard Graham, a clinical child psychiatrist specializing in healthy adolescent development, and Dr. Gretchen Brion-Meisels, a behavioral scientist specializing in adolescent risk prevention, to offer further guidance.
The data the report uncovered is worth examining, as it speaks to how social media can become a breeding ground for harmful content like these viral challenges, given how widely young people use social platforms. And young people have a much greater appetite for risk because of where they are in their psychological development.
As Dr. Graham explained, puberty is this extraordinary period that serves to prepare the child for the transition to adult life. It’s a time of “massive brain development,” he said.
“There is a lot of focus now on understanding why teens do the things they do, because those judgment centers are being re-examined in preparation for more complex thinking and decision making in the future,” he explained. Dr. Graham said that young people’s brains are developing in terms of abstract thinking, the recognition of more complex psychological and emotional states, and a more sophisticated consideration of relationships.
And as all of this is going on, their desire to learn about the world around them increases, and this may include the desire, at times, to engage in riskier activities to prove themselves or win the approval of their peers.
Sometimes these “dangerous” activities are relatively harmless, like watching a horror movie or riding a roller coaster. But other times, teens and other young people may choose to engage in something they believe will actually stretch them in some way, and this draws them into riskier challenges.
“Sometimes they will … bite off more than they can chew, and they will have an experience that will somehow traumatize them at least in the short term, but the aspiration [of teens] is to grow up,” he noted. Additionally, viral challenges, in general, can appeal to teens’ desire for approval from friends and peers because they generate likes and views.
But the way teens assess whether challenges are safe is flawed: they tend to simply watch more videos or ask friends for advice. Meanwhile, parents and teachers have often been hesitant to speak out about challenges and hoaxes for fear of sparking more interest in them.
The study found that most teens do not participate in the most dangerous challenges. Only 21% of teens around the world participated in challenges and only 2% participated in those that are considered risky. An even smaller 0.3% have participated in a challenge that they considered “really dangerous”.
Most thought participation in challenges was neutral (54%) or positive (34%), not negative (11%). And 64% said that participation had a positive impact on their friendships and relationships.
The research also examined hoax challenges, such as Blue Whale or Momo, which propagate the belief that there is a bad actor directing children to engage in harmful activities that lead to self-harm or suicide. 61% of teens said they look for more information about hoaxes when they come across them, to try to verify whether they are real, but a lot of confusion tends to remain around hoaxes.
Teens suspect that people who repost hoaxes do so for likes and views (62% believe this) or because they believe the hoax is real (60%). Forty-six percent of teens exposed to hoaxes sought support or advice, indicating that teens would benefit from resources to help them understand hoax material.
Although the research indicates there is much more social media platforms could do to address user safety issues, TikTok’s response here is fairly minimal.
The company announced today that it is adding a new section dedicated to challenges and hoaxes to its Safety Center, and is revising some of the language used in the warning labels that appear when people search for hoaxes linked to suicide or self-harm, per the researchers’ suggestions.
Considering the relatively minor nature of these changes (essentially a help document and some revised text), it’s interesting to note that TikTok livestreamed a presentation on its research to the media for an hour on the Monday before today’s announcement, and emailed the full 38-page research report to journalists.
In doing so, it appears that TikTok wants to differentiate itself from Facebook, where damning internal research was kept quiet until whistleblower Frances Haugen produced thousands of documents indicating that Facebook was aware of its problems but took no action.
TikTok wants to be seen as an active participant in the research process, but ultimately, the changes it is making do not solve the problem. At the end of the day, harmful content is a moderation challenge and a problem that stems from the nature of social media itself.
Removing harmful content would require a system designed from the ground up so as not to incentivize the shocking or outrageous in exchange for likes and views. This is not how most social networks are built.
TikTok also said that it will look for sudden increases in violating content, including potentially dangerous behavior, linked to hashtags. For example, if a hashtag like #FoodChallenge, which is commonly used for sharing food recipes and culinary inspiration, began to see spikes in content that violated TikTok’s policies, the moderation team would be alerted to investigate the cause of the spike so it could act.
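TikTok hasn’t described how this detection works, but conceptually it resembles simple anomaly detection on a per-hashtag moderation signal. Below is a minimal Python sketch, using made-up numbers and an assumed rolling-baseline approach (not TikTok’s actual system), that flags days when a hashtag’s policy-violation rate jumps well above its recent norm.

```python
from collections import deque
from statistics import mean, stdev

def flag_hashtag_spikes(daily_violation_rates, window=7, threshold=3.0):
    """Flag days where a hashtag's policy-violation rate spikes well above
    its recent baseline (rolling mean + threshold * rolling std deviation).

    daily_violation_rates: floats, e.g. violating videos / total videos
    posted under a hashtag each day. Returns indices of days that would
    be escalated to human moderators for review.
    """
    recent = deque(maxlen=window)  # rolling window of baseline days
    flagged = []
    for day, rate in enumerate(daily_violation_rates):
        if len(recent) == window:
            baseline, spread = mean(recent), stdev(recent)
            # Flag if today's rate far exceeds the recent baseline
            if rate > baseline + threshold * max(spread, 1e-9):
                flagged.append(day)
        recent.append(rate)
    return flagged

# Hypothetical example: #FoodChallenge is mostly benign (~1% violations),
# then a risky variant of the challenge starts trending on day 9.
rates = [0.01, 0.012, 0.009, 0.011, 0.01, 0.013, 0.011, 0.012, 0.01, 0.08]
print(flag_hashtag_spikes(rates))  # -> [9]
```

In practice, a platform at TikTok’s scale would presumably tune the window and threshold per hashtag and route flagged spikes into a human review queue rather than acting on them automatically.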
Or, in other words, TikTok says it will now moderate content better, something users thought the company was doing anyway.
The full research report is below.
Praesidio Report – Exploring Effective Prevention Education Responses to Dangerous Online Challenges