Meta is introducing additional safeguards for Instagram accounts run by adults that primarily feature children, the company announced in a blog post on Wednesday. These accounts will automatically be placed under the app's strictest message settings to prevent unwanted messages, and the platform's "Hidden Words" feature will be enabled to filter out offensive comments. The company is also rolling out new safety features for teen accounts.
Accounts that will be placed under the new, stricter message settings include those run by adults who regularly share photos and videos of their children, along with accounts run by parents or talent managers representing children.
"While these accounts are overwhelmingly used in benign ways, unfortunately, there are people who may try to abuse them, leaving sexualized comments under their posts or asking for sexual images in DMs, in clear violation of our rules," the company writes in its blog post. "Today, we're announcing steps to help prevent this abuse."
Meta says it will try to prevent potentially suspicious adults, such as people who have already been blocked by teens, from finding accounts that primarily feature children. Meta will avoid recommending suspicious adults to these Instagram accounts and vice versa, and will make it harder for them to find each other in Instagram Search.
Today's announcement comes as Meta and Instagram have spent the past year taking steps to address mental health concerns associated with social media. Those concerns have been raised by the US Surgeon General and by various states, some of which have gone so far as to require parental consent for minors to access social media.
The changes will significantly affect the accounts of family vloggers/creators and of parents who run accounts for "kidfluencers," both of which have faced criticism over the dangers of sharing children's lives on social media. A New York Times investigation published last year found that parents were often aware of their child's exploitation, or even participated in it, selling photos or clothing their child had worn. In its examination of 5,000 parent-run accounts, the NYT found 32 million connections to male followers.
The company says that accounts placed under these stricter settings will see a notification at the top of their Instagram feed informing them that the social network has updated their safety settings. The notice will also prompt them to review their account privacy settings.
Meta notes that it recently removed nearly 135,000 Instagram accounts that were sexualizing accounts primarily featuring children, as well as 500,000 Instagram and Facebook accounts linked to the accounts it removed.
Along with today's announcement, Meta is also bringing new safety features to DMs in teen accounts, the app experience with built-in protections for teens.
Teens will now see new options to view safety tips, reminding them to carefully check profiles and be mindful of what they share. In addition, the month and year an account joined Instagram will now appear at the top of new chats. Instagram has also added a combined block-and-report option that lets users do both at the same time.
The new features are designed to give teens more context about who is messaging them and to help them spot potential scammers, Meta says.
"These new features complement the safety notices we show to remind people to be cautious in private messages and to block and report anything that makes them uncomfortable, and it's encouraging to see teens respond to them," Meta writes in the blog post. "In June alone, they blocked accounts 1 million times and reported another 1 million after seeing a safety notice."
Meta also provided an update on its nudity protection filter, noting that 99% of people, including teens, have kept it enabled. Last month, over 40% of blurred images received in DMs stayed blurred, the company said.
