Meta, the parent company of Instagram and Facebook, is introducing new parental monitoring tools and privacy measures on its platforms amidst escalating scrutiny over the impact of social media on adolescent mental health.
However, the effectiveness of these features, which require both minors and their parents to opt in, is questionable. For example, Instagram will now prompt teenagers to allow their parents to “oversee” their account after they block someone, capitalizing on moments when they might be more receptive to parental advice.
Upon a teenager’s agreement, the system will grant parents the ability to set usage time limits, monitor their child’s followers and the accounts they follow, and track the duration their child spends on Instagram, excluding access to message content.
Instagram introduced parental oversight tools last year to help families navigate the platform and find resources and guidance. A key obstacle, however, is that children must opt in before their accounts can be supervised by their parents. Meta has not disclosed how many teenage users have chosen this option.
This kind of supervision lets parents see which friends their child has in common with the accounts the child follows or is followed by. If the child is followed by an account that none of their friends follow, the system can trigger a warning, suggesting that the teen may not know the person offline.
Meta claims this will “help parents ascertain the depth of their teen’s familiarity with these accounts and stimulate real-world discussions about these connections.”
Meta is extending parental monitoring features already on Instagram and its virtual reality products to Messenger. This opt-in functionality lets parents track their child’s time spent on the messaging service and view details such as their contact lists and privacy settings, but it does not reveal their chat partners.
Such features may benefit families in which parents actively participate in their child’s online life and activities, a scenario experts argue is far from the norm.
The U.S. Surgeon General, Vivek Murthy, warned last month that there is insufficient evidence to prove that social media is safe for children and teenagers, urging tech companies to act to safeguard young users immediately.
In a conversation with The Associated Press, Murthy acknowledged the safety efforts of social media companies but emphasized that these were inadequate. For instance, although children under 13 are technically prohibited from using social media, many access Instagram, TikTok, and other apps by falsely stating their age, sometimes even with parental consent.
Murthy criticized the expectation for parents to manage their children’s engagement with rapidly changing technology that “fundamentally alters their children’s self-perception, friendship-building, and world experience — technology that previous generations never had to handle.”
He lamented, “We’re offloading all this responsibility onto parents, which is patently unfair.”
Starting Tuesday, Meta will encourage — but not mandate — youngsters to take breaks from Facebook, mirroring its existing practice on Instagram. After 20 minutes, teenage users will receive a notification to take time off the app. However, they can ignore this by simply closing the message. TikTok also recently imposed a 60-minute usage limit for users under 18, which can be overridden by inputting a passcode set by either the teen or their parent if the child is under 13.
Diana Williams, the executive in charge of product modifications for youth and families at Meta, stated, “Our focus is on a range of tools to support parents and teens in engaging in safe and suitable online experiences. We’re also striving to develop tools that teens can utilize themselves to understand and manage how they spend their time. So, things like ‘take a break’ and ‘quiet mode’ in the evenings are our focus.”
As the digital era continues to evolve rapidly, the need for tools that protect young social media users and foster responsible online habits grows ever more pressing. Meta’s new parental monitoring features on Instagram and Facebook are an attempt to meet that need, but their overall efficacy and uptake will remain subject to continued scrutiny and debate. The coming months will show whether these tools can make a substantial difference to teenage users’ safety and mental health, or whether more extensive steps are needed.