Artificial intelligence has been in the crosshairs of governments concerned about how it might be misused for fraud, disinformation and other malicious online activity; now in the UK, a regulator is preparing to investigate how AI is being used to combat some of the same, specifically as it relates to content that is harmful to children.
Ofcom, the regulator charged with enforcing the UK's Online Safety Act, announced that it plans to launch a consultation on how artificial intelligence and other automated tools are used today, and may be used in the future, to proactively detect and remove illegal content online, specifically to protect children from harmful content and to identify child sexual abuse material that was previously difficult to detect.
The tools would be part of a wider set of proposals Ofcom is putting together focused on keeping children safe online. Consultations on the full proposals will start in the coming weeks, with the consultation on AI coming later this year, Ofcom said.
Mark Bunting, a director in Ofcom's Online Safety Group, says its interest in AI starts with a look at how well it is used as a screening tool today.
“Some services do already use these tools to identify and shield children from this content,” he told TechCrunch. “But there isn’t much information about how accurate and effective those tools are. We want to look at ways we can ensure that the industry is assessing [that] when they’re using them, making sure that risks to free expression and privacy are being managed.”
One likely outcome will be Ofcom recommending how and what platforms should assess, which could potentially lead not only to platforms adopting more sophisticated tools, but also to fines if they fail to deliver improvements in blocking harmful content or in creating better ways to keep younger users from seeing it.
“As with a lot of online safety regulation, the onus is on the firms to make sure that they’re taking appropriate steps and using appropriate tools to protect users,” he said.
There will be both critics and supporters of the moves. AI researchers are finding increasingly sophisticated ways of using AI to detect, for example, deepfakes, as well as to verify users online. Yet there are just as many skeptics who note that AI detection is far from foolproof.
Ofcom announced the consultation on AI tools at the same time it published its latest research into how children are engaging online in the UK, which found that, overall, there are more younger children online than ever before, so much so that Ofcom is now breaking out activity among ever-younger age groups.
Nearly a quarter, 24%, of all 5- to 7-year-olds now own their own smartphones, and when tablets are included, the figure rises to 76%, according to a survey of UK parents. That same age bracket is also using media much more on those devices: 65% have made voice and video calls (versus 59% just a year ago), and half of the kids (versus 39% a year ago) are watching streamed media.
Age restrictions around some mainstream social media apps have been getting lower, yet whatever the limits, in the UK they do not appear to be enforced anyway. Some 38% of 5- to 7-year-olds use social media, according to Ofcom. Meta’s WhatsApp, at 37%, is the most popular app among them. And in a first for Meta’s flagship image app, it proved less popular than ByteDance’s viral sensation: TikTok was found to be used by 30% of 5- to 7-year-olds, with Instagram at “just” 22%. Discord rounded out the list but is significantly less popular, at just 4%.
Around a third, 32%, of children of this age go online on their own, and 30% of parents said they were fine with their underage children having social media profiles. YouTube Kids remains the most popular network for younger users, at 48%.
Gaming, a perennial favorite with children, has grown to be used by 41% of 5- to 7-year-olds, with 15% of kids in this age bracket playing shooter games.
While 76% of parents surveyed said they had talked to their young children about staying safe online, there are questions, Ofcom points out, about the gap between what a child sees and what that child might report. Researching older children aged 8 to 17, Ofcom interviewed them directly. It found that 32% of the kids reported that they had seen worrying content online, but only 20% of their parents said they reported anything.
Even accounting for some inconsistencies in reporting, “Research suggests a disconnect between older children’s exposure to potentially harmful content online, and what they share with their parents about their online experiences,” Ofcom writes. And disturbing content is only one of the challenges: deepfakes are also an issue. Among children aged 16 to 17, Ofcom said, 25% said they were not confident about distinguishing fake from real online.