internet safety – Techdirt
KOSA Won’t Make The Internet Safer For Kids. So What Will?
from the let's-think-this-through dept
I’ve been asked a few times now what to do about online safety if the Kids Online Safety Act is no good. I will take it as a given that not enough is being done to make the Internet safe, especially for children. I think there is enough evidence to show that while the Internet can be a positive for many young people, especially marginalized youth who find support online, there are also significant negatives that correlate with real-world harms and suffering.
As I see it, there are three separate but related problems:
- Most Internet companies make money off engagement, so there can be misaligned incentives, especially when toxic content drives engagement.
- Trust & Safety is the linchpin of efforts to improve online safety, but it represents a significant cost to companies without a direct connection to profit.
- The tools used by Trust & Safety, like content moderation, have become a culture war football and many – including political leaders – are trying to work the refs.
I think #1 tends to be overstated, but X/Twitter is a natural experiment on whether this model is successful in the long run so we may soon have a better answer. I think #2 is understated, but it’s a bit hard to find government solutions here – especially those that don’t run into First Amendment concerns. And #3 is a bit of a confounding problem that taints all proposed solutions. There is a tendency to want to use “online safety” as an excuse to win culture wars, or at least tack culture war battles onto legitimate attempts to make the Internet safer. These efforts run headfirst into the First Amendment, because they are almost exclusively about regulating speech.
KOSA’s main gambit is to discourage #1 and maybe even incentivize #2 by creating a sort of nebulous duty of care that basically says if companies don’t have users’ best interests at heart in six described areas then they can be sued by the FTC and State AGs. The problem is that the duty of care is largely directed at whether minors are being exposed to certain kinds of content, and this invites problem #3 in a big way. In fact, we’ve already seen politically connected anti-LGBTQ organizations like Heritage openly call for KOSA to be used against LGBTQ content and Senator Blackburn, a KOSA co-author, connected the bill with protecting “minor children from the transgender.” This also means that this part of KOSA is likely to eventually fall to the First Amendment, as the California Age Appropriate Design Code (a bill KOSA borrows from) did.
So what can be done? I honestly don’t think we have enough information yet to really solve many online safety problems. But that doesn’t mean we have to sit around doing nothing. Here are some ideas of things that can be done today to make the Internet safer or prepare for better solutions in the future:
Ideas for Solving Problem #1
- Stronger Privacy: Having a strong baseline of privacy protections for all users is good for many reasons. One of them is breaking the ability of platforms to use information gathered about you to keep you on the platform longer. Many of the recommendation engines that set people down a bad path are algorithms powered by personal information and tuned to increase engagement. These algorithms don’t really care about how their recommendations affect you, and can send you in directions you don’t want to go but have trouble turning away from. I experienced some of this myself when using YouTube to get into shape during the pandemic. I was eventually recommended videos that body-shamed viewers and pushed pretty severe diets to “show off” your muscles. I was able to reorient the algorithm toward more positive and health-centered videos, but it took some effort and an understanding of how things worked. If the algorithm weren’t powered by my entire history, and instead had to be more user-directed, I don’t think I’d be offered the same content. And if I did look for that content, I’d be able to do so more deliberately and carefully. Strong privacy controls would force companies to redesign in that way.
- An FTC 6(b) study: The FTC has the authority to conduct wide-ranging industry studies that don’t need a specific law enforcement purpose. In fact, they’ve used their 6(b) authority to study industries and produce reports that help Congress legislate. This 6(b) authority includes subpoena power to get information that independent researchers currently can’t. KOSA has a section that allows independent researchers to better study harms related to the design of online platforms, and I think that’s a pretty good idea, but the FTC can start this work now. A 6(b) study doesn’t need Congressional action to start, which is good considering the House is tied up at the moment. The FTC can examine how companies work through safety concerns in product design, look for “hot docs” showing they made certain design decisions despite known risks, or look for documents showing they refused to look into safety concerns at all.
- Enhance FTC Section 5 Authority: The FTC has already successfully obtained a settlement based on the argument that certain harmful design choices violate Section 5’s prohibition of “unfair or deceptive” business practices. The settlement required Epic to turn off voice and text chat in the game Fortnite for children and teens by default. Congress could enhance this power by clarifying that Section 5 covers dangerous online product design more generally and by requiring the FTC to create a division for enforcement in this area (and also increasing the FTC’s budget for such staffing). A 6(b) study would also lay the groundwork for the FTC to take more actions in this area. However, any legislation should be drafted in a way that does not undercut the FTC’s argument that it already has much of this authority, as doing so would discourage the FTC from pursuing more actions on its own. This is another option that likely does not need Congressional action, though budget allocations and an affirmative directive to address this area would certainly help.
- NIH/other agency studies: Another way to help the FTC pursue Section 5 complaints against dangerous design, and to improve the conversation generally, is to invest in studies from medical and psychological health experts on how various design choices impact mental health. This can set a baseline of good practices from which any significant deviation could be pursued by the FTC as a Section 5 violation. It could also help policy discussions coalesce around rules concerning actual product design rather than content. The NTIA’s current request for information on Kids Online Health might be a start to that. KOSA’s section on creating a Kids Online Safety Council is another decent way of accomplishing this goal, although the Biden administration could simply create such a Council without Congressional action, and that might be a better path considering the current troubles in the House. I should also point out that this option is ripe for special interest capture, and that any efforts to study these problems should include experts and voices from marginalized and politically targeted communities.
- Better User Tools: I’ve written before on concerns I had with an earlier draft of KOSA’s parental tools requirements. I think that section of the bill is in a much better place now. Generally, I think it’s good to improve the resources parents have to work with their kids to build a positive online social environment. It would also be good to give users tools that let them have a say in what content they are served and how the service interacts with them (e.g., turning off nudges). That might come from a law establishing a baseline for user tools. It might also come from an agency hosting discussions on and fostering the development of best practices for such tools. I will again caution, though, that not all parents have their kids’ best interests at heart, and kids are entitled to privacy and First Amendment rights. Any work on this should keep that in mind, and some minors may need tools to protect themselves from their parents.
- Interoperability: One of the biggest problems for users who want to abandon a social media platform is how hard it is to rebuild their network elsewhere. X/Twitter is a good example of this, and I know many people who want to leave but have trouble rebuilding the same engagement elsewhere. Bluesky and Mastodon are examples of newer services that offer some degree of interoperability and portability of your social graph. The advantages of that are obvious: more competition and more user choice. This is again something the government could support by encouraging standards or requiring interoperability (a minimal sketch of what social graph portability can look like follows this list). However, as Bluesky and Mastodon have shown, content moderation has been a persistent problem for interoperable platforms because it’s a large cost not directly related to profit. This remains a problem to be solved. Ideally a strong market for effective third-party content moderation would emerge, but this is not something the government can be involved in because of the obvious First Amendment problems.
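To make the portability point concrete, here is a minimal sketch of what rebuilding a follow list on a new service could look like. It assumes a CSV export of followed accounts (similar in spirit to what Mastodon offers) with a hypothetical account_address column, plus a hypothetical destination_client.follow() API; none of these names come from a real service, and a real migration would also need authentication, rate limiting, and handling of accounts that simply don’t exist on the new network.

```python
import csv

def load_followed_accounts(path):
    """Read a hypothetical CSV export of followed accounts.

    Assumes a column named 'account_address' holding handles like
    'user@example.social', a stand-in for whatever a real service exports.
    """
    with open(path, newline="", encoding="utf-8") as f:
        return [row["account_address"] for row in csv.DictReader(f)]

def rebuild_social_graph(handles, destination_client):
    """Re-follow each exported handle on a new service.

    destination_client is a hypothetical API client exposing a
    follow(handle) method; real services differ in naming, auth, and limits.
    """
    results = {}
    for handle in handles:
        try:
            destination_client.follow(handle)
            results[handle] = "followed"
        except Exception as err:  # e.g. the account is not on the new network
            results[handle] = f"skipped: {err}"
    return results

# Illustrative usage only:
# handles = load_followed_accounts("following_accounts.csv")
# report = rebuild_social_graph(handles, my_new_service_client)
```

The point is simply that once the social graph lives in a portable, documented format, leaving a service stops being an all-or-nothing decision, which is exactly the competitive pressure interoperability is meant to create.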
Ideas for Solving Problem #2
- Information sharing: When I went to TrustCon this year, the number one thing I heard was that T&S professionals need better information sharing – especially between platforms. This makes perfect sense: it lowers the cost of enforcement and improves its quality. The kind of information we are talking about is emerging threats and the most effective ways of dealing with them – for example, the coded language people adopt to get around filters meant to catch sexual predation on platforms with minors. There are ways the government can foster this information sharing at the agency level by, for example, hosting workshops, roundtables, and conferences geared towards T&S professionals on online safety. It would also be helpful for agencies to encourage “open source” threat information for T&S teams, which would make this work easier for smaller companies (a toy sketch of what consuming such shared signals could look like follows this list).
- Best Practices: Related to other solutions above, a government agency could engage the industry and foster the development of best practices (as long as they are content-agnostic), and a significant departure from those best practices could be challenged as a violation of Section 5 of the FTC Act. Those best practices should include some kind of minimum for T&S investment and capabilities. I think this could be done under existing authority (like the Fortnite case), although that authority will almost certainly be challenged at some point. It might be better for Congress to affirmatively task agencies with this duty and allocate appropriate funding for them to succeed.
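As a rough illustration of the “open source” information-sharing idea mentioned above, here is a toy sketch of how a small T&S team might consume a shared list of emerging-threat indicators (coded phrases, evasion spellings) and flag messages for human review. The file name, JSON structure, and matching logic are all hypothetical; real signal-sharing efforts involve far more nuance, including context, hashing of sensitive indicators, versioning, and access control.

```python
import json
import re

def load_shared_signals(path):
    """Load a hypothetical shared signal file.

    Assumes a JSON array of entries shaped like:
    {"pattern": "...", "category": "grooming", "note": "coded phrase reported by ..."}
    """
    with open(path, encoding="utf-8") as f:
        entries = json.load(f)
    # Pre-compile each pattern so matching stays cheap per message.
    return [(re.compile(entry["pattern"], re.IGNORECASE), entry) for entry in entries]

def flag_for_review(message, compiled_signals):
    """Return the shared-signal entries a message matches, if any.

    A match only queues the message for human review; automated removal
    based on a keyword list alone would be far too blunt an instrument.
    """
    return [entry for regex, entry in compiled_signals if regex.search(message)]

# Illustrative usage only:
# signals = load_shared_signals("shared_ts_signals.json")
# hits = flag_for_review("let's move this chat somewhere more private", signals)
# if hits:
#     queue_for_human_review(hits)  # hypothetical review-queue call
```

Even this toy version shows why sharing helps smaller platforms: the expensive part, spotting the coded language in the first place, only has to be done once across the industry rather than rediscovered by every T&S team.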
Ideas for Solving Problem #3
- Keeping the focus on product design: Problem #3 is never going away, but the best way to minimize its impact AND lower the risk of efforts getting tossed on First Amendment grounds is to keep every public action on online safety firmly grounded in product design. That means every study, every proposed rulemaking, and every introduced bill first needs to be examined with a basic question: “does this directly or indirectly create requirements based on speech, or suggest the adoption of practices that will impact speech?” Having a good answer to this question is important, because the industry will challenge laws and regulations on First Amendment grounds, so any laws and regulations must be able to survive those challenges.
- Don’t Undermine Section 230: Section 230 is what enables content moderation work at scale, and online safety is mostly a content moderation problem. Without Section 230, companies won’t be able to experiment with different approaches to content moderation to see what works. This is obviously a problem because we want them to adopt better approaches. I mention this here because some political leaders have been threatening Section 230 specifically as part of their attempts to work the refs and get social media companies to change their content moderation policies to suit their own political goals.
Matthew Lane is a Senior Director at InSight Public Affairs.
Filed Under: children, internet safety, kosa, parental tools, privacy
State Attorneys General Trash Internet Safety Study, But Still Can't Provide Data To Counter It
from the maybe-it's-not-such-a-big-threat-after-all dept
Last month, a wide-ranging panel of experts did a big study and found that the risks of online predators stalking kids on social networks were totally overhyped — something we’d seen in previous studies, though none as wide-ranging and comprehensive. These results shocked and upset the group of 49 state attorneys general who have been pushing hard to force social networks to implement a variety of mechanisms to “protect” against this threat that really isn’t that big. It’s not surprising that these AGs want to push this. It makes it look like they’re doing something to “protect the children,” at little cost to themselves. The public imagination, helped along by politicians and the press, has been falsely led to believe that these sites are crawling with child predators tricking children, but the truth is that such cases are extremely rare. That’s not to play down the seriousness of the few cases where it happens, but it’s hardly a major epidemic.
Still, the state AGs were none too pleased with the report’s results, and some of the more vocal social network haters have been trashing it for using outdated data. Of course, these AGs haven’t actually provided the up-to-date data that contradicts the report’s findings. So one well-respected online safety researcher, Nancy Willard, went out and found some recent data to look at. Adam Thierer summarizes her findings — but the quick version is that the recent data does, in fact, support the study’s original conclusion: there just isn’t that much predatory behavior happening on social networks. In fact, the report found that general chat rooms were much riskier than social networks. The key point:
The incidents of online sexual predation are rare. Far more children and teens are being sexually abused by family members and acquaintances. It is imperative that we remain focused on the issue of child sexual abuse — regardless of how the abusive relationship is initiated.
Focusing on social networks as the problem is taking resources away from where the real threats are… all in an effort by some AGs to get some easy headlines.
Filed Under: attorneys general, internet safety, stats