A Smart Conversation With A Smart Person About What "Good" Government Regulation of Games Might Look Like
It's going to be much easier to do it poorly than it will be to do it well—but it can be done.
This week, I published a deep dive into the state of regulation aimed at games in the United States, as pressure mounts here and abroad against what’s often called Big Tech. Some of the most popular video game spaces, like Roblox or Minecraft, are also social networking spaces where children hang out as much as they play video games.
My seven-year-old, for example, rarely plays Roblox on her own. It's almost always when her friend down the street pings her on Messenger Kids, prompting them to use Facebook's chat program to talk with one another, à la Discord, while they play Roblox simultaneously. On weekends, this kicks off early in the morning, before the parents are ready for neighborhood kids to be in the house. And if they're lucky enough to be together in the evening while the parents are having a drink, their night typically ends with the two of them hanging out next to each other on Roblox.
While looking for people to chat with about what well-intended regulation could look like, and what unintended consequences might unfold from the current legislation in Congress, I ended up chatting with Mike Pappas, CEO of a company called Modulate.
Modulate pitches itself as “empowering game studios to protect their players.”
It uses AI, but we're not talking generative art. We're talking tools like "ToxMod," which uses machine learning to analyze voice chat to "flag bad behavior, analyzes the nuances of each conversation to determine toxicity, and enables moderators to quickly respond to each incident by supplying relevant and accurate context." Modulate also helps companies track—ding, ding, ding—changing regulatory compliance worldwide.
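Since that description is compressed, here is a rough, purely illustrative sketch of what a staged voice-chat triage flow can look like in principle: a cheap first pass that flags a clip, a heavier pass that weighs it against the surrounding conversation, and a bundle of context handed to a human moderator. This is not Modulate's code or ToxMod's actual design; every name, threshold, and keyword in it is hypothetical.

```python
# Purely illustrative sketch of a staged voice-chat triage flow.
# Not Modulate's or ToxMod's actual implementation; all names are hypothetical.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Clip:
    speaker_id: str
    transcript: str  # assumes speech-to-text already happened upstream

@dataclass
class Incident:
    clip: Clip
    score: float                                  # rough toxicity estimate, 0..1
    context: list = field(default_factory=list)   # nearby lines for the moderator

TRIGGER_TERMS = {"placeholder_slur", "placeholder_threat"}  # stand-in keyword list

def first_pass_flag(clip: Clip) -> bool:
    """Cheap first pass: is there anything here worth a closer look?"""
    return any(term in clip.transcript.lower() for term in TRIGGER_TERMS)

def contextual_score(clip: Clip, recent: list) -> float:
    """Heavier second pass: weigh the flagged clip against surrounding chatter.
    A real system would use ML models here; this stand-in just counts repeats."""
    repeats = sum(first_pass_flag(c) for c in recent if c.speaker_id == clip.speaker_id)
    return min(1.0, 0.5 + 0.25 * repeats)

def build_incident(clip: Clip, recent: list) -> Optional[Incident]:
    """Bundle a flagged clip with nearby context so a moderator can respond quickly."""
    if not first_pass_flag(clip):
        return None  # most clips never leave this step
    return Incident(clip=clip,
                    score=contextual_score(clip, recent),
                    context=[c.transcript for c in recent[-3:]])
```

The point is the shape of the flow (flag, contextualize, hand off), not any of the specifics.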
I started chatting with Mike because of another person I was talking to for the story, research psychologist Rachel Kowert. After Rachel said a bunch of smart things, I asked if she knew any other smart people who could also speak to the topic at hand and Mike’s name came up. You’d be surprised how much of journalism is chatting with someone who knows a lot and then asking who else they respect and admire.
A lot of what Mike told me ended up in the final piece, but the conversation was interesting enough that I figured “hey, it’s probably worth sharing the whole thing.”
What follows is the entirety of our conversation, though admittedly it’s mostly Mike!
Mike, let's start here: What do you think this proposed legislation, what it gets right and what it gets wrong, says about the seemingly inevitable future of sharper legislation, both here and abroad, directed at video games played by kids? What do you hope is taken away from this legislation, when eyes turn towards the Robloxes of the world?
Mike Pappas: Starting with an easy one, I see ;)
My sense is that legislators in the US are used to saying "I want the world to be like this, so I'll just legislate those outcomes." That worked well enough back when the world was simpler, and it's understandable that it's hard to get deeper than that, but it causes more and more issues as we get into a complicated digital world, because what seems like a plausible way the world could be to a non-technical legislator is growing increasingly divorced from what is actually possible to achieve with technology.
One example I've cited recently around KOSA—it's easy and intuitive to say "protect privacy and safety please!" It's understandable why legislators would make this mistake. But the truth on the ground is that this is just like saying "I want the purplest, greenest car you've got." It can't really be BOTH maximally purple and maximally green; there's a tradeoff someone needs to make, and if legislators are not sophisticated enough to make it, then we end up in a situation where each platform is guessing for themselves how the courts will interpret this tradeoff, which leads to a lot of confusion, a lot of unwarranted costs, and ultimately a less effective bill.
So, what do I think this legislation gets right? I think they've identified real problems—things like addictive design, problematic recommended content, and platforms neglecting their duty of care are real and important.
But I think they've gotten wrong the way to solve those problems, which is unsurprising, because these problems are genuinely hard, not just from a technical standpoint but also as a systems-design problem. (At what point does a feature become "addictive" compared to just being "fun"? How do you design for "fun" without ever allowing anything resembling "addictiveness"?)
I personally don't think legislators should be trying to solve this hard of a problem. I think instead they should be focusing on transparency obligations, requiring platforms to openly, honestly, and clearly advertise (a) what data they collect, and why; (b) how opaque systems like matchmaking, recommendations, and other major features work [at a high level, no need to share secret sauce]; and (c) real data on safety outcomes on their platform, and what they are doing to address them. This information gives consumers real freedom to choose for themselves what kind of platform to engage with, based on a real understanding of the actual risks.
KOSA and COPPA 2.0 both have elements of this—KOSA mandates transparency reports and funds additional research, and COPPA 2.0 of course requires data collection notices and parental consent. But they (KOSA in particular) also mandate that platforms use specific approaches which may or may not be well-conceived or even well-defined. The cost of figuring out what KOSA is actually requiring, especially when 50 different state AGs have interpretative discretion, is massive...so realistically, I expect passing this bill to just result in a lot of platforms deciding that the cost of catering to kids is far too high, and kids ultimately having far less choice of platforms to use. (Which is especially dangerous because it means kids are more concentrated on those few remaining platforms, so someone exploiting such a platform can do truly massive damage.)
Honestly, regulatory fragmentation (the competing and often conflicting requirements of states, the federal government, and international authorities) is a much larger problem; but the way KOSA defers to state AGs is a prime example of it. Games are hit particularly hard by this fragmentation, too, because the laws are typically written with social media in mind and leave gaming as an afterthought, meaning there's more ambiguity around how games are supposed to comply, and more room for unanticipated conflict between different bills.
Taking a more transparency-based approach would be far more reliable and empowering for end users—and it's something games are largely already familiar with! You already see ESRB ratings like "E for Everyone"—but these ratings exclude any review of online experiences like voice or text chat. So what I'd most like to see are transparency requirements that ensure kids aren't lured onto the platforms with the best or most confusing marketing, but instead empower kids and parents to genuinely understand the experiences on offer and choose the ones that are truly most suitable for them.
One follow-up I'd have in this regard is that while I agree with you about transparency, parents are already overwhelmed. Part of the reason I started Crossplay is because parents, even gaming-literate ones, are hard-pressed to fully understand how parental controls work across multiple devices. It's my job to know how all these systems work, so what does the promise of more transparency actually mean for parents who don't have that luxury? Do you see the tension I'm getting at here? I don't think most parents want regulation so they can ditch parenting their kids, but these systems are deeply complicated to fully grasp.
Pappas: To your points on transparency, I certainly see where you're coming from...but honestly, I mostly see this as a consequence of bad regulation, or "malicious compliance" (though I'll caveat that I don't think there's actual malice here, more a combination of overworked policy folks and risk-averse lawyers diluting everything).
It's possible to be transparent while distilling things down into something meaningful and easy to grasp for the average person. The ESRB does this with age ratings for games (though these sadly don't extend to online interactions today.) I think Riot has a good example of this even in legal docs—their terms of service include easy-to-understand summaries of each section before the legalese.
To make it easy for parents to determine which platforms are safe, they need a way to understand the safety and privacy features of each platform. That in turn means they'd ideally get those answers in a simple vocabulary shared across online platforms, and it means these answers must be available somewhere, either in a centralized location or in an easy-to-find place on each platform.
Those challenges—a uniform vocabulary, obvious places to go for information—are exactly the sort of coordinating-many-groups challenges that governments are made for! So while I agree we don't have those resources right now, I'd much prefer to see regulators focused on solving these problems, which largely cannot be solved without a central organizer like the government, rather than taking the approach of restricting what each platform can do (which requires the central organizer to know the details of each platform—a challenge uniquely unsuited to a centralized government!)
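For what it's worth, the shared vocabulary Pappas describes could in principle be expressed as a small machine-readable schema covering his three transparency points from earlier (what data is collected and why, how opaque systems work at a high level, and real safety outcomes). The sketch below is purely hypothetical: no such standard exists today, and every field name is invented for illustration.

```python
# Hypothetical disclosure schema; not an existing standard, all field names invented.
from dataclasses import dataclass

@dataclass
class DataCollectionNotice:
    category: str        # e.g. "voice chat audio"
    purpose: str         # e.g. "safety moderation"
    retention_days: int

@dataclass
class SystemSummary:
    name: str                     # e.g. "matchmaking", "recommendations"
    plain_language_summary: str   # high level only; no secret sauce required

@dataclass
class SafetyOutcomes:
    reporting_period: str         # e.g. "2024-Q1"
    reports_received: int
    reports_actioned: int
    median_response_hours: float

@dataclass
class PlatformDisclosure:
    platform: str
    data_collected: list          # (a) what data is collected, and why
    opaque_systems: list          # (b) how major systems work, at a high level
    safety: SafetyOutcomes        # (c) real data on safety outcomes
```

A central registry of documents like this, in one agreed-upon format, is the kind of coordination problem a government body could plausibly own.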
Oh, finally, I want to quote from someone else I chatted with for this piece and get your take on it:
"There is a "policy triangle" that is often discussed in the world of trust and safety. The points of the triangle are privacy, safety, and self-expression. If you pull on one end of the triangle, you are inevitably losing ground on the other two. I am perpetually concerned that if we pull too hard on safety, we lose privacy and self-expression."
Pappas: Regarding the policy triangle, I firmly agree with the three points that were outlined here (Alice Hunsberger had a great post about this a few weeks back in her newsletter). And I similarly agree that pulling on any of these three in excess will come at a real cost to the other two.
That said, I don't share the commenter's implicit beliefs that (a) we're currently pulling hardest on safety, or (b) any pull on the triangle necessarily comes at a cost to the other pieces.
Re: (a), I think most people would flatly agree that Europe is most passionate about privacy, and the US is most passionate about self-expression. Safety, to my eye, seems to very rarely be favored at the cost of either of the other two. I'll grant that KOSA certainly would promote safety (in some ways, at least) at the cost of these two...but KOSA is also receiving fierce pushback on the basis of both privacy and self-expression! I'll confess I haven't researched this myself, but I feel relatively confident that if you looked at the discourse surrounding the passage of e.g. GDPR, you'd find considerably fewer people arguing about how this obsession with privacy could put safety at risk.
Re: (b), I first want to call out that it's possible to sometimes get wins in any of these areas without compromising on the others. ToxMod, our voice moderation tool, is specifically designed to catch the same number of safety violations while reviewing and collecting less user data—meaning that using ToxMod, as compared to other tools, can provide a straightforward privacy win at no cost to safety or self-expression. This is possible because right now, a lot of the things we're doing aren't actually optimal for safety, privacy, or self-expression. If we were already collecting only the minimum data necessary to achieve X level of safety, there would be no way to get more safety without sacrificing some privacy.
But today, many platforms are collecting far more than the minimum necessary data to achieve X level of safety, so we can achieve a pure win by just collecting less of the unnecessary stuff (which is what ToxMod does.)
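To make that "pure win" concrete, here is a deliberately naive sketch contrasting a collect-everything flow with a minimizing one that retains only flagged material for review; both surface the same violations. This is not how ToxMod works under the hood, just an illustration of the data-minimization principle, with made-up names throughout.

```python
# Illustrative only: collect-everything versus retain-only-what-moderation-needs.
# Not ToxMod's implementation; all names are made up.
from dataclasses import dataclass

@dataclass
class Clip:
    speaker_id: str
    transcript: str

def looks_harmful(clip: Clip) -> bool:
    """Stand-in for whatever detection a real system would run."""
    return "placeholder_threat" in clip.transcript.lower()

def retain_everything(stream: list) -> list:
    """Maximal collection: every clip is stored, harmful or not."""
    return list(stream)

def retain_minimum(stream: list) -> list:
    """Minimal collection: only flagged clips are kept, so the same violations
    get reviewed while far less user data is retained."""
    return [clip for clip in stream if looks_harmful(clip)]
```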
More broadly, we should look at this from an ecosystem level. Any individual platform, in my opinion, should have the right to, say, collect all the user data they can; or restrict any discussion of certain topics; or punish even unintentional offenses with permanent bans; or anything else. But in order for this to be possible, we would need two things. First, transparency, so that users know what they're signing up for. And second, alternatives, so that users who aren't happy with the restrictions can go somewhere else.
This is why I don't blink when users of more family-friendly games complain that we're restricting their "freedom of speech" by stopping them from using racial slurs as part of their trash talk. There are plenty of other platforms that allow that kind of trash talk. (Compare to the physical world. If someone showed up at a children's playground and started loudly talking about the history of racial slurs, I think most people would consider it reasonable to ask that person to go somewhere else—there are plenty of other places where they can have that conversation without putting children at risk!)
Have a story idea? Want to share a tip? Got a funny parenting story? Drop Patrick an email.
Also:
Transparency is important, but good transparency paired with good tools is a different matter. It will be easy for platforms to provide information, but will it be useful information?
This is just me, but if content is directed at children, in-app purchases should be as heavily regulated as advertising, which is to say there shouldn't be many, if any at all.
To that point, for a little while I was on the ad tier of Netflix because my wife and I weren’t watching much there, while the kids were able to watch everything in their section without a single ad showing up. Everything should be that way!
Thanks for posting the full interview. I agree with you that a "market-based"/parental-choice solution isn't really tenable: people are bombarded with too much information already, and additional transparency isn't going to help with that.
I'm also skeptical of any solution that requires collecting MORE data on users. Consider what I think is an analogous problem: age verification laws for porn sites (404media has probably the best encapsulation of it: https://www.404media.co/age-verification-laws-will-drag-us-back-to-the-dark-ages-of-online-porn/). Every additional site or service holding a giant database of personal and private information because of the growing number of know-your-customer laws just creates another point of failure in a hack or leak.
I think it would be interesting to talk to someone like Cory Doctorow to get his perspective on KOSA/COPPA 2.0 from a data privacy angle. https://pluralistic.net/2023/12/06/privacy-first/