Apr 6, 2026

Predators, Algorithms, and Profit: How New Mexico Took Down Meta


Summary

In this episode of Straight White American Jesus, we sit down with New Mexico Attorney General Raúl Torrez for a wide-ranging conversation about his office’s landmark case against Meta and what it reveals about the dangers embedded in today’s social media platforms. At the center of the case is “Issa,” a fictional teenage user created as part of an undercover operation that exposed just how quickly young users can be targeted with explicit content and sexual solicitations. Torrez walks us through how what once existed in the darkest corners of the internet has migrated onto mainstream platforms—and how Meta’s own algorithms and product design not only failed to stop it, but in some cases appeared to amplify it. By focusing on design choices rather than user-generated content, Torrez and his team were able to sidestep Section 230 protections and argue that the platform itself plays an active role in facilitating harm.

The conversation also explores the broader implications of the case, from the addictive nature of social media to its parallels with Big Tobacco. Torrez argues that waiting for definitive long-term studies on harm is a luxury we can’t afford, pointing instead to the immediate psychological, social, and physical risks facing young users. Looking ahead, he outlines potential remedies—including age verification, algorithmic reform, and independent oversight—as well as ongoing litigation against other platforms like Snapchat. The discussion closes with a warning about the next frontier: artificial intelligence. Without clear accountability and proactive regulation, Torrez suggests, the harms posed by AI could eclipse those of social media. This case, then, may represent not just a legal victory, but the beginning of a broader shift toward tech accountability in the United States.

Meet The Guest

Raúl Torrez

Raúl Torrez is New Mexico’s 32nd Attorney General. A former federal prosecutor and senior advisor in the U.S. Department of Justice, he has dedicated his career to public service and strengthening the rule of law. Prior to being elected Attorney General, he served as the elected District Attorney for the Albuquerque metro area, where he led one of the state’s largest law offices.

Transcript

Brad Onishi: Welcome to Straight White American Jesus. I'm Brad Onishi, author of American Caesar, founder of Axis Mundi Media. Usually at the beginning of the week I do a solo episode to break down some headlines or an issue or an article, but today I have an interview that I just have to share with you, and I wanted to make sure we got it out quickly. So today I'm joined by Attorney General Raúl Torrez of New Mexico. He's New Mexico's 32nd Attorney General. And this is the AG who you might have seen a headline, you might have read an article, some of you might have dug into this deeply, but this is the AG who took on Meta in court and won just a couple of weeks ago—a landmark decision about Meta's lack of protections for young people, a settlement of over $300 million, and a second court appearance and a second half of the case to be decided here in about a month that will involve more deliberation about exactly how much money will be awarded and how much Meta will need to pay. But as I get into with—representative, or excuse me, with Attorney General Torrez—this is a case that could be the kind of Big Tobacco moment for Big Tech and for social media, because instead of talking about them as kind of platforms or as a bulletin board for content where they can skirt responsibility for the stuff that is posted, the gross, disgusting content, the predatory content, the ways their products expose underage people to predators, to harmful images, videos and so on—this may change the game. Now, some of you may not be convinced of that, but I did ask him about this.

I asked him about the nuts and bolts of the case, the way that they created a fake profile of an underage girl to demonstrate the ways that young people are preyed upon on Meta platforms, but also what the product design means and the addictive propensities of some of these products, and how all that fits into the mix. I want to thank some of you in our Discord community who really helped me flesh out some of my questions and thoughts about this interview, and I hope that it's something that will shed some light on where we're headed. I think you know that for me, Big Tech is a constant concern because of the fascist elements that are now in Silicon Valley. It's a big part of my forthcoming book American Caesar. So having a chance to talk about this with Attorney General Torrez, for me, was important. It taught me a lot, and I hope it does for you, too. Before we go to the interview, I want to ask you to do a couple things. Think about subscribing to our newsletter. Want to ask you to go subscribe to One Million Neighbors, a new podcast series from Axis Mundi Media about how Americans in the Midwest, specifically the Twin Cities, helped to resettle one million Southeast Asian refugees in the 1970s and how that set the table for the activism and neighborism we are seeing today in St. Paul and in Minneapolis. Want to ask also for you to think about becoming a subscriber. That's the only way we can do this show. You can find that in the show notes. It's 50 bucks a year, and it is exactly why we are here so often, bringing you interviews, coverage, analysis, It's in the Code and everything else. Appreciate all of you. Hope that you learn a lot from our conversation. Here we go.

As I just said, we just have an extra special guest today, somebody who I just can't wait to talk to, and who has done something that I think a lot of folks felt like maybe would never happen, and that is held Meta accountable in some way. So that is Attorney General Torrez from the great state of New Mexico. Thank you for joining me.

Raúl Torrez: Thanks for having me. Appreciate it. It's great to be here.

Brad: And I have so many questions about this case and so many issues I want to try to flesh out here. I know people are intensely interested in this. I know you're doing interviews all over the place to talk about it, but the case really centers on a teenage girl named Issa who signed up for Facebook, was ready to kind of get on there, like all of us, and be on social media. Soon, she is inundated with unwanted messages, people in her inbox sending her X-rated photos. But the problem is, Issa is not real. Issa is part of Operation MetaPhile. Tell me about that.

Raúl: Well, it actually starts with my work about 20 years ago as an Internet Crimes prosecutor in this same agency. I used to work on child pornography and child solicitation cases, and it was a real eye-opening sort of moment for me to realize that what was occurring 20 years ago in the deepest and sort of darkest corners of the internet had all migrated onto some of the biggest social media platforms. I think there was a growing sense of awareness around the psychological impact, the addictive nature of these products, the way in which it amplified issues around body image, suicidal ideation and self-harm, but it was the element of potential sexual exploitation that really got me interested, and what that led to was the development of a slightly different approach from a litigation standpoint, that really honed in on what it would be like to be a young girl in these spaces, and that's how the undercover account of Issa was developed. It was developed using the same techniques that we use in criminal investigations. As you said, she was inundated, absolutely flooded, with requests for graphic sexual material, sexual solicitations. But what was even more shocking is that in response to that explosive growth, rather than raising some concern, Meta had actually delivered information to the account about how Issa could grow her following, how she could amplify and monetize that growth. And I think that was the moment where it really came through that this was a much deeper and darker problem inside the company, and one that only grew as we got further and further along in the case.

Brad: One of the phrases that came out of the case was that Facebook acts as a "virtual victim identification service." And I'm wondering if you can help us understand what that means.

Raúl: Well, it really comes down to the mechanics of the algorithm and product design. Arturo Bejar, who was one of the leading whistleblowers, who worked on the safety team and did some research inside the company and then came forward with revelations about their lack of concern or the lack of response to some of the issues that he raised—when he was on the stand, he said, "Look, these products are very good at connecting people with their interests. And if you have an interest in young girls, the product will be very good at connecting you with young girls." And if you think about it, most people who are on social media platforms, they do so—you know, they go there trying to connect with friends, trying to have social interaction—but at the same time, they're creating a digital sort of representation of the things that interest them, the things that they are motivated to look at, the content that they are drawn to. Those same mechanics, you know, for somebody who's just in the space operating in a normal way, it may connect them with a vacation that they may be interested in, or a car that someone's trying to sell them, or a pair of sneakers. But if it's a predator, what it's going to do is connect them with others—with young people, with other users on the platform—users that the platform knows and can identify as fitting the interests of someone who's engaged in predatory behavior. And it's those mechanics that drive exploitation in these spaces, and frankly, those mechanics that were at the heart of the product liability case that we presented in Santa Fe.

Brad: Correct me if I'm wrong, from what I understand, Facebook employees were aware of the scope of this kind of going back to 2018, pre-pandemic.

Raúl: There's been a deep awareness in the company for some time as to the addictive nature of the product itself, which was another key component of the presentation we made to the jury. This is a company that employs, you know, behavioral scientists, folks that are constantly trying to adjust the user interface and add features that will increase engagement. At the same time, there are aspects and elements of that experience that lend themselves to the kind of predatory behavior that we've identified. And one of the things that we were primarily concerned about is the way in which adult users were able to communicate with underage users. Ironically and sadly, there used to be a place where referrals to law enforcement would occur, where we could see some of that communication traffic within the company's own messaging apps. The day after we filed the lawsuit, the company actually made the decision to implement end-to-end encryption, and what that effectively meant is that they blinded themselves to the nature of the communication that was at issue in our case. They have since pulled back from that. Right before we got a verdict in the case, they announced that they were going to stop doing end-to-end encryption, I think in part because they recognized that that was going to be a really damning thing, not only in the eyes of the jury, but in the eyes of the public. Because that's the kind of design choice and feature that was implemented under the guise of protecting people's privacy, but what it really meant is blinding themselves to a widespread traffic of sexual exploitation. By their own accounts, something on the order of half a million children every single day all over the planet are exposed to sexually explicit or sexually exploitative material, and that's just based on what we know from their own accounts. What we also know is they just haven't made nearly enough of the investments needed to make those spaces safer.

Brad: So in order to win this case, you had to do something that has proved really difficult to do, and that is circumvent or somehow reckon with Section 230. So folks listening, some of you are going to be highly aware of this. You're product designers. You're in the space. Some of you are not. Section 230—the underlying communications law goes back to 1934, and the provision itself was added in the mid-'90s. But it basically says, if you're a computer service provider, if you're somebody who's a bulletin board for social media posts, online content, et cetera, you cannot be held liable for the information that's posted by somebody else, somebody who's using your platform, somebody who's using your virtual bulletin board. And usually, in these cases, Meta and other platforms have been able to say, "We are the bulletin board. We are not the content creators. We're not the authors." How did your strategy in this case get around the kind of usual Section 230 defense?

Raúl: Yeah, so Section 230 was enacted as part of the Communications Decency Act in 1996. So for context, I mean, this is back at a time when I was still waiting for a dial-up tone, you know, through AOL. There were no smartphones, there were no social media platforms, and the technological landscape has evolved dramatically since. So the first and most important thing for people to understand is that the lawsuit wasn't about content. It's not about the third-party content of individual people, even individual people who have an abhorrent and criminal fascination with children. What it was focused on were the specific design choices, the features of the product itself, and in that sense, it's very analogous to what had happened with Big Tobacco in the late 1990s. That was a product that was knowingly and intentionally engineered to be addictive. It was knowingly a product that was dangerous to young people, and the company, despite being well aware of those dangers and those known potential harms, misled and lied to the public, including parents and young people, about its relative safety. And so what we did is we focused on how the product is designed without respect to any content. In other words, it's content-neutral. Infinite scroll is a good example—the idea that you are given or fed an automatic display of video doesn't have anything to do with the content, but they know it's a feature that is explicitly designed to engage a developing brain in a way that plays on their propensity for addictive behavior. We also can point to specific features that enhance and facilitate the connection between predators and underage people. And so by focusing on design and by focusing on communications from the company about that design, we were able to prevail against a motion to dismiss under Section 230. It was certainly something that they have hidden behind for years. They tried to get out of this lawsuit using that same theory. 
Fortunately, each state's consumer protection laws are something that I think are going to be a vehicle for advancing this kind of litigation, not only here, but across the country.

Brad: To take on a company like Meta—how do you do it? You're a state Attorney General. We talked about how big Meta is. How do you even go about finding the capacity, the resources to take that on?

Raúl: Well, the reality is they have an incredible legal machine. I think at certain points in time, they had something like 12 or 15 different law firms. They have seemingly unlimited resources. They have the finest and most expensive lawyers they can find. In terms of the financial components or the resource challenges, I will give a lot of credit to my counterparts across the country. New Mexico—we might not have as much money as the state of California or the state of New York, but we were in collaboration with dozens of other states, and that made a big difference for us to be able to assemble a case that could really take on some of the most sophisticated lawyers in the country. I will also give a lot of credit to just our work here in this office. I'm not sure that they took us seriously, at least initially, because of where we are as a state, because of the fact that we are a relatively small and poor state. And I think underestimating New Mexico is typically not a very good idea.

Brad: I mean, I want to go back to—you said 12 to 15 law firms at once. I think to folks that may sound like, it's not even just, well, they have the best lawyers, or they'd have the best law firm. They have, you know, 12 to 15 of the best law firms on this. And yeah, I was a little bit like, that sounds impossible to even take on. You've also spoken, and your office has testified, at congressional hearings. You've done this and so forth. And I wonder if—and I'm not saying this is definitely how you felt, but I wonder if you did have a sense at any point like, "We may not win this, but we need to testify. We need to be in this fight to hopefully move a needle in some way for protections for young folks." Did any of that go into your thinking? Or was it always, "Yeah, we're here to play hardball. We're gonna win"?

Raúl: I mean, I think you have to do a couple things. You have to have a lot of confidence, and you have to tell yourself you're going to win from day one. The reality is it's a big, scary bet, and the gamble we were taking was really extraordinary, and something that I am sure that, you know, there were many members of my team, many of my colleagues, who had concerns about the direction we had decided to go in and concerns about whether or not we could sort of see it through. Because at the end of the day, if you don't have the right result, and you know, you can put all of that together, and you end up with a decision that goes in the wrong direction, or an outcome that means there's no accountability for Meta, then we would be back to square one. And so, you know, there were those concerns. But I also think that we felt incredibly confident about the case itself, and we felt confident about our position, about the evidence we had, and about the strategies that we were taking. And part of the reason why I was pretty comfortable with it is because I knew the company had a lot of baggage, and I knew and had a pretty good sense that there were people who we could turn to who could speak to some of the internal problems inside the company, people who had access to information, documents that showed what the company knew and the way they knew it. And so from that perspective, I actually always felt pretty confident. And you know, you have to have that, obviously, as you're going into a long process, and you're spending lots of time, energy and resources. You have to believe in your case, or else you're not going to be able to sustain it.

Brad: Part of what, you know, when we think about what platforms are and what they do, when there's content on there, I think a lot of people became more familiar and conversant with this—in terms of thinking about this Section 230 kind of stuff—when Trump was kicked off of platforms, and there were a lot of discussions about free speech and censorship and these kinds of things. But what you did was target—it wasn't "the problem is the content." It's "the product itself is the problem," and then we're gonna go after Meta and say, "You have a product defect." So when did you kind of make that turn? Like, "Wow, I gotta go after the product instead of the content"?

Raúl: Well, I think, you know, that was part of the early strategic discussions, is that I had real skepticism about the legal challenges that were there, especially from the immunity angle. As you said, these companies have spent decades litigating pretty heavily and defending their content or their platform immunity, and they've been pretty successful at it. So that was a real consideration, frankly, all along. And so when we were looking at how to structure our case, we spent a lot of time thinking about what happened in Big Tobacco and how that led to major policy reforms in that space. But if you look at Big Tobacco, a lot of it was also about the way that they designed and engaged in predatory practices as it related to product design, the way they modified nicotine to make it more addictive. It was the way they lied to young people about the harms of the products. Those same mechanics, I think, are very present in the social media space, and here with Meta. And so that was always sort of my preferred approach, is I wanted to go into this space and take on Meta using the language and under a theory that I did not think would be challenged in the same way by 230. I wanted to make it very clear that we were not going after them because they were hosting speech. We were going after them because they had knowingly designed a product to be addictive and destructive, and that's a fundamentally different type of liability. And so that was our approach. And I think it was more persuasive. I think it was more grounded in common sense, from the perspective of most people that I've talked to. Everyone will say, "Look, I intuitively understand what I am getting when I am using a social media platform, but I also understand at an intuitive level that it's too addictive. Why does my three-year-old know how to scroll through TikTok? Why does my five-year-old figure out the algorithm? And how do I keep them away from YouTube Shorts? 
And how do I stop them from constantly trying to access the platforms and the screen time?" And I think if you talk to people at sort of a very instinctual, very basic level, they understand that these products are fundamentally addictive and potentially harmful. And so for me, that's just always felt like a better argument, a better case.

Brad: Yeah, I think the addictive thing is so important, and I think you're absolutely right. I mean, I will say there are certain things that I've covered over the years, and I'll go back years later and look at things I wrote that now feel like very obvious, but at that time, they weren't obvious, or they weren't in the conversation. And I think in years to come, maybe even in months to come, the addictive aspect of all this is going to be something that's just like a given. But I think most people now know it. We know it. We might not have been thinking about it in terms of a case or in terms of law, but to say someone picks up their phone 200 to 250 times a day—we know those stories, and I think that the kind of intuitive sense you're tapping into is so important. If we think about the verdict of $300 million—New Mexico will be able to use this for technology programs in schools and, I'm sure, in the area of keeping kids safe, and so forth. But it seems like there's a second part of the verdict yet to come in about a month. Can you tell us what happens next?

Raúl: So most of the settlement was civil penalties. The penalties are designed with punishing Meta in mind, not necessarily, you know, as a mechanism to get funding to some program, more so ensuring that the company is paying for their violation of state laws, and the law sets out that those have to be paid. The interesting part of it is we've entered into a consent decree that requires a number of things. Most critical among them is that they are required to appoint a third-party auditor, and that auditor will have full access to the way in which they are managing safety and security, protection of young people, enforcement of terms of service, and they will have five years during which they will have to report back on a semi-annual basis what they are finding, and they can come back to the court and they can bring claims against Meta for any kind of violation, and they can expose that publicly. That, to me, was the biggest piece that we were fighting for, is that transparency, ensuring that they are now under a mandate and under a restriction that if they are not complying, we can hold them accountable in a court of law, and everybody can see what they're doing and to what extent they are failing to meet the commitments they made to keep kids safe. The additional component that is currently scheduled for about a month from now, mid to late February, is a second case where we'll be arguing for an injunction. That's designed to force the company to make policy and technical changes. Some of those changes will be things like greater transparency in certain areas, but other changes will be more technical in nature, and they would require the company to make fundamental changes to the way that product is designed, the way that their algorithms work, the ways they handle accounts that target, exploit, and operate in predatory ways in terms of targeting young children. 
The question is, to what extent do we win injunctions and to what extent do we lose injunctions on that second round? That's not yet decided.

Brad: What about a state or two right next door? I saw New Mexico, of course, is adjacent to five different states, Arizona being one of them. I just wonder about this being one state in the union. Is this the beginning of a domino effect that will happen in different jurisdictions, where this kind of thing continues to build, and we could see a year from now that 15 states have done this against Meta, or are at various stages of this?

Raúl: I hope so. I think, you know, as I said at the beginning, the case against Meta was never really in a vacuum. It was a coordinated effort across the country. Almost three dozen states are part of this litigation. Some of them aren't going to trial. They settled early on, but that still is sort of indicative and reflective of the broad consensus on this issue. But what's even more interesting to me is what this signals to other companies. I think what it should signal is, if you're operating in a space, especially if you're hosting young people, and if you are misleading parents and the public about what you're doing to keep those kids safe, that's not acceptable. Companies can no longer think that they can be dishonest with the public, that they can make claims that "We're taking care of this. We don't allow underage users. We don't allow predators. We take care of this on our own. We have a safety team"—but meanwhile they know it's occurring, that they're lying to the public. That can't be the standard, because they will be held accountable. And you can see it is starting to have some traction in some of these spaces. TikTok, I think, is very aware that they're at risk too. YouTube has some of those same exposures. So I do hope that the verdict against Meta is something that is going to prompt a broader conversation on technology, social media, AI, and otherwise, because I think part of the policy problem is we've allowed these companies to develop with a sense of invincibility, and because they're unaccountable, they haven't taken their obligations seriously. And so it was incredibly important to me that we send that accountability signal, that someone's going to take a harder look in this space if you're not responsible.

Brad: You mentioned TikTok. I'm glad you did. Just want to go back to something you said earlier. You were a prosecutor in Internet Crimes and these kinds of things. I've been following this story, not as a prosecutor, but you know, watching, and just last week the Supreme Court heard oral arguments about TikTok. And I was really interested in how they were thinking about the platforms in this case, in TikTok, as either content carriers or neutral platforms that are just kind of hosting stuff in kind of an impartial way, and many of the justices were questioning that idea from the beginning: these are not just platforms. These are not just places where people post where, you know, the company has no say in what they're putting up there. The algorithm—all of it works against that. And I'm just curious to hear your thoughts on how that might relate to kind of what you're doing, the verdict, the kind of signal that Meta has received around this, how that might speak to a case like TikTok.

Raúl: Yeah, you're exactly right. I mean, you know, there's this disconnect as it relates to the obligations of companies in these spaces, and you get that from policymakers and legislators, but you also get it from the judicial system and the courts and judges who sometimes seem lost when you try to explain the mechanics of the way these platforms operate. It's been an incredible sort of blind spot in the policy and political dialogue for a long time, and it has to change, partly because we're running out of time. I mean, in the case of social media now, you know, some of these platforms have been around for 15, 20 years. Many of the reforms should have occurred at the inception of these platforms; there were some cautionary flags raised early on about where this was all headed, and we didn't see a lot of response. The fact that these cases have been successful, the fact that you're starting to see some political pressure around it, that's what we need to get to some sort of bipartisan consensus. But part of the problem with policymakers is they're still thinking of these companies as if they were dumb platforms, that they were not directing content, that they're just allowing people to post content, people read content, and they have no say in the matter. And that has not been the case for a long time, if ever, but it's certainly not the case now. And you can see it. Anyone can see it. All it takes is for you to look up something particular that you're interested in on YouTube, and I can tell you, within two or three videos, you can see it is directing you in a particular direction, and you can even see it go directly to a darker corner of the internet, depending on what you're loading up. And so those algorithms and those elements of product design that make decisions that direct content in particular ways, those are editorial decisions. These companies are not neutral platforms. 
And you know, I keep being surprised that more people don't understand that fundamental reality. Now, maybe some of the justices on the Supreme Court in that oral argument—you got some sense that they at least have a basic level of awareness, and I think that's important. I think those kinds of decisions are going to start to reflect that. And I suspect, whether it's on these issues or on other tech issues, you'll see probably pretty shortly some decisions that are pretty different from the way some of these cases resolved 20, 30 years ago, when we were in a different age with different dynamics and we did not have the lived reality of just what these platforms can do and have done in terms of destroying individuals' lives and damaging our body politic.

Brad: Yeah, I was thinking about The Social Dilemma, which I know a lot of listeners have watched—the documentary on Netflix. I think at this point it's several years old, and I was gonna ask if you'd seen it. And it sounds like you don't need to see it, but for folks that have seen it, there's a specific part of it where they're talking about the ways that people are creating algorithms, and one of the ways they do that is to think about how to keep people on a platform as long as possible. And that very premise allows for these products to be addictive, or pushes you to where things get more extreme as you're clicking on things. And when a platform is created on that basis, or algorithms that kind of run a platform are created on that basis, that's when I think, you know, there should be concern. I mean, it's not neutral. Of course, you're doing something to make money off of user data to sell to advertisers, to third parties, and to do that, you got to keep them on the platform as long as possible. Anyway, I don't know if you'd want to respond to that.

Raúl: No, I think that's exactly right. And if there's one fundamental truth, it's that all of these companies are profit-driven. That's not a criticism; it's just the reality. They are there to make money, and to make money, they have to give advertisers and marketers some reason to go with them as opposed to going somewhere else. They have to find some mechanism. And they've certainly created a reality where they can target people and deliver content to individuals that should be the most compelling, the most salient, based on the advertiser's interest and what they're hoping to push. And so, if you think about it from a financial perspective, all of the decisions about user engagement, all of the decisions about addiction, all the decisions about taking a young person into as deep a part of the platform as you can, they are all driven by profit. That makes sense from the company's perspective and their interest in staying competitive and making lots of money. But from our standpoint, as a public, as a society, that is clearly operating against our interests. And I think at some point you have to confront it. I don't think that means we've got to go in and regulate all of them out of existence. But I do think that, at a very basic level, requiring some modicum of transparency and accountability about product design and its impact shouldn't be beyond the pale. That shouldn't be a very hard lift. And my hope is that, from a litigation standpoint, a policy standpoint, and a political standpoint, we can get some people to come together around a very basic understanding that these companies have to be transparent and accountable, and right now they're not.

Brad: It seems like there's a place there. You know, there are folks in the Senate and folks in the House; there's bipartisan energy around this. I just wonder about how much Big Tech money is getting thrown around, the way it is with Big Oil or Big Pharma or, as you just brought up, Big Tobacco. When you think about this getting done in Congress, getting done in state legislatures, how do you think about the fact that, in the end, there's a lot of money going to candidates from Big Tech that would kind of tell them, "It's not worth it to come after us. It's better to just let us operate, and we'll protect you politically"?

Raúl: That's a real problem. And that same problem exists with Big Tobacco and with Big Pharma, and it's going to exist in technology. I have more optimism that we'll be able to get something done, partly because a number of states have already passed laws in this space, and there is political pressure on members of Congress to do something. And on this issue, I don't think the people of this country are on the side of the companies. You can see it in the polls, and that kind of agreement is not very common; usually you get a lot of political division about who's aligned and who's not, and where people are going to place their support. I mean, parents, Democrats, Republicans, they all feel like they can't keep their kids safe. Democrats, Republicans, they all feel that these companies have lied to them and need to be held accountable. From my experience, when you have political consensus and public consensus at the same time, you can't stop it. Even if money comes in, you're not going to be able to kill it. You might slow it down, but you're not going to be able to stop it. And so my hope is that we get to a point where there's enough political pressure and enough public awareness that we're able to push through something and get some real progress.

Brad: Just quickly, as a kind of follow-on to that: one of the frustrations is that, you know, Big Tech is being friendly with the incoming administration. Folks, including Zuckerberg, are going to the inauguration. They're giving substantial amounts of money and so forth. Does that kind of stuff make you just a little bit worried that we may hit a wall?

Raúl: Well, I think if you zoom back and take a look at the last administration, you see a lot of alignment between the Justice Department and Elon Musk, and a lot of friendliness with the Silicon Valley billionaires, but you also see a lot of legal victories. You see them being held accountable in different forums, with juries. And so it may be that the relationship with the incoming administration delays congressional action, and it may slow some federal-level regulatory work. But I do think the market, meaning the legal market, the litigation arena, those kinds of cases are not going to stop. And I also don't think state-level legislative work is going to stop. So we have to continue moving the ball forward. It might be more difficult to get certain things passed at a federal level, but I do believe we just have to keep pushing. And if there's one dynamic, as you say, with Musk and with Zuckerberg getting more and more friendly with the incoming administration, I think that becomes an even bigger liability for some of these companies, in terms of their branding and their image. To the extent that they are viewed more and more as extensions of MAGA or the incoming administration, I think that's going to lead to more people getting concerned about oversight, where that power resides, and how we constrain it. So it might be something that leads to more problems for them, as opposed to fewer. I think there'll be some protections, but I think it could become a liability, because I don't think anybody wants to live in a space where the richest man in the world is essentially sitting in the White House and directing policy and government. And I think more people understand that that's dangerous.
And if that's the thing, then we are seeing an even more extreme version of what we saw in the first Trump administration.

Brad: Yeah, or just billionaire kind of oligarchy in general, which obviously has tremendous implications for democratic culture. So let me ask you, going forward, you are aware of the kinds of stuff I cover, the kinds of issues I talk about. We talk a lot about Christian nationalism, religious extremism here, and I know on Friday there's a case where you won, I think, in the 10th Circuit, around abortion, medication abortion, and that's a big victory for New Mexico. It involves Walgreens. Can you share anything about that, and what that verdict, what that victory means?

Raúl: Well, I think the big issue, and we've been litigating it now for some time, is that Walgreens has allowed individual pharmacists to deny patients access to medication based on their religious views. Now, we are very sensitive to freedom of conscience and freedom of faith, but my view, and the view of the state, is that you can't allow individual employees to stand in the way of their employer's obligation to dispense pharmaceuticals and medication to people who need it, and to do so because of somebody's individual religious belief. And so we sued Walgreens for allowing that to happen in certain parts of the country, and they kept challenging that suit. Now we've gone to the appellate level, and we've scored another win in terms of our ability to continue litigating that case. So I think you're going to see more and more of this. The key piece there is the defense a lot of these companies fall back on: freedom of religion. "Well, we're protecting the individual pharmacist's right to express their faith in their work and to follow their religious values in this space." That just can't be the excuse anymore for shirking those obligations. At a very basic level, it's just not acceptable. And I think you're going to see more and more litigation in this space, around contraceptive access, abortion access, or otherwise. But the key piece is going to be ensuring that we draw those lines, which we think are pretty important, in terms of access to this kind of care.

Brad: Yeah, and the same principle applies elsewhere. I used to do pharmacy work, and the same principle applies around Narcan. You know, people will come in, "I need Narcan." "Oh, you're probably an addict, or so-and-so; I'm not gonna give you that." And that obviously cannot stand. The number of lives that have been saved, and will be saved, through Narcan access is tremendous. And a lot of times it's the same kind of religious conscience question. So, we've talked about addictive properties, and we've talked about the ways predators can reach folks more easily through these platforms. If they're that addictive for all of us adults, it's even more intense for young people. And if they're that addictive, then the predators can find young people, because those kids are on the platforms all the time. And you have to worry about the algorithms moving young people toward content that is more and more harmful. I wonder, wrapping this up, is there anything you'd want to say to parents, to school administrators, to folks who are just like, "I'm just trying to protect my kids"? Is there any advice you'd give them as you wrap up this case? You have kids yourself; I'm a dad. Just for me, what does someone on your end of this do with tech for their kids?

Raúl: Well, I mean, I think the first thing is this: if you think you can keep up with the technology yourself, or monitor all of it yourself, you're just in denial. You just can't do it. And so the critical piece is to be aware of the kinds of spaces your kids are operating in, and to have someone who's a little more tech-literate help you. I mean, I've been in this space now for 20 years, and I still rely on people more tech-savvy than me to help me navigate it. The key piece here is that these products are not safe. So if you're going to allow your kids to operate in those spaces, you have to be taking steps to be aware of what they're experiencing. But also, don't assume, and don't expect, that Meta or YouTube or TikTok or any of them are going to protect your kids. They are not doing that. They say they're doing it; they tell you they have measures in place, but you know they don't. You have to understand that. And that's part of what we're talking about in this case: you have to stop taking them at their word, and you have to stop believing that they're going to do that. There are a lot of people working on additional layers of protection and parental controls, and things are starting to be developed, but a lot of it still has real limitations. The key piece is: be aware of the risk, and take it seriously. Think about how you would behave differently if you knew your kids were in an unsafe environment, or if you were sending them out somewhere. And don't assume that the platform is going to take care of business for you. They are not.

Brad: A long way. Just to go back to something you said earlier: we hear a lot about protecting children on both sides of the aisle, often in different ways and via different issues. This seems to be a bipartisan place where folks in Congress could actually reach across the aisle and get something done. I know we need to wind down, so let me ask you one more question. You are very concerned about AI. I've seen, coming out of your office, a lot of work to battle what might be called the threat of AI. How do you see the threat of AI relating to what we've talked about today, the addictive propensities in technology and the predatory possibilities for underage people?

Raúl: Well, you see some of the same parallels: a lack of disclosure, a lack of clarity from the companies with customers and users about the nature of the potential harms and the limitations of the technology itself. You see some of the same problems with policymakers running into a complex problem that they don't fully understand, and so the response becomes, "Well, let's do nothing." I don't think that's acceptable. We introduced an AI accountability and deepfake bill. Unfortunately, it didn't get a lot of traction in the last legislative session, so we'll be reintroducing it. It's all predicated on clear lines of accountability and potential liability for people who are misleading customers and misleading vulnerable populations about this. But I think if you take a step back and look at the verdict we just had against Meta, it's incredibly important that we signal accountability now, as we approach the end of this first, or maybe second, age of social media, before we are fully immersed in the new wave of artificial intelligence. We cannot allow this idea that companies like this can act with impunity, that they are unaccountable. We have to set guardrails and send signals to market actors now, so that when they're developing the tools and technologies of the future, including and especially AI, they understand at a very basic level that they will be held accountable for the product design choices they make, and they will be held accountable for the things they lie to the public about. If they understand that, and if they had understood it back at the inception of the social media age, I think we'd be in a very different space.
So I view it very much as my responsibility to start sending those signals now and then, also assuming some responsibility for trying to navigate as best I can what basic parameters and safeguards we can put in place, because the harms that are potentially there from artificial intelligence, I think, will probably dwarf what currently exists in social media. So it's incredibly important that we act with a sense of purpose and urgency in this moment.

Brad: If we've lived through the second Gilded Age, I hope this is the beginning of a new progressive era. We would never have had a 40-hour work week, or Labor Day, or Saturdays off, if not for all of the reforms and regulations then, and I hope this is the beginning of some of that now. I know you're back in court in about a month. AG Torrez, thank you for your time; we appreciate all the information and perspective you've given. Are there things people can look out for on the horizon, in addition to the second part of this case? Other things happening on these frontiers?

Raúl: Well, to your point, we have active investigations on a number of AI companies right now, and I anticipate some formal legal action to be taken in that space in the very near future.

Brad: That's great. Thank you so much. We appreciate your time. Thank you.

All right, that'll do it for us today. Thanks for listening. Send in your feedback, give us a comment, put it on our Discord, and let us know what you think about what we talked about. If you can, go subscribe and leave a review, five stars, and tell us what you think on Apple Podcasts; it really does help. Go to straightwhiteamericanjesus.com. It's a brand new website, we're super proud of it, and it has so much info there. You can also go to axismundi.us. We're a podcast network built for public scholars to bring their expertise into the public square, so that everybody can learn, understand, and see the wonders of religion, the pro-social and pro-democracy aspects, but also the threats that religion poses to our democracy. We'll see you Wednesday for It's in the Code, and Friday for the weekly roundup. Thanks for being here.
