This daily update about Meta's low EU average revenue per user and the Supreme Court and Section 230 was published on Tuesday, May 23rd, 2023. Good morning. On this morning's Dithering, John and I discussed Meta's EU fine and the potential wide-ranging impacts on tech. You can subscribe to Dithering's 15-minute episodes, not a second less, not a second more, using the link at the bottom of this email.
On to the update. Meta's low EU average revenue per user, or ARPU. Thursday's Stratechery Interview will be with Eric Seufert. I mention that today because he mentioned something in the course of our recording that I think is worth calling out as an addendum to yesterday's update.
From Meta's earnings call last month; this excerpt is from a two-part question, and I'm only including the relevant second part. And then on the regulatory front, can you just help explain how changes to the pending data transfer rules may impact your European business as they go into effect? Just trying to get a sense of whether you potentially have an IDFA-like situation with signal loss here again. Thank you.
So, first, I want to emphasize we continue to be hopeful that the new EU-US privacy framework will be implemented before the deadline for suspension. But if it comes to that, there's a lot that we don't know in terms of the specifics of a final order and how long a suspension order would last, which would be important variables in determining the overall impact. What we do know is that roughly 10 percent of worldwide ad revenue comes from ads delivered to Facebook users in EU countries. But there are more details that we would need to understand, including the impact on advertisers in EU countries, before we'd be able to really provide a more accurate or fulsome estimate of that impact.
Yesterday, I noted that Meta wouldn't be abandoning the EU over a $1 billion fine, plus whatever it might cost to abide by the EU's various rules and regulations. Given it made $25.79 billion in Europe last year, that's 22 percent of revenue. That, you will note, is a much bigger number than roughly 10 percent, which is to say that it appears that Meta makes more money in the non-EU European countries than it makes in the EU as a whole.
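To make that comparison concrete, here is the arithmetic as a quick Python sketch; the only input not quoted above is Meta's reported FY2022 worldwide revenue of $116.61 billion:

```python
# Back-of-the-envelope: how much of Meta's "Europe" revenue is EU vs. not.
worldwide_revenue = 116.61   # $B, Meta's reported FY2022 revenue
europe_revenue = 25.79       # $B, Meta's Europe segment, FY2022
eu_share = 0.10              # "roughly 10 percent" per the earnings call

eu_revenue = worldwide_revenue * eu_share     # ~$11.7B
non_eu_europe = europe_revenue - eu_revenue   # ~$14.1B
print(f"EU ~${eu_revenue:.1f}B, non-EU Europe ~${non_eu_europe:.1f}B")
```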
Roughly 10 percent of its 2022 revenue is only $11.7 billion. That's a surprisingly low number, and it increases the credibility of Meta's threat to potentially leave the market. I'm still skeptical, to be clear, because the value of a social network goes beyond ad revenue. Meta users in the rest of the world, including, say, the UK, have a better product by virtue of people in the EU being on the platform. That value goes the other way, of course: Meta's platforms are valuable to EU citizens because the rest of the world is already on them. That, by extension, raises an interesting possibility: could Meta make its platforms subscription-based in the EU?
It's not as outlandish a proposition as you might think. Here are Meta's relevant numbers by region for last quarter. Click the link in your show notes to see the full table, but the pertinent numbers here are that the US and Canada makes $48.85 per user, Europe makes $15.51 per user, Asia-Pacific makes $4.52 per user, and the rest of the world makes $3.35 per user. Keep in mind, though, that more than half of European revenue comes from non-EU countries, even though those countries have a combined population, 302 million, that is two-thirds the size of the EU's 447 million.
If we incorporate that 10% figure and split active users proportionally, we can separate the EU and non-EU segments in the table above. Click through to see the full table, but the relevant numbers here are: the US and Canada has an ARPU of $48.85; the EU has an ARPU of only $11.47; non-EU European countries have an ARPU of $20.84; Asia-Pacific is $4.52; and the rest of the world is $3.35. My understanding is that the UK is the biggest driver of Meta revenue in Europe, but I didn't realize just how much the EU trailed. In fact, EU ARPU is closer to Asia-Pacific and the rest of the world than it is to the US and Canada, or even to non-EU European countries, which again suggests that the UK is probably closer to that US and Canada mark.
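The split itself is just proportional arithmetic; here is a minimal sketch of the method, assuming the population figures above and revenue shares of roughly 10 percent (EU) out of 22 percent (all of Europe). The exact quarterly inputs aren't reproduced in this excerpt, so the outputs land near, rather than exactly on, the figures cited:

```python
# Proportional EU / non-EU split of Meta's Europe-segment ARPU.
europe_arpu = 15.51            # $/user/quarter, Europe segment
eu_pop, non_eu_pop = 447, 302  # millions of people

# Users are split in proportion to population...
eu_user_share = eu_pop / (eu_pop + non_eu_pop)  # ~0.60

# ...but revenue splits roughly 10% (EU) out of 22% (all of Europe).
eu_revenue_share = 0.10 / 0.22                  # ~0.45

eu_arpu = europe_arpu * eu_revenue_share / eu_user_share
non_eu_arpu = europe_arpu * (1 - eu_revenue_share) / (1 - eu_user_share)
print(f"EU ~${eu_arpu:.2f}, non-EU Europe ~${non_eu_arpu:.2f}")
# -> roughly $11.8 and $21.0, in line with the $11.47 / $20.84 above
```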
Now keep in mind these are quarterly figures. Meta would need to charge around $5 a month to make up that revenue, including App Store fees, but of course a lot of people would drop off, which means that number would need to be even higher in reality (the arithmetic is sketched below). That's obviously not going to happen, and besides, this is a social network. Of course Meta isn't going to charge for a subscription. Instead the company is giving EU users the option of filling out a web form to opt out of personalized advertising. It remains to be seen if that holds up in court.
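For the curious, here is one plausible path to that "around $5 a month" figure; the 30 percent App Store cut is my assumption for illustration, not a number from the update:

```python
# Back-of-the-envelope for the "around $5 a month" subscription price.
eu_arpu_quarterly = 11.47                   # $/user/quarter, from above
monthly_ad_revenue = eu_arpu_quarterly / 3  # ~$3.82/month to replace
subscription_price = monthly_ad_revenue / (1 - 0.30)  # gross up for a 30% fee
print(f"~${subscription_price:.2f}/month")  # ~$5.46, i.e. "around $5"
```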
Still, I thought this was a useful exercise. I never realized just how low EU ARPU was, which also helps explain the EU's approach: Meta clearly isn't earning that much from EU businesses, so it's no big deal to EU regulators to make using Meta for business less useful.
The Supreme Court and Section 230, from CNBC. The Supreme Court declined to address the legal liability shield that protects tech platforms from being held responsible for their users' posts, the court said in an unsigned opinion Thursday. The decision leaves in place, for now, a broad liability shield that protects companies like Twitter, Meta's Facebook and Instagram, as well as Google's YouTube, from being held liable for their users' speech on their platforms. The court's decision in these cases will serve as a big sigh of relief for tech platforms for now, but many members of Congress are still itching to reform the legal liability shield.
In the case, Gonzalez v. Google, the court said it would, quote, decline to address the application, end quote, of Section 230 of the Communications Decency Act, the law that protects platforms from liability for their users' speech and also allows the services to moderate or remove users' posts. The court said it made that decision because the complaint, quote, appears to state little, if any, plausible claim for relief, end quote. The Supreme Court will send the case back to a lower court to reconsider in light of its decision on a separate but similar case, Twitter v. Taamneh.
In that case, the family of an American victim of a terrorist attack sought to hold Twitter accountable under anti-terrorism law for allegedly aiding and abetting the attack by failing to take enough action against terrorist content on its platform. In a decision written by Justice Clarence Thomas, the court ruled that such a claim could not be brought under that statute. This is a technically correct reading of the Supreme Court's decision in these cases.
The actual opinion in Twitter v. Taamneh, though, which is not about Section 230, does in fact bode very well for Section 230 going forward. I wrote about Gonzalez v. Google the week it was argued and noted that the only real open question was whether algorithms were covered under Section 230: quote, the crux of this case goes to the second paragraph: algorithmic timelines and recommendation engines.
While it is noteworthy that the genesis of Section 230 was primarily about protecting kids from porn, at least as far as congressional intentions are concerned, it is pretty settled law at this point that platforms are not liable for the content posted on them by third parties, in the US anyways. The question in Gonzalez v. Google, though, is whether platforms are liable for their recommendations.
On one hand, from a purely legalistic perspective, I can definitely see the case for yes. Simply hosting content is distinct from promoting content into a user's feed. While that promotion decision is made by an algorithm and not by a human, it is editorial in nature. On the other hand, moderating some content but not other content is itself an editorial decision, and the entire point of Section 230 was to make clear that a good faith effort to moderate content did not mean that the moderator assumed liability for all the content on the platform.
Should that apply to the inverse? From a product perspective, meanwhile, or call it the reality perspective if you wish, a win for Gonzalez in this case would be a disaster for the way current platforms work. The fact of the matter is that one of the implications of there being zero marginal cost in terms of the production and distribution of content is that there is an overwhelming amount of content. This means that a lot of content, including spam, needs to be deleted. It also means that a superior user experience comes from the platform recommending content that you might be interested in.
End quote. Again, Twitter v. Taamneh, which was the actual case that was decided, was not about Section 230, but these two paragraphs from Justice Clarence Thomas's opinion are very pertinent. The mere creation of those platforms, however, is not culpable. To be sure, it might be that bad actors like ISIS are able to use platforms like defendants' for illegal and sometimes terrible ends. But the same could be said of cell phones, email, or the internet generally. Yet, we generally do not think that internet or cell service providers incur culpability merely for providing their services to the public writ large.
Nor do we think that such providers would normally be described as aiding and abetting, for example, illegal drug deals brokered over cell phones, even if the provider's conference call or video call features made the sale easier.
To be sure, plaintiffs assert that defendants' quote-unquote recommendation algorithms go beyond passive aid and constitute active substantial assistance. We disagree. By plaintiffs' own telling, their claim is based on defendants' quote, provision of the infrastructure which provides material support to ISIS, end quote. Viewed properly, defendants' quote-unquote recommendation algorithms are merely part of that infrastructure.
All the content on their platforms is filtered through these algorithms, which allegedly sort the content by information and inputs provided by users and found in the content itself. As presented here, the algorithms appear agnostic as to the nature of the content, matching any content, including ISIS content, with any user who is more likely to view that content. The fact that these algorithms matched some ISIS content with some users does not convert defendants' passive assistance into active abetting. Once the platform and sorting-tool algorithms were up and running, defendants at most allegedly stood back and watched. They are not alleged to have taken any further action with respect to ISIS.
That bit about algorithms being part of the infrastructure is important. It calls back to an important First Amendment principle known as content neutrality. Content neutral laws regulate speech without regard to what the speech is actually about, and are generally allowed under the First Amendment. As an example, you can have laws that require a permit for a protest, but those laws cannot be based on what the protest is about.
In this case, Justice Thomas seems to imply that as long as the algorithms weren't designed to push ISIS content explicitly, they're just algorithms, not explicit points of view about controversial topics, and platforms can't be held liable for whatever content they happen to surface. Moreover, to the extent they could theoretically be held liable, it would be for explicitly and purposely pushing content with the intent of causing harm.
That's the exact opposite of the complaint of many Section 230 critics, which is that platforms censor too much. Once again, though, the actual decision that was made here was not about Section 230. Still, I think this is a very good signal that Section 230 is on very solid ground, which is good news for tech. If Congress doesn't like it, they'll need to make a new law.
The Daily Update is intended for a single recipient, but occasional forwarding is totally fine. If you'd like to order multiple subscriptions for your team with a group discount, please contact me directly. And thanks for being a subscriber, and have a great day.