Hey Citizen,
Imagine a world where the spread of ideas and the flow of information are manipulated by powerful actors (blah blah blah). We all know the story: our perceptions and thoughts are being influenced from all sides. And while there has been some action to rein in this influence, it keeps getting subtler, which is precisely what makes it the real problem to address.
We digest the ideas that float to the surface. Out of thousands of opinions, a few are “magically” curated for us by an “innocent”, opaque recommendation system chasing a single, one-sided objective, the kind of setup that breeds instrumental convergence. We nurture ourselves with these ideas regularly, taking our daily information meal, completing a near-perfect analogy to fast-food consumption.
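To make “one-sided objective” concrete, here is a minimal Python sketch of the core of such a ranker. Everything in it is hypothetical: the `Post` fields, the `predicted_engagement` score, and the numbers are illustrations, not any platform’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_engagement: float  # hypothetical model output: clicks, likes, watch time
    predicted_wellbeing: float   # tracked here only to show it never matters below

def rank_feed(posts: list[Post]) -> list[Post]:
    # The entire curation decision collapses into one scalar.
    # Accuracy, diversity, and well-being never enter the sort key.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

feed = rank_feed([
    Post("calm-explainer", predicted_engagement=0.31, predicted_wellbeing=0.9),
    Post("outrage-bait",   predicted_engagement=0.87, predicted_wellbeing=0.1),
])
print([p.post_id for p in feed])  # ['outrage-bait', 'calm-explainer']
```

Whatever correlates with that one scalar gets amplified, whether or not anyone intended it. That is the “magic” behind what floats to the surface.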
Most people continue to consume content that, while regulator-approved, is still unhealthy, yet it never feels urgent enough to change the circumstances. There’s a saying in the data science community: “garbage in, garbage out”. Fed junk food and junk data alike, most of us are by now spewing mostly garbage.
The tech companies behind these systems optimize for revenue, often skirting the edge of regulatory requirements in an ongoing cat-and-mouse game.
We all witnessed the social media CEOs testifying before Congress, and many of us found their policies disturbing, yet not entirely surprising. However, the topic soon faded from public discourse, as we assumed that regulators were addressing the issue.
The truth is, it’s incredibly challenging for regulators to influence these algorithms in any meaningful way beyond the obvious step of reducing the promotion of divisive content. The problem runs deeper than the algorithms themselves; the real challenge lies in achieving clarity on how ideas—both factual and false—are disseminated across these platforms. Social media companies have little to no (or even negative) incentive to actively understand their platform’s micro and macro dynamics. And that’s understandable—this challenge requires its own ecosystem (stay tuned for a future post on this topic).
Just as we don’t blame the earth for hosting natural disasters, we can’t fully blame these platforms for the dynamics that unfold on them. What we lack is a robust system to understand and predict those dynamics, the way a weather service helps us mitigate the impact of storms. We rely on forecasts to prepare for bad weather; we need the same visibility into public discourse to head off societal crises.
Now, imagine we had a powerful tool that allowed us to peek behind this digital curtain—a tool that could untangle the flow of information.
Researchers, journalists, and watchdogs once had such a tool.
It was called CrowdTangle. And now, Meta has shut it down.
TL;DR:
On August 14, 2024, Meta officially shut down CrowdTangle, a social media monitoring tool used by researchers, journalists, and fact-checkers worldwide. CrowdTangle allowed users to track how content spread across Facebook, Instagram, and other platforms. Meta cited the tool's limitations and the development of a new system, the Meta Content Library (MCL), as reasons for the closure. However, many experts argue that the MCL falls short of CrowdTangle's capabilities and accessibility, potentially hampering efforts to study misinformation, track political discourse, and ensure platform accountability.
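To give a feel for what was lost, here is the kind of query researchers could run against CrowdTangle’s API. I’m reconstructing the endpoint, parameter names, and response shape from memory of the now-retired public docs, so treat every identifier below as an assumption; the API itself no longer accepts requests.

```python
import requests  # third-party: pip install requests

# Endpoint and parameters as I recall them from the retired CrowdTangle docs.
API_URL = "https://api.crowdtangle.com/posts"

params = {
    "token": "YOUR_API_TOKEN",       # was issued per CrowdTangle dashboard account
    "searchTerm": "election fraud",  # follow a narrative across public pages and groups
    "sortBy": "overperforming",      # posts spreading faster than their usual baseline
    "startDate": "2024-01-01",
    "count": 100,
}

response = requests.get(API_URL, params=params)
for post in response.json()["result"]["posts"]:
    stats = post["statistics"]["actual"]
    print(post["date"], stats["shareCount"], post.get("message", "")[:80])
```

One call like this was enough to watch a narrative’s spread across public pages and groups over time; the critiques linked below argue that the Meta Content Library makes this kind of workflow considerably harder.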
For the curious rabbits wanting to dive deeper, here are some links as a solid start:
CrowdTangle founder Brandon Silverman’s thoughts and analysis report
Meta Is Getting Rid of CrowdTangle — and Its Replacement Isn’t As Transparent or Accessible
And I get it: raising alarms about social media transparency might feel like just another item on a long list of modern concerns. For every citizen of the internet, it can be exhausting to care deeply about yet another pressing issue. We’re all busy, swamped by a constant flood of drama, news, work, relationships, or just plain nothing. But this isn’t just another issue; it’s about the digital world we’ve grown into, the inseparable space that strongly shapes our lives and minds.
As has always been the case throughout history, our thoughts are inevitably framed by sophisticated strategies of which we have had only the barest taste. Generative AI is pushing these issues to criticality, making it more urgent than ever to create convenient tools and processes for uncovering these influences.
This call to uncover these complex influences is what I see as the social media transparency imperative.
Let’s take this as a wake-up call. This is the century where we must learn to efficiently discern what enters our minds, building mental and technological tools that help us navigate, filter, contextualize, and serve information in alignment with our personal and societal incentives.
Looking ahead, I want to make this clear: consuming social media through algorithms built by profit-driven mega-companies, in a market with little competition and complex dynamics, sends a loud message that personal well-being is out of scope. As discussed earlier, it’s not entirely their fault; we’re all just playing the game within the current system. But the question is, don’t you want to take your turn?
On that note, I’d like to mention that we’re quietly working on developing tools that could gradually help shift this dynamic, adding to the efforts of others already addressing these important challenges. More on that soon™.