Mallory Knodel

Next week the human rights and tech community will convene once again, this time in Lusaka, Zambia. Each year RightsCon brings together practitioners from civil society, industry, governments, and the technical community. This is my tenth RightsCon and the Social Web Foundation’s second. Like last year, our participation in RightsCon is part of a broader commitment to ensuring that the development of digital infrastructure remains closely connected to public-interest values and to the communities most affected by technical design decisions.

This year, I will be participating in three sessions that reflect different but closely related strands of this work.

The first session, “Human rights reviews in internet standardization – what is at stake?”, focuses on the processes through which technical standards are developed and the implications those processes have for rights protections. Standards bodies such as the World Wide Web Consortium, the Internet Engineering Task Force, and the IEEE play a central role in shaping the digital environment, yet their work often remains difficult to access and unevenly influenced. The discussion will consider how and when human rights and privacy considerations are incorporated into these processes, what is at stake when they are not, and what conditions are necessary to enable more meaningful and representative participation in standardization work. We’ll be in AG03 at 11:30 am on May 6.

The second session, “A little less talk and a lot more action: Mobilising for feminist tech industry standards,” hosted by the United Nations Population Fund, turns to the question of “safety by design” in the context of technology standards. While this concept has gained increasing prominence as a corporate and policy framework, the session situates it within a broader set of concerns about whose experiences and priorities are reflected in how safety is defined. By grounding the discussion in feminist principles and human rights obligations, the session creates space to examine how current approaches may fall short, particularly for communities that are disproportionately affected by technology-related harms, including gender-based violence. Catch us in A101 at 10:15 am on May 7.

The third session, “From platforms to people: Reclaiming the internet through the Fediverse,” hosted by the Electronic Frontier Foundation and featuring co-panelist Bruce Schneier, focuses on the Fediverse as an alternative model for social networking infrastructure. The session will explore how federated and interoperable systems can support a more open and rights-respecting online environment, and what challenges remain in translating these models into systems that can operate at scale. In this context, the Fediverse is understood not only as a set of technologies, but as an evolving ecosystem shaped by governance choices, standards development, and the practices of its participants. The session is in AG01 at 3:15 pm on May 7.

Taken together, these sessions reflect the range of spaces in which questions about rights, governance, and infrastructure must be addressed as the social web develops. They also underscore the importance of sustained engagement across technical and policy communities. For SWF, RightsCon provides an opportunity to situate our work within this broader landscape and to contribute to ongoing conversations about how the new social media can better reflect and uphold human rights.

Last week at the Internet Governance Forum (IGF) Expert Group Meeting we considered what changes to this 20-year-old UN initiative are required now that the UN General Assembly has made it permanent. This small, invitation-only gathering was tasked with considering the future of the IGF as a permanent UN mandate and how it achieves outcomes. Now that the IGF’s place at the UN is secure, we can stop trying to prove that multistakeholder dialogue matters, and start showing what multistakeholder governance is capable of delivering.

This means civil society can pursue its substantive work within the IGF itself, rather than treating the annual meeting as a venue for outreach and promotion of work that ultimately happens elsewhere.

The IGF is being actively redefined, and the process is open to meaningful influence. I attended representing the Social Web Foundation, which is both a civil society organization and a key player in the technical community. My remarks were informed by other civil society organizations: the Association for Progressive Communications and its members.

Several core tensions and opportunities emerged across the discussions. Rather than treating these as either/or choices, in almost all cases I view the IGF as able to balance both:

  1. Host dialogue and influence decisions. There is clear pressure for the IGF to move beyond being a convening space and toward something that can influence decision-making processes. This includes stronger alignment with global frameworks like the World Summit on the Information Society (WSIS) and the Global Digital Compact, and more intentional pathways for IGF outputs to inform policy fora. This is possible without losing the open, iterative, multistakeholder dialogue that makes the IGF valuable. The approach is to enhance, and make more visible, the IGF’s ongoing work between annual meetings. As in standards bodies, the authority of intersessional outputs rests with those who participate in the processes that produce them, and on the fact that the output is part of a multistakeholder UN process.
  2. Institutionalize top-down and elevate bottom-up. The permanent mandate creates an opportunity to rethink the governance, structures, and operations of the annual meeting. At the same time there is broad recognition that the IGF’s legitimacy comes from its bottom-up nature, particularly through national and regional initiatives (NRIs). Embedding those processes more directly into governance is essential: it strengthens the case for top-down institutionalization while directing resources and attention to the more valuable bottom-up, direct-impact potential of the NRIs.

For those of us working on the social web, open protocols, and public-interest infrastructure, this moment is a significant one that can help leverage the IGF toward outcomes, not just outreach.

The IGF has long been a space where principles of openness, interoperability, and decentralization are articulated. Last year SWF hosted an IGF session on decentralized social media. What is changing now is the hope, or the expectation, that concrete ideas grounded in these principles can translate into real outcomes, both in policy processes and technical designs. To achieve this, two elements are needed: topic coherence and inclusion.

Concrete proposals already exist to strengthen topic coherence: organizing work into thematic clusters, streamlining and better coordinating ongoing work between annual meetings, and producing outputs that are targeted and usable. Cross-institutional collaboration, for example with standards bodies and other policy fora, suggests the potential impact of IGF intersessional work.

Strengthening intersessional work reflects a shift toward treating the IGF as an ongoing governance process, not just a yearly event.

Moreover, inclusion is addressed if participation can be reframed not just as a value but as infrastructure. Unlike other internet governance institutions, such as global standards bodies, the UN provides funding for participation, language accessibility, and mechanisms for meaningful engagement from underrepresented groups and developing countries. Substantively, the IGF can attract more structured engagement with governments, while simultaneously advancing openness to non-state actors in settings that have traditionally been the exclusive domain of multilateral diplomacy.

What was clear in this meeting is that this outcome is not predetermined. It is being actively constructed and influenced by those participating in the process. The next phase will include rounds of consultations on many of the subjects under consideration by this small expert group. It’s important to think about how to leverage the IGF for tangible outcomes that bridge standards development organizations (SDOs) and other sites of influence over internet governance.

With Modal Foundation, Public AI, and Project Liberty Institute, the Social Web Foundation wrote a paper on the expert workshop held at the AI Impact Summit in New Delhi.

Open social protocols promise to deliver user agency. As adoption grows, users encounter new tradeoffs across technical, economic, and governance dimensions. These tradeoffs are unique to open, federated architectures such as DSNP, ActivityPub, and AT Protocol, in comparison to the existing walled-garden platforms of big tech companies. The top-down control of dominant social media platforms manifests in centralized data accumulation, opaque ranking systems, surveillance tech revenue models, and locked-down architectures. Widespread concerns around data protection and privacy are commonly associated with algorithmic feeds and large language models powered by big tech companies’ AI products.

As new social and AI infrastructures emerge, it is important to consider not only whether they are open, but how they are governed. Since protocol design is not neutral, choices about how protocols structure identity, control over data, moderation, and economic incentives will shape who holds power in these new systems.
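As one concrete illustration of how protocol design allocates power, consider identity. The sketch below is a simplified, hedged contrast, not a normative specification: in ActivityPub an actor’s identifier is a URL on its home server, while AT Protocol separates a stable DID from the host that currently serves it. All example values are invented.

```python
# A simplified, illustrative contrast of identity design in two open
# protocols (example values invented; consult each spec for details).

# ActivityPub: the actor's id IS a URL on the home server. If the server
# disappears or bans the user, the identity (and the follower graph
# addressed to it) goes with it; migration requires a "Move" activity
# and cooperation from followers' servers.
activitypub_actor = {
    "id": "https://home.example/users/alice",
    "inbox": "https://home.example/users/alice/inbox",
}

# AT Protocol: identity is a DID, resolved to a DID document that names
# the current hosting server (PDS). Repointing the DID document lets the
# user change hosts while keeping the same identity.
atproto_identity = {
    "did": "did:plc:abc123exampledid",  # stable identifier
    "pds": "https://pds-a.example",     # current host, swappable
}

# The governance consequence: in one design the server effectively holds
# the identity; in the other the user (via the DID) does. Same goal of
# openness, but a different map of who holds power.
```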

We recommend research priorities, standards coordination, and experimentation with governance and funding models. The paper serves as an explainer on AI and Social Web protocols and includes contemporary analysis. It can be downloaded and shared on the Project Liberty Institute website.

By Mallory Knodel

This week I spoke at the United Nations 70th Commission on the Status of Women in a session titled “Automating Justice: Can Artificial Intelligence Increase Women’s and Girls’ Access to Justice?” The recording is available on the UN’s WebTV. I also delivered a short intervention at an EU side event on Preventing and Combating All Forms of Cyber Violence Against Girls; you can read my reflections on GenderIT.

On the topic of AI, over the past several years I’ve written guidance for two main audiences: courts navigating AI systems in judicial contexts, and technology companies building the tools themselves. This talk was different. It was guidance for feminists, civil society leaders, technologists, and advocates working toward gender justice and equity, because AI governance is no longer a niche technical issue. It is shaping the systems that structure everyday life.

I came to the discussion as a technologist with nearly two decades of experience building internet infrastructure and now working on AI systems. Through my work with the Social Web Foundation, I also focus on emerging digital platforms and the infrastructure decisions that determine who benefits from technology and who bears its risks. I shared three reflections.

First, tech governance starts long before regulation.

AI governance doesn’t begin when a system is deployed; it begins in design. We can and should question the intentions behind system design, intervene in development, and monitor deployments.

In research I’ve done for US courts, one salient example is predictive policing. These tools promise efficiency by analyzing historical crime data. But historical data reflects historical inequality. When systems optimize on the past, they risk reproducing its injustices. The intention was to make policing easier, which the technology might actually achieve, but it comes at the expense of justice itself. Another panelist from Equality Now said in very strong terms that bias deepens gender imbalance. 

AI systems don’t “generate” new, more equitable realities. The generation is a reproduction that mirrors existing patterns in data from the past. If that data encodes discrimination, the analysis and the generated outputs will scale it.
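To make that feedback loop concrete, here is a deliberately toy simulation (all numbers invented) of the predictive policing example: two neighborhoods with identical underlying crime rates but historically unequal patrol coverage. A model that allocates patrols from arrest history simply re-learns where police already were.

```python
# A toy simulation of how a "predictive" model trained on historical
# arrest records reproduces past enforcement patterns. All numbers are
# invented; real systems are far more complex.
import random

random.seed(0)

true_crime_rate = {"north": 0.10, "south": 0.10}   # identical by design
patrol_intensity = {"north": 0.2, "south": 0.8}    # unequal policing

# Historical records capture *arrests*, not crime: an arrest requires a
# crime AND a patrol present to observe it.
arrests = {"north": 0, "south": 0}
for _ in range(10_000):
    for hood in arrests:
        if (random.random() < true_crime_rate[hood]
                and random.random() < patrol_intensity[hood]):
            arrests[hood] += 1

# A naive model allocates tomorrow's patrols in proportion to
# yesterday's arrests, i.e. it optimizes on the past.
total = sum(arrests.values())
patrol_plan = {hood: round(n / total, 2) for hood, n in arrests.items()}

print("historical arrests:", arrests)
print("recommended patrol share:", patrol_plan)
# Roughly 80% of patrols go "south" despite identical crime rates: the
# data encoded where police looked, not where crime happened.
```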

Bias is not a technical glitch at the margins. It can enter at every stage: design choices, training data, validation processes and deployment contexts. Governance by design means addressing these risks before systems negatively impact people’s lives.

Second, equitable tech democratization needs feminism.

We are living through a remarkable moment: extraordinarily powerful AI tools are widely accessible. They help people summarize information, draft documents, translate dense material, and navigate complex systems.

This creates real possibilities for expanding access to justice, but access alone is not empowerment. The room closed with comments from Estonia, a fully digitalized European state. Governments should hold their own use of AI systems to the highest possible standard. Estonia said digitalization and AI are not about efficiency or economics alone; they are the point at which we must ask, “who will benefit and who is left out?” These are fundamentally questions of power and responsibility and human rights, which feminism is well placed to answer.

We’ve seen this dynamic with social media platforms, which were celebrated as democratizing technologies that would lead to social change. Yet the imperfect implementations of this idea within the context of surveillance capitalism meant that data generated by billions of users ultimately fueled advertising systems optimized for profit, often at the expense of privacy and freedom of expression.

AI risks repeating this pattern. In my research on privacy-preserving AI, such as in encrypted environments, I examine how technical architecture can protect sensitive conversations and ensure people seeking help are not exposed to new forms of surveillance or harm.

If AI systems are to support survivors, legal aid seekers, and marginalized communities, they must include strong technical guardrails: privacy protections, meaningful data minimization, secure system design, and limits on exploitative data extraction. Feminist principles can guide us toward a democratization of technology that does not exploit the marginalized.
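As a small illustration of what “meaningful data minimization” can look like in practice, here is a minimal sketch, assuming a hypothetical helpline tool that forwards messages to an external AI service. It strips the most obvious identifiers before text leaves the system; a real deployment would need much more (NER-based redaction, retention limits, encryption in transit and at rest).

```python
# A minimal data minimization sketch for a hypothetical helpline tool:
# redact obvious identifiers before a message is sent to any external
# AI service. Illustrative only, not a complete redaction pipeline.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def minimize(text: str) -> str:
    """Replace emails and phone numbers with placeholders before sending."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

message = "Call me at +1 555 201 8890 or write to survivor@example.org"
print(minimize(message))
# -> "Call me at [phone removed] or write to [email removed]"
```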

Third, governance happens through standardization and global norm-setting.

AI governance is often framed as a regulatory problem, but many of the most consequential decisions happen earlier: in technical standards, system architectures, procurement rules, and infrastructure design.

The global internet offers an important lesson. A shared set of open technical protocols allowed networks around the world to interconnect. That technical coordination also produced a unique governance structure: multistakeholderism, in which norms are debated and standards are developed by a multitude of expert stakeholders.

However, AI lacks comparable interoperability. The largest AI foundation models rely on proprietary data ecosystems and are in competition with one another. The AI business model rewards enclosure. As I mentioned earlier, AI was largely built upon enclosed, walled-garden social media platforms. As a result, technical interoperability, and the governance structures that accompany it, remain drastically underdeveloped.

Recent work on human rights in standards by the Office of the High Commissioner for Human Rights has emphasized the governance instruments that shape how rights are realized in practice: standards bodies, procurement frameworks, and regulatory regimes. They all must consider human rights from the start. Perhaps this is our opportunity for interoperability, and thus multistakeholder governance of AI.

The future of AI is not predetermined. These technologies will reflect the values embedded in their design and governance. Ensuring they expand justice for women and girls requires vigilance, participation, and collective responsibility.

Mallory is heading to India later this month to kick off something we’ve been building toward for a while: the first in a global series of dialogues on AI and the social web.

Alongside the AI Impact Summit in Delhi, SWF is holding space for dialogue among a focused group to dig into what comes next as AI becomes agentic and social systems become protocol-based. Hosted by the Observer Research Foundation, our core organizational partners for the event include Project Liberty Institute, Public AI and Modal Foundation.

We’re bringing together people who build, govern, research, and challenge digital infrastructure to get to the heart of the real technical, economic, and governance tradeoffs across open social protocols: ActivityPub, AT Protocol, and DSNP.

AI systems are increasingly shaping how information is created, ranked, moderated, and governed online. At the same time, if open social protocols are to be viable alternatives to centralized platforms, we must anticipate risks and ensure they are resistant to the same concentrations of power in the age of AI.

We’re curious about the emerging approaches to context, consent, and accountability when AI enters our feeds and slides into our DMs. In Delhi, we hope to surface shared priorities, hard constraints, and concrete next steps, and to lay the groundwork for ongoing protocol governance work that extends well beyond one protocol and this initial meeting.

Express your interest in joining us on 20 February at 11 am IST! https://luma.com/25chdoy8 

This week I (Mallory) was at the Digital Competition Conference hosted by the Knight Georgetown Institute, representing SWF. The conversations unlocked the cooperation between tech, law, and regulation that is needed to rein in market abuse by gigantic digital infrastructures. What made this conference especially valuable was how consistently the discussion balanced all three.

I was particularly happy to see a strong focus on technical realities: not just what regulators want to do, but what is actually feasible and desirable given how systems are experienced by users, and how they are built, secured, and governed today. The agenda was strong and featured four technical talks that cut through the usual abstractions and tackled real trade-offs head-on.

My colleague Daji Landis at NYU presented Security vs. Interoperability, which helpfully disentangles when security concerns are legitimate and when they are deployed rhetorically to block competition or interoperability.

Economist Yifei Wang from the University of Pittsburgh examined Competition and Privacy, probing where these objectives reinforce one another and where policy frameworks still treat them as falsely opposed.

And Thijmen van Gend from TU Delft along with co-author Seda Gurses closed with The PET Paradox, a critical look at how large platforms instrumentalize privacy-enhancing technologies in ways that can entrench, rather than loosen, market power.

You’d imagine a conference hosted in DC during this political climate would center the EU and US, but the discussion was genuinely international, including perspectives from authorities such as Brazil’s CADE. Another standout was Gunn Jiravuttipong’s paper, The Global Race to Rein in Big Tech, which offered an unusually coherent overview of how competition authorities around the world are approaching digital markets.

A final moment that stuck with me came from Alexandre de Streel, who reframed the controversial “digital sovereignty” as “reducing digital dependency,” a more pragmatic and descriptive term. That framing is useful, but I think it should be extended slightly. Too often, policy debates imply there is only one path to reducing dependency: building national or regional alternatives to dominant, foreign platforms. In practice, there are at least three distinct options:

  1. Expect neutrality and norms from providers. This is the traditional assumption—but as we know by now, technology is not neutral, and governance choices inevitably shape outcomes. That doesn’t mean we give up on accountability for providers, but it isn’t the only remedy.
  2. Enable credible exit. Choosing better providers and avoiding lock-in only works if switching costs are lowered through interoperability, data portability, and real competition.
  3. Re-internalize capacity. Especially for large institutions and governments, there is a third option that deserves more attention: doing more technology provisioning in-house rather than defaulting to privatization or outsourcing in the first place.

All of this lands squarely in the terrain of the social web. Reducing digital dependency is not an abstract policy goal; it is a design challenge that shows up in protocols, defaults, governance, and who gets to participate on fair terms. Interoperability, credible exit, and institutional capacity are not just competition tools, they are preconditions for a pluralistic, resilient online public sphere. From this perspective, the social web is not a niche alternative to dominant platforms but a necessary counterweight.

Last week Mallory and the Internet Exchange team hosted a series of events related to human rights and the social web at MozFest 2025 in Barcelona.

Mallory joined Rabble on the revolution.social podcast. Check out the episode wherever you get your podcasts.

In addition, with Mozilla’s support we convened a MozFest Fringe Event on strategy for a growing, healthy, sustainable, and multi-polar social web in 2026 and beyond. Colleagues from Pangea, Tech for Palestine, World History Encyclopedia, Mozilla, and many others joined us at Cantina Lab Can Batlló, a vegan restaurant cooperative in Barcelona.

I joined Anne Pasmanick on Power Station to talk about the new social media. I loved the title they chose for the episode, “This is for everyone and everyone should be able to contribute.” Power Station is a podcast about building civic space and people power, and it has hosted some amazing guests across its 386 episodes! I am so glad I was able to share my story and why I believe open social media plays a crucially important role in movement building.

Please listen to the episode and send in your comments.

This month, the Social Web Foundation is joining the UN’s 20th annual conference on the internet, the Internet Governance Forum. Held in Oslo, Norway, IGF 2025 brings together policymakers, technologists, activists, and academics to address the most pressing questions about digital governance. From AI regulation to connectivity in underserved regions, the agenda reflects how internet governance is now inseparable from broader social, economic, and political concerns.

Mallory Knodel, Executive Director of the Social Web Foundation and founder of this newsletter, will be moderating a workshop on “Privacy-Preserving Interoperability and the Fediverse”, a session that speaks directly to the Social Web Foundation’s mission: a growing, healthy, sustainable, and multi-polar Fediverse.

The session will examine a practical tension: interoperability allows people to move fluidly across platforms—whether it’s Mastodon, PeerTube, or other services in the Fediverse. Yet this fluidity exposes new privacy risks. For example, a user’s profile photo or contact list might unintentionally follow them from one service to another without explicit consent. To ensure the social web continues to grow in a responsible way, we need thoughtful policy, smart technical design, and cross-sector collaboration.
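To make the tension concrete, here is a minimal sketch of why a profile photo “follows” a user across the Fediverse: in ActivityPub, the photo is part of the public actor document that any federating server fetches in order to display that user. The consent field at the end is hypothetical, not part of the spec; it gestures at the kind of extension this discussion points toward.

```python
# Abbreviated ActivityPub actor document (JSON-LD), as served by a user's
# home server. Any federating server that needs to render this user (for
# example, to show a reply) fetches this same public document.
actor = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": "https://home.example/users/alice",
    "type": "Person",
    "preferredUsername": "alice",
    "inbox": "https://home.example/users/alice/inbox",
    "followers": "https://home.example/users/alice/followers",
    # The profile photo travels with the actor by design: every remote
    # server that fetches the actor receives the image URL too.
    "icon": {"type": "Image", "url": "https://home.example/media/alice.png"},
}

# Hypothetical, NOT part of ActivityPub: a per-field consent scope that a
# privacy-preserving extension might add, so remote servers could honor a
# user's choice about which attributes propagate off the home server.
actor["consent"] = {"icon": "followers-only", "followers": "private"}
```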

To tackle this, Mallory will be posing three concrete questions to a diverse panel featuring voices from academia, civil society, and the private sector:

  1. User agency: How can we design cross-platform data flows so that individuals—not servers—decide what travels with them?
  2. Legal alignment: What does real compliance with the GDPR look like for a decentralised network, and how might the Digital Markets Act nudge the large incumbents toward meaningful interoperability?
  3. Technical safeguards: Which standards or privacy-enhancing tools could make federation safer by default?
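On that third question, one existing safeguard worth naming is “authorized fetch” (Mastodon calls it secure mode): servers refuse unsigned requests for actor data, so operators can see, and block, who is collecting profiles. The sketch below is a toy illustration of the gate, with signature verification reduced to a stub; real servers verify HTTP Signatures cryptographically, and the header name used here is a simplified stand-in.

```python
# Toy illustration of "authorized fetch": serve actor documents only to
# signed, non-blocked requesters. Signature verification is a stub.
from dataclasses import dataclass, field


@dataclass
class Request:
    path: str
    headers: dict = field(default_factory=dict)


BLOCKED_INSTANCES = {"scraper.example"}
ACTORS = {"/users/alice": {"id": "https://home.example/users/alice"}}


def verify_signature(request: Request) -> str | None:
    """Stub: return the signing server's domain if the signature is valid.

    A real implementation resolves the keyId to the signer's public key
    and cryptographically verifies the Signature header over the request.
    """
    key_id = request.headers.get("Signature-KeyId")  # simplified stand-in
    return key_id.split("/")[2] if key_id else None


def serve_actor(request: Request):
    signer = verify_signature(request)
    if signer is None:
        # Open federation would serve this anyway, which is exactly what
        # lets anonymous third parties harvest profile data.
        return 401, "signature required"
    if signer in BLOCKED_INSTANCES:
        return 403, "blocked"
    return 200, ACTORS.get(request.path, {})


# An unsigned fetch is refused; a signed fetch from a non-blocked server works.
print(serve_actor(Request("/users/alice")))
print(serve_actor(Request("/users/alice",
      {"Signature-KeyId": "https://friendly.example/actor#main-key"})))
```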

By grounding the discussion in technical and legal constraints, this workshop aims to develop practical, actionable recommendations that platforms, developers, and policymakers can adopt. We’ll refine these into a summary document outlining key takeaways and next steps, which we’ll share in a future edition of this newsletter.

This conversation also comes at a critical time. The momentum behind decentralized platforms is growing, but regulatory clarity and technical safeguards lag behind. Without coordination, we risk repeating the mistakes of Web2: centralisation of power, opaque data practices, and exclusionary design.

Attending the IGF is free! Whether you’re joining us in Oslo or tuning in online, we encourage you to participate. Your questions, insights, and lived experiences help shape the conversation. We’ll be taking audience questions during the session, and they’ll feed directly into the discussion.

We launched in September 2024 with a bold mission: to foster an open, decentralized, and user-centric social web. In just the few short months that remained in 2024, we made meaningful progress. Our participation at W3C TPAC and collaborations with major stakeholders like Mastodon, Ghost, and Automattic have helped spark momentum for a healthier, more resilient online ecosystem.

Today we are proud to publish our 2024 Annual Report, the first of many, to highlight our technical milestones and community engagement, as well as to spotlight the generous support from our funders and advisors. We also preview 2025 and how we plan to deepen our community engagement and technical work on protocol development.

Read the full report here to learn how we’re laying the foundation for a social web that serves everyone.

SWF Annual Report 2024 (Download)