By Mallory Knodel

This week I spoke at the 70th session of the United Nations Commission on the Status of Women, in a session titled “Automating Justice: Can Artificial Intelligence Increase Women’s and Girls’ Access to Justice?” I also delivered a short intervention at an EU side event on Preventing and Combating All Forms of Cyber Violence Against Girls, my reflections on which you can read on GenderIT.

On the topic of AI, over the past several years I’ve written guidance for two main audiences: for courts navigating AI systems in judicial contexts, and for technology companies building the tools themselves. This talk was different. This was guidance for feminists, civil society leaders, technologists, and advocates working toward gender justice and equity, because AI governance is no longer a niche technical issue. It is shaping the systems that structure everyday life.

I came to the discussion as a technologist with nearly two decades of experience building internet infrastructure and now working on AI systems. Through my work with the Social Web Foundation, I also focus on emerging digital platforms and the infrastructure decisions that determine who benefits from technology and who bears its risks. I shared three reflections.

First, tech governance starts long before regulation.

AI governance doesn’t begin when a system is deployed; it begins in design. We can and should question the intentions behind system design, intervene in development, and monitor deployments.

In research I’ve done for US courts, one salient example is predictive policing. These tools promise efficiency by analyzing historical crime data. But historical data reflects historical inequality, and when systems optimize on the past, they risk reproducing its injustices. The intention was to make policing easier, which the technology might actually achieve, but at the expense of justice itself. Another panelist, from Equality Now, said in very strong terms that bias deepens gender imbalance.
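The reproduction dynamic can be made concrete with a toy model. The districts, numbers, and allocation rule below are invented for illustration, not drawn from any real deployment: if incidents are only recorded where patrols already are, and next year’s patrols are allocated in proportion to recorded incidents, an unequal starting point persists indefinitely even when the underlying crime rates are identical.

```python
# Toy feedback loop: "predictive" allocation entrenches historical
# over-policing even when true crime rates are identical.
# All numbers are invented for illustration.
true_rate = {"A": 0.1, "B": 0.1}   # same underlying rate in both districts
patrols = {"A": 80.0, "B": 20.0}   # historically unequal patrol allocation

for year in range(10):
    # Incidents are only recorded where patrols are present (expected counts).
    recorded = {d: patrols[d] * true_rate[d] for d in patrols}
    total = sum(recorded.values())
    # Allocate next year's 100 patrols proportionally to recorded incidents.
    patrols = {d: 100 * recorded[d] / total for d in patrols}

print(patrols)  # the initial 80/20 split persists year after year
```

Because the model optimizes on its own past outputs, the historical imbalance is reproduced rather than corrected, which is exactly the mechanism the panel discussed.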

AI systems don’t “generate” new, more equitable realities. The generation is a reproduction that mirrors existing patterns in data from the past. If that data encodes discrimination, the analysis and the generated outputs will scale it.

Bias is not a technical glitch at the margins. It can enter at every stage: design choices, training data, validation processes and deployment contexts. Governance by design means addressing these risks before systems negatively impact people’s lives.

Second, equitable tech democratization needs feminism.

We are living through a remarkable moment: extraordinarily powerful AI tools are widely accessible. They help people summarize information, draft documents, translate complex material, and navigate complicated systems.

This creates real possibilities for expanding access to justice, but access alone is not empowerment. The session closed with comments from Estonia, a fully digitalized European state. Governments should hold their own use of AI systems to the highest possible standard. Estonia’s delegate said that digitalisation and AI are not about efficiency or economics alone. They are the point at which we must ask, “who will benefit and who is left out?” These are fundamentally questions of power, responsibility and human rights, which feminism is well placed to answer.

We’ve seen this dynamic with social media platforms, which were celebrated as democratizing technologies that would ultimately lead to social change. Yet the imperfect implementations of this idea within the context of surveillance capitalism meant that data generated by billions of users ultimately fueled advertising systems optimized for profit, often at the expense of privacy and freedom of expression.

AI risks repeating this pattern. In my research on privacy-preserving AI, such as in encrypted environments, I examine how technical architecture can protect sensitive conversations and ensure people seeking help are not exposed to new forms of surveillance or harm.

If AI systems are to support survivors, legal aid seekers, and marginalized communities, they must include strong technical guardrails: privacy protections, meaningful data minimization, secure system design, limits on exploitative data extraction. Feminist principles can guide us towards the democratization of technology that does not exploit the marginalized.

Third, governance happens through standardization and setting global norms.

AI governance is often framed as a regulatory problem, but many of the most consequential decisions happen earlier: in technical standards, system architectures, procurement rules, and infrastructure design.

The global internet offers an important lesson. A shared set of open technical protocols allowed networks around the world to interconnect. That technical coordination also produced a unique governance structure: multistakeholderism, in which a multitude of expert stakeholders debate norms and evolve standards.

However, AI lacks comparable interoperability. The largest AI foundation models rely on proprietary data ecosystems and compete rather than interconnect. The AI business model rewards enclosure. As I mentioned earlier, AI was largely built upon enclosed, walled-garden social media platforms. As a result, technical interoperability, and the governance structures that accompany it, remain drastically underdeveloped.

Recent work on human rights in standards by the Office of the High Commissioner for Human Rights has emphasized the governance instruments that shape how rights are realized in practice: standards bodies, procurement frameworks, and regulatory regimes. They all must consider human rights from the start. Perhaps this is our opportunity for interoperability, and thus multistakeholder governance of AI.

The future of AI is not predetermined. These technologies will reflect the values embedded in their design and governance. Ensuring they expand justice for women and girls requires vigilance, participation, and collective responsibility.

Mallory is heading to India later this month to kick off something we’ve been building toward for a while: the first in a global series on AI and the social web.

Alongside the AI Impact Summit in Delhi, SWF is holding space for dialogue among a focused group to dig into what comes next as AI becomes agentic and social systems become protocol-based. Hosted by the Observer Research Foundation, our core organizational partners for the event include Project Liberty Institute, Public AI and Modal Foundation.

We’re bringing together people who build, govern, research, and challenge digital infrastructure to get to the heart of the real technical, economic, and governance tradeoffs across open social protocols: ActivityPub, AT Protocol, and DSNP.

AI systems are increasingly shaping how information is created, ranked, moderated, and governed online. At the same time, if open social protocols are to be viable alternatives to centralized platforms, we must anticipate risks and ensure they do not reproduce the same concentrations of power in the age of AI.

We’re curious about the emerging approaches to context, consent, and accountability when AI enters our feeds and slides into our DMs. In Delhi, we hope to surface shared priorities, hard constraints, and concrete next steps, and to lay the groundwork for ongoing protocol governance work that extends well beyond one protocol and this initial meeting.

Express your interest in joining us on 20 February at 11 am IST! https://luma.com/25chdoy8 

This week I (Mallory) represented SWF at the Digital Competition Conference hosted by the Knight Georgetown Institute. The conversations explored the cooperation among technology, law, and regulation that is needed to rein in market abuse by giant digital infrastructure providers. What made this conference especially valuable was how consistently the discussion balanced all three.

I was particularly happy to see a strong focus on technical realities: not just what regulators want to do, but what is actually feasible and desirable given how systems are experienced by users, and how they are built, secured, and governed today. The agenda was strong and featured four technical talks that cut through the usual abstractions and tackled real trade-offs head-on.

My colleague Daji Landis at NYU presented Security vs. Interoperability, which helpfully disentangles when security concerns are legitimate and when they are deployed rhetorically to block competition or interoperability.

Economist Yifei Wang from the University of Pittsburgh examined Competition and Privacy, probing where these objectives reinforce one another and where policy frameworks still treat them as falsely opposed.

And Thijmen van Gend from TU Delft along with co-author Seda Gurses closed with The PET Paradox, a critical look at how large platforms instrumentalize privacy-enhancing technologies in ways that can entrench, rather than loosen, market power.

You’d imagine a conference hosted in DC during this political climate would center the EU and US, but the discussion was genuinely international, including perspectives from authorities such as CADE in Brazil. Another standout was Gunn Jiravuttipong’s paper, The Global Race to Rein in Big Tech, which offered an unusually coherent overview of how competition authorities around the world are approaching digital markets.

A final moment that stuck with me came from Alexandre de Streel, who reframed the controversial “digital sovereignty” as “reducing digital dependency,” a more pragmatic and descriptive term. That framing is useful, but I think it should be extended slightly. Too often, policy debates imply there is only one path to reducing dependency: building national or regional alternatives to dominant, foreign platforms. In practice, there are at least three distinct options:

  1. Expect neutrality and norms from providers. This is the traditional assumption—but as we know by now, technology is not neutral, and governance choices inevitably shape outcomes. That doesn’t mean we give up on accountability for providers, but it isn’t the only remedy.
  2. Enable credible exit. Choosing better providers and avoiding lock-in only works if switching costs are lowered through interoperability, data portability, and real competition.
  3. Re-internalize capacity. Especially for large institutions and governments, there is a third option that deserves more attention: doing more technology provisioning in-house rather than defaulting to privatization or outsourcing in the first place.

All of this lands squarely in the terrain of the social web. Reducing digital dependency is not an abstract policy goal; it is a design challenge that shows up in protocols, defaults, governance, and who gets to participate on fair terms. Interoperability, credible exit, and institutional capacity are not just competition tools; they are preconditions for a pluralistic, resilient online public sphere. From this perspective, the social web is not a niche alternative to dominant platforms but a necessary counterweight.

Last week Mallory and the Internet Exchange team hosted a series of events related to human rights and the social web at MozFest 2025 in Barcelona.

Mallory joined Rabble on the revolution.social podcast. Check out the episode below or wherever you get your podcasts.

In addition, with Mozilla’s support we convened a MozFest Fringe Event on strategy for growing a healthy, sustainable and multi-polar social web in 2026 and beyond. Colleagues from Pangea, Tech for Palestine, World History Encyclopedia, Mozilla and many others joined us at Cantina Lab Can Batlló, a vegan restaurant cooperative in Barcelona.

I joined Anne Pasmanick on Power Station to talk about the new social media. I loved the title they chose for the episode, “This is for everyone and everyone should be able to contribute.” Power Station is a podcast about building civic space and people power and has hosted some amazing guests across its 386 (!) episodes. I am so glad I was able to share my story and why I believe open social media plays a crucially important role for movement building.

Please listen to the episode and send in your comments.

This month, the Social Web Foundation is joining the UN’s 20th annual conference on the internet, the Internet Governance Forum. Held in Oslo, Norway, IGF 2025 brings together policymakers, technologists, activists, and academics to address the most pressing questions about digital governance. From AI regulation to connectivity in underserved regions, the agenda reflects how internet governance is now inseparable from broader social, economic, and political concerns.

Mallory Knodel, Executive Director of the Social Web Foundation and founder of this newsletter, will be moderating a workshop on “Privacy-Preserving Interoperability and the Fediverse”, a session that speaks directly to the Social Web Foundation’s mission: a growing, healthy, sustainable and multi-polar Fediverse.

The session will examine a practical tension: interoperability allows people to move fluidly across platforms—whether it’s Mastodon, PeerTube, or other services in the Fediverse. Yet this fluidity exposes new privacy risks. For example, a user’s profile photo or contact list might unintentionally follow them from one service to another without explicit consent. To ensure the social web continues to grow in a responsible way, we need thoughtful policy, smart technical design, and cross-sector collaboration.

To tackle this, Mallory will be posing three concrete questions to a diverse panel featuring voices from academia, civil society, and the private sector:

  1. User agency: How can we design cross-platform data flows so that individuals—not servers—decide what travels with them?
  2. Legal alignment: What does real compliance with the GDPR look like for a decentralised network, and how might the Digital Markets Act nudge the large incumbents toward meaningful interoperability?
  3. Technical safeguards: Which standards or privacy-enhancing tools could make federation safer by default?
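On the third question, one privacy-by-default approach is data minimization at the point of federation: a server sends only the profile fields the user has consented to share, plus the fields the protocol needs to function. Here is a minimal sketch in Python; the field names follow ActivityPub conventions, but the consent mechanism itself is a hypothetical illustration, not an existing standard.

```python
# Hypothetical data-minimization filter for a federated profile.
# Field names follow ActivityPub/Activity Streams conventions, but the
# consent model is an illustrative assumption, not part of any standard.

ALWAYS_SHARED = {"@context", "id", "type", "inbox", "outbox"}  # needed to federate

def minimize_actor(actor: dict, consented: set) -> dict:
    """Return a copy of the actor document containing only fields that are
    either required for federation or explicitly consented to."""
    allowed = ALWAYS_SHARED | consented
    return {k: v for k, v in actor.items() if k in allowed}

actor = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": "https://example.social/users/amina",
    "type": "Person",
    "inbox": "https://example.social/users/amina/inbox",
    "outbox": "https://example.social/users/amina/outbox",
    "name": "Amina",
    "icon": {"type": "Image", "url": "https://example.social/media/amina.png"},
}

# The user consented to sharing their display name but not their photo.
shared = minimize_actor(actor, consented={"name"})
print(sorted(shared))  # "icon" is absent: the photo does not federate
```

Under a design like this, a user who consents to sharing their display name but not their photo can move between services without the photo silently following them.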

By grounding the discussion in technical and legal constraints, this workshop aims to develop practical, actionable recommendations that platforms, developers, and policymakers can adopt. We’ll refine these into a summary document outlining key takeaways and next steps, which we’ll share in a future edition of this newsletter.

This conversation also comes at a critical time. The momentum behind decentralized platforms is growing, but regulatory clarity and technical safeguards lag behind. Without coordination, we risk repeating the mistakes of Web2: centralisation of power, opaque data practices, and exclusionary design.

Attending the IGF is free! Whether you’re joining us in Oslo or tuning in online, we encourage you to participate. Your questions, insights, and lived experiences help shape the conversation. We’ll be taking audience questions during the session, and they’ll feed directly into the discussion.

We launched in September 2024 with a bold mission: to foster an open, decentralized, and user-centric social web. In just the few short months that remained in 2024, we made meaningful progress. Our participation at W3C TPAC and collaborations with major stakeholders like Mastodon, Ghost, and Automattic have helped spark momentum for a healthier, more resilient online ecosystem.

Today we are proud to publish our 2024 Annual Report– the first of many– to highlight our technical milestones and community engagement, as well as to spotlight the generous support from our funders and advisors. We also preview 2025 and how we plan to deepen our community engagement and technical work on protocol development.

Read the full report here to learn how we’re laying the foundation for a social web that serves everyone.

Download: SWF Annual Report 2024

RightsCon 2025– “rights” as in human rights and “con” as in tech conference– is less than a week away, and the SWF is hosting some exciting events taking place in Taipei. It’s the annual event for the human rights community, and below are several sessions that I am excited to be involved in. Please reach out if you’ll be attending and want to meet up!

How we build a new social web, Tuesday 25 February @ 9 am – SWF will moderate a discussion with leaders from Threads, DSNP, and Mastodon to discuss open standards, user agency, and the fediverse’s potential as a progressive evolution of social media.

Roundtable with privacy regulators, Wednesday 26 February @ 11:30 am – We’re hosting a conversation with Carly Kind (Australia) and Miguel Bernal-Castillero (Canada) on safeguarding data privacy in an era of tech monopolies and global surveillance.

Accountability on the net: building tools for a meaningful multistakeholder cooperation, Thursday 27 February @ 9 am – I’ve been invited to help lead a workshop on building an “Internet Accountability Compass” so governments and companies can be held to their commitments for an open, safe, and rights-respecting internet.

Lastly, Taiwan is a crucial location for the latest installment of Splintercon. It’s an event about internet censorship hosted by eQualitie (I’m on their board), and I’ll be moderating the first session, “Infrastructure, Policy, and the Geopolitics of Internet Fragmentation,” as well as delivering a talk on censorship circumvention without VPNs. Splintercon is a satellite event of RightsCon and participants need a ticket to attend. Some sessions will be recorded and shared online later.

SWF has joined the World Wide Web Consortium (W3C) to advance open standards for the social web. Evan Prodromou will be SWF’s advisory council representative as he continues in his role as maintainer of the ActivityPub and Activity Streams 2.0 specifications.

Extending and expanding the implementation of ActivityPub is a core area of work for SWF. ActivityPub is the only open standard that can underpin a social web that is truly interoperable, fostering innovation and usability across social platforms like Mastodon, Pixelfed, Ghost, Threads, Flipboard and an increasing number of other online spaces.
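For readers new to the protocol, here is the shape of a minimal Activity Streams 2.0 activity of the kind ActivityPub servers exchange, written as a Python dictionary. The field names come from the W3C specifications; the URLs and content are invented examples.

```python
import json

# A minimal "Create" activity wrapping a "Note", per Activity Streams 2.0.
# Field names follow the W3C spec; the URLs and content are invented.
create_note = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://example.social/users/amina",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {
        "type": "Note",
        "content": "Hello, social web!",
    },
}

print(json.dumps(create_note, indent=2))
```

In ActivityPub’s server-to-server flow, the sending server delivers a JSON document like this to each follower’s inbox endpoint, which is what lets independent platforms interoperate.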

The SWF and its founders have a history of engagement in the W3C. Evan was co-author on the Activity Streams 2.0 and ActivityPub specifications. As head of digital at ARTICLE 19 and later CTO of the Center for Democracy and Technology, I’ve engaged as both an invited technical expert and an advisory council member under CDT’s membership on topics related to technical standards and human rights issues.

The W3C Social Community Group is the primary vehicle for ActivityPub extensions and provides a structured process for introducing new features and updating the existing specifications. W3C CGs are open to anyone who would like to join and contribute.

SWF joins CDT as one of the few civil society organizations that comprise the Consortium. SWF’s membership in the W3C underscores our commitment to promoting an open and federated social web, and our alignment with the W3C mission to develop web standards through community consensus that ensures the long-term growth and accessibility of technical specifications that are openly licensed.

In terms of concrete ongoing work, we look forward to bringing end-to-end encryption to direct messages in ActivityPub, developing groups on the social web, supporting data portability with ActivityPub, and making discovery of ActivityPub objects and their authors easier. We’ll participate in ongoing maintenance and development of the core standards. We’ll also work alongside renowned experts in web accessibility, build relationships with companies and organizations that might become SWF supporters, and serve as a voice for the public interest in this important forum for multistakeholder internet governance.

Today’s public square is on private property. To fight inequality, participate in democracy, and build an equitable society and economy, we must build a true public commons beyond one or two corporate-owned, profit-driven spaces.

With the open social web, the production, distribution and use of online content is being freed from the silos erected by today’s largest multinationals. But our data is still captured in AI systems that have “reached scale”– essentially the search, recommendation, and moderation algorithms that manipulate content consumption. These are monopoly technologies by their very design.

The detrimental consequences to society of this status quo are diverse and well-documented: concentration of profits and power, degraded information and media landscapes, prominence of polarizing content, disinformation and online harassment, degraded mental health, and more.

There is a broad vision for something different. We do not have to settle for how corporate spaces choose to govern, or not govern, themselves. The promise of an open social web is that we can all join communities and that online communities can be interconnected.

The Social Web Foundation focuses on the network of platforms connected via ActivityPub. We also support efforts to make other distributed social networking protocols more open and equitable. For this reason, we are excited to support the #FreeOurFeeds campaign launching today. This campaign is an opportunity to develop the capacity needed for the open social web protocols– ActivityPub along with Bluesky’s AT Protocol– to better interoperate, leveraging the entire open social ecosystem to create a working demonstration of algorithmic pluralism at scale.

We look forward to contributing to the essential work that follows from the campaign #FreeOurFeeds: Infrastructure, Innovation Ecosystem, and Governance.

Operating open and independent infrastructure services for social applications is essential but costly. Some proposed outcomes of #FreeOurFeeds:

The #FreeOurFeeds seed funds can support an innovation ecosystem of public-interest alternatives. Just imagine the impact of third-party recommendations and moderation systems, LLM diversity, and curation, news and information discovery.

Open social web innovation in global governance spaces supports diverse online communities. Protocols need to remain capture-resistant and supportive of open innovation and interoperability across the open social web, and an industry-wide transition to interoperable protocols and public-interest digital infrastructure is needed.

#FreeOurFeeds is a public crowdfunder to realize this vision with material support, but it is also an opportunity to create a powerful public narrative, bring experts together and plot a strategy around the need for investing in algorithmic pluralism and digital public infrastructures. I’m honored to join the “custodians” who are supporting the vision for this emerging work including Nabiha Syed, Robin Berjon, Deepti Doshi, Marc Faddoul and Sherif Elsayed-Ali. We hope you’ll join us by signing up at freeourfeeds.com.