By Mallory Knodel
This week I spoke at the United Nations 70th Commission on the Status of Women in a session titled “Automating Justice: Can Artificial Intelligence Increase Women’s and Girls’ Access to Justice?” The recording is available on the UN’s WebTV. I also delivered a short intervention at an EU side event on Preventing and Combating All Forms of Cyber Violence Against Girls; you can read my reflections on GenderIT.
On the topic of AI, over the past several years I’ve written guidance for two main audiences: courts navigating AI systems in judicial contexts, and technology companies building the tools themselves. This talk was different. This was guidance for feminists, civil society leaders, technologists, and advocates working toward gender justice and equity, because AI governance is no longer a niche technical issue. It is shaping the systems that structure everyday life.
I came to the discussion as a technologist with nearly two decades of experience building internet infrastructure and now working on AI systems. Through my work with the Social Web Foundation, I also focus on emerging digital platforms and the infrastructure decisions that determine who benefits from technology and who bears its risks. I shared three reflections.
First, tech governance starts long before regulation.
AI governance doesn’t begin when a system is deployed; it begins in design. We can and should question the intentions behind a system’s design, intervene in its development, and monitor its deployment.
In research I’ve done for US courts, one salient example is predictive policing. These tools promise efficiency by analyzing historical crime data. But historical data reflects historical inequality. When systems optimize on the past, they risk reproducing its injustices. The intention was to make policing easier, which the technology might actually achieve, but it comes at the expense of justice itself. Another panelist from Equality Now said in very strong terms that bias deepens gender imbalance.
AI systems don’t “generate” new, more equitable realities. The generation is a reproduction that mirrors existing patterns in data from the past. If that data encodes discrimination, the analysis and the generated outputs will scale it.
Bias is not a technical glitch at the margins. It can enter at every stage: design choices, training data, validation processes and deployment contexts. Governance by design means addressing these risks before systems negatively impact people’s lives.
Second, equitable tech democratization needs feminism.
We are living through a remarkable moment: extraordinarily powerful AI tools are widely accessible. They help people summarize information, draft documents, translate dense material, and navigate complex systems.
This creates real possibilities for expanding access to justice, but access alone is not empowerment. The room closed with comments from Estonia, a fully digitalized European state, whose representative said that digitalization and AI are not about efficiency or economics alone: they are the point at which we must ask, “who will benefit and who is left out?” Governments should hold their own use of AI systems to the highest possible standard. These are fundamentally questions of power, responsibility, and human rights, which feminism is well placed to answer.
We’ve seen this dynamic with social media platforms, which were celebrated as democratizing technologies that would ultimately lead to social change. Yet the imperfect implementations of this idea within the context of surveillance capitalism meant that data generated by billions of users fueled advertising systems optimized for profit, often at the expense of privacy and freedom of expression.
AI risks repeating this pattern. In my research on privacy-preserving AI, such as in encrypted environments, I examine how technical architecture can protect sensitive conversations and ensure people seeking help are not exposed to new forms of surveillance or harm.
If AI systems are to support survivors, legal aid seekers, and marginalized communities, they must include strong technical guardrails: privacy protections, meaningful data minimization, secure system design, limits on exploitative data extraction. Feminist principles can guide us towards the democratization of technology that does not exploit the marginalized.
Third, governance happens through standardization and setting global norms.
AI governance is often framed as a regulatory problem, but many of the most consequential decisions happen earlier in technical standards, system architectures, procurement rules, and infrastructure design.
The global internet offers an important lesson. A shared set of open technical protocols allowed networks around the world to interconnect. That technical coordination also produced a unique governance structure: multistakeholderism, in which norms are debated and standards evolve among a multitude of expert stakeholders.
However, AI lacks comparable interoperability. The largest AI foundation models rely on proprietary data ecosystems and compete with one another. The AI business model rewards enclosure. As I mentioned earlier, AI was largely built upon enclosed, walled-garden social media platforms. As a result, technical interoperability, and the governance structures that accompany it, remain drastically underdeveloped.
Recent work on human rights in standards by the Office of the High Commissioner for Human Rights has emphasized the governance instruments that shape how rights are realized in practice: standards bodies, procurement frameworks, and regulatory regimes. They all must consider human rights from the start. Perhaps this is our opportunity for interoperability, and thus multistakeholder governance of AI.
The future of AI is not predetermined. These technologies will reflect the values embedded in their design and governance. Ensuring they expand justice for women and girls requires vigilance, participation, and collective responsibility.
As part of my book “ActivityPub: Programming for the Social Web”, I created a coding example to show how to program for the ActivityPub API. ap is a command-line client, written in Python, for doing basic tasks with ActivityPub.
For example, you can log into a server using this command:
ap login yourname@yourserver.example
Once you’re logged in, you can follow someone:
ap follow other@different.example
Or, you could post some content:
ap create note --public "Hello, World"
This isn’t enough to have a real social networking experience, but I think it’s pretty useful for testing an ActivityPub API server, or automating some repetitive tasks.
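If you do want to script those repetitive tasks from Python, a thin wrapper that shells out to ap is one way to go. This is a minimal sketch under the assumption that the ap command from activitypub-cli is installed and on your PATH; the helper names here are illustrative, not part of the package.

```python
# Hypothetical wrapper for scripting the ap CLI; the function names are
# illustrative and not part of activitypub-cli itself.
import subprocess

def ap_argv(*args):
    """Assemble the argv for an ap invocation, e.g. ap_argv('follow', 'other@different.example')."""
    return ["ap", *args]

def run_ap(*args):
    """Run ap and return its stdout, raising if it exits with an error."""
    result = subprocess.run(ap_argv(*args), capture_output=True, text=True)
    if result.returncode != 0:
        # ap reports problems such as "No OAuth endpoints found" on failure
        raise RuntimeError(result.stderr.strip())
    return result.stdout

# Example (only works once you're logged in to a server that supports the API):
# run_ap("create", "note", "--public", "Hello, World")
```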
I should note quickly here that not all ActivityPub servers support the ActivityPub API. It’s an under-utilized part of the ActivityPub standard. In particular, Mastodon, Threads, Flipboard, and other services don’t support the API. There’s a pretty good list of servers and clients that do support the API in this Codeberg issue.
Suffice it to say, unless you’re actively working with one of those platforms, or you are writing your own, you’re not going to get much use out of ap. It will probably give you an error message like “No OAuth endpoints found” if it can’t use the service.
I’ve never packaged ap for distribution; it was always supposed to be example code. But given the recent interest in the ActivityPub API, including the work going on in the ActivityPub API task force, I decided to get it into shape for installation by developers working on other apps. My friend Matthias Pfefferle of Automattic asked me about it when we were at FOSDEM this year, and I was embarrassed to see how difficult it was for him to use.
So, I’ve made two big upgrades to the package. The first was actually making it a package, and distributing it! I upgraded the package management framework to uv, which seems like a good bet for now, and pushed the application to PyPI, the Python Package Index. It’s visible at https://pypi.org/project/activitypub-cli/ now. (Note: different package name from the command name! The PyPI “ap” package name was taken a while ago.)
You can now install the application in one shot, on any computer with Python installed, with this command:
pipx install activitypub-cli
You can test that the application installed correctly in your path by running the version command:
ap version
That should show the same version as is currently on the pypi.org page for the project.
The second change was implementing the current OAuth 2.0 profile best practices. I’ve upgraded the login flow so it tries a lot of different options for identifying itself to the server: CIMD, FEP d8c2, and Dynamic Client Registration. It tries to do them in preferential order; it uses permanent, global client identifiers before dynamic ones.
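The fallback logic described above can be sketched as a simple preferential chain: try each identification strategy in turn, permanent global identifiers first, and fail with the familiar error if none succeeds. This is an illustrative sketch, not the actual activitypub-cli code; the stub strategies and URLs are invented for the example.

```python
# Illustrative fallback chain: permanent, global client identifiers
# (CIMD, FEP d8c2) are tried before Dynamic Client Registration.
# The strategies here are stubs, not real discovery implementations.

def identify_client(server, strategies):
    """Return (strategy_name, client_id) from the first strategy that succeeds."""
    for name, strategy in strategies:
        client_id = strategy(server)
        if client_id is not None:
            return name, client_id
    # Mirrors the error message ap shows when a server lacks the API
    raise RuntimeError("No OAuth endpoints found")

strategies = [
    ("CIMD", lambda server: None),        # stub: server doesn't support it
    ("FEP d8c2", lambda server: None),    # stub: not supported either
    ("Dynamic Client Registration",
     lambda server: f"https://{server}/client/registered-id"),  # stub success
]

name, client_id = identify_client("yourserver.example", strategies)
# Falls through to the last, dynamic option.
```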
I’m especially interested in testing this command-line client against other servers. If you’re developing an ActivityPub API server, please install the ap command and try it out against your (development!) server. Report a bug if it doesn’t work well, or send me a DM at @evanprodromou if it works OK. Given time, I think ap can be a useful first smoke test for ActivityPub API implementations.
I’m going to be participating in the Growing the Open Social Web workshop at Fediforum on March 2, 2026. I’m excited to talk to other people who care about the Fediverse about ways to connect more people through ActivityPub.
Fediforum invited attendees to publish position papers before the workshop. SWF has a number of hypotheses about growth of the social web; I’ll try to summarise some of them here.
We’re looking forward to engaging with the Fediforum community on these and other topics. We’ll see you on March 2!
A brief note: the Social Web Foundation, Qlub and FediHost are presenting a day-long Fediverse conference in Montreal, Canada on February 24, 2026. FediMTL features speakers from across the Fediverse, including Cory Doctorow, Christine Lemmer-Webber, Julian Lam, and yours truly, Evan Prodromou. The theme of digital autonomy for Canada has never been more important. Tickets are on sale now for both in person and streaming attendance. I look forward to seeing you there!
Mallory is heading to India later this month to kick off something we’ve been building toward for a while: the first in a global series on AI and the social web.
Alongside the AI Impact Summit in Delhi, SWF is holding space for dialogue among a focused group to dig into what comes next as AI becomes agentic and social systems become protocol-based. Hosted by the Observer Research Foundation, our core organizational partners for the event include Project Liberty Institute, Public AI and Modal Foundation.
We’re bringing together people who build, govern, research, and challenge digital infrastructure to get to the heart of the real technical, economic, and governance tradeoffs across open social protocols: ActivityPub, AT Protocol, and DSNP.
AI systems are increasingly shaping how information is created, ranked, moderated, and governed online. At the same time, if open social protocols are to be viable alternatives to centralized platforms, we must anticipate risks and ensure they resist the same concentrations of power in the age of AI.
We’re curious about the emerging approaches to context, consent, and accountability when AI enters our feeds and slides into our DMs. In Delhi, we hope to surface shared priorities, hard constraints, and concrete next steps, and to lay the groundwork for ongoing protocol governance work that extends well beyond one protocol and this initial meeting.
Express your interest in joining us on 20 February at 11 am IST! https://luma.com/25chdoy8
This week I (Mallory) was at the Digital Competition Conference hosted by the Knight Georgetown Institute, representing SWF. The conversations there fostered the cooperation between technology, law, and regulation that is needed to rein in market abuse by gigantic digital infrastructures. What made this conference especially valuable was how consistently the discussion balanced all three.
I was particularly happy to see a strong focus on technical realities: not just what regulators want to do, but what is actually feasible and desirable given how systems are experienced by users, and how they are built, secured, and governed today. The agenda was strong and featured four technical talks that cut through the usual abstractions and tackled real trade-offs head-on.
My colleague Daji Landis at NYU presented Security vs. Interoperability, which helpfully disentangles when security concerns are legitimate and when they are deployed rhetorically to block competition or interoperability.
Economist Yifei Wang from University of Pittsburgh examined Competition and Privacy, probing where these objectives reinforce one another and where policy frameworks still treat them as falsely opposed.
And Thijmen van Gend from TU Delft along with co-author Seda Gurses closed with The PET Paradox, a critical look at how large platforms instrumentalize privacy-enhancing technologies in ways that can entrench, rather than loosen, market power.
You’d imagine a conference hosted in DC during this political climate would center the EU and US, but the discussion was genuinely international, including perspectives from authorities such as CADE in Brazil. Another standout was Gunn Jiravuttipong’s paper, The Global Race to Rein in Big Tech, which offered an unusually coherent overview of how competition authorities around the world are approaching digital markets.
A final moment that stuck with me came from Alexandre de Streel, who reframed the controversial “digital sovereignty” to “reducing digital dependency,” a more pragmatic and descriptive term. That framing is useful but I think it should be extended slightly. Too often, policy debates imply there is only one path to reducing dependency: building national or regional alternatives to dominant, foreign platforms. In practice, there are at least three distinct options:
All of this lands squarely in the terrain of the social web. Reducing digital dependency is not an abstract policy goal; it is a design challenge that shows up in protocols, defaults, governance, and who gets to participate on fair terms. Interoperability, credible exit, and institutional capacity are not just competition tools; they are preconditions for a pluralistic, resilient online public sphere. From this perspective, the social web is not a niche alternative to dominant platforms but a necessary counterweight.
Thanks to the Interledger Foundation for their generous Grant for the Web to the Social Web Foundation. With the help of ILF, we are launching a new program area focused on economic issues on the Social Web. In particular, we’ll be producing three reports: one on sustainability for social web instances; one on the Fediverse and the creator economy; and one on cooperatives on the Social Web. In addition, we’ll be engaging with multimedia Fediverse apps to adopt the Web Monetization standard. You can read more about the grant in our shared announcement: Interledger Foundation awards $200,000 to Social Web Foundation to support decentralized social media.
The European Union solicited feedback from stakeholders on Open Source software’s position with respect to digital sovereignty, security, and competitiveness. The Social Web Foundation worked with allied organizations like Newsmast Foundation, SABOA, FediVariety and Save Social to encourage the support and adoption of Open Source Fediverse technologies in Europe. The text of our letter follows.
We are a coalition of civil society organisations operating in Europe and globally. We are encouraging the adoption of Open Source social networking platforms that connect to the Fediverse.
Social networking is an especially vulnerable sector for digital sovereignty, security, and competitiveness. Worldwide, an increasing percentage of citizens get their news primarily or solely from social media. Centralised social platforms have been documented vectors for interference in the democratic process in Europe and elsewhere. Engagement-oriented algorithmic feeds have amplified inauthentic behaviour. From Brazil to Canada to Brussels, centralised social platforms have used their power in the market to push back on enforcement of local laws like the Digital Services Act and the Digital Markets Act.
We believe the Fediverse provides a strong check on these vulnerabilities. The Fediverse is a network of interoperable social media platforms connected with the open standard protocol ActivityPub. Services on the Fediverse include Mastodon, Flipboard, WordPress, Peertube, Write.as, Pixelfed, Meta Threads, Ghost.org, Mobilizon, and others.
The Fediverse offers users a choice of social media platforms without losing access to their friends, families, publishers, local communities or important thinkers. Because it’s possible to connect across platforms, different users can make different choices, and maintain their social ties, read others’ posts, and comment, like and share. Every day, millions of people share their life and career updates, engage in conversations, post silly memes, or stay in touch with loved ones on the Fediverse.
Because ActivityPub is an open standard protocol, published at the World Wide Web Consortium (W3C) in 2018, it is available to any company or developer to implement, without patent encumbrance or fees.
Fediverse software that is available under an Open Source license lets individuals, families, communities, businesses, universities, cities and other stakeholders in society establish social network servers of their own, which interact on an even playing field with commercial services. There are dozens of Open Source Fediverse social networking programs, such as made-in-Europe options like Mastodon, Peertube and Mobilizon, with thousands of installed servers across the continent and around the world.
The Fediverse’s bottom-up structure, connecting person by person and community by community, is resilient, and that resilience is one of its core strengths. Heterogeneous Open Source software lets software decisions occur much closer to the user than a single commercial platform allows. The choice the Fediverse provides enables a competitive market in social platforms resistant to vendor lock-in. Local and national governments, the EU, universities, public broadcasters and media companies in Europe have adopted Fediverse technologies.
The Fediverse is a solid foundation for European digital sovereignty in the social networking space. Europeans can use social platforms hosted and managed close to home, under the jurisdiction of their elected officials. And the Fediverse allows them to stay connected to people in Latin America, the US and Canada, Asia, Africa, and Oceania – but rooted in Europe, on European terms.
Our organisations strongly encourage the EU to consider the promise of the Fediverse for connecting European society with autonomy and independence, and the importance of Open Source software to the spread and adoption of the Fediverse. The EU can greatly aid digital sovereignty efforts by supporting the development and adoption of Open Source Fediverse technologies.
Next week is European Open Source Week in Brussels, culminating in FOSDEM 2026 on the weekend. There are several important ways to stay connected to the Fediverse while you’re visiting this week!
As always, watch the #FOSDEM and #socialwebfosdem and #FOSDEM2026 hashtags for news and updates.
If you’re not travelling to Brussels, watch for streaming video from room H.2215. There are also Fediverse events happening around the world throughout the year; Fediforum keeps a great list of the most prominent.
Today the W3C standards organization announced a new working group to advance the ActivityPub and Activity Streams standards. The Social Web Foundation, as a W3C member organization, will be participating in the group. The working group’s goal is to release a backwards-compatible iteration of each specification in Q3 of 2026.
Activity Streams was released in 2017, and ActivityPub was released in early 2018. Since that time, the experience of hundreds of implementers and millions of users has revealed places where the specifications are confusing, unclear, or missing features. Some problems have been documented with errata, but others require more work. The Next Version tag in the ActivityPub GitHub issue repository gives some good examples of topics to be considered. The new Social Web Working Group will revise these documents to make them easier for implementers to use.
ActivityPub is an actively used protocol with millions of users and billions of notes, images, video and audio files published. Standards work on ActivityPub will necessarily be evolutionary, not revolutionary, and will incorporate backwards compatibility. Developers can confidently keep working on ActivityPub today without worrying about breaking changes in the future.
The Social Web Working Group will work closely with the Social Web Community Group, the organization that has been stewarding ActivityPub and its extensions since 2018. The Community Group will remain the focal point for innovative developments extending ActivityPub into different areas like geosocial applications or threaded forums, while the Working Group will concentrate on the core documents.
One Community Group document that will be moving into the Working Group is LOLA, the live data portability spec that originated in the CG’s Data Portability Task Force. LOLA lets users move from one ActivityPub server to another while retaining all their social connections, their content, and their reactions. It’s a great improvement for data portability on the social web.
The Social Web Working Group will consist of representatives of W3C member organizations and invited experts from the standards and development community. The group will be chaired by Darius Kazemi, longtime contributor to the ActivityPub developer community. Meetings and proceedings will be public, and developers can review the work happening in the ActivityPub GitHub repository.
Thanks to everyone who’s done the work getting this charter to completion; especially Dmitri Zagidulin, the SocialCG chair who drove the charter editing and review process. Now, the work begins!