I (Evan) will be at the Wikimedia Hackathon 2026 in Milan, Italy this weekend (May 1-4). I’m especially interested in how we can connect Wikimedia projects and content to the Social Web using ActivityPub. I’ll be holding a session on the topic on Sunday May 3 at 9AM, but I’ll also be available for discussions throughout the weekend.

My hacking project plan is to make an ActivityPub object server for films. There are about 343,000 films in Wikidata, which compares pretty favourably with the 740,000 films in IMDB. There is a JSON-LD interface to Wikidata, but the types used don’t match up with ActivityPub types like Video. So, like places.pub, I’ll set up movies.pub to share an ActivityPub object for every Q-item for a movie, as well as a search endpoint to find films by name.
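To make the mapping concrete, here's a minimal sketch in Python. The Special:EntityData JSON interface is Wikidata's real entity endpoint, but the movies.pub ID scheme (`https://movies.pub/{qid}`) is my guess for illustration, not the announced design:

```python
import json
import urllib.request

WIKIDATA_JSON = "https://www.wikidata.org/wiki/Special:EntityData/{qid}.json"

def fetch_entity(qid: str) -> dict:
    """Fetch the raw Wikidata entity for a Q-item via the JSON interface."""
    with urllib.request.urlopen(WIKIDATA_JSON.format(qid=qid)) as resp:
        return json.load(resp)["entities"][qid]

def entity_to_video(qid: str, entity: dict) -> dict:
    """Map a Wikidata film entity to an ActivityPub Video object."""
    label = entity.get("labels", {}).get("en", {}).get("value", qid)
    desc = entity.get("descriptions", {}).get("en", {}).get("value", "")
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Video",
        "id": f"https://movies.pub/{qid}",  # hypothetical ID scheme
        "name": label,
        "summary": desc,
        "url": f"https://www.wikidata.org/wiki/{qid}",
    }
```

Separating the fetch from the mapping keeps the type translation (Wikidata item to ActivityStreams Video) testable without network access.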

If I get ahead of the project and I’m not too jet-lagged, I’d like to add an ActivityPub API app to “check in” to a movie that you’re watching (and maybe give a little review). Similar to checkin.swf.pub with places!

If you’re at the hackathon this weekend, please come say hi. I love talking about Wikimedia projects and the open social web.

Next week the human rights and tech community will convene once again, this time in Lusaka, Zambia. Every year, RightsCon brings together practitioners across civil society, industry, governments, and the technical community. This is my tenth RightsCon and the Social Web Foundation’s second. Like last year, our participation in RightsCon is part of a broader commitment to ensuring that the development of digital infrastructure remains closely connected to public-interest values and to the communities most affected by technical design decisions.

This year, I will be participating in three sessions that reflect different but closely related strands of this work.

The first session, “Human rights reviews in internet standardization – what is at stake?”, focuses on the processes through which technical standards are developed and the implications those processes have for rights protections. Standards bodies such as the World Wide Web Consortium, the Internet Engineering Task Force, and the IEEE play a central role in shaping the digital environment, yet their work often remains difficult to access and unevenly influenced. The discussion will consider how and when human rights and privacy considerations are incorporated into these processes, what is at stake when they are not, and what conditions are necessary to enable more meaningful and representative participation in standardization work. We’ll be in AG03 at 11:30 am on May 6.

A second session, “A little less talk and a lot more action: Mobilising for feminist tech industry standards” hosted by the United Nations Population Fund, turns to the question of “safety by design” in the context of technology standards. While this concept has gained increasing prominence as a corporate and policy framework, the session situates it within a broader set of concerns about whose experiences and priorities are reflected in how safety is defined. By grounding the discussion in feminist principles and human rights obligations, the session creates space to examine how current approaches may fall short, particularly for communities that are disproportionately affected by technology-related harms, including gender-based violence. Catch us in A101 at 10:15 am on May 7.

The third session, “From platforms to people: Reclaiming the internet through the Fediverse” hosted by the Electronic Frontier Foundation and featuring co-panelist Bruce Schneier, focuses on the Fediverse as an alternative model for social networking infrastructure. The session will explore how federated and interoperable systems can support a more open and rights-respecting online environment, and what challenges remain in translating these models into systems that can operate at scale. In this context, the Fediverse is understood not only as a set of technologies, but as an evolving ecosystem shaped by governance choices, standards development, and the practices of its participants. The session is in AG01 at 15:15 on May 7.

Taken together, these sessions reflect the range of spaces in which questions about rights, governance, and infrastructure must be addressed as the social web develops. They also underscore the importance of sustained engagement across technical and policy communities. For SWF, RightsCon provides an opportunity to situate our work within this broader landscape and to contribute to ongoing conversations about how the new social media can better reflect and uphold human rights.

I (Evan) will be giving a talk at Fediforum 26-04 next week, on April 28, 2026, on the exciting topic of faking your way through ActivityPub conversations. Here’s the description:

“One of the best bluffers in the field of distributed social networks gives you just enough knowledge about ActivityPub to sound smarter than everyone around you. In this talk, Evan will cover the essential architecture of ActivityPub, what works and what doesn’t, and what is coming up next for the standard. You’ll walk out of this talk with just enough knowledge to speak with confidence about anything at Fediforum.”

If you’ve ever wanted to know what ActivityPub is and how it works, please come along. I hope the event is fun and interesting. Bring questions!

Last week at the Internet Governance Forum (IGF) Expert Group Meeting we considered what changes to this 20-year-old UN initiative are required now that the UN General Assembly has made it permanent. This small, invitation-only gathering was tasked with considering the future of the IGF as a permanent UN mandate and how it achieves outcomes. Now that the IGF’s place at the UN is secure, we can stop trying to prove that multistakeholder dialogue matters and start showing what multistakeholder governance is capable of delivering.

This means civil society can find purchase for its work in the IGF itself, rather than treating the annual meeting as a venue for outreach and promotion of work that ultimately happens elsewhere.

The IGF is being actively redefined and the process is open to meaningful influence. I attended representing the Social Web Foundation, both a civil society organization and a key player in the technical community. My remarks were informed by other civil society organizations: the Association for Progressive Communications and its members.

Across discussions several core tensions and opportunities emerged. Rather than treating these as either/or choices, in almost all cases I view the IGF as able to balance both:

  1. Dialogue and influencing decisions. There is clear pressure for the IGF to move beyond being a convening space and toward something that can influence decision-making processes. This includes stronger alignment with global frameworks like the World Summit on the Information Society (WSIS) and the Global Digital Compact, and more intentional pathways for IGF outputs to inform policy fora. This is possible without losing the open, iterative, multistakeholder dialogue that makes the IGF valuable. The approach is to enhance and make more visible the IGF’s ongoing work between annual meetings. As in standards bodies, the authority to push forward points of view and outputs within intersessional work rests with those who participate in those processes, and with the fact that the output is part of a multistakeholder UN process.
  2. Institutionalize top-down and elevate bottom-up. The permanent mandate creates an opportunity to rethink governance, structures, and operations of the annual meeting. At the same time there is broad recognition that the IGF’s legitimacy comes from its bottom-up nature, particularly through national and regional initiatives (NRIs). Embedding those processes more directly into governance is essential to strengthening the utility of top-down institutionalization while putting resources and attention on the more valuable bottom-up and direct impact potential of NRIs.

For those of us working on the social web, open protocols, and public-interest infrastructure, this moment is a significant one that can help leverage the IGF toward outcomes, not just outreach.

The IGF has long been a space where principles of openness, interoperability, and decentralization are articulated. Last year SWF hosted an IGF session on decentralized social media. What is changing now is the hope, or the expectation, that concrete ideas grounded in these principles can translate into real outcomes both in policy processes and technical designs. To achieve this, two elements are needed: topic coherence and inclusion.

Concrete proposals already exist to strengthen topic coherence: organizing work into thematic clusters, streamlining and better coordinating ongoing work between annual meetings, and producing outputs that are targeted and usable. Cross-institutional examples show the potential impact of IGF intersessional work.

Strengthening intersessional work reflects a shift toward treating the IGF as an ongoing governance process, not just a yearly event.

Moreover, inclusion is addressed when participation is reframed not just as a value but as infrastructure. Unlike other internet governance institutions, such as global standards bodies, the UN provides funding for participation, language accessibility, and mechanisms for meaningful engagement from underrepresented groups and developing countries. Substantively, the IGF can attract more structured engagement with governments, while simultaneously advancing openness to non-state actors in settings that have traditionally been the exclusive domain of multilateral diplomacy.

What was clear in this meeting is that this outcome is not predetermined. It is being actively constructed and influenced by those participating in the process. The next phase will include rounds of consultations on many of the subjects under consideration by this small expert group. It’s important to think about how to leverage the IGF for tangible outcomes that bridge SDOs and other sites of influence over internet governance.

As part of the work on Fediverse sustainability announced earlier this year, the Social Web Foundation is running our first Fediverse Sustainability Survey. We’re seeking operators, moderators, and administrators of Fediverse sites, from the smallest to the largest, to fill out the anonymous survey and share information about how their instances work. If you help run an instance, please take the 10-15 minutes needed to fill out the survey. We need a lot of responses (hundreds!) to get statistically relevant data, so please feel free to share the link. We’re also on the lookout for operators of instances that are no longer running; there is a lot to learn about sustainability from servers that closed down, for whatever reason. We’ll publish findings here as part of our sustainability report, supplemented by interviews with selected respondents. Thanks!

Update: we hit a survey response limit in LimeSurvey — it’s been fixed. If you had a problem getting to the survey, it should be cleared now.

The call for proposals is open for the COSCUP Fediverse track in Taipei, Taiwan. ActivityPub-related software, including server and client implementations, makes a great topic for the event.

COSCUP (“Conference for Open Source Coders, Users, and Promoters”) is the FOSDEM of East Asia. Run by the Open Source community in Taiwan, it brings together people excited about FOSS across the region.

This year, for the first time, members of the Korean ActivityPub developer community FediDev KR are joining up with Japan’s FediLUG to program and run a Fediverse track at COSCUP. This has the potential to be a huge step forward for the Fediverse developer community. Although many major projects, like Fedify and Misskey, are created and promoted in East Asia, distance and language barriers make it hard for East Asian devs to participate in European and North American in-person events.

The Fediverse track is open to proposals about ActivityPub implementations, clients for ActivityPub platforms, ancillary services, libraries and toolkits. But also, as at FOSDEM, talks about the human aspects of Fediverse technology, like moderation, policy and governance, are welcome and encouraged. This event looks like it will cover as much interesting conceptual space as its twin at FOSDEM.

Hong Minhee, hongminhee@hollo.social, was one of the main speakers at FOSDEM’s Social Web devroom this year. Their talk about Fedify was important, but even more important was their effort to bridge the gap between Asia’s and Europe’s Fediverse development communities.

I (Evan) hope that COSCUP brings together many Asian developers, but I also hope that North American and European individuals and teams put in proposals as well. Knitting together these two important communities on the Fediverse requires effort from both sides. That’s why I’m applying to speak (about ActivityPub 1.1), and why I hope to see many familiar faces among the new ones in Taiwan.

With Modal Foundation, PublicAI and Project Liberty Institute, the Social Web Foundation wrote a paper on the expert workshop held at the AI Action Summit in New Delhi.

Open social protocols promise to deliver user agency. As adoption grows, users encounter new tradeoffs across technical, economic, and governance dimensions. These tradeoffs are unique to open, federated architectures such as DSNP, ActivityPub, and AT Protocol, in comparison to the existing walled-garden platforms of big tech companies. The top-down control of dominant social media platforms manifests in centralized data accumulation, opaque ranking systems, surveillance tech revenue models, and locked-down architectures. Widespread concerns around data protection and privacy are commonly associated with algorithmic feeds and large language models powered by big tech companies’ AI products.

As new social and AI infrastructures emerge, it is important to consider not only whether they are open, but how they are governed. Since protocol design is not neutral, choices about how protocols structure identity, control over data, moderation, and economic incentives will shape who holds power in these new systems.

We recommend research priorities, standards coordination and experimentation with governance and funding models. The paper serves as an explainer on AI and Social Web protocols and includes contemporary analysis. It can be downloaded and shared on the Project Liberty Institute website.

tags.pub is a new service under development by the Social Web Foundation. It is a global hashtag server — it lets you follow a hashtag across the Fediverse. There’s lots of information on the tags.pub home page, and I (Evan) did a talk about tags.pub at FOSDEM 2026. This blog post answers some basic questions about tags.pub.
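To make "follow a hashtag" concrete, here's a sketch of the Follow activity a client might send to a hashtag actor. The actor URL shape (`https://tags.pub/tag/<name>`) is my guess for illustration; check the tags.pub home page for the real scheme.

```python
def follow_hashtag(actor_id: str, tag: str) -> dict:
    """Build an ActivityPub Follow activity for a hashtag actor.

    actor_id is the follower's own actor URL; tag may include a
    leading '#'. The tags.pub URL pattern here is hypothetical.
    """
    name = tag.lstrip("#").lower()  # hashtags are case-insensitive
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Follow",
        "actor": actor_id,
        "object": f"https://tags.pub/tag/{name}",  # assumed actor URL
    }
```

In ActivityPub terms, the hashtag behaves like any other actor: you deliver the Follow to its inbox, and it sends matching posts back to your inbox.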

By Mallory Knodel

This week I spoke at the United Nations 70th Commission on the Status of Women in a session titled “Automating Justice: Can Artificial Intelligence Increase Women’s and Girls’ Access to Justice?” The recording is available on the UN’s WebTV. I also delivered a short intervention at an EU side event on Preventing and Combating All Forms of Cyber Violence Against Girls, my reflections on which you can read on GenderIT.

On the topic of AI, over the past several years I’ve written guidance for two main audiences: for courts navigating AI systems in judicial contexts, and for technology companies building the tools themselves. This talk was different. This was guidance for feminists, civil society leaders, technologists, and advocates working toward gender justice and equity, because AI governance is no longer a niche technical issue. It is shaping the systems that structure everyday life.

I came to the discussion as a technologist with nearly two decades of experience building internet infrastructure and now working on AI systems. Through my work with the Social Web Foundation, I also focus on emerging digital platforms and the infrastructure decisions that determine who benefits from technology and who bears its risks. I shared three reflections.

First, tech governance starts long before regulation.

AI governance doesn’t begin when a system is deployed; it begins in design. We can and should question the intentions of systems design, intervene in their development, and monitor deployments.

In research I’ve done for US courts, one salient example is predictive policing. These tools promise efficiency by analyzing historical crime data. But historical data reflects historical inequality. When systems optimize on the past, they risk reproducing its injustices. The intention was to make policing easier, which the technology might actually achieve, but it comes at the expense of justice itself. Another panelist from Equality Now said in very strong terms that bias deepens gender imbalance. 

AI systems don’t “generate” new, more equitable realities. The generation is a reproduction that mirrors existing patterns in data from the past. If that data encodes discrimination, the analysis and the generated outputs will scale it.

Bias is not a technical glitch at the margins. It can enter at every stage: design choices, training data, validation processes and deployment contexts. Governance by design means addressing these risks before systems negatively impact people’s lives.

Second, equitable tech democratization needs feminism.

We are living through a remarkable moment: extraordinarily powerful AI tools are widely accessible. They help people summarize information, draft documents, translate complex material, and navigate complex systems.

This creates real possibilities for expanding access to justice, but access alone is not empowerment. The room closed with comments from Estonia, a fully digitalized European state. Governments should hold their own use of AI systems to the highest possible standard. Estonia’s representative said digitalisation and AI are not about efficiency or economics alone; they are the point at which we must ask, “who will benefit and who is left out?” These are fundamentally questions of power, responsibility, and human rights, which feminism is well placed to answer.

We’ve seen this dynamic with social media platforms, which were celebrated as democratizing technologies that would ultimately lead to social change. Yet the imperfect implementations of this idea within the context of surveillance capitalism meant that data generated by billions of users ultimately fueled advertising systems optimized for profit, often at the expense of privacy and freedom of expression.

AI risks repeating this pattern. In my research on privacy-preserving AI, such as in encrypted environments, I examine how technical architecture can protect sensitive conversations and ensure people seeking help are not exposed to new forms of surveillance or harm.

If AI systems are to support survivors, legal aid seekers, and marginalized communities, they must include strong technical guardrails: privacy protections, meaningful data minimization, secure system design, limits on exploitative data extraction. Feminist principles can guide us towards the democratization of technology that does not exploit the marginalized.

Third, governance happens through standardization and setting global norms.

AI governance is often framed as a regulatory problem, but many of the most consequential decisions happen earlier, in technical standards, system architectures, procurement rules, and infrastructure design.

The global internet offers an important lesson. A shared set of open technical protocols allowed networks around the world to interconnect. That technical coordination also produced a unique governance structure: multistakeholderism, in which norms are debated and standards are evolved by a multitude of expert stakeholders.

However, AI lacks comparable interoperability. The largest AI foundation models rely on proprietary data ecosystems and are in competition. The AI business model rewards enclosure. As I mentioned earlier, AI was largely built upon enclosed, walled-garden social media platforms. As a result, technical interoperability, and the governance structures that accompany it, remain drastically underdeveloped.

Recent work on human rights in standards by the Office of the High Commissioner for Human Rights has emphasized the governance instruments that shape how rights are realized in practice: standards bodies, procurement frameworks, and regulatory regimes. They all must consider human rights from the start. Perhaps this is our opportunity for interoperability, and thus multistakeholder governance of AI.

The future of AI is not predetermined. These technologies will reflect the values embedded in their design and governance. Ensuring they expand justice for women and girls requires vigilance, participation, and collective responsibility.

As part of my book “ActivityPub: Programming for the Social Web”, I created a coding example to show how to program for the ActivityPub API. ap is a command-line client, written in Python, for doing basic tasks with ActivityPub.

For example, you can log into a server using this command:

ap login yourname@yourserver.example

Once you’re logged in, you can follow someone:

ap follow other@different.example

Or, you could post some content:

ap create note --public "Hello, World"

This isn’t enough to have a real social networking experience, but I think it’s pretty useful for testing an ActivityPub API server, or automating some repetitive tasks.
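Since ap is a Python program, one way to automate those repetitive tasks is to script it. This sketch wraps `ap follow` with the standard subprocess module; it assumes `ap` is on your PATH and you've already run `ap login`. The injectable `run` parameter is just there so the logic can be tested without a live server.

```python
import subprocess
from typing import Callable, List

def follow_all(handles: List[str], run: Callable = subprocess.run) -> List[str]:
    """Follow each handle with the ap CLI, returning the ones that failed.

    Assumes the `ap` command is installed (pipx install activitypub-cli)
    and that you are already logged in to your server.
    """
    failed = []
    for handle in handles:
        # each invocation shells out to: ap follow <handle>
        result = run(["ap", "follow", handle])
        if result.returncode != 0:
            failed.append(handle)
    return failed
```

A batch like this is exactly the kind of repetitive chore where a CLI beats clicking through a web UI.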

I should note quickly here that not all ActivityPub servers support the ActivityPub API. It’s an under-utilized part of the ActivityPub standard. In particular, Mastodon, Threads, Flipboard, and other services don’t support the API. There’s a pretty good list of servers and clients that do support the API in this Codeberg issue.

Suffice it to say, unless you’re actively working with one of those platforms, or you are writing your own, you’re not going to get much use out of ap. It will probably give you an error message like “No OAuth endpoints found” if it can’t use the service.

Refreshing the project

I’ve never packaged ap for distribution; it was always supposed to be example code. But given the recent interest in the ActivityPub API, including the work going on in the ActivityPub API task force, I decided to get it into shape for installation by developers working on other apps. My friend Matthias Pfefferle of Automattic asked me about it when we were at FOSDEM this year, and I was embarrassed to see how difficult it was for him to use.

So, I’ve made two big upgrades to the package. The first was actually making it a package, and distributing it! I upgraded the package management framework to uv, which seems like a good bet for now, and pushed the application to PyPI, the Python Package Index. It’s visible at https://pypi.org/project/activitypub-cli/ now. (Note: different package name from the command name! The PyPI “ap” package name was taken a while ago.)

You can now install the application in one shot, on a computer with Python and pipx installed, using this command:

pipx install activitypub-cli

You can test that the application installed correctly in your path by running the version command:

ap version

That should show the same version as is currently on the pypi.org page for the project.

The second change was implementing the current OAuth 2.0 profile best practices. I’ve upgraded the login flow so it tries several different options for identifying itself to the server: CIMD, FEP-d8c2, and Dynamic Client Registration. It tries them in order of preference, using permanent, global client identifiers before dynamic ones.
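That preference order can be sketched as a simple fallback chain. The strategy functions below are stand-ins, not ap's real internals; only the ordering idea (permanent identifiers like CIMD and FEP-d8c2 before Dynamic Client Registration) comes from the post.

```python
from typing import Callable, Iterable, Tuple

def pick_client_id(server: str,
                   strategies: Iterable[Tuple[str, Callable]]) -> Tuple[str, str]:
    """Try client-identification strategies in preference order.

    Each strategy is a (name, fn) pair; fn takes the server URL and
    returns a client ID string, or raises if the server doesn't
    support that method. Illustrative only -- not ap's actual code.
    """
    errors = []
    for name, attempt in strategies:
        try:
            return name, attempt(server)
        except Exception as exc:
            errors.append(f"{name}: {exc}")
    # mirrors the kind of error a client reports when nothing works
    raise RuntimeError("No OAuth endpoints found; tried " + "; ".join(errors))
```

The caller would pass the strategies in preference order, e.g. `[("CIMD", try_cimd), ("FEP-d8c2", try_fep_d8c2), ("DCR", try_dcr)]`, where each `try_*` function is a hypothetical probe for that method.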

Help me test

I’m especially interested in testing this command-line client against other servers. If you’re developing an ActivityPub API server, please install the ap command and try it out against your (development!) server. Report a bug if it doesn’t work well, or send me a DM at @evanprodromou if it works OK. Given time, I think ap can be a useful first smoke test for ActivityPub API implementations.