
Ben Werdmuller

@ben.werd.io.ap.brid.gy

Writing at the intersection of technology, democracy, and society 🌉 bridged from ⁂ https://werd.io/, follow @ap.brid.gy to interact

7 Followers · 0 Following · 117 Posts · Joined 06.08.2025

Latest posts by Ben Werdmuller @ben.werd.io.ap.brid.gy

Workers who love ‘synergizing paradigms’ might be bad at their jobs [Kate Blackwood in Cornell Chronicle]

The results of this study into corporate BS aren’t going to surprise anyone who’s spent much time in an office. The researchers generated meaningless corporate gobbledegook and tested how workers rated its business-savviness.

> “Workers who were more susceptible to corporate BS rated their supervisors as more charismatic and “visionary,” but also displayed lower scores on a portion of the study that tested analytic thinking, cognitive reflection and fluid intelligence. Those more receptive to corporate BS also scored significantly worse on a test of effective workplace decision-making.
>
> […] Essentially, the employees most excited and inspired by “visionary” corporate jargon may be the least equipped to make effective, practical business decisions for their companies.”

The Cornell report labels this as a paradox, I guess because these people disproportionately liked their supervisors but were also bad at their jobs. I don’t see that as a paradox at all: my bias is that people who think for themselves and are more distrustful of hierarchy are, to be honest, smarter.

I love this sentence:

> “Researching BS also points out the importance of critical thinking for everyone, inside the workplace and out.”

Well, yes. [Link]

Employees who are impressed with corporate jargon are less good at their jobs. News at 11.

06.03.2026 14:24 👍 1 🔁 0 💬 0 📌 0
Proton Mail Helped FBI Unmask Anonymous ‘Stop Cop City’ Protester [Joseph Cox at 404 Media]

Worth knowing if you think of Proton Mail as being a blanket security solution: in this case it was compelled to provide payment information for an account to the Swiss authorities, who then, via a Mutual Legal Assistance Treaty, handed it over to the FBI. As a result, the FBI were able to determine the identity of the account owner, an activist who does not appear to have been charged with a crime.

This is also a kind of weaselly statement:

> “Edward Shone, head of communications for Proton AG, the company behind Proton Mail, told 404 Media in an email: “We want to first clarify that Proton did not provide any information to the FBI, the information was obtained from the Swiss justice department via MLAT. Proton only provides the limited information that we have when issued with a legally binding order from Swiss authorities, which can only happen after all Swiss legal checks are passed. This is an important distinction because Proton operates exclusively under Swiss law.””

Functionally, though, the material was provided to the FBI.

Not every Proton Mail account is paid. But adding payment information can effectively deanonymize a user. Compare and contrast to, say, Mullvad, which allows payments to be made fully anonymously. [Link]

"A court record reviewed by 404 Media shows privacy-focused email provider Proton Mail handed over payment data related to a Stop Cop City email account to the Swiss government, which handed it to the FBI."

05.03.2026 22:44 👍 0 🔁 0 💬 1 📌 0
BBC says ‘irreversible’ trends mean it will not survive without major overhaul [Michael Savage in The Guardian]

I hold three potentially-conflicting opinions about the BBC at once:

* The license fee is a regressive tax that is punitive for lower-income people and needs to be overhauled
* While it’s supposed to be independent and representative, its news coverage has sometimes fallen short of this standard
* It is a treasure and must be protected at all costs

Every British household that watches live content is supposed to pay £169.50 (around $225) a year. That’s more than many streaming services — although you arguably get a lot more for your money, considering the plethora of local coverage, stations, and other programs that the BBC supports. The license fee doesn’t represent _all_ of the BBC’s income, but it accounts for most of it.

> “In its opening response to government talks over its future, the corporation said 94% of people in the UK continued to use the BBC each month, but fewer than 80% of households contributed to the license fee.”

Because more households are moving to on-demand instead of live — except, perhaps, for sports and some rare but high-profile events — license fee revenue has fallen. It’s interesting to think about what it would take to reform this funding structure to preserve public service broadcasting in the UK.

There’s also an elephant in the room, which is the intentional gutting of public service broadcasting here in the US. How could the British ecosystem be inoculated — or at least strengthened — against that kind of threat from a future government?

I’m not sure that turning it into a “Netflix for British TV” is the right answer. What might it look like to take a more open approach and turn the BBC into something that doesn’t copy any private company’s business model but is something truly new that meets public service media needs in the 21st century? Could it be more of an operating system that supports new experimentation and different kinds of media? How might it be more radically collaborative and representative in ways that private broadcasters aren’t able to achieve?

There’s a lot to talk about. [Link]

The BBC is dying. It needs to be preserved - but doing so will require a radical reinvention.

05.03.2026 14:23 👍 0 🔁 0 💬 0 📌 0
The Safety Levers [Corey Ford at Point C]

Another really good framework from Corey. Leading with vulnerability gives the people on your team permission to be vulnerable too.

> “When leaders frame work as execution, they imply the answer is already known. When they frame it as learning, they acknowledge uncertainty is part of the work.
>
> […] When leaders project certainty, dissent feels risky. When leaders acknowledge fallibility, speaking up becomes contribution, not challenge.”

Modeling uncertainty, learning, and humility allows everyone to be in growth mode vs approaching their work with a fixed mindset. But it has to be done with intention: uncertainty that doesn’t also come with norms around experimentation, feedback, and accountability just feels like instability.

I’m still growing here myself: in my world, everything is a prototype that can be challenged, experimented with, and iterated on. But providing the clear, structured lanes for people to experiment is crucial — and that intentional structure can be one of the first things to go when things get busy or fraught. Structures and norms only matter if they guide us through every situation and if they’re for everyone. [Link]

"A framework to move your subculture from the Anxiety Zone to the Learning Zone" - and provide a way for everyone on your team to contribute and experiment safely.

05.03.2026 14:01 👍 0 🔁 0 💬 0 📌 0
Can we build the dog?

“Will the dog hunt?”

My sneakers squeaked on the concrete floor. Twenty entrepreneurs in black hoodies looked up at me, taking notes. This room had been a working garage once; now, we ceremonially opened the garage door to let in new cohorts of early-stage media startups with the potential to change media for good. Outside, the San Francisco traffic honked and screeched.

We were midway through the bootcamp: the week-long course at the beginning of the accelerator that aimed to teach startups the fundamentals of human-centered venture design. We’d taken them out of their comfort zone to help them use journalistic skills to understand who they were building for and why. We’d helped them to think about how to effectively tell the story of their business in a way that helped them sharpen their underlying strategy. And now I was trying to explain _feasibility_.

I echoed Corey Ford, the Managing Director, who had laid out the groundwork in the days before. Repetition was our friend. _Desirability_, I explained, is your user risk: are you building something that meets a real person’s needs? Will the dog hunt? _Viability_, in turn, is your business risk: if you are successful, can your venture succeed as a profitable, growing business? We stretched the metaphor a little bit here: will the dog eat the dog food? And now it was time to explore _feasibility_: can you provide this service with the team, time, and resources reasonably at your disposal? I leaned in conspiratorially and vamped: _can we build the dog?_

It was the best job I ever had: using my experience as a founder, an engineer, and a storyteller to support teams that were genuinely trying to make a difference. People who went through Matter have gone on to help countless newsrooms succeed by being more empathetic and product-minded; some have left media and even gone on to build hospitals.
I went through Matter as the founder of Known in 2014 and came back to support other founders a few years later. Since then, how I think about feasibility has completely changed.

### The build vs the long, wagging tail

The center of gravity for feasibility, at least in my mind, used to be the build stage. How do you build the initial version of a tool or a service that provides a minimum desirable experience to meet your user’s need? Startups also need to consider how to make it scale, so that they can address a larger number of potential users with the time, team, and resources _potentially_ at their disposal. If you can’t get there with the resources you have, could you get there with investment dollars you could realistically raise?

In a newsroom or other organization the formula is a little bit different. You’re probably not raising money for a specific tool or a service — although, sometimes, grant funding or funding from a corporate parent _is_ available for certain things. But you’re most often asking whether you can provide the service or tool with the time, team, and resources _currently_ at your disposal. Often, your time is limited, your team is small, and your resources are meagre. You have to make stark tradeoffs.

In both contexts, you don’t want to waste time spinning your wheels building solutions to problems other people have already solved. I’ve already written about building vs buying for newsrooms:

> Newsroom tech teams are like startups in that they’re running with limited resources and constantly trying to assess how they can provide the most value. Back when I was Director of Investments at Matter Ventures, I advised them to spend their time building the things that made them special — their differentiating value — and using the most boring, accepted solution for everything else.
>
> It's a rule of thumb that works universally: build what makes you special and buy the rest.
But the critical difference is that, in newsrooms, what makes you special is the journalism that software enables, not the software itself.

The cost of building something new has fallen through the floor over the last decade. Developer tools have become more powerful, numerous, and freely available. Open source has exploded with libraries that can help you get to an initial version much faster.

Enter AI. Almost without warning, AI-enabled tools dramatically expanded what a resource-strapped team can create. It’s a genuine sea change. The more founders and senior engineers I speak to who are actively using these tools, the more stories I hear about accelerated development. People are building smaller tools that would have taken many sprints in less than a day; founders are building entire startups that might have taken six months in less than one.

But all code needs to be maintained. There are bugs; libraries need to be upgraded; underlying platform changes introduce security flaws and incompatibilities. Changes in business needs mean that tools and services need to be adjusted. All of those things add up to a maintenance overhead that comes with introducing any new tool or service. If we rapidly build more and more software, that maintenance overhead accumulates at speed. Even if we have the discipline to keep our technical footprint small, we’re not absolved from doing what has to be done to keep everything running.

When we consider feasibility, the center of gravity is no longer in building the thing. It’s _supporting_ it.

### A shared rubric reduces risk

The dynamics may have changed, but every team still needs to make a bet about whether it can build and support a project before taking it on. If something is obviously not feasible, the team shouldn’t do it. On the other hand, if a team doesn’t have a clear, shared understanding of how to assess feasibility, it can become an easy way for someone to subjectively shut down a project for arbitrary reasons.
Without a clearly shared understanding of risk, the _idea_ of risk can be poison.

So that’s what we need: a shared rubric for assessing the feasibility of a project. Our assessments won’t always be right, and we always learn new things about a project in the course of building or supporting it. But while complete certainty is hard to achieve, this will at least provide _directional_ information about whether we can do it.

There are existing frameworks, but they’re mostly designed for large enterprise environments: instead of giving you a directional gut check, they produce documents used in commissioning vendors, justifying budgets to executives, and satisfying governance processes. TELOS — Technical, Economic, Legal, Operational, and Scheduling — feasibility tests are very broad and don’t consider technology alone. PIECES examines whether a proposed project will improve the status quo across Performance, Information, Economics, Control, Efficiency, and Services. Both are useful to understand, but also not quite what most time-strapped contexts demand. We need something scrappier.

### A prototype rubric for feasibility

Here are some questions a team can ask themselves. Not only are they useful in themselves, but you can use them for alignment: if a product manager ranks a factor with a low score but a senior engineer on the team ranks it with a high one, you know there’s a problem that you need to dig into. Each of these questions can lead to its own targeted discussion.

The purpose of the rubric is not to be a thought-ending exercise: it’s to align a team around what’s actually important to consider, and open up conversations about any disagreements so that everyone can come to a consensus. Each person on the team should run through the rubric — perhaps asynchronously — and then share their results with the group in a shared meeting.
These questions take our AI engineering context into account: questions about exploring new architectures are weighted lower than they would have been ten years ago.

#### 1. The problem context (25 points)

**How much of this project's scope is a black box? (10 points)**

_1 = We have built this exact thing before; 10 = This is completely new territory; we don't even know what questions to ask yet._

**How much friction will our existing tech stack, legacy systems, or organizational quirks add to the build? (10 points)**

_1 = Greenfield project using our preferred stack; 10 = Navigating a maze of legacy spaghetti code or systems._

**How fundamentally difficult is the core problem we are trying to solve? (5 points)**

_1 = A standard CRUD app; 5 = Uncharted algorithmic research._

#### 2. Execution (15 points)

Here, the “development period” is tailored to your unit of project organization time on your product roadmap. On some teams, it’s a quarter; for others, it’s half a year. These questions are _not_ meant to be considered at the sprint level.

**How much of the team’s total capacity will the initial build consume? (10 points)**

_1 = We can spin this up in an afternoon; 10 = Consumes 100% of the engineering team's capacity for the development period._

**How well do the skills required match the people we currently have? (5 points)**

_1 = We have deep expertise here; 5 = Requires learning new frameworks from scratch._

#### 3. The long, wagging tail (60 points)

While most of these questions consider up-front issues, this section describes the _ongoing_ overhead for a team. This represents risk: time a team spends working on maintaining an existing tool is time it can’t spend building anything new or maintaining other tools. Over time, without careful lifecycle management and brutal decision-making, a team’s bandwidth can disappear into ongoing maintenance.

**How long are we committing to keep this system alive? (20 points)**

_1 = A disposable prototype or short-term event tool (weeks or months); 20 = A permanent, foundational system we expect to rely on for years._

**How much of our team’s capacity will keeping this system alive consume in a typical month? (30 points)**

_1 = Set it and forget it; 30 = Requires daily babysitting, constant bug fixes, and continuous adaptation to upstream changes._

**How heavily does this project rely on teams, platforms, APIs, or vendors that we do not control? (10 points)**

_1 = Fully self-contained; 10 = Dependent on unstable APIs, beta AI models, or restrictive third-party vendors._

#### 4. The blast radius (40 points)

Because modern tools let small teams build powerful things quickly, the risk of deploying something dangerous or irreversible is higher. This category is weighted heavily to catch those risks.

**How sensitive is the information that this tool handles? (20 points)**

_1 = Public, anonymous data; 20 = Handling highly sensitive PII, whistleblower documents, or financial data._

**How catastrophic is it if we ship an imperfect, glitchy version? (10 points)**

_1 = We can ship it broken and iterate safely; 10 = Mission-critical; if it’s not perfect on day one, we burn trust or face legal ruin._

**If we realize this was a mistake halfway through, how hard is it to undo? (10 points)**

_1 = A two-way door; we can easily turn it off; 10 = A one-way door; involves irreversible data migrations or permanent structural changes._

Once everyone has tallied their numbers, compare your total scores out of 140. This is nothing more than a temperature check: again, it should be considered a conversation-starter, not the final word. Roughly, here’s how the scores break down:

**0–45: Green light.** This project is highly feasible. The initial lift is manageable, the ongoing tax on your team is low, and the blast radius if things go wrong is minimal. Build the dog.

**46–95: Yellow light.** This is the danger zone of hidden costs.
You can probably build this, but the lifespan, ongoing maintenance, vendor dependencies, or security requirements will create a permanent drag on your team’s velocity. Before proceeding, ask yourselves: what existing project are we willing to sunset to make room for this new maintenance burden? What’s the opportunity cost, and is the lift to the organization worth it? While this rubric only considers feasibility, this is a good time to go back to desirability and make sure the juice is worth the squeeze.

**96–140: Red light.** This project is fundamentally infeasible with your current resources. The complexity is too high, the blast radius is too dangerous, or the multi-year maintenance load will simply sink your engineering team. If this project is absolutely vital to the business, you cannot build it scrappily: you need to buy an enterprise solution, hire specialists, or radically reduce the scope by choosing a much smaller problem to solve first.

Once again, this isn’t the be-all and end-all. For one thing, the rubric is not a simple average: pattern matters as much as the total. It’s worth checking for category dominance: if your scores are generally low but are much higher for the long, wagging tail, that doesn’t mean you have an unambiguous green light. These categories may also surface other conversations that aren’t cleanly captured by the rubric. But the first step towards shared understanding is building a structure to achieve understanding _about_ — and hopefully this gets you some of the way there.

And please: talk to people. For more complicated projects, I always think it’s a good idea to speak to experts in order to validate your assumptions about feasibility. For _any_ project, speaking to your users to make sure you’ve nailed desirability, and speaking to equivalent businesses to validate your viability assumptions, are crucial. The map is not the territory, and sometimes you need multiple maps.

### Can we build the dog?
The beauty of a shared rubric isn’t that it automatically makes decisions for you. It’s that it forces a team to look at the exact same map. If a product manager scores the project a 40, but a senior engineer scores it a 105, you’ve found an area of disagreement that you need to explore. It’s far better to do that early, before you dive into complicated specification work or writing code.

In a world where AI and modern tooling make it dangerously easy to spin up new software, our ultimate constraint is no longer our ability to type code. It’s our capacity to care for the things we bring into the world. Saying "no" to a project with a massive, hidden maintenance burden isn't a failure of imagination; it is how you protect your team’s time so they can focus on the journalism, the community, or the core mission that actually makes your organization special.

Today, building the dog is the easy part. The real question is whether you have the time, energy, and resources to feed it, walk it, and take it to the vet for the next five years. If you do, then by all means: let’s see if it hunts.
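If you want to operationalize the tallying step, the rubric's totals and traffic-light thresholds can be sketched in a few lines of code. This is a minimal illustration, not part of the rubric itself; the `dominant_categories` helper and its 75% cutoff are my own assumptions for the category-dominance check described above.

```python
# Sketch of the feasibility rubric: sum per-category scores
# (max 140) and map the total to a traffic light.
# Category names and maximums follow the rubric; the dominance
# threshold (75% of a category's maximum) is an illustrative choice.

CATEGORY_MAX = {
    "problem_context": 25,
    "execution": 15,
    "long_wagging_tail": 60,
    "blast_radius": 40,
}

def traffic_light(scores: dict) -> str:
    """Map a rubric total (0-140) to green/yellow/red."""
    total = sum(scores.values())
    if total <= 45:
        return "green"   # highly feasible: build the dog
    if total <= 95:
        return "yellow"  # danger zone of hidden costs
    return "red"         # infeasible with current resources

def dominant_categories(scores: dict, threshold: float = 0.75) -> list:
    """Flag categories scoring above `threshold` of their maximum,
    even when the overall total looks green."""
    return [name for name, score in scores.items()
            if score / CATEGORY_MAX[name] >= threshold]

scores = {"problem_context": 8, "execution": 4,
          "long_wagging_tail": 50, "blast_radius": 10}
print(traffic_light(scores))        # total 72 -> "yellow"
print(dominant_categories(scores))  # ["long_wagging_tail"]: 50/60 dominates
```

Each team member could run their own scores through something like this and then compare, which makes the PM-scores-40-engineer-scores-105 kind of disagreement immediately visible.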

What resource-constrained teams need to ask before writing a line of code

03.03.2026 13:46 👍 1 🔁 1 💬 0 📌 0
Good vibes, bad vendors

When I was thirteen or fourteen I had a really comfortable sweatshirt that I wore to school all the time — but it did have a few inherent problems. For one thing, it had a great big target on it, and wearing a literal target to high school was just asking for it. For another, on top of that, in Looney Tunes writing, was the confident phrase: “It’s a good vibe!” I was bullied as mercilessly as one might expect, but I honestly think it might have killed in the AI era. I’d like to think I was just ahead of my time.

Andrej Karpathy, an early OpenAI researcher who now works at his own startup, coined the phrase _vibe coding_ last year. To vibe code is to use an LLM like Claude or ChatGPT to generate source code instead of writing it yourself. He meant it as a way to loosely prototype code or to make progress on a weekend project. LLMs, at least at the time, could not be fully trusted to write well-written, working code. It was an out-there idea.

What a difference a year makes. Today, it’s a mainstream conversation that is rapidly reshaping technology strategy — and informing layoffs across industries. AI conversations are always fraught, for good reasons that include the underlying power dynamics and the bad behavior of most of the AI vendors. At the same time, the whole AI landscape is changing incredibly rapidly, and it’s become a cliché to point out that any discussion of what LLMs can and can’t do today will probably be invalid two or three months from now. And, of course, millions of words have been written about it at this point.

But even despite all that, I still think it’s worth talking about. If you’re running technology in a small, resource-constrained environment — like a newsroom or a non-profit — how should you think about AI-enhanced software engineering? Come to that, how should _I_? Let’s talk about it.

### First things first: does it work?

It didn’t, and then it did.
Six months ago, LLMs could generate a certain amount of code, but they would often make inefficient decisions or hallucinate libraries and API endpoints, and you’d need to babysit them a lot. Their use was mostly passive: they would generate code snippets based on immediate user prompts, and engineers would have to spend a bunch of time debugging the output. In terms of security, it was the Wild West: there were essentially no safeguards. LLMs are famously stochastic (their output is randomly determined, not deterministic) and prone to hallucinations. The result was unreliable code.

A lot has changed since then. In particular, the models released in February 2026 are a sea change in reliability: given the right prompt, they often genuinely can write decent code in one shot. Tools like Claude Code can go off, spawn multiple agents, investigate a problem, build a reasonable plan, and then execute on it, while working in a safely sandboxed environment.

It’s not just about improved models, although they obviously have a central part to play. An ecosystem is developing around doing AI-assisted software engineering well. Plugins like Jesse Vincent’s Superpowers encourage good decision-making based on principles of excellent software architecture design and product management. Structured frameworks like spec-driven development similarly help lead the agent to sensible outcomes; both are incorporated in all-in-one coding lifecycle toolkits like Metaswarm. A rigorous process is preserved throughout, and there are far more safety guardrails to prevent security incidents (although they’re easy to overcome, and sometimes not on by default); using AI to generate code is much safer than it was.

Claude Code absolutely can write the code, build a plan, and document its work. I have been an AI skeptic, but in my experiments I’ve found that it really can feel like magic.
You can reasonably object to AI for any number of reasons, but this is no longer one of them. It works.

The thing to understand is that this is a tool for engineers — and senior engineers will get the best results. It takes real engineering skill to craft a prompt that will do the right thing and result in a strong architecture. The process changes the center of gravity from writing source code in a programming language to crafting goals, understanding your user, being crystal clear about the experience and the value you want to convey, and thinking about architectural implications. That probably means talking to people, forming a hypothesis about what they need, testing it with them, and considering the ongoing technical implications of the work.

Those are things that senior engineers already spend much of their time doing — indeed, I’d argue that it’s what separates a great senior engineer from a mid-level one. The core question a senior engineer navigates well comes down to: lots of people _can_ write code, but _should_ they? Why, and for whom? Those questions only become more important in a world where AI is writing the source code. When implementation is faster, problem selection and scoping become the scarce skills.

Friction is training: we learn how to engineer software through our terrible experiences. When things go wrong, we learn. When we have to refactor, we learn. When we talk to our peers about our work, we learn. AI removes most of this friction and hides the complexity away from us: it obscures failure, compresses the process of debugging, and automates refactoring. When these hard-earned skills are the reason we can make good software engineering decisions with AI, but the AI doesn’t offer newcomers the ability to build those skills, who will train the AI once we are gone?

Opinions on that change in center of gravity will be intensely divided. I stand by this New Year’s Day thought about Claude Code:

> It has the potential to transform all of tech.
I also think we’re going to see a real split in the tech industry (and everywhere code is written) between people who are outcome-driven and are excited to get to the part where they can test their work with users faster, and people who are process-driven and get their meaning from the engineering itself and are upset about having that taken away. I’m very much an outcome-driven developer, and to me it’s a giant relief. Not everyone will feel the same way.

Resource-constrained environments _must_ be outcome-driven. They can’t spend their time on the process of software engineering; the best way for them to move forward is to start small, release a valuable core that solves a problem for some set of users as early as possible, and then continually iterate around it, using user feedback as a guide. There’s no alternative to having empathetic, human-centered senior engineers on your team — with or without AI. But AI engineering tools may have an interesting side effect: I can see a world where pushing these product and spec questions to the forefront helps more engineers build those skills more quickly. The first step, after all, is understanding that those answers are needed to begin with.

It’s worth saying that there will be many managers who hope that tools like Claude Code will mean they can do away with engineers or dramatically cut their workforces. Of course there will. They may even see engineers as gatekeepers, and there may be resentment that they’re needed at all — and a hope that this work can be done directly by managers or other key employees. In a newsroom, for example, can’t the _journalists_ produce tools now?

For non-engineers, these tools can be useful for prototyping: for a product manager, for example, they may help assess a user interface or experiment with an idea. But those prototypes are not enduring software; nor are they projects that can be “handed off” to engineers to support. To properly architect a system, there’s a lot you need to consider.
This includes performance, scalability, and the ongoing overhead of maintaining a project and keeping it safe: nobody wants to rely on software that proves to be slow, insecure, or impossible to update. You also need to assess the technical implications of a project: are there technical standards that the project should be adhering to, or battle-tested best practices that the design should take into consideration? For all these reasons, an engineer must be involved from the beginning. These tools can’t replace technical staff, and they shouldn’t. Like I said, these tools are _for_ engineers, not a replacement for them.

### Okay, but what about those power dynamics?

Consider an individual, indie developer. Over the last few decades, they’ve become more and more empowered: developer tools have become cheaper and more of them are open source. Power and control have been devolved to the individual; you can run the tools you want on your own hardware, configure or recode them to your needs, use them for free, and share any of your changes. Engineering has become more and more of an open collective built on radical collaboration. That allows developers with fewer resources to build more easily, widening the pool of people who can build startups, create useful tools, and learn these skills to begin with.

AI-assisted engineering centralizes power back in the other direction. Claude Code, Codex, and so on are all centralized, proprietary tools that become harder to move away from the longer they’re relied upon. They’re also expensive: while open source tools are decentralized and free, it’s incredibly easy to spend large amounts on Claude. Based on my own experimentation and anecdotes from friends and peer companies, any engineer that relies on Claude Code as part of their daily work is likely to spend hundreds of dollars a week; these are new costs that didn’t previously exist.

Those extra costs could theoretically be offset by significant performance or efficiency gains.
The thing is, those gains aren’t as strong as you might expect given the apparent magic of automatically generated code. A study recently published in Harvard Business Review indicated that adding AI actually intensified the workload, putting engineers at risk of burning out:

> The changes brought about by enthusiastic AI adoption can be unsustainable, causing problems down the line. Once the excitement of experimenting fades, workers can find that their workload has quietly grown and feel stretched from juggling everything that’s suddenly on their plate. That workload creep can in turn lead to cognitive fatigue, burnout, and weakened decision-making. The productivity surge enjoyed at the beginning can give way to lower quality work, turnover, and other problems.

These can be mitigated by good work hygiene: enforcing breaks and sensible work hours. But the employers who are most enthusiastic about introducing AI may also be the ones that are least enthusiastic about benefits that center employee well-being over productivity.

I’ve already mentioned that some managers may hope that AI can reduce their investment in software engineers. One can easily imagine that the presence of AI — or, rather, the threat of being replaced by it — could be used as a cudgel to depress engineer salaries. It gives managers more leverage beyond money, too: those longer hours and more intense workloads that the HBR study found could burn engineers out might be more likely in a world where engineers fear for their jobs.

The long-term implications are even starker. Consider a world where the recentralization of power from individuals to large, centralized companies continues at the current pace. When AI writes most source code, fewer and fewer engineers will be capable of doing this work themselves, which will lead to even more dependence and lock-in.
It’s been noted in the past that while generative AI robs artists of the interesting work and leaves them with the mundane bits, for outcome-oriented engineers it does the opposite: it takes away the mundane bits and leaves them with the interesting parts. I’d argue that the real value is in the intersection between coding and the higher-level work; they’re inseparable. By improving the way we code, we improve the way we can solve problems for real people. (How can you solve a problem if you don’t really understand how the solution works?) By improving the way we think about solving problems, we improve the way we code. (How can you code something well if you don’t know who it’s for or why it needs to exist?) They aren’t two separate processes; they’re parts of the same thing. Removing one makes the other less effective.

Without concerted effort, an entire industry will be de-skilled and de-valued, its human expertise replaced with software that charges by the token.

### So let’s put in the effort

AI isn’t going away, and AI-assisted software engineering is a permanent addition to the way we build software. But that’s not the same thing as saying that the way we use AI _today_ won’t change. Any policy for AI-assisted engineering has to take into account risks of various kinds. I’d loosely separate them into the following categories:

* **Employee risk:** preventing burnout, staff turnover, and poor morale.
* **Security risk:** preventing data leaks and security incidents that compromise customers, sources, employees, or other members of the community.
* **Quality risk:** preventing low-quality code that impacts the efficiency, experience, or perceived quality of the organization’s work.
* **Supplier risk:** reducing the impact of potentially harmful choices made by AI vendors.

While I’m not going to go into a full framework here — that’s part of what I do at my day job — let’s talk about how we might think about addressing them together.
#### Employee risk

In that Harvard Business Review report about AI-driven burnout in engineers, the authors suggested some sensible mitigations. These included creating, as team norms, structured time for quiet reflection on the project at hand and limits on interruptions; intentional processes for limiting the work that can move forward, to prevent engineers from taking on (or being asked to take on) too many tasks just because they think they can; and more space for empathetic human connection as a team.

Those are all things that every team should do, whether or not they use AI! But they become even more important on an AI-accelerated team. If you don’t have any norms about tightly controlling when work moves forward, for example, adding a tool that accelerates the work will result in a higher volume of work getting processed, but not necessarily any strategic selection of the most _important_ work to do.

Perhaps most importantly, engineers are worried that they’ll be replaced at the hands of managers who may not understand what they do. They need the emotional safety and security that comes from knowing that they won’t. It needs to be communicated to them that the importance of their skills is understood. They are experts in their fields, and they’ve just gained another tool to help them; they are not interchangeable with the tool.

#### Security and quality risk

It turns out that you go a long way towards addressing a lot of security, quality, and efficiency issues — as well as some of the morale issues that lead to employee risk — by placing engineers at the center of the process. Some AI processes talk about a “human in the loop”. That term was borrowed from more traditional machine learning; in the case of anything where AI takes an action in the world on behalf of a user, like engineering, I’d prefer to reframe the AI as a tool that is always directly under human control.
In that light, all code must have a human owner who will take responsibility for it. It’s _their_ code, just as if they’d written it in an integrated development environment; they just happened to use a different tool. If all generated code must ultimately be owned and reviewed by a human, that person is able to tune the results for safety, efficiency, and quality.

Most well-run engineering teams have a peer review process where code written by an engineer must be officially reviewed by a second engineer before it can be merged into the main codebase. If we assume that generated code is owned by Engineer A, that means there must be a human Engineer B to give it a second pair of eyes. They might also use automated tools to help their review along, but they’re the ones who ultimately take responsibility for it.

Peer review alone isn’t enough, though. All projects need comprehensive automated testing: tests that must run on code that is about to be merged into the main codebase in order to make sure everything still functions. Tests for efficiency, adherence to style guidelines, and security issues can be run here too. What’s kind of fun is that when these are in place, tools like Claude Code will look at the test output, make corrections when something doesn’t pass, and try again — all automatically.

#### Supplier risk

The centralization that removes power and agency from engineers also introduces a serious business risk. If a core part of an organization’s value comes from software development, inexorably placing a centralized service in the middle of your process makes you heavily dependent on its decision-making. A vendor can increase its prices, make changes to its stack, or change the way it thinks about keeping your data and source code safe, and there’s very little you can do about it.

The good news is that, right now, no AI vendor can lock you into its services, because your source code itself and your infrastructure stack are independent of your AI tools.
Your code is managed, stored, and hosted in different places, and you can think of source code itself as a kind of open protocol: because it’s plain text, you can use virtually any tool with it. Source code still has the devolved, open, decentralized properties of the open source ecosystem that has put power in engineers’ hands for decades. That provides at least some protection against an AI vendor suddenly increasing its prices or changing its privacy stance: you can always vote with your feet.

If you’re uncomfortable using one of the major model providers, open source alternatives are available. Tools like Aider and Cline can provide agentic coding using any model, including local models that could theoretically be run on an organization’s own infrastructure. In practice, this requires more powerful hardware than most smaller organizations can afford; that may become less of an issue over time, as new hardware emerges, but it certainly is one now. Still, local models could help prevent lock-in — and may prevent some security issues, too.

This inherent openness could change as AI vendors look for ways to increase their revenue and reduce churn. We may see AI-specific alternatives to git and GitHub; I can even imagine programming languages that are “optimized for AI” but that just happen to be proprietary and locked into a vendor. Every company that builds software should watch for these forms of lock-in and reject them.

We should also be wary of marketing that tells us to just let the AI write code autonomously. These are ideas that cement vendors as a full replacement for the software development process, moving a center of expertise that was previously owned by an organization into a centralized technology owned by someone else. It’s a trap: that world is one where the source code can’t be moved between agents and your products are fully locked into vendors’ services without a credible exit.
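One practical way to keep that exit credible is to treat the model provider as configuration rather than code: many agentic tools and local model servers speak an OpenAI-compatible API, so switching vendors can be a settings change rather than a rewrite. Here’s a minimal sketch of that idea in Python; the endpoint URLs and model names are illustrative assumptions, not real deployments:

```python
# A sketch of supplier-risk hedging: keep the model provider in swappable
# configuration so that moving from a hosted vendor to a local,
# OpenAI-compatible server is a settings change, not a rewrite.
# All endpoints and model names below are hypothetical examples.

PROVIDERS = {
    "hosted": {
        "base_url": "https://api.example-vendor.com/v1",  # hypothetical vendor
        "model": "big-proprietary-model",
    },
    "local": {
        "base_url": "http://localhost:11434/v1",  # e.g. a local inference server
        "model": "llama3",
    },
}

def provider_config(name: str) -> dict:
    """Return the endpoint settings for a named provider, failing loudly on typos."""
    if name not in PROVIDERS:
        raise KeyError(f"unknown provider {name!r}; expected one of {sorted(PROVIDERS)}")
    return PROVIDERS[name]
```

If a vendor raises its prices or changes its privacy stance, the exit is a one-line change to a table like this — which is exactly the leverage that more proprietary, vendor-specific tooling would take away.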
### Do we want to invite these companies into our workplaces?

A _ton_ has been written on the issues surrounding AI. Last summer, I wrote a broader guide to navigating AI that I think still holds up. In it, I noted:

> A lot of money has been spent to encourage businesses to adopt AI — which means deeply embed services provided by these vendors into their processes. The intention is to make their services integral as quickly as possible. That’s why there’s heavy sponsorship at conferences for various industries, programs to sponsor adoption, and so on. Managers and board members see all this messaging and start asking, “what are we doing with AI?” specifically because this FOMO message has reached them.

My approach to evaluating AI remains through two main lenses: the technology itself and the vendors who make it. The further reading section of that earlier piece is a good place to start.

The thing I didn’t mention then, but that is worth calling out now, is the sheer precarity of these vendors. AI vendors are offering their services below cost and have struggled to articulate value in a way that could credibly lead to profitability. Apparently feeling that gap, OpenAI is experimenting with ads and porn, while finding itself under scrutiny for putting teen wellbeing at risk through choices it made to boost engagement. Anthropic was sued by Reddit last year for scraping Reddit data for training without authorization, and had to settle a high-profile lawsuit brought by book authors whose work it stole for training data. I’ve mentioned Claude Code a bunch in this piece, because it works really well, but it was trained using stolen work. Meanwhile, from a technical standpoint, some research suggests that there are diminishing returns to new LLM development and that we’re already past the peak. There’s no guarantee these companies will make it.
If an organization has invested in agentic coding processes that don’t substantially keep humans in the loop, and the vendors that power them disappear, it will be left in a bind: no in-house expertise, and company strategies that depend on AI. That makes it a dangerous gamble. We will have lost internal skill while increasing our dependence on very fragile external suppliers.

### So how should you think about it?

AI coding works. It shifts the center of gravity from implementation to judgment, which increases the value of senior engineering skills. It also introduces significant power, labor, and supplier risks. That means that solid guardrails and cultural norms are non-optional.

Even if you haven’t rolled it out yet, your engineers are almost certainly using it. In conversations with my peers, I’ve heard countless stories of organizations that banned it, only to discover that their workforces had taken matters into their own hands. While there are many engineers who refuse to touch it, many more are eager to have it. You could ban it, but it’d likely be fruitless: those engineers who use their own accounts will probably keep doing so. It’s better to have the tools in a place that’s under your control and observable than used in the shadows in a way that might put your data at risk.

Given that, it’s better to roll it out than not to. But you need to do it with your eyes wide open and with a sense of intentionality. Be aware of the risks, and mitigate them in advance with common-sense cultural norms like the ones I discussed earlier: pay attention to your employee and supplier risks in particular. Don’t let AI push to production without oversight. And keep humans not just in the loop but fully in control.
I don’t think it’s productive to _mandate_ the use of AI-assisted engineering, which runs the risk of alienating some engineers — the split between AI skeptics and those who are excited about the technology is real — and preventing nuanced discussions about how the technology can be used inside your workplace. What happens in practice when you just let it roll out to anyone who wants to try it is that people _do_ try it; they find that it’s useful for some tasks, and then quickly find its limitations. That’s a healthy exploration.

How should _I_ think about it? I’m still figuring it out. Jesse Vincent compares pushing through an initial hatred of “agentic” development — and discovering that code was never the most important part of building software — to becoming a manager and asking your team to build things instead of coding them yourself. I agree that these experiences rhyme. But of course, when you lead a team, you’re investing in human beings, working alongside them, and helping them to grow in the process. That’s exponentially more rewarding than leading software agents built to provide value for a megacorporation.

But it doesn’t need to be that way. You can do both. If you treat the technology as a tool, albeit one made by genuinely problematic companies, you can roll it out to a real, human team and continue to build things together. You can invest in and support them while you navigate new kinds of software problems; together, you can figure out how to shape the culture of an engineering team that is undergoing a paradigm shift. You can train the next generation of software engineers, keeping the long history of software development in mind while taking these new skills into account. And you can look for the next thing that properly devolves power down the stack to the individual, for the benefit of everyone.

Software development is still human.
You can work together towards a shared mission, pick and choose the pieces of this new technology that make sense according to your strategy and values, and build community in the process. _That’s_ a good vibe.

AI coding works now. Here's how to think about it.

25.02.2026 10:00 👍 0 🔁 0 💬 0 📌 0
Notable links: February 20, 2026

_Most Fridays, I share a handful of pieces that caught my eye at the intersection of technology, media, and society._

_This week: the trouble with innovation and revenue in news, and some of the societal forces (political manipulation, exploitation) that make figuring those things out vital._

_Did I miss something important? Send me an email to let me know._

* * *

### Journalism lost its culture of sharing

I agree, strongly, with this piece about (re)building an open source culture in news by Scott Klein and Ben Welsh. But then, I would: I spent over a decade working to build open source communities, and then another decade and change working alongside and then inside newsrooms. So it’s to my chagrin that the newsroom where I currently serve as Senior Director of Technology is one of the places listed here where open source contributions have significantly dropped off:

> “At ProPublica, teams published detailed white papers alongside major investigations, explaining their quantitative methodologies with scientific rigor, allowing other researchers to verify and learn from their work. Major news organizations ran active blogs where they shared techniques and lessons learned. Conference presentations at NICAR and elsewhere became venues for passing along hard-won knowledge.”

The effect of this work didn’t just lift the work of journalism; it attracted new people to it:

> “This culture made newsrooms more attractive places to work for civic-minded technologists. If you had programming skills and wanted to use them to make a difference, journalism offered you the chance to build things that mattered and share them with the world.”

I think there’s a lot to be gained by collaborating on an open source basis. We typically run small, resource-constrained teams where building new software is contextually hard.
And we have problems that, if they’re not identical, at least significantly overlap; by _not_ collaborating on them, we further an ecosystem where low-resource organizations all solve the same sorts of problems, in parallel, with very few people and very little money.

I was present at the News Product Alliance Summit session described in this piece, and I think the analysis of both the causes of this decline and some of the solutions is spot on. I was particularly enamored of the idea of an Open Source Editor (or director — does everything in news need to be an editor?) and public recognition for great open technical work in the field of journalism.

It’s also worth saying that open source, done well, is about much more than just releasing your code. A good open source project is a community, not a package. So there’s a lot of ecosystem development and community management involved to foster the kind of real collaboration that is required for this to succeed — even after newsrooms have overcome the institutional hurdles to releasing their work in the first place.

I’m really grateful that Scott and Ben have been championing this cause. I’m right there with them, and I’ll do what I can to help. It’s a concrete way we can build a more successful, efficient news ecosystem with stronger technology capabilities, and that’s something we should all want.

* * *

### Stop calling optimization "innovation."

I appreciate this distillation of the twin needs of optimizing the Engine — getting as much value as you can out of your existing business model — and the Explorer, which is all about actual innovation that seeks out _new_ products, markets, and models:

> “If your staff meetings are all about how to hit next month’s KPIs, you don’t have an Explorer. You have a very well-oiled engine. True resilience means insulating your Explorer team from the Engine.
> It means giving a team room to spend 6 months on a project that could totally flop without punishing them if it does.”

I think this is clearly true. At the same time, I think it’s very optimistic about where many organizations actually are: they very often don’t have those goals or KPIs to hit. The result is a kind of vibes-based strategy. Because nothing is measured, or the right things aren’t measured, it’s impossible to run an informed experiment. In those organizations, what feels like innovation is just getting to baseline competence. Before they can optimize, they need to define a concrete strategy, with attendant metrics they can measure as the basis for running experiments.

Buying a neat new product can be a way to absolve the team from doing the hard work of strategy-building: “look,” they can tell their boards, “we’re innovative!” Creating a concrete strategy and deploying technology that can help serve it are vital. But they, in themselves, aren’t innovation: creating a real culture of innovative experimentation, where you can try new things and fail fast, is how you de-risk your business for the future. That means understanding your readers incredibly well, so you can anchor your experiments around their needs; it means giving your team permission to fail; it means creating cross-functional teams who can be radically collaborative and draw conclusions from their experiments quickly; and it means being clear-eyed about where your business actually stands.

* * *

### The political effects of X’s feed algorithm

Users who moved from a reverse-chronological social media feed to X’s algorithm:

> “[…] were 4.7 percentage points more likely to prioritize policy issues considered important by Republicans, such as inflation, immigration and crime.
> They were also 5.5 percentage points more likely to believe that the investigations into Trump are unacceptable, describing them as contrary to the rule of law, undermining democracy, an attempt to stop the campaign and an attack on people like themselves.”

Even more surprisingly, once the algorithm was switched _off_, their views did not change again. The effect of the algorithm lingered, in part because it led users to follow more conservative influencers.

We intuitively knew that the algorithm mattered, but this is a key finding that puts numbers to it. If that number seems small to you, consider that 4.7 percentage points is more than enough to swing an election. It’s also interesting that the findings for other algorithms were different; if this result holds up, it suggests that X’s algorithm may be particularly predisposed to political manipulation, even above Facebook’s and Instagram’s.

This should be a wakeup call for politically engaged funders and anyone who cares about civil society. It’s not that we need less conservative algorithms; it’s that whoever controls the algorithms has a disproportionate say over the electorate’s view of the world. We need more funding for open protocols that decentralize algorithmic ownership; open platforms that give users a choice of algorithm and platform provider; and algorithmic transparency across our information ecosystem.

* * *

### Palantir vs. the "Republik": US analytics firm takes magazine to court

A series of articles by Switzerland’s _Republik_ magazine highlighted Palantir’s rejection by Swiss authorities as a potential security risk: they appear to have determined that there weren’t sufficient protections against Swiss data falling into American hands. This reporting, in turn, led other governments to question their use of the firm for the same reason. Now Palantir is taking the magazine to court to force it to publish a “counterstatement” that would correct the record.
Of course, this has brought more international attention to _Republik_’s stories than they would otherwise have received:

> “With the step to court, Palantir has generated more attention for the "Republik" reporting than the objected articles themselves could have caused – 23 years after Barbra Streisand triggered the effect named after her. And yet, there are reasons why Palantir is acting this way.”

A Swiss counterstatement doesn’t actually hinge on the correctness of the original statement: it’s apparently sufficient for another version of events to be possible. So this is more a way for Palantir to get its own PR line out than a suit against _Republik_ for inaccurate reporting. That’s important because Palantir is trying to make headway into European markets and finding it tougher than it would like. Understandably, there’s a lot of resistance to a firm that provides surveillance powers to the likes of ICE, and whose CEO has spent the last few years justifying “anti-woke” strategies that bolster an increasingly authoritarian regime.

* * *

### In Graphic Detail: Subscriptions are rising at big news publishers – even as traffic shrinks

This is exactly why micropayments — a model akin to Spotify’s streaming payments, where each pageview receives a share of a reader’s monthly budget for all articles — are not the right solution for news:

> “For a bunch, including The New York Times and The Wall Street Journal, growth isn’t just continuing, it’s speeding up, and likewise so is The Guardian’s paid reader contribution model. Meanwhile, Bloomberg’s subscription business shows signs of normalization after a 2024 spike, and Daily Mail is still ramping up its relatively new subscription business, which launched in 2024 in the U.K. and expanded to the U.S. and Canada in February 2025.”

In news, value is not necessarily tethered to popular traffic.
There’s a specific demographic (typically older, wealthier, and more highly educated – see the next link) that is more likely to pay for it, and there’s a lot to be gained by news organizations that optimize for winning that audience. The news organizations that have doubled down on paywalls, and things like them, are often doing better than the ones that haven’t. That can be a tough pill to swallow for the folks — like me — who believe that news should be available to all for the good of democracy.

Of course, other models are available: specifically, non-profit newsrooms that operate on a philanthropic model. As with other public goods like Wikipedia and the Internet Archive, it turns out that a specific set of wealthier individuals and foundations are willing to pay to ensure that a resource can be made available to everyone. Unlike paywalls, though, that tends to put newsrooms at the mercy of large foundations and high-net-worth individuals. Non-profit newsrooms have done a good job of trying to prevent funding from coming with strings that might affect their decision-making (The 19th’s endowment campaign is particularly inspiring), but it inevitably still happens. Paywalls force the issue by ensuring every reader pays, distributing the load: they democratize funding even while restricting access. On the other hand, that makes the newsroom more subject to market forces.

But none of this is about traffic. If you tether your payment model to the number of public pageviews you receive, you incentivize your newsroom to create clickbait. You’re ensuring that you have to compete for views on every single article, instead of building a direct relationship with a recurring member who is buying your product because they think it’s worth it overall.

* * *

### Most Americans don’t pay for news and don’t think they need to

Only 8% of participants in a new Pew survey say that individual Americans have a responsibility to pay for news.
Some of the quotes here made me pause:

> “I don’t pay to go to church, to get a spiritual message, you know? And if you’re true, and your mission is to relay facts that are fundamentally important for people’s well-being, do I need to pay you for that?”

It’s hard to know how to even begin to answer that: the comparison chafes for me, but it amounts to putting both church and news into a “public good” bucket. That people see news that way is probably good. Providing it for free is hard, but you can see how they got there. A newspaper is a physical object that you can imagine handing over dollars for; digital news feels like it’s in the ether. It perhaps points to a philanthropic model as the best fit: depending on wealthy donors and foundations to give everyone free access makes some sense.

This also puts paid (so to speak) to micropayments solutions, which I’m generally skeptical of anyway. If nobody sees the need to pay for news, convincing them to fund a wallet feels like an uphill battle.

Meanwhile, the people most likely to pay directly for news are older, wealthier, liberal Democrats. Again, not a surprise, but useful to have it laid out like this; many newsrooms I’ve spoken to are trying to figure out how to move away from a base of older, wealthier, left-leaning people, and, well, it’s not just them. Maybe it’s worth leaning into that demographic for funding while concentrating on finding a broader audience for the news itself.

* * *

### Everyone is stealing TV

It makes sense that people don’t want to be limited by regional geoblocks to get their content – but I don’t think these devices should be trusted:

> “It’s called the SuperBox, and it’s being demoed by Jason, who also has homemade banana bread, okra, and canned goods for sale. “People are sick and tired of giving Dish Network $200 a month for trash service,” Jason says.
> His pitch to rural would-be cord-cutters: Buy a SuperBox for $300 to $400 instead, and you’ll never have to shell out money for cable or streaming subscriptions again.”

From a user perspective, I see the appeal: I certainly have subscription fatigue. Beyond that, geoblocks are intensely irritating to me; I’d give anything to be able to watch the UK’s _Channel 4 News_, or the _Doctor Who_ spinoff _The War Between the Land and the Sea_, both of which are unavailable to me unless I want to dive into VPNs and break terms of service. A box that gives me what I want to watch, no questions asked, seems too good to be true.

It’s not fully clear who is manufacturing these devices, what’s on them, or who runs the services that allow people to access all this television without paying for it. We already know that some streaming boxes have been fronts for residential botnets used for illicit activities that run the gamut from avoiding scraper detection to real organized crime. If I wanted to run malware inside the networks of thousands of homes and businesses, this wouldn’t be a bad way to go about it.

Which is a shame, because the allure is real. I’d pay for all that unavailable television. Just, please, let me.

* * *

### Hiring in an era of fake candidates, real scams and AI slop

Andrew Losowsky discusses the impact of AI on his hiring process:

> “Within 12 hours of posting the role, we received more than 400 applications. At first, most of these candidates seemed to be genuine. However, as the person who had to read them all, I quickly saw some red flags, which were all clear indicators of inauthenticity.”

This jibes with what I’ve seen lately too. I’ve had the privilege of hiring for a few technical roles over the last year, and every single time, _almost_ everything Andrew mentions has come up. The good news, as he points out, is that right now there are some really strong tells.
One of the most important parts of any application process I run is the “why are you excited about this job?” question, which is really a question about mission fit. The AI-generated answers are extremely generic, lean heavily on the job description itself, and start looking very samey across a sample of hundreds.

Here’s something I _don’t_ believe I’ve encountered before:

> “Someone made a fake email address similar to ours, then sent generic technical “tests” containing our logo to jobseekers, while linking to our job ad. Completing these tests led to a fake contract signed by someone claiming to be our CEO – it was at this point that the scammers requested financial information, saying they needed it to issue payments.”

The thing is, without someone telling me about it, how would I know? This is where we need stronger tools – the anti-spam protections of yore don’t work very well against AI-powered scams. Centralized repositories of scammers and stronger anti-spam filters _may_ work, but I suspect we’re going to need to find other approaches. Impersonating someone to make some quick money is one thing (and bad enough), but when you consider that for both Andrew and me we’re talking about impersonating newsrooms, this could get very bad very quickly.

Why aren't newsrooms sharing and innovating? And more.

20.02.2026 10:00 👍 0 🔁 0 💬 0 📌 0
Stop calling optimization "innovation." [Yoni Greenbaum]

I really appreciate this distillation of the twin needs of optimizing the Engine — getting as much value as you can out of your existing business model — and the Explorer, which is all about actual innovation that seeks out _new_ products, markets, and models:

> “If your staff meetings are all about how to hit next month’s KPIs, you don’t have an Explorer. You have a very well-oiled engine. True resilience means insulating your Explorer team from the Engine. It means giving a team room to spend 6 months on a project that could totally flop without punishing them if it does.”

I think this is clearly true. At the same time, I think it’s very optimistic about where many organizations actually are: they very often don’t have those goals or KPIs to hit. The result is a kind of vibes-based strategy. Because nothing is measured, or the right things aren’t measured, it’s impossible to run an informed experiment. In those organizations, what feels like innovation is just getting to baseline competence. Before they can optimize, they need to define a concrete strategy, with attendant metrics they can measure as the basis for running experiments.

Buying a neat new product can be a way to absolve the team from doing the hard work of strategy-building: “look,” they can tell their boards, “we’re innovative!” Creating a concrete strategy and deploying technology that can help serve it are vital. But they, in themselves, aren’t innovation: creating a real culture of innovative experimentation, where you can try new things and fail fast, is how you de-risk your business for the future.
That means understanding your readers incredibly well, so you can anchor your experiments around their needs; it means giving your team permission to fail; it means creating cross-functional teams who can be radically collaborative and draw conclusions from their experiments quickly; and it means being clear-eyed about where your business actually stands. [Link]

"The problem is, if you’re optimizing a product that fundamentally isn’t working for how people get news in 2026, all you’re really doing is riding that buggy off of a cliff with style."

19.02.2026 14:53 👍 0 🔁 0 💬 0 📌 0
The political effects of X’s feed algorithm [Germain Gauthier, Roland Hodler, Philine Widmer and Ekaterina Zhuravskaya in Nature] This is a very significant finding. Users who moved from a reverse-chronological feed to X’s algorithm: > “[…] were 4.7 percentage points more likely to prioritize policy issues considered important by Republicans, such as inflation, immigration and crime. They were also 5.5 percentage points more likely to believe that the investigations into Trump are unacceptable, describing them as contrary to the rule of law, undermining democracy, an attempt to stop the campaign and an attack on people like themselves.” And even more surprisingly, once the algorithm was switched _off_ , their views did not revert. The effect of the algorithm lingered, in part because it led users to follow more conservative influencers. We intuitively knew that the algorithm mattered, but this is a key finding that puts numbers to it. If that number seems small to you, consider that 4.7 percentage points is more than enough to swing an election. It’s also interesting that findings for other algorithms were different; if this result holds up, it suggests that X’s algorithm may be particularly predisposed to political manipulation, even above Facebook and Instagram. This should be a wake-up call for politically-engaged funders and anyone who cares about civil society. It’s not that we need to have less conservative algorithms; it’s that whoever controls the algorithms has a disproportionate say over the electorate’s view of the world. We need more funding for open protocols that decentralize algorithmic ownership; open platforms that give users a choice of algorithm and platform provider; and algorithmic transparency across our information ecosystem. [Link]

"Feed algorithms are widely suspected to influence political attitudes." This study shows that some do - to significant effect.

18.02.2026 20:08 👍 1 🔁 0 💬 0 📌 0
In Graphic Detail: Subscriptions are rising at big news publishers – even as traffic shrinks [Sara Guaglione at Digiday] This is exactly why micropayments — a model akin to Spotify’s streaming payments where each pageview receives a share from a reader’s monthly budget for all articles — are not the right solution for news. > “For a bunch, including The New York Times and The Wall Street Journal, growth isn’t just continuing, it’s speeding up, and likewise so is The Guardian’s paid reader contribution model. Meanwhile, Bloomberg’s subscription business shows signs of normalization after a 2024 spike, and Daily Mail is still ramping up its relatively new subscription business, which launched in 2024 in the U.K. and expanded to the U.S. and Canada in February 2025.” In news, value is not necessarily tethered to popular traffic. There’s a specific demographic (typically older, wealthier, and more highly educated) that is more likely to pay for it, and there’s a lot to be gained by news organizations that optimize for winning that audience. The news organizations that have doubled down on paywalls, and things like them, are often doing better than the ones that haven’t. That can be a tough pill to swallow for the folks — like me — who believe that news should be available to all for the good of democracy. Of course, other models are available: specifically, non-profit newsrooms that operate with a philanthropic model. As with other public goods like Wikipedia and the Internet Archive, it turns out that a specific set of wealthier individuals and foundations are willing to pay to ensure that a resource can be made available for everyone. Unlike paywalls, though, that tends to put newsrooms at the mercy of large foundations and high net worth individuals.
Non-profit newsrooms have done a good job of trying to prevent funding coming with strings that might affect their decision-making (The 19th’s endowment campaign is particularly inspiring), but funder influence inevitably still creeps in. Paywalls force the issue by ensuring every reader pays, distributing the load: they democratize funding even while restricting access. On the other hand, that makes the newsroom more subject to market forces. But none of this is about traffic. If you tether your payment model to the number of public pageviews you receive, you incentivize your newsroom to create clickbait. You’re ensuring that you have to compete for views for every single article, instead of building a direct relationship with a recurring member who is buying your product because they think it’s worth it overall. [Link]

In a world where traffic is decreasing, publishers are moving more heavily into subscriptions - with very good results.

18.02.2026 14:38 👍 1 🔁 0 💬 0 📌 0
An increasingly dangerous world I’ve got a pretty bleak, albeit reductive, theory of global politics that I’m working from right now. The key driver is climate change. We’re living in a world that will have fewer livable places and fewer resources. This will happen quickly. Rather than co-operate to slow climate change and distribute resources intelligently to preserve life and ecosystems, there are a set of powerful people who see this as an opportunity to consolidate their power and influence. Around those people are a set of other, relatively powerful people, who are either on board with consolidation or can be manipulated into supporting it. Consolidation means acquiring land and resources. It also means manipulating people into believing that only some humans are worthy of having access to them. The others can be sacrificed or put to work. Hence, we get more war (land and resource acquisition), more nationalism / fascism (dehumanization of everyone but a defined in-group), and less democracy (disenfranchisement for all but a few groups). The people who are most resistant to consolidation and manipulation are the young people who will have to live through the fallout from it. They are more likely to protest and organize for an inclusive, co-operative world. The people who are most liable to go along with it are the people who always were on the side of dehumanization, those who want to take the opportunity to preserve a better life for themselves at the expense of others, and people who are not paying attention. They are not necessarily equally morally culpable, but they are participants nonetheless. Those of us who are in opposition need to support the young people. We need to give them platforms, put our full support behind them, and more than anything else, listen to them, take their lead, and do what the activists and leaders among them ask us to do. It’s not theoretical and it’s not purely in the land of the ideological. 
People will die at the hands of fascism, war, and climate change itself. Peace is worth struggling for. An inclusive world is worth putting ourselves on the line for. We need to be watchful for the power dynamics that seek to strip agency, power, resources, and importance from out-groups and consolidate them into a tiny few. It will become genuinely life or death. Thank you for your attention to this matter.

My underlying model for everything that's happening

17.02.2026 15:16 👍 0 🔁 0 💬 0 📌 0
Preview
Building trust in the open Earlier this month I had the privilege of being the MC for the second Protocols for Publishers event, which took place at Newspeak House, an independent college for political technology at the top of London’s Brick Lane. London itself has been under attack in some circles. Ruby on Rails creator David Heinemeier Hansson wrote a screed last year, which I won’t link to, in which he talked about the city falling to what he called “demographic replacement” — a statement so blatant that it barely even qualifies as a racist dog whistle. There have been many similar allegations, mostly from the American right wing, who I suspect hate that London has a Muslim mayor. It was my first time back in eight years. What I found was a beautiful, well-run city, utterly vibrant in its diversity and notably inclusive in its demeanor. The food, once the butt of jokes the world over, was superb, having gained immensely from its internationalism. Compared to what I’ve become used to, the city was spectacular. It was surreal, in a way, to be in a place that was the backdrop to so much of my younger life; more surreal still that it has been so maligned while I’ve been away, but in reality has become _even better_. Newspeak House is a venue tucked into the middle of all of it: a residential, full-time college and event hall all in one, all designed to be hyper-connected to the society evolving around it. This is relevant, I promise. I’ll come back to it. ### Journalism and the open web need each other Here’s how I framed the conversation on the first day. My intention was to pull no punches and make it clear why open social protocols matter in the real world: * * * Running technology in a newsroom in this era has been fascinating. Journalism, when it’s done well, is how we become informed citizens, and how, in turn, we make informed democratic decisions. 
That’s always important, but I’ve just flown in from a country that is very quickly descending into authoritarianism. Children are being kidnapped. People are being killed. I would argue that journalism is even more important than it has been in a very long time. Journalism is also under threat. Certainly, at least where I live, from the government, which has decimated funding for public media, among other things. But search engine and social media referrals are down, trust has long been declining, and newsrooms’ relationships with their followers have been intermediated. Lots of newsrooms have found refuge in email newsletters, which they see as the last place they can directly own their relationships with their communities — but, guess what, AI-intermediated inboxes are coming. It sure seems bleak. But. But! All is not lost. Around the world, there are communities of mission-driven technologists who want to make sure publishers and community owners can own their relationships with their members and participants, and create their own community cultures rather than accepting some corporation’s version of how people should interact. They have been working for years, sometimes for decades, to build an alternative to the prevailing Big Tech view of the world. Those communities eventually evolved into what we call the open social web: a fabric of protocols that provide all the functionality of social media, and more, but with the open ownership of the web. The fediverse (projects like Mastodon) and the atmosphere (Bluesky to its friends) are movements at heart, powered by open protocols. The indieweb community encourages everyone to own their own publishing surfaces. They’re like the web itself: no-one owns them, and no-one can co-opt them. They have the potential to allow publishers to build first-party relationships with their communities without intermediation, entirely on their own terms. 
Publishers tend to look inwards to other publishers for their answers; people in tech tend to look to other technologists. But there’s so much to be gained by talking to each other. By working with each other to build the platforms we all wish we had. Publishers could really use the sovereign properties of protocols. And the people building protocols really want to understand how they can be useful to publishers. Which brings us to this room. * * * ### The first step is to talk to each other One of the key points I make at every news event I speak at is that the journalism industry tends to treat technology as something that happens _to it_ , like an asteroid. But it doesn’t need to. It can build its own technology, and it can work in collaboration and co-ownership with existing open source communities. That collaboration, in particular, is what I hope will arise from the relationships built at the event. We hosted four conversations on the opening night. I moderated a Q&A with Siddhartha Kurapati from the Bristol Cable and Saskia Welch from the Newsmast Foundation. The Cable is a reader-owned local newspaper that has partnered with Newsmast to build an app that includes a full, Mastodon-powered community for paid members. It’s a very slick, consumer-grade experience, as you’d hope; a well-run community layer in a local news app provides a gathering point for people to discuss local issues, meet up, and share resources that matter to them. What’s particularly cool is that because it’s fediverse-compatible, the community can expand to include members from other local newsrooms and beyond, who can participate from the apps that matter for them. Over time, that creates a social substrate of engaged members of cultural institutions from around Bristol — something that’s hard to get from existing social media. 
We then heard from Ændra Rininsland, who gave what she called “a brief history of algorithmic fuckery” — a _précis_ of every time centralized social platforms have pulled the rug out from under the communities they were theoretically supporting. With that frame in mind, she demoed the work she’s done to create algorithmic feeds for News in the Bluesky ecosystem that highlight publishers and help ensure that real journalism makes its way to social readers. Her point was that news can create its own algorithms and technology, too, rather than simply being subject to other people’s. Again: journalism can _build_. Jeremiah Lee from the Interledger Foundation delivered a provocation about micropayments. Noting that putting journalism behind a paywall allows authoritarianism to thrive (because most people don’t have access to the facts that would help them make good democratic decisions), he pointed out that Spotify-like streaming payments turned the fortunes of the music industry around. His argument was that it could do the same for journalism: something the Interledger Foundation is excited to experiment with. Finally, Nick Bennett from Mozilla Data Collective talked about AI scraping. Most publishers have had to deal with the issue at this point, making policy about whether to allow it or not, sometimes suing the AI vendors and sometimes making deals. But it turns out that all of this data only amounts to around 1% of the data that _could_ be ingested. Mozilla wants to help AI vendors get access to some of the rest while doing it on the data owners’ terms: creating what it sees as more equitable ways for models to acquire training data. Most of the protocol builders _and_ publishers came in with a fairly anti-AI stance, so this was a brave provocation in itself. The second day was all about conversations. 
Laurens Hof from Connected Places gave an overview of the protocol landscape, discussing recent developments in the context of the wider technology ecosystem, including AI and societal changes. This set us up to have some useful connected discussions. Small groups of technologists listened to the newsrooms describe their challenges across Discovery & AI; Monetization & Sustainability; and Audience & Community. Finally, we turned the tables and opened the floor for protocol builders to talk about how their work might help to address those challenges and how they might work together. The point was not to get to a conclusion. You can’t solve these problems in a day, but you _can_ build relationships and create the conditions for collaborations where none existed before. * * * ### Community is the antidote to an AI-driven web In her writeup of the event, Mia Biberović, Editor-in-Chief of Balkan tech outlets Netokracija and ShiftMag, noted that smaller publishers have more to lose — and therefore more to gain from open protocols: > All publishers operate within the same distribution environment - search, social media, and increasingly AI-mediated discovery. However, the effects of that environment are not the same, nor are the risks distributed evenly, particularly once the ability to maintain a direct relationship with audiences begins to erode. […] For smaller publishers, this often also means losing what little direct relationship with their audience they still have. […] Open protocols currently offer more tangible value to smaller publishers than to larger ones. Larger publishers tend to be older and wealthier: they’ve had more time to build a relationship with their communities and they typically have more money to reinforce it with marketing. In my local community, the Philadelphia Inquirer often runs wide marketing campaigns; the Kensington Voice, which is a much-needed startup newsroom that covers an underserved part of Philadelphia, cannot. 
There is a difference between news and journalism. The first is information, and the second is context. News has become heavily commoditized: recently, the Pew Research Center found that only 8% of Americans feel the need to pay for it. Journalism, on the other hand, is not. That’s where relationships become important, because we all care where we get our context from. We want to get it from people we think won’t manipulate or mislead us, who have accumulated enough social proof for us to trust them. Ideally, they should be people we know, but sometimes those are parasocial relationships. Titles like _The New Yorker_ and people like Seymour Hersh and Ronan Farrow have had time, space, and money to build that social proof; newcomers have not. AI can deliver commoditized news, but it can’t build the trust relationships that make journalism valuable. The way newcomers can fight their lack of social proof is by building tight communities that nurture and support their audiences. You might not trust a newspaper you haven’t encountered before, but you _might_ trust the humans behind it if you can be in conversation with them. By using open protocols to build those communities, these conversations act as a discovery engine: * You can discover a conversation about your local area (or topic that’s important to you) on your existing social platforms. * It’s easy to follow the conversation thread to discover the publication that hosts them. * You can then make the jump and experience those conversations through the publication’s site or apps, signing up in the process. * When you do, you gain access to more features and greater depth, growing your relationship with the publication itself. It isn’t _all_ about these conversations. It matters that the open social web doesn’t suppress links to journalism, unlike every commercial social network. But this access to the humans behind a newsroom, and the other humans in its community, is what builds trust. 
People are wired to trust other people, not brands. I have a relationship with London. When I walk Brick Lane, I remember being in my twenties, getting a salt beef beigel at the Beigel Bake or drinking a Red Stripe in a cafe with my friends. Those human connections are what keeps it alive for me; it can be maligned by some right-wing talking head thousands of miles away, but nobody can take away those experiences, or the connections I have with people who still live there and can report first-hand on what _they_ experience. It’s a city that’s predisposed to make me smile — and it did that in abundance. The social proof outweighs any argument a racist screed could make. Stronger communities could be built on traditional social media platforms, too. But those have proven themselves to be unreliable partners: remember that referrals are down, links are being suppressed, and trusting a tech company means letting tech happen to you, like an asteroid. Open protocols are permissionless; building on them means building real ownership, as long as the protocols are built with publishers in mind. If only the publishers and protocol builders could talk to each other. * * * ### Thank you I was very grateful to Chad Kohalyk for inviting me to MC, and to everyone who attended. I’m also indebted to the sponsors that made it possible, which included: * Newsmast Foundation * Mozilla Foundation * Bluesky * AT Protocol Community Fund * Interledger Foundation Not to mention the incredible Newspeak House, which I wish I could live in full-time. More Protocols for Publishers events are on the horizon. If you want to stay in the loop, go sign up for the Protocols for Publishers newsletter, subscribe via RSS, or follow them on Bluesky or Mastodon.

How Protocols for Publishers points to the future of journalism – and the web

17.02.2026 10:00 👍 0 🔁 0 💬 0 📌 0
Growing the open social web _This short position statement was prepared for_ _the FediForum Growing the Open Social Web Un-Workshop_ _, which is held online on March 2, 2026._ The question is: _why_ do we want to grow the open social web, and for _whom_? Presumably we don't want to become just another in a long line of gatekeepers. If the open social web is an alternative social network — Twitter but without the corporate structure — then why will it succeed this time when we have multiple decades of failed attempts behind us? Is it because Elon Musk owns Twitter now and American technology companies are increasingly allied with Trump? That seems like a failure of imagination: not only does it define the open social web in terms of what it’s not — it’s _not_ Twitter, _not_ Trump, _not_ American big tech — there will come a time when neither Musk nor Trump are as dominant as they are today and those oppositions matter far less. We’ve seen a lot of projects along the lines of “Twitter but European”, “Twitter but Canadian”, and so on, but these are very brittle provocations. While they do address risks posed by an America-centric internet, they don’t at all speak to why this is actually valuable to real communities, real people, real movements. They take the same power dynamics and transplant them across borders. Nothing fundamentally changes except the nationality of those involved. That’s not without value, but so much more is possible. The real value is in the protocols themselves — but only if we can share ownership of them. There are billions of people who are not well served by the existing social web, particularly in global majority countries. Open social web protocols have the potential to allow them to not just build communities that better address their needs, with features and cultural assumptions that veer far from US and European norms, but to _own_ them. 
These aren’t communities that need to be spoken down to or harvested by American non-profits; they haven’t been spoken to at all, except as communities to strip-mine by the likes of Meta. They need to be first-party participants in the communities that are building the open social web. Palestinians should be building ActivityPub. People of African nations should not just be running PDS servers but defining the protocols that rule them. The Kalaallit in Greenland are dominated by Facebook, their online communications templated to American norms even as America seeks to acquire their land. We don’t just need Mastodon; we need thousands of Mastodons, open source projects built in their own ways by their own communities, supporting a plurality of cultures, assumptions, and norms. There are plenty of communities building technology in all corners of the globe. They are the future of the social web. Are our standards bodies, communities, documentation, libraries, events, and conversations accessible to them? They need to be. Do the assumptions built into our protocols support their cultural needs? They must. The only way to do that is to not just co-design with them but distribute equity. They must be co-owners of the open social web. That also means there’s a role for funders. Undeniably, the US has been the center because that’s where the money is. Building a more global open social web creates a world where communications, memes, and the flow of news and information are not dominated by the interests of one country. The most important place to start is the people who have been served the least well. Grant-makers need to understand this and need to provide funding to communities in the global majority, who in turn need to build their own networks and make their own decisions. We should take a position of being of service to them, and be honored if we can help.

A position statement for FediForum's unworkshop

16.02.2026 02:56 👍 0 🔁 1 💬 0 📌 0
Palantir vs. the "Republik": US analytics firm takes magazine to court [Falk Steiner in Heise Online] A series of articles by Switzerland’s _Republik_ magazine resulted in Palantir being rejected by Swiss authorities as a potential security risk: the authorities appear to have determined that there weren’t sufficient protections against Swiss data falling into American hands, which in turn led other governments to question use of the firm for the same reason. Now Palantir is taking the magazine to court to force it to make a “counterstatement” that would correct the record. Of course, this has brought more international attention to _Republik_ ’s stories than they would otherwise have received: > “With the step to court, Palantir has generated more attention for the "Republik" reporting than the objected articles themselves could have caused – 23 years after Barbra Streisand triggered the effect named after her. And yet, there are reasons why Palantir is acting this way.” A Swiss counterstatement doesn’t actually hinge on the correctness of the original statement: it’s apparently sufficient for another version of events to be possible. So this is more a way for Palantir to get its own PR line out than a serious claim that _Republik_ reported inaccurately. That’s important because Palantir is trying to make headway into European markets and finding it tougher than it would like. Understandably, there’s a lot of resistance to the firm that provides surveillance powers to the likes of ICE, and whose CEO has justified “anti-woke” strategies that bolster an increasingly authoritarian regime over the last few years. [Link]

Switzerland rejected Palantir on security grounds after independent journalism shed light on fundamental issues. Now the company is suing to tell its side of the story.

15.02.2026 15:38 👍 0 🔁 1 💬 0 📌 0
Most Americans don’t pay for news and don’t think they need to [Hanaa' Tameez at NiemanLab] Disquieting findings for the news industry, although not really a surprise: only 8% of participants in a new Pew survey say that individual Americans have a responsibility to pay for news. Some of the quotes here made me pause: > “I don’t pay to go to church, to get a spiritual message, you know? And if you’re true, and your mission is to relay facts that are fundamentally important for people’s well-being, do I need to pay you for that?” It’s hard to know how to even begin to answer that: the comparison chafes for me, but it amounts to putting both church and news into a “public good” bucket. That people see news in that way is probably good. Providing it for free is hard, but you can see how they got there. A newspaper is a physical object that you can imagine handing over dollars for; digital news feels like it’s in the ether. It perhaps points to a philanthropic model as the best fit. So depending on wealthy donors and foundations to allow everyone to have free access to it makes some sense. This also puts paid (so to speak) to micropayments solutions, which I’m generally skeptical of anyway. If nobody sees the need to pay for news, convincing them to fund a wallet feels like an uphill battle. Meanwhile, the people most likely to pay directly for news are older, wealthier, liberal Democrats. Again, not a surprise, but useful to have it laid out like this; many newsrooms I’ve spoken to are trying to figure out how to move away from a base of older, wealthier, left-leaning people, and, well, it’s not just them. Maybe it’s worth leaning into that for funding and concentrating on finding a broader audience for the news itself. [Link]

Only 8% of respondents believe individual Americans have a responsibility to pay for news. "I don't think that information should be a privilege," one respondent said.

11.02.2026 23:07 👍 0 🔁 0 💬 0 📌 0
Everyone is stealing TV [Janko Roettgers at The Verge] Maybe this goes without saying, but I don’t think these devices should be trusted. > “It’s called the SuperBox, and it’s being demoed by Jason, who also has homemade banana bread, okra, and canned goods for sale. “People are sick and tired of giving Dish Network $200 a month for trash service,” Jason says. His pitch to rural would-be cord-cutters: Buy a SuperBox for $300 to $400 instead, and you’ll never have to shell out money for cable or streaming subscriptions again.” From a user perspective, I see the appeal: I certainly have subscription fatigue. Beyond that, geoblocks are intensely irritating to me; I’d give anything to be able to watch the UK’s _Channel 4 News_ , or _Doctor Who_ spinoff _The War Between the Land and the Sea_ , which are both unavailable to me unless I want to dive into VPNs and breaking terms of service. A box that gives me what I want to watch, no questions asked, seems too good to be true. It’s not fully clear who is manufacturing these devices, what’s on them, or who runs the services that allow people to access all this television without paying for it. We already know that some streaming boxes have been fronts for residential botnets that have been used for illicit activities that run the gamut from avoiding scraper detection to real organized crime. If I wanted to run malware inside the networks of thousands of homes and businesses, this wouldn’t be a bad way to go about it. Which is a shame, because the allure is real. I’d pay for all that unavailable television. Just, please, let me. [Link]

"Fed up with increasing subscription prices, viewers embrace rogue streaming boxes." The question is: what's on them?

11.02.2026 16:45 👍 0 🔁 0 💬 0 📌 0
A note about personal security I made a mistake last month that hopefully others can learn from. There are rumors that ICE is turning its attention to the Philadelphia area, where I live. I’m a natural-born American citizen, but based on accounts from Minneapolis, I’m not excited to run into them, and I’m worried about the health and well-being of my child. So I briefly joined a local Signal group that reports ICE sightings. I have been using my full name on Signal — and I’m the only Ben Werdmuller in the world. In the less than 24 hours that I was a part of the group, the entire group’s membership was downloaded and investigated by a right-wing community. They ultimately released details about my location (not specific, but specific enough to let me know they know where I live) as well as some vague threats of personal harm. They also linked my name to ProPublica reporting about the Alex Pretti killers. Journalism is a vital part of democracy and no journalist should ever receive these sorts of threats. For the record, I have no involvement with actual reporting at ProPublica: I’m not a journalist and I’m not part of the news team. I support IT, security, and product engineering. I didn’t know the story was coming out until it was published. But again: I think it would be unacceptable for journalists who _were_ involved in this (or any) reporting to receive these sorts of threats. The lesson is: there are co-ordinated groups of right-wing activists (or potentially more than activists) looking to intimidate people who might protest or report on ICE. Be aware of where you have posted identifying information, and be wary of joining groups about sensitive topics. What starts online doesn’t necessarily stay online. I should have been more aware of the threat landscape. Joining that group created a risk for me and my family, and I should have paid closer attention to what my Signal profile revealed. The stakes are very high. 
I have a high risk tolerance for myself, but security and privacy are a group inoculation: nobody around me asked for this. Therefore there’s a responsibility to be careful.

11.02.2026 14:12 👍 1 🔁 1 💬 0 📌 0
Journalism lost its culture of sharing <p>[<a href="https://source.opennews.org/articles/journalism-lost-sharing-culture/">Scott Klein and Ben Welsh in Source</a>]</p><p>I agree, strongly, with this piece about (re)building an open source culture in news by <a href="https://bsky.app/profile/kleinmatic.bsky.social">Scott Klein</a> and <a href="https://palewi.re/who-is-ben-welsh/">Ben Welsh</a>. But then, I would: I spent over a decade working to build open source communities, and then another decade and change working alongside and then inside newsrooms.</p><p>So it’s to my chagrin that the newsroom where I currently serve as Senior Director of Technology is one of the places listed here where open source contributions have significantly dropped off:</p><blockquote>“At ProPublica, teams published detailed white papers alongside major investigations, explaining their quantitative methodologies with scientific rigor, allowing other researchers to verify and learn from their work. Major news organizations ran active blogs where they shared techniques and lessons learned. Conference presentations at NICAR and elsewhere became venues for passing along hard-won knowledge.”</blockquote><p>The effect of this work didn’t just lift the work of journalism, it attracted new people to it:</p><blockquote>“This culture made newsrooms more attractive places to work for civic-minded technologists. If you had programming skills and wanted to use them to make a difference, journalism offered you the chance to build things that mattered and share them with the world.”</blockquote><p>I think there’s a lot to be gained by collaborating on an open source basis. We typically run small, resource-constrained teams where building new software is contextually hard. 
And we have problems that, if they’re not identical, are at least significantly overlapping; by <em>not</em> collaborating on them, we further an ecosystem where low-resource organizations are all solving the same sorts of things with very few people and very little money in parallel.</p><p>I was present at the News Product Alliance Summit session described in this piece, and I think the analysis of both the causes of this decline and some of the solutions are spot on. I was particularly enamored by the idea of an Open Source Editor (or director — does everything in news need to be an editor?) and public recognition for great open technical work in the field of journalism.</p><p>I think it’s also worth saying that open source, done well, is about much more than just releasing your code. A good open source project is a community, not a package. So there’s a lot of ecosystem development and community management involved to foster the kind of real collaboration that is required for this to succeed — even after newsrooms have overcome the institutional hurdles to releasing their work in the first place.</p><p>I’m really grateful that Scott and Ben have been championing this cause. I’m right there with them, and I’ll do what I can to help. It’s a concrete way we can build a more successful, efficient news ecosystem with stronger technology capabilities, and that’s something we should all want.</p><p>[<a href="https://source.opennews.org/articles/journalism-lost-sharing-culture/">Link</a>]</p>

"The data are clear: The open-source culture that defined an earlier era of online journalism has collapsed."

27.01.2026 16:36 👍 0 🔁 1 💬 0 📌 0
Hiring in an era of fake candidates, real scams and AI slop "Red flags that both job seekers and employers should watch for, in an era of AI slop and application scams."
25.01.2026 13:59 👍 0 🔁 2 💬 0 📌 0
Notable links: Jan 24, 2026 What does the future of engineering look like? And more.
24.01.2026 13:49 👍 0 🔁 0 💬 0 📌 0
On ICE, Verification, and Presence As Harm "Bluesky built a verification system designed to distribute trust, and then didn't use it when it mattered."
24.01.2026 03:10 👍 0 🔁 2 💬 0 📌 0
The Five Levels: from Spicy Autocomplete to the Software Factory "I’ve now seen dozens of companies struggling to put AI to work writing code, and each one has moved through five clear tiers of automation. That felt familiar."
23.01.2026 22:51 👍 0 🔁 0 💬 0 📌 0
The Forwardable Email "A way to help others actually make that connection they said they would." I've used this for a decade, and it's probably useful for you, too.
22.01.2026 14:11 👍 0 🔁 0 💬 0 📌 0
Funding Open Source for Digital Sovereignty "Open Source alone won't deliver digital sovereignty. Europe must fix procurement and fund those who actually build it."
21.01.2026 17:00 👍 1 🔁 0 💬 0 📌 0
Who owns your data? A Supreme Court case about a bank robbery could redefine your digital rights.
20.01.2026 14:26 👍 0 🔁 0 💬 0 📌 0
Remembering Beyond Vietnam on MLK Day My friend Roxann Stafford introduced me to Martin Luther King Jr's _Beyond Vietnam_ speech some time ago. It was not well received by the mainstream press at the time (more about that in a moment), but I think it's prescient and, unfortunately, timely. At the Riverside Church in New York City, he delivered a blistering response to America's war in Vietnam that went beyond the war itself to discuss the values of the nation and its impact at home and abroad:

> We must rapidly begin the shift from a thing-oriented society to a person-oriented society. When machines and computers, profit motives and property rights, are considered more important than people, the giant triplets of racism, extreme materialism, and militarism are incapable of being conquered.
>
> [...] A true revolution of values will soon look uneasily on the glaring contrast of poverty and wealth. With righteous indignation, it will look across the seas and see individual capitalists of the West investing huge sums of money in Asia, Africa, and South America, only to take the profits out with no concern for the social betterment of the countries, and say, "This is not just." It will look at our alliance with the landed gentry of South America and say, "This is not just." The Western arrogance of feeling that it has everything to teach others and nothing to learn from them is not just.

It is a brilliant speech. And at the time, it was condemned. As The Martin Luther King, Jr. Research and Education Institute at Stanford University notes:

> To King, the Vietnam War was only the most pressing symptom of American colonialism worldwide. King claimed that America made “peaceful revolution impossible by refusing to give up the privileges and the pleasures that come from the immense profits of overseas investments”. King urged instead “a radical revolution of values” emphasizing love and justice rather than economic nationalism.
>
> The immediate response to King’s speech was largely negative. Both the _Washington Post_ and _New York Times_ published editorials criticizing the speech, with the _Post_ noting that King’s speech had “diminished his usefulness to his cause, to his country, and to his people” through a simplistic and flawed view of the situation (“A Tragedy,” 6 April 1967). Similarly, both the National Association for the Advancement of Colored People and Ralph Bunche accused King of linking two disparate issues, Vietnam and civil rights. Despite public criticism, King continued to attack the Vietnam War on both moral and economic grounds.

Today, we tend to remember MLK in a sanitized, vague way. That's because what he was really calling for was parity in a way that is still unfortunately controversial, even today. At the time, 75% of Americans disapproved of him. But these ideas are even more obviously relevant today than they were at the time. He was prescient, and we need more leaders like him.
19.01.2026 13:58 👍 0 🔁 0 💬 0 📌 0
Notable links: January 16, 2026 An occupation in Minnesota and finding your way to work that doesn't harm
16.01.2026 10:00 👍 0 🔁 0 💬 0 📌 0
Remarks on the Federal Government’s Ongoing Presence in Minnesota “It is a campaign of organized brutality against the people of Minnesota by our own federal government.” So record it.
16.01.2026 03:55 👍 0 🔁 1 💬 0 📌 0
The Curiosity Tour "A way to approach a job hunt that uncovers extraordinary, unlisted opportunities" - using a distilled framework that makes something that might look daunting feel easy.
16.01.2026 02:12 👍 1 🔁 0 💬 0 📌 0
‘ELITE’: The Palantir App ICE Uses to Find Neighborhoods to Raid "Internal ICE material and testimony from an official obtained by 404 Media provides the clearest link yet between the technological infrastructure Palantir is building for ICE and the agency’s activities on the ground."
15.01.2026 14:52 👍 0 🔁 1 💬 0 📌 0