
Planet Debian

@planet.debian.org.web.brid.gy

🌉 bridged from https://planet.debian.org/ on the web: https://fed.brid.gy/web/planet.debian.org

28 Followers · 0 Following · 2,748 Posts · Joined 20.08.2024

Latest posts by Planet Debian @planet.debian.org.web.brid.gy

Mike Gabriel: Debian Lomiri Tablets 2025-2027 - Project Report (Q4/2025)

On 25th Oct, I announced via my personal blog and on Mastodon that Fre(i)e Software GmbH was hiring. The hiring process was a mix of asking developers I know and waiting for new people to apply. Between early and mid November 2025, we started with 13 developers (all part-time) to work on various topics around Lomiri (upstream and downstream).

Note that the achievements below don't document the overall activity in the Lomiri project, but only the part that our team at Fre(i)e Software GmbH contributed to.

### Organizational Achievements

* Set up management board for Qt6 migration in Lomiri [1]
* Set up management board for salsa2ubports package syncing [2]
* Bootstrap Qt 6.8 in UBports APT repository
* Bootstrap Qt 6.8 in Lomiri PPA
* Fix Salsa CI for all Lomiri-related Debian packages
* Facilitate contributor's project around XDG Desktop Portal support for Lomiri
* Plan how to bring DeltaTouch and DeltaChat core to Debian

### Maintenance Development

* Replace libofono-qt by libqofono in telepathy-ofono
* Rework unit tests in telepathy-ofono utilizing ofono-phonesim
* Obsolete the no-longer-used u1db-qt
* Fix wrong bin:pkg names regarding snapd-glib's QML module

### Qt6 Porting

qmake -> CMake porting (if needed) and Qt6 porting of shared libraries and QML modules consumed by Lomiri shell and Lomiri apps:

* biometryd
* libqofono
* libqofonoext
* libqtdbusmock
* lomiri-account-polld
* lomiri-action-api
* lomiri-api
* lomiri-download-manager
* lomiri-location-service
* lomiri-online-accounts
* lomiri-push-qml
* lomiri-push-service
* maliit-framework
* mediascanner2
* qtlomiri-appmenutheme
* qtpim (started, work in progress)
* qwebdavlib
* signond (flaws spotted in Debian's porting of signond to Qt6)

### Feature Development

* Continue with Morph Browser Qt6 / LUITK
* Build, run and fix LUITK unit tests for Qt6
* Various bug fixes and improvements for Morph Qt6
* Add mbim modem support to ofono upstream
* Improve ofono support in Network Manager
* Improve mbim modem support in lomiri-indicator-network
* Package kazv (convergent Matrix client) and dependencies for Debian
* Provide Lomiri images for Mobian

### Research

* Research on FUSE-based caching WebDAV client for lomiri-cloudsync-app
* Research on an alternative ORM instead of QDjango in libusermetrics

[1] https://gitlab.com/groups/ubports/development/-/boards/9895029?label_nam...
[2] https://gitlab.com/groups/ubports/development/-/boards/10037876?label_name[]=Topic%3A%20salsa2ubports%20DEB%20syncing
12.03.2026 09:23
Sven Hoexter: RFC 9849 - Encrypted Client Hello

Now that ECH is standardized I started to look into it to understand what's coming. While it's generally desirable not to leak the SNI information, I'm not sure if it will ever make it to the masses of (web)servers outside of big CDNs. Besides the extension of the TLS protocol to have an inner and outer ClientHello, you also need (frequent) updates to your HTTPS/SVCB DNS records. The idea is to rotate the key quickly; the OpenSSL API documentation talks about hourly rotation. Which means you have to have encrypted DNS in place (I guess these days DNS-over-HTTPS is the most common case), and you need to be able to distribute the private key between all involved hosts and update DNS records in time. In addition to that you can also use a "shared mode" where you handle the outer ClientHello (the one using the public key from DNS) centrally and the inner ClientHello on your backend servers. I'm not yet sure if that makes it easier or even harder to get right.

That all makes sense, and is feasible for setups like those at Cloudflare where the common case is that they provide the NS servers for your domain and terminate your HTTPS connections. But for the average webserver setup I guess we will not see a huge adoption rate. Or we'll soon see something like a Caddy webserver on steroids which integrates a DNS server for DoT, with not only automatic certificate renewal built in, but also automatic ECHConfig updates.

If you want to read up yourself, here are my starting points:

* RFC 9849 TLS Encrypted Client Hello
* RFC 9848 Bootstrapping TLS Encrypted ClientHello with DNS Service Bindings
* RFC 9934 Privacy-Enhanced Mail (PEM) File Format for Encrypted ClientHello (ECH)
* OpenSSL 4.0 ECH APIs
* Cloudflare: Good-bye ESNI, hello ECH!

If you're looking for a test endpoint, I see one hosted by Cloudflare:

```
$ dig +short IN HTTPS cloudflare-ech.com
1 . alpn="h3,h2" ipv4hint=104.18.10.118,104.18.11.118 ech=AEX+DQBBFQAgACDBFqmr34YRf/8Ymf+N5ZJCtNkLm3qnjylCCLZc8rUZcwAEAAEAAQASY2xvdWRmbGFyZS1lY2guY29tAAA= ipv6hint=2606:4700::6812:a76,2606:4700::6812:b76
```
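(Editorial aside, not part of the original post: the `ech=` parameter above is a base64-encoded ECHConfigList, the public half of the key material that clients use for the outer ClientHello. A minimal Python sketch of the wire format as specified for ECH version 0xfe0d, parsing the Cloudflare record quoted above; field names follow the spec, but this handles only that one version and skips unknown ones.)

```python
import base64

def parse_ech_config_list(b64):
    """Parse a TLS ECHConfigList from the base64 form published in an
    HTTPS DNS record's ech= parameter. Only version 0xfe0d is decoded."""
    data = base64.b64decode(b64)
    total_len = int.from_bytes(data[:2], "big")
    assert total_len == len(data) - 2, "length prefix mismatch"
    configs, off = [], 2
    while off < len(data):
        version = int.from_bytes(data[off:off + 2], "big")
        length = int.from_bytes(data[off + 2:off + 4], "big")
        body = data[off + 4:off + 4 + length]
        off += 4 + length
        if version != 0xFE0D:        # skip unknown/draft versions
            continue
        config_id = body[0]
        kem_id = int.from_bytes(body[1:3], "big")      # HPKE KEM identifier
        pk_len = int.from_bytes(body[3:5], "big")
        pos = 5 + pk_len                                # skip the public key
        cs_len = int.from_bytes(body[pos:pos + 2], "big")
        pos += 2 + cs_len                               # skip cipher suites
        pn_len = body[pos + 1]                          # after max_name_length
        public_name = body[pos + 2:pos + 2 + pn_len].decode()
        configs.append({"config_id": config_id, "kem_id": kem_id,
                        "public_name": public_name})
    return configs

# The ech= value from the cloudflare-ech.com HTTPS record above:
ech = ("AEX+DQBBFQAgACDBFqmr34YRf/8Ymf+N5ZJCtNkLm3qnjylCCLZc8rUZcwAE"
       "AAEAAQASY2xvdWRmbGFyZS1lY2guY29tAAA=")
print(parse_ech_config_list(ech))
# -> [{'config_id': 21, 'kem_id': 32, 'public_name': 'cloudflare-ech.com'}]
```

The `public_name` (here `cloudflare-ech.com`) is what an observer sees in the outer ClientHello; kem_id 32 (0x0020) is DHKEM(X25519, HKDF-SHA256).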
11.03.2026 17:21
Dirk Eddelbuettel: RcppDE 0.1.9 on CRAN: Maintenance

Another maintenance release of our RcppDE package arrived at CRAN, and has been built for r2u. RcppDE is a "port" of DEoptim, a package for derivative-free optimisation using differential evolution, from plain C to C++. By using RcppArmadillo the code became a lot shorter and more legible. Our other main contribution is to leverage some of the excellence we get for free from using Rcpp, in particular the ability to optimise user-supplied _compiled_ objective functions, which can make things a lot faster than repeatedly evaluating interpreted objective functions as DEoptim does (and which, in fairness, most other optimisers do too). The gains can be quite substantial.

This release is again maintenance. We aid Rcpp in the transition away from calling `Rf_error()` by relying on `Rcpp::stop()`, which behaves better with respect to unwinding when errors or exceptions are encountered. We also overhauled the references in the vignette, added an Armadillo version getter, and made the regular updates to continuous integration.

Courtesy of my CRANberries, there is also a diffstat report. More detailed information is on the RcppDE page, or the repository. This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
11.03.2026 15:21
Freexian Collaborators: Debian Contributions: Opening DebConf 26 Registration, Debian CI improvements and more! (by Anupa Ann Joseph)

# Debian Contributions: 2026-02

Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

## DebConf 26 Registration, by Stefano Rivera, Antonio Terceiro, and Santiago Ruano Rincón

DebConf 26, to be held in Santa Fe, Argentina in July, has opened for registration and event proposals. Stefano, Antonio, and Santiago all contributed to making this happen. As always, some changes needed to be made to the registration system. Bigger changes were planned, but we ran out of time to implement them for DebConf 26. All three of us have had experience in hosting local DebConf events in the past and have been advising the DebConf 26 local team.

## Debian CI improvements, by Antonio Terceiro

Debian CI is the platform responsible for automated testing of packages from the Debian archive. Its results are used by the Debian Release team automation, as quality assurance, to control the migration of packages from Debian unstable into testing, the base for the next Debian release. Antonio started developing an incus backend, and that prompted two rounds of improvements to the platform, including but not limited to: allowing users to select a job execution backend (lxc, qemu) during job submission, reducing the part of testbed image creation that requires superuser privileges, and other refactorings and bug fixes. The platform API was also improved to reduce disruption when reporting results to the Release Team automation after service downtimes. Last, but not least, the platform now has support for testing packages against variants of autopkgtest, which will allow the Debian CI team to test new versions of autopkgtest before making releases, to avoid widespread regressions.

## Miscellaneous contributions

* Carles improved po-debconf-manager as users requested features and found bugs. Improvements done: add packages from "unstable" instead of just salsa.debian.org, upgrade and merge templates of upgraded packages, finished adding typing annotations, improved deleting packages, support multiple-line texts, add --debug to see "subprocess.run" commands, etc.
* Carles, using po-debconf-manager, reviewed 7 Catalan translations and sent bug reports or MRs for 11 packages. Also reviewed the translations of `fortunes-debian-hints` and submitted possible changes in the hints.
* Carles submitted MRs for reportbug (`reportbug --ui gtk` detecting the wrong dependencies), devscripts (delete unused code from debrebuild and add recommended dependency), and `wcurl` (format --help for 80 columns). Carles submitted a bug report for apt not showing the long descriptions of packages.
* Carles resumed the effort of checking relations (e.g. Recommends / Suggests) between Debian packages. A new codebase (still in early stages) was started with a new approach in order to detect, report and track the broken relations.
* Emilio drove several transitions, most notably the haskell transition and the `glibc`/`gcc-15`/`zlib` transition for the s390 31-bit removal. This last one included reviewing and requeueing lots of autopkgtests due to britney losing a lot of results.
* Emilio reviewed and uploaded `poppler` updates to experimental for a new transition.
* Emilio reviewed, merged and deployed some performance improvements proposed for the security-tracker.
* Stefano prepared routine updates for `pycparser`, `python-confuse`, `python-cffi`, `python-mitogen`, `python-pip`, `wheel`, `platformdirs`, `python-authlib`, and `python-virtualenv`.
* Stefano updated Python 3.13 and 3.14 to the latest point releases, including security updates, and did some preliminary work for Python 3.15.
* Stefano reviewed changes to `dh-python` and merged MRs.
* Stefano did some debian.social sysadmin work, bridging additional IRC channels to Matrix.
* Stefano and Antonio, as DebConf Committee members, reviewed the DebConf 27 bids and took part in selecting the Japanese bid to host DebConf 27.
* Helmut sent patches for 29 cross build failures.
* Helmut continued to maintain rebootstrap, addressing issues relating to specific architectures (such as `musl-linux-any`, `hurd-any` or `s390x`) or specific packages (such as `binutils`, `brotli` or `fontconfig`).
* Helmut worked on diagnosing bugs such as `rocblas` #1126608, `python-memray` #1126944 (upstream) and `greetd` #1129070, with varying success.
* Antonio provided support for multiple MiniDebConfs whose websites run wafer + wafer-debconf (the same stack as DebConf itself).
* Antonio fixed the salsa tagpending webhook.
* Antonio sent specinfra upstream a patch to fix detection of Debian systems in some situations.
* Santiago reviewed some merge requests for the Salsa CI pipeline, including !703 and !704, that aim to improve how the `build source` job is handled by Salsa CI. Thanks a lot to Jochen for his work on this.
* In collaboration with Emmanuel Arias, Santiago proposed a couple of projects for the Google Summer of Code (GSoC) 2026 round. Santiago has been reviewing applications and giving feedback to candidates.
* Thorsten uploaded new upstream versions of `ipp-usb`, `brlaser` and `gutenprint`.
* Raphaël updated publican to fix an old bug that became release-critical and that happened only when building with the nocheck profile. Publican is a build dependency of the Debian Administrator's Handbook, and with that fix the package is back in testing.
* Raphaël implemented a small feature in Debusine that makes it possible to refer to a collection in a parent workspace even if a collection with the same name is present in the current workspace.
* Lucas updated the current status of Ruby packages affecting the Ruby 3.4 transition after a bunch of updates made by team members. He will follow up on this next month.
* Lucas joined the Debian orga team for GSoC this year and tried to reach out to potential mentors.
* Lucas did some content work for MiniDebConf Campinas, Brazil.
* Colin published minor security updates to "bookworm" and "trixie" for CVE-2025-61984 and CVE-2025-61985 in `OpenSSH`, both of which allowed code execution via `ProxyCommand` in some cases. The "trixie" update also included a fix for mishandling of PerSourceMaxStartups.
* Colin spotted and fixed a typo in the bug tracking system's spam-handling rules, which in combination with a devscripts regression caused `bts forwarded` commands to be discarded.
* Colin ported 12 more Python packages away from using the deprecated (and now removed upstream) `pkg_resources` module.
* Anupa is co-organizing MiniDebConf Kanpur with the Debian India team. Anupa was responsible for preparing the schedule, publishing it on the website, and coordination with the fiscal host, in addition to attending meetings.
* Anupa attended the Debian Publicity team online sprint, which was a skill-sharing session.
11.03.2026 07:20
Isoken Ibizugbe: Starting Out in Outreachy

So you want to join Outreachy but you don't understand it, you're scared, or you don't know what open source is about.

## What is FOSS anyway?

Free and Open Source Software (FOSS) refers to software that anyone can use, modify, and share freely. Think of it as a community garden; instead of one company owning the "food," people from all over the world contribute, improve, and maintain it so everyone can benefit for free. You can read more here on what it means to contribute to open source.

Outreachy provides paid internships to anyone from any background who faces underrepresentation, systemic bias, or discrimination in the technical industry where they live. Their goal is to increase diversity in open source. Read their website for more. I spent a good amount of time reading all the guides listed, including the applicant guide and the how-to-apply guide.

## The "Secret" to Applying (Spoiler: It's not a secret)

I know newcomers are scared or unsure and would prefer answers from previous participants, but the Outreachy website is actually a goldmine; almost every question you have is already answered there if you look closely. I used to hate reading documentation, but I've learned to love it. Documentation is the "Source of Truth."

* My Advice: Read every single guide on their site. The applicant guide is your roadmap. Embracing documentation now will make you a much better contributor later.

## The AI Trap: Be Yourself

Now for the part most newcomers have asked about: the initial essay. I know it's tempting to use AI, but I really encourage you to skip it for this. Your own story is much more powerful than a generated one. Outreachy and its mentoring organizations value your unique story. They are strongly against fabricated or AI-exaggerated essays.

For example, when I contributed to Debian using openQA, the information wasn't well established on the web. When I tried to use AI, it suggested imaginary ideas. The project maintainers had a particular style of contributing, so I had to follow the instructions carefully, observe the codebase, and read the provided documentation. With that information, I always wrote a solution first before consulting AI, and mine was always better. AI can only be intelligent in the context of what you give it; if it doesn't have your answer, it will look for the most similar solution (hallucinate). We do not want to increase the burden on reviewers; their time is important because they are volunteers, too. This is crucial when you qualify for the contribution phase.

## The Application Process

There are two main stages:

* The initial application: Here you fill in basic details, time availability, and essay questions (you can find these on the Outreachy website).
* The contribution phase: This is where you show you have the skills to work on the projects. Every project will list the skills needed and the level of proficiency.

### When you qualify for the contribution phase:

* A lot of people will try to create buzz or even panic; you just have to focus. Once you've gotten the hang of the project, remember to help others along the way.
* You can start contributions with spelling corrections, move to medium tasks (do multiple of these), then a hard task if possible. You don't need to be a guru on day one.
* It's all about community building. Do your part to help others understand the project too; this is also a form of contribution.
* Lastly, every project mentor has a way of evaluating candidates. My summary is: be confident, demonstrate your skills, and learn where you are lacking. Start small and work your way up; you don't have to prove yourself as a guru.

### Tips

* Watch this: This step-by-step video is a great walkthrough of the initial application process.
* Sign up for the email list to get updates: https://lists.outreachy.org/cgi-bin/mailman/listinfo/announce
* Be fast: Complete your initial application in the first 3 days, as there are a lot of applicants.
* Back it up: In your essay about systemic bias, include some statistics to back it up.
* Learn Git: Even if you don't have programming skills, contributions are pushed to GitHub or GitLab. Practice some commands and contribute to a "first open issue" to understand the flow: https://github.com/firstcontributions/first-contributions

The most important tip? Apply anyway. Even if you feel underqualified, the process itself is a massive learning experience.
09.03.2026 23:14
Dirk Eddelbuettel: nanotime 0.3.13 on CRAN: Maintenance

Another minor update 0.3.13 for our nanotime package is now on CRAN, and has been uploaded to Debian and compiled for r2u. nanotime relies on the RcppCCTZ package (as well as the RcppDate package for additional C++ operations) and offers efficient high(er) resolution time parsing and formatting up to nanosecond resolution, using the bit64 package for the actual `integer64` arithmetic. Initially implemented using the S3 system, it has benefitted greatly from a rigorous refactoring by Leonardo, who not only rejigged `nanotime` internals in S4 but also added new S4 types for _periods_, _intervals_ and _durations_.

This release, the first in eleven months, rounds out a few internal corners and helps Rcpp with the transition away from `Rf_error` to only using `Rcpp::stop`, which deals more gracefully with error conditions and unwinding. We also updated how the vignette is made and its references, updated the continuous integration as one does, altered how the documentation site is built, gladly took a PR from Michael polishing another small aspect, and tweaked how the compilation standard is set. The NEWS snippet below has the fuller details.

> #### Changes in version 0.3.13 (2026-03-08)
>
> * The `methods` package is now a Depends as WRE recommends (Michael Chirico in #141 based on a suggestion by Dirk in #140)
>
> * The mkdocs-material documentation site is now generated via altdoc
>
> * Continuous Integration scripts have been updated
>
> * Replace `Rf_error` with `Rcpp::stop`, turn remaining one into `(Rf_error)` (Dirk in #143)
>
> * Vignette now uses the `Rcpp::asis` builder for pre-made pdfs (Dirk in #146 fixing #144)
>
> * The C++ compilation standard is explicitly set to C++17 if an R version older than 4.3.0 is used (Dirk in #148 fixing #147)
>
> * The vignette references have been updated

Thanks to my CRANberries, there is a diffstat report for this release. More details and examples are at the nanotime page; code, issue tickets etc. at the GitHub repository, and all documentation is provided at the nanotime documentation site. This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.
09.03.2026 17:14
Colin Watson: Free software activity in February 2026

My Debian contributions this month were all sponsored by Freexian. You can also support my work directly via Liberapay or GitHub Sponsors.

## OpenSSH

* openssh: Please remove/replace usage of dh_movetousr

I released bookworm and trixie fixes for CVE-2025-61984 and CVE-2025-61985, both allowing code execution via `ProxyCommand` in some cases. The trixie update also included a fix for "openssh-server: refuses further connections after having handled PerSourceMaxStartups connections".

## bugs.debian.org administration

Gioele Barabucci reported that some messages to the bug tracking system generated by the `bts` command were being discarded. While the regression here was on the client side, I found and fixed a typo in our SpamAssassin configuration that was failing to apply a bonus specifically to `forwarded` commands, mitigating the problem.

## Python packaging

New upstream versions:

* aiosmtplib
* bitstruct
* diff-cover
* django-q
* isort
* multipart
* poetry (adding support for Dulwich >= 0.25)
* poetry-core
* pydantic-settings
* python-build
* python-certifi
* python-datamodel-code-generator
* python-flatdict
* python-holidays
* python-maggma
* python-pytokens
* python-scruffy
* python-urllib3 (fixing CVE-2025-66471 and a chunked decoding bug)
* responses
* yarsync
* zope.component
* zope.deferredimport

Porting away from the deprecated (and now removed from upstream setuptools) `pkg_resources`:

* genshi (contributed upstream)
* germinate
* mopidy
* nose2
* pokrok (contributed upstream)
* pylama
* python-flask-seeder
* python-maggma (contributed upstream)
* python-pybadges
* python-scruffy (contributed upstream)
* thumbor (contributed upstream)
* zope.deprecation (contributed upstream a while ago, but there hasn't been an upstream release yet)

Other build/test failures:

* flask-dance: FTBFS: No module named 'pkg_resources' (actually fixed by adding a missing dependency to python3-sphinxcontrib.seqdiag)
* paramiko: autopkgtest regression on i386 (contributed upstream)
* poetry: autopkgtest regression on i386
* python-argh
* python-django-celery-beat: FTBFS: FAILED t/unit/test_models.py::HumanReadableTestCase::test_long_name
* python-maturin: rust-itertools update
* python-msrest: FTBFS: FAILED tests/asynctests/test_async_client.py::TestServiceClient::test_client_send (contributed upstream, though not very successfully)
* python-typing-inspect

Other bugs:

* python-datamodel-code-generator: Depends: python3-isort (< 8) but 8.0.0-1 is to be installed (contributed upstream)
* python-typeguard: Mark python3-typeguard Multi-Arch: foreign
* wheel: Mark python3-wheel Multi-Arch: foreign
* zope.deferredimport: Please make the build reproducible (contributed upstream, with a follow-up fix)

I added a manual page symlink to make the documentation for `Testsuite: autopkgtest-pkg-pybuild` easier to find. I backported python-pytest-unmagic and a more recent version of pytest-django to trixie.

## Rust packaging

* librust-pyo3-ffi-dev: Cannot be installed for foreign architectures

I also packaged rust-garde and rust-garde-derive, which are part of the pile of work needed to get the ruff packaging back in shape (a project I haven't decided whether to take on for real, but I thought I'd at least chip away at a bit of it).

## Other bits and pieces

* arch-test: Remove build dependency on binutils-mips64el-linux-gnuabi64 (NMU)

## Code reviews

* debconf: Add BMP version of debian-logo (merged and uploaded)
* openssh: Reorder pam_selinux(7) usage (merged and uploaded)
* openssh-client: use sysusers.d, drop superflous dependencies (merged and uploaded)
* openssh: Stop deleting system user on remove/purge (merged and uploaded)
* openssh: Do not link against libcrypt on GNU/Hurd (merged and uploaded)
* partman-prep: Align PReP descriptions with other partition types (merged)
* python-better-exceptions (sponsored upload for Seyed Mohamad Amin Modaresi)
09.03.2026 13:13
Sven Hoexter: Latest pflogsumm from unstable on trixie

If you want the latest pflogsumm release from unstable on your Debian trixie/stable mailserver, you have to rely on pinning. (Hint for the future: starting with apt 3.1 there is a new `Include` and `Exclude` option for your sources.list.) For trixie you have to use e.g.:

```
$ cat /etc/apt/sources.list.d/unstable.sources
Types: deb
URIs: http://deb.debian.org/debian
Suites: unstable
Components: main
# This will work with apt 3.1 or later:
# Include: pflogsumm
Signed-By: /usr/share/keyrings/debian-archive-keyring.pgp

$ cat /etc/apt/preferences.d/pflogsumm-unstable.pref
Package: pflogsumm
Pin: release a=unstable
Pin-Priority: 950

Package: *
Pin: release a=unstable
Pin-Priority: 50
```

This should result in:

```
$ apt-cache policy pflogsumm
pflogsumm:
  Installed: (none)
  Candidate: 1.1.14-1
  Version table:
     1.1.14-1 950
         50 http://deb.debian.org/debian unstable/main amd64 Packages
     1.1.5-8 500
        500 http://deb.debian.org/debian trixie/main amd64 Packages
```

### Why would you want to do that?

Besides some new features and improvements in the newer releases, the pflogsumm version in stable has an issue parsing the timestamps generated by postfix itself when you write to a file via maillog_file. Since the Debian default setup uses logging to stdout and writes out to `/var/log/mail.log` via rsyslog, I never invested time to fix that case. But since Jim picked up pflogsumm development in 2025, that was fixed in pflogsumm 1.1.6. The bug is #1129958, originally reported in #1068425.

Since it's an arch:all package you can just pick it from unstable. I don't think it's a good candidate for backports, and just fetching the fixed version from unstable is a compromise for those who run into that issue.
09.03.2026 11:13
Dirk Eddelbuettel: RProtoBuf 0.4.26 on CRAN: More Maintenance

A new maintenance release 0.4.26 of RProtoBuf arrived on CRAN today. RProtoBuf provides R with bindings for the Google Protocol Buffers ("ProtoBuf") data encoding and serialization library used and released by Google, and deployed very widely in numerous projects as a language- and operating-system-agnostic protocol. The new release is also already available as a binary via r2u.

This release brings an update to aid in the ongoing Rcpp transition from `Rf_error` to `Rcpp::stop`, and includes a few more minor cleanups, including one contributed by Michael. The following section from the NEWS.Rd file has full details.

> #### Changes in RProtoBuf version 0.4.26 (2026-03-06)
>
> * Minor cleanup in DESCRIPTION depends and imports
>
> * Remove obsolete check for `utils::.DollarNames` (Michael Chirico in #111)
>
> * Replace `Rf_error` with `Rcpp::stop`, turn remaining one into `(Rf_error)` (Dirk in #112)
>
> * Update `configure` test to check for RProtoBuf 3.3.0 or later

Thanks to my CRANberries, there is a diff to the previous release. The RProtoBuf page has copies of the (older) package vignette, the 'quick' overview vignette, and the pre-print of our JSS paper. Questions, comments etc. should go to the GitHub issue tracker off the GitHub repo. This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
07.03.2026 15:07
Steinar H. Gunderson: A286874(14) = 28

There's a logic puzzle that goes like this: A king has a thousand bottles of wine, where he knows that one is poisoned. He also has ten disposable servants that could taste the wine, but for whatever reason (the usual explanation is that the poison is slow-working and the feast is nearing), they can only take one sip each, possibly mixed from multiple bottles. How can he identify the bad bottle?

The solution is well-known and not difficult; you give each bottle a number 0..999 and write it out in binary, and use the ones to assign wines to servants. (So there's one servant that drinks a mix of all the odd-numbered wines, and that tells you if the poisoned bottle's number is odd or even. Another servant drinks a mix of bottles 2, 3, 6, 7, 10, 11, etc., and that tells you the second-lowest bit. And so on.) This works because ten servants allow you to test 2^10 = 1024 bottles. It is also easy to extend this to "_at most_ one bottle is poisoned"; give the wines numbers from 1..1000 instead, follow the same pattern, and if no servant dies, you know the answer is zero. (This allows you to test at most 1023 bottles.)

Now, let's tweak the puzzle: What if there's zero, one or _two_ poisoned bottles? How many bottles can the king test with his ten servants? (If you're looking for a more real-world application of this, replace "poisoned bottles" with "COVID tests" and maybe it starts to sound less arbitrary.) Of course, the king can easily test ten bottles by having each servant test exactly one bottle each, but it turns out you can get to 13 by being a bit more clever, for instance:

```
     0123456789  ← Servant number
   0 0000000111
   1 0000011001
   2 0000101010
   3 0000110100
   4 0001001100
   5 0010010010
   6 0011000001
   7 0100100001
   8 0101000010
   9 0110000100
  10 1001010000
  11 1010100000
  12 1100001000
   ↑ Bottle number
```

It can be shown (simply by brute force) that no two rows here are a subset of another row, so if e.g. the "servant death" vector is 0110101110 (servants 1, 2, 4, 6, 7 and 8 die), the only way this could happen is if bottles 2 and 9 are poisoned (and none else). Of course, the solution is nonunique, since you could switch around the numbers of the servants or wines and it would still work. But if you don't allow that kind of permutation, there are only five different solutions for 10 servants and 13 wines.

The maximum number of possible wines to test is recorded in OEIS A286874, and the number of different solutions in A303977. So for A286874, a(10) = 13 and for A303977, a(10) = 5. We'd like to know these values for higher arguments, in particular for A286874 (A303977 is a bit more of a curiosity, and also a convenient place to write down all the solutions). I've written before about how we can create fairly _good_ solutions using error-correcting codes (there are also other possible constructions), but finding _optimal_ ones turns out to be hard. The only way we know of is some form of brute force. (I used a SAT solver to confirm a(10) and a(11), but it seemed to get entirely stuck on a(12).)

I've _also_ written about my brute-force search of a(12) and a(13), so I'm not going to repeat that, but it turned out that with a bunch of extra optimizations and 210 calendar days of near-continuous calculation, I could confirm that:

* A286874 a(14) = 28
* A303977 a(14) = 788 (!!)

The latter result is very surprising to me, so it was an interesting find. I would have assumed that with this many solutions, we'd find a(14) = 29. I don't have enough CPU power to test a(15) or a(16) (do contact me if you have a couple thousand cores to lend out for some months or more), but I'm going to do a search in a given subset of the search space (5-uniform solutions), which is much faster; it won't allow us to fix more elements of either of the sequences, but it's possible that we'll find some new records and thus lower bounds for A286874. Like I already posted, we know that a(15) >= 42.

(Someone should also probably go find some bounds for a(17), a(18), etc. When the sequence was written, the posted known bounds were far ahead of the sequence itself, but my verification has caught up, and my approach is not as good at creating solutions heuristically out of thin air.)
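(Editorial aside, not part of the original post: the claim that the 13-bottle scheme uniquely identifies any set of zero, one or two poisoned bottles can be checked mechanically, since every outcome, i.e. the OR of the codewords of the poisoned bottles, must be distinct. A minimal Python sketch, with the matrix transcribed from the post:)

```python
from itertools import combinations

# The 13x10 incidence matrix from the post: row i is bottle i,
# column j is servant j; a 1 means servant j sips from that bottle.
rows = [
    "0000000111", "0000011001", "0000101010", "0000110100",
    "0001001100", "0010010010", "0011000001", "0100100001",
    "0101000010", "0110000100", "1001010000", "1010100000",
    "1100001000",
]
codes = [int(r, 2) for r in rows]

# For the scheme to work, every union (bitwise OR) of at most two
# codewords must be distinct: no two candidate poison sets may
# produce the same set of dead servants.
outcomes = {0: ()}                 # no servant dies -> no poison
for i, c in enumerate(codes):
    outcomes[c] = (i,)             # exactly one poisoned bottle
for (i, a), (j, b) in combinations(enumerate(codes), 2):
    key = a | b                    # two poisoned bottles
    assert key not in outcomes, "ambiguous outcome!"
    outcomes[key] = (i, j)

# The example from the post: servants 1, 2, 4, 6, 7 and 8 dying
# can only mean bottles 2 and 9 are poisoned.
print(outcomes[int("0110101110", 2)])  # -> (2, 9)
```

Since no assertion fires, all 1 + 13 + C(13,2) = 92 possible outcomes are distinct, confirming this is a valid a(10) = 13 scheme.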
07.03.2026 11:07
Thorsten Alteholz: My Debian Activities in February 2026 ### **Debian LTS/ELTS** This was my hundred-fortieth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on: * [DLA 4474-1] rlottie security update to fix three CVEs related to boundary checks. * [DLA 4477-1] munge security update to fix one CVE related to a buffer overflow. * [DLA 4483-1] gimp security update to fix four CVEs related to arbitrary code execution. * [DLA 4487-1] gegl security update to fix two CVEs related to heap-based buffer overflow. * [DLA 4489-1] libvpx security update to fix one CVE related to a buffer overflow. * [ELA-1649-1] gimp security update to fix three CVEs in Buster and Stretch related to arbitrary code execution. * [ELA-1650-1] gegl security update to fix two CVEs in Buster and Stretch related to heap-based buffer overflow. Some CVEs could be marked as _not-affected_ for one or all LTS/ELTS-releases. I also worked on package _evolution-data-server_ and attended the monthly LTS/ELTS meeting. ### **Debian Printing** This month I uploaded a new upstream versions: * … ipp-usb to unstable. * … brlaser to unstable. * … gutenprint to unstable. **This work is generously funded byFreexian!** ### **Debian Lomiri** This month I continued to worked on unifying packaging on Debian and Ubuntu. This makes it easier to work on those packages independent of the used platform. **This work is generously funded byFre(i)e Software GmbH!** ### **Debian Astro** This month I uploaded a new upstream version or a bugfix version of: * … c-munipack to unstable. This package now contains a version without GTK support. Upstream is working on a port to GTK3 but seems to need some more time to finish this. * … libasi to unstable. * … libdfu-ahp to unstable. * … libfishcamp to unstable. * … libinovasdk to unstable. * … libmicam to unstable. * … siril to unstable (sponsored upload). 
### **Debian IoT**

This month I uploaded a new upstream version or a bugfix version of:

* … pyicloud to unstable.

Unfortunately, development of _openoverlayrouter_ finally stopped, so I had to remove this package from the archive.

### **Debian Mobcom**

This month I uploaded a new upstream version or a bugfix version of:

* … libsmpp34 to unstable.

### **misc**

This month I uploaded a new upstream version or a bugfix version of:

* … nuspell to unstable.

I also sponsored the upload of some Matomo dependencies. Thanks a lot to William for preparing the packages!
06.03.2026 19:05 👍 0 🔁 0 💬 0 📌 0
Russell Coker: Links March 2026

Krebs has an interesting article about the Kimwolf botnet which uses residential proxy relay services [1]. Cory Doctorow wrote an insightful blog post about code being a liability, not an asset [2]. Aigars Mahinovs wrote an interesting review of the BMW i4 M50 xDrive and the BMW i5 eDrive40, which seem like very impressive vehicles [3]. I was wondering what BMW would do now that all the features they had in the 90s have been copied by cheaper brands, but they have managed to do new and exciting things. Arstechnica has an interesting article about the recently declassified JUMPSEAT surveillance satellites that ran from 1971 to 1987 [4]. Cory Doctorow wrote an interesting blog post about OgApp, which briefly allowed viewing Instagram without ads, and the issues of US corporations misusing EU copyright law [5]. ZDNet has an interesting article about new planned developments for the web of trust for Linux kernel coders (and others) [6]. Last month India had a 300 million person strike; we need more large scale strikes against governments that support predatory corporations [7]. Techdirt has an insightful article on the ways fascism is bad for innovation and a market based economy [8]. The Acknowledgements section from the Scheme Shell (scsh) reference is epic [9]. Vice has an insightful article on research about “do your own research” and how simple Google searches tend to reinforce conspiracy theories [10]. A problem with Google is that it’s most effective if you already know the answer. Issendai has an interesting and insightful series of blog posts about estranged parents forums, which seem a lot like Incel forums in the way they promote abuse [11]. Caitlin Johnstone wrote an interesting article about how “the empire” caused the rebirth of a real counterculture by its attempts to coerce support for Israeli atrocities [12].
Radley Balko wrote an interesting article about “the courage to be decent”, concerning the Trump regime’s attempts to scare lawyers into cooperating with them [13]. Terry Tan wrote a useful resource on the API for Google search; this could be good for shell scripts and for 3rd party programs that launch a search [14]. The Proof has an interesting article about eating oysters and mussels as a vegan [15]. All Things Linguistic has an interesting and amusing post about Yoda’s syntax in non-English languages [16].

* [1] https://tinyurl.com/2ypyzh5w
* [2] https://tinyurl.com/2b9kyl5x
* [3] https://aigarius.com/blog/2026/01/07/sedan-experience/
* [4] https://tinyurl.com/23ekabmj
* [5] https://pluralistic.net/2026/01/30/zucksauce/#gandersauce
* [6] https://tinyurl.com/29j6zzyc
* [7] https://tinyurl.com/2xvfmslu
* [8] https://tinyurl.com/2b7m8pwa
* [9] https://en.wikipedia.org/wiki/Scsh
* [10] https://tinyurl.com/2aajkoyv
* [11] https://tinyurl.com/ywd3kqel
* [12] https://tinyurl.com/2cqep7cj
* [13] https://radleybalko.substack.com/p/the-courage-to-be-decent
* [14] https://serpapi.com/blog/every-google-udm-in-the-world/
* [15] https://theproof.com/eating-oysters-and-mussels-as-a-vegan/
* [16] https://tinyurl.com/229soykv

Related posts:

1. Links March 2024 Bruce Schneier wrote an interesting blog post about his workshop...
2. Links September 2020 MD5 cracker, find plain text that matches MD5 hash [1]....
3. Links February 2026 Charles Stross has a good theory of why “AI” is...
06.03.2026 13:05 👍 0 🔁 0 💬 0 📌 0
Antoine Beaupré: Wallabako retirement and Readeck adoption

Today I have made the tough decision of retiring the Wallabako project. I have rolled out a final (and trivial) 1.8.0 release which fixes the uninstall procedure and rolls out a bunch of dependency updates.

# Why?

The main reason I'm retiring Wallabako is that I have completely stopped using it. It's not the first time: for a while, I wasn't reading Wallabag articles on my Kobo anymore. But I had started working on it again about four years ago. Wallabako itself is about to turn 10 years old.

This time, I stopped using Wallabako because there's simply something better out there. I have switched away from Wallabag to Readeck!

And I'm also tired of maintaining "modern" software. Most of the recent commits on Wallabako are from renovate-bot. This feels futile and pointless. I guess it _must_ be done at some point, but it also feels like we went wrong somewhere there. Maybe Filippo Valsorda is right and one should turn dependabot off.

# Moving from Wallabag to Readeck

Readeck is pretty fantastic: it's fast, it's lightweight, everything Just Works. All sorts of concerns I had with Wallabag are just gone: questionable authentication, questionable API, weird bugs, mostly gone. I am still looking for multiple-tag filtering, but I have a much better feeling about Readeck than Wallabag: it's written in Golang and under active development.

In any case, I don't want to throw shade at the Wallabag folks either. They did solve most of the issues I raised with them and even accepted my pull request. They have helped me collect thousands of articles over a long time! It's just time to move on.

The migration from Wallabag was impressively simple. The importer is well-tuned, fast, and just works. I wrote about the import in this issue, but it took about 20 minutes to import essentially all the articles, and another 5 hours to refresh all the contents.
There are minor issues with Readeck which I have filed (after asking!):

* add justified view for articles (Android app)
* more metadata in article display (Android app)
* show the number of articles in the label browser
* ignore duplicates (Readeck will happily add duplicates, whereas Wallabag at least _tries_ to deduplicate articles -- but often fails)

But overall I'm happy and impressed with the result.

I'm also a mix of happy and sad at letting go of my first (and only, so far) Golang project. I loved writing in Go: it's a clean language, fast to learn, and a beauty to write parallel code in (at the cost of a rather obscure runtime). It would have been _much_ harder to write this in Python, but my experience in Golang helped me think about how to write more parallel code in Python, which is kind of cool.

The GitLab project will remain publicly accessible, but archived, for the foreseeable future. If you're interested in taking over stewardship of this project, contact me.

Thanks Wallabag folks, it was a great ride!
06.03.2026 05:04 👍 0 🔁 0 💬 0 📌 0
Vincent Bernat: Automatic Prometheus metrics discovery with Docker labels

Akvorado, a network flow collector, relies on Traefik, a reverse HTTP proxy, to expose HTTP endpoints for services implemented in a Docker Compose setup. Docker labels attached to each service define the routing rules. Traefik picks them up automatically when a container starts. Instead of maintaining a static configuration file to collect Prometheus metrics, we can apply the same approach with Grafana Alloy, making its configuration simpler.

* Traefik & Docker
* Metrics discovery with Alloy
* Discovering Docker containers
* Relabeling targets
* Scraping and forwarding
* Built-in exporters

# Traefik & Docker

Traefik listens for events on the Docker socket. Each service advertises its configuration through labels. For example, here is the Loki service in Akvorado:

```yaml
services:
  loki:
    # …
    expose:
      - 3100/tcp
    labels:
      - traefik.enable=true
      - traefik.http.routers.loki.rule=PathPrefix(`/loki`)
```

Once the container is healthy, Traefik creates a router forwarding requests matching `/loki` to its first exposed port. Colocating Traefik configuration with the service definition is attractive. How do we achieve the same for Prometheus metrics?

# Metrics discovery with Alloy

Grafana Alloy, a metrics collector that can scrape Prometheus endpoints, includes a `discovery.docker` component. Just like Traefik, it connects to the Docker socket.1 With a few relabeling rules, we can teach it to use Docker labels to locate and scrape metrics. We define three labels on each service:

* `metrics.enable` set to `true` enables metrics collection,
* `metrics.port` specifies the port exposing the Prometheus endpoint, and
* `metrics.path` specifies the path to the metrics endpoint.

If there is more than one exposed port, `metrics.port` is mandatory; otherwise it defaults to the only exposed port. The default value for `metrics.path` is `/metrics`.
The Loki service from earlier becomes:

```yaml
services:
  loki:
    # …
    expose:
      - 3100/tcp
    labels:
      - traefik.enable=true
      - traefik.http.routers.loki.rule=PathPrefix(`/loki`)
      - metrics.enable=true
      - metrics.path=/loki/metrics
```

Alloy’s configuration is split into four parts:

1. **discover** containers through the Docker socket,
2. **filter and relabel** targets using Docker labels,
3. **scrape** the matching endpoints, and
4. **forward** the metrics to Prometheus.

## Discovering Docker containers

The first building block discovers running containers:

```alloy
discovery.docker "docker" {
  host             = "unix:///var/run/docker.sock"
  refresh_interval = "30s"
  filter {
    name   = "label"
    values = ["com.docker.compose.project=akvorado"]
  }
}
```

This connects to the Docker socket and lists containers every 30 seconds.2 The `filter` block restricts discovery to containers belonging to the `akvorado` project, avoiding interference with unrelated containers on the same host. For each discovered container, Alloy produces a target with labels such as `__meta_docker_container_label_metrics_port` for the `metrics.port` Docker label.

## Relabeling targets

The relabeling step filters and transforms raw targets from Docker discovery into scrape targets. The first stage keeps only targets with `metrics.enable` set to `true`:

```alloy
discovery.relabel "prometheus" {
  targets = discovery.docker.docker.targets
  // Keep only targets with metrics.enable=true
  rule {
    source_labels = ["__meta_docker_container_label_metrics_enable"]
    regex         = `true`
    action        = "keep"
  }
  // …
}
```

The second stage overrides the discovered port when we define `metrics.port`:

```alloy
// When metrics.port is set, override __address__.
rule {
  source_labels = ["__address__", "__meta_docker_container_label_metrics_port"]
  regex         = `(.+):\d+;(.+)`
  target_label  = "__address__"
  replacement   = "$1:$2"
}
```

Next, we handle containers in `host` network mode.
When `__meta_docker_network_name` equals `host`, the address is rewritten to `host.docker.internal` instead of `localhost`:3

```alloy
// When host networking, override __address__ to host.docker.internal.
rule {
  source_labels = ["__meta_docker_container_label_metrics_port", "__meta_docker_network_name"]
  regex         = `(.+);host`
  target_label  = "__address__"
  replacement   = "host.docker.internal:$1"
}
```

The next stage derives the job name from the service name, stripping any numbered suffix (the first group must be non-greedy so the suffix lands in the optional group). The instance label is the address without the port:

```alloy
rule {
  source_labels = ["__meta_docker_container_label_com_docker_compose_service"]
  regex         = `(.+?)(?:-\d+)?`
  target_label  = "job"
}
rule {
  source_labels = ["__address__"]
  regex         = `(.+):\d+`
  target_label  = "instance"
}
```

If a container defines `metrics.path`, Alloy uses it as the metrics path. Otherwise, it defaults to `/metrics`:

```alloy
rule {
  source_labels = ["__meta_docker_container_label_metrics_path"]
  regex         = `(.+)`
  target_label  = "__metrics_path__"
}
rule {
  source_labels = ["__metrics_path__"]
  regex         = ""
  target_label  = "__metrics_path__"
  replacement   = "/metrics"
}
```

## Scraping and forwarding

With the targets properly relabeled, scraping and forwarding are straightforward:

```alloy
prometheus.scrape "docker" {
  targets         = discovery.relabel.prometheus.output
  forward_to      = [prometheus.remote_write.default.receiver]
  scrape_interval = "30s"
}

prometheus.remote_write "default" {
  endpoint {
    url = "http://prometheus:9090/api/v1/write"
  }
}
```

`prometheus.scrape` periodically fetches metrics from the discovered targets. `prometheus.remote_write` sends them to Prometheus.

# Built-in exporters

Some services do not expose a Prometheus endpoint. Redis and Kafka are common examples. Alloy ships built-in Prometheus exporters that query these services and expose metrics on their behalf.
```alloy
prometheus.exporter.redis "docker" {
  redis_addr = "redis:6379"
}

discovery.relabel "redis" {
  targets = prometheus.exporter.redis.docker.targets
  rule {
    target_label = "job"
    replacement  = "redis"
  }
}

prometheus.scrape "redis" {
  targets         = discovery.relabel.redis.output
  forward_to      = [prometheus.remote_write.default.receiver]
  scrape_interval = "30s"
}
```

The same pattern applies to Kafka:

```alloy
prometheus.exporter.kafka "docker" {
  kafka_uris = ["kafka:9092"]
}

discovery.relabel "kafka" {
  targets = prometheus.exporter.kafka.docker.targets
  rule {
    target_label = "job"
    replacement  = "kafka"
  }
}

prometheus.scrape "kafka" {
  targets         = discovery.relabel.kafka.output
  forward_to      = [prometheus.remote_write.default.receiver]
  scrape_interval = "30s"
}
```

Each exporter is a separate component with its own relabeling and scrape configuration. The `job` label is set explicitly since there is no Docker metadata to derive it from.

* * *

With this setup, adding metrics to a new service with a Prometheus endpoint is a few-label change in `docker-compose.yml`, just like adding a Traefik route. Alloy picks it up automatically. 🩺

* * *

1. Both Traefik and Alloy require access to the Docker socket, which grants root-level access to the host. A Docker socket proxy mitigates this by exposing only the read-only API endpoints needed for discovery. ↩︎
2. Unlike Traefik, which watches for events, Grafana Alloy polls the container list at regular intervals—a behavior inherited from Prometheus. ↩︎
3. The Alloy service needs `extra_hosts: ["host.docker.internal:host-gateway"]` in its definition. ↩︎
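As an aside not from the post: Prometheus-style relabeling, which Alloy inherits, can be hard to follow. The values of `source_labels` are joined with `;`, the regex is matched against that whole joined string, and `$1`/`$2` expand to capture groups. A minimal Python simulation of just the subset of semantics these rules rely on (the `relabel` helper and sample target are illustrative, not real Alloy API):

```python
import re

def relabel(labels, rules):
    """Simulate a small subset of Prometheus relabel_config semantics:
    join source label values with ';', anchor the regex against the whole
    string, support 'keep' and the default 'replace' action."""
    labels = dict(labels)
    for rule in rules:
        value = ";".join(labels.get(l, "") for l in rule["source_labels"])
        m = re.fullmatch(rule["regex"], value)
        if rule.get("action", "replace") == "keep":
            if m is None:
                return None  # target is dropped entirely
            continue
        if m is not None:
            # Prometheus uses $1, $2, ... ; re.Match.expand wants \1, \2, ...
            template = rule.get("replacement", "$1").replace("$", "\\")
            labels[rule["target_label"]] = m.expand(template)
    return labels

# Hypothetical target mirroring the metrics.port override from the post
target = {
    "__address__": "172.18.0.5:3100",
    "__meta_docker_container_label_metrics_enable": "true",
    "__meta_docker_container_label_metrics_port": "9095",
}
rules = [
    {"source_labels": ["__meta_docker_container_label_metrics_enable"],
     "regex": "true", "action": "keep"},
    {"source_labels": ["__address__", "__meta_docker_container_label_metrics_port"],
     "regex": r"(.+):\d+;(.+)", "target_label": "__address__",
     "replacement": "$1:$2"},
]
print(relabel(target, rules)["__address__"])  # → 172.18.0.5:9095
```

Running a target with `metrics.enable=false` through the same rules returns `None`, matching the behavior of the `keep` rule above: the target never becomes a scrape target.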
05.03.2026 21:03 👍 0 🔁 0 💬 0 📌 0
Dirk Eddelbuettel: RcppGSL 0.3.14 on CRAN: Maintenance

A new release 0.3.14 of RcppGSL is now on CRAN. The RcppGSL package provides an interface from R to the GNU GSL by relying on the Rcpp package. It has already been uploaded to Debian, and is also already available as a binary via r2u.

This release, the first in over three years, contains mostly maintenance changes. We polished the `fastLm` example implementation a little more, updated continuous integration as one does over such a long period, adopted the Authors@R convention, switched the (pre-made) pdf vignette to a new driver now provided by Rcpp, updated vignette references and URLs, and updated one call to `Rf_error` to aid in a Rcpp transition towards using only `Rcpp::stop`, which unwinds error conditions better. (Technically this was a false positive on `Rf_error`, but on the margin it was worth tickling this release after all this time.)

The NEWS entry follows:

> #### Changes in version 0.3.14 (2026-03-05)
>
> * Updated some internals of `fastLm` example, and regenerated `RcppExports.*` files
>
> * Several updates for continuous integration
>
> * Switched to using Authors@R
>
> * Replace `::Rf_error` with `(Rf_error)` in old example to aid Rcpp transition to `Rcpp::stop` (or this pass-through)
>
> * Vignette now uses the `Rcpp::asis` builder for pre-made pdfs
>
> * Vignette references have been updated, URLs prefer https and DOIs

Thanks to my CRANberries, there is also a diffstat report for this release. More information is on the RcppGSL page. Questions, comments etc should go to the issue tickets at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.
05.03.2026 19:03 👍 0 🔁 0 💬 0 📌 0
Sean Whitton: dgit-as-a-service retrospective We recently launched tag2upload, aka _cloud dgit_ or _dgit-as-a-service_. This was something of a culmination of work I’ve been doing since 2016 towards modernising Debian workflows, so I thought I’d write a short personal retrospective. When I started contributing to Debian in 2015, I was not impressed with how packages were represented in Git by most package maintainers, and wanted a pure Git workflow. I read a couple of Joey Hess’s blog posts on the matter, a rope ladder to the dgit treehouse and upstream git repositories and made a bug report against dgit hoping to tie some things together. The results of that early work were the git-deborig(1) program and the dgit-maint-merge(7) tutorial manpage. Starting with Joey’s workflow pointers, I developed a complete, pure Git workflow that I thought would be suitable for all package maintainers in Debian. It was certainly well-suited for my own packages. It took me a while to learn that there are packages for which this workflow is too simple. We now also have the dgit-maint-debrebase(7) workflow which uses git-debrebase, something which wasn’t invented until several years later. Where dgit-maint-merge(7) won’t do, you can use dgit-maint-debrebase(7), and still be doing pretty much pure Git. Here’s a full, recent guide to modernisation. The next most significant contribution of my own was the `push-source` subcommand for dgit. `dgit push` required a preexisting `.changes` file produced from the working tree. I wanted to make `dgit push-source` prepare that `.changes` file for you, but _also_ not use the working tree, instead consulting `HEAD`. The idea was that you were doing a git push – which doesn’t care about the working tree – direct to the Debian archive, or as close as we could get. I implemented that at DebConf18 in Taiwan, I think, with Ian, and we also did a talk on git-debrebase. 
We ended up having to change it to look at the working tree in addition to `HEAD` to make it work as well as possible, but I think that the idea of a command which was like doing a Git push direct to the archive was perhaps foundational for us later wanting to develop tag2upload. Indeed, while tag2upload’s client-side tool git-debpush does look at the working tree, it doesn’t do so in a way that is essential to its operation. tag2upload is `dgit push-source`-as-a-service. And finally we come to tag2upload, a system Ian and I designed in 2019 during a two-person sprint at his place in Cambridge, while I was visiting the UK from Arizona. With tag2upload, appropriately authorised Debian package maintainers can upload to Debian with only pure Git operations – namely, making and pushing a signed Git tag to Debian’s GitLab instance. Although we had a solid prototype in 2019, we only finally launched it last month, February 2026. This was mostly due to political delays, but also because we have put in a lot of hours making it better in various ways. Looking back, one thing that seems notable to me is that the core elements of the pure Git workflows haven’t changed much at all. Working out all the details of dgit-maint-merge(7), designing and writing git-debrebase (Ian’s work), and then working out all the details of dgit-maint-debrebase(7), are the important parts, to me. The rest is mostly just large amounts of compatibility code. git-debrebase and dgit-maint-debrebase(7) are very novel but dgit-maint-merge(7) is mostly just an extrapolation of Joey’s thoughts _from 13 years ago_. And yet, adoption of these workflows remains low. People prefer to use what they are used to using, even if the workflows have significant inconveniences. That’s completely understandable; I’m really interested in good workflows, but most other contributors care less about it. But you would expect enough newcomers to have arrived in 13 years that the new workflows would have a higher uptake. 
That is, packages maintained by contributors that got involved after these workflows became available would be maintained using newer workflows, at least. But the inertia seems to be too strong even for that. Instead, new contributors used to working purely out of Git are told they need to learn Debian’s strange ways of representing things, tarballs and all. It doesn’t have to be that way. We hope that tag2upload will make the pure Git workflows seem more appealing to people.
05.03.2026 17:03 👍 0 🔁 0 💬 0 📌 0
Sean Whitton: Southern Biscuits with British ingredients

I miss the US more and more, and have recently been trying to perfect Southern Biscuits using British ingredients. It took me eight or nine tries before I was consistently getting good results. Here is my recipe.

## Ingredients

* 190g plain flour
* 60g strong white bread flour
* 4 tsp baking powder
* ¼ tsp bicarbonate of soda
* 1 tsp cream of tartar (optional)
* 1 tsp salt
* 100g unsalted butter
* 180ml buttermilk, chilled
  * If your buttermilk is thicker than the consistency of ordinary milk, you’ll need around 200ml.
* extra buttermilk for brushing

## Method

1. Slice and then chill the butter in the freezer for at least fifteen minutes.
2. Preheat oven to 220°C with the fan turned off.
3. Twice sieve together the flours, leaveners and salt. Some salt may not go through the sieve; just tip it back into the bowl.
4. Cut the cold butter slices into the flour with a pastry blender until the mixture resembles _coarse_ crumbs: some small lumps of fat remaining is desirable. In particular, the fine crumbs you are looking for when making British scones are not wanted here. Rubbing in with fingertips just won’t do; biscuits demand keeping things cold even more than shortcrust pastry does.
5. Make a well in the centre, pour in the buttermilk, and stir with a metal spoon until the dough comes together and pulls away from the sides of the bowl. Avoid overmixing, but I’ve found that so long as the ingredients are cold, you don’t have to be too gentle at this stage and can make sure all the crumbs are mixed in.
6. Flour your hands, turn the dough onto a floured work surface, and pat it together into a rectangle. Some suggest dusting the top of the dough with flour, too, here.
7. Fold the dough in half, then gather any crumbs and pat it back into the same shape. Turn ninety degrees and do the same again, until you have completed a total of eight folds, two in each cardinal direction. The dough should now be a little springy.
8. Roll to about ½ inch thick.
9. Cut out biscuits. If using a round cutter, do not twist it, as that seals the edges of the biscuits and so spoils the layering.
10. Transfer to a baking sheet, placed close together (this helps them rise). Flour your thumb and use it to press an indent into the top of each biscuit (this helps them rise straight), then brush with buttermilk.
11. Bake until flaky and golden brown: about fifteen minutes.

## Gravy

It turns out that the “pepper gravy” one commonly has with biscuits is just a white/béchamel sauce made with lots of black pepper. I haven’t got a recipe I really like for this yet. Better is a “sausage gravy”; again this has a white sauce as its base, I believe. I have a vegetarian recipe for this to try at some point.

## Variations

* These biscuits do come out fluffy but not so flaky. For that you can try using lard instead of butter, if you’re not vegetarian (vegetable shortening is hard to find here).
* If you don’t have a pastry blender and don’t want to buy one, you can try not slicing the butter and instead coarsely grating it into the flour straight out of the freezer.
* An alternative to folding is cutting and piling the layers.
* You can try rolling out to 1–1½ inches thick.
* Instead of cutting out biscuits you can just slice the whole piece of dough into equal pieces. An advantage of this is that you don’t have to re-roll, which also spoils the layering.
* Instead of brushing with buttermilk, you can take them out after they’ve started to rise but before they’ve browned, brush them with melted butter and put them back in.

## Notes

* I’ve had more success with Dale Farm’s buttermilk than Sainsbury’s own. The former is much runnier.
* Southern culture calls for biscuits to be made the size of cat’s heads.
* Bleached flour is apparently usual in the South, but is illegal(!) here. Apparently bleaching can have some effect on the development of the gluten, which would affect the texture.
* British plain flour is made from soft wheat and has a lower percentage of protein/gluten, while American all-purpose flour is often(?) made from harder wheat and has more protein. In this recipe I mix plain and strong white flour, in a ratio of 3:1, to emulate American all-purpose flour. I am not sure why this works best. In the South they have soft wheats too, and lower protein percentages. The famous White Lily flour is 9%. (Apparently you can mix US cake flour and US all-purpose flour in a ratio of 1:1 to achieve that; in the UK, Shipton Mill sell a “soft cake and pastry flour” which has been recommended to me as similar.) This would suggest that British plain flour ought to be closer to Southern flour than the standard flour available in most of the US. But my experience has been that the biscuits taste better with the 3:1 plain and strong white mix. Possibly Southerners would disprefer them. I got some feedback that good biscuits are about texture and moistness, not flavour.
* Baking powder in the US is usually double-acting but ours is always single-acting, so we need double quantities of it.
05.03.2026 17:03 👍 0 🔁 0 💬 0 📌 0
Jonathan Dowland: More lava lamps Mathmos had a sale on spare Lava lamp bottles around Christmas, so I bought a couple of new-to-me colour combinations. The lamp I have came with orange wax in purple liquid, which gives a strong red glow in a dark room. I bought blue wax in purple liquid, which I think looks fantastic and works really nicely with my Rob Sheridan print. The other one I bought was pink in clear, which is nice, but I think the coloured liquids add a lot to the tone of lighting in a room. Recently, UK vid-blogger Techmoan did some really nice videos about Mathmos lava lamps: Best Lava Lamp? and LAVA LAMPS Giant, Mini & Neo.
04.03.2026 17:00 👍 0 🔁 0 💬 0 📌 0
Dirk Eddelbuettel: tidyCpp 0.0.9 on CRAN: More (forced) Maintenance

Another maintenance release of the tidyCpp package arrived on CRAN this morning. The package offers a clean C++ layer (as well as one small C++ helper class) on top of the C API for R which aims to make use of this robust (if awkward) C API a little easier and more consistent. See the vignette for motivating examples.

This release follows a similar release in November and had its hand forced by rather abrupt overnight changes in R-devel, this time the removal of `VECTOR_PTR` in [this commit]. The release also contains changes accumulated since the last release (including some kindly contributed by Ivan), and those are signs that the R Core team can do more coordinated release management when they try a little harder. Changes are summarized in the NEWS entry that follows.

> #### Changes in tidyCpp version 0.0.9 (2026-03-03)
>
> * Several vignette typos have been corrected (#4 addressing #3)
>
> * A badge for r-universe has been added to the README.md
>
> * The vignette is now served via GitHub Pages and that version is referenced in the README.
>
> * Two entry points reintroduced and redefined using permitted R API function (Ivan Krylov in #5).
>
> * Another entry has been removed to match R-devel API changes.
>
> * Six new attributes helpers have been added for R 4.6.0 or later.
>
> * `VECTOR_PTR_RO(x)` replaces the removed `VECTOR_PTR`; a warning or deprecation period would have been nice here.

Thanks to my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.
04.03.2026 14:59 👍 0 🔁 0 💬 0 📌 0
Michael Ablassmeier: pbsindex - file backup index

If you take backups using the proxmox-backup-client and you wonder which backup may include a specific file, the only way to find out is to mount the backup and search for the files. For regular file backups, the Proxmox Backup Server frontend provides a pcat1 file for download, whose binary format is somewhat undocumented but actually includes a listing of the files backed up. A Proxmox Backup Server datastore includes the same pcat1 file as a blob index (.pcat1.didx). So to actually be able to tell which backup contains which files, one needs to:

1. Open the .pcat1.didx file and find out the required blobs, see the format documentation.
2. Reconstruct the .pcat1 file from the blobs.
3. Parse the pcat1 file and output the directory listing.

I’ve implemented this in pbsindex, which lets you create a central file index for your backups by scanning a complete PBS datastore. Let's say you want to have a file listing for a specific backup, use:

```
pbsindex --chunk-dir /backup/.chunks/ /backup/host/vm178/2026-03-02T10:47:57Z/catalog.pcat1.didx
didx uuid=7e4086a9-4432-4184-a21f-0aeec2b2de93 ctime=2026-03-02T10:47:57Z chunks=2 total_size=1037386
chunk[0] start=0 end=344652 size=344652 digest=af3851419f5e74fbb4d7ca6ac3bc7c5cbbdb7c03d3cb489d57742ea717972224
chunk[1] start=344652 end=1037386 size=692734 digest=e400b13522df02641c2d9934c3880ae78ebb397c66f9b4cf3b931d309da1a7cc
d ./usr.pxar.didx
d ./usr.pxar.didx/bin
l ./usr.pxar.didx/bin/Mail
f ./usr.pxar.didx/bin/[ size=55720 mtime=2025-06-04T15:14:05Z
f ./usr.pxar.didx/bin/aa-enabled size=18672 mtime=2025-04-10T15:06:25Z
f ./usr.pxar.didx/bin/aa-exec size=18672 mtime=2025-04-10T15:06:25Z
f ./usr.pxar.didx/bin/aa-features-abi size=18664 mtime=2025-04-10T15:06:25Z
l ./usr.pxar.didx/bin/apropos
```

It also lets you scan a complete datastore for all existing .pcat1.didx files and store the directory listings in a SQLite database for easier searching.
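Step 1 above can be sketched in a few lines of Python. This is an illustration, not code from pbsindex; the layout follows my reading of the PBS file-format documentation (a fixed 4096-byte header containing an 8-byte magic, a 16-byte UUID, a little-endian i64 ctime in epoch seconds and a 32-byte index checksum, followed by 40-byte entries of a little-endian u64 end offset plus a SHA-256 chunk digest), so double-check the offsets against the documentation before relying on it:

```python
import struct
from datetime import datetime, timezone

HEADER_SIZE = 4096  # fixed-size .didx header, per the PBS format docs

def parse_didx(path):
    """Parse a .didx dynamic index into its header fields and chunk list.

    Assumed layout: bytes 0-7 magic, 8-23 UUID, 24-31 little-endian i64
    ctime (epoch seconds), 32-63 index checksum, padding up to 4096 bytes;
    then repeated 40-byte entries: u64 LE end offset + 32-byte digest.
    """
    with open(path, "rb") as fh:
        header = fh.read(HEADER_SIZE)
        uuid = header[8:24].hex()
        (ctime,) = struct.unpack_from("<q", header, 24)
        chunks, start = [], 0
        while True:
            entry = fh.read(40)
            if len(entry) < 40:
                break
            (end,) = struct.unpack_from("<Q", entry, 0)
            chunks.append({"start": start, "end": end,
                           "size": end - start,
                           "digest": entry[8:40].hex()})
            start = end  # offsets are cumulative end positions
    return {"uuid": uuid,
            "ctime": datetime.fromtimestamp(ctime, tz=timezone.utc),
            "chunks": chunks}
```

Each digest then names a chunk file which (as I understand the datastore layout) lives under `.chunks/<first four hex digits>/<digest>` and may be compressed and/or encrypted; undoing that is what step 2, reconstructing the .pcat1 blob, has to handle.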
03.03.2026 12:57 👍 0 🔁 0 💬 0 📌 0
Matthew Garrett: To update blobs or not to update blobs A lot of hardware runs non-free software. Sometimes that non-free software is in ROM. Sometimes it’s in flash. Sometimes it’s not stored on the device at all, it’s pushed into it at runtime by another piece of hardware or by the operating system. We typically refer to this software as “firmware” to differentiate it from the software run on the CPU after the OS has started1, but a lot of it (and, these days, probably most of it) is software written in C or some other systems programming language and targeting Arm or RISC-V or maybe MIPS and even sometimes x862. There’s no real distinction between it and any other bit of software you run, except it’s generally not run within the context of the OS3. Anyway. It’s code. I’m going to simplify things here and stop using the words “software” or “firmware” and just say “code” instead, because that way we don’t need to worry about semantics. A fundamental problem for free software enthusiasts is that almost all of the code we’re talking about here is non-free. In some cases, it’s cryptographically signed in a way that makes it difficult or impossible to replace it with free code. In some cases it’s even encrypted, such that even examining the code is impossible. But because it’s code, sometimes the vendor responsible for it will provide updates, and now you get to choose whether or not to apply those updates. I’m now going to present some things to consider. These are not in any particular order and are not intended to form any sort of argument in themselves, but are representative of the opinions you will get from various people and I would like you to read these, think about them, and come to your own set of opinions before I tell you what my opinion is. THINGS TO CONSIDER * Does this blob do what it claims to do? Does it suddenly introduce functionality you don’t want? Does it introduce security flaws? Does it introduce deliberate backdoors? 
Does it make your life better or worse? * You’re almost certainly being provided with a blob of compiled code, with no source code available. You can’t just diff the source files, satisfy yourself that they’re fine, and then install them. To be fair, even though you (as someone reading this) are probably more capable of doing that than the average human, you’re likely not doing that even if you **are** capable because you’re also likely installing kernel upgrades that contain vast quantities of code beyond your ability to understand[4]. We don’t rely on our personal ability, we rely on the ability of those around us to do that validation, and we rely on an existing (possibly transitive) trust relationship with those involved. You don’t know the people who created this blob, you likely don’t know people who do know the people who created this blob, and these people probably don’t have an online presence that gives you more insight. Why should you trust them? * If it’s in ROM and it turns out to be hostile then nobody can fix it ever * The people creating these blobs largely work for the same company that built the hardware in the first place. When they built that hardware they could have backdoored it in any number of ways. And if the hardware has a built-in copy of the code it runs, why do you trust that that copy isn’t backdoored? Maybe it isn’t and updates _would_ introduce a backdoor, but in that case if you buy new hardware that runs new code aren’t you putting yourself at the same risk? * Designing hardware where you’re able to provide updated code and nobody else can is just a dick move[5]. We shouldn’t encourage vendors who do that. * Humans are bad at writing code, and code running on ancillary hardware is no exception. It contains bugs. These bugs are sometimes very bad. This paper describes a set of vulnerabilities identified in code running on SSDs that made it possible to bypass the encryption. The SSD vendors released updates that fixed these issues. 
If the code couldn’t be replaced then anyone relying on those security features would need to replace the hardware. * Even if blobs are signed and can’t easily be replaced, the ones that aren’t encrypted can still be examined. The SSD vulnerabilities above were identifiable because researchers were able to reverse engineer the updates. It can be more annoying to audit binary code than source code, but it’s still possible. * Vulnerabilities in code running on other hardware can still compromise the OS. If someone can compromise the code running on your wifi card then if you don’t have a strong IOMMU setup they’re going to be able to overwrite your running OS. * Replacing one non-free blob with another non-free blob increases the total number of non-free blobs involved in the whole system, but doesn’t increase the number that are actually executing at any point in time. Ok we’re done with the things to consider. Please spend a few seconds thinking about what the tradeoffs are here and what your feelings are. Proceed when ready. I trust my CPU vendor. I don’t trust my CPU vendor because I want to, I trust my CPU vendor because I have no choice. I don’t think it’s likely that my CPU vendor has designed a CPU that identifies when I’m generating cryptographic keys and biases the RNG output so my keys are significantly weaker than they look, but it’s not literally impossible. I generate keys on it anyway, because what choice do I have? At some point I will buy a new laptop because Electron will no longer fit in 32GB of RAM and I will have to make the same affirmation of trust, because the alternative is that I just don’t have a computer. And in any case, I will be communicating with other people who generated their keys on CPUs I have no control over, and I will also be relying on them to be trustworthy. If I refuse to trust my CPU then I don’t get to computer, and if I don’t get to computer then I will be sad. I suspect I’m not alone here. 
Why would I install a code update on my CPU when my CPU’s job is to run my code in the first place? Because it turns out that CPUs are complicated and messy and they have their own bugs, and those bugs may be functional (for example, some performance counter functionality was broken on Sandybridge at release, and was then fixed with a microcode blob update) and if you update it your hardware works better. Or it might be that you’re running a CPU with speculative execution bugs and there’s a microcode update that provides a mitigation for that even if your CPU is slower when you enable it, but at least now you can run virtual machines without code in those virtual machines being able to reach outside the hypervisor boundary and extract secrets from other contexts. When it’s put that way, why would I _not_ install the update? And the straightforward answer is that theoretically it could include new code that doesn’t act in my interests, either deliberately or not. And, yes, this is theoretically possible. Of course, if you don’t trust your CPU vendor, why are you buying CPUs from them, but well maybe they’ve been corrupted (in which case don’t buy any new CPUs from them either) or maybe they’ve just introduced a new vulnerability by accident, and also you’re in a position to determine whether the alleged security improvements matter to you at all. Do you care about speculative execution attacks if all software running on your system is trustworthy? Probably not! Do you need to update a blob that fixes something you don’t care about and which might introduce some sort of vulnerability? Seems like no! But there’s a difference between a recommendation for a fully informed device owner who has a full understanding of threats, and a recommendation for an average user who just wants their computer to work and to not be ransomwared. 
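As a practical aside: on x86 Linux you can check which microcode revision is currently loaded before deciding whether a vendor update is worth applying, since the kernel reports it in /proc/cpuinfo. A minimal sketch (the helper function name is mine, not a standard tool):

```shell
# Extract the "microcode" field from cpuinfo-formatted text on stdin.
# The field exists on x86 Linux; other architectures won't have it.
cpuinfo_microcode() {
  awk -F': *' '/^microcode/ { print $2; exit }'
}

# Show the revision the kernel reports for the boot CPU, if available.
if [ -r /proc/cpuinfo ]; then
  cpuinfo_microcode < /proc/cpuinfo
fi
```

Comparing that revision against your distribution's microcode package changelog tells you whether an update would actually change anything.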
A code update on a wifi card may introduce a backdoor, or it may fix the ability for someone to compromise your machine with a hostile access point. Most people are just not going to be in a position to figure out which is more likely, and there’s no single answer that’s correct for everyone. What we _do_ know is that where vulnerabilities in this sort of code have been discovered, updates have tended to fix them - but nobody has flagged such an update as a real-world vector for system compromise. My personal opinion? You should make your own mind up, but also you shouldn’t impose that choice on others, because your threat model is not necessarily their threat model. Code updates are a reasonable default, but they shouldn’t be unilaterally imposed, and nor should they be blocked outright. And the best way to shift the balance of power away from vendors who insist on distributing non-free blobs is to demonstrate the benefits gained from them being free - a vendor who ships free code on their system enables their customers to improve their code and enable new functionality and make their hardware more attractive. It’s impossible to say with absolute certainty that your security will be improved by installing code blobs. It’s also impossible to say with absolute certainty that it won’t. So far evidence tends to support the idea that most updates that claim to fix security issues do, and there’s not a lot of evidence to support the idea that updates add new backdoors. Overall I’d say that providing the updates is likely the right default for most users - and that that should never be strongly enforced, because people should be allowed to define their own security model, and whatever set of threats I’m worried about, someone else may have a good reason to focus on different ones. * * * 1. 
Code that runs on the CPU _before_ the OS is still usually described as firmware - UEFI is firmware even though it’s executing on the CPU, which should give a strong indication that the difference between “firmware” and “software” is largely arbitrary
2. And, obviously, 8051
3. Because UEFI makes everything more complicated, UEFI makes this more complicated. Triggering a UEFI runtime service involves your OS jumping into firmware code at runtime, in the same context as the OS kernel. Sometimes this will trigger a jump into System Management Mode, but other times it won’t, and it’s just your kernel executing code that got dumped into RAM when your system booted.
4. _I_ don’t understand most of the diff between one kernel version and the next, and I don’t have time to read all of it either.
5. There’s a bunch of reasons to do this, the most reasonable of which is probably not wanting customers to replace the code and break their hardware and deal with the support overhead of that, but not being able to replace code running on hardware I own is always going to be an affront to me.
03.03.2026 04:57
Isoken Ibizugbe: Wrapping Up My Outreachy Internship at Debian Twelve weeks ago, I stepped into the Debian ecosystem as an Outreachy intern with a curiosity for Quality Assurance. It feels like just yesterday, and time has flown by so fast! Now, I am wrapping up that journey, not just with a completed project, but with improved technical reasoning. I have learned how to use documentation to understand a complex project, how to be a good collaborator, and that learning is a continuous process. These experiences have helped me grow much more confident in my skills as an engineer. ### **My Achievements** As I close this chapter, I am leaving a permanent “Proof-of-Work” in the Debian repositories: * **Full Test Coverage:** I automated apps_startstop tests for Cinnamon, LXQt, and XFCE, covering both Live images and Netinst installations. * **Synergy:** I used symbolic links and a single Perl script to handle common application tests across different desktops, which reduces code redundancy. * **The Contributor Style Guide:** I created a guide for future contributors to make documentation clearer and reviews faster, helping to reduce the burden on reviewers. ### **Final Month: Wrap Up** In this final month, things became easier as my understanding of the project grew. I focused on stability and finishing my remaining tasks: * I spent time exploring different QEMU video options like VGA, qxl, and virtio on the KDE desktop environment. This was important to ensure screen rendering remained stable so that our “needles” (visual test markers) wouldn’t fail because of minor glitches. * I successfully moved from familiarization to test automation for the XFCE desktop. This included writing “prepare” steps and creating the visual needles needed to make the tests reliable. * One of my final challenges was the app launcher function. Originally, my code used else if blocks for each desktop. 
I proposed a unified solution, but hit a blocker: XFCE has two ways to launch apps (App Finder and the Application Menu). Because using different methods sometimes caused failures, I chose to use the application menu button across the board. ### **What’s Next?** I don’t want my journey with Debian to end here. I plan to stay involved in the community and extend these same tests to the **LXDE** desktop to complete the coverage for all major Debian desktop environments. I am excited to keep exploring and learning more about the Debian ecosystem. ### **Thank You** This journey wouldn’t have been possible without the steady guidance of my mentors: **Tassia Camoes Araujo, Roland Clobus, and Philip Hands.** Thank you for teaching me that in the world of Free and Open Source Software (FOSS), your voice and your code are equally important. To my fellow intern **Hellen** and the entire Outreachy community, thank you for the shared learning and support. It has been an incredible 12 weeks.
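The symlink-and-single-script approach from the “Synergy” item can be sketched roughly like this (the directory layout and file names are illustrative, not the actual openQA repository structure):

```shell
# One shared Perl test script, referenced from each desktop's test
# directory via symbolic links, so common logic is written only once.
mkdir -p tests/common tests/cinnamon tests/lxqt tests/xfce

cat > tests/common/apps_startstop.pm <<'EOF'
# shared Perl test logic for starting/stopping applications
EOF

for desktop in cinnamon lxqt xfce; do
  ln -sf ../common/apps_startstop.pm "tests/$desktop/apps_startstop.pm"
done
```

A fix to the shared script then propagates to every desktop automatically, which is the redundancy reduction described above.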
02.03.2026 22:56
Hellen Chemtai: The Last Week of My Journey as an Outreachy Intern at Debian OpenQA Hello world. I’m Hellen Chemtai, an intern at Outreachy working with the Debian OpenQA team on image testing. This is the final week of the internship. This is just a start for me, as I will continue contributing to the community. I am grateful for the opportunity to work with the Debian OpenQA team as an Outreachy intern. I have had the most welcoming team in Open Source. #### My tasks and contributions I have been working on network install and live image tasks: 1. Install live installers (Ventoy, Rufus and BalenaEtcher) and test the live USBs made by these installers. – These tasks were completed and are running on the server. 2. Use different file systems (btrfs, jfs, xfs) for installation and then test. – This task was completed and is running on the server. It still needs some changes to ensure automation for each file system. 3. Use speech synthesis to capture all audio. – This task is almost complete. We are testing to ensure no errors will occur on the server. 4. Publish temporary assets. – This task is not a priority and will be worked on once we’ve wrapped up the other tasks. I have enjoyed working on testing both live images and net install images. This was one of the goals that I had highlighted in my application. I have also been working with fellow contributors on this project. #### My team As I stated, I have had the most welcoming team in Open Source. They have been working with me and ensuring I have the proper resources for contributions. I am grateful to my three mentors and the work they have done. 1. Roland Clobus is a project maintainer. He is in charge of code review, pointing out what we need to learn, and working on technical issues. He considers every solution we contributors think of and will go into detailed explanations for any issue we have. 2. Tassia Camoes is a community coordinator. 
She is in charge of communication, coordination between contributors, and networking within the community. She onboarded us and introduced us to the community. 3. Philip Hands is also a project maintainer. He is in charge of technical code, ensuring sources work, and also working on the server and its issues. He also gives detailed explanations for any issue we have. I wish to learn more with the team. On my to-do list, I would like to gain more skills on ports and packages so as to contribute more technically. I have enjoyed working on the tasks and learning. #### The impact of this project The automated tests done by the team help the community in some of the following ways: 1. Check the installation and system behavior of operating system image versions 2. Help developers and users of operating systems know which versions of applications, e.g. live installers, run well on the system 3. Check for any issues during installation and running of operating systems and their flavors I have also networked with the greater community and other contributors. During the contribution phase, I found many friends who were learning together with me. I hope to continue networking with the community and continue learning.
02.03.2026 18:56
Valhalla's Things: A Pen Case (or a Few) Posted on March 2, 2026 Tags: madeof:atoms, FreeSoftWear, craft:sewing For my birthday, I’ve bought myself a fancy new expensive[1] fountain pen. Such a fancy pen, of course, requires a suitable case: I couldn’t use the failed prototype of a case I’ve been keeping my Preppys in, so I had to get out the nice vegetable tanned leather… Yeah, nope, I don’t have that (yet). I got out the latex and cardboard material that is sold as a (cheaper) leather substitute, doesn’t look like leather at all, but is quite nice (and easy) to work with. The project is not vegan anyway, because I used waxed linen thread, waxing it myself with a lot of very nicely smelling beeswax. I got the measurements[2] from the less failed prototype where I keep my desktop pens, and this time I made a proper pattern I could share online, under the usual Free Culture license. From the width of the material I could conveniently cut two cases, so that’s what I did, started sewing the first one, realized that I got the order of stitching wrong, and also that if I used light blue thread instead of the black one it would look nice, and be easier to see in the pictures for the published pattern, started sewing the second one, and kept alternating between the two, depending on the availability of light for taking pictures. One of the two took the place of my desktop one, where I had one more pen than slots, and one of the old prototypes was moved to keep my bedside pen, and the other new case was used for the new pen in my handbag, together with a Preppy, and now I have a free slot and you can see how this is going to go wrong, right? :D * * *
1. 16€, plus a 9€ converter, and another 6€ pen to get the EF nib from, since it wasn’t available for the expensive pen.
2. I have them written down somewhere. I couldn’t find them. So I measured the real thing, with some approximation.
02.03.2026 16:56
Ben Hutchings: FOSS activity in February 2026

* Debian packages:
  * firmware-free:
    * Bugs:
      * closed #890601: firmware-linux-free uses prebuilt blobs instead of building from source
    * Uploads:
      * uploaded version 20241210-3 to unstable
  * firmware-nonfree:
    * Bugs:
      * closed #481234: firmware-nonfree: Include firmware for p54 driver
      * closed #484177: firmware-nonfree: keyspan
      * closed #534379: [firmware-nonfree] Please consider including dvb-usb-af9015.fw
      * closed #548745: firmware-linux: Fix licence and include edgeport firmware
      * closed #588142: Add r8192u_usb (aka rtl8192u) firmware
      * closed #597897: RFP: alsa-firmware -- firmware binaries used by each alsa-firmware-loader program
      * closed #999485: Please add brcmfmac43456-sdio.* files as it’s not just used in RPi devices
      * opened and closed #1126794: Undistributable file under qcom/qdu100
      * closed #1126846: Qualcomm AudioReach topology files are covered by separate licence
      * replied to #1126896: firmware-nvidia-graphics: Cannot upgrade from bookworm-backports to trixie-backports
    * Merge requests:
      * closed !128: Draft: Add Provides: based ABI versioning mechanism
      * merged !134: Update to 20251125
      * reviewed and merged !135: Drop DSP firmware, migrated to hexagon-dsp-binaries source
      * reviewed and merged !136: debian/copyright: correct licence issues
      * opened and closed !137: d/copyright, qcom-soc: Exclude undistributable QDU100 firmware
      * opened and merged !138: Update to 20260110
      * opened and merged !139: Update to 20260221
    * Uploads:
      * uploaded version 20251111-1~bpo13+1 to trixie-backports
      * uploaded version 20251125-1 to unstable
      * uploaded version 20260110-1 to unstable
  * hexagon-dsp-binaries:
    * Bugs:
      * opened #1129001: Missing binaries - should this package use XS-Autobuild?
  * initramfs-tools:
    * Bugs:
      * closed #1126611: mkinitramfs: failed to determine device for /
    * Merge requests:
      * merged !191: tests fail on arm64 because they call qemu-system-arm64
  * iptables:
    * Bugs:
      * replied to #1128561: iptables: virsh net-start no longer works: Failed to run firewall command iptables -w --table filter --list-rules
  * ktls-utils:
    * Merge requests:
      * merged !3: d/t/test-common: Move inclusion of extensions when signing the certificate
  * libvirt:
    * Bugs:
      * replied to #1124549: libvirt passes invalid flags for network interface deletion
  * linux:
    * Bugs:
      * replied to #1121192: kworker: Events_unbound, kworker processes, continually using CPU.
      * replied to #1126710: linux-image-6.18.5+deb14-amd64: unable to mount existing XFS V4 filesystem because kernel CONFIG_XFS_SUPPORT_V4 is not set
      * replied to #1128397: linux-image-6.18.10+deb14-amd64: open(/proc/$pid/maps) is empty after $pid exec()s, unless you read a partial line from the fd before, in which case it has the rest of the line only
      * replied to and closed #1128567: linux-image-6.18.5+deb13-amd64: amdgpu.dc=0 causes Xorg 1:7.7+24 error “no screens found”
      * closed #1129029: Bug on VirtualBox and KVM conflict kernel 6.12 (Debian 12)
    * Merge requests:
      * reviewed !1682: Unsplit configs for some kernel architectures
      * reviewed !1821: riscv64 config update for linux 6.19
      * reviewed and merged !1824: db-mok: Remove unused function
      * opened !1831: CI: Update build job to work after another common pipeline change
    * Uploads:
      * (LTS) uploaded version 5.10.249-1 to bullseye-security
      * uploaded version 6.12.63-1~bpo12+1 to bookworm-backports
      * uploaded version 6.12.69-1~bpo12+1 to bookworm-backports
      * uploaded version 6.12.73-1~bpo12+1 to bookworm-backports
      * uploaded version 6.18.12-1~bpo13+1 to trixie-backports
      * uploaded version 6.18.5-1~bpo13+1 to trixie-backports
      * uploaded version 6.18.9-1~bpo13+1 to trixie-backports
      * (LTS) updated the bullseye-security branch to 5.10.251, but did not upload it
  * (LTS) linux-6.1:
    * Uploads:
      * uploaded version 6.1.162-1~deb11u1 to bullseye-security
  * linux-base:
    * Bugs:
      * closed #1128355: linux-base: indirectly missing perl dependency?
  * nfs-utils:
    * Merge requests:
      * reviewed and merged !36: Drop installation of blkmapd and nfs-blkmap.service systemd service
  * wireless-regdb:
    * Bugs:
      * replied to and closed #1104022: wireless-regdb: Consider importing setregdomain and udev rule from Fedora
      * closed #1122785: wireless-regdb: Please remove/replace usage of dh_movetousr
      * closed #1126431: wireless-regdb: Unnecessary Build-Depends: python3-m2crypto
    * Uploads:
      * uploaded version 2026.02.04-1 to unstable
      * uploaded version 2026.02.04-1~deb12u1 to bookworm
      * uploaded version 2026.02.04-1~deb13u1 to trixie
      * (LTS) updated the bullseye-security branch to 2026.02.04-1, but did not upload it
* Debian non-package bugs:
  * release.debian.org:
    * opened #1128507: trixie-pu: package wireless-regdb/2026.02.04-1~deb13u1
    * opened #1128510: bookworm-pu: package wireless-regdb/2026.02.04-1~deb12u1
* Mailing lists:
  * debian-kernel:
    * posted Agenda items for kernel-team meeting on 2026-02-04
    * posted Agenda items for kernel-team meeting on 2026-02-25
    * (LTS) replied to Discrepancies between Commits list in changelog of debian and upstream linux git repo.
    * (LTS) replied to [Pkg-libvirt-maintainers] Processed: retitle 1124549 to libvirt passes invalid flags for network interface deletion …, tagging 1124549
    * replied to linux 7.0
  * debian-lts-announce:
    * posted [SECURITY] [DLA 4475-1] linux security update
    * posted [SECURITY] [DLA 4476-1] linux-6.1 security update
  * klibc:
    * replied to [PATCH 1/2] [klibc] explicitly close arm64 syscall stub generator output
    * replied to [PATCH] [klibc] fix arm stub alignment
    * replied to [PATCH] [klibc] remove unneeded syscalls.mk dependencies
  * linux-hwmon:
    * replied to [PATCH] hwmon: (max16065) Use READ/WRITE_ONCE to avoid compiler optimization induced race
  * linux-wireless:
    * posted [PATCH] wireless-regdb: Fix regulatory.bin signing with new M2Crypto
    * posted [PATCH] wireless-regdb: Replace M2Crypto with cryptography package
  * platform-driver-x86:
    * replied to [PATCH] platform/x86: hp-bioscfg: Support allocations of larger data
  * stable:
    * (LTS) replied to Please apply commit 9990ddf47d41 (“net: tunnel: make skb_vlan_inet_prepare() return drop reasons”) down to 6.1.y at least
    * (LTS) reviewed and replied to various patches for 5.10 … … …
    * (LTS) posted [PATCH 5.10,5.15] ip6_tunnel: Fix usage of skb_vlan_inet_prepare()
    * replied to [PATCH 6.12 519/567] gpiolib: acpi: Move quirks to a separate file
02.03.2026 16:56
Benjamin Mako Hill: Pronunciation Had a discussion about how to pronounce the name of Google’s chatbot. Turns out, we were all wrong.
01.03.2026 20:54
Junichi Uekawa: The next Debconf happens in Japan. The next Debconf happens in Japan. Great news. Feels like we came a long way, but I didn't personally do much, I just made the first moves.
01.03.2026 04:51
Daniel Baumann: Debian Fast Forward: An alternative backports repository The Debian project releases a new `stable` version of its Linux distribution approximately every two years. During its lifetime, a `stable` release usually gets security updates only, but in general no feature updates. For some packages it is desirable to get feature updates earlier than with the next `stable` release. Some new packages included in Debian after the initial release of a `stable` distribution are desirable for `stable` too. Both use-cases can be solved by recompiling the newer version of a package from `testing/unstable` on `stable` (aka backporting). Packages are backported together with only the minimal amount of required build-depends or depends not already fulfilled in `stable` (if any), and without any changes unless required to fix building on `stable` (if needed). There are official Debian Backports available, as well as several well-known unofficial backports repositories. I have been involved in one of these unofficial repositories since 2005, which in 2010 turned into its own Debian derivative, mixing both backports and modified packages in one repository for simplicity. Starting with the Debian 13 (trixie) release, the (otherwise unmodified) backports of this derivative have been split out from the derivative distribution into a separate repository. This way the backports are more accessible and useful for all interested Debian users too. ## TL;DR: Debian Fast Forward - https://fastforward.debian.net > * is an alternative Debian repository containing complementary backports from `testing/unstable` to `stable` > > * with packages organized in an opinionated, self-contained selection of coherent sets > > * supporting `amd64`, `i386`, and `arm64` architectures > > * containing around 400 packages in `trixie-fastforward-backports` > > * with 1’800 uploads since July 2025 > > End user documentation about how to enable Debian Fast Forward is available. Have fun!
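A note on how backport versioning fits into the upgrade path: rebuilt packages conventionally carry a `~bpo` suffix (as in the Debian uploads elsewhere on this page), and the tilde sorts below everything else, so a backport ranks above what `stable` ships but below the `testing` version it was built from. GNU `sort -V` follows the same tilde rule, so this can be illustrated with made-up version strings:

```shell
# "~" sorts before anything in Debian-style version comparison, so a
# ~bpo rebuild sits between stable's version and testing's version.
printf '%s\n' '1.2-1' '1.2-1~bpo13+1' '1.0-1' | sort -V
# 1.0-1
# 1.2-1~bpo13+1
# 1.2-1
```

This ordering is what lets a backport be replaced automatically by the real package on upgrade to the next `stable`.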
28.02.2026 20:50
Mike Gabriel: Debian Lomiri Tablets 2025-2027 - Project Report (Q3/2025) ### Debian Lomiri for Debian 13 (previous project) In our previous project around Debian and Lomiri (lasting until July 2025), we managed to get Lomiri 0.5.0 (and with it another 130 packages) into Debian (with two minor exceptions [1]) just in time for the Debian 13 release in August 2025. ### Debian Lomiri for Debian 14 At DebConf in Brest, a follow-up project was designed between the project sponsor and Fre(i)e Software GmbH [2]. The new project (on paper) started on 1st August 2025, and the project duration was agreed to be two years, allowing our company to work with an equivalent of ~5 FTE on Lomiri, targeting the Debian 14 release some time in the second half of 2027 (an assumed date, let's see what happens). Ongoing work would be covered from day one of the new project, and once all contract details had been properly put on paper at the end of September, Fre(i)e Software GmbH started hiring a new team of software developers and (future) Debian maintainers. (More on that new team in our next Q4/2025 report.) The ongoing work of Q3/2025 was basically Guido Berhörster and myself working on Morph Browser Qt6 (mostly Guido, together with Bhushan from MiraLab [3]) and package maintenance in Debian (mostly me). ### Morph Browser Qt6 The first milestone of the Qt6 porting of Morph Browser [4] and related components (LUITK aka lomiri-ui-toolkit (big chunk! [5]), lomiri-content-hub, lomiri-download-manager and a few other components) was reached on 21st Sep 2025 with an upload of Morph Browser 1.2.0~git20250813.1ca2aa7+dfsg-1~exp1 to Debian experimental and the Lomiri PPA [6]. ### Preparation of Debian 13 Updates (still pending) In the background, various Lomiri updates for Debian 13 have been prepared during Q3/2025 (with a huge patchset), but publishing those to Debian 13 is still pending, as test results are not yet satisfactory. 
[1] lomiri-push-service and nuntium
[2] https://freiesoftware.gmbh
[3] https://miralab.one/
[4] https://gitlab.com/ubports/development/core/morph-browser/-/merge_reques... et al.
[5] https://gitlab.com/ubports/development/core/lomiri-ui-toolkit/-/merge_re... et al.
[6] https://launchpad.net/~lomiri
28.02.2026 18:50
Petter Reinholdtsen: Free software toolchain for the simplest RISC-V CPU in a small FPGA? On Wednesday I had the pleasure of attending a presentation organized by the Norwegian Unix Users Group on implementing RISC-V using a small FPGA. This project is the result of a university teacher wanting to teach students assembly programming using a real instruction set, while still providing a simple and transparent CPU environment. The CPU in question implements the smallest set of opcodes needed to still call the CPU a RISC-V CPU, the RV32I base set. The author and presenter, Kristoffer Robin Stokke, demonstrated how to build both the FPGA setup and a small startup code providing a "Hello World" message over both serial port and a small LCD display. The FPGA is programmed using VHDL, the entire source code is available from github, but unfortunately the target FPGA setup is compiled using the proprietary tool Quartus. It is such a pity that such a cool little piece of free software should be chained down by non-free software, so my friend Jon Nordby set out to see if we can liberate this small RISC-V CPU. After all, it would be an unforgivable sin to force students to use non-free software to study at the University of Oslo. The VHDL code for the CPU instructions itself is only 1138 lines, if I am to believe `wc -l lib/riscv_common/* lib/rv32i/*`. On the small FPGA used during the talk, the entire CPU, ROM, display and serial port driver only used up half the capacity. These days, there exists a free software toolchain for FPGA programming not only in Verilog but also in VHDL, and we hope the support in yosys, ghdl, and yosys-plugin-ghdl (sadly and strangely enough, removed from Debian unstable) is complete enough to at least build this small and simple project with some minor portability fixes. Or perhaps there are other approaches that work better? The first patches are already floating on github, to make the VHDL code more portable and to test out the build. 
If you are interested in running your own little RISC-V CPU on an FPGA chip, please get in touch. At the moment we sadly have hit a GHDL bug, which we do not quite know how to work around or fix:

> ******************** GHDL Bug occurred ***************************
> Please report this bug on https://github.com/ghdl/ghdl/issues
> GHDL release: 5.0.1 (Debian 5.0.1+dfsg-1+b1) [Dunoon edition]
> Compiled with unknown compiler version
> Target: x86_64-linux-gnu
> /scratch/pere/src/fpga/memstick-fpga-riscv-upstream/
> Command line:
>
> Exception CONSTRAINT_ERROR raised
> Exception information:
> raised CONSTRAINT_ERROR : synth-vhdl_expr.adb:1763 discriminant check failed
> ******************************************************************

Thus more work is needed. For me, this simple project is the first stepping stone for a larger dream I have of converting the MESA machine controller system to build its firmware using a free software toolchain. I just need to learn more FPGA programming first. :) As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address **15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b**.
27.02.2026 22:47