
LK Seiling

@lkseiling

DE/EN. 📍 Potsdam / Berlin. Coordination of @dsa40collaboratory.bsky.social, various research at @weizenbauminstitut.bsky.social; among other things: http://zusammenfuergleichstellung.de

204
Followers
292
Following
57
Posts
03.01.2024
Joined

Latest posts by LK Seiling @lkseiling

It’s not a privilege, it’s a researcher’s right! #DSA40

@lkseiling.bsky.social and I summarize what the European Commission’s decision in its case against X (I feel we should follow John Oliver and from now on call it Twitter again) means for researchers’ right to platform data access [in German]

24.02.2026 10:06 👍 4 🔁 3 💬 0 📌 0
Quote card with GFF lawyer Joschka Selinger: “Even the big digital corporations are not above the law. The DSA is a sharp sword for digital rights and platform regulation.”

Breakthrough for digital rights – and for the fight against #Desinformation and #Wahlbeeinflussung online: the Kammergericht Berlin ruled today that the platform #X must give our partner organisation @democracyreporting.bsky.social access to #Forschungsdaten. 💪

17.02.2026 17:19 👍 116 🔁 52 💬 2 📌 1

Today the Kammergericht Berlin is hearing whether @democracyreporting.bsky.social will get platform data from #X to research election interference and disinformation. Although DRI has a right to the data under the EU’s #DSA, X refuses to hand it over.

More information in the quote post.

17.02.2026 09:26 👍 67 🔁 14 💬 2 📌 0

There have been increasingly shrill attacks on the EU over its digital legislation, driven by accusations of "censorship" from defenders of "free speech" -- including, so it appears, the right to peddle an AI app that seemingly produces child sexual abuse material (CSAM).
1/9

05.02.2026 19:20 👍 65 🔁 33 💬 1 📌 3
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.


Figure 1. A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).


Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal nor do they exclusively pick out the types of products we may wish to critique or proscribe.


Protecting the Ecosystem of Human Knowledge: Five Principles


Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n

06.09.2025 08:13 👍 3787 🔁 1896 💬 110 📌 390

Anyone interested in drafting an Art. 40(4) DSA access request to make use of this moment? Looks like a great chance to figure out how the @grok account fits into X’s internal governance/moderation structures.

13.01.2026 21:48 👍 1 🔁 0 💬 0 📌 0

"Why Are Grok and X Still Available in App Stores?
Elon Musk’s chatbot has been used to generate thousands of sexualized images of adults and apparent minors. Apple and Google have removed other “nudify” apps—but continue to host X and Grok." www.wired.com/story/x-grok...

08.01.2026 20:56 👍 16 🔁 7 💬 0 📌 0
Grok turns off image generator for most users after outcry over sexualised AI imagery. X to limit editing function to paying subscribers after platform threatened with fines and regulatory action.

The Guardian doing inexplicable voluntary free public relations work for a corporation profiting from sexual abuse by headlining this as the image gen being "turned off". It isn't off: they've just monetised it. What the fuck are we doing here people, come on.

www.theguardian.com/technology/2...

09.01.2026 09:18 👍 1015 🔁 340 💬 32 📌 29

liked undressing any woman or child on the platform? throw us some cash and you can keep doing it

09.01.2026 12:24 👍 104 🔁 29 💬 3 📌 1
Fuse – 39C3: Power Cycles Streaming. Live streaming from the 39th Chaos Communication Congress.

Starting shortly at #39c3: in “Hacking Karlsruhe - 10 years later”, Simone and Jürgen take stock of ten years of GFF. Stream here: streaming.media.ccc.de/39c3/fuse

29.12.2025 10:52 👍 27 🔁 8 💬 2 📌 0

#Trump has been accusing the #EU of #censorship via its #DigitalServicesAct, but is any of what they are saying true?

Let's investigate 🧵🔽
#EUpol #USpol

26.12.2025 23:19 👍 1 🔁 7 💬 1 📌 0

Did you know that the new European Omnibus threatens research based on data donations?

We wrote an open letter highlighting the problematic amendment (see below).

Please consider reading and signing it. If the amendment goes through, this might well be the end of data donation research...

25.11.2025 11:10 👍 2 🔁 0 💬 0 📌 0

We recently concluded a special article series, “Seeing the Digital Sphere: The Case for Public Platform Data,” in collaboration with the Knight-Georgetown Institute, in which experts explored why access to public platform data is critical. Here’s a snapshot: (1/9)

17.11.2025 18:15 👍 14 🔁 11 💬 1 📌 0
Despite criticism: when the BW police may start using Palantir software. Despite much criticism and a petition, the state parliament has amended the police law and approved the use of the Palantir data analysis software.

The Greens are only a civil rights party when they are not in government. www.swr.de/swraktuell/b...

13.11.2025 06:46 👍 81 🔁 23 💬 4 📌 1
In Critical Condition – How To Stabilize Researcher Data Access? | TechPolicy.Press. Mark Scott and LK Seiling discuss the struggle for researcher access to social media data and an alternative future where transparency is seen as a civic good.

@lkseiling.bsky.social & @markscott.bsky.social plead in @techpolicypress.bsky.social for a new data access regime emerging from a "generation of decentralized platforms that treat data transparency not as a regulatory burden but as a civic and scientific good"

www.techpolicy.press/in-critical-...

12.11.2025 19:44 👍 14 🔁 8 💬 0 📌 0

The most transparent, de-gamified way to do social media would be a “more like this / less like this” interface that nobody else sees. The “like” is pointless here anyway, while sharing is everything.

01.11.2025 02:56 👍 7 🔁 4 💬 0 📌 0
How OpenAI Uses Complex and Circular Deals to Fuel Its Multibillion-Dollar Rise. Here are seven unusual financial agreements helping to drive the ambitions of the poster child of the A.I. revolution.

"Many of the deals OpenAI has struck — with chipmakers, cloud computing companies and others — are strangely circular. OpenAI receives billions from tech companies before sending those billions back to the same companies to pay for computing power and other services." www.nytimes.com/interactive/...

31.10.2025 10:57 👍 537 🔁 203 💬 38 📌 77

two for one, nice 😎

01.11.2025 05:18 👍 460 🔁 54 💬 13 📌 3
Deportation despite training contract: pulled out of bed, put on a plane. Rouaa and Ibrahim had signed their training contracts and would have been allowed to stay until completing their training. They were deported anyway.

"Gegen vier Uhr morgens drangen Be­am­t:in­nen in die Wohnung der syrischen Familie Seleman ein, führten die Geschwister Rouaa (24) und Ibrahim (28) ab und setzten sie in ein Flugzeug. Dabei sollten beide eine Ausbildung antreten. Der Flüchtlingsrat Schleswig-Holstein nennt das „Behördenwahnsinn“.

24.10.2025 17:19 👍 878 🔁 384 💬 64 📌 48

Since Meta and TikTok have been asked to respond individually, achieving such harmonisation seems unlikely—unless both researchers and regulators actively push for it. I have high hopes 😉

24.10.2025 12:21 👍 2 🔁 0 💬 0 📌 0

Researchers need a BASELINE STANDARD OF PUBLICLY ACCESSIBLE DATA: a minimum set of comparable, high-quality data from all platforms, along with robust quality checks to ensure validity.

24.10.2025 12:21 👍 2 🔁 0 💬 1 📌 0

2) Burdensome tools
The data access tools provided under Article 40(12) are also inadequate. Both Meta and TikTok reportedly supply incomplete data with questionable accuracy.

24.10.2025 12:21 👍 2 🔁 0 💬 1 📌 0

Researchers need STANDARDISED APPLICATION FORMS and FAIR, TRANSPARENT TERMS ACROSS PLATFORMS. Without them, access to data will remain extremely resource-intensive, discouraging cross-platform research and keeping much of this work in a legal grey area.

24.10.2025 12:21 👍 2 🔁 0 💬 1 📌 0

For example, both platforms request detailed information about researchers’ qualifications, while Meta even asks for a date of birth and phone number. On top of that, researchers must agree to contradictory or restrictive terms just to apply for data access.

24.10.2025 12:21 👍 2 🔁 0 💬 1 📌 0

1) Burdensome procedures
In practice, this means that the application processes set up by these platforms may violate the DSA’s provisions. TikTok’s application form reportedly includes around 40 required fields, while Meta’s goes up to 50, many with no connection to the requirements in Art. 40(8).

24.10.2025 12:21 👍 2 🔁 0 💬 1 📌 0

The key terms here are:
1) burdensome procedures
2) burdensome tools

While the findings themselves are not public, let's take a closer look 🧵👇

24.10.2025 12:21 👍 8 🔁 4 💬 1 📌 0

I’ll be presenting this work at #CSCW2025 in Bergen on Tuesday at 2:30PM! We will be part of the session “Core Concepts in Privacy Research” (in the Bekken room) chaired by @emtseng.bsky.social ☺️

18.10.2025 22:45 👍 16 🔁 3 💬 0 📌 0

The parsing code and information on the schema we suggested, based on Activity Streams 2.0: gitlab.weizenbaum-institut.de/lukas.seilin...

@lionw.bsky.social's latest paper on data donations: doi.org/10.12758/mda... (follow him for more to come soon!)

10.10.2025 10:19 👍 3 🔁 0 💬 0 📌 0
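For readers unfamiliar with the format mentioned above: Activity Streams 2.0 is the W3C JSON-LD vocabulary in which an activity carries a `type`, an `actor`, a `published` timestamp, and a nested `object`. The sketch below is a minimal illustration of parsing such an object, not the code from the linked repository; the helper `parse_activity` and the sample payload are hypothetical.

```python
import json
from datetime import datetime

def parse_activity(raw: str) -> dict:
    """Pull a few core Activity Streams 2.0 fields out of a JSON string."""
    data = json.loads(raw)
    obj = data.get("object") or {}
    published = data.get("published")
    return {
        "type": data.get("type"),        # e.g. "Create", "Like", "Announce"
        "actor": data.get("actor"),      # actor IRI or embedded object
        "object_type": obj.get("type"),  # e.g. "Note"
        "content": obj.get("content"),
        # AS2 timestamps are xsd:dateTime; normalise the trailing "Z"
        # so datetime.fromisoformat accepts it on older Pythons
        "published": datetime.fromisoformat(published.replace("Z", "+00:00"))
        if published else None,
    }

# Hypothetical sample payload in the AS2 shape
example = json.dumps({
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://example.org/users/alice",
    "published": "2025-10-01T12:00:00Z",
    "object": {"type": "Note", "content": "Hello"},
})
parsed = parse_activity(example)
```

A schema like this is what makes donated exports from different platforms comparable: once each post is mapped onto the same activity shape, downstream analysis code no longer needs per-platform branches.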

"Instagram head says company is not using your microphone to listen to you (with AI data, it won’t need to)" techcrunch.com/2025/10/01/i...

01.10.2025 23:19 👍 13 🔁 8 💬 0 📌 1

the policy paper:
bsky.app/profile/weiz...

24.09.2025 16:34 👍 1 🔁 0 💬 0 📌 0