You can also see various explainers and demos at our website mintresearch.org
We can then call an agent in our slack who can do literature reviews, find papers, and just talk to our corpus of papers. philosophyofcomputing.substack.com/p/how-to-use...
...shares it with me for further curation, then generates daily summaries for my lab, finds and ingests pdfs (and markdown files) of all articles mentioned, and incorporates them into a vector store with rich analysis and summaries for future searches...
Here's my contribution on using agents to support academic research.
I've got a pipeline going now with coding agents that checks arxiv, twitter, bluesky, philpapers, a bunch of journals, many RSS feeds and more, classifies it against a long statement of my lab's interests...
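The triage step of a pipeline like this can be sketched very roughly. This is a minimal, hypothetical illustration only, assuming a simple token-overlap score against a statement of lab interests; the actual pipeline presumably uses an LLM classifier and a vector store, and every name below is invented for the sketch, not taken from the author's code.

```python
# Hypothetical sketch: score incoming feed items against a lab-interests
# statement by word overlap, and keep only the relevant ones.
import re

LAB_INTERESTS = """
philosophy of computing, AI agents, democratic values,
language model interpretability, normative ethics of AI
"""

def tokenize(text: str) -> set[str]:
    """Lowercase word tokens, ignoring very short words."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) > 3}

def relevance(item_text: str, interests: str = LAB_INTERESTS) -> float:
    """Jaccard-style overlap between an item and the interests statement."""
    a, b = tokenize(item_text), tokenize(interests)
    return len(a & b) / len(a | b) if a | b else 0.0

def triage(items: list[dict], threshold: float = 0.05) -> list[dict]:
    """Keep items whose title + abstract overlap the interests enough,
    sorted by descending relevance score."""
    kept = []
    for item in items:
        score = relevance(item["title"] + " " + item.get("abstract", ""))
        if score >= threshold:
            kept.append({**item, "score": score})
    return sorted(kept, key=lambda x: -x["score"])
```

In practice the classification would be an LLM call with the interests statement in the prompt, but the shape of the loop (fetch, score, filter, store) is the same.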
mixed bag tbh...
To anyone encountering Moltbook this week and wondering about AI personhood, consciousness, sentience, etc., we published a very relevant paper in October: A Pragmatic View of AI Personhood.
arxiv.org/abs/2510.26396
If you are enjoying the feed, please like and share it with others for discoverability!
I made this feed ages ago, and it was crap then. Now it's pretty good?
Are there any other better ones that have been made since? Surely?
bsky.app/profile/did:...
Will read this with interest.
🚨 New Study 🚨
@arxiv.bsky.social has recently decided to prohibit any 'position' paper from being submitted to its CS servers.
Why? Because of the "AI slop", and allegedly higher ratios of LLM-generated content in review papers, compared to non-review papers.
Turns out, there are a TON of image/video AI models hosted on CivitAI with dogwhistles for NCII and/or CSAM in their names.
Max Kamachee and I just updated our "Video Deepfake Abuse" paper with this new fig:
papers.ssrn.com/sol3/papers....
In a new Science study, researchers train a neural classifier to spot #AI-generated Python functions in over 30 million GitHub commits by 160,097 software developers, tracking how fast, and where, these tools take hold. https://scim.ag/4aeUdAV
And this is from Anthropic... We need to get LLMs out of learning contexts
"Our findings suggest that AI-enhanced productivity is not a shortcut to competence and AI assistance should be carefully adopted into workflows to preserve skill formation..."
arxiv.org/abs/2601.20245
Meta, in an effort to fix safety, factuality, and hallucinations at *pretraining* time, ensures the model is trained to generate only high-quality, safe tokens, even for unsafe prompts.
"Self-Improving Pretraining: using post-trained models to pretrain better models" ( arxiv.org/abs/2601.21343 )
I'm a huge fan of Playwright MCP or Chrome Dev Tools MCP but I just came across Playwright CLI this morning and something tells me using this with a skill is going to be my preferred way of using browser automation with Agents. github.com/microsoft/pl...
A study reveals 'graph probing,' showing that the neural topology of large language models predicts language abilities better than traditional methods, enhancing understanding of LLMs and enabling applications like pruning and hallucination detection. https://arxiv.org/abs/2506.01042
What do you think of the definition in the paper?
In a new paper in our AI & Democratic Freedoms series, Rachel M. Kim, Blaine Kuehnert, @sethlazar.org, Ranjit Singh, & Hoda Heidari propose creating an AI Power Disparity Index, designed to measure and signal the changing distribution of power in the AI ecosystem. knightcolumbia.org/content/the-...
And there's a great paper by @_FelixSimon_ and Sacha Altay on the actual impact of generative AI on elections. And more that I'll be writing about later, (incl great art from Seb Krier).
Read them all at buff.ly/zTrRirO. Thanks so much to @knightcolumbia (and especially Katy) for making it happen.
My symposium on AI & Democratic Freedoms (edited with @katygb.bsky.social) is shaping up amazingly. @random_walker and @sayashk 's already influential 'AI as Normal Technology' is there. So is @danielsusskind's insightful investigation of what will remain for humans to do in our automated future.
Building democratic resilience for the era of AI agents, and for AGI beyond, is an urgent challenge. If you're building civic agents, I'd love to talk.
The paper, written with the visionary Mariano-Florentino Cuéllar, is published here: buff.ly/dMM0r7K
More important still, if we want to preserve democratic values in this radical period of transition, is to make democratic institutions more resilient to the changes ahead. This can't just be about going back to how things were. Their stressors are not all exogenous.
But computing has always been Janus-faced for democracy, and this time won't be different. We could build civic agents that advance democratic values and disrupt concentrated power. Some features of modern AI could help. But civic agents must be built; they're not the default.
And there will be novel threats: the erosion of cognitive autonomy, accelerated cyberwarfare, and the ability for executives to wear the administrative state like a mech suit that implements without question their every whim.
Soaring inequality? Check.
Concentration of corporate power? Also check.
Stuffed-up information ecosystem? Yep that too.
Backsliding as the 'autocratic legalism' playbook gets rolled out in one nation after the next? Agents could be a helpful software Stasi.
Capable LLM-based agents are already here. For any domain where you can build a good RL verifier, current knowledge will get us to human-level performance and better. Further advances are on the horizon. Agents are bound to exacerbate the trends already stressing democracies.
How will AI agents impact democratic values? Democracies are, for independent reasons, already under acute pressure. Since WWII, Moore's Law and democratisation went up and to the right in lockstep. Not any more.
In the latest essay in our AI & Democratic Freedoms series, @sethlazar.org and Tino Cuéllar (@carnegieendowment.org) discuss how AI agents might affect the realization of democratic values. knightcolumbia.org/content/ai-a...
"Democracies are weaker than they have been for decades," write Carnegie president Mariano-Florentino CuΓ©llar and @sethlazar.org for @knightcolumbia.org. "A great wave is coming, and they are ill-prepared."
AI agents could help or hurt. And they won't protect democratic values on their own.
@caseynewton.bsky.social re: an old discussion about AI denialists, hope you've caught knightcolumbia.org/events/artif...