The context is different as well: when you use Consensus, Elicit etc. to do research, you know it's research and there's no one correct answer. When you ask a question about policy or rules relating to course fees, it had better be 100% correct!
"Chatbots" are not all the same. Undermind, Elicit etc are ironically given "easier" tasks - search the academic literature for support for answers vs typical question you get at reference desk where its that AND pretty much anything in the world including policies etc
If you have an online library chat service staffed by librarians, my personal view is it's probably usually a bad idea to replace it with a chatbot. Researchers using Undermind, Elicit, Consensus etc. is a different thing that can be useful though. Not sure if this is contradictory.
Trying this after enabling agent teams. No idea if it will work
Claude Code - subagents vs agent teams - playing with this for literature review tasks. code.claude.com/docs/en/agen... (1)
Sigh. The thing about writing on cutting-edge topics is that my blog posts get outdated within 6 months, if not faster.
Looking forward to the FORCE2026 keynote: Surviving the Disruption: Scholarly Communication in the Age of AI, by Ian Mulvany, Chief Technology Officer, BMJ Group
event.fourwaves.com/force2026/sc... - Early Bird Registration for FORCE2026, 3-5 Jun, Singapore is still available!
@ianmulvany.bsky.social
No. Is that Mac-only, or does it have a Windows version?
This is probably somewhat true, but a lot of us move in echo chambers... people in the most extreme deny-AI camp probably still have not updated (how could they, given they proudly don't try anything beyond sharing news of LLM fails), but we don't see them.
Most of this prob won't get published in top-tier journals, but it's really amazing that you can wonder about something and get results....
I'm hearing people do something like: (1) state some hypothesis, (2) ask Claude to find suitable open data repositories with data that might be used, (3) it grabs/scrapes the data, (4) runs the statistical test to answer (1), (5) writes the results out in LaTeX. (4)
And I did this just with Gemini Canvas. I could have done even more fancy stuff by asking Claude Code to just automate all the steps and do the analysis. But I wanted to play myself. (3)
So AI vibe-coded an app that implemented the rules. I asked it to look for files with the right include/exclude decisions in CSV that it could use, which it did. Then I played with it manually. In a sense I did a rough replication just like that. (2)
I really think things are speeding up. Let's not even talk about agentic AI. The fact that you can easily ask AI to vibe-code basic things on the fly is a game changer for someone like me. I was reading a paper suggesting a stopping rule for screening, and I was curious to see how well it worked. (1)
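For the curious, the kind of thing the vibe-coded app in this thread did can be sketched in a few lines. This is a minimal illustration only: it assumes a simple "stop after k consecutive excludes" heuristic and a hypothetical two-column CSV of screening decisions, not the actual rule or data format from the paper.

```python
import csv
from io import StringIO

def stop_after_k_excludes(decisions, k=50):
    """Return the 1-based record index at which screening could stop
    under a 'k consecutive excludes' heuristic, or None if it never
    triggers. Illustrative rule only, not the one from the paper."""
    streak = 0
    for i, decision in enumerate(decisions, start=1):
        if decision == "exclude":
            streak += 1
            if streak >= k:
                return i
        else:
            streak = 0  # an include resets the streak
    return None

# Hypothetical CSV of screener decisions, one row per screened record
sample = "record_id,decision\n1,include\n2,exclude\n3,exclude\n4,exclude\n5,include\n"
rows = csv.DictReader(StringIO(sample))
decisions = [row["decision"] for row in rows]
print(stop_after_k_excludes(decisions, k=3))  # → 4 (third exclude in a row)
```

In a real replication you would sweep k over a labelled dataset and check how many true includes are missed after the stopping point.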
I may understand search technology better than many librarians, but I was never a really good reference librarian... (3)
He wants to do better than Undermind, and it seems obvious to me you can do so by adding sources lightly covered by Undermind, e.g. your library discovery system for books and non-journal content, OpenAI Deep Research for grey literature, etc. But beyond that... you're facing the wicked problem of search. (2)
I was talking to a young faculty member who is a very early adopter of Claude Code and is automating a lot of research tasks, and he was asking for advice about combining MCP servers/skills for literature review. He quickly realised the problem isn't technical or coding these days but workflow. (1)
elicit.com/blog/elicit-... @elicit.com releases their API, yet another tool you can easily integrate into your custom research agent
Of course, leaving aside the labels, the fundamental difference is that OpenAlex assigns at the article level, while ASJC is, well, obviously at the journal level, so all articles in journal X belong to the same area.
Spent the last 2 weeks really looking at the OpenAlex topic hierarchy - domain, field, subfield, topic - vs ASJC subject area, main category and subcategory. Can see how OpenAlex's first 3 levels (domain, field, subfield) are sort of modelled on Elsevier's ASJC, at least in terms of names. (1)
[Blogged] The agentic researcher - building custom, transparent and extensible workflows with Claude & MCP aarontay.substack.com/p/creating-y...
I really need to study this agentic stuff more
Read this and be even more confused katinamagazine.org/content/arti...
E.g. jennai, prism, generic LLMs in canvas mode, even past versions of Keenious where you entered text and it used vector search to find recommended citations. It didn't work very well (which may be why it isn't critiqued), but many librarians still subscribed...
It's all of these things... the LibKey Nomad comparison is the easiest that jumps to mind because we are familiar with that. But I think LibKey Nomad, Lean Library will all evolve the same way... frankly, tools where you write something and it suggests a citation have been around for years.
"That is, Nexus offers to replace both cited sources that don't exist and cited sources that "aren't scholarly" with another real source." (2)
Does Clarivate understand what citations are for?
"With Nexus, Clarivate are essentially integrity-washing synthetic text, giving it an academic sheen without any academic rigour." "... it offers to "Find Verified Alternative ... (1)
www.hughrundle.net/does-clariva...
While serious researchers will try and figure out which tools work for which scenarios, students, particularly undergraduates, are less likely to check whether the AI tool actually understands the task, rather than just ignoring instructions and generating a list of results or answers. (4)
What the report doesn't say is that many of these behaviors imported from general-purpose chatbots may not only lead to worse results but will outright fail! Many tools branded as AI research assistants run fixed workflows and cannot adapt to different scenarios. See aarontay.substack.com/p/how-agenti... (3)
"users expect AI research tools to function as collaborative research partners with capabilities similar to general-purpose chatbots. They bring habits from general-purpose LLMs prompt engineering, persona assignment, template filling, and collaborative writing into a domain-specific platform" (2)