Sorry I was not clear. This paper is by Anthropic, but the calculation of potential job market use values (blue area) comes from this paper published by OpenAI, as cited in the appendix.
arxiv.org/pdf/2303.10130
Also, these predictions of disruption are completely driven by AI companies. This report based its analysis on a paper published by OpenAI. Independent analyses from MIT and Harvard Business find less than 10% of tasks being handled by LLMs and 90% of companies failing to integrate them successfully.
Host Kris Perry dives into the "digital exposome" with Ran Barzilay, MD, PhD, in the NEW episode of #ScreenDeep. Tune in to hear about how researchers are beginning to measure and quantify online environmental factors in child health and more. Listen NOW: https://bit.ly/4tVGhUe
I used to think that AI could synthesize docs somewhat. Then I read more on how they can't read many PDFs.
bsky.app/profile/edte...
I'm disappointed and saddened tonight by the Algonquin College Board of Governors' decision to cut 30 critically important programs. Programs that led to employment, that built our community, that made our economy stronger. This goes way beyond Mr. Ford's "basket weaving" quip.
Could not agree more: www.theglobeandmail.com/gift/cb0fda6...
Job alert! Share!
My department is hiring an Assistant professor School/Counselling psychology.
Looking for "applicants with research and clinical skills to prepare professionals and researchers in School Psychology"
Come thrive in Montreal, Canada!
mcgill.wd3.myworkdayjobs.com/McGill_Caree...
My pleasure. Thanks for starting this conversation!
Reporting here. bsky.app/profile/edte...
Don't threaten. Do it. Lead.
apple.news/Ad9qmujKcRUW...
Since yesterday, the Einstein AI cheatbot website underwent some rebranding. The tagline changed from "Einstein does the busywork so you don't have to," to "Einstein is the personal tutor every student deserves." In the FAQ, "How does Einstein do all this?" became "How does Einstein help me learn?"
Einstein AI rebranding is a textbook example of EduWashing. They change how they market it using educational language but don't change how the product educates.
Nice article on this practice with eduapps at link. Now it is AI apps.
www.tandfonline.com/doi/full/10....
Another tech company wanting the power but not the responsibility.
Don't be disappointed by their inaction. Be angry. Regulate.
Today, Anthropic abandoned the lowest bar of accountability: not creating fully autonomous AI weapons.
Tech won't regulate itself
🇨🇦 must
www.cbc.ca/news/politic...
For teens, the most common use of GenAI is learning. Too bad it's not a great learning tool for novices.
Also this one. bsky.app/profile/chil...
I agree! For those interested, here is the Decoder podcast I was involved in discussing this topic. More importantly, they interviewed teachers! bsky.app/profile/edte...
The issue, as explained in this great piece: PDFs store information visually, in a way that is not easily read by LLMs. LLMs attempt to read them, fail, and still produce outputs regardless. Ever try to highlight text from a journal article and the highlight goes across columns? Same issue at heart.
LLMs struggle to extract info from PDFs; instead, they summarize and hallucinate content based on what they can see.
It's not an easy problem to solve.
Despite this, researchers and students are using them to produce literature reviews. Uh oh.
www.theverge.com/ai-artificia...
@theverge.com
I mean the same design logic. It does not use the same code. So, you can't import experiments.
OpenSesame is a great free, open-source alternative. I used E-Prime for years, and OpenSesame uses the same logic. Check it out.
But, the LLM did not use the methodology it reported (not an EBM). It constructed a paragraph based on training data that aligns with the topic of methodology for building a portfolio. This person is impressed by a method that does not exist.
This should worry anyone taking his advice on AI
Screenshot of job ad for postdoc. Follow link for details.
T32 Postdoc position in developmental psychopathology at UMN Institute of Child Development! There are many excellent primary mentors available, as well as an even better set of secondary mentors to choose from (including me). See link for details, due April 1. drive.google.com/file/d/18SoO...
My theories of learning class tonight discussed how education is enculturation and how power determines whose culture is centered.
This indoctrination is one hell of an example.
Illegitimi non carborundum!
Get ready for the impending AI glass shortage.
Hoard your pint glasses while you've still got them.
Why mass shootings canβt be reduced to a mental illness diagnosis
In case you ever wondered how edtech companies, academic publishers, and big AI corporations, as well as HE institutions, make money out of your academic work, here's our new paper starting to unpack the assetization of academic content
The news is now public: "A majority of academics in the Faculty of Science have joined the Association of McGill Professors of Science."
Welcome AMPS, to the #unionuniversity !
AMPL - AMPD ❤️ mcgillscience.ca
When people use LLMs for medical diagnosis, accuracy is below 50%. A strong case against automated benchmarks of LLM "ability."
If your uncle can't use it to diagnose a migraine, it can't diagnose a migraine. A doctor could, because she understands what questions to ask.
www.nature.com/articles/s41...
Chat bots give the wrong medical advice more than half the time. www.nytimes.com/2026/02/09/w...
Infographic from Children and Screens titled "Findings on Children and AI Assistants & Smart Toys." The graphic highlights key research findings: nearly 60% of children interact daily with voice-activated assistants. Communication findings indicate children (3-6 years old) communicate less with agents than with humans. Perceived intelligence findings show children (6-10 years old) see personal assistants and smart toys as more intelligent. Trust findings reveal children (4-8 years old) show more trust in and preference for voice assistants over a human for factual information.
Nearly 60% of children interact daily with a voice-activated #AI assistant. What does the research say about how children perceive AI?
Download your copy of the Research-at-a-Glance on βAI and Childrenβ to learn more: bit.ly/4bZnXje