Teaching the little one cause and effect
@pwgtennant
Epidemiologist interested in causal inference. Currently Visiting Faculty at @yalesph.bsky.social. See my Intro to Causal Inference Course: https://www.causal.training/ See the Causal Inference Interest Group: https://cls-data.github.io/CIIG/ #EpiSky
It's hard to be a scientist right now & realise that many of your colleagues are driven by completely different values.
Not a desire for truth, understanding, or insight. But something else entirely. Something where learning & problem solving have no intrinsic value.
Something AI can do for you.
Amazing news, congratulations!
Good morning! Yes, this is he
Oh... I saw people share that. I was tempted to reply/quote but it seemed so off-the-scale idiotic that I couldn't be bothered.
Sorry to be dejected on main but man it is really a time of disappointment in people hey!
So many people with whom I thought I shared a worldview and felt camaraderie, and I was totally wrong; they just don't share that worldview.
Perhaps because it's a highly skilled task that often requires a lot of time? Both of which are disincentives in an industry that rewards volume of output rather than quality.
good lord, this is art
You're making the strange assumption that they don't take the same slapdash approach to the rest of their work.
I myself am not sure that the experiments are a masterpiece. More likely they're a mess they want to rush out for a quick publication. And an AI figure is entirely fitting.
Why can't people (especially scientists) see that AI-generated figures and diagrams shout out that they were AI generated and look awful? You've spent many months designing and performing experiments only to cover the resulting masterpiece in clear plastic like a cheap sofa.
No. You would legitimise them in unhelpful ways.
First, it says that you are fine with the way they've treated you.
Second, you lend your authority to a sham debate with an obscure aim, that is surely intended to normalise Palantir and position any opposition as "extreme liberal whining" etc.
Question for Bluesky: The Spectator has asked me to take part in a debate on AI with Palantir's very own Louis Moseley. And, possibly, Michael Gove. This is an example of the Spectator's coverage of me. Should I accept?
16 co-authors from Pfizer signed off on sharing this with the world after generating it from training data based on their own protocols and SAPs
Literally the first thing academics pointed out about LLMs YEARS AGO was that they would be used for catastrophic levels of fraud; how nice that now the AI bros are confirming what we all knew by running fraud experiments to tell cheats which LLM to use to cheat
But, also, can we talk about the logic of asking an AI to write a regulatory document to "save 1-2 days work"?
Perhaps I have a different understanding of risk/benefit, but this seems like an extremely idiotic thing to do to save not very much time?
I've worked with clients who have drafted SAPs using AI.
It ends up taking *more* time, because you have to ask questions about bizarre, unclear, or meaningless choices and then waste time explaining all the changes.
if you're gonna write a paper claiming "LLMs can generate a statistical analysis plan for a clinical trial", and then provide THIS as an example of good output... big yikes
I am guilty of using a lot of gifs and memes based on millennial shows.
But I try to make sure the material requires zero contextual knowledge.
This is especially important when teaching a diverse international audience.
Dying to know who they are...
Good news: Scientists were wrong about how bad sea level rise is.
Bad news: It's even worse than we thought.
"they've seen people nude, using the toilet and engaging in sexual activity, along with credit card numbers and other sensitive information"
They're probably also seeing a lot of domestic violence and sexual assault. We already know from Alexa mods they've seen a ton of that and... done nothing.
Alternative version:
The Cylons were defeated during the pilot episode by Cloudflare because they couldn't log into any of the colonial technology.
I always like the reassurance that I'm not a Cylon. Actually, Battlestar Galactica would have been quite different if they had this technology.
This is the internet now. Just endlessly verifying that you're human.
Useful for any other Europeans who are currently living in the USA!
The notion of an 'AI therapist' is oxymoronic.
Therapy is relational. It involves forming a relationship with another human being. Someone who can empathise with and validate your experiences.
You cannot have a relationship with a word prediction machine!
www.theguardian.com/lifeandstyle...
I'm also confused as to why so many (from my collection) have joined Bluesky and Threads but added no bio information or links?
Even if they never post, they can own the handle as a holding space and point people to where they are active.
Dozens haven't!
Important update. My admiration goes out to all those involved for their leadership and good judgement.