
@davpoole

AI Educator, researcher, author. Professor Emeritus in CS at University of British Columbia. One of the founders of Statistical Relational AI. https://www.cs.ubc.ca/~poole/

526
Followers
157
Following
8
Posts
14.11.2024
Joined

Latest posts by @davpoole

Drs. Mackworth and Poole recognized for making AI education accessible to students worldwide

UBC Computer Science Professors Emeriti Alan Mackworth and David Poole were awarded the AAAI/EAAI Patrick Henry Winston Outstanding Educator Award for developing free online resources for learning the foundations of AI. Congratulations! Read more: www.cs.ubc.ca/news/2026/02...

23.02.2026 17:16 πŸ‘ 4 πŸ” 3 πŸ’¬ 1 πŸ“Œ 0

Delighted to receive this award with @davpoole.bsky.social. @cs.ubc.ca @aaai.org Talk details here aaai.org/conference/a... #ai

19.01.2026 17:01 πŸ‘ 6 πŸ” 3 πŸ’¬ 1 πŸ“Œ 0
AAAI-26 Invited Speakers - AAAI

More details on their talk here: aaai.org/conference/a...

19.01.2026 13:17 πŸ‘ 4 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0
Why AI can’t take over creative writing As a computer scientist, I would hope that human creativity is more than regurgitating what others have written.

"Generating text from LLMs can be seen as plagiarism, one word at a time" is one way to explain the "stochastic parrots" of @emilymbender.bsky.social, @timnitgebru.bsky.social, Angelina McMillan-Major and @mmitchell.bsky.social

Why AI can’t take over creative writing theconversation.com/why-ai-cant-...

10.04.2025 15:55 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
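The "one word at a time" framing describes autoregressive generation: each next word is sampled from a distribution conditioned on the words so far, so every word is drawn from what was seen in training. A minimal sketch of the idea with a toy bigram model (the corpus and function names here are hypothetical, purely for illustration):

```python
import random

# Toy bigram "language model": record which words followed each word
# in a tiny hypothetical corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, []).append(nxt)

def generate(start, n, seed=0):
    """Sample n words, each drawn from the words that followed the last one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        choices = bigrams.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))  # each word is copied from the corpus
    return " ".join(out)

print(generate("the", 5))
```

Every word the sketch emits appears verbatim in the training data; scaling the context window and corpus changes the fluency, not that basic character of the process.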
Why AI can’t take over creative writing As a computer scientist, I would hope that human creativity is more than regurgitating what others have written.

I wrote this piece on AI and creative writing after reading articles by creative writers lamenting the impact of AI theconversation.com/why-ai-cant-...

02.04.2025 15:56 πŸ‘ 2 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0
Andrew Barto and Richard Sutton are the recipients of the 2024 ACM A.M. Turing Award for developing the conceptual and algorithmic foundations of reinforcement learning. In a series of papers beginning...

The 2024 Turing Award winners announced this morning: Barto & Sutton for "developing the conceptual and algorithmic foundations of reinforcement learning". Well deserved. Built on Donald Michie's 1959 idea: matchboxes and colored beads learning tic-tac-toe. awards.acm.org/about/2024-t...

05.03.2025 15:18 πŸ‘ 35 πŸ” 10 πŸ’¬ 0 πŸ“Œ 2
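Michie's MENACE kept a matchbox per board position, each holding colored beads for the legal moves; a move was chosen by drawing a bead, and beads were added along winning lines of play. A stripped-down sketch of that bead-learning mechanism (the single-state "game" below is hypothetical, just to show the update rule, not a full tic-tac-toe):

```python
import random

# MENACE-style bead learning: one "matchbox" per state holds bead counts
# per move; moves are drawn in proportion to beads, and winning
# trajectories get extra beads.
rng = random.Random(42)
boxes = {"start": {"good": 1, "bad": 1}}   # start with equal beads

def pick(state):
    beads = boxes[state]
    moves = list(beads)
    return rng.choices(moves, weights=[beads[m] for m in moves])[0]

def reinforce(trajectory, won, bonus=3):
    if won:
        for state, move in trajectory:
            boxes[state][move] += bonus   # more beads: move becomes likelier

# In this toy setup only "good" ever wins, so its beads accumulate.
for _ in range(200):
    move = pick("start")
    reinforce([("start", move)], won=(move == "good"))

print(boxes["start"])
```

After a couple hundred plays the "good" beads dominate, which is the same credit-assignment idea that tabular reinforcement learning formalizes.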

AI is like plastic β€” a lot of people hate it because it often comes across as fake and tacky, but it’s flexible and it’s cheap and there’s so many things that would be impossible or impractical without it.

And yes, a lot of AI will be junk. Just like everything else. (cf Sturgeon’s Law.)

17.01.2025 05:23 πŸ‘ 15 πŸ” 4 πŸ’¬ 1 πŸ“Œ 0

The "Real World" is unfair. It is biased. And when it comes to estimating treatment effects, the "Big Data" can't fix a bias that's baked into how the data are collected (I would say "design", but there is usually no design involved). So pardon me if I prefer boring old "Useful Evidence".

14.01.2025 05:48 πŸ‘ 24 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0
The Washington Post
Breaking News
Post Exclusive
8 minutes ago
Police use facial recognition as it was never intended: As a shortcut to finding and arresting suspects without further evidence
Confident in unproven facial recognition technology, sometimes investigators skip steps, and at least eight Americans have been wrongfully arrested, a Washington Post investigation found.

I’m sorry, this might be a great investigation, but β€œas it was never intended” is obscene. This is precisely how many people specifically and repeatedly told us β€” warned us β€” it would be used.

13.01.2025 22:22 πŸ‘ 6852 πŸ” 1851 πŸ’¬ 195 πŸ“Œ 133

An explanation of how the myth of the market being "the natural expression of human freedom" is, well, a myth.

Since right-wing libertarians enjoy spreading this myth, we thought it important to share this explanation.

14.01.2025 04:39 πŸ‘ 224 πŸ” 80 πŸ’¬ 3 πŸ“Œ 6

I genuinely cannot believe Google is now showing Gemini-generated medical snippets including *drug summaries and medical advice* to treat serious health conditions.

I checked. It does. With barely a tiny disclaimer about GenAI. How can a (formerly?) trusted company be so incredibly reckless?

11.01.2025 12:57 πŸ‘ 169 πŸ” 48 πŸ’¬ 11 πŸ“Œ 2

This passage from the book, Bullshit Jobs, is worth a read.

11.01.2025 00:11 πŸ‘ 733 πŸ” 230 πŸ’¬ 20 πŸ“Œ 28
"AI now beats humans at basic tasks": Really? Two weeks ago, Nature, one of the world’s most prestigious journals, had this jarring headline:

See aiguide.substack.com/p/ai-now-bea...

06.01.2025 23:06 πŸ‘ 12 πŸ” 4 πŸ’¬ 0 πŸ“Œ 0

Autonomous vehicles accelerate the trend begun by cars of isolating travellers from their neighbourhoods, reducing neighbours to mere obstacles our sensors try to avoid, and hence dissolving the social bonds that hold communities together

03.01.2025 18:25 πŸ‘ 63 πŸ” 11 πŸ’¬ 4 πŸ“Œ 0

🎊 Happy new year folks! 🎊

πŸ‘€ ready to start working on that paper deadline? πŸ‘€

03.01.2025 09:27 πŸ‘ 17 πŸ” 6 πŸ’¬ 1 πŸ“Œ 0
David Poole - Research

I have created a list of open research problems from my work and from writing our textbook. Something to work on for the new year! Comments please! #relationallearning #causality #AI #aifca #starAI www.cs.ubc.ca/~poole/resea...

31.12.2024 19:16 πŸ‘ 6 πŸ” 3 πŸ’¬ 2 πŸ“Œ 0
Artificial Intelligence: Foundations of Computational Agents

For a textbook introduction to agentic AI see our recent Cambridge University Press AI textbook. We are much less bullish about LLMs and don't use the term "agentic". artint.info

30.12.2024 18:15 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Google AI reporting β€œno drug interactions” for two drugs that definitely have drug interactions. When can we collectively decide the AI experiment is over?

24.12.2024 22:46 πŸ‘ 169 πŸ” 30 πŸ’¬ 8 πŸ“Œ 1
UK plans to favour AI firms over creators with a new copyright regime One of the biggest uncertainties in the ongoing AI revolution is whether these systems can legally be trained on copyrighted data. Now, the UK says it plans to clarify the matter with a change to the ...

The UK is consulting on plans to favour AI firms over creators when it comes to copyright. For @newscientist.bsky.social, I wrote about what that means www.newscientist.com/article/2461...

17.12.2024 11:55 πŸ‘ 6 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0

Pleased to share the latest version of my paper with Arthur Spirling and @lexipalmer.bsky.social on replication using LMs

We show:

1. current applications of LMs in political science research *don't* meet basic standards of reproducibility...

17.12.2024 19:50 πŸ‘ 438 πŸ” 164 πŸ’¬ 18 πŸ“Œ 21
Screenshot of Table of Contents (Part 1)

Contents
1 Introduction 217
2 Positionality 221
3 Overview of Risks and Harms Associated with Computer
Vision Systems and Proposed Mitigation Strategies 223
3.1 Representational Harms . . . . . . . . . . . . . . . . . . . 223
3.2 Quality-of-Service and Allocative Harms . . . . . . . . . . 229
3.3 Interpersonal Harms . . . . . . . . . . . . . . . . . . . . . 237
3.4 Societal Harms: System Destabilization and Exacerbating
Inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . 245
4 Frameworks and Principles for Computer Vision
Researchers 266
4.1 Guidelines for Responsible Data and Model Development . 267
4.2 Measurement Modeling . . . . . . . . . . . . . . . . . . . 271
4.3 Reflexivity . . . . . . . . . . . . . . . . . . . . . . . . . . 273
5 Reorientations of Computer Vision Research 276
5.1 Grounded in Historical Context and Considering
Power Dynamics . . . . . . . . . . . . . . . . . . . . . . . 276
5.2 Small, Task Specific . . . . . . . . . . . . . . . . . . . . . 279
5.3 Community-Rooted . . . . . . . . . . . . . . . . . . . . . 280

Screenshot of Table of Contents (Part 2)

6 Systemic Change 285
6.1 Collective Action and Whistleblowing . . . . . . . . . . . . 285
6.2 Refusal/The Right not to Build Something . . . . . . . . . 287
6.3 Independent Funding Outside of Military and Multinational
Corporations . . . . . . . . . . . . . . . . . . . . . . . . . 289
7 Conclusion 291
References 293

Dear computer vision researchers, students & practitionersπŸ”‡πŸ”‡πŸ”‡

Remi Denton & I have written what I consider to be a comprehensive paper on the harms of computer vision systems reported to date & how people have proposed addressing them, from different angles.

PDF: cdn.sanity.io/files/wc2kmx...

16.12.2024 16:52 πŸ‘ 387 πŸ” 165 πŸ’¬ 8 πŸ“Œ 10

Josh Tenenbaum on scaling up vs growing up and the path to human-like reasoning #NeurIPS2024

15.12.2024 18:14 πŸ‘ 82 πŸ” 7 πŸ’¬ 1 πŸ“Œ 3

2) how frustrated student (university) researchers are. In discussions at poster sessions, many said they couldn't attempt to answer scientific questions because they couldn't afford the compute time. This is a problem for the future of the field, and is one way big-data AI will hit a wall.

16.12.2024 04:33 πŸ‘ 5 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0

Two things I learned from #neurips2024 1) transcriptions are still terrible. The simultaneous transcriptions of the talks didn't take into account the vocabulary of the papers being presented. It's probably the fault of one-size-fits-all language models. There were slides with the vocabulary... 2/

16.12.2024 04:28 πŸ‘ 3 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Artificial Intelligence: Foundations of Computational Agents

2/ decision networks, MDPs, reinforcement learning, multiagent systems, logic programming, knowledge graphs, relational learning. A release candidate for version 1.0, so comments please! Based on our AI textbook artint.info See aipython.org @davpoole.bsky.social and @alanmackworth.bsky.social

10.12.2024 03:24 πŸ‘ 4 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
AIPython

We are pleased to announce the latest version of AIPython.org: open-source, runnable pseudocode (in Python) for all your favorite AI algorithms, including search, CSPs, logic, planning, supervised machine learning, neural networks, graphical models, unsupervised learning, causality, /2

10.12.2024 03:23 πŸ‘ 15 πŸ” 4 πŸ’¬ 1 πŸ“Œ 1
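"Runnable pseudocode" means code kept close to the textbook's algorithm descriptions. As an illustration of the style for the first topic listed, here is a minimal breadth-first graph search; this is a hypothetical sketch with a made-up graph, not code taken from AIPython itself:

```python
from collections import deque

# Breadth-first search: frontier is a FIFO queue of paths, and an
# explored set prevents revisiting nodes. The graph is hypothetical.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["G"], "E": ["G"]}

def bfs_path(start, goal):
    """Return a path with the fewest edges from start to goal, or None."""
    frontier = deque([[start]])
    explored = {start}
    while frontier:
        path = frontier.popleft()      # oldest path first => shortest found first
        node = path[-1]
        if node == goal:
            return path
        for nbr in graph.get(node, []):
            if nbr not in explored:
                explored.add(nbr)
                frontier.append(path + [nbr])
    return None

print(bfs_path("A", "G"))  # a 3-edge path such as A -> B -> D -> G
```

Swapping the deque for a stack gives depth-first search, and for a priority queue ordered by cost gives lowest-cost-first search, which is the kind of substitution such pseudocode is meant to make easy to see.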

Human minds, intelligences and states of consciousness are beautifully diverseβ€”I just don't buy that the right approach for AI is "all you need is more compute", pretending AI has an objective view from nowhere, and being owned by a small number of homogeneously white-bread tech firms

07.12.2024 20:40 πŸ‘ 39 πŸ” 4 πŸ’¬ 1 πŸ“Œ 1

Truly the stupidest idea I have ever seen in journalism.

06.12.2024 16:12 πŸ‘ 4 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

OpenAI and Anduril.

Unreliable generative AI for warfare.

What could possibly go wrong?

05.12.2024 01:12 πŸ‘ 42 πŸ” 8 πŸ’¬ 10 πŸ“Œ 2
Cover of book "Artificial Intelligence: Foundations of Computational Agents" (3rd Edition) by David L. Poole and Alan K. Mackworth, background of green stained neurons

β€œArtificial Intelligence: Foundations of Computational Agents” (3rd Edition) by @davpoole.bsky.social and
@alanmackworth.bsky.social is available in hardcopy and with full, free, open access online: artint.info. Check out the endorsements: artint.info#endorsements

18.11.2024 22:19 πŸ‘ 9 πŸ” 2 πŸ’¬ 1 πŸ“Œ 0