We hope this prompts thinking and conversation around the fine-grained alignment of what we ask students to do, what we actually assess, and how we link that to what we value. Thoughts welcome!
We also talk about the slipperiness of assessment design, where, e.g., we might intend to assess performance or process but end up assessing a product (e.g. a written account of process, or PowerPoint slides as part of a presentation), which entails different capabilities from those intended.
We discuss the indirectness of evidence of learning. For example, product assessments don't tell us about how work was done or who contributed what. Process assessments don't tell us about student agency in making choices. Performance assessments only give a snapshot of situated capability.
I wrote this open access paper with Dave Boud and Phill Dawson to help educators align assessments with the kinds of learning they care about.
We talk about the 4Ps (product, process, performance & practices) and what these different forms of evidence can tell us.
www.tandfonline.com/doi/full/10....
This paper was retracted in May for being too saucy (or because it was published early by mistake). Finally, it's out (again)!
asmepublications.onlinelibrary.wiley.com/doi/10.1111/...
A parody of AI-infused healthcare education that feels less farcical each day. With @insideoutanatomy.bsky.social
Thanks Yurgos!
Well done, Ian. Amazing.
Thanks! I feel like I should have figured that out 🤦‍♂️
Is there a quicker way to follow people back on Bluesky without opening each profile?
I feel that we inevitably do assess some different things with an assessment-for-learning lens, because feedback gives us a chance to say more about the things we care about. But I am wondering, philosophically perhaps, if it is possible to assess some things "for learning" that are impossible to assess "of learning".
This probably means you are doing it right. Sorry.
The awful truth is you don't actually find the right themes; you just stop at some point and go with the ones you have at that point. And then those might even change as you "write it up". But what you are doing right now, that's the process (IMO).
Can you assess things via assessment for learning that you can't assess via assessment of learning?
Raspberries cost about $2 per second at my house.
The panellists challenged us to slow down, think relationally, and critically examine the systems shaping education. From risks of homogenisation to the importance of sovereignty and refusal, Tristan and Tamika offered thoughts about how institutions might navigate AI ethically and inclusively.
How can we respect Indigenous Knowledges in responding to the challenges of AI in education?
Watch Prof. Tristan Kennedy DVC Indigenous (Monash), Dr Tamika Worrell (Centre for Critical Indigenous Studies, Macquarie), chaired by Prof. Claire Palermo.
teaching-community.monash.edu/respecting-i...
An old book, for PDSE standards (published in 2021!) yet so timely and relevant today :) link.springer.com/book/10.1007...
To me, all of this points to the need for educators to be trying to figure out this evolving educational context with students. Supportive, careful exploration and critique. I don't think extremes of resistance or adoption will help students where they are at. But that's up for debate!
We also have focus group papers in the pipeline, looking at how students are thinking about GenAI, their questions and positions. Complex, diverse, entangled in rich conditions, contexts and beliefs, is what we're seeing. A lot of them are using GenAI even as they are worried about using it.
...Students were concerned about GenAI feedback reliability + contextual/disciplinary expertise, and valued the relational aspect of teacher feedback.
GenAI / teacher feedback serve different functions. It's not either/or.
New open access paper from AIinHE project, led by Michael Henderson www.tandfonline.com/doi/full/10....
Survey of 7k students from 4 Aus unis. Students valued GenAI feedback's ease of access, timeliness, volume and understandability, + felt it was less risky than seeking teacher feedback. But...
New chapter in "The Remaking of Memory in the Age of the Internet and Social Media", edited by Qi Wang and @andrewhoskins.bsky.social.
academic.oup.com/book/59599/c...
It critiques simplistic claims about effects of technology on memory. DM if you can't access it.
2024 was a busy year; I'm pretty sure I've forgotten a bunch of things I did.
It's a good paper, you should be proud.
D'oh, wrong James Lamb! Sorry to both Jameses.
New open access paper on the complexities of hybrid teaching, learning spaces, materiality, choreography and improvisation, all that good stuff! Led by @jameslamb.bsky.social with @jenrossity.bsky.social and @joenote.bsky.social
www.tandfonline.com/doi/full/10....
So far, I have thought about the following:
- apprenticeship
- lab work
- collaborative projects where educator is part of the group
- supervised practice
- co-authoring with students
- peer-led teaching sessions
- coaching (where the coach has to collaborate with the student, as in tennis)
Hi all. I'm thinking about assessments where the assessor/educator works alongside students and gets a sense of their learning from collaborative involvement.
Do you have examples (real or hypothetical) that you can offer for discussion?
In terms of declarations made in advance of an outcome, the best we can say is that something is "a likely solution", taking into account the people, context and the conditions we believe to be in play.
It might not always solve all related problems. Which means it is only a solution to the problems it has already solved (rather than future, related ones), and it is probably a contingent solution, reliant on other factors.
If it has solved a problem, it's probably in combination with other forces and influencing factors, in conditions conducive to the outcome that happened.
And it's not inherently authentic, either.