Fritz's team has been taking notes from Zohran's ads and I am here for it
There's a PhD thesis's worth of things to unpack here within the "self-hosted" ecosystem - so many software projects that don't have the engagement imperative behind their development, and I find that they can be a little harder to use but are infinitely more respectful of my intentions.
You inspired me to write this up, basically collecting what I tell people one on one over and over again... www.cs.uic.edu/~ckanich/sig...
My proposal: as the first project, student groups would propose different policies for ChatGPT use in their final project, including radically restructuring the final project. Because things might have to change, deeply.
And then they would argue, and vote, and I would do whatever they said.
Don't get me started on the 8 bus! One of the highest-volume routes is suspiciously absent from the frequent network.
For a very physical representation, look up at the beginning of Terminal A at MDW: www.flickr.com/photos/artfa...
A corollary to this is that the mental difficulty of WRITING an N-page paper might end up being lower overall than generating and then fact-checking every last word, even if getting something that LOOKS complete can be done in seconds. That's what's so insidious about it.
Students would probably generate the screenshot text, then you're back at square one with respect to Brandolini's law. Call me crotchety, but this goes back to how socially unacceptable lying should be - I don't care if you used AI every step of the way, UNTIL you pass off a hallucination as true.
Zed for sure. There's still AI stuff to turn off, the debugger isn't in yet and it doesn't have the python notebook functionality that vscode has, but I like it.
You could also take the "Metroidvania approach" where you introduce the students at a superficial level to the fully correct built out solution, then take it all away and build back up to that spot. That's roughly what I do in my secure web dev class.
This is going to hurt American competitiveness and result in fewer scientific advancements, fewer chances to train the next generation of scientists, and a lower quality of life for everyone. We need to reverse these harmful changes to our critical scientific infrastructure.
I agree in principle - to accomplish it, we should make "Chicagoland" a more prominent identifier. They do this well in Northern California with "Bay Area."
This is one of the worst violations of research ethics I've ever seen. Manipulating people in online communities using deception, without consent, is not "low risk" and, as evidenced by the discourse in this Reddit post, resulted in harm.
Great thread from Sarah, and I have additional thoughts. 🧵
One of the many reasons to fix things here in the USA rather than search for greener pastures.
Even accepting the premise that AI produces useful writing (which no one should), using AI in education is like using a forklift at the gym. The weights do not actually need to be moved from place to place. That is not the work. The work is what happens within you.
For novices, I often describe "writing more than 2-3 lines of code without testing it" as "Wile E. Coyote running off the side of a canyon." You think you're making progress, until you look down. LLMs crank this phenomenon up to 11. Hard to see a non-abstinence way to learn in this environment.
While I can see LLMs in general as being potentially useful for CS education, copilot-style autocomplete is unequivocally harmful to learning. Reading this paper feels like watching a horror movie, where I'm screaming "turn it off! just turn it off!" at the screen constantly.
Academia isn't perfect, but it offers a rare space to pursue knowledge for its own sake. If research were fully privatized, only profit-driven questions would get asked. Yet many of the most transformative discoveries began as curiosity-driven inquiries whose value wasn't clear for years.
I may need to take a Metra trip just to see the busway!
nope, I think it's just a garden variety "that administrator made it a priority, so it happened" type situation.
Overall cheating, not yet; we did, however, draft a genAI policy for graduate theses. I had to advocate pretty hard to prevent any prescriptive requirements and to focus on academic integrity as an agreement with the advisor/program on how the thesis is completed.
Image illustrating the stages of AI impact, combining a flowchart and a detailed table, with a caption and citation. Top flowchart: three boxes showing "Invention" leading via an arrow to "Innovation", which leads via an arrow to "Diffusion", with a curved arrow looping from "Diffusion" back to "Innovation" to indicate a cycle. The table below is color-coded to match the flowchart stages:

| Stage | Example | Appropriate metrics | Speed limits |
| --- | --- | --- | --- |
| Methods / capabilities (Invention) | LLMs | Benchmarks | Herding; "last mile" problem |
| Products / applications (Innovation) | AI code editor | Uplift studies | Capability-reliability gap; dependence on diffusion |
| Early adoption (Diffusion) | Individuals creating simple apps using AI | Surveys | Learning curves; safety |
| Adaptation (Diffusion) | Retraining software engineers to emphasize AI skills | Economic indicators | Organizational changes; laws and norms |

Figure 1 caption: "Like other general-purpose technologies, the impact of AI is materialized not when methods and capabilities improve, but when those improvements are translated into applications and are diffused through productive sectors of the economy. There are speed limits at each stage." Citation (bottom right): "4. Jeffrey Ding, 2024. Technology and the Rise of Great Powers: How Diffusion Shapes Economic Competition. Princeton University Press, Princeton."
A lot to chew on in this fantastic piece by @randomwalker.bsky.social and @sayash.bsky.social knightcolumbia.org/content/ai-a... The models now exist as "Normal Technology"; how we use them and how society adapts to them is where the impactful and interesting work is happening.
Agreed - my question is, if the traditional path is to learn/practice a, then b, c, d, ... then z to build actual expertise, will the path in the presence of AI be to do everything the same but not get distracted by the answer-generation machine, or something else?
This is the best metaphor I have seen that may actually resonate with the students I teach re LLM usage.
RETVRN
The Aussies understand this - probably one of my favorite takeaways from a Bluey episode
"From April 14-18, select UBC graduate programs at UBC Vancouver will re-open their applications for US citizens to be considered for Sept 2025 or Jan 2026 entry - they are ready to provide quick admissions decisions for these applicants" #gradstudent #gradschool
www.grad.ubc.ca/us-applicant...
The advice I'm hearing is that students on visas should be checking SEVIS daily. Ideally, students' colleges & universities could do that for them - if they have the capacity.
Fantastic work by @tressiemcphd.bsky.social as always www.nytimes.com/2025/03/29/o...
reminds me of the @authorjmac.bsky.social quote:
I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.
if a computer is a bicycle for the mind, LLMs are zero gravity for the mind: seems magical, allows you to do things you never thought possible, causes massive muscle atrophy if you don't religiously exercise, and over the long term causes irreparable bodily harm