This paper is being honored with a Best Short Paper Award at IEEE VIS 2025! I'm truly grateful for the recognition and looking forward to discussing this work at the conference. #ieeevis
Decision theory informs us more about what to show than how to show (or how people interpret vis in practice), so I think theory building efforts need to synthesize it with theories of perception, cognition, and social/cultural dynamics.
My suggestion about using decision theory as a deductive framework during design studies is one path toward codifying what we know about decision context in ways that may be useful for making testable predictions about when a particular vis design is likely to be effective.
Thanks Carlos, I didn't realize Gordon was on here.
Your work on AVD was a huge inspiration for this paper. I'd be curious to hear your thoughts on the connections we draw between disclosure and AVD vulnerabilities.
Ultimately, this paper is about the epistemology of data communication. I close with reflections on how to develop a more robust logic of generalization about decision support, and how social norms demanding that research yield guidelines that transcend context are an obstacle.
This work demonstrates the conceptual value of decision theory in visualization research and practice, and provides a primer for unfamiliar readers. My hope is that this will increase the adoption of ideas from decision theory in the visualization community.
I draw on decision theory to analyze the heterogeneity in decision problems that are studied by visualization research. By comparing examples, I call attention to the dimensions on which decision problems vary and when we can(not) expect results to transfer to different settings.
The dominant logic of generalization in visualization research (often implicitly) holds a strong assumption that efficacy of decoding is relatively context invariant; however, much modern research problematizes this assumption. Where does this leave us?
In visualization research and practice, we care a lot about supporting decision-making. However, it's not always clear how findings from empirical research generalize to various decision contexts.
Announcing my forthcoming short paper at #ieeevis! In this solo-author work, I examine and critique the logic of generalization about decision support in visualization research. arxiv.org/abs/2508.06751
This ambitious conceptual/theoretical work is Krisha's first paper and my first PhD-student led paper at UChicago. It's a big milestone for both of us, and I think she's done a wonderful job! Check out the paper and the talk at #ieeevis for more. arxiv.org/abs/2508.08383
I'm extremely excited to share this new paper out of the Data Cognition Lab! In it @krisha-mehta.bsky.social, Gordon Kindlmann, and I reframe visualization as a mechanism for data disclosure and develop a vocabulary for how visualization design induces loss on underlying data signals.
[Photo: office desk without any personal items or decor]
Today, along with 2,000 other NIH employees, I had to clear out my office.
It was truly the honor of my life to work with such incredibly passionate people focused on improving human health. I've never experienced a more positive culture where *everyone* cared about their job and serving others.
I wish more HCI research was this careful about measurement. If we do turn the corner, I think we're gonna find out that a lot of our study results don't mean what the authors originally said they meant.
Thanks to the wonderful folks at @dsi-uchicago.bsky.social for making this video!
I also use that data set, but I ask students to encode all the variables in the table, which rules out a CDF because it doesn't show the bacteria names. Unless they add the names with annotations. That could be cool. Avoiding occlusion and crowding would be a challenge.
Such a disappointment. It's illustrative of a deep political problem on the left, a sort of allergy to disagreement or information that goes against one's priorities. This just isn't a workable approach in a democracy, especially not for a "big tent" party.
What tech company will be the first to stand up and say that gutting the NSF is bad actually?
The economic argument seems pretty clear
Less NSF --> fewer PhD students --> fewer researchers --> smaller AI tech pipeline --> slower progress --> less competitive globally
Most of your life won't go as planned. Stop placing your happiness, health, and self-actualization on the horizon.
My take is that the "pause" rhetoric is bullshit. They want to defund the universities, full stop. Idk why people are in denial that this is the game plan, to eviscerate centers of power on the left starting with educational institutions and government agencies staffed by lefty technocrats.
"The executive orders make it quite clear that certain DEI activities are considered to violate current law, and we cannot fund anything that includes activities deemed unlawful." Okay, but until a few weeks ago, promises about DEI were required. Idk how any of our grants survive this litmus test.
This is what I thought you were saying, that true marginal density estimation requires different samples, but I wasn't completely sure. Thanks!
I think I'm guilty of misusing Monte Carlo integration in this way. Not 100% sure what to do differently, but I appreciate your explanation of the issue.
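For readers following this exchange: the (valid) textbook use of Monte Carlo integration here is estimating a marginal density by averaging the conditional density over draws from the prior/posterior. Below is a minimal sketch in a toy conjugate model; the model, variable names, and numbers are my own illustration, not from the thread, and it does not reproduce the specific misuse being discussed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: theta ~ N(0, 1) and x | theta ~ N(theta, 1),
# so the exact marginal is p(x) = N(x; 0, 2).
theta = rng.normal(0.0, 1.0, size=100_000)  # draws from p(theta)

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at x (vectorized)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Monte Carlo estimate of the marginal density at a fixed point x:
#   p_hat(x) = (1/N) * sum_i p(x | theta_i)
x = 0.0
p_hat = normal_pdf(x, theta, 1.0).mean()

# Analytic marginal N(0, 2) evaluated at x = 0, for comparison (≈ 0.282).
true_p = normal_pdf(x, 0.0, np.sqrt(2.0))
```

The key point echoed in the replies above: the samples used to form this average must come from the distribution being integrated over; reusing samples that were already conditioned on, or selected by, the quantity you are estimating is where the misuse creeps in.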
Somewhere in the U.S., there's a scientist staring at their NSF/NIH grant application wondering why they bother. This post is for you. Science and society both need you. Hang in there and know there is a whole community supporting you.
A great thread if youβre into visualization for model interpretability and Bayesian stats!
I would like to know more about this too because it seems like this is how most people use posterior samples in practice.
Ah, thanks! It's not at all clear that that icon is a toggle button and not just some shiny logo put there to taunt you.
Guarding our creative authenticity against the proliferation of trite AI slop will be a massive humanist project for the coming years.
On principle, I don't want AI assistance on because I don't want there to be any mistaking the provenance of my words and ideas. If you read my writing, have no doubt that I wrote every word with intention.
Anyone know how to turn off the AI assistance feature in Overleaf? I didn't want this. I didn't consent to this. I find the suggestions distracting in a way that basically makes the tool unusable for deep work. Am I alone in feeling this way?
Thanks for pointing this out.