Unfortunately there is a significant tension between (i) the increasing partisan attachment of academia to the left and (ii) the expectation that broad audiences will listen to us.
Working on a paper on this with @anthlittle.bsky.social and @carloprato.bsky.social. Would welcome suggestions for papers that explore a similar point.
Trump is very popular w/ his base, *and*, due to single-member districts, can inflict an electoral penalty even if they force him out. In party-list systems an ousted leader can't punish incumbents to the same degree.
Also with life
They get a raise on account of broader reach of their field. We get a cut on account of less unique expertise.
Powerful people at my university actually did propose changing the operational definition of academic merit so that prioritizing DEI objectives was explicitly required. I think that's a serious mistake for the quality of our university, and it's not because I only want to hire whites.
I'm sure there are (bad) people who think this, but this post is not a good depiction of actual objections to changes in hiring criteria over the past 10 years. The first time in my life I heard an explicit identity preference in hiring, it was for anyone but a white man. That really happened, and it's bad.
The judge's remark seems directly relevant under the proscription on "displacement of a US worker" in the H-1B visa act. We can't create new research without a supply of competent faculty, but why can't we train them ourselves? Given the underemployment of large numbers of Americans, we seem to be missing an opportunity.
Hello, your thread references case law on foreign citizens *in* the US. What does this imply, and what is the relevant law, on requirements for content neutrality *at the moment of entry*? For example, the US cannot remove a foreign citizen for saying "death to America," but can it bar their entry?
Let me add a poll to help you aggregate opinions
o Yes
o Hell yes
It's good to see in this study that the law is upheld by ground-level admins. I'm less sure how to tell, in this setting, whether it's upheld because the admins intrinsically want to uphold it or because discriminatory action is deterred by extrinsic incentives.
Is it your contention that the potential for monitoring had no effect on the actions taken?
Viewpoint discrimination is illegal at any public u. and probably at many private u's in this sample. Ordinarily I'd think that monitoring affects the propensity to do illegal things.
Responsible parties theory posited responsibility to a system, which is why it was always a waste of time. If the system doesn't provide incentives for the required elite behavior, then it's not a theory, it's a wish list. It's like theorizing that everyone should have a pony.
I'm sorry, this is not true. It actually is unfair, in a transparent sense, to say "we will not hire a white man for this position," and that is a real thing that people said and say.
DEI is not just one policy, and not all of those policies are good; I don't think saying that is racist or anti-integration.
Or, for that matter, that we would just try to read Kagan out of existence while she's on the court.
I did not expect McNollgast to be the progressive position and Kagan (2001) to be the conservative one
I've learned to set aside labels like "descriptive" and "inference" in pol sci because they say little about the action and much about the discipline.
Descriptive = low status
Causal = high status
Measurement = low status
Inference = high status
The analytic distinctions are small.
Thanks for replying. I don't read enough surveys to see anything other than "how do we affect Y," where Y is whatever: support for a minority candidate, support for nuclear war in defense of allies. If everyone agrees this is a meaningful causal claim then I'm good. But it doesn't sound like it.
I lament this state of affairs, but I can't lament it more for the survey experimenters than for the other a-theoretical identificationists out there.
Therefore the claim "this type of experiment is actually measuring a preference, not inferring a causal quantity" is not true. It is doing both of those things. "Preference" is simply the word we give to the effect of choice attributes on choices under an assumed theory of respondent behavior.
I see erudite, thoughtful posts like this and I think there's just no chance I will ever understand causal inference. An identification problem *is* a measurement problem, and vice versa.
Survey experiments are both inferring and measuring causal effects, of the prime on the survey response.
How much is the setting we study "like" the setting we want to affect? There's no way to measure this ex ante. But I see no principled reason to claim it's in general worse with survey experiments than with other kinds.
Now E()>0 in a survey expt doesn't mean E()>0 in real-world settings of interest, but that's true of all credibly identified findings and is a problem that already has a name: external validity.
That makes sense.
To your larger point, it seems to me that for many scholars there *isn't* a theoretical quantity of interest besides E(Y1-Y0). It is like discovering "drugs" that move politics: it doesn't matter what the channel of effect is, it just has to work. E() > 0 means it works.
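The E(Y1-Y0) point can be made concrete with a minimal simulation (hypothetical numbers, not from any study discussed here): under random assignment in a survey experiment, the difference in mean outcomes between primed and control respondents estimates the average effect of the prime, whatever the channel.

```python
# Minimal sketch: difference in means recovers E(Y1 - Y0) under random
# assignment. All parameters (baseline, tau, n) are invented for illustration.
import random

random.seed(0)

n = 10_000
treated, control = [], []
for _ in range(n):
    y0 = random.gauss(0.50, 0.10)   # baseline support, arbitrary units
    tau = 0.05                      # true effect of the prime (assumed)
    if random.random() < 0.5:       # coin-flip assignment identifies the effect
        treated.append(y0 + tau)    # primed respondent reveals Y1 = Y0 + tau
    else:
        control.append(y0)          # control respondent reveals Y0

ate_hat = sum(treated) / len(treated) - sum(control) / len(control)
print(round(ate_hat, 3))  # close to the assumed tau of 0.05
```

Nothing in the estimate says *why* the prime moved the response; in the "drugs that move politics" view, E() > 0 is the whole deliverable.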
Why does it matter, the order in which the assumptions are stated and the results obtained? If I only think of the theoretical model that E(.) maps to after the experiment, it still maps there.
You were clear enough, I'm just curmudgeonly :)
But re: optimality, yes, I think we can all see that the field's demand for causal claims is distorting what we do and how we learn. People like John Huber noticed this distortion long ago.
Their problem is that you can't necessarily use the measured effect to inform any decision you might make. But that is also no different from plenty of credibly identified work.
I don't see how you can exclude survey experiments from the credibility club in a principled way. They are very clear about the assumptions you must accept to believe that the prime has a causal effect on the outcome, which I thought was the meaning of credibility.
This all just reads to me like policing club boundaries.
I don't think there is a meaningful distinction between "measurement tools" and research designs for causal inference. The latter are measurement tools too. The question is simply: what are you measuring, and how does it help you see the effects of actions you might take in the future?