Published in Hastings Center Report. @purduepolsci.bsky.social @GRAILcenter.bsky.social
onlinelibrary.wiley.com/doi/abs/10....
With Daniel Susser, Sara Gerke, Laura Y. Cabrera, I. Glenn Cohen, & team
Synthetic data should complement real-world data, not replace it. The choice ahead: Will we use this technology to bridge healthcare gaps or deepen inequities?
For governance teams & researchers working on AI in healthcare: curious what you're seeing?
#SyntheticData #AIinHealthcare #Bioethics
We argue synthetic data isn't a magic fix; it's a powerful tool that demands robust safeguards.
Key needs:
• Standards for accuracy & reliability
• Privacy protections
• Transparent policies
• Continued investment in diverse, real-world datasets
But the risks are real:
• Accuracy issues for rare-disease algorithms
• Potential privacy leaks despite the data's synthetic nature
• Bias amplification from flawed source data
• Regulatory gaps when synthetic data's "non-identifiable" status is exploited
• Justice concerns about sidelining real-world diversity
What synthetic data promises:
• Privacy protection through artificial datasets
• Inclusive modeling of rare diseases & underserved groups
• Enhanced AI training capabilities
• Scalable research opportunities
The potential is substantial.
Enter synthetic data: AI-generated datasets that mimic real-world patterns without containing actual patient information
Sounds perfect: private, inclusive, scalable. But our analysis in Hastings Center Report reveals significant ethical complexities.
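To make "mimic real-world patterns without containing actual patient information" concrete, here is a minimal toy sketch in Python: it fits a simple parametric model (a multivariate Gaussian) to an invented patient table and samples a brand-new synthetic cohort from the fitted model. All variable names and values are hypothetical illustrations; real synthetic-data generators use far more sophisticated models, and this is not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" patient table: age, systolic BP, cholesterol (invented values).
real = rng.multivariate_normal(
    mean=[55.0, 130.0, 200.0],
    cov=[[80.0, 20.0, 15.0],
         [20.0, 120.0, 30.0],
         [15.0, 30.0, 400.0]],
    size=500,
)

# Fit a simple parametric model to the real data...
mu = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# ...then sample a fresh synthetic cohort from the fitted model.
# The synthetic table mirrors the real one statistically but
# contains no actual patient record.
synthetic = rng.multivariate_normal(mu, cov, size=500)

print(synthetic.shape)  # (500, 3)
print(synthetic.mean(axis=0))
```

The privacy question in the thread is exactly about the gap this sketch glosses over: richer generators can memorize and leak individual records even when no row is copied verbatim.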
The challenge: Healthcare research is data-rich but insight-poor.
Privacy laws, demographic gaps, and underrepresentation of rare conditions prevent researchers from fully utilizing available EHRs, public datasets, and lab studies
Synthetic data promises to revolutionize healthcare research: solving privacy issues, modeling rare diseases, expanding equity. But it's also an ethical minefield that demands careful navigation 🧵
onlinelibrary.wiley.com/doi/abs/10....
#8 For policy practitioners, governance teams, and org leaders: curious what you're seeing in your hiring? Paper below:
@purduepolsci.bsky.social @GRAILcenter.bsky.social
doi.org/10.1109/TTS...
#7 AI ethics and governance aren't "nice-to-haves"; they're becoming non-negotiable pillars of responsible AI development. As industries adopt AI at scale, these roles will define how society benefits from this technology.
#6 What's driving alignment? New AI regulations demand compliance. Employers recognize public trust is critical for AI adoption. Universities race to create relevant programs. More than 100K professionals needed annually
#5 Finance and Information industries dominate demand, with AI ethics/governance roles growing fastest there. Highly regulated sectors can't afford ethical lapses as AI adoption scales π¦
#4 Demand is surging. AI ethics roles grew from 35K in 2018 to 109K in 2022. Governance roles hit 96K in 2022. Even as overall AI hiring dipped in 2023, these roles remained stable. Results suggest sustained market need.
#3 Key finding: AI ethics ≠ AI governance. Employers seek distinct skills:
• Ethics: data privacy, bias mitigation, critical thinking
• Governance: risk management, policy development, leadership
Both require interdisciplinary knowledge
#2 Our study analyzed 4.4M+ AI-related job postings to uncover trends in demand for AI ethics (fairness, transparency) and AI governance (regulatory compliance, risk management) skills. Published in IEEE Transactions on Technology and Society
#1 We're seeing an "AI skills gap": a shortage of professionals equipped with both technical expertise AND the ability to handle ethical dilemmas and regulatory challenges. AI is transforming industries, but with great power comes great responsibility.
The AI job market is evolving beyond coding. Employers now demand AI ethics and governance skills at unprecedented rates. Our analysis of 4.4M+ job postings from 2018-2023 reveals what's driving this shift 🧵
doi.org/10.1109/TTS...
7/7 Curious what you think: does this match what you're seeing in AI education assessment?
For researchers and educators working on AI literacy:
www.sciencedirect.com/science/art...
6/7 Next steps: validation beyond Western university samples, workplace applications, and cross-cultural AI literacy research.
With Arne Bewersdorff and Marie Hornberger. Thanks to Google Research for funding a portion of this work
@purduepolsci.bsky.social @GRAILcenter.bsky.social
5/7 Why this matters for AI governance:
Scalable assessment tools are essential for evaluating education programs, informing policy decisions, and ensuring citizens can navigate an AI-driven world.
AILIT-S makes systematic evaluation feasible.
4/7 Best use cases:
✔️ Program evaluation
✔️ Group comparisons
✔️ Trend analysis
✔️ Large-scale research
✘ Avoid for individual diagnostics
The speed enables broader participation and better population-level insights.
3/7 Results show AILIT-S delivers:
• ~5 minutes completion time (vs 12+ for the full version)
• 91% congruence with the comprehensive assessment
• Strong performance for group-level analysis
Trade-off: slightly lower individual reliability (α = 0.61 vs 0.74)
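The α values in this post are Cronbach's alpha, the standard internal-consistency statistic for a test: the ratio of item-level variance to total-score variance, rescaled by the item count. As a sketch on simulated data (not the study's dataset), here is how it can be computed for a 10-item test:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Simulated responses: a latent ability plus independent item noise,
# thresholded to right/wrong. All numbers here are invented.
rng = np.random.default_rng(1)
ability = rng.normal(size=(300, 1))
scores = (ability + rng.normal(scale=1.6, size=(300, 10)) > 0).astype(float)

print(round(cronbach_alpha(scores), 2))
```

A lower α mainly hurts per-person score precision, which is consistent with the thread's advice to use AILIT-S for group-level comparisons rather than individual diagnostics.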
2/7 AILIT-S covers 5 core themes:
• What is AI?
• What can AI do?
• How does AI work?
• How do people perceive AI?
• How should AI be used?
Special emphasis on technical understanding, the foundation of true AI literacy.
1/7 The challenge: existing AI literacy tests take 12+ minutes, making them impractical for large-scale assessment.
Our solution distills a robust 28-item instrument into 10 key questions, validated with 1,465 university students across the US, Germany, and UK.
How do you measure AI literacy in under 5 minutes? 🧵
We developed AILIT-S, a 10-item test that achieves 91% congruence with longer assessments while remaining practical for real-world use.
Published in Computers in Human Behavior: Artificial Humans
www.sciencedirect.com/science/art...
Published in Computers and Education: Artificial Intelligence with my brilliant collaborators & PhD students Lucas Wiese and Indira Patil
www.sciencedirect.com/science/art...
@purduepolsci.bsky.social @GRAILcenter.bsky.social
AI ethics education has grown rapidly but is still finding its footing.
By focusing on interdisciplinary teaching, hands-on learning & better assessments, we can prepare the next generation to build AI systems that serve humanity responsibly.
What needs to happen:
• Develop tools measuring the behavioral impact of ethics education
• Integrate ethics across all levels (K-12 to university)
• Fund initiatives prioritizing formative assessments
• Align assessments with real-world skills
Major challenges we identified:
• Keeping up with AI's rapid evolution
• Teaching abstract concepts to diverse audiences
• Shortage of trained educators
• Misalignment between teaching goals & assessment methods
The assessment gap: programs aim to develop ethical reasoning & communication skills, but few measure whether students are actually learning.
Summative assessments dominate (grades), but formative feedback, the kind that drives growth, is rare.