
Behnam Karami

@drbehnamkarami

Postdoc researcher in Cognitive Science

41 Followers · 159 Following · 14 Posts · Joined 26.11.2024

Latest posts by Behnam Karami @drbehnamkarami


5️⃣ Morality is encoded in modular subspaces.
PLSC reveals orthogonal latent dimensions corresponding to individual foundations, promising for interpretability and alignment.
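The PLSC step mentioned above can be sketched as an SVD of the cross-covariance between activations and foundation scores: the right singular vectors are mutually orthogonal latent dimensions (saliences). This is a minimal sketch on synthetic data — the matrices `X` and `Y`, their sizes, and the noise level are made-up stand-ins, not the preprint's actual pipeline.

```python
# PLSC sketch: SVD of the cross-covariance between activations X and
# foundation scores Y gives orthogonal latent dimensions. Synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n, dim, k = 200, 32, 5                         # samples, activation dims, foundations

Y = rng.normal(size=(n, k))                    # stand-in foundation scores
W = rng.normal(size=(k, dim))
X = Y @ W + 0.1 * rng.normal(size=(n, dim))    # activations driven by Y, plus noise

# Center both blocks, form the k x dim cross-covariance, and decompose it.
Xc, Yc = X - X.mean(0), Y - Y.mean(0)
C = Yc.T @ Xc / (n - 1)
U, s, Vt = np.linalg.svd(C, full_matrices=False)

# Rows of Vt (activation-side saliences) are orthonormal by construction
# of the SVD, which is what "orthogonal latent dimensions" refers to.
orthogonality_error = np.abs(Vt @ Vt.T - np.eye(k)).max()
```

Singular values in `s` order the latent dimensions by the cross-covariance they explain.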

07.12.2025 11:24 👍 1 🔁 0 💬 0 📌 0

3️⃣ LLM activations predict human judgments.
Mid-layer states reliably forecast participants’ wrongness ratings for the same moral vignettes.
4️⃣ Neural alignment emerges.
fMRI during moral reading shows representational alignment in the PCC (a moral/social hub) and even in somatosensory cortex.
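The prediction step in 3️⃣ can be illustrated with closed-form ridge regression from activations to ratings. Everything below is synthetic — in the preprint the predictors would be real mid-layer hidden states and the targets real participants' wrongness ratings.

```python
# Illustrative sketch: predict wrongness ratings from mid-layer activations
# with ridge regression. All data here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(2)
n, dim = 150, 48
X = rng.normal(size=(n, dim))                      # stand-in activations
w_true = rng.normal(size=dim)
ratings = X @ w_true + 0.1 * rng.normal(size=n)    # stand-in human ratings

# Closed-form ridge solution: (X'X + lam*I) w = X'y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(dim), X.T @ ratings)
pred = X @ w

# How reliably the activations "forecast" the ratings, as a correlation.
r = np.corrcoef(pred, ratings)[0, 1]
```

In practice one would cross-validate `lam` and report held-out correlations; this sketch fits and scores on the same data only to keep it short.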

07.12.2025 11:24 👍 0 🔁 0 💬 1 📌 0

1️⃣ Moral foundations are decodable inside LLMs.
Mid-layers of mid-sized models show clean, separable representations of the Moral Foundations.
2️⃣ LLM geometry mirrors human moral structure.
They reproduce the hierarchical MFT layout (individualizing vs. binding foundations), distinct from social norms.
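"Decodable" in 1️⃣ means a simple linear probe can recover the foundation label from a layer's activations. Here is a minimal sketch with a nearest-class-mean decoder on synthetic activations; the foundation names follow MFT, but the data, dimensions, and decoder are illustrative assumptions, not the paper's method.

```python
# Linear-probe sketch: recover moral-foundation labels from (synthetic)
# activations with a nearest-class-mean decoder.
import numpy as np

rng = np.random.default_rng(0)
foundations = ["care", "fairness", "loyalty", "authority", "sanctity"]

n_per_class, dim = 40, 64
# Give each foundation a distinct mean direction so classes are separable,
# mimicking "clean, separable representations" in a mid layer.
X = np.concatenate([
    rng.normal(loc=i, scale=1.0, size=(n_per_class, dim))
    for i in range(len(foundations))
])
y = np.repeat(np.arange(len(foundations)), n_per_class)

# Decode: assign each sample to the closest class mean (a linear decision rule).
means = np.stack([X[y == i].mean(axis=0) for i in range(len(foundations))])
pred = np.argmin(((X[:, None, :] - means[None]) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
```

High accuracy here just reflects the separable means baked into the synthetic data; with real activations, decoding accuracy is the empirical question the thread reports on.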

07.12.2025 11:24 👍 0 🔁 0 💬 1 📌 0
Emergent Moral Representations in Large Language Models Align with Human Conceptual, Neural, and Behavioral Moral Structure Large language models (LLMs) increasingly operate in ethically sensitive settings, yet it remains unclear whether they internally encode structured representations of morality. Here we examine the act...

🚨 New Preprint Out!
“Emergent Moral Representations in Large Language Models Align with Human Conceptual, Neural, and Behavioral Moral Structure”

www.researchsquare.com/article/rs-8...

Do LLMs internally represent morality like humans?
Our results point to a striking yes!
Key findings:

07.12.2025 11:24 👍 1 🔁 1 💬 1 📌 0


Sure, but how do we know the baby “experiences pain” rather than just reacts? Behavior isn’t consciousness. My dream toothache shows that the same nociceptive signal can exist without the feeling of pain until a self-model interprets it. Maybe even a baby’s hurt is already a primitive story of self.

13.10.2025 07:59 👍 0 🔁 0 💬 1 📌 0

Maybe pain needs a story to exist.

13.10.2025 06:45 👍 0 🔁 0 💬 1 📌 0

meaning of the pain gets displaced into the dream world.
When I wake up, the same signal is processed within the awake narrative frame of self-in-the-world. Then it becomes pain — localized, owned (“my tooth”), temporal (“it started last night”), and affective (“it hurts”).

13.10.2025 06:45 👍 0 🔁 0 💬 1 📌 0

During sleep, my brain still receives nociceptive signals (e.g., from my tooth), but since the waking self-model is offline, my brain weaves the signal into a dream narrative. Instead of “I have a toothache,” the dream constructs a story like “I’m late for school,” or “something’s wrong.” The

13.10.2025 06:45 👍 0 🔁 0 💬 1 📌 0

If pain is purely raw and doesn’t need interpretation, why do I sometimes dream my toothache as “being late for school” instead of feeling pain? The pain, as I consciously experience it (if I do not feel it like aliens!), is not just raw sensory input; it’s interpreted and narrativized by my mind.

13.10.2025 06:45 👍 0 🔁 0 💬 1 📌 0

Adding a realizer layer that links LLMs to physical and chemical processes would reconnect them to the causal hierarchy—allowing the system to be-in-the-world (Dasein) and potentially achieve consciousness.

09.10.2025 07:21 👍 1 🔁 0 💬 0 📌 0

In the hierarchical higher-order pointer theory, each layer points to the one below it, forming an unbroken causal chain. In AI, this chain is severed—the software doesn’t “point down” to physical reality, making it mere simulation.

09.10.2025 07:21 👍 1 🔁 0 💬 2 📌 0
Artificial Phantasia: Evidence for Propositional Reasoning-Based Mental Imagery in Large Language Models This study offers a novel approach for benchmarking complex cognitive behavior in artificial systems. Almost universally, Large Language Models (LLMs) perform best on tasks which may be included in th...

Imagine an apple 🍎. Is your mental image more like a picture or more like a thought? In a new preprint led by Morgan McCarty—our lab's wonderful RA—we develop a new approach to this old cognitive science question and find that LLMs excel at tasks thought to be solvable only via visual imagery. 🧵

01.10.2025 01:26 👍 116 🔁 38 💬 5 📌 8
[image-only post]
07.06.2025 13:03 👍 0 🔁 0 💬 0 📌 0

‘Being’ is already the newest version
100 upgraded, 1.5M newly installed, 1M to remove
MyMind@MyBrain: ~$

07.06.2025 12:36 👍 0 🔁 0 💬 0 📌 0

@suryaganguli.bsky.social gives a great talk at the LLM workshop at Berkeley

- LLM<->brain is still a new topic, with less progress so far than LLM<->vision
- let's train LLM foundation models of specific brain systems and then reverse engineer them
- emerging paradigm: read-write experiments in brains + machines

03.02.2025 19:49 👍 16 🔁 5 💬 0 📌 0