
Carlos González-García

@gonzalezgarcia

Cognitive Neuroscientist @ University of Granada https://ugr.es/~cgonzalez/

259 Followers · 256 Following · 27 Posts · Joined 27.10.2023

Latest posts by Carlos González-García @gonzalezgarcia

Suddenly finding the solution to a problem sometimes feels more insightful than others, but why?

Beautiful work by Johannah showing this is the result of both the accuracy of initial predictions and the uncertainty awarded to them.

Feeling lucky to have shared this journey with her and Juan!✨

03.03.2026 17:41 👍 8 🔁 0 💬 0 📌 0

Celebrating #AndalusiaDay at CIMCYC with a classic: bread, oil and sugar.

🥖✨ It was a wonderful opportunity to socialize, laugh and enjoy music.

Happy #28F from our center!

27.02.2026 12:21 👍 6 🔁 2 💬 0 📌 0

Why do people seem to prioritize semantic stuff when holding info in working memory? Had lots of fun trying to shed some light on this question, together with the great @ckerren.bsky.social and @lindedomingo.bsky.social

26.02.2026 08:17 👍 6 🔁 2 💬 0 📌 0

🚨🚨 We are hiring a Postdoc at CIMCYC – University of Granada (Spain) 🚨🚨

2-year position in Neuroscience of Human Experience

Building new ways to study lived experience as it unfolds in time and maps onto brain, body and behavior

Please share‼️

26.02.2026 03:41 👍 9 🔁 19 💬 1 📌 0

The benefit of writing the paper is not having a 30-page paper as a result.

the benefit is the thinking that goes along with writing the paper — that is part of writing the paper.

If all that's done is the pre-analysis plan and the analysis, then that limits the new thinking.

24.02.2026 21:15 👍 94 🔁 13 💬 4 📌 3

New preprint 🚨

Across multiple tasks, we show that higher-level info is more readily accessible in WM before evidence accumulation begins. Attention then boosts perceptual detail.

www.biorxiv.org/content/10.6...

A lot of fun with my colleagues @ckerren.bsky.social and @gonzalezgarcia.bsky.social

20.02.2026 15:22 👍 47 🔁 16 💬 0 📌 1
Preview
BAMB! 2026 | Barcelona Summer School for Advanced Modeling of Behavior Intensive training for experienced researchers in cognitive science, computational neuroscience and neuro-AI. Five interconnected modules, expert faculty, hands-on projects. July 12-23, 2026.

Applications for BAMB! 2026 are officially open!

Join us in Barcelona (July 12–23) to master the art of behavioral modeling with our incredible faculty:

@meganakpeters.bsky.social
@marcelomattar.bsky.social
@khamascience.bsky.social
@thecharleywu.bsky.social

Apply now here: www.bambschool.org

12.02.2026 13:08 👍 26 🔁 27 💬 1 📌 3

Hang in there!

12.02.2026 20:26 👍 1 🔁 0 💬 0 📌 0
Preview
AI Doesn’t Reduce Work—It Intensifies It One of the promises of AI is that it can reduce workloads so employees can focus more on higher-value and more engaging tasks. But according to new research, AI tools don’t reduce work, they consisten...

You could replace AI with cocaine in this Harvard Business Review article and it would totally fit. Working at a faster pace into more hours of the day at first, soon giving way to "cognitive fatigue, burnout, and weakened decision-making." Sounds familiar.

10.02.2026 19:48 👍 13 🔁 7 💬 0 📌 0

@wicseurope.bsky.social event for women’s month is out!

Join us if you’re drawn to computational ways of thinking about the mind.

🗓️ 23 February 2026
⏰ 13:00-14:30 (CET)

More info & free registration here:
shorturl.at/Jqm4M

06.02.2026 12:25 👍 8 🔁 6 💬 2 📌 0

Congratulations!!

01.02.2026 15:44 👍 1 🔁 0 💬 0 📌 0
Preview
A CIMCYC study reveals how the brain organizes information to adapt to new situations Every day, we face new situations with surprising ease: from understanding a rule that has just been explained to reacting appropriately to an unfamiliar sign. But how does the brain manage to turn a ...

🧠How does your brain turn a brand-new instruction into a precise action? A new study reveals how our brain organizes information to adapt to the unknown.

cimcyc.ugr.es/en/informati...

29.01.2026 09:58 👍 4 🔁 4 💬 1 📌 0
Preview
European Alternatives We help you find European alternatives for digital services and products, like cloud services and SaaS products.

European alternatives for many digital products: email services, VPNs, alternatives to Slack, and more: european-alternatives.eu

Btw, we use Elements at work and I really like it.

08.01.2026 20:30 👍 7 🔁 1 💬 0 📌 0
Preview
CONNECTS, the paradox of thought - Canal UGR Canal Sur’s ConCiencia devotes a feature to the ERC Starting Grant project «Cognitive and Neural Computations of Semantics» (CONNECTS), led by Javier Ortiz-Tudela, a researcher at CIMCYC and the UGR’s Department of Experimental Psychology. The initiative seeks to understand how semantic knowledge shapes how we process and remember new information.

🧠 Science keeps uncovering secrets about the brain

👨‍🔬 The UGR’s CONNECTS project aims to resolve the paradox of thought

📺 Find out more in this ConCiencia feature 👇

05.01.2026 08:02 👍 4 🔁 2 💬 0 📌 0
Home – CIMCYC Workshop on Learning and Attention

In Granada next week? Join us for the CIMCYC Workshop on Learning and Attention, with @davluque.bsky.social, @mavadillo.bsky.social, Teodóra Vékony and @mikelepelley.bsky.social discussing how learning and attention interact across different domains.
franfrutos.github.io/learning_att...

11.12.2025 11:59 👍 14 🔁 8 💬 1 📌 0
Will you incorporate LLMs and AI prompting into the course in the future?
No.

Why won’t you incorporate LLMs and AI prompting into the course?
These tools are useful for coding (see this for my personal take).

However, they’re only useful if you know what you’re doing first. If you skip the learning-the-process-of-writing-code step and just copy/paste output from ChatGPT, you will not learn. You cannot learn. You cannot improve. You will not understand the code.


That post warns that you cannot use it as a beginner:

…to use Databot effectively and safely, you still need the skills of a data scientist: background and domain knowledge, data analysis expertise, and coding ability.

There is no LLM-based shortcut to those skills. You cannot LLM your way into domain knowledge, data analysis expertise, or coding ability.

The only way to gain domain knowledge, data analysis expertise, and coding ability is to struggle. To get errors. To google those errors. To look over the documentation. To copy/paste your own code and adapt it for different purposes. To explore messy datasets. To struggle to clean those datasets. To spend an hour looking for a missing comma.

This isn’t a form of programming hazing, like “I had to walk to school uphill both ways in the snow and now you must too.” It’s the actual process of learning and growing and developing and improving. You’ve gotta struggle.


This Tumblr post puts it well (it’s about art specifically, but it applies to coding and data analysis too):

Contrary to popular belief the biggest beginner’s roadblock to art isn’t even technical skill it’s frustration tolerance, especially in the age of social media. It hurts and the frustration is endless but you must build the frustration tolerance equivalent to a roach’s capacity to survive a nuclear explosion. That’s how you build on the technical skill. Throw that “won’t even start because I’m afraid it won’t be perfect” shit out the window. Just do it. Just start. Good luck. (The original post has disappeared, but here’s a reblog.)

It’s hard, but struggling is the only way to learn anything.


You might not enjoy code as much as Williams does (or I do), but there’s still value in maintaining coding skills as you improve and learn more. You don’t want your skills to atrophy.

As I discuss here, when I do use LLMs for coding-related tasks, I purposely throw as much friction into the process as possible:

To avoid falling into over-reliance on LLM-assisted code help, I add as much friction into my workflow as possible. I only use GitHub Copilot and Claude in the browser, not through the chat sidebar in Positron or Visual Studio Code. I treat the code it generates like random answers from StackOverflow or blog posts and generally rewrite it completely. I disable the inline LLM-based auto complete in text editors. For routine tasks like generating {roxygen2} documentation scaffolding for functions, I use the {chores} package, which requires a bunch of pointing and clicking to use.

Even though I use Positron, I purposely do not use either Positron Assistant or Databot. I have them disabled.

So in the end, for pedagogical reasons, I don’t foresee me incorporating LLMs into this class. I’m pedagogically opposed to it. I’m facing all sorts of external pressure to do it, but I’m resisting.

You’ve got to learn first.


Some closing thoughts for my students this semester on LLMs and learning #rstats datavizf25.classes.andrewheiss.com/news/2025-12...

09.12.2025 20:17 👍 331 🔁 99 💬 14 📌 31

Congrats, Alex! Super exciting news!

09.12.2025 21:59 👍 1 🔁 0 💬 0 📌 0
Preview
Bridging Fields in Psychology and Neuroscience with Multidisciplinary Collaboration Strengthening collaboration to encourage novel research connections between scientific areas is central to the CIMCYC - María de Maeztu Unit of Excellence strategy . To encourage this, the CIMCYC has ...

@cimcyc.bsky.social is hiring!

SIX postdoc positions are coming up for collaborative projects bridging areas of psychological science.

An amazing opportunity to boost a postdoc career at a cutting-edge research center with outstanding teams!
👇🏽
cimcyc.ugr.es/en/informati...

09.12.2025 12:44 👍 13 🔁 11 💬 0 📌 0

Postdoc job ad! Do you have a background in human learning from aseptic lab studies and wonder how to apply it to pressing societal issues? Or have you worked on the formation of political attitudes in social psychology but always wanted to understand the cognitive mechanisms behind it? Read on!

09.12.2025 07:25 👍 7 🔁 8 💬 1 📌 1
Preview
Bridging Fields in Psychology and Neuroscience with Multidisciplinary Collaboration Strengthening collaboration to encourage novel research connections between scientific areas is central to the CIMCYC - María de Maeztu Unit of Excellence strategy . To encourage this, the CIMCYC has ...

It’s official! The postdoc positions announcement is here 🚀
If you know great candidates interested in attention, memory transformation and EEG, please help spread the word:
Project (ReDAS) -> cimcyc.ugr.es/en/informati...
Job offer -> cimcyc.ugr.es/en/informati...

09.12.2025 05:50 👍 19 🔁 27 💬 1 📌 1

Asking informally: does anyone know someone who might be interested in a postdoc focused on understanding changes in memory representations driven by attention using EEG? ⚡️Thanks!

01.12.2025 05:03 👍 17 🔁 26 💬 1 📌 0

This raises what I like to call the "AI test for tasks".

If many people use AI to do task X, then that tells you that task X is actually just a brainless administrative exercise.

Any such task should probably be eliminated, and if that's not an option, modified to make automation even easier.

14.11.2025 19:14 👍 65 🔁 8 💬 6 📌 10
Preview
Hippocampal transformations occur along dimensions of memory interference The role of the hippocampus in resolving memory interference has been greatly elucidated by considering the relationship between the similarity of visual stimuli (input) and corresponding similarity o...

🧠🚨 How does the hippocampus transform the visual similarity space to resolve memory interference?

In this new preprint, we found that the hippocampus sequentially inverts the behaviorally relevant dimensions of similarity 🧵

www.biorxiv.org/content/10.1...

14.10.2025 16:48 👍 85 🔁 28 💬 4 📌 0

So proud of Águeda and Germán for putting this together. A short, insightful opinion piece on how drift–diffusion modeling can deepen our understanding of the role of attention in prioritizing working memory contents.

📄 Accepted version: psycnet.apa.org/record/2026-...

Preprint: shorturl.at/p5KA4

10.10.2025 09:24 👍 4 🔁 2 💬 0 📌 0

Very happy to finally see this out! A while ago I had the wonderful opportunity to meet an awesome group of scientists from very diverse fields, interested in exchanging thoughts, experiences and ideas. In this book, we collect some of these exchanges as a celebration of the richness of science.

05.10.2025 14:18 👍 11 🔁 5 💬 1 📌 0
OSF

A memory can be represented at different levels of granularity, from highly specific to generalized.

Different representational formats of a memory can be used at different times or in different contexts, and draw on different neural representations.

doi.org/10.31234/osf...

25.09.2025 18:58 👍 62 🔁 10 💬 3 📌 1
Post image
19.09.2025 17:04 👍 244 🔁 53 💬 7 📌 7

Looking forward to #ICON2025!

Apart from posters (updates coming), we’ll be at the “Sudden Learning Across Systems” symposium talking about factors behind solving visual disambiguation 👀, memory traces of one-shot perceptual learning and more.

Excited to share and get inspired by your work! 🚀

11.09.2025 15:41 👍 12 🔁 5 💬 0 📌 0

What do we remember from highly ambiguous episodes? Here’s our take after many hours of reading and discussing. Congratulations @jvoeller.bsky.social on the first paper of your PhD!

12.09.2025 11:16 👍 8 🔁 0 💬 0 📌 0

New preprint! 🚨We discuss the long-term memory consequences of fast visual perceptual learning: "From sudden perceptual learning to enduring engrams: A representational perspective"

It's so great to work with @jvoeller.bsky.social and the team! Congrats for your first author manuscript, Johannah!

12.09.2025 11:08 👍 16 🔁 5 💬 0 📌 0