I use neovim btw
This is what my supervisor is paying me to do:
basics already before they started their PhD, but this is mostly not realistic: most PhD students I've met (including me) came from backgrounds without any of these technical concepts. I wasn't even taught Calculus 1 in undergrad, for example. So I was curious what other people think. 8/8
to claim that people should know this, given the time limitations. The time you spend learning basics is time you spend not doing experiments, not analyzing data, not writing grants, not gaining domain expertise, not reading the literature. One argument is that people should've learned the 7/8
But where to draw the line? How basic a thing should somebody be able to write? I know from experience that being able to write a function that performs a Fourier transform (albeit a slow one, an SFT rather than an FFT) helped me immensely, since we use it all the time in neuroimaging data analysis. But it feels like a stretch 6/8
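To make the SFT example concrete, here is a minimal sketch of what such a slow reference implementation might look like (a direct O(N²) DFT; the function name and the test are mine, not from the thread):

```python
import numpy as np

def slow_ft(x):
    """Direct O(N^2) discrete Fourier transform: a 'slow' reference
    implementation, useful for understanding and for testing."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    n = np.arange(N)
    # DFT matrix: W[k, m] = exp(-2j * pi * k * m / N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)
    return W @ x

# The "reference" idea from the thread: test against the fast library routine.
x = np.random.randn(128)
assert np.allclose(slow_ft(x), np.fft.fft(x))
```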
function and still miss the point, just hammering out the mechanics of it. But when we write some code, we also get the instinct to test it, and to test it you need some reference. Thinking about that reference usually helps you get the concept. 5/8
Especially in a time when it is so tempting to run an LLM prompt and get some results super fast, in an environment where the objective function of scientific success is maximizing the number of publications and some metrics correlated with that. It is also possible to write the ERP 4/8
Let's say somebody is analyzing ERPs; it is very easy to calculate ERPs with a tool like MNE. But it is also very easy to miss the point of calculating an ERP, i.e. averaging over noise, assuming that at each time point there is a corresponding probability distribution with some mean and some variance. 3/8
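As a sketch of that averaging idea (the synthetic data and variable names are mine, just for illustration):

```python
import numpy as np

# Single-trial data: at each time point, assume the samples are draws from a
# distribution with mean mu_t (the signal of interest) and variance sigma_t^2.
rng = np.random.default_rng(0)
n_trials, n_times = 200, 500
signal = np.sin(np.linspace(0, 4 * np.pi, n_times))        # hypothetical ERP shape
epochs = signal + rng.normal(0, 2.0, (n_trials, n_times))  # signal + noise

erp = epochs.mean(axis=0)  # estimate of mu_t: noise averages out across trials
sem = epochs.std(axis=0, ddof=1) / np.sqrt(n_trials)  # uncertainty of the mean
```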
That would be error-prone and non-performant. This is nice, but I've observed that not being able to write some of the tools that are the targets of the paper can lead to weird misconceptions about what we are dealing with. 2/8
I have a sort of dilemma. When we do scientific data analysis, most of what we do is chaining functions from a number of libraries into a codebase a few thousand lines long that handles the analysis. We don't write, for example, the FFT routine ourselves. 1/8
#neuroskyence #Science
Some 15 years ago my colleague Parry Clarke said to me, "Dude stop complaining and write your own stats book!" So I did. It definitely changed my life, and I'm glad it has had a positive impact on others.
Huge thanks to my co-conspirators Kaan, Andrea, Angelika and my supervisor Dr. Northoff.
future papers too. It took a lot of time to get it right and I'm glad I put that time in; it resulted in some very interesting stories for me, which I won't recount here. If you are interested, you can check out the paper at jneurosci.org/content/45/4....
I really like this paper. This was the first paper I was involved in that was exclusively motivated by a theoretical notion (FDT), realized with numerical simulations (the Jansen-Rit model), and confirmed in empirical data (MEG analyses). I hope I can follow this mindset in the ...
We found the positive relationship, as can be seen from the results of a hierarchical Bayesian model below, in all trial conditions and clusters, and in most channels:
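A minimal sketch of what a partially pooled (hierarchical) regression of mERFs on INTs could look like in PyMC; the variable names, priors, and synthetic data are my illustration, not the paper's actual model:

```python
import numpy as np
import pymc as pm

# Hypothetical standardized data: one INT and one mERF value per observation,
# grouped by channel.
rng = np.random.default_rng(1)
n_obs, n_channels = 300, 30
channel = rng.integers(0, n_channels, n_obs)
int_z = rng.normal(size=n_obs)
merf_z = 0.3 * int_z + rng.normal(0, 1, n_obs)

with pm.Model():
    beta = pm.Normal("beta", 0, 1)            # population-level INT-mERF slope
    sigma_ch = pm.HalfNormal("sigma_ch", 1)   # channel-level variability
    beta_ch = pm.Normal("beta_ch", beta, sigma_ch, shape=n_channels)
    sigma = pm.HalfNormal("sigma", 1)
    pm.Normal("obs", beta_ch[channel] * int_z, sigma, observed=merf_z)
    idata = pm.sample()  # posterior for beta summarizes the overall relationship
```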
We used a spatiotemporal permutation cluster test (implemented in MNE) to identify the relevant channel and time clusters for the ERFs:
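For reference, a hedged sketch of how such a test can be run via mne.stats on synthetic data; with real sensor data, the channel adjacency would come from mne.channels.find_ch_adjacency instead of the identity stand-in used here:

```python
import numpy as np
from scipy.sparse import eye
from mne.stats import spatio_temporal_cluster_1samp_test

# Hypothetical input: X of shape (n_subjects, n_times, n_channels), e.g. a
# per-subject condition difference of ERFs.
rng = np.random.default_rng(2)
X = rng.normal(size=(20, 100, 30))
X[:, 40:60, :10] += 0.8  # inject an "effect" in some time points/channels

# Stand-in adjacency: identity means no clustering across channels; with real
# data, use mne.channels.find_ch_adjacency(info, ch_type) instead.
adjacency = eye(30, format="csr")

t_obs, clusters, cluster_pv, h0 = spatio_temporal_cluster_1samp_test(
    X, adjacency=adjacency, n_permutations=256, tail=0
)
significant = [c for c, p in zip(clusters, cluster_pv) if p < 0.05]
```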
Finally, we tested the hypothesis of a positive correlation between INTs and mERFs in empirical MEG data. It was immensely satisfying to see that the results from the theory and the model also held in empirical data.
limitation of the model. In principle, anything that can change the real part of the eigenvalues of the model should also change the INTs and mERFs. Maybe they are most sensitive to the intracolumnar connections; that sounds more plausible to me.
We found that in the small model we tested, the only parameter which influenced both the INTs and the magnitude of the event-related fields (mERFs) was the intracolumnar connection strength. Though I'm a bit skeptical about the exclusivity of intracolumnar connections, this might be a ...
One of my aims at the time was to make sure we could show this relationship in simulated data, inspired by
@rmcelreath.bsky.social's approach to statistics. Statistical Rethinking is a life-changing book; this paper wouldn't be the way it is now without it. Very grateful to McElreath!
In the second part of the paper we modeled the relationship between intrinsic timescales and event-related activity in the Jansen-Rit computational model.
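For readers who haven't met it, a minimal Euler-integration sketch of the Jansen-Rit model with the standard Jansen & Rit (1995) parameters (an illustration, not the paper's exact simulation setup):

```python
import numpy as np

A, B = 3.25, 22.0            # excitatory / inhibitory synaptic gains (mV)
a, b = 100.0, 50.0           # inverse synaptic time constants (1/s)
C = 135.0                    # intracolumnar connectivity constant
C1, C2, C3, C4 = C, 0.8 * C, 0.25 * C, 0.25 * C
e0, v0, r = 2.5, 6.0, 0.56   # sigmoid: max rate, threshold, slope

def sigm(v):                 # population potential-to-rate sigmoid
    return 2 * e0 / (1 + np.exp(r * (v0 - v)))

dt, T = 1e-4, 2.0
rng = np.random.default_rng(3)
y = np.zeros(6)              # states [y0, y1, y2] and derivatives [y3, y4, y5]
out = np.empty(int(T / dt))

for i in range(len(out)):
    p = rng.uniform(120, 320)  # stochastic external input (pulse density, Hz)
    y0, y1, y2, y3, y4, y5 = y
    dy = np.array([
        y3, y4, y5,
        A * a * sigm(y1 - y2) - 2 * a * y3 - a**2 * y0,
        A * a * (p + C2 * sigm(C1 * y0)) - 2 * a * y4 - a**2 * y1,
        B * b * C4 * sigm(C3 * y0) - 2 * b * y5 - b**2 * y2,
    ])
    y = y + dt * dy
    out[i] = y[1] - y[2]     # EEG/MEG-like output of the column
```

Scaling C here would be one way to probe the intracolumnar connections discussed above.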
As far as I know, Sarracino et al.'s paper is the first that provides an explicit link between event-related dynamics and intrinsic timescales; if you are interested in this, please check out their work as well.
At the time of writing this paper I wasn't aware of Sarracino et al.'s work on this: journals.aps.org/prresearch/a..., so I couldn't cite their paper, but I really should've done that.
What this means in the context of brain dynamics is a link between resting-state intrinsic timescales (estimated from the autocorrelation function) and event-related activity (which is a relaxation from a perturbation).
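One common way of estimating an INT from the autocorrelation function, as a rough sketch (the estimator choice and the AR(1) sanity check are mine):

```python
import numpy as np

def estimate_int(x, fs):
    """Integrate the ACF up to its first zero crossing: a rough INT in seconds."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf /= acf[0]
    negative = np.flatnonzero(acf < 0)
    crossing = negative[0] if negative.size else len(acf)
    return acf[:crossing].sum() / fs

# Sanity check on an AR(1) surrogate with a known timescale.
rng = np.random.default_rng(4)
phi, n = 0.95, 100_000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()
print(estimate_int(x, fs=1000.0))  # ~0.02 s, i.e. ~1/(1 - phi) samples at 1 kHz
```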
In particular, the relaxation response to a small perturbation should follow the autocovariance function estimated from the equilibrium dynamics.
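In equation form, my paraphrase of that regression-theorem reading (not the paper's exact notation):

```latex
% Mean relaxation after a small perturbation at t = 0 traces the
% equilibrium autocovariance C(t), normalized by its zero-lag value.
\[
  \frac{\langle \delta x(t) \rangle_{\mathrm{pert}}}{\langle \delta x(0) \rangle_{\mathrm{pert}}}
  = \frac{C(t)}{C(0)},
  \qquad
  C(t) = \langle \delta x(t_0 + t)\,\delta x(t_0) \rangle_{\mathrm{eq}}.
\]
```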
In the theory part, we attempted to link the concept of intrinsic timescales (INTs) to the fluctuation-dissipation theorem from statistical mechanics. According to this framework, the average nonequilibrium behavior of a particle can be predicted from its equilibrium statistics.
Our new paper is out now in the Journal of Neuroscience!
This is a three-part paper: the theory, the modeling, and the data.
Banger
Listening to Bohren & Der Club of Gore and decided to switch my computer to dark theme
How can we reform science? I have some ideas. But I am not sure you'll like them, because they don't promise much. elevanth.org/blog/2025/07...