Dennis Alexis Valin Dittrich

@davdittrich.economicscience.net

Interested in bounded rationality, trust, discrimination, fair compensation, labor economics, quantitative methods, dataviz, rstats, open source. 1st-gen Professor of Economics (2005-2022). Senior Economist at the Stepstone Group. https://economicscience.net

803 Followers · 745 Following · 2,667 Posts · Joined 06.02.2024

Latest posts by Dennis Alexis Valin Dittrich @davdittrich.economicscience.net

GitHub - davdittrich/robscale: Fast robust estimation of location and scale in very small and large samples

https://github.com/davdittrich/robscale

#RStats #RobustStatistics #DataScience #Optimization

orig https://fediscience.org/@davdittrich/116201886682143700 4/4

10.03.2026 00:00 👍 1 🔁 0 💬 0 📌 0
robscale: Faster Robustness: Accelerated Estimation of Location and Scale Robust estimation ensures statistical reliability in data contaminated by outliers. Yet, computational bottlenecks in existing 'R' implementations frequently obstruct both very small sample analysis and large-scale processing. 'robscale' resolves these inefficiencies by providing high-performance implementations of logistic M-estimators and the 'Qn' and 'Sn' scale estimators. By leveraging platform-specific Single Instruction, Multiple Data (SIMD) vectorization and Intel Threading Building Blocks (TBB) parallelism, the package delivers speedups of 11–39x for small samples and up to 10x for massive datasets. These performance gains enable the integration of robust statistics into modern, time-critical computational workflows. Replaces 'revss' with an 'Rcpp' backend.

cache-aware median selection.

The result? A 1.6x to ~28x performance leap over pure-R implementations. The mathematical results remain identical; only the computational underpinnings change.

📦 CRAN: https://cran.r-project.org/package=robscale

💻 Code: 3/4

10.03.2026 00:00 👍 2 🔁 1 💬 1 📌 0
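
A minimal usage sketch, assuming robscale mirrors robustbase's one-argument interface for the 'Qn' and 'Sn' estimators named in the CRAN description above; the exported names are an assumption here, not confirmed from the package docs:

```r
# Sketch under an assumed API: robscale::Qn() as a drop-in for
# robustbase::Qn(). Estimates should match; only speed should differ.
library(robscale)    # accelerated C++17/Rcpp backend
library(robustbase)  # reference implementation of Qn/Sn

set.seed(42)
x <- c(rnorm(1e6), rnorm(1e4, mean = 50))  # ~1% gross outliers

system.time(q_fast <- robscale::Qn(x))     # SIMD/TBB-accelerated path
system.time(q_ref  <- robustbase::Qn(x))   # baseline
all.equal(q_fast, q_ref)                   # same robust scale estimate
```

If the exported names differ, only the two accelerated calls change; the check that both backends agree is the point, since the post says only the computational underpinnings change.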

exceptional reliability, computing them requires intensive math.

robscale 0.1.5 is now on CRAN. It delivers a native C++17/Rcpp implementation designed for absolute speed. The package utilizes SIMD-vectorized $\tanh$ evaluation, Newton-Raphson iteration, and optimal sorting networks for 2/4

10.03.2026 00:00 👍 0 🔁 0 💬 1 📌 0
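
For intuition, a minimal pure-R sketch of the logistic M-estimator of location described here: $\psi(u) = \tanh(u)$, solved by Newton-Raphson using $\psi'(u) = 1 - \tanh^2(u)$. This only illustrates the algorithm; it is not the package's C++ code, and holding the scale fixed at the MAD is a simplifying assumption:

```r
# Illustrative logistic M-estimator of location: solve sum(tanh(u)) = 0
# for mu, where u = (x - mu) / s, via Newton-Raphson with the scale s
# held fixed at the MAD (a simplification; the package also estimates scale).
robloc_sketch <- function(x, tol = 1e-9, maxit = 50) {
  s  <- mad(x)                  # preliminary robust scale (assumed > 0)
  mu <- median(x)               # robust starting value
  for (i in seq_len(maxit)) {
    u    <- (x - mu) / s
    tu   <- tanh(u)
    step <- s * sum(tu) / sum(1 - tu^2)  # Newton step: psi' = 1 - tanh^2
    mu   <- mu + step
    if (abs(step) < tol * s) break       # converged
  }
  mu
}

robloc_sketch(c(rnorm(20), 100))  # a gross outlier barely moves the estimate
```

The package's speedups come from running exactly this kind of loop in C++ with SIMD-vectorized $\tanh$ evaluation, not from changing the estimator.
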
A two-panel benchmark charting performance multipliers of the optimized C++ robscale package against legacy pure-R implementations across sample sizes from $n = 3$ up to $10^7$, with the vertical axis starting honestly at 0x. The left panel reveals a massive speedup for M-estimators (robLoc, robScale, adm vs. revss), pushing up to ~28x for robScale. The right panel tracks scale estimators ($Q_n$, $S_n$ vs. robustbase), with the speedup curving upward from 1.6x and approaching 10x at large sample sizes. Shaded ribbons show 95% bootstrap confidence intervals, visually confirming dramatic computational efficiency.

Robust estimation demands highly efficient computation, especially in streaming anomaly detection where latency budgets are tight.

While Rousseeuw & Croux's robust estimators ($Q_n$ and $S_n$) and Rousseeuw & Verboven's M-estimators of location and scale for very small samples provide 1/4

10.03.2026 00:00 👍 1 🔁 0 💬 1 📌 0

this would be a nice addition to the dichotomisation chapter of discourse.datamethods.org/t/reference-...
(unfortunately, Andrew Althouse doesn't seem to be active on Bluesky?)

18.02.2026 08:15 👍 2 🔁 1 💬 0 📌 0

to use AI, which artificially suppresses productivity per worker.

https://archive.md/CABjm

#gdp #LaborMarkets

orig https://fediscience.org/@davdittrich/116090657645560821 3/3

18.02.2026 08:30 👍 2 🔁 0 💬 0 📌 0

Output.

https://archive.md/vs2B4

…Brynjolfsson is more optimistic.

The #productivity boost is coming, but it is currently masked by the massive intangible investments companies must make to reorganize their workflows. Companies are keeping expensive human staff while they figure out how 2/3

18.02.2026 08:30 👍 1 🔁 0 💬 1 📌 0

#EconSky
While there is some "boardroom disillusionment" with #AI …

Companies have seen "micro-efficiencies" (faster emails, quicker coding), but these haven't scaled to "macro-gains," and the Cost of Implementation (energy, licensing, and talent) is currently outstripping the Value of 1/3

18.02.2026 08:30 👍 2 🔁 0 💬 2 📌 0

is retained after #dichotomization

… hope that quantifying the loss of information will discourage researchers from dichotomizing continuous outcomes"

#statistics #rstats

orig https://fediscience.org/@davdittrich/116076960331479669 3/3

15.02.2026 22:30 👍 0 🔁 0 💬 0 📌 0

but also larger standard errors and fewer statistically significant results. We conclude that researchers tend to increase the sample size to compensate for the low information content of #binary outcomes, but not sufficiently.

… estimate that on average, only about 60% of the information 2/3

15.02.2026 22:30 👍 0 🔁 0 💬 1 📌 0
An Empirical Assessment of the Cost of Dichotomization of the Outcome of Clinical Trials ABSTRACT We have studied 21 435 unique randomized controlled trials (RCTs) from the Cochrane Database of Systematic Reviews (CDSR). Of these trials, 7224 (34%) have a continuous (numerical) outcome and 14 211 (66%) have a binary outcome. We find that trials with a binary outcome have larger sample sizes on average, but also larger standard errors and fewer statistically significant results. We conclude that researchers tend to increase the sample size to compensate for the low information content of binary outcomes, but not sufficiently. In many cases, the binary outcome is the result of dichotomization of a continuous outcome, which is sometimes referred to as "responder analysis". In those cases, the loss of information is avoidable. Burdening more participants than necessary is wasteful, costly, and unethical. We provide a method to convert a sample size calculation for the comparison of two proportions into one for the comparison of the means of the underlying continuous outcomes. This demonstrates how much the sample size may be reduced if the outcome were not dichotomized. We also provide a method to calculate the loss of information after a dichotomization. We apply this method to all the trials from the CDSR with a binary outcome, and estimate that on average, only about 60% of the information is retained after dichotomization. We provide R code and a shiny app at: https://vanzwet.shinyapps.io/info_loss/ to do these calculations. We hope that quantifying the loss of information will discourage researchers from dichotomizing continuous outcomes. Instead, we recommend they "model continuously but interpret dichotomously". For example, they might present "percentage achieving clinically meaningful improvement" derived from a continuous analysis rather than by dichotomizing raw data.

An Empirical Assessment of the Cost of Dichotomization of the Outcome of Clinical Trials https://onlinelibrary.wiley.com/doi/10.1002/sim.70402

"Burdening more participants than necessary is wasteful, costly, and unethical.

… trials with a binary outcome have larger sample sizes on average, 1/3

15.02.2026 22:30 👍 3 🔁 1 💬 2 📌 0
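
A quick simulation in the spirit of the abstract; this is not the authors' method (their shiny app linked above does the exact calculation), just a median-split illustration of the power cost:

```r
# Dichotomizing a normal outcome at the control median (0) and comparing
# proportions vs. analyzing the continuous outcome with a t-test.
set.seed(1)
n <- 200; delta <- 0.3; reps <- 2000
p_cont <- p_bin <- numeric(reps)
for (r in seq_len(reps)) {
  y0 <- rnorm(n)                # control arm
  y1 <- rnorm(n, mean = delta)  # treatment arm, shifted by 0.3 SD
  p_cont[r] <- t.test(y1, y0)$p.value
  p_bin[r]  <- prop.test(c(sum(y1 > 0), sum(y0 > 0)), c(n, n))$p.value
}
mean(p_cont < 0.05)  # power of the continuous analysis
mean(p_bin  < 0.05)  # noticeably lower power after dichotomization
```

Under normality, a median split has asymptotic relative efficiency $2/\pi \approx 0.64$, consistent with the roughly 60% information retention the paper estimates.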

that work is harder to step away from, especially as organizational expectations for speed and responsiveness rise."

#LaborEcon

orig https://fediscience.org/@davdittrich/116048241413469185 5/5

10.02.2026 21:00 👍 0 🔁 0 💬 0 📌 0

employees juggle multiple AI-enabled workflows

… overwork can impair judgment, increase the likelihood of errors, and make it harder for organizations to distinguish genuine productivity gains from unsustainable intensity

… the cumulative effect is fatigue, #burnout, and a growing sense 4/5

10.02.2026 21:00 👍 0 🔁 0 💬 1 📌 0

a continual switching of attention, frequent checking of #AI outputs, and a growing number of open tasks. This created #cognitiveload and a sense of always juggling

… What looks like higher #productivity in the short run can mask silent workload creep and growing cognitive strain as 3/5

10.02.2026 21:00 👍 0 🔁 0 💬 1 📌 0

parallel, or reviving long-deferred tasks because AI could "handle them" in the background. They did this, in part, because they felt they had a "partner" that could help them move through their workload.

While this sense of having a "partner" enabled a feeling of momentum, the reality was 2/5

10.02.2026 21:00 👍 0 🔁 0 💬 1 📌 0
AI Doesn't Reduce Work—It Intensifies It One of the promises of AI is that it can reduce workloads so employees can focus more on higher-value and more engaging tasks. But according to new research, AI tools don't reduce work, they consistently intensify it: In the study, employees worked at a faster pace, took on a broader scope of tasks, and extended work into more hours of the day, often without being asked to do so. That may sound like a win, but it's not quite so simple. These changes can be unsustainable, leading to workload creep, cognitive fatigue, burnout, and weakened decision-making. The productivity surge enjoyed at the beginning can give way to lower quality work, turnover, and other problems. To correct for this, companies need to adopt an "AI practice," or a set of norms and standards around AI use that can include intentional pauses, sequencing work, and adding more human grounding.

#EconSky
AI Doesn't Reduce Work—It Intensifies It https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it

"AI introduced a new rhythm in which workers managed several active threads at once: manually writing code while AI generated an alternative version, running multiple agents in 1/5

10.02.2026 21:00 👍 2 🔁 0 💬 1 📌 0

disproportionately attract more educated and experienced workers

… stringent #RTO mandates may induce the most productive employees to leave firms that do not offer WFH."

#LaborMarkets

orig https://fediscience.org/@davdittrich/116048099955111626 4/4

10.02.2026 20:00 👍 1 🔁 0 💬 0 📌 0

understate #inequality, as the best-paid workers are also more likely to receive the WFH amenity.

… changes in WFH policies (e.g., through widely debated RTO mandates) could have important implications for the allocation of talent and for aggregate productivity: firms offering WFH 3/4

10.02.2026 20:00 👍 1 🔁 0 💬 1 📌 0

negotiation skills or bargaining power). Indeed, WFH was more prevalent for workers who already had high hourly wages before the pandemic, and was not associated with higher post-pandemic wage growth.

… in a world with more widespread #WFH, differences in hourly #wages may significantly 2/4

10.02.2026 20:00 👍 0 🔁 0 💬 1 📌 0
The Work-from-home Wage Premium (Federal Reserve Bank of San Francisco Working Paper 2026-02)

#EconSky
The Work-from-home Wage Premium https://www.frbsf.org/wp-content/uploads/wp2026-02.pdf

"… find that workers who work from home earn higher hourly wages than those who do not.

… premium is driven by selection on unobservable worker characteristics (which could include ability, 1/4

10.02.2026 20:00 👍 1 🔁 0 💬 1 📌 0

https://fediscience.org/@davdittrich/116014030137174545 2/2

04.02.2026 20:00 👍 0 🔁 0 💬 0 📌 0
Claude Code Part 12: How I Use Claude Code for Empirical Research. My "MixtapeTools" repo.

How I Use Claude Code for Empirical Research https://causalinf.substack.com/p/claude-code-part-12-how-i-use-claude

Scott has a lot of good and useful ideas about how to use #claudeCode & Co. I like the referee #2 idea. There is more in his other posts.

#AI #llm

orig 1/2

04.02.2026 20:00 👍 3 🔁 0 💬 2 📌 0

#EconSky
The Legacy of Daniel Kahneman https://ejpe.org/journal/article/view/1075/753

by Gerd Gigerenzer

#BoundedRationality

#economics #Psychology #heuristics #Biases #statisticalThinking

orig https://fediscience.org/@davdittrich/116013924158823849

04.02.2026 19:30 👍 0 🔁 0 💬 0 📌 0

they are directly harming another human."

#economics #IntellectualProperty #ExperimentalEcon #ExperimentalLaw

orig https://fediscience.org/@davdittrich/116013622450354382 4/4

04.02.2026 18:30 👍 0 🔁 0 💬 0 📌 0

stolen so everyone take copies," explicitly rejecting the application of "stolen" to discs.

… Humans can state that digital piracy is illegal and take measures to prevent it. However, it will be difficult to cause an individual engaging in piracy to feel guilty as they do when they believe 3/4

04.02.2026 18:30 👍 0 🔁 0 💬 1 📌 0

disc and can still consume the full value of it.

… Participants discuss discs often enough to reveal how they conceptualize the resource. In many instances, they articulate the positive-sum logic of zero-marginal-cost copying. For example, … farmer Almond reasons, "ok so disks cant be 2/4

04.02.2026 18:30 👍 0 🔁 0 💬 1 📌 0
Everyone Take Copies - Econlib I have a new working paper with Bart Wilson titled: "You Wouldn't Steal a Car: Moral Intuition for Intellectual Property." The title of this post, "everyone take copies," comes from a conversation between the human subjects in an experiment in our lab, on which the paper is based. The experiment was studying how and when […]

#EconSky
Everyone Take Copies https://www.econlib.org/econlog/everyone-take-copies/

"… discs: Non-rivalrous goods are goods that can be used by multiple people without any loss to the other users. If participants exercise the ability to take a disc, then the original disc holder still has a 1/4

04.02.2026 18:30 👍 1 🔁 0 💬 2 📌 0

https://fediscience.org/@davdittrich/116013554710446301 5/5

04.02.2026 18:00 👍 0 🔁 0 💬 0 📌 0

40% chance of vanishing. It is more likely to be reorganized.

… Technology automates, accelerates or reduces the cost of specific tasks within a job, allowing employees to spend more time on higher-value activities. As a result, output expands and #wages often rise."

#LaborMarkets

orig 4/5

04.02.2026 18:00 👍 0 🔁 0 💬 1 📌 0

differently.

… software has automated large portions of bookkeeping and tax preparation without eliminating accountants, who have moved up the value chain toward advisory, forensic and judgment-intensive work.

… A job that scores as 40% "exposed" to AI in these rankings doesn't have a 3/5

04.02.2026 18:00 👍 0 🔁 0 💬 1 📌 0