Precise estimation of out-of-sample upper quantiles is important in risk assessment and in structural design practice to prevent severe disasters. For this purpose, the generalized extreme value (GEV) distribution has been widely used. To estimate the parameters of the GEV distribution, the maximum likelihood estimation (MLE) and L-moment estimation (LME) methods have been primarily employed. To improve estimation based on the MLE, several studies considered generalized MLE (penalized likelihood or Bayesian) methods that incorporate a penalty function or prior information on the parameters. However, a generalized LME method for the same purpose has not yet been developed in the literature. We thus propose the generalized method of L-moment estimation (GLME), which incorporates a penalty function or prior information. The proposed estimation is based on the generalized L-moment distance and a multivariate normal likelihood approximation. Because the L-moment estimator is more efficient and robust for small samples than the MLE, we reasonably expect the advantages of LME to carry over to GLME. The proposed method is applied to stationary and nonstationary GEV models with two novel (data-adaptive) penalty functions to correct the bias of LME. A simulation study indicates that the biases of LME are considerably corrected by the GLME with only slight increases in the standard error. Applications to US flood damage data and maximum rainfall at Phliu Agromet in Thailand illustrate the usefulness of the proposed method. This study may promote further work on penalized or Bayesian inferences based on L-moments.
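For readers wanting a concrete baseline, here is a minimal sketch of the classical (non-generalized) L-moment fit of a stationary GEV: sample L-moments via probability-weighted moments and Hosking's rational approximation for the shape. This is not the authors' GLME; the function names and the placeholder data are illustrative.

```python
import numpy as np
from math import gamma, log

def sample_l_moments(x):
    """First three sample L-moments (l1, l2, l3) from unbiased probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    return b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0

def gev_from_l_moments(l1, l2, l3):
    """GEV location, scale, shape (Hosking's parameterization, shape k != 0) from L-moments."""
    t3 = l3 / l2
    c = 2.0 / (3.0 + t3) - log(2.0) / log(3.0)
    k = 7.8590 * c + 2.9554 * c ** 2                      # Hosking (1985) shape approximation
    alpha = l2 * k / ((1.0 - 2.0 ** (-k)) * gamma(1.0 + k))
    xi = l1 - alpha * (1.0 - gamma(1.0 + k)) / k
    return xi, alpha, k

annual_maxima = np.random.default_rng(0).gumbel(loc=10.0, scale=2.0, size=50)  # placeholder block maxima
l1, l2, l3 = sample_l_moments(annual_maxima)
xi, alpha, k = gev_from_l_moments(l1, l2, l3)
# Upper quantile (return level) for non-exceedance probability F: xi + alpha * (1 - (-log F)^k) / k
x99 = xi + alpha * (1.0 - (-np.log(0.99)) ** k) / k
```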
arXiv
Generalized method of L-moment estimation for stationary and nonstationary extreme value models
By Shin, Shin, Park et al
10.03.2026 03:48
arXiv:2303.11721v2 Announce Type: replace
Abstract: We discuss estimation and inference of conditional treatment effects in regression discontinuity designs with multiple scores. Aside from the commonly used local linear regression approach and a minimax-optimal estimator recently proposed by Imbens and Wager (2019), we consider two estimators based on random forests -- honest regression forests and local linear forests -- whose construction resembles that of standard local regressions, with theoretical validity following from results in Wager and Athey (2018) and Friedberg et al. (2020). We design a systematic Monte Carlo study with data generating processes built both from functional forms that we specify and from Wasserstein Generative Adversarial Networks that can closely mimic the observed data. We find that no single estimator dominates across all simulations: (i) local linear regressions perform well in univariate settings, but can undercover when multivariate scores are transformed into a univariate score -- which is commonly done in practice -- possibly due to the "zero-density" issue of the collapsed univariate score at the transformed cutoff; (ii) good performance of the minimax-optimal estimator depends on accurate estimation of a nuisance parameter and its current implementation only accepts up to two scores; (iii) forest-based estimators are not designed for estimation at boundary points and can suffer from bias in finite sample, but their flexibility in modeling multivariate scores opens the door to a wide range of empirical applications in multivariate regression discontinuity designs.
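As a point of reference for the estimators compared above, here is a minimal sketch of the textbook sharp-RDD local linear fit with a triangular kernel. It is a simplified stand-in, not the paper's forest-based or minimax-optimal estimators; the function name and bandwidth choice are illustrative.

```python
import numpy as np

def local_linear_rdd(score, y, cutoff=0.0, bandwidth=1.0):
    """Sharp-RDD effect at the cutoff: difference of side-specific kernel-weighted
    local linear intercepts (fitted values at the cutoff)."""
    def side_intercept(mask):
        x = score[mask] - cutoff
        w = 1.0 - np.abs(x) / bandwidth                          # triangular kernel weights
        X = np.column_stack([np.ones_like(x), x])
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y[mask]))
        return beta[0]
    near = np.abs(score - cutoff) <= bandwidth                    # keep only observations in the window
    above = score >= cutoff
    return side_intercept(near & above) - side_intercept(near & ~above)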
arXiv
Using Forests in Multivariate Regression Discontinuity Designs
By
10.03.2026 01:36
Spontaneous reporting system databases are key resources for post-marketing surveillance, providing real-world evidence (RWE) on the adverse events (AEs) of regulated drugs or other medical products. Various statistical methods have been proposed for AE signal detection in these databases, flagging drug-specific AEs with disproportionately high observed counts compared to expected counts under independence. However, signal detection remains challenging for rare AEs or newer drugs, which receive small observed and expected counts and thus suffer from reduced statistical power. Principled information sharing on signal strengths across drugs/AEs is crucial in such cases to enhance signal detection. However, existing methods typically ignore complex between-drug associations on AE signal strengths, limiting their ability to detect signals. We propose novel local-global mixture Dirichlet process (DP) prior-based nonparametric Bayesian models to capture these associations, enabling principled information sharing between drugs while balancing flexibility and shrinkage for each drug, thereby enhancing statistical power. We develop efficient Markov chain Monte Carlo algorithms for implementation and employ a false discovery rate (FDR)-controlled, false negative rate (FNR)-optimized hypothesis testing framework for AE signal detection. Extensive simulations demonstrate our methods' superior sensitivity -- often surpassing existing approaches by a twofold or greater margin -- while strictly controlling the FDR. An application to FDA FAERS data on statin drugs further highlights our methods' effectiveness in real-world AE signal detection. Software implementing our methods is provided as supplementary material.
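The observed-versus-expected disproportionality screen that the abstract builds on can be sketched in a few lines. This is the classical independence baseline, not the proposed Dirichlet process model; the threshold and minimum-count rule below are illustrative assumptions.

```python
import numpy as np

def flag_signals(counts, threshold=2.0, min_reports=3):
    """Expected counts under row/column independence and a simple relative reporting ratio screen.
    counts: drug-by-AE contingency table of report counts."""
    counts = np.asarray(counts, dtype=float)
    row = counts.sum(axis=1, keepdims=True)
    col = counts.sum(axis=0, keepdims=True)
    expected = row * col / counts.sum()                 # expected counts if drugs and AEs were independent
    rrr = counts / np.maximum(expected, 1e-12)          # observed / expected (relative reporting ratio)
    flags = (rrr > threshold) & (counts >= min_reports) # crude screen; no shrinkage or FDR control
    return rrr, flags
```

The abstract's point is precisely that such screens lose power for small observed and expected counts, which is what the proposed local-global DP priors address.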
arXiv
A Nonparametric Bayesian Local-Global Model for Enhanced Adverse Event Signal Detection in Spontaneous Reporting System Data
By Huang, Chakraborty
09.03.2026 22:08
Matching is one of the most widely used causal inference designs in observational studies, but post-matching confounding bias remains a challenge. This bias includes overt bias from inexact matching on measured confounders and hidden bias from the existence of unmeasured confounders. Researchers commonly apply the Rosenbaum-type sensitivity analysis framework after matching to assess the impact of these biases on causal conclusions. In this work, we show that this approach is often conservative because the solution to the Rosenbaum-type sensitivity model may allocate hypothetical hidden bias in ways that contradict the overt bias observed in the matched dataset. To address this problem, we propose an iterative convex programming approach that enhances sensitivity analysis by ensuring consistency between hidden and overt biases. The validity of our approach does not rely on modeling assumptions for treatment or outcome variables. Extensive simulations demonstrate substantial gains in statistical power of sensitivity analysis, and a real-world data application illustrates the practical benefits of our approach. We have also developed an open-source R package to facilitate the implementation of our approach.
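For context, the standard Rosenbaum-type bound that the paper argues can be conservative looks like this for matched pairs with binary outcomes (McNemar-type test). This is a minimal sketch; the function name and arguments are illustrative, and the proposed convex-programming refinement is not shown.

```python
from scipy.stats import binom

def rosenbaum_bound_pvalue(t_plus, n_discordant, gamma=1.0):
    """Upper bound on the one-sided p-value for matched pairs under Rosenbaum's sensitivity model.
    t_plus: discordant pairs where the treated unit had the event; gamma: odds bound on hidden bias."""
    p_plus = gamma / (1.0 + gamma)                       # worst-case per-pair assignment probability
    return binom.sf(t_plus - 1, n_discordant, p_plus)    # P(Binomial(n, p_plus) >= t_plus)

print(rosenbaum_bound_pvalue(t_plus=40, n_discordant=60, gamma=1.5))
```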
arXiv
Reconciling Overt Bias and Hidden Bias in Sensitivity Analysis for Matched Observational Studies
By Heng, Shen, Wang
09.03.2026 19:19
Stablecoins have historically depegged from par due to large sales, possibly speculative in nature, or to poor reserve asset quality. Using a global game which addresses both concerns, we show that the selling pressure on stablecoin holders increases in the presence of a large sale. While precise public knowledge reduces (increases) the probability of a run when fundamentals are strong (weak), interestingly, more precise private signals increase (reduce) the probability of a run when fundamentals are strong (weak), potentially explaining the stability of opaque stablecoins. The total run probability can be decomposed into components representing risks from large sales and poor collateral. By analyzing how these risk components vary with respect to information uncertainty and fundamentals, we can split the fundamental space into regions based on the type of risk a stablecoin issuer is more prone to. We suggest testable implications and connect our model's implications to real-world applications, including depegging events and the no-questions-asked property of money.
arXiv
Information Structures in Stablecoin Markets
By Zhu
09.03.2026 17:00
We establish the existence and uniqueness of the equilibrium for a stochastic mean-field game of optimal investment. The analysis covers both finite and infinite time horizons, and the mean-field interaction of the representative company with a mass of identical and indistinguishable firms is modeled through the time-dependent price at which the produced good is sold. At equilibrium, this price is given in terms of a nonlinear function of the expected (optimally controlled) production capacity of the representative company at each time. The proof of the existence and uniqueness of the mean-field equilibrium relies on a priori estimates and the study of nonlinear integral equations, but employs different techniques for the finite and infinite horizon cases. Additionally, we investigate the deterministic counterpart of the mean-field game under study.
arXiv
Existence and uniqueness results for a mean-field game of optimal investment
By Calvia, Federico, Ferrari et al
09.03.2026 16:56
Task-based models of AI and labor hold organizational structure fixed. We introduce agent capital: AI that reduces coordination costs, expanding spans of control and enabling endogenous task creation. Five propositions characterize how coordination compression affects output, hierarchy, manager demand, wage dispersion, and the task frontier. The model generates a regime fork: the same technology produces broad-based gains or superstar concentration depending on who benefits from coordination compression. Simulations with heterogeneous workers confirm sharp regime divergence. Economy-wide inequality falls in all regimes through employment expansion, but the manager-worker wage gap widens universally. The distributional impact hinges on who controls organizational elasticity.
arXiv
AI as Coordination-Compressing Capital: Task Reallocation, Organizational Redesign, and the Regime Fork
By Farach
09.03.2026 16:51
Introduction. The high prevalence of students not achieving basic learning competencies in Latin America (LAC) is concerning, even more so considering the region's deep structural inequalities and the larger post-pandemic learning losses. Within this scenario, the paper aims to contribute to the identification of the determinants of bottom and low performers (below level 2).
Methodology. Based on 2022 data from the Programme for International Student Assessment (PISA) for 10 LAC countries, using a stacking model that integrates binary classification models and applying Shapley Additive Explanations (SHAP) analysis for interpretability, we identify critical factors impacting student performance across low-performer groups.
Results. We find that a student with the highest probability of being a non-achiever speaks a minority language, has repeated a grade, has no digital devices at home, comes from a poor family and works for pay half of the week, and attends a school with wide disadvantages such as a poor school climate, weak Information and Communication Technology (ICT) infrastructure and poor teaching quality (only a third of teachers being certified). For country-level estimates, we find quite homogeneous patterns regarding the contribution of top-ranked factors, with repetition at primary level, household wealth, and educational ICT inputs being top-ten-ranked covariates in at least 8 of the 10 countries.
Discussions. The paper findings contribute to the broad literature on strategies to identify and to target those most left behind in Latin American education systems.
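A minimal sketch, on synthetic data, of the kind of stacking-plus-SHAP pipeline the Methodology paragraph describes. The base learners, meta-learner, and sample sizes are assumptions, not the paper's specification.

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a binary "low performer" indicator and student/school features.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
).fit(X_train, y_train)

# KernelExplainer is model-agnostic but slow: use a small background sample and test subset.
explainer = shap.KernelExplainer(lambda d: stack.predict_proba(d)[:, 1], shap.sample(X_train, 100))
shap_values = explainer.shap_values(X_test[:100])
```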
arXiv
Identifying the post-pandemic determinants of low performing students in Latin America through Interpretable Machine Learning methods
By Delprato
09.03.2026 16:48
We study the monotonicity of information costs: more informative experiments must be more costly. As criteria for informativeness, we consider the standard information orders introduced by Blackwell (1951, 1953) and Lehmann (1988). We provide simple necessary and sufficient conditions for a cost function to be monotone with respect to each order, grounded in their garbling characterizations. Finally, we examine several well-known cost functions from the literature through the lens of these conditions.
arXiv
On the Monotonicity of Information Costs
By Cheng, Kim
09.03.2026 16:46
We conduct experiments with algorithmic pricing agents based on Large Language Models (LLMs). In oligopoly settings, LLM-based pricing agents quickly and autonomously reach supracompetitive prices and profits. Variation in seemingly innocuous phrases in LLM instructions ("prompts") substantially influences the degree of supracompetitive pricing. We develop novel techniques for behavioral analysis of LLMs and use them to uncover price-war concerns as a contributing factor. Our results extend to auction settings. Our findings uncover unique challenges to any future regulation of LLM-based pricing agents, and AI-based pricing agents more broadly.
arXiv
Algorithmic Collusion by Large Language Models
By Fish, Gonczarowski, Shorrer
09.03.2026 16:41
This study explored the association between sleep duration and redistribution preferences. Using an online survey, we presented a hypothetical situation in which the tax paid directly by respondents is redistributed to those earning less than one-fifth of the respondents' income. We then asked about the allowable tax rates. We found the following through Tobit and ordered logit regression estimations: (1) The relationship between sleep hours and the allowable tax rate showed an inverted U-shape, where the optimal amount of sleep led to the highest allowable tax rate. (2) High-quality sleep was more positively correlated with the allowable tax rate than was low-quality sleep when the sleep quantity was the same. (3) Sleep hours were more significantly and positively correlated with the allowable tax rate in the high-income group than in the low-income group. (4) Assuming that twice the amount of tax paid goes to those with lower income, individuals who previously preferred a higher tax rate were more likely to increase the allowable tax rate.
arXiv
Sleep and redistribution preferences: Considering allowable tax rates
By Yamamura, Ohtake
09.03.2026 16:37
Using an individual-level panel dataset from Japan covering the period 2016-2024, we examined how the COVID-19 pandemic, as an unanticipated public crisis, affected preferences for income redistribution. Furthermore, we investigated how the association between redistribution preferences and trust in government changed before and after COVID-19. The major findings are as follows: (1) individuals in the high-income group are less likely to prefer redistribution after COVID-19 than before it; (2) the degree of decline in redistribution preference is lower when trust in government is higher; and (3) generalised trust and reciprocity did not influence the decline in preference.
arXiv
Preference for redistribution and institutional trust: Comparison before and after COVID-19
By Yamamura, Ohtake
09.03.2026 16:34
This study investigates shifts in the acceptable tax rate for reducing inequality during the COVID-19 pandemic using Japanese data. We find a transition from norm-based, unconditional support for redistribution to conditional altruism. Before the pandemic, support remained high and independent of institutional trust. The pandemic generated an overall decline in altruistic attitudes while increasing their dependence on trust in government, particularly among high-income individuals. This "widening gap" implies that in post-crisis societies, the social contract is no longer anchored in stable social norms but increasingly relies on institutional trust to sustain income redistribution from the rich to the poor.
arXiv
The Widening Gap in Tax Attitudes: Role of Government Trust in the post COVID-19 period
By Yamamura, Ohtake
09.03.2026 16:30
In priority-based matching, serial dictatorship (SD) is simple, strategyproof, and Pareto efficient, but not free of justified envy (i.e. fair). This paper studies how to fairly order agents in SD as a function of their priorities. I show that if preferences are identical across agents and uniformly distributed, and objects have unit capacities, the serial order that minimizes the expected number of justified envy cases is the Kemeny ranking of agents' priorities. If any of these assumptions -- identical preferences, uniformly distributed preferences, or unit capacities -- is relaxed, the optimal SD follows a weighted Kemeny ranking. Broadly, these results demonstrate how insights from social choice theory can inform the design of practical matching mechanisms.
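Since the Kemeny ranking is the key object in the result, a brute-force sketch may help fix ideas: the consensus order minimizing total pairwise disagreements with the input priority orders. The agent names are illustrative, and the search is exponential in the number of agents, so this is only for small examples.

```python
from itertools import permutations

def kemeny_ranking(rankings):
    """Brute-force Kemeny consensus: the ordering that minimizes the total number of
    pairwise disagreements with the given priority orders (each a tuple, best first)."""
    agents = rankings[0]
    def disagreements(order):
        pos = {a: i for i, a in enumerate(order)}
        total = 0
        for r in rankings:
            rpos = {a: i for i, a in enumerate(r)}
            total += sum(1 for a in agents for b in agents
                         if pos[a] < pos[b] and rpos[a] > rpos[b])
        return total
    return min(permutations(agents), key=disagreements)

# Example: three objects' priority orders over agents a, b, c.
print(kemeny_ranking([("a", "b", "c"), ("b", "a", "c"), ("a", "c", "b")]))
```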
arXiv
Making Serial Dictatorships Fair
By Hamdan
09.03.2026 16:27
This paper develops a nonlinear theoretical framework to analyze the dynamics of public expenditure reallocation in Uruguay. Motivated by recent debates on fiscal reform and expenditure efficiency, the paper models fiscal adjustment as a dynamic process in which expenditure categories exhibit heterogeneous institutional rigidity and convex adjustment costs.
Using the national budget for the 2026-2030 fiscal period as an institutional reference, the paper presents a calibrated illustration of the theoretical framework that captures key features of the structure of public spending, including transfers, the public wage bill, operating expenditures, and public investment. The calibration translates institutional characteristics of the budget into quantitative transition dynamics rather than estimating structural parameters econometrically.
The framework allows the evaluation of short-, medium-, and long-run fiscal implications of alternative reform strategies, including administrative restructuring, pension reform, and the gradual reallocation of resources toward human capital and productivity-enhancing investment. In contrast to descriptive expenditure reviews based on static budget comparisons, the model explicitly incorporates nonlinear transition dynamics and institutional frictions. Simulations show that structural expenditure reforms generate significant transitional fiscal costs arising from overlapping institutional systems, labor adjustment frictions, and pension transition liabilities.
As a result, fiscal reform produces a J-shaped expenditure trajectory in which total spending initially increases before gradually converging toward a more efficient long-run allocation. These findings highlight the importance of accounting for adjustment costs and transition dynamics when evaluating the feasibility and timing of structural fiscal reforms.
arXiv
Nonlinear Fiscal Transitions and the Dynamics of Public Expenditure Reform
By Vallarino
09.03.2026 16:24
Partial Least Squares (PLS) regression emerged as an alternative to ordinary least squares for addressing multicollinearity in a wide range of scientific applications. As multidimensional tensor data is becoming more widespread, tensor adaptations of PLS have been developed. Our investigations reveal that the previously established asymptotic result of the PLS estimator for a tensor response breaks down as the tensor dimensions and the number of features increase relative to the sample size. To address this, we propose Sparse Higher Order Partial Least Squares (SHOPS) regression and an accompanying algorithm. SHOPS simultaneously accommodates variable selection, dimension reduction, and tensor association denoising. We establish the asymptotic accuracy of the SHOPS algorithm under a high-dimensional regime and verify these results through comprehensive simulation experiments and applications to two contemporary high-dimensional biological data analyses.
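As a baseline for the tensor extension discussed above, here is a minimal sketch of ordinary PLS regression on a vectorized response with simulated sparse data. This is the classical estimator whose tensor-response asymptotics the abstract says break down, not the SHOPS algorithm; dimensions and the sparsity pattern are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n, p, q = 100, 50, 12                      # samples, predictors, vectorized response entries
X = rng.normal(size=(n, p))
B = np.zeros((p, q))
B[:5, :] = rng.normal(size=(5, q))         # sparse signal: only the first 5 predictors matter
Y = X @ B + 0.5 * rng.normal(size=(n, q))

pls = PLSRegression(n_components=3).fit(X, Y)
Y_hat = pls.predict(X)
print("in-sample R^2:", pls.score(X, Y))
```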
arXiv
Sparse higher order partial least squares for simultaneous variable selection, dimension reduction, and tensor denoising
By
09.03.2026 01:37
arXiv:2404.12882v1 Announce Type: new
Abstract: In this paper, we analyse the influence of estimating a constant term on the bias of the conditional sum-of-squares (CSS) estimator in a stationary or non-stationary type-II ARFIMA($p_1$,$d$,$p_2$) model. We derive expressions for the estimator's bias and show that the leading term can be easily removed by a simple modification of the CSS objective function. We call this new estimator the modified conditional sum-of-squares (MCSS) estimator. We show theoretically and by means of Monte Carlo simulations that its performance relative to that of the CSS estimator is markedly improved even for small sample sizes. Finally, we revisit three classical short datasets that have in the past been described by ARFIMA($p_1$,$d$,$p_2$) models with constant term, namely the post-World War II real GNP data, the extended Nelson-Plosser data, and the Nile data.
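A minimal sketch of the (unmodified) CSS objective for the simplest ARFIMA(0,d,0)-with-constant case, using type-II truncated fractional differencing. The series below is a placeholder rather than the GNP, Nelson-Plosser, or Nile data, and the MCSS modification itself is not shown.

```python
import numpy as np
from scipy.optimize import minimize

def frac_diff(x, d):
    """Type-II (truncated) fractional differencing: pi_0 = 1, pi_j = pi_{j-1} * (j - 1 - d) / j."""
    n = len(x)
    pi = np.ones(n)
    for j in range(1, n):
        pi[j] = pi[j - 1] * (j - 1 - d) / j
    # e_t = sum_{j=0}^{t} pi_j * x_{t-j}, with pre-sample values treated as zero
    return np.array([pi[: t + 1][::-1] @ x[: t + 1] for t in range(n)])

def css(params, y):
    d, mu = params
    e = frac_diff(y - mu, d)          # residuals after demeaning and fractional differencing
    return np.sum(e ** 2)             # conditional sum of squares

rng = np.random.default_rng(1)
y = 10.0 + rng.normal(size=300)       # placeholder series with a constant term
fit = minimize(css, x0=np.array([0.3, y.mean()]), args=(y,), method="Nelder-Mead")
d_hat, mu_hat = fit.x
print("estimated d:", round(d_hat, 3), " estimated constant:", round(mu_hat, 3))
```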
arXiv
The modified conditional sum-of-squares estimator for fractionally integrated models
By
08.03.2026 22:06
Periodontal pocket depth is a widely used biomarker for diagnosing risk of periodontal disease. However, pocket depth typically exhibits skewness and heavy-tailedness, and its relationship with clinical risk factors is often nonlinear. Motivated by periodontal studies, this paper develops a robust single-index modal regression framework for analyzing skewed and heavy-tailed data. Our method has the following novel features: (1) a flexible two-piece scale Student-$t$ error distribution that generalizes both normal and two-piece scale normal distributions; (2) a deep neural network with guaranteed monotonicity constraints to estimate the unknown single-index function; and (3) theoretical guarantees, including model identifiability and a universal approximation theorem. Our single-index model combines the flexibility of neural networks and the two-piece scale Student-$t$ distribution, delivering robust mode-based estimation that is resistant to outliers, while retaining clinical interpretability through parametric index coefficients. We demonstrate the performance of our method through simulation studies and an application to periodontal disease data from the HealthPartners Institute of Minnesota. The proposed methodology is implemented in the \textsf{R} package \href{https://doi.org/10.32614/CRAN.package.DNNSIM}{\textsc{DNNSIM}}.
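The two-piece scale Student-t error distribution mentioned above can be written down compactly. Below is a minimal sketch of its density; the symbols mu, s1, s2, nu are illustrative, and this omits the single-index DNN part of the model.

```python
import numpy as np
from scipy.stats import t as student_t

def two_piece_t_pdf(y, mu=0.0, s1=1.0, s2=2.0, nu=4.0):
    """Two-piece scale Student-t density: scale s1 to the left of the mode mu, s2 to its right.
    The 2/(s1+s2) factor makes the density integrate to one; s2 > s1 puts more mass in the right tail."""
    y = np.asarray(y, dtype=float)
    scale = np.where(y < mu, s1, s2)
    return 2.0 / (s1 + s2) * student_t.pdf((y - mu) / scale, df=nu)

grid = np.linspace(-6, 10, 7)
print(two_piece_t_pdf(grid))
```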
arXiv
A Robust Monotonic Single-Index Model for Skewed and Heavy-Tailed Data: A Deep Neural Network Approach Applied to Periodontal Studies
By Liu, Wang, Bai et al
08.03.2026 19:08
arXiv:2402.17915v1 Announce Type: new
Abstract: Safe and reliable disclosure of information from confidential data is a challenging statistical problem. A common approach considers the generation of synthetic data, to be disclosed instead of the original data. Efficient approaches ought to deal with the trade-off between reliability and confidentiality of the released data. Ultimately, the aim is to be able to reproduce as accurately as possible statistical analyses of the original data using the synthetic one. Bayesian networks are a model-based approach that can be used to parsimoniously estimate the underlying distribution of the original data and generate synthetic datasets. These ought to not only approximate the results of analyses with the original data but also robustly quantify the uncertainty involved in the approximation. This paper proposes a fully Bayesian approach to generate and analyze synthetic data based on the posterior predictive distribution of statistics of the synthetic data, allowing for efficient uncertainty quantification. The methodology makes use of probability properties of the model to devise a computationally efficient algorithm to obtain the target predictive distributions via Monte Carlo. Model parsimony is handled by proposing a general class of penalizing priors for Bayesian network models. Finally, the efficiency and applicability of the proposed methodology are empirically investigated through simulated and real examples.
arXiv
Generation and analysis of synthetic data via Bayesian networks: a robust approach for uncertainty quantification via Bayesian paradigm
By
08.03.2026 16:07
arXiv:2401.11263v2 Announce Type: replace
Abstract: Methods for estimating heterogeneous treatment effects (HTE) from observational data have largely focused on continuous or binary outcomes, with less attention paid to survival outcomes and almost none to settings with competing risks. In this work, we develop censoring unbiased transformations (CUTs) for survival outcomes both with and without competing risks. After converting time-to-event outcomes using these CUTs, direct application of HTE learners for continuous outcomes yields consistent estimates of heterogeneous cumulative incidence effects, total effects, and separable direct effects. Our CUTs enable application of a much larger set of state of the art HTE learners for censored outcomes than had previously been available, especially in competing risks settings. We provide generic model-free learner-specific oracle inequalities bounding the finite-sample excess risk. The oracle efficiency results depend on the oracle selector and estimated nuisance functions from all steps involved in the transformation. We demonstrate the empirical performance of the proposed methods in simulation studies.
arXiv
Estimating Heterogeneous Treatment Effects on Survival Outcomes Using Counterfactual Censoring Unbiased Transformations
By
08.03.2026 03:49
While there is an immense literature on Bayesian methods for clustering, the multiview case has received little attention. This problem focuses on obtaining distinct but statistically dependent clusterings in a common set of entities for different data types, for example, clustering patients into subgroups with subgroup membership varying according to the domain of the patient variables. A challenge is how to model the across-view dependence between the partitions of patients into subgroups. The complexities of the partition space make standard methods to model dependence, such as correlation, infeasible. In this article, we propose CLustering with Independence Centering (CLIC), a clustering prior that uses a single parameter to explicitly model dependence between clusterings across views. CLIC is induced by the product centered Dirichlet process (PCDP), a novel hierarchical prior that bridges between independent and equivalent partitions. We show appealing theoretical properties, provide a finite approximation and prove its accuracy, present a marginal Gibbs sampler for posterior computation, and derive closed-form expressions for the marginal and joint partition distributions for the CLIC model. On synthetic data and in an application to epidemiology, CLIC accurately characterizes view-specific partitions while providing inference on the dependence level.
arXiv
Product Centered Dirichlet Processes for Dependent Clustering
By
08.03.2026 01:36
Existing approaches to asset-pricing under model-uncertainty adapt classical utility-maximisation frameworks and seek theoretical comprehensiveness. We move toward practice by considering binary model-uncertainties and by switching attention from 'preference' to 'constraints'. Economic asset-pricing in this setting is found to decompose naturally into the viable pricing of model-risk and of non-model risk separately such that the former has a unique and intuitive risk-neutral equivalent formulation with convenient properties. Its parameter, a dynamically conserved constant of model-risk inference, allows an integrated representation of ex-ante risk-pricing and bias, such that their ex-post price-effects can be disentangled, through well-known price anomalies.
arXiv
The Risk-Neutral Equivalent Pricing of Model-Uncertainty
By Wren
07.03.2026 22:06
arXiv:2112.07755v2 Announce Type: replace
Abstract: We argue for the use of separate exchangeability as a modeling principle in Bayesian nonparametric (BNP) inference. Separate exchangeability is \emph{de facto} widely applied in the Bayesian parametric case, e.g., it naturally arises in simple mixed models. However, while in some areas, such as random graphs, separate and (closely related) joint exchangeability are widely used, separate exchangeability is curiously underused for several other applications in BNP. We briefly review the definition of separate exchangeability, focusing on the implications of such a definition in Bayesian modeling. We then discuss two tractable classes of models that implement separate exchangeability and are the natural counterparts of familiar partially exchangeable BNP models.
The first is nested random partitions for a data matrix, defining a partition of columns and nested partitions of rows, nested within column clusters. Many recent models for nested partitions implement partially exchangeable models related to variations of the well-known nested Dirichlet process. We argue that inference under such models in some cases ignores important features of the experimental setup. We obtain the separately exchangeable counterpart of such partially exchangeable partition structures.
The second class is about setting up separately exchangeable priors for a nonparametric regression model when multiple sets of experimental units are involved. We highlight how a Dirichlet process mixture of linear models known as ANOVA DDP can naturally implement separate exchangeability in such regression problems. Finally, we illustrate how to perform inference under such models in two real data examples.
arXiv
Separate Exchangeability as Modeling Principle in Bayesian Nonparametrics
By
07.03.2026 19:07
Observational studies are often conducted to estimate causal effects of treatments or exposures on event-time outcomes. Since treatments are not randomized in observational studies, techniques from causal inference are required to adjust for confounding. Bayesian approaches to causal estimation are desirable because they provide 1) prior smoothing that usefully regularizes causal effect estimates, 2) flexible models that are robust to misspecification, and 3) full inference (i.e. both point and uncertainty estimates) for causal estimands. However, Bayesian causal inference is difficult to implement manually and there is a lack of user-friendly software, presenting a significant barrier to widespread use. We address this gap by developing causalBETA (Bayesian Event Time Analysis) - an open-source R package for estimating causal effects on event-time outcomes using Bayesian semiparametric models. The package provides a familiar front-end to users, with syntax identical to existing survival analysis R packages such as survival. At the same time, it back-ends to Stan - a popular platform for Bayesian modeling and high performance statistical computing - for efficient posterior computation. To improve user experience, the package is built using customized S3 class objects and methods to facilitate visualizations and summaries of results using familiar generic functions like plot() and summary(). In this paper, we provide the methodological details of the package, a demonstration using publicly-available data, and computational guidance.
arXiv
causalBETA: An R Package for Bayesian Semiparametric Causal Inference with Event-Time Outcomes
By
07.03.2026 16:07
This paper analyses how firms' skill development strategies affect their propensity to introduce innovation. We develop an adjustment-cost framework that links human capital theory and institutionalist and evolutionary approaches, considering innovation as an activity that entails costs in labour adjustment arising either from the training activities of workers or the recruitment of skilled employees. Using a two-wave panel of Italian manufacturing firms observed in 2017-2018 and 2019-2020, we analyse firms' adoption of total, product, process, and circular innovation as a function of internal training practices and of external skills acquisition. Overall, the empirical analysis confirms the expected positive relationship between training and innovation, while also revealing important nuances in the workforce upskilling strategies required for different types of innovation. Moreover, while training activities and skills development are essential across all forms of innovation, our findings indicate that internal training is particularly effective in supporting the implementation of circular innovations. By contrast, external recruitment appears to be consistently necessary whenever innovations are introduced, regardless of their type.
arXiv
Training and Innovation in Italian Manufacturing Firms
By Antonioli, Chioatto, Guidetti et al
07.03.2026 03:47
This paper studies patenting trends in artificial intelligence (AI) and robotics from 1980 to 2019. We introduce a novel distinction between traditional robotics and robotics embedding AI functionalities. Using patent data and a time-series econometric approach, we examine whether these domains share common long-run dynamics and how their trajectories differ across major innovation systems. Three main findings emerge. First, patenting activity in core AI, traditional robots, and AI-enhanced robots follows distinct trajectories, with AI-enhanced robotics accelerating sharply from the early 2010s. Second, structural breaks occur predominantly after 2010, indicating an acceleration in the technological dynamics associated with AI diffusion. Third, long-run relationships between AI and robotics vary systematically across countries: China exhibits strong integration between core AI and AI-enhanced robots, alongside a substantial contribution from universities and the public sector, whereas the United States displays a more market-oriented patenting structure and weaker integration between AI and robots. Europe, Japan, and South Korea show intermediate patterns.
arXiv
The "Gold Rush" in AI and Robotics Patenting Activity. Do innovation systems have a role?
By Guidetti, Leoncini, Macaluso
07.03.2026 03:42