(Repost of issue #996 as it's been almost a year since the last activity there)
Leave-one-out cross-validation is implemented in `loo_cv()`, but the resulting resamples can't be used with the rest of the tidymodels framework, e.g. for computing resampled metrics.
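A minimal sketch of the gap (function names from rsample/tune; the exact error message will depend on the installed versions):

```r
library(tidymodels)

# rsample happily creates the LOO resamples...
folds <- loo_cv(mtcars)

spec <- linear_reg()

# ...but downstream tooling rejects them: on current tune versions this
# errors instead of returning resampled metrics.
fit_resamples(spec, mpg ~ ., resamples = folds)
```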
There is a persistent myth in the statistics community that LOOCV has poor statistical properties, which seems to trace back to an unsubstantiated claim in Efron (1983). Numerous modern studies have shown that LOOCV actually has very good properties (e.g. Zhang & Yang (2015)) and that it is often preferable to k-fold cross-validation with k < n.
Adding full support for LOOCV would, as the tidymodels design goals put it, encourage good statistical practice. Moreover, it would add a basic feature of the predictive modelling toolbox that is currently missing from the package (enabling a wider variety of methodologies, once again in line with the design goals).