
cross_validate(estimator, X, y=None, *, groups=None, scoring=None, cv=None, n_jobs=None, verbose=0, fit_params=None, pre_dispatch='2*n_jobs', return_train_score=False, return_estimator=False, error_score=nan)

Evaluate metric(s) by cross-validation and also record fit/score times.

Parameters

estimator : estimator object implementing 'fit'

X : array-like of shape (n_samples, n_features)

y : array-like of shape (n_samples,) or (n_samples, n_outputs), default=None
    The target variable to try to predict in the case of supervised learning.

groups : array-like of shape (n_samples,), default=None
    Group labels for the samples used while splitting the dataset into train/test set. Only used in conjunction with a "Group" cv instance (e.g., GroupKFold).

scoring : str, callable, list, tuple, or dict, default=None
    Strategy to evaluate the performance of the cross-validated model on the test set.

    If scoring represents a single score, one can use:
    - a single string (see The scoring parameter: defining model evaluation rules);
    - a callable (see Defining your scoring strategy from metric functions) that returns a single value.

    If scoring represents multiple scores, one can use:
    - a callable returning a dictionary where the keys are the metric names and the values are the metric scores;
    - a dictionary with metric names as keys and callables as values.

    See Specifying multiple metrics for evaluation for an example.

cv : int, cross-validation generator or an iterable, default=None
    Determines the cross-validation splitting strategy. Possible inputs for cv are:
    - None, to use the default 5-fold cross validation,
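As a brief illustration of the parameters above, here is a minimal usage sketch (assuming scikit-learn is installed, and using the built-in iris dataset and LogisticRegression purely as placeholders): passing a list of metric names to scoring requests multiple scores, and the returned dictionary then contains one test_<name> entry per metric alongside the recorded fit/score times.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000)

# cv=5 matches the default 5-fold strategy; scoring as a list requests
# multiple metrics, so the result has one test_<name> key per metric.
results = cross_validate(
    clf, X, y,
    cv=5,
    scoring=["accuracy", "f1_macro"],
    return_train_score=False,
)

print(sorted(results.keys()))
# ['fit_time', 'score_time', 'test_accuracy', 'test_f1_macro']
print(results["test_accuracy"].mean())
```

Each value in the returned dictionary is an array with one entry per fold (five here), so aggregating with `.mean()` or `.std()` gives an overall estimate of performance.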

