For algorithm == 'kd_tree': all parameters are supported except:
- metric not in ['euclidean', 'minkowski'] ('minkowski' is supported only with p = 2)
For algorithm == 'brute': all parameters are supported except:
- metric not in ['euclidean', 'manhattan', 'minkowski', 'chebyshev', 'cosine']
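As an illustration, here is a minimal sketch of a k-nearest-neighbors configuration that stays inside the supported set above. The `patch_sklearn` call assumes these tables describe the Extension for Scikit-learn (sklearnex), which this section does not name explicitly; everything else is standard scikit-learn.

```python
import numpy as np
from sklearnex import patch_sklearn  # assumed acceleration entry point for these tables
patch_sklearn()

from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=200, n_features=8, random_state=0)
X = np.asarray(X, dtype=np.float64)  # dense float64 ndarray

# 'minkowski' with p=2 (i.e. Euclidean) is within the supported metric set
knn = KNeighborsClassifier(n_neighbors=5, algorithm='brute', metric='minkowski', p=2)
knn.fit(X, y)
print(knn.predict(X[:5]))
```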
Multi-output and sparse data are not supported. Number of classes must be at least 2.
All parameters are supported except:
- solver not in ['lbfgs', 'newton-cg']
- l1_ratio != 0
- dual = True
- sample_weight != None
- class_weight != None
- solver = 'newton-cg' with fit_intercept = False
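For example, a logistic regression kept within these restrictions might look like the sketch below (plain scikit-learn calls; the acceleration layer is assumed to pick it up when active).

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=10, n_classes=2, random_state=0)

# 'lbfgs' is one of the two supported solvers; the defaults keep dual=False,
# class_weight=None, and no elastic-net l1_ratio, so the call stays in the supported set
clf = LogisticRegression(solver='lbfgs', fit_intercept=True, C=1.0)
clf.fit(X, y)  # fit without sample_weight, per the restriction above
print(clf.score(X, y))
```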
All parameters are supported except:
- solver != 'auto'
- sample_weight != None
- positive = True (supported through the class sklearn.linear_model.ElasticNet)
- alpha must be a scalar
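A ridge regression within the restrictions above could look like this sketch:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge

X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)

# solver='auto' and a scalar alpha match the restrictions above;
# positive=True would instead be handled through sklearn.linear_model.ElasticNet
reg = Ridge(alpha=1.0, solver='auto', positive=False)
reg.fit(X, y)  # no sample_weight, per the restriction above
print(reg.coef_)
```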
All parameters are supported except:
- algorithm != 'lloyd' ('elkan' falls back to 'lloyd')
- n_clusters = 1
- sample_weight must be None, constant, or equal weights
- verbose = True only prints results from the last iteration and only prints inertia values, not 'convergence achieved' messages
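For instance, a KMeans call that satisfies these restrictions (a sketch using plain scikit-learn):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

# algorithm='lloyd' and n_clusters > 1 satisfy the restrictions above;
# requesting algorithm='elkan' would fall back to 'lloyd' as noted
km = KMeans(n_clusters=4, algorithm='lloyd', random_state=0)
km.fit(X)  # no sample_weight passed, equivalent to equal weights
print(km.inertia_)
```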
All parameters are supported except:
- svd_solver not in ['full', 'covariance_eigh', 'onedal_svd']
For scikit-learn < 1.5, the 'full' solver is automatically mapped to 'covariance_eigh'.
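A PCA configuration that stays within the supported solver list, as a sketch:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))

# 'full' is in the supported solver list; on scikit-learn < 1.5 it is
# automatically mapped to 'covariance_eigh', as noted above
pca = PCA(n_components=3, svd_solver='full')
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)  # (200, 3)
```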
All parameters are supported except:
- metric not in ['euclidean', 'minkowski'] ('minkowski' is supported only with p = 2)
- n_components can only be 2
- method != 'barnes_hut'
Refer to the TSNE acceleration details to learn more.
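A TSNE embedding within these constraints might be set up like this sketch:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))

# n_components=2, the 'barnes_hut' method, and the Euclidean metric
# keep the embedding inside the supported configuration above
tsne = TSNE(n_components=2, method='barnes_hut', metric='euclidean',
            perplexity=30.0, random_state=0)
X_embedded = tsne.fit_transform(X)
print(X_embedded.shape)  # (100, 2)
```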
For algorithm == 'kd_tree': all parameters are supported except:
- metric not in ['euclidean', 'minkowski'] ('minkowski' is supported only with p = 2)
For algorithm == 'brute': all parameters are supported except:
- metric not in ['euclidean', 'manhattan', 'minkowski', 'chebyshev', 'cosine']
Supported data formats:
- Only dense data is supported
- Only integer and 32/64-bit floating-point types are supported
- Data with more than 3 dimensions is not supported
- Only np.ndarray inputs are supported
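A small sketch of preparing input that satisfies these format rules:

```python
import numpy as np

# Cast an arbitrary array-like into a format that satisfies the rules above:
# a dense np.ndarray with a 64-bit floating-point dtype and at most 3 dimensions.
raw = [[1, 2.5], [3, 4.0], [5, 6.5]]
X = np.ascontiguousarray(raw, dtype=np.float64)

assert isinstance(X, np.ndarray) and X.ndim <= 3
print(X.dtype, X.shape)  # float64 (3, 2)
```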
All parameters are supported except:
- algorithm != 'brute'
- weights = 'callable'
- metric not in ['euclidean', 'manhattan', 'minkowski', 'chebyshev', 'cosine']
Only dense data is supported. Number of classes must be at least 2.
All parameters are supported except:
- algorithm != 'lloyd' ('elkan' falls back to 'lloyd')
- n_clusters = 1
- sample_weight must be None, constant, or equal weights
- init = 'k-means++' falls back to CPU
- verbose = True only prints results from the last iteration and only prints inertia values, not 'convergence achieved' messages
All parameters are supported except:
- svd_solver not in ['full', 'covariance_eigh', 'onedal_svd']
For scikit-learn < 1.5, the 'full' solver is automatically mapped to 'covariance_eigh'.
All parameters are supported except:
- algorithm != 'brute'
- weights = 'callable'
- metric not in ['euclidean', 'manhattan', 'minkowski', 'chebyshev', 'cosine']
All parameters are supported except:
- algorithm != 'brute'
- weights = 'callable'
- metric not in ['euclidean', 'manhattan', 'minkowski', 'chebyshev', 'cosine']
- predict_proba method is not supported
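Since predict_proba is not covered here, a compliant classifier would be used for hard labels only, as in this sketch:

```python
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=200, n_features=6, random_state=0)

# 'brute' with a listed metric and the default (non-callable) weights satisfy
# the restrictions above; only predict() is used, since predict_proba is not supported
knn = KNeighborsClassifier(n_neighbors=3, algorithm='brute', metric='manhattan')
knn.fit(X, y)
labels = knn.predict(X[:10])  # hard labels only, no probability estimates
print(labels)
```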
Only dense data is supported. Number of classes must be at least 2.
All parameters are supported except:
- algorithm != 'lloyd' ('elkan' falls back to 'lloyd')
- n_clusters = 1
- sample_weight must be None, constant, or equal weights
- init = 'k-means++' falls back to CPU
- verbose = True only prints results from the last iteration and only prints inertia values, not 'convergence achieved' messages
All parameters are supported except:
- svd_solver not in ['full', 'covariance_eigh', 'onedal_svd']
For scikit-learn < 1.5, the 'full' solver is automatically mapped to 'covariance_eigh'.
All parameters are supported except:
- algorithm != 'brute'
- weights = 'callable'
- metric not in ['euclidean', 'manhattan', 'minkowski', 'chebyshev', 'cosine']