ALSModel (Spark 3.5.5 JavaDoc)
Object
- org.apache.spark.ml.PipelineStage
  - org.apache.spark.ml.Transformer
    - org.apache.spark.ml.Model<ALSModel>
      - org.apache.spark.ml.recommendation.ALSModel
All Implemented Interfaces:
java.io.Serializable, org.apache.spark.internal.Logging, Params, HasBlockSize, HasPredictionCol, ALSModelParams, Identifiable, MLWritable
public class ALSModel
extends Model<ALSModel>
implements ALSModelParams, MLWritable
Model fitted by ALS.
Parameters:
- `rank`: rank of the matrix factorization model
- `userFactors`: a DataFrame that stores user factors in two columns: `id` and `features`
- `itemFactors`: a DataFrame that stores item factors in two columns: `id` and `features`
See Also:
Serialized Form
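For orientation, a minimal fitting sketch (not part of this API page): it builds a tiny in-memory ratings DataFrame and fits an `ALS` estimator to obtain an `ALSModel`. The column names, data, and local master are assumptions for illustration.

```scala
import org.apache.spark.ml.recommendation.{ALS, ALSModel}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("als-example").master("local[*]").getOrCreate()
import spark.implicits._

// A tiny in-memory ratings set; real data would come from storage.
val ratings = Seq(
  (0, 0, 4.0f), (0, 1, 2.0f),
  (1, 0, 3.0f), (1, 2, 5.0f),
  (2, 1, 1.0f), (2, 2, 4.0f)
).toDF("user", "item", "rating")

val model: ALSModel = new ALS()
  .setUserCol("user")
  .setItemCol("item")
  .setRatingCol("rating")
  .setRank(5)
  .fit(ratings)

println(s"rank = ${model.rank}")  // rank of the factorization
model.userFactors.show()          // two columns: id, features
```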
Nested Class Summary
### Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging
`org.apache.spark.internal.Logging.SparkShellLoggingFilter`
Method Summary
| Modifier and Type | Method and Description |
| --- | --- |
| `IntParam` | `blockSize()` Param for block size for stacking input data in matrices. |
| `Param<String>` | `coldStartStrategy()` Param for strategy for dealing with unknown or new users/items at prediction time. |
| `ALSModel` | `copy(ParamMap extra)` Creates a copy of this instance with the same UID and some extra params. |
| `Param<String>` | `itemCol()` Param for the column name for item ids. |
| `Dataset<Row>` | `itemFactors()` |
| `static ALSModel` | `load(String path)` |
| `Param<String>` | `predictionCol()` Param for prediction column name. |
| `int` | `rank()` |
| `static MLReader<ALSModel>` | `read()` |
| `Dataset<Row>` | `recommendForAllItems(int numUsers)` Returns top `numUsers` users recommended for each item, for all items. |
| `Dataset<Row>` | `recommendForAllUsers(int numItems)` Returns top `numItems` items recommended for each user, for all users. |
| `Dataset<Row>` | `recommendForItemSubset(Dataset<?> dataset, int numUsers)` Returns top `numUsers` users recommended for each item id in the input data set. |
| `Dataset<Row>` | `recommendForUserSubset(Dataset<?> dataset, int numItems)` Returns top `numItems` items recommended for each user id in the input data set. |
| `ALSModel` | `setBlockSize(int value)` Set block size for stacking input data in matrices. |
| `ALSModel` | `setColdStartStrategy(String value)` |
| `ALSModel` | `setItemCol(String value)` |
| `ALSModel` | `setPredictionCol(String value)` |
| `ALSModel` | `setUserCol(String value)` |
| `String` | `toString()` |
| `Dataset<Row>` | `transform(Dataset<?> dataset)` Transforms the input dataset. |
| `StructType` | `transformSchema(StructType schema)` Check transform validity and derive the output schema from the input schema. |
| `String` | `uid()` An immutable unique ID for the object and its derivatives. |
| `Param<String>` | `userCol()` Param for the column name for user ids. |
| `Dataset<Row>` | `userFactors()` |
| `MLWriter` | `write()` Returns an `MLWriter` instance for this ML instance. |

### Methods inherited from class org.apache.spark.ml.Model
`hasParent, parent, setParent`

### Methods inherited from class org.apache.spark.ml.Transformer
`transform, transform, transform`

### Methods inherited from class org.apache.spark.ml.PipelineStage
`params`

### Methods inherited from class Object
`equals, getClass, hashCode, notify, notifyAll, wait, wait, wait`

### Methods inherited from interface org.apache.spark.ml.recommendation.ALSModelParams
`checkIntegers, getColdStartStrategy, getItemCol, getUserCol`

### Methods inherited from interface org.apache.spark.ml.param.shared.HasPredictionCol
`getPredictionCol`

### Methods inherited from interface org.apache.spark.ml.param.shared.HasBlockSize
`getBlockSize`

### Methods inherited from interface org.apache.spark.ml.param.Params
`clear, copyValues, defaultCopy, defaultParamMap, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, onParamChange, paramMap, params, set, set, set, setDefault, setDefault, shouldOwn`

### Methods inherited from interface org.apache.spark.ml.util.MLWritable
`save`

### Methods inherited from interface org.apache.spark.internal.Logging
`$init$, initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, initLock, isTraceEnabled, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning, org$apache$spark$internal$Logging$$log__$eq, org$apache$spark$internal$Logging$$log_, uninitialize`
Method Detail
#### read

`public static MLReader<ALSModel> read()`

#### load

`public static ALSModel load(String path)`

#### userCol

`public Param<String> userCol()`

Param for the column name for user ids. Ids must be integers. Other numeric types are supported for this column, but they will be cast to integers as long as they fall within the integer value range. Default: "user".

Specified by: `userCol` in interface `ALSModelParams`
Returns: (undocumented)

#### itemCol

`public Param<String> itemCol()`

Param for the column name for item ids. Ids must be integers. Other numeric types are supported for this column, but they will be cast to integers as long as they fall within the integer value range. Default: "item".

Specified by: `itemCol` in interface `ALSModelParams`
Returns: (undocumented)

#### coldStartStrategy

`public Param<String> coldStartStrategy()`

Param for the strategy for dealing with unknown or new users/items at prediction time. This may be useful in cross-validation or production scenarios, for handling user/item ids the model has not seen in the training data. Supported values:

- "nan": the predicted value for unknown ids will be NaN.
- "drop": rows in the input DataFrame containing unknown ids will be dropped from the output DataFrame containing predictions.

Default: "nan".

Specified by: `coldStartStrategy` in interface `ALSModelParams`
Returns: (undocumented)

#### blockSize

`public final IntParam blockSize()`

Param for block size for stacking input data in matrices. Data is stacked within partitions. If the block size is larger than the remaining data in a partition, it is adjusted to the size of that data.

Specified by: `blockSize` in interface `HasBlockSize`
Returns: (undocumented)
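A short sketch of the persistence round trip via `write`/`save` and `load`, combined with `setColdStartStrategy`; the save path is hypothetical and `model` is assumed from the fitting sketch above.

```scala
// Persist the fitted model, then reload it elsewhere; "drop" removes rows
// whose user/item ids were unseen at training time instead of scoring NaN.
model.write.overwrite().save("/tmp/als-model") // hypothetical path
val reloaded = ALSModel.load("/tmp/als-model")
  .setColdStartStrategy("drop")
```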
#### predictionCol

`public final Param<String> predictionCol()`

Param for the prediction column name.

Specified by: `predictionCol` in interface `HasPredictionCol`
Returns: (undocumented)

#### uid

`public String uid()`

An immutable unique ID for the object and its derivatives.

Specified by: `uid` in interface `Identifiable`
Returns: (undocumented)

#### rank

`public int rank()`

#### userFactors

`public Dataset<Row> userFactors()`

#### itemFactors

`public Dataset<Row> itemFactors()`

#### setUserCol

`public ALSModel setUserCol(String value)`

#### setItemCol

`public ALSModel setItemCol(String value)`

#### setPredictionCol

`public ALSModel setPredictionCol(String value)`

#### setColdStartStrategy

`public ALSModel setColdStartStrategy(String value)`

#### setBlockSize

`public ALSModel setBlockSize(int value)`

Set block size for stacking input data in matrices. Default is 4096.

Parameters: `value` - (undocumented)
Returns: (undocumented)

#### transform

`public Dataset<Row> transform(Dataset<?> dataset)`

Transforms the input dataset.

Specified by: `transform` in class `Transformer`
Parameters: `dataset` - (undocumented)
Returns: (undocumented)
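A sketch of scoring with `transform`, continuing from the fitting sketch above (so `model` and `ratings` are assumed in scope):

```scala
// Score a DataFrame that contains the configured user and item columns
// ("user"/"item" in the fitting sketch). The output gains a float
// prediction column with the configured name.
val predictions = model
  .setPredictionCol("prediction")
  .transform(ratings) // reusing the training DataFrame purely for illustration
predictions.show()
```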
#### transformSchema

`public StructType transformSchema(StructType schema)`

Check transform validity and derive the output schema from the input schema. Validity checks for interactions between parameters are performed during `transformSchema`, and an exception is raised if any parameter value is invalid. Parameter value checks that do not depend on other parameters are handled by `Param.validate()`. A typical implementation should first verify schema changes and parameter validity, including complex parameter interaction checks.

Specified by: `transformSchema` in class `PipelineStage`
Parameters: `schema` - (undocumented)
Returns: (undocumented)

#### copy

`public ALSModel copy(ParamMap extra)`

Description copied from interface: `Params`. Creates a copy of this instance with the same UID and some extra params. Subclasses should implement this method and set the return type properly. See `defaultCopy()`.

Specified by: `copy` in interface `Params`
Specified by: `copy` in class `Model<ALSModel>`
Parameters: `extra` - (undocumented)
Returns: (undocumented)

#### write

`public MLWriter write()`

Description copied from interface: `MLWritable`. Returns an `MLWriter` instance for this ML instance.

Specified by: `write` in interface `MLWritable`
Returns: (undocumented)

#### toString

`public String toString()`

Specified by: `toString` in interface `Identifiable`
Overrides: `toString` in class `Object`

#### recommendForAllUsers

`public Dataset<Row> recommendForAllUsers(int numItems)`

Returns top `numItems` items recommended for each user, for all users.

Parameters: `numItems` - max number of recommendations for each user
Returns: a DataFrame of (userCol: Int, recommendations), where recommendations are stored as an array of (itemCol: Int, rating: Float) Rows.
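A sketch of flattening the nested `recommendations` column returned by `recommendForAllUsers`; `model` is assumed from the fitting sketch above:

```scala
import org.apache.spark.sql.functions.{col, explode}

// Top 3 items per user; "recommendations" is an array of
// (item, rating) structs named after itemCol and "rating".
val userRecs = model.recommendForAllUsers(3)
userRecs
  .select(col("user"), explode(col("recommendations")).as("rec"))
  .select(col("user"), col("rec.item"), col("rec.rating"))
  .show()
```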
#### recommendForUserSubset

`public Dataset<Row> recommendForUserSubset(Dataset<?> dataset, int numItems)`

Returns top `numItems` items recommended for each user id in the input data set. Note that if there are duplicate ids in the input dataset, only one set of recommendations per unique id will be returned.

Parameters: `dataset` - a Dataset containing a column of user ids; the column name must match `userCol`. `numItems` - max number of recommendations for each user.
Returns: a DataFrame of (userCol: Int, recommendations), where recommendations are stored as an array of (itemCol: Int, rating: Float) Rows.

#### recommendForAllItems

`public Dataset<Row> recommendForAllItems(int numUsers)`

Returns top `numUsers` users recommended for each item, for all items.

Parameters: `numUsers` - max number of recommendations for each item
Returns: a DataFrame of (itemCol: Int, recommendations), where recommendations are stored as an array of (userCol: Int, rating: Float) Rows.

#### recommendForItemSubset

`public Dataset<Row> recommendForItemSubset(Dataset<?> dataset, int numUsers)`

Returns top `numUsers` users recommended for each item id in the input data set. Note that if there are duplicate ids in the input dataset, only one set of recommendations per unique id will be returned.

Parameters: `dataset` - a Dataset containing a column of item ids; the column name must match `itemCol`. `numUsers` - max number of recommendations for each item.
Returns: a DataFrame of (itemCol: Int, recommendations), where recommendations are stored as an array of (userCol: Int, rating: Float) Rows.
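Finally, a sketch of `recommendForUserSubset`, restricting recommendations to selected user ids; it assumes `spark.implicits._` from the fitting sketch is in scope:

```scala
// Recommend 2 items each for users 0 and 2 only. The id column name must
// match userCol; duplicate ids produce one row per unique id.
val users = Seq(0, 2).toDF("user")
val subsetRecs = model.recommendForUserSubset(users, 2)
subsetRecs.show(truncate = false)
```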