MinHashLSHModel (Spark 3.5.5 JavaDoc)
Object
- org.apache.spark.ml.PipelineStage
  - org.apache.spark.ml.Transformer
    - org.apache.spark.ml.Model<MinHashLSHModel>
      - org.apache.spark.ml.feature.MinHashLSHModel
All Implemented Interfaces:
java.io.Serializable, org.apache.spark.internal.Logging, LSHParams, Params, HasInputCol, HasOutputCol, Identifiable, MLWritable
public class MinHashLSHModel
extends Model<MinHashLSHModel>
Model produced by MinHashLSH, where multiple hash functions are stored. Each hash function is picked from the following family of hash functions, where `a_i` and `b_i` are randomly chosen integers less than `prime`:

h_i(x) = ((x · a_i + b_i) mod prime)
This hash family is approximately min-wise independent according to the reference.
Reference: Tom Bohman, Colin Cooper, and Alan Frieze. "Min-wise independent linear permutations." Electronic Journal of Combinatorics 7 (2000): R26.
param: randCoefficients Pairs of random coefficients. Each pair is used by one hash function.
See Also:
Serialized Form
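As an illustration of the hash family above, the following pure-Java sketch computes a min-hash signature for a sparse binary vector (represented by its non-zero indices) from a set of `(a_i, b_i)` coefficient pairs. The class name, prime, and fixed coefficients here are hypothetical choices for the example; in the real model the pairs are drawn randomly and stored in `randCoefficients`.

```java
import java.util.Arrays;

public class MinHashSketch {
    // Illustrative prime; the value the real model uses is an internal detail.
    static final long PRIME = 2038074743L;

    // One hash function from the documented family: h_i(x) = ((x * a_i + b_i) mod prime)
    static long hash(long x, long a, long b) {
        return (x * a + b) % PRIME;
    }

    // The min-hash signature takes, for each (a_i, b_i) pair, the minimum
    // hash value over the non-zero indices of a sparse binary vector.
    static long[] signature(int[] nonZeroIndices, long[][] coeffs) {
        long[] sig = new long[coeffs.length];
        for (int i = 0; i < coeffs.length; i++) {
            long min = Long.MAX_VALUE;
            for (int idx : nonZeroIndices) {
                min = Math.min(min, hash(idx, coeffs[i][0], coeffs[i][1]));
            }
            sig[i] = min;
        }
        return sig;
    }

    public static void main(String[] args) {
        // In practice the (a_i, b_i) pairs are random; fixed here for clarity.
        long[][] coeffs = {{3, 7}, {5, 11}, {13, 17}};
        System.out.println(Arrays.toString(signature(new int[]{0, 2, 5}, coeffs)));
        // Prints [7, 11, 17]
    }
}
```

Similar inputs share many non-zero indices, so their per-function minima tend to coincide, which is what makes the signature usable for approximate Jaccard-similarity search.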
Nested Class Summary
* ### Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging `org.apache.spark.internal.Logging.SparkShellLoggingFilter`
Method Summary
| Modifier and Type | Method and Description |
| --- | --- |
| `Dataset<?>` | `approxNearestNeighbors(Dataset<?> dataset, Vector key, int numNearestNeighbors)` — Overloaded method for `approxNearestNeighbors`. |
| `Dataset<?>` | `approxNearestNeighbors(Dataset<?> dataset, Vector key, int numNearestNeighbors, String distCol)` — Given a large dataset and an item, approximately find at most k items which have the closest distance to the item. |
| `Dataset<?>` | `approxSimilarityJoin(Dataset<?> datasetA, Dataset<?> datasetB, double threshold)` — Overloaded method for `approxSimilarityJoin`. |
| `Dataset<?>` | `approxSimilarityJoin(Dataset<?> datasetA, Dataset<?> datasetB, double threshold, String distCol)` — Join two datasets to approximately find all pairs of rows whose distance is smaller than the threshold. |
| `MinHashLSHModel` | `copy(ParamMap extra)` — Creates a copy of this instance with the same UID and some extra params. |
| `Param<String>` | `inputCol()` — Param for input column name. |
| `static MinHashLSHModel` | `load(String path)` |
| `IntParam` | `numHashTables()` — Param for the number of hash tables used in LSH OR-amplification. |
| `Param<String>` | `outputCol()` — Param for output column name. |
| `static MLReader<MinHashLSHModel>` | `read()` |
| `MinHashLSHModel` | `setInputCol(String value)` |
| `MinHashLSHModel` | `setOutputCol(String value)` |
| `String` | `toString()` |
| `Dataset<Row>` | `transform(Dataset<?> dataset)` — Transforms the input dataset. |
| `StructType` | `transformSchema(StructType schema)` — Check transform validity and derive the output schema from the input schema. |
| `String` | `uid()` — An immutable unique ID for the object and its derivatives. |
| `MLWriter` | `write()` — Returns an `MLWriter` instance for this ML instance. |
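The `numHashTables` param listed above drives LSH OR-amplification. A minimal sketch of its effect, assuming a single hash table matches two items with probability p (for min-hash, p equals their Jaccard similarity); the class and method names here are hypothetical:

```java
public class OrAmplification {
    // With OR-amplification, two items become candidates if their signatures
    // collide in at least one of numHashTables tables, so the candidate
    // probability is 1 - (1 - p)^numHashTables.
    static double candidateProbability(double p, int numHashTables) {
        return 1.0 - Math.pow(1.0 - p, numHashTables);
    }

    public static void main(String[] args) {
        // More tables -> fewer false negatives, at extra computational cost.
        for (int k : new int[]{1, 3, 5}) {
            System.out.printf("p=0.5, numHashTables=%d -> %.5f%n",
                    k, candidateProbability(0.5, k));
        }
        // Prints 0.50000, 0.87500, 0.96875
    }
}
```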
* ### Methods inherited from class org.apache.spark.ml.Model
`hasParent, parent, setParent`
* ### Methods inherited from class org.apache.spark.ml.Transformer
`transform, transform, transform`
* ### Methods inherited from class org.apache.spark.ml.PipelineStage
`params`
* ### Methods inherited from class Object
`equals, getClass, hashCode, notify, notifyAll, wait, wait, wait`
* ### Methods inherited from interface org.apache.spark.ml.feature.LSHParams
`getNumHashTables, validateAndTransformSchema`
* ### Methods inherited from interface org.apache.spark.ml.param.shared.HasInputCol
`getInputCol`
* ### Methods inherited from interface org.apache.spark.ml.param.shared.HasOutputCol
`getOutputCol`
* ### Methods inherited from interface org.apache.spark.ml.param.Params
`clear, copyValues, defaultCopy, defaultParamMap, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, onParamChange, paramMap, params, set, set, set, setDefault, setDefault, shouldOwn`
* ### Methods inherited from interface org.apache.spark.ml.util.MLWritable
`save`
* ### Methods inherited from interface org.apache.spark.internal.Logging
`$init$, initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, initLock, isTraceEnabled, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning, org$apache$spark$internal$Logging$$log__$eq, org$apache$spark$internal$Logging$$log_, uninitialize`
Method Detail
* #### read
`public static MLReader<MinHashLSHModel> read()`
* #### load
`public static MinHashLSHModel load(String path)`
* #### uid
`public String uid()`
An immutable unique ID for the object and its derivatives.
Returns: (undocumented)
* #### setInputCol
`public MinHashLSHModel setInputCol(String value)`
* #### setOutputCol
`public MinHashLSHModel setOutputCol(String value)`
* #### copy
`public MinHashLSHModel copy(ParamMap extra)`
Description copied from interface: `Params`
Creates a copy of this instance with the same UID and some extra params. Subclasses should implement this method and set the return type properly. See `defaultCopy()`.
Specified by: `copy` in interface `Params`
Specified by: `copy` in class `Model<MinHashLSHModel>`
Parameters: `extra` - (undocumented)
Returns: (undocumented)
* #### write
`public MLWriter write()`
Description copied from interface: `MLWritable`
Returns an `MLWriter` instance for this ML instance.
Returns: (undocumented)
* #### toString
`public String toString()`
Specified by: `toString` in interface `Identifiable`
Overrides: `toString` in class `Object`
* #### approxNearestNeighbors
`public Dataset<?> approxNearestNeighbors(Dataset<?> dataset, Vector key, int numNearestNeighbors, String distCol)`
Given a large dataset and an item, approximately find at most k items which have the closest distance to the item. If the `outputCol` is missing, the method will transform the data; if the `outputCol` exists, it will use it. This allows caching of the transformed data when necessary.
Parameters:
`dataset` - The dataset to search for nearest neighbors of the key.
`key` - Feature vector representing the item to search for.
`numNearestNeighbors` - The maximum number of nearest neighbors.
`distCol` - Output column for storing the distance between each result row and the key.
Returns: A dataset containing at most k items closest to the key. A column "distCol" is added to show the distance between each row and the key.
Note: This method is experimental and will likely change behavior in the next release.
* #### approxNearestNeighbors
`public Dataset<?> approxNearestNeighbors(Dataset<?> dataset, Vector key, int numNearestNeighbors)`
Overloaded method for `approxNearestNeighbors`. Uses "distCol" as the default distance column.
Parameters: `dataset` - (undocumented) `key` - (undocumented) `numNearestNeighbors` - (undocumented)
Returns: (undocumented)
* #### approxSimilarityJoin
`public Dataset<?> approxSimilarityJoin(Dataset<?> datasetA, Dataset<?> datasetB, double threshold, String distCol)`
Join two datasets to approximately find all pairs of rows whose distance is smaller than the threshold. If the `outputCol` is missing, the method will transform the data; if the `outputCol` exists, it will use it. This allows caching of the transformed data when necessary.
Parameters:
`datasetA` - One of the datasets to join.
`datasetB` - Another dataset to join.
`threshold` - The threshold for the distance of row pairs.
`distCol` - Output column for storing the distance between each pair of rows.
Returns: A joined dataset containing pairs of rows. The original rows are in columns "datasetA" and "datasetB", and a column "distCol" is added to show the distance between each pair.
* #### approxSimilarityJoin
`public Dataset<?> approxSimilarityJoin(Dataset<?> datasetA, Dataset<?> datasetB, double threshold)`
Overloaded method for `approxSimilarityJoin`. Uses "distCol" as the default distance column.
Parameters: `datasetA` - (undocumented) `datasetB` - (undocumented) `threshold` - (undocumented)
Returns: (undocumented)
* #### inputCol
`public final Param<String> inputCol()`
Description copied from interface: `HasInputCol`
Param for input column name.
Specified by: `inputCol` in interface `HasInputCol`
Returns: (undocumented)
* #### numHashTables
`public final IntParam numHashTables()`
Description copied from interface: `LSHParams`
Param for the number of hash tables used in LSH OR-amplification. LSH OR-amplification can be used to reduce the false negative rate. Higher values for this param lead to a reduced false negative rate, at the expense of added computational complexity.
Specified by: `numHashTables` in interface `LSHParams`
Returns: (undocumented)
* #### outputCol
`public final Param<String> outputCol()`
Param for output column name.
Specified by: `outputCol` in interface `HasOutputCol`
Returns: (undocumented)
* #### transform
`public Dataset<Row> transform(Dataset<?> dataset)`
Transforms the input dataset.
Specified by: `transform` in class `Transformer`
Parameters: `dataset` - (undocumented)
Returns: (undocumented)
* #### transformSchema
`public StructType transformSchema(StructType schema)`
Check transform validity and derive the output schema from the input schema. We check validity for interactions between parameters during `transformSchema` and raise an exception if any parameter value is invalid. Parameter value checks which do not depend on other parameters are handled by `Param.validate()`. A typical implementation should first verify schema changes and parameter validity, including complex parameter interaction checks.
Specified by: `transformSchema` in class `PipelineStage`
Parameters: `schema` - (undocumented)
Returns: (undocumented)
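For min-hash LSH, the distance that `approxNearestNeighbors` and `approxSimilarityJoin` report in `distCol` is the Jaccard distance over the sets of non-zero indices of the input vectors. A self-contained sketch of that metric; the class name and the empty-set convention are assumptions of this example (the real model requires vectors with at least one non-zero entry):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class JaccardDistance {
    // Jaccard distance between two sparse binary vectors, represented by
    // the sets of their non-zero indices: 1 - |A ∩ B| / |A ∪ B|.
    static double distance(Set<Integer> a, Set<Integer> b) {
        Set<Integer> union = new HashSet<>(a);
        union.addAll(b);
        if (union.isEmpty()) return 0.0; // convention for two empty sets (an assumption)
        Set<Integer> inter = new HashSet<>(a);
        inter.retainAll(b);
        return 1.0 - (double) inter.size() / union.size();
    }

    public static void main(String[] args) {
        Set<Integer> a = new HashSet<>(Arrays.asList(0, 2, 5));
        Set<Integer> b = new HashSet<>(Arrays.asList(2, 5, 7));
        // Intersection {2, 5}, union {0, 2, 5, 7} -> distance 1 - 2/4
        System.out.println(distance(a, b));
        // Prints 0.5
    }
}
```

With a `threshold` of, say, 0.6 in `approxSimilarityJoin`, this pair (distance 0.5) would be a candidate the join keeps, subject to the signatures actually colliding in at least one hash table.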