score:-4

I am sharing the sample code from Spark ML.

import org.apache.spark.ml.clustering.LDA
import org.apache.spark.mllib.linalg.{Vectors, VectorUDT}
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.types.{StructField, StructType}

val FEATURES_COL = "features"

// Loads data: each non-empty line of the input is a space-separated vector of doubles
val rowRDD = sc.textFile(input).filter(_.nonEmpty)
  .map(_.split(" ").map(_.toDouble)).map(Vectors.dense).map(Row(_))
val schema = StructType(Array(StructField(FEATURES_COL, new VectorUDT, false)))
val dataset = sqlContext.createDataFrame(rowRDD, schema)

// Trains an LDA model
val lda = new LDA()
  .setK(10)
  .setMaxIter(10)
  .setFeaturesCol(FEATURES_COL)
val model = lda.fit(dataset)
val transformed = model.transform(dataset)

// Evaluates the model on the training data
val ll = model.logLikelihood(dataset)
val lp = model.logPerplexity(dataset)

// Describes the top 3 terms per topic
val topics = model.describeTopics(3)

// Shows the result
topics.show(false)
transformed.show(false)
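
For reference, transform appends a per-document topic-mixture vector to the DataFrame. Assuming the default output column name ("topicDistribution", configurable via setTopicDistributionCol), you can inspect it directly:

// Each row gains a vector giving the document's mixture over the k topics
transformed.select(FEATURES_COL, "topicDistribution").show(false)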

You can find the complete code in the official Spark ML examples.

score:3

To get a DistributedLDAModel instead of a LocalLDAModel, you need to use the Expectation-Maximization (EM) optimizer instead of the default Online Variational Bayes (online) one.

Concretely, call setOptimizer("em") on your LDA builder to get a distributed model:

val lda = new LDA().setOptimizer("em")
