score:2
Accepted answer
It's working for me now. Just for the record, this builds on @martinsenne's answer.
What I did is as below:
- cleared all compiled files under the "project" folder
- switched the Scala version to 2.10.4 (previously using 2.11.4)
- changed spark-sql to: "org.apache.spark" %% "spark-sql" % "1.4.1" % "provided"
- changed mllib to: "org.apache.spark" %% "spark-mllib" % "1.4.1" % "provided"
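Put together, the working build definition looked roughly like this (a sketch; the project name, version, and organization are placeholders, not from the original post):

```scala
// build.sbt -- sketch of the configuration described above
name := "hello"                 // placeholder project name

version := "1.0"

scalaVersion := "2.10.4"        // must match the Scala version Spark 1.4.1 was built for

libraryDependencies ++= Seq(
  // "provided" keeps Spark out of the packaged jar;
  // the cluster supplies these classes at runtime via spark-submit
  "org.apache.spark" %% "spark-sql"   % "1.4.1" % "provided",
  "org.apache.spark" %% "spark-mllib" % "1.4.1" % "provided"
)
```

The %% operator appends the Scala binary version (here _2.10) to the artifact name, which is why the Scala version and the Spark artifacts have to agree.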
Note:
- I had already started a Spark cluster, and I use "sh spark-submit /path_to_folder/hello/target/scala-2.10/hello_2.10-1.0.jar" to submit the jar to the Spark master. Running with "sbt run" will fail.
- When changing from Scala 2.11 to Scala 2.10, remember that the jar file path and name also change, from "scala-2.11/hello_2.11-1.0.jar" to "scala-2.10/hello_2.10-1.0.jar". When I re-packaged everything, I forgot to change the jar name in the submit command, so I packaged "hello_2.10-1.0.jar" but was still submitting "hello_2.11-1.0.jar", which caused me extra problems...
- I tried both "val sqlContext = new org.apache.spark.sql.SQLContext(sc)" and "val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)"; both work with the createDataFrame() method.
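For reference, the two variants look like this in a driver program (a sketch only, assuming an already-created SparkContext sc; the Person case class and its fields are illustrative, not from the original post):

```scala
import org.apache.spark.sql.SQLContext

// Case classes used with createDataFrame should be defined at the top level,
// outside the method that uses them, so Spark's reflection can see them.
case class Person(name: String, age: Int)

// sc: an existing org.apache.spark.SparkContext
val sqlContext = new SQLContext(sc)
// alternatively, a Hive-enabled context works the same way:
// val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)

val people = sc.parallelize(Seq(Person("alice", 30), Person("bob", 25)))

// createDataFrame infers the schema from the case class via reflection
val df = sqlContext.createDataFrame(people)
df.show()
```

Since this runs against a live cluster through spark-submit, it can't be exercised with "sbt run" alone, as noted above.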
Source: stackoverflow.com