score:2
We haven't used Spark 2.0 in production yet with Scala 2.11 and notebooks. The root cause of your error is a compatibility issue: according to the Apache Toree description on GitHub, the latest Scala version it supports is 2.10.4, while you have 2.11.8. Try downgrading to Scala 2.10 unless you have a production requirement that forces you to use 2.11.
Source: stackoverflow.com
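A minimal sketch of what that downgrade could look like in an sbt build, assuming an sbt-managed project and a Spark 2.0.x release that is still cross-published for Scala 2.10 (the project name and exact version numbers below are illustrative, not taken from the question):

    // build.sbt -- illustrative sketch: pin Scala to the 2.10 line for Toree
    name := "toree-compat-check"   // hypothetical project name
    version := "0.1.0"

    // Toree's GitHub description at the time listed Scala 2.10.4 as the latest
    // supported version, so stay on the 2.10.x line instead of 2.11.8
    scalaVersion := "2.10.6"

    // %% appends the Scala binary suffix, so these resolve the _2.10 artifacts
    // (Spark 2.0.x was still cross-built for Scala 2.10)
    libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.2" % "provided"
    libraryDependencies += "org.apache.spark" %% "spark-sql"  % "2.0.2" % "provided"

If the notebook picks up Spark from a local installation rather than from sbt, the same idea applies: point Toree at a Spark distribution built against Scala 2.10 rather than 2.11.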
Related Queries
- Apache Toree and Spark Scala Not Working in Jupyter
- Work with Jupyter on Windows and Apache Toree Kernel for Spark compatibility
- Google secret manager API and google storage API not working with Apache Spark
- running jupyter + Apache Toree 0.2.0 with spark 2.2 kernel generate error (Missing dependency 'object scala in compiler mirror')
- Find and replace not working - dataframe spark scala
- Configuring Apache Spark Logging with Scala and logback
- Debug not working with play framework activator, scala and eclipse
- Null values from a csv on Scala and Apache Spark
- Scala IDE and Apache Spark -- different scala library version found in the build path
- Scala type inference not working with generic case class and lambda
- Order By Timestamp is not working for Date time column in Scala Spark
- Cell width Jupyter notebook - Apache Toree - Scala
- Data preprocessing with apache spark and scala
- Scala PartialFunction with isDefinedAt and apply not working
- Scala 2.11 and jsr-223 not working
- Using Scala Pickling serialization In APACHE SPARK over KryoSerializer and JavaSerializer
- Apache Spark : When not to use mapPartition and foreachPartition?
- calculating average in Spark streaming not working : issue w/ updateStateByKey and instantiating class
- How to check whether multiple columns values of a row are not null and then add a true/false resulting column in Spark Scala
- sortByKey() function in Spark Scala not working properly
- Apache Spark Data Generator Function on Databricks Not working
- Spark "error: type mismatch" with scala 2.11 and not with 2.12
- Getting latest based on column condition in Spark Scala is not working
- I want to store each RDD into a database in Twitter streaming using Apache Spark but got a task not serializable error in Scala
- Scala Apache Spark and dynamic column list inside of DataFrame select method
- UDF is not working to get file name in spark scala
- Apache Spark in Scala not printing rdd values
- Spark - Scala Window-lead function and case statement result is not as expected
- REGEX_REPLACE is not working spark, hive and scala as expected
- Push Data to Nifi Flow using apache spark and scala
More Queries from the same tag
- Scala - Split array within array, extract certain information and apply to case class
- RDD to LabeledPoint conversion
- spray set content-type for XML
- Scala apply method
- Watch for project files also
- Importance of Akka Routers
- How to read rows from Columns in Scala
- Scala upper and lower type bound
- Pattern for redirecting to previous page after an action
- Spark Scala : Join two Dataframes by near position and time range
- How to define a function does not return or return void in scala
- playframework slick deprecation warning concerning driver and profile
- create cassandra table for scala nested case class
- Avoiding SAXParseException in Scala
- Is there a way to match everything but a certain type (or set of types) without using isInstanceOf?
- Scala parallel collection runtime puzzling
- Sealed trait and dynamic case objects
- scala: defining a trait and referencing the corresponding companion object
- libraryDependencies with multiple lines
- Selenium on Docker: Testing a Docker Image within the same `docker-compose` file
- Cannot filter a structure of Strings with spark
- DataFrame first function ignoreNulls doesn't work
- Akka, Camel and ActiveMQ: throttling consumers
- Custom directive in Akka Http
- Scala meaning of tilde
- How to apply javascript to html simulating a browser
- Explain Kinesis Shard Iterator - AWS Java SDK
- Getting an ActorRef when a WebSocket is closed in Play
- Stateful implementation of F-algebra
- include spark package in Sbt