Oops! It turns out I needed to set an environment variable that spark-shell was setting automatically.
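The point above is that spark-shell exports configuration that a standalone program must set for itself. The answer does not name the variable, so the sketch below is only illustrative: it reads an environment variable with an explicit fallback before building the session (the `SPARK_HOME` name and the `/opt/spark` path are assumptions, not taken from the answer):

```scala
// Hedged illustration: resolve an environment variable that spark-shell
// would normally export, falling back to an assumed default.
object EnvCheck {
  def main(args: Array[String]): Unit = {
    // sys.env is Scala's read-only view of the process environment.
    val sparkHome = sys.env.getOrElse("SPARK_HOME", "/opt/spark")
    println(s"SPARK_HOME resolved to: $sparkHome")
  }
}
```

The same `sys.env.getOrElse` pattern works for any variable a surrounding shell script was silently providing.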
- Cannot connect to Spark cluster from IntelliJ but spark-submit can
- Cannot connect to Spark cluster programmatically but spark-shell can?
- How can I know programmatically if my spark program is running in local or cluster mode?
- Curl can connect to an Iron server on localhost, but Scala intermittently cannot
- Can connect to MongoDB in Heroku through Mongo shell but not application
- Cassandra cluster is running but not able to connect from Spark App
- How can I connect to a PostgreSQL database from Apache Spark using Scala?
- Scala Spark connect to remote cluster
- How do I connect to a Kerberos-secured Kafka cluster with Spark Structured Streaming?
- Cannot connect to Cassandra from Spark (Contact points contain multiple data centers)
- Programmatically Rename All But One Column Spark Scala
- Spark - jdbc write fails in Yarn cluster mode but works in spark-shell
- Programmatically reduce log in a spark shell
- Spark SQL "No input paths specified in job", but can printSchema
- Can I use Jupyter lab to interact with databricks spark cluster using Scala?
- Cannot connect to Hive metastore from Spark application
- Spark shell started with assembly jar cannot resolve decline's cats dependency
- Spark Shell Import Fine, But Throws Error When Referencing Classes
- Why can a companion object access a private val in its companion class when compiling, but cannot do that when interpreting?
- Spark join fails with exception "ClassNotFoundException: org.apache.spark.rdd.RDD$" but runs when pasting into spark-shell of Hadoop Cluster
- Why does this Spark code work in local mode but not in cluster mode?
- Spark submit runs successfully but when submitted through oozie it fails to connect to hive
- Cannot load class but import works fine
- Cannot connect locally to hdfs kerberized cluster using IntelliJ
- How can I run Spark job programmatically
- How can I connect to 2 kafka topics at a time, but process only 1 at a time
- Cannot run spark jobs locally using sbt, but works in IntelliJ
- Cannot connect with Casbah but it works with ReactiveMongo
- Cassandra Cluster can not see nodes through Spark
- Cannot connect to remote MongoDB from EMR cluster with spark-shell
More queries from the same tag
- I'm trying to store the names of Spark Scala columns in an array. Am getting a weird output. [Ljava.lang.String;@197d5a87
- Issue with pattern matching in scala: "error: constructor cannot be instantiated to expected type"
- Extract elements from XML and assign to a variable
- Not able to run scala program even after compiling it
- Best way to handle data streams for non-sealed trait hierarchies
- Why does the Spark application code need to be an object rather than a class?
- Why does new fail?
- Scala: Is it possible to override val's in the sub-class's constructor?
- How do I throttle messages in Akka (2.1.2)?
- SparkSQL scala api explode with column names
- Why does Gson().toJson() of object with Enumeration throw StackOverflowError?
- How to provide default value for implicit parameters at class level
- Scala: Implicitly convert a list
- ProvisionException: Unable to provision
- Type Erasure in Scala
- Scala adding methods with complex types to final classes
- Elasticsearch high level rest client more than 1 field search
- Resizing JFrame when child resizes: how to keep up?
- Specs2 - Unit specification style should not be used in concurrent environments
- How to create a List of values aggregation after a join on DataFrame elements?
- ScalaMock: Can't handle methods with more than 22 parameters (yet)
- Easy Scala Serialization?
- Date conversion in Scala
- Scala Popup Menu
- Create dataframe from rdd objectfile
- How to pass SQL array values from Java controller to Scala template
- How to Compare columns of two tables using Spark?
- Scala simple histogram
- Example of using Akka 2.3.8 in JRuby - specifically using Java::AkkaActor::Props
- What's the conceptual purpose of the Tuple2?