Accepted answer
The issue was that Hive support was not enabled in the default SparkSession provided by the DataFrameSuiteBase class (from Holden Karau's spark-testing-base package) that I was extending.
To solve it, override DataFrameSuiteBase's beforeAll() method (which runs once before all tests in the suite) and add enableHiveSupport() to the SparkSession builder chain:
override def beforeAll(): Unit = {
  SparkSessionProvider._sparkSession = SparkSession.builder()
    .master("local") // add whatever other configurations are needed...
    .enableHiveSupport()
    .getOrCreate()
}
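For context, a minimal suite using this override might look like the sketch below. The class and test names are illustrative, and spark-testing-base plus ScalaTest are assumed to be on the classpath:

```scala
import com.holdenkarau.spark.testing.DataFrameSuiteBase
import org.scalatest.funsuite.AnyFunSuite

// Sketch only: with Hive support enabled in beforeAll() (as above),
// catalog lookups resolve against the Hive-backed catalog instead of
// the default in-memory one.
class CatalogSpec extends AnyFunSuite with DataFrameSuiteBase {
  test("tableExists is false for a missing table in an existing database") {
    assert(!spark.catalog.tableExists("default", "no_such_table"))
  }
}
```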
You could try calling the underlying JVM catalog directly (PySpark):
spark.catalog._jcatalog.tableExists("schema_name.table_name")
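Alternatively, to get a plain false instead of NoSuchDatabaseException when the database itself is missing, you can guard the lookup with databaseExists. A minimal sketch in Scala, assuming an active SparkSession named `spark` (the helper name is made up):

```scala
import org.apache.spark.sql.SparkSession

// Sketch: short-circuit on databaseExists so a missing database
// yields false rather than throwing NoSuchDatabaseException.
def tableExistsSafe(spark: SparkSession, db: String, table: String): Boolean =
  spark.catalog.databaseExists(db) && spark.catalog.tableExists(db, table)
```

With this guard, tableExistsSafe(spark, "newDb", "newTable") returns false when "newDb" does not exist, instead of throwing.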
Source: stackoverflow.com
Related Queries
- Spark Streaming from Kafka topic throws offset out of range with no option to restart the stream
- Spark RDD method "saveAsTextFile" throwing exception Even after deleting the output directory. org.apache.hadoop.mapred.FileAlreadyExistsException
- How to emulate the array_join() method in spark 2.2
- How to modify vertex data when calling the mapTriplets method in Graphx of Spark
- Overwriting the parquet file throws exception in spark
- spark - method Error: The argument types of an anonymous function must be fully known
- How to use the agg method of Spark KeyValueGroupedDataset?
- Why isn't Spark textFile method reading the whole text file?
- Unexpected column values after the IN condition in where() method of dataframe in spark
- The method spark.catalog.tableExists("newDb.newTable") throws NoSuchDatabaseException instead of returning false ("newDb" does not exist)
- Maximum search method in several columns and unification of the result within a single column with Spark
- Spark Accumulator throws a class cast exception when trying to count the number of records in the dataset
- spark sql throws non-intuitve exception for when method
- Does Spark data set method serialize the computation itself?
- Spark rdd uses the collect method to generate an OutOfMemoryError
- java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries. spark Eclipse on windows 7
- Is there a way to take the first 1000 rows of a Spark Dataframe?
- When to use the equals sign in a Scala method declaration?
- Explain the aggregate functionality in Spark (with Python and Scala)
- What is the rule for parenthesis in Scala method invocation?
- What's the difference between join and cogroup in Apache Spark
- Joining Spark dataframes on the key
- How to resolve the AnalysisException: resolved attribute(s) in Spark
- Why won't the Scala compiler apply tail call optimization unless a method is final?
- What are the Spark transformations that causes a Shuffle?
- Scala macros and the JVM's method size limit
- Why can't a class extend traits with method of the same signature?
- How to add a Spark Dataframe to the bottom of another dataframe?
- Running a method after the constructor of any derived class
- Spark : Read file only if the path exists
More Queries from the Same Tag
- Migrate from MurmurHash to MurmurHash3
- Apache Spark - How does internal job scheduler in spark define what are users and what are pools
- save rdd of array of array to text file spark
- Is method semantically equivalent to function in Scala 3?
- How to disable java ortools CP solver logging?
- JavaFX controls not receiving mouse events when also using Shape3D
- Remove comma from parsed string before convert it to double
- Expect message on mocked actor when testing with TestKit
- Pattern matching on List[T] and Set[T] in Scala vs. Haskell: effects of type erasure
- Reuse json implicit readers in subclasses
- Run a custom transformation on string columns
- How to compare two dataframe and print columns that are different in scala
- How to decode missing json array as empty List with circe
- How to pass a tuple3 as an argument to function?
- Scala: recursive value listHuman needs type
- scala - how to bind a variable name in a multiple pattern matching clause
- How can I pass a type as a parameter in scala?
- What are the various patterns that I could handle a Future[Option[user]]?
- Object-private variables implementation
- Parse a file to AST with non-interactive scala.tools.nsc.Global
- Kotlin equivalent of Scala Traversable.collect
- Making sql request on columns containing dot
- In Scala Play and Slick, how to get request to finish before sending response
- How to use freeslick with oracle with play-framework scala?
- What is the proper way to structure Scala parser combinator code?
- create a Spark DataFrame from a nested array of struct element?
- Play2 and Scala, How should I configure my integration tests to run with proper DB
- Spark: Efficient way to get top K frequent values per key in (key, value) RDD?
- --= and ++= complexity on Scala ArrayBuffer
- How do I configure Maven to use 'rootdoc.txt' in scaladoc report?