score:0
Accepted answer
You can use the spark-xml library from Databricks to write a Spark DataFrame out as an XML file, something like the below:

    val selectedData = df.select("author", "_id")
    selectedData.write
      .format("com.databricks.spark.xml")
      .option("rootTag", "books")
      .option("rowTag", "book")
      .save("newbooks.xml")

Note that the option names are case-sensitive: rootTag names the root element of the output file, and rowTag names the element that wraps each row. You will also need the library on your classpath; the sbt coordinates are:

    "com.databricks" %% "spark-xml" % "0.4.1"
Source: stackoverflow.com