
I don't know whether you need a generic solution, but in your particular case you can write something like this:

import org.apache.spark.sql.functions.{col, explode}

spark.read.json(newDF.as[String])            // parse the JSON string column
    .withColumn("RPM", explode(col("RPM")))  // one row per element of the RPM array
    .withColumn("E", col("RPM.E"))           // lift the nested fields to top level
    .withColumn("V", col("RPM.V"))
    .drop("RPM")                             // the struct column is no longer needed
    .show()
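To see what the explode-and-extract step does to a single record, here is a minimal plain-Python sketch of the same flattening. The sample record is hypothetical; it assumes `RPM` is an array of objects with `E` and `V` keys, which is the shape the Spark snippet above expects.

```python
import json

# Hypothetical sample: RPM is an array of {E, V} objects.
record = '{"RPM": [{"E": 1, "V": 100}, {"E": 2, "V": 200}]}'

# explode(col("RPM")) yields one row per array element;
# col("RPM.E") / col("RPM.V") then pull out the nested fields.
rows = [{"E": item["E"], "V": item["V"]}
        for item in json.loads(record)["RPM"]]

print(rows)  # [{'E': 1, 'V': 100}, {'E': 2, 'V': 200}]
```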
