
One approach is to `cast()` the column to `FloatType`, which converts all non-numeric values into null:

// csv file content:
// id,value
// 1,50
// 2,null
// 3,60.5
// 4,a

val df = spark.read.
  option("header", true).
  csv("/path/to/csvfile")

import org.apache.spark.sql.types._
import spark.implicits._  // enables the $"colName" syntax

val df2 = df.withColumn("val_float", $"value".cast(FloatType))
// +---+-----+---------+
// | id|value|val_float|
// +---+-----+---------+
// |  1|   50|     50.0|
// |  2| null|     null|
// |  3| 60.5|     60.5|
// |  4|    a|     null|
// +---+-----+---------+

You can re-cast the `FloatType` column back to `StringType`, if necessary.
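A minimal self-contained sketch of the whole round trip, assuming a local SparkSession; the in-memory `Seq` stands in for the CSV file from the example:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

val spark = SparkSession.builder.appName("cast-demo").master("local[*]").getOrCreate()
import spark.implicits._

// Recreate the example data in memory instead of reading from disk
val df = Seq(("1", "50"), ("2", null), ("3", "60.5"), ("4", "a")).toDF("id", "value")

// Non-numeric strings such as "a" become null after the cast
val df2 = df.withColumn("val_float", $"value".cast(FloatType))

// Cast back to StringType if downstream code expects strings;
// rows that failed the float cast remain null after the round trip
val df3 = df2.withColumn("val_str", $"val_float".cast(StringType))
df3.show()
```

Note that casting back to `StringType` normalizes the formatting: `"50"` comes back as `"50.0"`, since the value passed through a float representation.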
