
The following method generates the valid and invalid DataFrames using the row_number function provided by Spark SQL. I don't have access to Cassandra, so I am using a simple DataFrame here.

import sqlContext.implicits._

// Sample data standing in for the Cassandra table
val df = sc.parallelize(Seq("a" -> 1, "b" -> 2, "c" -> 3, "d" -> 4, "a" -> 5, "a" -> 6, "c" -> 7, "c" -> 8)).toDF("c1", "c2")

df.registerTempTable("temp_table")

// Assign a row number within each c1 partition, ordered by c2
val masterdf = sqlContext.sql("SELECT c1, c2, ROW_NUMBER() OVER(PARTITION BY c1 ORDER BY c2) AS row_num FROM temp_table")

// "Valid" rows: the first row in each partition
masterdf.filter("row_num = 1").show()
+---+---+-------+
| c1| c2|row_num|
+---+---+-------+
|  a|  1|      1|
|  b|  2|      1|
|  c|  3|      1|
|  d|  4|      1|
+---+---+-------+


masterdf.filter("row_num > 1").show()
+---+---+-------+
| c1| c2|row_num|
+---+---+-------+
|  a|  5|      2|
|  a|  6|      3|
|  c|  7|      2|
|  c|  8|      3|
+---+---+-------+
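
If you prefer to avoid SQL strings, the same split can be expressed with the DataFrame Window API. This is a minimal sketch assuming Spark 1.6+, where org.apache.spark.sql.functions.row_number and org.apache.spark.sql.expressions.Window are available; the DataFrame names are illustrative:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.row_number

// Same partitioning and ordering as the SQL version
val w = Window.partitionBy("c1").orderBy("c2")

val numbered = df.withColumn("row_num", row_number().over(w))

val validDf   = numbered.filter("row_num = 1")  // first occurrence per key
val invalidDf = numbered.filter("row_num > 1")  // duplicates per key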
