
I don't have much knowledge of Spark Streaming, but I believe it runs as iterative micro-batches, and in Spark batch execution each action writes to a single sink/output. So you can't store the result in different tables with a single execution.

Now,

  1. If you write it to one table, readers can simply select only the columns they require. I mean: do you really need to store it in different places?
  2. You can write it twice, filtering out the fields that are not required (see the sketch after this list)
  • both write actions will recompute the full dataset and then drop the unneeded columns
  • if computing the full dataset is expensive, you can cache it before the filter+write
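
Here is a minimal sketch of option 2 in Scala. The source path, column names (`a`, `b`, `c`), and table names are hypothetical placeholders; the point is only that `cache()` lets the two write actions share one computation of the full dataset instead of recomputing it twice.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions.col

object DoubleWriteSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("double-write-sketch")
      .getOrCreate()

    // Hypothetical expensive computation producing the full dataset with columns a, b, c
    val fullDf: DataFrame = spark.read.parquet("/path/to/source")
      .withColumn("c", col("a") + col("b")) // placeholder for the real transformation

    // Cache so the two write actions below reuse the computed dataset
    fullDf.cache()

    // First sink: keep only the columns this table needs
    fullDf.select("a", "c")
      .write.mode("overwrite")
      .parquet("/path/to/table_one")

    // Second sink: a different projection of the same dataset
    fullDf.select("b", "c")
      .write.mode("overwrite")
      .parquet("/path/to/table_two")

    fullDf.unpersist()
    spark.stop()
  }
}
```

Without the `cache()`, each `write` is an independent action and would re-run the full transformation; with it, the second write reads the cached partitions.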
