RE: [SS] Why does ConsoleSink's addBatch convert input DataFrame to show it?
I actually asked the same thing a couple of weeks ago.
Apparently, the logical plan of a structured streaming query differs from a batch plan: it is kept fixed so that aggregations are computed correctly across micro-batches. If you perform most operations directly on the input DataFrame, Spark recomputes the plan as a batch plan and the results are no longer correct. Therefore, you must either collect the rows, or convert to an RDD and create a new DataFrame from that RDD.
It would be very useful, IMO, if we could "freeze" the plan for the input portion and work with it as if it were a new DataFrame (similar to converting to an RDD and then creating a new DataFrame from it, but without the overhead of the RDD round trip); however, this is not currently possible.
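For reference, the workaround described above (collect, then rebuild a batch DataFrame) is essentially what ConsoleSink's addBatch does. A minimal sketch of that pattern, assuming a helper name of my own choosing (detachFromStreamingPlan is hypothetical, not a Spark API):

```scala
import org.apache.spark.sql.DataFrame

// Sketch of the ConsoleSink-style workaround: collect the micro-batch
// to the driver, then rebuild a plain batch DataFrame from the collected
// rows, so subsequent batch operations (e.g. show) do not re-trigger the
// streaming plan. Note: collect() pulls the whole batch into driver memory.
def detachFromStreamingPlan(data: DataFrame): DataFrame = {
  val spark = data.sparkSession
  spark.createDataFrame(
    spark.sparkContext.parallelize(data.collect().toSeq),
    data.schema)
}
```

The collect/parallelize round trip is exactly the overhead the "freeze the plan" idea would avoid.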
From: Jacek Laskowski [via Apache Spark Developers List] [mailto:ml+[hidden email]]
Sent: Friday, July 07, 2017 11:44 AM
To: Mendelson, Assaf
Subject: [SS] Why does ConsoleSink's addBatch convert input DataFrame to show it?
Just noticed that the input DataFrame is collect'ed and then
parallelize'd simply to show it on the console. Why so many fairly
expensive operations just for show?
I'd appreciate some help understanding this code. Thanks.