ColumnarBatch to InternalRow cast exception with codegen enabled

5 messages
ColumnarBatch to InternalRow cast exception with codegen enabled

Nasrulla Khan Haris

Hi Spark developers,

I have a new BaseRelation that initializes a ParquetFileFormat object. When reading the data I encounter the cast exception below; however, when I disable whole-stage codegen with the config `spark.sql.codegen.wholeStage=false`, the exception does not occur.
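For reference, the flag can be toggled per session to confirm the exception is specific to the whole-stage codegen path. A minimal sketch, assuming `spark` is the usual `SparkSession` handle:

```scala
// Disable whole-stage codegen for the current session only,
// to check whether the failure is WSCG-specific.
spark.conf.set("spark.sql.codegen.wholeStage", "false")

// Equivalent at submit time:
//   spark-submit --conf spark.sql.codegen.wholeStage=false ...
```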

 

 

20/06/11 17:35:39 INFO FileScanRDD: Reading File path: file:///D:/ jvm/src/test/scala/resources/pems_sorted/station=402260/part-r-00245-ddaee723-f3f6-4f25-a34b-3312172aa6d7.snappy.parquet, range: 0-50936, partition values: [402260]

20/06/11 17:35:39 INFO CodecPool: Got brand-new decompressor [.snappy]

20/06/11 17:35:40 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)

java.lang.ClassCastException: org.apache.spark.sql.vectorized.ColumnarBatch cannot be cast to org.apache.spark.sql.catalyst.InternalRow

                at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_0$(Unknown Source)

                at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)

                at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)

                at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)

                at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)

                at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)

                at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)

                at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)

                at org.apache.spark.scheduler.Task.run(Task.scala:123)

                at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)

                at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)

                at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)

                at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)

                at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

                at java.lang.Thread.run(Thread.java:748)

 

 

Appreciate your inputs.

 

Thanks,

NKH


Re: ColumnarBatch to InternalRow cast exception with codegen enabled

Kris Mo
Hi Nasrulla,

Not sure what your new code is doing, but the symptom suggests you're creating a new data source that wraps the built-in Parquet data source?

The problem is that whole-stage codegen generated code for row-based input, while the actual input is columnar.
In other words, in your setup the vectorized Parquet reader is enabled (it produces ColumnarBatch output), and you probably wrote a new operator that didn't interact properly with the columnar support, so WSCG generated row-based code instead of columnar code.
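The mismatch can be modeled without Spark at all. A minimal sketch, in which every class name is an illustrative stand-in rather than the real Spark type: Spark's scan RDD is declared as `RDD[InternalRow]` but may actually carry `ColumnarBatch` values, and the erased element type hides the mismatch until row-based generated code performs the cast.

```scala
// Stand-ins for Spark types (illustrative only, not the real API).
final case class InternalRowLike(values: Seq[Any])
final case class ColumnarBatchLike(rows: Seq[InternalRowLike])

// The scan's element type is erased, like RDD[InternalRow] that is
// actually carrying ColumnarBatch values under the hood.
val scanOutput: Iterator[Any] =
  Iterator(ColumnarBatchLike(Seq(InternalRowLike(Seq(1)))))

// Row-based consumer: casts each element blindly, as the generated
// row-based code does.
def consumeAsRows(it: Iterator[Any]): Either[String, Int] =
  try Right(it.map(_.asInstanceOf[InternalRowLike]).size)
  catch { case _: ClassCastException => Left("ClassCastException") }

// Row-based consumption over columnar input fails, as in the report.
println(consumeAsRows(scanOutput))
```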

Hope it helps,
Kris
--

Kris Mok

Software Engineer, Databricks Inc.

[hidden email]

databricks.com

 





RE: [EXTERNAL] Re: ColumnarBatch to InternalRow cast exception with codegen enabled

Nasrulla Khan Haris

Thanks for your inputs, Kris. Yes, I have a new data source that wraps the built-in Parquet data source. What I do not understand is why, with WSCG disabled, the output is not a ColumnarBatch.

 

 



RE: [EXTERNAL] Re: ColumnarBatch to InternalRow cast exception with codegen enabled

Nasrulla Khan Haris
In reply to this post by Kris Mo

 

Thanks for your inputs, Kris. Yes, I have a new data source that wraps the built-in Parquet data source. What I do not understand is that with WSCG disabled the output is not a ColumnarBatch; if my changes do not handle columnar support, shouldn't the behavior be the same with or without WSCG?

 

 

 



Re: [EXTERNAL] Re: ColumnarBatch to InternalRow cast exception with codegen enabled

Kris Mo
Hi Nasrulla,

Without details of your code and configuration it's hard to tell exactly what went wrong, since there are many places it could.

But one thing is for sure: the interpreted (non-WSCG) code path and the WSCG path are two separate things, and it wouldn't surprise me that one works while the other doesn't, because they can have different features and bugs in them.
Depending on which version or branch of Spark you're working with, you might need to implement columnar support slightly differently. cf. https://github.com/apache/spark/commit/c341de8b3e1f1d3327bd4ae3b0d2ec048f64d306
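To make the delegation point concrete, here is a Spark-free sketch of the pattern a wrapping format needs. All names below are invented stand-ins, not Spark's actual API (the real hook and its signature differ by branch, per the commit linked above): if the wrapper does not delegate the batch-support check to the inner format, the planner and the reader disagree about the output type.

```scala
// Stand-ins for Spark types (illustrative only, not the real API).
trait FormatLike {
  // Whether the scan will emit columnar batches for this schema.
  def supportsBatch(schemaHasNestedTypes: Boolean): Boolean
}

class ParquetLikeFormat extends FormatLike {
  // Model: the vectorized reader handles only flat schemas.
  def supportsBatch(schemaHasNestedTypes: Boolean): Boolean =
    !schemaHasNestedTypes
}

// Buggy wrapper: ignores the inner format, so the planner generates
// row-based code while the underlying reader still emits batches.
class NonDelegatingWrapper(inner: FormatLike) extends FormatLike {
  def supportsBatch(schemaHasNestedTypes: Boolean): Boolean = false
}

// Correct wrapper: delegates, keeping planner and reader consistent.
class DelegatingWrapper(inner: FormatLike) extends FormatLike {
  def supportsBatch(schemaHasNestedTypes: Boolean): Boolean =
    inner.supportsBatch(schemaHasNestedTypes)
}
```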

Best regards,
Kris
--

Kris Mok

Software Engineer, Databricks Inc.

[hidden email]

databricks.com

 


