Getting the ball started on a 2.4.6 release

17 messages

Getting the ball started on a 2.4.6 release

Holden Karau
Hi folks,

I’m going to get started on putting together a 2.4.6 release, to come out hopefully around the same time as 3.0. Are there any changes in master that folks think we should consider backporting to a 2.4.6 release?

Cheers,

Holden :)
--
Books (Learning Spark, High Performance Spark, etc.): https://amzn.to/2MaRAG9 

Re: Getting the ball started on a 2.4.6 release

Sean Owen-2
Looks like we have 1 marked for 2.4.6:
https://issues.apache.org/jira/projects/SPARK/versions/12346781

https://issues.apache.org/jira/browse/SPARK-31234 ResetCommand should
not wipe out all configs

Xiao might be able to comment on that one.
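For anyone skimming the thread, the bug is that RESET wiped out even configs supplied at launch time, not just session-level SETs. A rough illustration of the behaviour the fix targets (this is my sketch, not from the JIRA; it assumes a local Spark install with the spark-sql CLI on the PATH, so it isn't runnable otherwise):

```shell
# Hedged sketch of SPARK-31234 (requires a Spark install; not runnable here).
# Launch with a config set via --conf, override it in the session, then RESET.
# After the fix, RESET should restore the launch-time value (10) rather than
# wiping the config back to the built-in default.
spark-sql --conf spark.sql.shuffle.partitions=10 -e "
  SET spark.sql.shuffle.partitions=5;
  RESET;
  SET spark.sql.shuffle.partitions;
"
```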


---------------------------------------------------------------------
To unsubscribe e-mail: [hidden email]


Re: Getting the ball started on a 2.4.6 release

Xiao Li-2
Yes. This one got merged yesterday. 

Thanks!

Xiao


Re: Getting the ball started on a 2.4.6 release

edeesis
I'd like to advocate for:

https://issues.apache.org/jira/browse/SPARK-25515
and
https://issues.apache.org/jira/browse/SPARK-29865

Two small quality-of-life changes that make production use of Spark on Kubernetes much easier.



--
Sent from: http://apache-spark-developers-list.1001551.n3.nabble.com/



Re: Getting the ball started on a 2.4.6 release

Holden Karau
These seem not very impactful for end users on K8s, assuming they've got logging of some kind set up. Unless I'm missing something.





Re: Getting the ball started on a 2.4.6 release

wuyi
I have one: https://issues.apache.org/jira/browse/SPARK-31485, which could cause an application hang.

And probably also https://issues.apache.org/jira/browse/SPARK-31509, to give users better guidance on barrier execution. But we don't have a conclusion on that one yet.

Best,

Yi Wu





Re: Getting the ball started on a 2.4.6 release

Holden Karau
Thanks, I agree that improving that error message instead of hanging could be a good candidate for backporting to 2.4.





Re: Getting the ball started on a 2.4.6 release

wuyi
We have a conclusion now and have decided to include SPARK-31509 in the PR for SPARK-31485, so there should actually be only one candidate (though, to be honest, it still depends on the committers).

Best,
Yi Wu





Re: Getting the ball started on a 2.4.6 release

edeesis
There's other information you can obtain from the pod metadata via a describe beyond what's in the logs, which are typically just what the application itself prints.

I've also found that Spark has some trouble obtaining the reason for a K8s executor death (as evidenced by the spark.kubernetes.executor.lostCheck.maxAttempts config property).

I admittedly don't know what should qualify for a backport, but considering 3.0 is a major upgrade (Scala version, et al.), is there any room for being more generous with backporting to 2.4?





Re: Getting the ball started on a 2.4.6 release

Holden Karau


On Thu, Apr 23, 2020 at 9:07 AM edeesis <[hidden email]> wrote:
There's other information you can obtain from the Pod metadata on a describe
than just from the logs, which are typically what's being printed by the
Application itself.
Would `kubectl get pods -w -o yaml` do the trick here, or is there information that wouldn't be captured that way?
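To make the suggestion concrete, here's a rough sketch. The `spark-role=executor` label selector is an assumption about how your submission labels executor pods, and the kubectl line needs a live cluster, so it's shown commented out; the grep below runs against a captured sample instead:

```shell
# Stream executor pod YAML as objects change, so termination metadata
# (exit codes, OOMKilled reasons) survives pod deletion:
#   kubectl get pods -w -o yaml -l spark-role=executor > /tmp/executor-pods.yaml

# Given a captured snippet like the following, the termination reason
# can be pulled back out after the pod itself is gone:
cat > /tmp/executor-pods.yaml <<'EOF'
    lastState:
      terminated:
        exitCode: 137
        reason: OOMKilled
EOF
grep -A3 'lastState:' /tmp/executor-pods.yaml | grep 'reason:'
```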


I've also found that Spark has some trouble obtaining the reason for a K8S
executor death (as evident by the
spark.kubernetes.executor.lostCheck.maxAttempts config property)

I admittedly don't know what should qualify for a backport, but considering
3.0 is a major upgrade (Scala version, et al), is there any room for
being more generous with backporting to 2.4?
I’d like to revisit the conversation around a Spark 2.5 as a transitional release. I know that some people are already effectively maintaining 2.4 plus selective backports of new functionality internally. Maybe I’ll kick off that discussion, and it can help inform what we should be putting in 2.4.





Re: Getting the ball started on a 2.4.6 release

Holden Karau
Tentatively, I'm planning on this list to start backporting. If no one sees any issues with these, I'll start making backport JIRAs for tracking this afternoon.
SPARK-26390       ColumnPruning rule should only do column pruning
SPARK-25407       Allow nested access for non-existent field for Parquet file when nested pruning is enabled
SPARK-25559       Remove the unsupported predicates in Parquet when possible
SPARK-25860       Replace Literal(null, _) with FalseLiteral whenever possible
SPARK-27514       Skip collapsing windows with empty window expressions
SPARK-25338       Ensure to call super.beforeAll() and super.afterAll() in test cases
SPARK-27138       Remove AdminUtils calls (fixes deprecation)
SPARK-27981       Remove `Illegal reflective access` warning for `java.nio.Bits.unaligned()` in JDK9+
SPARK-26095       Disable parallelization in make-distribution.sh (avoid build hanging)
SPARK-25692       Remove static initialization of worker eventLoop handling chunk fetch requests within TransportContext. This fixes ChunkFetchIntegrationSuite as well
SPARK-26306       More memory to de-flake SorterSuite
SPARK-30199       Recover `spark.(ui|blockManager).port` from checkpoint
SPARK-27676       InMemoryFileIndex should respect spark.sql.files.ignoreMissingFiles
SPARK-31047       Improve file listing for ViewFileSystem
SPARK-25595       Ignore corrupt Avro file if flag IGNORE_CORRUPT_FILES enabled

Maybe:
SPARK-27801       Delegate to ViewFileSystem during file listing correctly

Not yet merged:
SPARK-31485       Barrier execution hang if insufficient resources


Re: Getting the ball started on a 2.4.6 release

Xiao Li-2
Hi, Holden, 

We try to avoid backporting improvement/cleanup PRs to maintenance releases, especially in the core modules like Spark Core and SQL. SPARK-26390 is a good example.

Xiao


Re: Getting the ball started on a 2.4.6 release

Holden Karau
I included SPARK-26390 as a candidate since it sounded like it bordered on a correctness/expected-behaviour fix (e.g. the ColumnPruning rule doing more than column pruning), but if it's too big a change I'm happy to drop that one.


Re: Getting the ball started on a 2.4.6 release

Xiao Li-2

Actually, SPARK-26390 (https://github.com/apache/spark/pull/23343) is just a small cleanup; I don't think it fixes any correctness bugs.

I think we should discuss your backport plans one by one with the PR authors and reviewers, since most of them are not closely following the dev list. 

Xiao




Re: Getting the ball started on a 2.4.6 release

Holden Karau
Sounds good. I'll make the JIRAs for tracking, ping the original PR authors there, and based on their feedback either include each change or not.


Re: Getting the ball started on a 2.4.6 release

edeesis
Yes, watching the pod YAML could work for this. I just need to set up something to do that; thanks for cluing me into it.

And sounds great re: Spark 2.5. Having a transitional release makes sense, I think.





Re: Getting the ball started on a 2.4.6 release

Holden Karau


On Fri, Apr 24, 2020 at 6:14 PM edeesis <[hidden email]> wrote:
Yes, watching the pod yaml could work for this. Just gotta set up some kind
of thing to do that, thanks for clueing me into that.
Sure thing! Kris Nova was the one who clued me into it, so I'm just passing it along :)


And sounds great re: Spark 2.5. Having a transitional release makes sense I
think.
wonderful :)



