Minimum JDK8 version


Minimum JDK8 version

Dongjoon Hyun-2
Hi, All.

Apache Spark 3.x will support both JDK8 and JDK11.

I'm wondering if we can set a minimum JDK8 version for Apache Spark 3.0.

Specifically, can we start to deprecate JDK8u81 and older at 3.0?

Currently, the Apache Spark testing infra tests only with jdk1.8.0_191 and above.

Bests,
Dongjoon.

Re: Minimum JDK8 version

Sean Owen-2
Probably, but what difference makes it harder to support u81 vs. later releases?


Re: Minimum JDK8 version

shane knapp
In reply to this post by Dongjoon Hyun-2
I will happily test against the lowest agreed-upon version...

--
Shane Knapp
UC Berkeley EECS Research / RISELab Staff Technical Lead
https://rise.cs.berkeley.edu

Re: Minimum JDK8 version

Takeshi Yamamuro
In reply to this post by Sean Owen-2
Hi, Dongjoon,

It might be worth clearly describing which JDK versions we check in the testing infra
in some documents, e.g., https://spark.apache.org/docs/latest/#downloading

BTW, is any other project announcing a minimum supported JDK version?
It seems that Hadoop does not.




--
---
Takeshi Yamamuro

Re: Minimum JDK8 version

Dongjoon Hyun-2
Thank you for the replies, Sean, Shane, and Takeshi.

The reason is that there is a PR aiming to add `-XX:OnOutOfMemoryError="kill -9 %p"` as the default behavior at 3.0.0.
(Please note that the PR adds it by *default*, always. There is no way for a user to remove it.)

    - [SPARK-27900][CORE][K8s] Add `spark.driver.killOnOOMError` flag in cluster mode
    - https://github.com/apache/spark/pull/26161

If we can deprecate old JDK8 versions, we can use the JVM option `ExitOnOutOfMemoryError` instead.
(This was added in JDK 8u92. In my previous email, 8u82 was a typo.)
    - https://www.oracle.com/technetwork/java/javase/8u92-relnotes-2949471.html
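
For illustration, a minimal sketch of what relying on the newer flag could look like from the configuration side (the `extraJavaOptions` keys below are standard Spark configs; the exact wiring in the PR may differ):

```java
// A hedged sketch, not the PR's implementation: with a JDK 8u92+ floor,
// the JVM's built-in -XX:+ExitOnOutOfMemoryError could replace injecting
// -XX:OnOutOfMemoryError="kill -9 %p" into launched JVMs.
import org.apache.spark.SparkConf;

public class OomFlags {
  public static SparkConf withExitOnOom() {
    return new SparkConf()
        .set("spark.driver.extraJavaOptions", "-XX:+ExitOnOutOfMemoryError")
        .set("spark.executor.extraJavaOptions", "-XX:+ExitOnOutOfMemoryError");
  }
}
```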


Naturally, not all JDK8 versions are the same. For example, the Hadoop community also has the following document, although it does not specify a minimum version.
    - https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+Java+Versions


Bests,
Dongjoon.



Re: Minimum JDK8 version

Manu Zhang
In reply to this post by Takeshi Yamamuro
> Probably, but what difference makes it harder to support u81 vs. later releases?




Re: Minimum JDK8 version

Sean Owen-2
In reply to this post by Dongjoon Hyun-2
I think that's fine, personally. Anyone using JDK 8 should be, and probably is, on a recent release.


Re: Minimum JDK8 version

Dongjoon Hyun-2
Thank you. I created a PR for that; for now, the minimum requirement in that PR is 8u92.


Bests,
Dongjoon.



Re: Minimum JDK8 version

Takeshi Yamamuro
> Naturally, not all JDK8 versions are the same. For example, the Hadoop community also has the following document, although it does not specify a minimum version.
Oh, I didn't know that. Thanks for the info and for updating the doc!

Bests,
Takeshi



--
---
Takeshi Yamamuro

Re: Minimum JDK8 version

Steve Loughran-2
In reply to this post by Dongjoon Hyun-2


On Fri, Oct 25, 2019 at 2:56 AM Dongjoon Hyun <[hidden email]> wrote:

> Naturally, not all JDK8 versions are the same. For example, the Hadoop community also has the following document, although it does not specify a minimum version.



I don't think there is a specific minimum, though some of the big intermediate releases have caused surprises.

One regression is actually HTTPS/TLS encryption performance, which hurts abfs and s3a. You can tune that via system properties. A move to wildfly-openssl (when it is on the classpath) is underway in Hadoop 3.2/3.3, which we may backport once we trust it. Amazon has also recently released a fork of that code, which presumably works consistently with AWS endpoints, that having been a trouble spot on some wildfly versions.
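
As a hedged illustration of that system-property tuning (the property names below are standard JSSE ones for HttpsURLConnection clients; the values are placeholders, not recommendations):

```java
// Illustrative only: https.protocols and https.cipherSuites are real JSSE
// system properties, but the best-performing values depend on the specific
// JDK 8 build in use, so measure before adopting anything like this.
public class TlsTuning {
  public static void main(String[] args) {
    System.setProperty("https.protocols", "TLSv1.2");
    // GCM suites were notably slow on JDK 8; a CBC suite may be faster
    // on affected builds (an assumption for this sketch, not a rule).
    System.setProperty("https.cipherSuites",
        "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256");
    // ... construct HttpsURLConnection-based clients after setting these.
  }
}
```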

And everyone is moving to OpenJDK.

FWIW, we've just discovered perf issues with CompletableFuture.get(): on JDK 8 it makes a native call to Runtime.getRuntime().availableProcessors() on every single blocking get(), just to decide whether to busy-wait briefly (multicore) or sleep immediately. That becomes significant once you start making serious use of those APIs.
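
A hedged sketch of one way around that, assuming the caller controls how results are collected (the helper below is illustrative, not Hadoop code): block once for the whole batch, so only a single call pays the spin-wait setup.

```java
// Hedged sketch: on JDK 8, CompletableFuture's blocking wait path re-reads
// Runtime.getRuntime().availableProcessors() each time it has to park a
// caller. Waiting once on allOf(...) means the per-future join() calls
// below all find completed results and never enter that path.
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.stream.Collectors;

final class BatchAwait {
  static <T> List<T> awaitAll(List<CompletableFuture<T>> futures)
      throws ExecutionException, InterruptedException {
    // Single blocking wait for the whole batch.
    CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).get();
    // Every future is now complete; join() returns immediately.
    return futures.stream()
        .map(CompletableFuture::join)
        .collect(Collectors.toList());
  }
}
```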

-steve