[VOTE] Apache Spark 2.2.0 (RC2)


[VOTE] Apache Spark 2.2.0 (RC2)

Michael Armbrust
Please vote on releasing the following candidate as Apache Spark version 2.2.0. The vote is open until Tues, May 9th, 2017 at 12:00 PST and passes if a majority of at least 3 +1 PMC votes are cast.

[ ] +1 Release this package as Apache Spark 2.2.0
[ ] -1 Do not release this package because ...


To learn more about Apache Spark, please see http://spark.apache.org/

The tag to be voted on is v2.2.0-rc2 (1d4017b44d5e6ad156abeaae6371747f111dd1f9)

List of JIRA tickets resolved can be found with this filter.

The release files, including signatures, digests, etc. can be found at:
http://home.apache.org/~pwendell/spark-releases/spark-2.2.0-rc2-bin/

Release artifacts are signed with the following key:
https://people.apache.org/keys/committer/pwendell.asc

The staging repository for this release can be found at:
https://repository.apache.org/content/repositories/orgapachespark-1236/

The documentation corresponding to this release can be found at:
http://people.apache.org/~pwendell/spark-releases/spark-2.2.0-rc2-docs/


FAQ

How can I help test this release?

If you are a Spark user, you can help us test this release by taking an existing Spark workload, running it on this release candidate, and reporting any regressions.

What should happen to JIRA tickets still targeting 2.2.0?

Committers should look at those and triage. Extremely important bug fixes, documentation, and API tweaks that impact compatibility should be worked on immediately. Everything else please retarget to 2.3.0 or 2.2.1.

But my bug isn't fixed!??!

In order to make timely releases, we will typically not hold the release unless the bug in question is a regression from 2.1.1.
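One concrete way to help test, beyond running workloads, is to check the published digests of the release files. A minimal Scala sketch of the digest side of that check (the file name and the comparison value are illustrative placeholders, not the real release values; the signature side would use `gpg --verify` against the key above):

```scala
// Compute the hex-encoded SHA-512 digest of a local file, for comparison
// against the digest published alongside the release artifact.
import java.nio.file.{Files, Paths}
import java.security.MessageDigest

def sha512Hex(path: String): String =
  MessageDigest.getInstance("SHA-512")
    .digest(Files.readAllBytes(Paths.get(path)))
    .map("%02x".format(_))   // each byte as two lowercase hex digits
    .mkString

// Illustrative comparison (artifact name and publishedDigest are stand-ins):
// val ok = sha512Hex("spark-2.2.0-bin-hadoop2.7.tgz") == publishedDigest
```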

Re: [VOTE] Apache Spark 2.2.0 (RC2)

Sean Owen
The tests pass, licenses are OK, sigs, etc. I'd endorse it, but we still have blockers, so I assume this means there will be another RC at some point.

Blocker
SPARK-20503 ML 2.2 QA: API: Python API coverage
SPARK-20501 ML, Graph 2.2 QA: API: New Scala APIs, docs
SPARK-20502 ML, Graph 2.2 QA: API: Experimental, DeveloperApi, final, sealed audit
SPARK-20509 SparkR 2.2 QA: New R APIs and API docs
SPARK-20504 ML 2.2 QA: API: Java compatibility, docs
SPARK-20500 ML, Graph 2.2 QA: API: Binary incompatible changes

Critical
SPARK-20499 Spark MLlib, GraphX 2.2 QA umbrella
SPARK-20520 R streaming tests failed on Windows
SPARK-18891 Support for specific collection types
SPARK-20505 ML, Graph 2.2 QA: Update user guide for new features & APIs
SPARK-20364 Parquet predicate pushdown on columns with dots return empty results
SPARK-20508 Spark R 2.2 QA umbrella
SPARK-20512 SparkR 2.2 QA: Programming guide, migration guide, vignettes updates
SPARK-20513 Update SparkR website for 2.2
SPARK-20510 SparkR 2.2 QA: Update user guide for new features & APIs
SPARK-20507 Update MLlib, GraphX websites for 2.2
SPARK-20506 ML, Graph 2.2 QA: Programming guide update and migration guide
SPARK-19690 Join a streaming DataFrame with a batch DataFrame may not work
SPARK-7768 Make user-defined type (UDT) API public
SPARK-4502 Spark SQL reads unnecessary nested fields from Parquet
SPARK-17626 TPC-DS performance improvements using star-schema heuristics


On Thu, May 4, 2017 at 6:07 PM Michael Armbrust <[hidden email]> wrote:

Re: [VOTE] Apache Spark 2.2.0 (RC2)

Joseph Bradley
I'll work on resolving some of the ML QA blockers this week, but it'd be great to get help.  @committers & contributors who work on ML, many of you have helped in the past, so please help take QA tasks wherever possible.  (Thanks Yanbo & Felix for jumping in already.)  Anyone is welcome to chip in of course!
Joseph

On Thu, May 4, 2017 at 4:09 PM, Sean Owen <[hidden email]> wrote:

--

Joseph Bradley

Software Engineer - Machine Learning

Databricks, Inc.

http://databricks.com


Re: [VOTE] Apache Spark 2.2.0 (RC2)

Ricardo Almeida-2
As for build and tests, all pass on both macOS 10 and Ubuntu 16.10, with Java 8.

./build/mvn -Phadoop-2.7 -Dhadoop.version=2.7.3 -Pyarn -Phive -Phive-thriftserver -Pscala-2.11 clean package


On 8 May 2017 at 23:18, Joseph Bradley <[hidden email]> wrote:



Re: [VOTE] Apache Spark 2.2.0 (RC2)

Kazuaki Ishizaki
+1 (non-binding)

I tested it on Ubuntu 16.04 and OpenJDK8 on ppc64le. All of the tests for core have passed.

$ java -version
openjdk version "1.8.0_111"
OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-2ubuntu0.16.04.2-b14)
OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode)
$ build/mvn -DskipTests -Phive -Phive-thriftserver -Pyarn -Phadoop-2.7 package install
$ build/mvn -Phive -Phive-thriftserver -Pyarn -Phadoop-2.7 test -pl core
...
Run completed in 15 minutes, 12 seconds.
Total number of tests run: 1940
Suites: completed 206, aborted 0
Tests: succeeded 1940, failed 0, canceled 4, ignored 8, pending 0
All tests passed.
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 16:51 min
[INFO] Finished at: 2017-05-09T17:51:04+09:00
[INFO] Final Memory: 53M/514M
[INFO] ------------------------------------------------------------------------
[WARNING] The requested profile "hive" could not be activated because it does not exist.


Kazuaki Ishizaki,



From:        Michael Armbrust <[hidden email]>
To:        "[hidden email]" <[hidden email]>
Date:        2017/05/05 02:08
Subject:        [VOTE] Apache Spark 2.2.0 (RC2)






Re: [VOTE] Apache Spark 2.2.0 (RC2)

Michael Armbrust
I'm going to -1 given the outstanding issues and lack of +1s.  I'll create another RC once ML has had time to take care of the more critical problems.  In the meantime please keep testing this release!

On Tue, May 9, 2017 at 2:00 AM, Kazuaki Ishizaki <[hidden email]> wrote:


Re: [VOTE] Apache Spark 2.2.0 (RC2)

Marcelo Vanzin
Since you'll be creating a new RC, I'd wait until SPARK-20666 is
fixed, since the change that caused it is in branch-2.2. Probably a
good idea to raise it to blocker at this point.

On Thu, May 11, 2017 at 2:59 PM, Michael Armbrust
<[hidden email]> wrote:




--
Marcelo

---------------------------------------------------------------------
To unsubscribe e-mail: [hidden email]


Re: [VOTE] Apache Spark 2.2.0 (RC2)

Joseph Bradley
All of the ML/Graph/SparkR QA blocker JIRAs have been resolved.  Thanks everyone who helped out on those!

We still have open ML/Graph/SparkR JIRAs targeted at 2.2, but they are essentially all for documentation.

Joseph

On Thu, May 11, 2017 at 3:08 PM, Marcelo Vanzin <[hidden email]> wrote:


Re: [VOTE] Apache Spark 2.2.0 (RC2)

RussS
We're seeing an issue with DataSourceScanExec in some of our integration tests for the SCC (spark-cassandra-connector). Running DataFrame reads and writes from the shell seems fine, but the redaction code gets a "None" when calling SparkSession.getActiveSession.get in our integration tests. I'm not sure why, but I'll dig into this later if I get a chance.

Example Failed Test
https://github.com/datastax/spark-cassandra-connector/blob/v2.0.1/spark-cassandra-connector/src/it/scala/com/datastax/spark/connector/sql/CassandraSQLSpec.scala#L311

```
[info]   org.apache.spark.SparkException: Job aborted due to stage failure: Task serialization failed: java.util.NoSuchElementException: None.get
[info] java.util.NoSuchElementException: None.get
[info] at scala.None$.get(Option.scala:347)
[info] at scala.None$.get(Option.scala:345)
[info] at org.apache.spark.sql.execution.DataSourceScanExec$class.org$apache$spark$sql$execution$DataSourceScanExec$$redact(DataSourceScanExec.scala:70)
[info] at org.apache.spark.sql.execution.DataSourceScanExec$$anonfun$4.apply(DataSourceScanExec.scala:54)
[info] at org.apache.spark.sql.execution.DataSourceScanExec$$anonfun$4.apply(DataSourceScanExec.scala:52)
```

Again this only seems to repro in our IT suite, so I'm not sure if this is a real issue.
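For what it's worth, the "None.get" in the trace is just Option.get called on an empty Option: SparkSession.getActiveSession returns Option[SparkSession], and .get throws when no session is registered on the calling thread. A small illustrative sketch (the names here are stand-ins, not the connector's or Spark's actual code):

```scala
// Stand-in for SparkSession.getActiveSession when no session is active
// on the current thread:
val active: Option[String] = None

// active.get  // would throw java.util.NoSuchElementException: None.get,
//             // exactly the error in the trace above.

// A defensive pattern falls back instead of throwing:
val session = active.getOrElse("<no active session>")
```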


On Tue, May 16, 2017 at 1:40 PM Joseph Bradley <[hidden email]> wrote:


Re: [VOTE] Apache Spark 2.2.0 (RC2)

Nick Pentreath
All the outstanding ML QA doc and user guide items are done for 2.2 so from that side we should be good to cut another RC :)

On Thu, 18 May 2017 at 00:18 Russell Spitzer <[hidden email]> wrote:
Seeing an issue with DataSourceScanExec in some of our integration tests for the SCC. Running DataFrame reads and writes from the shell seems fine, but the redaction code seems to get a None when calling SparkSession.getActiveSession.get in our integration tests. I'm not sure why, but I'll dig into this later if I get a chance.

Example Failed Test
https://github.com/datastax/spark-cassandra-connector/blob/v2.0.1/spark-cassandra-connector/src/it/scala/com/datastax/spark/connector/sql/CassandraSQLSpec.scala#L311

```
[info]   org.apache.spark.SparkException: Job aborted due to stage failure: Task serialization failed: java.util.NoSuchElementException: None.get
[info] java.util.NoSuchElementException: None.get
[info] at scala.None$.get(Option.scala:347)
[info] at scala.None$.get(Option.scala:345)
[info] at org.apache.spark.sql.execution.DataSourceScanExec$class.org$apache$spark$sql$execution$DataSourceScanExec$$redact(DataSourceScanExec.scala:70)
[info] at org.apache.spark.sql.execution.DataSourceScanExec$$anonfun$4.apply(DataSourceScanExec.scala:54)
[info] at org.apache.spark.sql.execution.DataSourceScanExec$$anonfun$4.apply(DataSourceScanExec.scala:52)
``` 

Again, this only seems to repro in our IT suite, so I'm not sure whether this is a real issue.
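The `None.get` in the trace above comes from calling `.get` on `SparkSession.getActiveSession` on a thread where no session was ever registered, so the `Option` is `None`. Here is a minimal, Spark-free sketch of that pitfall and the usual defensive alternative; the class and member names are illustrative assumptions, not Spark's or the connector's actual code:

```java
import java.util.NoSuchElementException;
import java.util.Optional;

// Spark-free sketch of the failure mode reported above. SparkSession.getActiveSession
// behaves like a per-thread Optional that is empty on threads where no session was
// registered; an unguarded .get on it raises NoSuchElementException -- the Java
// analogue of Scala's None.get in the stack trace.
public class ActiveSessionSketch {

    // Stand-in for the per-thread "active session" registry (hypothetical).
    static final ThreadLocal<Optional<String>> ACTIVE =
            ThreadLocal.withInitial(Optional::empty);

    // Unguarded access: mirrors calling getActiveSession.get with no session set.
    static boolean unguardedGetThrows() {
        try {
            ACTIVE.get().get(); // throws when no session is registered
            return false;
        } catch (NoSuchElementException e) {
            return true;
        }
    }

    // Defensive access: fall back instead of crashing the task.
    static String describeActive() {
        return ACTIVE.get().orElse("<no active session>");
    }

    public static void main(String[] args) {
        System.out.println("unguarded get threw: " + unguardedGetThrows());
        System.out.println("guarded: " + describeActive());

        ACTIVE.set(Optional.of("session-1"));
        System.out.println("after registering: " + describeActive());
    }
}
```

If the IT suite runs the failing code on a thread that never registered a session, the unguarded path would crash exactly as in the trace, which would be consistent with it reproducing only in the integration environment.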


On Tue, May 16, 2017 at 1:40 PM Joseph Bradley <[hidden email]> wrote:
All of the ML/Graph/SparkR QA blocker JIRAs have been resolved.  Thanks everyone who helped out on those!

We still have open ML/Graph/SparkR JIRAs targeted at 2.2, but they are essentially all for documentation.

Joseph

On Thu, May 11, 2017 at 3:08 PM, Marcelo Vanzin <[hidden email]> wrote:
Since you'll be creating a new RC, I'd wait until SPARK-20666 is
fixed, since the change that caused it is in branch-2.2. Probably a
good idea to raise it to blocker at this point.

On Thu, May 11, 2017 at 2:59 PM, Michael Armbrust
<[hidden email]> wrote:
> I'm going to -1 given the outstanding issues and lack of +1s.  I'll create
> another RC once ML has had time to take care of the more critical problems.
> In the meantime please keep testing this release!


Re: [VOTE] Apache Spark 2.2.0 (RC2)

MKesselaers
Hi,

When would the next vote for RC take place?

Thanks,
Maarten
On 19 May 2017, at 20:53, Nick Pentreath <[hidden email]> wrote:

All the outstanding ML QA doc and user guide items are done for 2.2 so from that side we should be good to cut another RC :)



Re: [VOTE] Apache Spark 2.2.0 (RC2)

Michael Armbrust
In reply to this post by Nick Pentreath
I'm waiting for SPARK-20814 at Marcelo's request and I'd also like to include SPARK-20844.  I think we should be able to cut another RC midweek.

On Fri, May 19, 2017 at 11:53 AM, Nick Pentreath <[hidden email]> wrote:
All the outstanding ML QA doc and user guide items are done for 2.2 so from that side we should be good to cut another RC :)





Re: [VOTE] Apache Spark 2.2.0 (RC2)

Michael Allman-2
Michael,

If you haven't started cutting the new RC, I'm working on a documentation PR right now that I'm hoping we can get into Spark 2.2 as a migration note, even if it's just a mention: https://issues.apache.org/jira/browse/SPARK-20888.

Michael


On May 22, 2017, at 11:39 AM, Michael Armbrust <[hidden email]> wrote:

I'm waiting for SPARK-20814 at Marcelo's request and I'd also like to include SPARK-20844.  I think we should be able to cut another RC midweek.




Re: [VOTE] Apache Spark 2.2.0 (RC2)

Michael Allman-2
PR is here: https://github.com/apache/spark/pull/18112



