[VOTE] Apache Spark 2.1.1 (RC2)

[VOTE] Apache Spark 2.1.1 (RC2)

Michael Armbrust
Please vote on releasing the following candidate as Apache Spark version 2.1.0. The vote is open until Sun, April 2nd, 2018 at 16:30 PST and passes if a majority of at least 3 +1 PMC votes are cast.

[ ] +1 Release this package as Apache Spark 2.1.1
[ ] -1 Do not release this package because ...


To learn more about Apache Spark, please see http://spark.apache.org/

The tag to be voted on is v2.1.1-rc2 (02b165dcc2ee5245d1293a375a31660c9d4e1fa6)

List of JIRA tickets resolved can be found with this filter.

The release files, including signatures, digests, etc. can be found at:
http://home.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-bin/

Release artifacts are signed with the following key:
https://people.apache.org/keys/committer/pwendell.asc

The staging repository for this release can be found at:
https://repository.apache.org/content/repositories/orgapachespark-1227/

The documentation corresponding to this release can be found at:
http://people.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-docs/


FAQ

How can I help test this release?

If you are a Spark user, you can help us test this release by taking an existing Spark workload and running on this release candidate, then reporting any regressions.

What should happen to JIRA tickets still targeting 2.1.1?

Committers should look at those and triage. Extremely important bug fixes, documentation, and API tweaks that impact compatibility should be worked on immediately. Everything else please retarget to 2.1.2 or 2.2.0.

But my bug isn't fixed!??!

In order to make timely releases, we will typically not hold the release unless the bug in question is a regression from 2.1.0.

What happened to RC1?

There were issues with the release packaging and, as a result, it was skipped.

Re: [VOTE] Apache Spark 2.1.1 (RC2)

hennekey
+1 Release this package as Apache Spark 2.1.1

I was able to successfully run a sequence of three jobs that construct a count vectorizer, an LDA model, and print some diagnostics, in EMR on YARN in our environment. This was not true of 1.6.3 and 2.1.0.

Re: [VOTE] Apache Spark 2.1.1 (RC2)

Xiao Li
In reply to this post by Michael Armbrust
+1

Xiao


Re: [VOTE] Apache Spark 2.1.1 (RC2)

Holden Karau
-1 (non-binding)

Python packaging doesn't seem to have quite worked out: looking at PKG-INFO, the description is "Description: !!!!! missing pandoc do not upload to PyPI !!!!". Ideally this would be a version we could upload to PyPI.
Building this on my own machine results in a longer description.

My guess is that whichever machine was used to package this is missing the pandoc executable (or possibly pypandoc library).

--
Cell : 425-233-8271

Re: [VOTE] Apache Spark 2.1.1 (RC2)

Sean Owen
In reply to this post by Michael Armbrust
(Tiny nits: first line says '2.1.0', just a note for next copy/paste of the email if needed. Also can we point people to an HTTPS URL to download artifacts in this boilerplate? https://home.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-bin/ )

I'm testing on Ubuntu 16.10, with Java 8, with -Phive -Pyarn -Phadoop-2.7. I am getting several intermittent failures:

- caching in memory, serialized, replicated *** FAILED ***
  java.util.concurrent.TimeoutException: Can't find 2 executors before 30000 milliseconds elapsed
...

- caching on disk, replicated *** FAILED ***
  java.util.concurrent.TimeoutException: Can't find 2 executors before 30000 milliseconds elapsed
...

- Unpersisting TorrentBroadcast on executors only in distributed mode *** FAILED ***
  java.util.concurrent.TimeoutException: Can't find 2 executors before 60000 milliseconds elapsed
...

- replicating blocks of object with class defined in repl *** FAILED ***
  isContain was true Interpreter output contained 'Exception':
...
  scala>      |      | java.util.concurrent.TimeoutException: Executors were not up in 60 seconds
..

- using external shuffle service *** FAILED ***
  java.util.concurrent.TimeoutException: Can't find 2 executors before 60000 milliseconds elapsed
...

LocalityPlacementStrategySuite:
 (... just hangs ...)


Only the last two are persistent. It might be an env issue or test issue, so just wondering if anyone else sees these?
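The timeout failures above all come from the same pattern: the suite polls until some condition holds (here, two executors registering) or a deadline passes. Schematically, and purely as an illustration in Python rather than the suites' actual Scala:

```python
import time

def wait_for(condition, timeout_ms, poll_ms=100):
    """Poll `condition` until it returns True or `timeout_ms` elapses.

    Raises TimeoutError on expiry, mirroring the suites'
    java.util.concurrent.TimeoutException.
    """
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        if condition():
            return
        time.sleep(poll_ms / 1000.0)
    raise TimeoutError(
        f"condition not met before {timeout_ms} milliseconds elapsed")
```

Waits like this are sensitive to a slow or heavily loaded environment, which is one reason such failures tend to be intermittent rather than reproducible.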



Re: [VOTE] Apache Spark 2.1.1 (RC2)

Mark Hamstra
LocalityPlacementStrategySuite hangs -- I've definitely been seeing that one for quite a while, not just with 2.1.1-rc; also with Ubuntu 16.10, and not with macOS Sierra.


Re: [VOTE] Apache Spark 2.1.1 (RC2)

Marcelo Vanzin
In reply to this post by Sean Owen
On Sat, Apr 1, 2017 at 12:34 PM, Sean Owen <[hidden email]> wrote:
> LocalityPlacementStrategySuite:
>  (... just hangs ...)

This test is very heavy on DNS requests... I tried to work around that
when I wrote it but couldn't; maybe I should try some more.

I wouldn't hold the release for that.

--
Marcelo



Re: [VOTE] Apache Spark 2.1.1 (RC2)

Kazuaki Ishizaki
In reply to this post by Michael Armbrust
-1 (non-binding)

I tested it on Ubuntu 16.04 and openjdk8 on ppc64le. I got several errors.
I expect that this backport (https://github.com/apache/spark/pull/17509) will be integrated into Spark 2.1.1.


$ java -version
openjdk version "1.8.0_111"
OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-2ubuntu0.16.04.2-b14)
OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode)
$ build/mvn -DskipTests -Phive -Phive-thriftserver -Pyarn -Phadoop-2.7 package install
$ build/mvn -Phive -Phive-thriftserver -Pyarn -Phadoop-2.7 test -pl core
...
-------------------------------------------------------
 T E S T S
-------------------------------------------------------
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
Running org.apache.spark.memory.TaskMemoryManagerSuite
Tests run: 6, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.445 sec <<< FAILURE! - in org.apache.spark.memory.TaskMemoryManagerSuite
encodePageNumberAndOffsetOffHeap(org.apache.spark.memory.TaskMemoryManagerSuite)  Time elapsed: 0.007 sec  <<< ERROR!
java.lang.IllegalArgumentException: requirement failed: No support for unaligned Unsafe. Set spark.memory.offHeap.enabled to false.
        at org.apache.spark.memory.TaskMemoryManagerSuite.encodePageNumberAndOffsetOffHeap(TaskMemoryManagerSuite.java:48)

offHeapConfigurationBackwardsCompatibility(org.apache.spark.memory.TaskMemoryManagerSuite)  Time elapsed: 0.013 sec  <<< ERROR!
java.lang.IllegalArgumentException: requirement failed: No support for unaligned Unsafe. Set spark.memory.offHeap.enabled to false.
        at org.apache.spark.memory.TaskMemoryManagerSuite.offHeapConfigurationBackwardsCompatibility(TaskMemoryManagerSuite.java:138)

Running org.apache.spark.io.NioBufferedFileInputStreamSuite
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.029 sec - in org.apache.spark.io.NioBufferedFileInputStreamSuite
Running org.apache.spark.unsafe.map.BytesToBytesMapOnHeapSuite
Tests run: 13, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 4.708 sec <<< FAILURE! - in org.apache.spark.unsafe.map.BytesToBytesMapOnHeapSuite
testPeakMemoryUsed(org.apache.spark.unsafe.map.BytesToBytesMapOnHeapSuite)  Time elapsed: 0.006 sec  <<< FAILURE!
java.lang.AssertionError: expected:<16648> but was:<16912>

Running org.apache.spark.unsafe.map.BytesToBytesMapOffHeapSuite
Tests run: 13, Failures: 0, Errors: 13, Skipped: 0, Time elapsed: 0.043 sec <<< FAILURE! - in org.apache.spark.unsafe.map.BytesToBytesMapOffHeapSuite
failureToGrow(org.apache.spark.unsafe.map.BytesToBytesMapOffHeapSuite)  Time elapsed: 0.002 sec  <<< ERROR!
java.lang.IllegalArgumentException: requirement failed: No support for unaligned Unsafe. Set spark.memory.offHeap.enabled to false.
...
Tests run: 207, Failures: 7, Errors: 16, Skipped: 0
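All of these errors trace back to the same probe: the JVM reports no support for unaligned Unsafe access on this platform. My understanding of the referenced backport (assumption on my part, based on these failures) is that it special-cases architectures where unaligned access is known to be safe even though the JVM probe says otherwise; the gist, sketched in Python:

```python
# Architectures assumed (per the backport under discussion) to handle
# unaligned memory access even when the JVM's java.nio.Bits.unaligned()
# probe reports False. Treat this set as illustrative, not authoritative.
_UNALIGNED_OK_ARCHS = {"ppc64le", "s390x"}

def unaligned_supported(arch, jvm_probe_result):
    """Return True if unaligned access should be treated as supported:
    either the JVM probe succeeded, or the architecture is whitelisted."""
    return jvm_probe_result or arch in _UNALIGNED_OK_ARCHS
```

With a check like this, the off-heap suites above would no longer be skipped or fail spuriously on ppc64le.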

Kazuaki Ishizaki




Re: [VOTE] Apache Spark 2.1.1 (RC2)

Sean Owen
That backport is fine, even for another RC in my opinion, but it's not a regression. It's a JDK bug, really. 2.1.0 would have failed too.

On Sun, Apr 2, 2017 at 8:20 AM Kazuaki Ishizaki <[hidden email]> wrote:
-1 (non-binding)

I tested it on Ubuntu 16.04 and openjdk8 on ppc64le. I got several errors.
I expect that this backport (https://github.com/apache/spark/pull/17509) will be integrated into Spark 2.1.1.


Re: [VOTE] Apache Spark 2.1.1 (RC2)

Kazuaki Ishizaki
Thank you. Yes, it is not a regression. 2.1.0 would have this failure, too.

Regards,
Kazuaki Ishizaki




Re: [VOTE] Apache Spark 2.1.1 (RC2)

Felix Cheung
In reply to this post by Holden Karau
-1
Sorry, found an issue with the SparkR CRAN check.
Opened SPARK-20197 and working on a fix.



Re: [VOTE] Apache Spark 2.1.1 (RC2)

Michael Armbrust
Thanks for the comments, everyone. This vote fails. Here's how I think we should proceed:
 - [SPARK-20197] - SparkR CRAN - appears to be resolved
 - [SPARK-XXXX] - Python packaging - Holden, please file a JIRA and report whether this is a regression and whether there is an easy fix that we should wait for.

For all the other test failures, please take the time to look through JIRA and open an issue if one does not already exist, so that we can triage whether these are just environmental issues. If I don't hear any objections, I'm going to go ahead with RC3 tomorrow.


Re: [VOTE] Apache Spark 2.1.1 (RC2)

Mridul Muralidharan
Hi,

https://issues.apache.org/jira/browse/SPARK-20202?jql=priority%20%3D%20Blocker%20AND%20affectedVersion%20%3D%20%222.1.1%22%20and%20project%3D%22spark%22


Indicates there is another blocker (SPARK-20197 should have come in
the list too, but was marked major).


Regards,
Mridul


Re: [VOTE] Apache Spark 2.1.1 (RC2)

Sean Owen
This is maybe a blocker. See my suggested action about voting on the current artifact, which I believe eliminates the possibly blocking part of the issue in the short term.

On Tue, Apr 4, 2017, 22:02 Mridul Muralidharan <[hidden email]> wrote:
Hi,

https://issues.apache.org/jira/browse/SPARK-20202?jql=priority%20%3D%20Blocker%20AND%20affectedVersion%20%3D%20%222.1.1%22%20and%20project%3D%22spark%22


Indicates there is another blocker (SPARK-20197 should have come in
the list too, but was marked major).


Regards,
Mridul

On Tue, Apr 4, 2017 at 11:35 AM, Michael Armbrust
<[hidden email]> wrote:
> Thanks for the comments everyone.  This vote fails.  Here's how I think we
> should proceed:
>  - [SPARK-20197] - SparkR CRAN - appears to be resolved
>  - [SPARK-XXXX] - Python packaging - Holden, please file a JIRA and report
> if this is a regression and if there is an easy fix that we should wait for.
>
> For all the other test failures, please take the time to look through JIRA
> and open an issue if one does not already exist so that we can triage if
> these are just environmental issues.  If I don't hear any objections I'm
> going to go ahead with RC3 tomorrow.
>
> On Sun, Apr 2, 2017 at 1:16 PM, Felix Cheung <[hidden email]>
> wrote:
>>
>> -1
>> sorry, found an issue with SparkR CRAN check.
>> Opened SPARK-20197 and working on fix.
>>
>> ________________________________
>> From: [hidden email] <[hidden email]> on behalf of Holden
>> Karau <[hidden email]>
>> Sent: Friday, March 31, 2017 6:25:20 PM
>> To: Xiao Li
>> Cc: Michael Armbrust; [hidden email]
>> Subject: Re: [VOTE] Apache Spark 2.1.1 (RC2)
>>
>> -1 (non-binding)
>>
>> Python packaging doesn't seem to have quite worked out (looking at
>> PKG-INFO the description is "Description: !!!!! missing pandoc do not upload
>> to PyPI !!!!"), ideally it would be nice to have this as a version we
>> upgrade to PyPi.
>> Building this on my own machine results in a longer description.
>>
>> My guess is that whichever machine was used to package this is missing the
>> pandoc executable (or possibly pypandoc library).
>>
>> On Fri, Mar 31, 2017 at 3:40 PM, Xiao Li <[hidden email]> wrote:
>>>
>>> +1
>>>
>>> Xiao
>>>
>>> 2017-03-30 16:09 GMT-07:00 Michael Armbrust <[hidden email]>:
>>>>
>>>> Please vote on releasing the following candidate as Apache Spark version
>>>> 2.1.1. The vote is open until Sun, April 2nd, 2017 at 16:30 PST and passes
>>>> if a majority of at least 3 +1 PMC votes are cast.
>>>>
>>>> [ ] +1 Release this package as Apache Spark 2.1.1
>>>> [ ] -1 Do not release this package because ...
>>>>
>>>>
>>>> To learn more about Apache Spark, please see http://spark.apache.org/
>>>>
>>>> The tag to be voted on is v2.1.1-rc2
>>>> (02b165dcc2ee5245d1293a375a31660c9d4e1fa6)
>>>>
>>>> List of JIRA tickets resolved can be found with this filter.
>>>>
>>>> The release files, including signatures, digests, etc. can be found at:
>>>> http://home.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-bin/
>>>>
>>>> Release artifacts are signed with the following key:
>>>> https://people.apache.org/keys/committer/pwendell.asc
>>>>
>>>> The staging repository for this release can be found at:
>>>> https://repository.apache.org/content/repositories/orgapachespark-1227/
>>>>
>>>> The documentation corresponding to this release can be found at:
>>>> http://people.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-docs/
>>>>
>>>>
>>>> FAQ
>>>>
>>>> How can I help test this release?
>>>>
>>>> If you are a Spark user, you can help us test this release by taking an
>>>> existing Spark workload and running on this release candidate, then
>>>> reporting any regressions.
>>>>
>>>> What should happen to JIRA tickets still targeting 2.1.1?
>>>>
>>>> Committers should look at those and triage. Extremely important bug
>>>> fixes, documentation, and API tweaks that impact compatibility should be
>>>> worked on immediately. Everything else please retarget to 2.1.2 or 2.2.0.
>>>>
>>>> But my bug isn't fixed!??!
>>>>
>>>> In order to make timely releases, we will typically not hold the release
>>>> unless the bug in question is a regression from 2.1.0.
>>>>
>>>> What happened to RC1?
>>>>
>>>> There were issues with the release packaging and as a result RC1 was
>>>> skipped.
>>>
>>>
>>
>>
>>
>> --
>> Cell : 425-233-8271
>> Twitter: https://twitter.com/holdenkarau
>
>

---------------------------------------------------------------------
To unsubscribe e-mail: [hidden email]


Re: [VOTE] Apache Spark 2.1.1 (RC2)

Holden Karau
In reply to this post by Michael Armbrust
So the fix is installing pandoc on whichever machine is used for packaging. I thought that was generally done on the machine of the person rolling the release, so I wasn't sure it made sense as a JIRA, but from chatting with Josh it sounds like that part might run on one of the Jenkins workers - is there a fixed one that is used?

Regardless I'll file a JIRA for this when I get back in front of my desktop (~1 hour or so).
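[Editor's note: the sentinel description Holden quotes suggests a pandoc-based fallback in the packaging script. The sketch below is a hypothetical illustration of that pattern, not Spark's actual setup.py; the README file name and the fallback string are assumptions based on the PKG-INFO output quoted above.]

```python
# Hypothetical sketch of a pandoc-based long-description fallback; NOT
# Spark's actual setup.py. The sentinel mirrors the one seen in PKG-INFO.
try:
    import pypandoc
    # Convert the Markdown README to reStructuredText for PyPI.
    long_description = pypandoc.convert_file("README.md", "rst")
except Exception:
    # pypandoc, the pandoc binary, or the README itself is unavailable:
    # fall back to a loud sentinel so a broken description is easy to spot.
    long_description = "!!!!! missing pandoc do not upload to PyPI !!!!"

print(long_description[:44])
```

With this shape, a machine missing pandoc (or the pypandoc library) silently ships the sentinel as the package description, which matches the symptom reported here.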

On Tue, Apr 4, 2017 at 2:35 PM Michael Armbrust <[hidden email]> wrote:
Thanks for the comments everyone.  This vote fails.  Here's how I think we should proceed:
 - [SPARK-20197] - SparkR CRAN - appears to be resolved
 - [SPARK-XXXX] - Python packaging - Holden, please file a JIRA and report if this is a regression and if there is an easy fix that we should wait for.

For all the other test failures, please take the time to look through JIRA and open an issue if one does not already exist so that we can triage if these are just environmental issues.  If I don't hear any objections I'm going to go ahead with RC3 tomorrow.

--
Cell : 425-233-8271

Re: [VOTE] Apache Spark 2.1.1 (RC2)

Holden Karau
See SPARK-20216, if Michael can let me know which machine is being used for packaging I can see if I can install pandoc on it (should be simple but I know the Jenkins cluster is a bit on the older side).

--
Cell : 425-233-8271

Re: [VOTE] Apache Spark 2.1.1 (RC2)

Holden Karau
Following up: the issues with missing pypandoc/pandoc on the packaging machine have been resolved.

--
Cell : 425-233-8271

Re: [VOTE] Apache Spark 2.1.1 (RC2)

DB Tsai-3
-1

I think that back-porting SPARK-20270 and SPARK-18555 is very important, since they fix a critical bug where na.fill corrupts the data in Long columns even when the data isn't null.

Thanks. 


Sincerely,

DB Tsai
----------------------------------------------------------
Web: https://www.dbtsai.com
PGP Key ID: 0x5CED8B896A6BDFA0
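[Editor's note: for context on why a non-null Long can be corrupted - my reading of SPARK-18555/SPARK-20270 is that the fill path round-trips values through a double, which cannot represent integers above 2**53 exactly. A plain-Python illustration of that precision loss, under that assumption:]

```python
# Illustration of the precision loss behind the reported na.fill corruption
# (assumption: long values are round-tripped through a double, which can only
# represent integers up to 2**53 exactly).
big = 2**53 + 1              # 9007199254740993, not representable as a double
round_tripped = int(float(big))
print(big, round_tripped)    # the value silently changes to 9007199254740992
```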



Re: [VOTE] Apache Spark 2.1.1 (RC2)

Ryan Blue
DB,

This vote already failed and there isn't an RC3 vote yet. If you backport the changes to branch-2.1, they will make it into the next RC.

rb





--
Ryan Blue
Software Engineer
Netflix

Re: [VOTE] Apache Spark 2.1.1 (RC2)

DB Tsai-3
I backported the fix into both branch-2.1 and branch-2.0. Thanks.

Sincerely,

DB Tsai
----------------------------------------------------------
Web: https://www.dbtsai.com
PGP Key ID: 0x5CED8B896A6BDFA0



