[VOTE] Apache Spark 2.1.1 (RC2)


Re: [VOTE] Apache Spark 2.1.1 (RC2)

Holden Karau
If it would help I'd be more than happy to look at kicking off the packaging for RC3 since I've been poking around in Jenkins a bit (for SPARK-20216 & friends). (I'd still probably need some guidance from a previous release coordinator, so I understand if that's not actually faster.)

On Mon, Apr 10, 2017 at 6:39 PM, DB Tsai <[hidden email]> wrote:
I backported the fix into both branch-2.1 and branch-2.0. Thanks.

Sincerely,

DB Tsai
----------------------------------------------------------
Web: https://www.dbtsai.com
PGP Key ID: 0x5CED8B896A6BDFA0


On Mon, Apr 10, 2017 at 4:20 PM, Ryan Blue <[hidden email]> wrote:
> DB,
>
> This vote already failed and there isn't an RC3 vote yet. If you backport the
> changes to branch-2.1, they will make it into the next RC.
>
> rb
>
> On Mon, Apr 10, 2017 at 3:55 PM, DB Tsai <[hidden email]> wrote:
>>
>> -1
>>
>> I think that back-porting SPARK-20270 and SPARK-18555 is very important,
>> since there is a critical bug where na.fill corrupts the data in Long columns even
>> when the data isn't null.
>>
>> Thanks.
>>
>>
>> Sincerely,
>>
>> DB Tsai
>> ----------------------------------------------------------
>> Web: https://www.dbtsai.com
>> PGP Key ID: 0x5CED8B896A6BDFA0
>>
>> On Wed, Apr 5, 2017 at 11:12 AM, Holden Karau <[hidden email]>
>> wrote:
>>>
>>> Following up, the issues with missing pypandoc/pandoc on the packaging
>>> machine have been resolved.
>>>
>>> On Tue, Apr 4, 2017 at 3:54 PM, Holden Karau <[hidden email]>
>>> wrote:
>>>>
>>>> See SPARK-20216. If Michael can let me know which machine is being used
>>>> for packaging, I can see if I can install pandoc on it (it should be simple, but
>>>> I know the Jenkins cluster is a bit on the older side).
>>>>
>>>> On Tue, Apr 4, 2017 at 3:06 PM, Holden Karau <[hidden email]>
>>>> wrote:
>>>>>
>>>>> So the fix is installing pandoc on whichever machine is used for
>>>>> packaging. I thought that was generally done on the machine of the person
>>>>> rolling the release, so I wasn't sure it made sense as a JIRA, but from
>>>>> chatting with Josh it sounds like that part might be done on one of the Jenkins
>>>>> workers - is there a fixed one that is used?
>>>>>
>>>>> Regardless I'll file a JIRA for this when I get back in front of my
>>>>> desktop (~1 hour or so).
>>>>>
>>>>> On Tue, Apr 4, 2017 at 2:35 PM Michael Armbrust
>>>>> <[hidden email]> wrote:
>>>>>>
>>>>>> Thanks for the comments everyone.  This vote fails.  Here's how I
>>>>>> think we should proceed:
>>>>>>  - [SPARK-20197] - SparkR CRAN - appears to be resolved
>>>>>>  - [SPARK-XXXX] - Python packaging - Holden, please file a JIRA and
>>>>>> report if this is a regression and if there is an easy fix that we should
>>>>>> wait for.
>>>>>>
>>>>>> For all the other test failures, please take the time to look through
>>>>>> JIRA and open an issue if one does not already exist so that we can triage
>>>>>> if these are just environmental issues.  If I don't hear any objections I'm
>>>>>> going to go ahead with RC3 tomorrow.
>>>>>>
>>>>>> On Sun, Apr 2, 2017 at 1:16 PM, Felix Cheung
>>>>>> <[hidden email]> wrote:
>>>>>>>
>>>>>>> -1
>>>>>>> sorry, found an issue with SparkR CRAN check.
>>>>>>> Opened SPARK-20197 and working on fix.
>>>>>>>
>>>>>>> ________________________________
>>>>>>> From: [hidden email] <[hidden email]> on behalf of
>>>>>>> Holden Karau <[hidden email]>
>>>>>>> Sent: Friday, March 31, 2017 6:25:20 PM
>>>>>>> To: Xiao Li
>>>>>>> Cc: Michael Armbrust; [hidden email]
>>>>>>> Subject: Re: [VOTE] Apache Spark 2.1.1 (RC2)
>>>>>>>
>>>>>>> -1 (non-binding)
>>>>>>>
>>>>>>> Python packaging doesn't seem to have quite worked out (looking at
>>>>>>> PKG-INFO, the description is "Description: !!!!! missing pandoc do not upload
>>>>>>> to PyPI !!!!"). Ideally it would be nice to have this as a version we
>>>>>>> upload to PyPI.
>>>>>>> Building this on my own machine results in a longer description.
>>>>>>>
>>>>>>> My guess is that whichever machine was used to package this is
>>>>>>> missing the pandoc executable (or possibly pypandoc library).
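
For illustration, a hedged sketch of the kind of guard that yields that placeholder description when pandoc or pypandoc is missing; the fallback string comes from the PKG-INFO quote above, but the rest is an assumption rather than the actual setup.py:

    # Sketch only: if pypandoc (or the pandoc binary it wraps) is unavailable,
    # fall back to a placeholder long_description instead of the converted README.
    try:
        import pypandoc
        long_description = pypandoc.convert("README.md", "rst")
    except (ImportError, OSError):
        long_description = "!!!!! missing pandoc do not upload to PyPI !!!!"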
>>>>>>>
>>>>>>> On Fri, Mar 31, 2017 at 3:40 PM, Xiao Li <[hidden email]>
>>>>>>> wrote:
>>>>>>>>
>>>>>>>> +1
>>>>>>>>
>>>>>>>> Xiao
>>>>>>>>
>>>>>>>> 2017-03-30 16:09 GMT-07:00 Michael Armbrust
>>>>>>>> <[hidden email]>:
>>>>>>>>>
>>>>>>>>> Please vote on releasing the following candidate as Apache Spark
>>>>>>>>> version 2.1.1. The vote is open until Sun, April 2nd, 2017 at 16:30 PST and
>>>>>>>>> passes if a majority of at least 3 +1 PMC votes are cast.
>>>>>>>>>
>>>>>>>>> [ ] +1 Release this package as Apache Spark 2.1.1
>>>>>>>>> [ ] -1 Do not release this package because ...
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> To learn more about Apache Spark, please see
>>>>>>>>> http://spark.apache.org/
>>>>>>>>>
>>>>>>>>> The tag to be voted on is v2.1.1-rc2
>>>>>>>>> (02b165dcc2ee5245d1293a375a31660c9d4e1fa6)
>>>>>>>>>
>>>>>>>>> List of JIRA tickets resolved can be found with this filter.
>>>>>>>>>
>>>>>>>>> The release files, including signatures, digests, etc. can be found
>>>>>>>>> at:
>>>>>>>>>
>>>>>>>>> http://home.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-bin/
>>>>>>>>>
>>>>>>>>> Release artifacts are signed with the following key:
>>>>>>>>> https://people.apache.org/keys/committer/pwendell.asc
>>>>>>>>>
>>>>>>>>> The staging repository for this release can be found at:
>>>>>>>>>
>>>>>>>>> https://repository.apache.org/content/repositories/orgapachespark-1227/
>>>>>>>>>
>>>>>>>>> The documentation corresponding to this release can be found at:
>>>>>>>>>
>>>>>>>>> http://people.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-docs/
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> FAQ
>>>>>>>>>
>>>>>>>>> How can I help test this release?
>>>>>>>>>
>>>>>>>>> If you are a Spark user, you can help us test this release by
>>>>>>>>> taking an existing Spark workload and running it on this release candidate,
>>>>>>>>> then reporting any regressions.
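
As a rough illustration only (not part of the official release instructions), the sort of minimal local smoke test a user might run against the candidate; the job, master setting, and assertion are assumptions:

    # Minimal PySpark smoke test against a candidate build; purely illustrative.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.master("local[2]").appName("rc-smoke-test").getOrCreate()
    df = spark.range(1000).selectExpr("id", "id % 7 AS bucket")
    counts = df.groupBy("bucket").count().orderBy("bucket").collect()
    assert sum(r["count"] for r in counts) == 1000  # sanity check on the result
    spark.stop()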
>>>>>>>>>
>>>>>>>>> What should happen to JIRA tickets still targeting 2.1.1?
>>>>>>>>>
>>>>>>>>> Committers should look at those and triage. Extremely important bug
>>>>>>>>> fixes, documentation, and API tweaks that impact compatibility should be
>>>>>>>>> worked on immediately. Everything else please retarget to 2.1.2 or 2.2.0.
>>>>>>>>>
>>>>>>>>> But my bug isn't fixed!??!
>>>>>>>>>
>>>>>>>>> In order to make timely releases, we will typically not hold the
>>>>>>>>> release unless the bug in question is a regression from 2.1.0.
>>>>>>>>>
>>>>>>>>> What happened to RC1?
>>>>>>>>>
>>>>>>>>> There were issues with the release packaging and as a result it was
>>>>>>>>> skipped.
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Cell : 425-233-8271
>>>>>>> Twitter: https://twitter.com/holdenkarau
>>>>>>
>>>>>>
>>>>> --
>>>>> Cell : 425-233-8271
>>>>> Twitter: https://twitter.com/holdenkarau
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Cell : 425-233-8271
>>>> Twitter: https://twitter.com/holdenkarau
>>>
>>>
>>>
>>>
>>> --
>>> Cell : 425-233-8271
>>> Twitter: https://twitter.com/holdenkarau
>>
>>
>
>
>
> --
> Ryan Blue
> Software Engineer
> Netflix

---------------------------------------------------------------------
To unsubscribe e-mail: [hidden email]




--
Cell : 425-233-8271

Re: [VOTE] Apache Spark 2.1.1 (RC2)

Maciej Bryński
https://issues.apache.org/jira/browse/SPARK-12717

This bug has been in Spark since 1.6.0.
Any chance to get this fixed?

M.

2017-04-14 6:39 GMT+02:00 Holden Karau <[hidden email]>:

> If it would help I'd be more than happy to look at kicking off the packaging
> for RC3 since I've been poking around in Jenkins a bit (for SPARK-20216 &
> friends). (I'd still probably need some guidance from a previous release
> coordinator, so I understand if that's not actually faster.)
>



--
Maciek Bryński



Re: [VOTE] Apache Spark 2.1.1 (RC2)

Michael Armbrust
In reply to this post by Holden Karau

On Thu, Apr 13, 2017 at 9:39 PM, Holden Karau <[hidden email]> wrote:
If it would help I'd be more than happy to look at kicking off the packaging for RC3 since I've been poking around in Jenkins a bit (for SPARK-20216 & friends). (I'd still probably need some guidance from a previous release coordinator, so I understand if that's not actually faster.)



Re: [VOTE] Apache Spark 2.1.1 (RC2)

Holden Karau
Sure, let me dig into it :)

On Fri, Apr 14, 2017 at 4:21 PM, Michael Armbrust <[hidden email]> wrote:

On Thu, Apr 13, 2017 at 9:39 PM, Holden Karau <[hidden email]> wrote:
If it would help I'd be more than happy to look at kicking off the packaging for RC3 since I've been poking around in Jenkins a bit (for SPARK-20216 & friends). (I'd still probably need some guidance from a previous release coordinator, so I understand if that's not actually faster.)


--
Cell : 425-233-8271

Re: [VOTE] Apache Spark 2.1.1 (RC2)

Holden Karau
At first glance the error seems similar to one Pedro Rodriguez ran into during 2.0, so I'm looping Pedro in if they happen to have any insight into what the cause was last time.

On Fri, Apr 14, 2017 at 4:40 PM, Holden Karau <[hidden email]> wrote:
Sure, let me dig into it :)


--
Cell : 425-233-8271

Re: [VOTE] Apache Spark 2.1.1 (RC2)

Holden Karau
OK, and with a bit more digging: between RC2 and RC3 we apparently switched which JVM we are building the docs with.


HEAD is now at 2ed19cf... Preparing Spark release v2.1.1-rc3  | HEAD is now at 02b165d... Preparing Spark release v2.1.1-rc2
Checked out Spark git hash 2ed19cf                            | Checked out Spark git hash 02b165d
Building Spark docs                                             Building Spark docs
Configuration file: /home/jenkins/workspace/spark-release-doc   Configuration file: /home/jenkins/workspace/spark-release-doc
Moving to project root and building API docs.                   Moving to project root and building API docs.
Running 'build/sbt -Pkinesis-asl clean compile unidoc' from /   Running 'build/sbt -Pkinesis-asl clean compile unidoc' from /
Using /usr/java/jdk1.8.0_60 as default JAVA_HOME.             | Using /usr/java/jdk1.7.0_79 as default JAVA_HOME.
Note, this will be overridden by -java-home if it is set.       Note, this will be overridden by -java-home if it is set.

There have been some known issues with building the docs with JDK 8. I believe the fixes for those are in mainline and we could cherry-pick them in, but I think it might be more reasonable to just build the 2.1 docs with JDK 7.

What do people think?


On Fri, Apr 14, 2017 at 4:53 PM, Holden Karau <[hidden email]> wrote:
At first glance the error seems similar to one Pedro Rodriguez ran into during 2.0, so I'm looping Pedro in if they happen to have any insight into what the cause was last time.


--
Cell : 425-233-8271

Re: [VOTE] Apache Spark 2.1.1 (RC2)

Ryan Blue
I've hit this before, where Javadoc for 1.8 is much more strict than 1.7.

I think we should definitely use Java 1.7 for the release if we used it for the previous releases in the 2.1 line. We don't want to break Java 1.7 users in a patch release.

rb

On Fri, Apr 14, 2017 at 5:21 PM, Holden Karau <[hidden email]> wrote:
OK, and with a bit more digging: between RC2 and RC3 we apparently switched which JVM we are building the docs with.


HEAD is now at 2ed19cf... Preparing Spark release v2.1.1-rc3  | HEAD is now at 02b165d... Preparing Spark release v2.1.1-rc2
Checked out Spark git hash 2ed19cf                            | Checked out Spark git hash 02b165d
Building Spark docs                                             Building Spark docs
Configuration file: /home/jenkins/workspace/spark-release-doc   Configuration file: /home/jenkins/workspace/spark-release-doc
Moving to project root and building API docs.                   Moving to project root and building API docs.
Running 'build/sbt -Pkinesis-asl clean compile unidoc' from /   Running 'build/sbt -Pkinesis-asl clean compile unidoc' from /
Using /usr/java/jdk1.8.0_60 as default JAVA_HOME.             | Using /usr/java/jdk1.7.0_79 as default JAVA_HOME.
Note, this will be overridden by -java-home if it is set.       Note, this will be overridden by -java-home if it is set.

There have been some known issues with building the docs with JDK 8. I believe the fixes for those are in mainline and we could cherry-pick them in, but I think it might be more reasonable to just build the 2.1 docs with JDK 7.

What do people think?


On Fri, Apr 14, 2017 at 4:53 PM, Holden Karau <[hidden email]> wrote:
At first glance the error seems similar to one Pedro Rodriguez ran into during 2.0, so I'm looping Pedor in if they happen to have any insight into what was the cause last time.

On Fri, Apr 14, 2017 at 4:40 PM, Holden Karau <[hidden email]> wrote:
Sure, let me dig into it :)

On Fri, Apr 14, 2017 at 4:21 PM, Michael Armbrust <[hidden email]> wrote:

On Thu, Apr 13, 2017 at 9:39 PM, Holden Karau <[hidden email]> wrote:
If it would help I'd be more than happy to look at kicking off the packaging for RC3 since I'v been poking around in Jenkins a bit (for SPARK-20216 & friends) (I'd still probably need some guidance from a previous release coordinator so I understand if that's not actually faster).

On Mon, Apr 10, 2017 at 6:39 PM, DB Tsai <[hidden email]> wrote:
I backported the fix into both branch-2.1 and branch-2.0. Thanks.

Sincerely,

DB Tsai
----------------------------------------------------------
Web: https://www.dbtsai.com
PGP Key ID: 0x5CED8B896A6BDFA0


On Mon, Apr 10, 2017 at 4:20 PM, Ryan Blue <[hidden email]> wrote:
> DB,
>
> This vote already failed and there isn't a RC3 vote yet. If you backport the
> changes to branch-2.1 they will make it into the next RC.
>
> rb
>
> On Mon, Apr 10, 2017 at 3:55 PM, DB Tsai <[hidden email]> wrote:
>>
>> -1
>>
>> I think that back-porting SPARK-20270 and SPARK-18555 are very important
>> since it's a critical bug that na.fill will mess up the data in Long even
>> the data isn't null.
>>
>> Thanks.
>>
>>
>> Sincerely,
>>
>> DB Tsai
>> ----------------------------------------------------------
>> Web: https://www.dbtsai.com
>> PGP Key ID: 0x5CED8B896A6BDFA0
>>
>> On Wed, Apr 5, 2017 at 11:12 AM, Holden Karau <[hidden email]>
>> wrote:
>>>
>>> Following up, the issues with missing pypandoc/pandoc on the packaging
>>> machine has been resolved.
>>>
>>> On Tue, Apr 4, 2017 at 3:54 PM, Holden Karau <[hidden email]>
>>> wrote:
>>>>
>>>> See SPARK-20216, if Michael can let me know which machine is being used
>>>> for packaging I can see if I can install pandoc on it (should be simple but
>>>> I know the Jenkins cluster is a bit on the older side).
>>>>
>>>> On Tue, Apr 4, 2017 at 3:06 PM, Holden Karau <[hidden email]>
>>>> wrote:
>>>>>
>>>>> So the fix is installing pandoc on whichever machine is used for
>>>>> packaging. I thought that was generally done on the machine of the person
>>>>> rolling the release so I wasn't sure it made sense as a JIRA, but from
>>>>> chatting with Josh it sounds like that part might be on of the Jenkins
>>>>> workers - is there a fixed one that is used?
>>>>>
>>>>> Regardless I'll file a JIRA for this when I get back in front of my
>>>>> desktop (~1 hour or so).
>>>>>
>>>>> On Tue, Apr 4, 2017 at 2:35 PM Michael Armbrust
>>>>> <[hidden email]> wrote:
>>>>>>
>>>>>> Thanks for the comments everyone.  This vote fails.  Here's how I
>>>>>> think we should proceed:
>>>>>>  - [SPARK-20197] - SparkR CRAN - appears to be resolved
>>>>>>  - [SPARK-XXXX] - Python packaging - Holden, please file a JIRA and
>>>>>> report if this is a regression and if there is an easy fix that we should
>>>>>> wait for.
>>>>>>
>>>>>> For all the other test failures, please take the time to look through
>>>>>> JIRA and open an issue if one does not already exist so that we can triage
>>>>>> if these are just environmental issues.  If I don't hear any objections I'm
>>>>>> going to go ahead with RC3 tomorrow.
>>>>>>
>>>>>> On Sun, Apr 2, 2017 at 1:16 PM, Felix Cheung
>>>>>> <[hidden email]> wrote:
>>>>>>>
>>>>>>> -1
>>>>>>> sorry, found an issue with SparkR CRAN check.
>>>>>>> Opened SPARK-20197 and working on fix.
>>>>>>>
>>>>>>> ________________________________
>>>>>>> From: [hidden email] <[hidden email]> on behalf of
>>>>>>> Holden Karau <[hidden email]>
>>>>>>> Sent: Friday, March 31, 2017 6:25:20 PM
>>>>>>> To: Xiao Li
>>>>>>> Cc: Michael Armbrust; [hidden email]
>>>>>>> Subject: Re: [VOTE] Apache Spark 2.1.1 (RC2)
>>>>>>>
>>>>>>> -1 (non-binding)
>>>>>>>
>>>>>>> Python packaging doesn't seem to have quite worked out (looking at
>>>>>>> PKG-INFO the description is "Description: !!!!! missing pandoc do not upload
>>>>>>> to PyPI !!!!"), ideally it would be nice to have this as a version we
>>>>>>> upgrade to PyPi.
>>>>>>> Building this on my own machine results in a longer description.
>>>>>>>
>>>>>>> My guess is that whichever machine was used to package this is
>>>>>>> missing the pandoc executable (or possibly pypandoc library).
>>>>>>>
>>>>>>> On Fri, Mar 31, 2017 at 3:40 PM, Xiao Li <[hidden email]>
>>>>>>> wrote:
>>>>>>>>
>>>>>>>> +1
>>>>>>>>
>>>>>>>> Xiao
>>>>>>>>
>>>>>>>> 2017-03-30 16:09 GMT-07:00 Michael Armbrust
>>>>>>>> <[hidden email]>:
>>>>>>>>>
>>>>>>>>> Please vote on releasing the following candidate as Apache Spark
>>>>>>>>> version 2.1.0. The vote is open until Sun, April 2nd, 2018 at 16:30 PST and
>>>>>>>>> passes if a majority of at least 3 +1 PMC votes are cast.
>>>>>>>>>
>>>>>>>>> [ ] +1 Release this package as Apache Spark 2.1.1
>>>>>>>>> [ ] -1 Do not release this package because ...
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> To learn more about Apache Spark, please see
>>>>>>>>> http://spark.apache.org/
>>>>>>>>>
>>>>>>>>> The tag to be voted on is v2.1.1-rc2
>>>>>>>>> (02b165dcc2ee5245d1293a375a31660c9d4e1fa6)
>>>>>>>>>
>>>>>>>>> List of JIRA tickets resolved can be found with this filter.
>>>>>>>>>
>>>>>>>>> The release files, including signatures, digests, etc. can be found
>>>>>>>>> at:
>>>>>>>>>
>>>>>>>>> http://home.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-bin/
>>>>>>>>>
>>>>>>>>> Release artifacts are signed with the following key:
>>>>>>>>> https://people.apache.org/keys/committer/pwendell.asc
>>>>>>>>>
>>>>>>>>> The staging repository for this release can be found at:
>>>>>>>>>
>>>>>>>>> https://repository.apache.org/content/repositories/orgapachespark-1227/
>>>>>>>>>
>>>>>>>>> The documentation corresponding to this release can be found at:
>>>>>>>>>
>>>>>>>>> http://people.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-docs/
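(If you are verifying the staged artifacts above, a minimal check along these lines should work; the binary package name is only an example, so substitute whichever artifact you downloaded.)

    # import the signing key listed above, then verify signature and digest
    curl -sL https://people.apache.org/keys/committer/pwendell.asc | gpg --import
    gpg --verify spark-2.1.1-bin-hadoop2.7.tgz.asc spark-2.1.1-bin-hadoop2.7.tgz
    md5sum spark-2.1.1-bin-hadoop2.7.tgz   # compare against the published digest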
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> FAQ
>>>>>>>>>
>>>>>>>>> How can I help test this release?
>>>>>>>>>
>>>>>>>>> If you are a Spark user, you can help us test this release by
>>>>>>>>> taking an existing Spark workload and running it on this release candidate,
>>>>>>>>> then reporting any regressions.
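(Concretely, one lightweight way to do that might look like the following; the artifact name and job path are placeholders.)

    # unpack the staged binary release and point an existing job at it
    tar xzf spark-2.1.1-bin-hadoop2.7.tgz
    cd spark-2.1.1-bin-hadoop2.7
    ./bin/spark-submit --master local[4] /path/to/existing_job.py
    # or sanity-check interactively
    ./bin/spark-shell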
>>>>>>>>>
>>>>>>>>> What should happen to JIRA tickets still targeting 2.1.1?
>>>>>>>>>
>>>>>>>>> Committers should look at those and triage. Extremely important bug
>>>>>>>>> fixes, documentation, and API tweaks that impact compatibility should be
>>>>>>>>> worked on immediately. Everything else please retarget to 2.1.2 or 2.2.0.
>>>>>>>>>
>>>>>>>>> But my bug isn't fixed!??!
>>>>>>>>>
>>>>>>>>> In order to make timely releases, we will typically not hold the
>>>>>>>>> release unless the bug in question is a regression from 2.1.0.
>>>>>>>>>
>>>>>>>>> What happened to RC1?
>>>>>>>>>
>>>>>>>>> There were issues with the release packaging and as a result it was
>>>>>>>>> skipped.
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Cell : 425-233-8271
>>>>>>> Twitter: https://twitter.com/holdenkarau
>>>>>>
>>>>>>
>>>>> --
>>>>> Cell : 425-233-8271
>>>>> Twitter: https://twitter.com/holdenkarau
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Cell : 425-233-8271
>>>> Twitter: https://twitter.com/holdenkarau
>>>
>>>
>>>
>>>
>>> --
>>> Cell : 425-233-8271
>>> Twitter: https://twitter.com/holdenkarau
>>
>>
>
>
>
> --
> Ryan Blue
> Software Engineer
> Netflix

---------------------------------------------------------------------
To unsubscribe e-mail: [hidden email]




--
Cell : 425-233-8271







--
Ryan Blue
Software Engineer
Netflix

Re: [VOTE] Apache Spark 2.1.1 (RC2)

Sean Owen
I don't think this is an example of Java 8 javadoc being more strict; it is not finding classes, not complaining about syntax.
(Hyukjin cleaned up all of the javadoc 8 errors in master, and they're different and much more extensive!)

It wouldn't necessarily break anything to build with Java 8 because it'll still emit Java 7 bytecode, etc.

That said, it may very well be that it is somehow due to Java 7 vs 8, and is probably best to stick to 1.7 in the release build. 
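(One quick way to sanity-check that on any staged jar, if it helps; the jar and class names below are only examples.)

    # class file major version 51 = Java 7, 52 = Java 8
    unzip -o spark-core_2.11-2.1.1.jar org/apache/spark/SparkContext.class
    javap -verbose org/apache/spark/SparkContext.class | grep "major version"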

On Sat, Apr 15, 2017 at 1:38 AM Ryan Blue <[hidden email]> wrote:
I've hit this before, where Javadoc for 1.8 is much more strict than 1.7.

I think we should definitely use Java 1.7 for the release if we used it for the previous releases in the 2.1 line. We don't want to break java 1.7 users in a patch release.

rb

On Fri, Apr 14, 2017 at 5:21 PM, Holden Karau <[hidden email]> wrote:
Ok and with a bit more digging between RC2 and RC3 we apparently switched which JVM we are building the docs with.


HEAD is now at 2ed19cf... Preparing Spark release v2.1.1-rc3  | HEAD is now at 02b165d... Preparing Spark release v2.1.1-rc2
Checked out Spark git hash 2ed19cf                            | Checked out Spark git hash 02b165d
Building Spark docs                                             Building Spark docs
Configuration file: /home/jenkins/workspace/spark-release-doc   Configuration file: /home/jenkins/workspace/spark-release-doc
Moving to project root and building API docs.                   Moving to project root and building API docs.
Running 'build/sbt -Pkinesis-asl clean compile unidoc' from /   Running 'build/sbt -Pkinesis-asl clean compile unidoc' from /
Using /usr/java/jdk1.8.0_60 as default JAVA_HOME.             | Using /usr/java/jdk1.7.0_79 as default JAVA_HOME.
Note, this will be overridden by -java-home if it is set.       Note, this will be overridden by -java-home if it is set.

There have been some known issues with building the docs with JDK8 and I believe those fixes are in mainline, and we could cherry pick these changes in -- but I think it might be more reasonable to just build the 2.1 docs with JDK7.

What do people think?
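(If we do go that route, one way to pin the docs build to the JDK7 install the RC2 build used, per the log above, would be roughly the following; the actual Jenkins job configuration may differ.)

    # force the unidoc build onto JDK7 instead of the worker's default JDK8
    export JAVA_HOME=/usr/java/jdk1.7.0_79
    build/sbt -java-home /usr/java/jdk1.7.0_79 -Pkinesis-asl clean compile unidoc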


Re: [VOTE] Apache Spark 2.1.1 (RC2)

Holden Karau
I think this is Java 8 vs Java 7: if you look at the previous build you see a lot of the same missing classes, but tagged as "warning" rather than "error". All in all I think it makes sense to stick to JDK7 for the legacy builds that have previously been built with it.

If there is consensus on that I'm happy to update the env variables for the RC3 build to set a JDK7 JAVA_HOME (but I'd want to double check with someone about which jobs need to be updated to make sure I don't miss any).

On Sat, Apr 15, 2017 at 2:33 AM, Sean Owen <[hidden email]> wrote:
I don't think this is an example of Java 8 javadoc being more strict; it is not finding classes, not complaining about syntax.
(Hyukjin cleaned up all of the javadoc 8 errors in master, and they're different and much more extensive!)

It wouldn't necessarily break anything to build with Java 8 because it'll still emit Java 7 bytecode, etc.

That said, it may very well be that it is somehow due to Java 7 vs 8, and is probably best to stick to 1.7 in the release build. 

On Sat, Apr 15, 2017 at 1:38 AM Ryan Blue <[hidden email]> wrote:
I've hit this before, where Javadoc for 1.8 is much more strict than 1.7.

I think we should definitely use Java 1.7 for the release if we used it for the previous releases in the 2.1 line. We don't want to break java 1.7 users in a patch release.

rb

On Fri, Apr 14, 2017 at 5:21 PM, Holden Karau <[hidden email]> wrote:
Ok and with a bit more digging between RC2 and RC3 we apparently switched which JVM we are building the docs with.


HEAD is now at 2ed19cf... Preparing Spark release v2.1.1-rc3  | HEAD is now at 02b165d... Preparing Spark release v2.1.1-rc2
Checked out Spark git hash 2ed19cf                            | Checked out Spark git hash 02b165d
Building Spark docs                                             Building Spark docs
Configuration file: /home/jenkins/workspace/spark-release-doc   Configuration file: /home/jenkins/workspace/spark-release-doc
Moving to project root and building API docs.                   Moving to project root and building API docs.
Running 'build/sbt -Pkinesis-asl clean compile unidoc' from /   Running 'build/sbt -Pkinesis-asl clean compile unidoc' from /
Using /usr/java/jdk1.8.0_60 as default JAVA_HOME.             | Using /usr/java/jdk1.7.0_79 as default JAVA_HOME.
Note, this will be overridden by -java-home if it is set.       Note, this will be overridden by -java-home if it is set.

There have been some known issues with building the docs with JDK8 and I believe those fixes are in mainline, and we could cherry pick these changes in -- but I think it might be more reasonable to just build the 2.1 docs with JDK7.

What do people think?




--
Cell : 425-233-8271

Re: [VOTE] Apache Spark 2.1.1 (RC2)

Michael Armbrust
In reply to this post by Michael Armbrust
In case it wasn't obvious by the appearance of RC3, this vote failed.

On Thu, Mar 30, 2017 at 4:09 PM, Michael Armbrust <[hidden email]> wrote:
Please vote on releasing the following candidate as Apache Spark version 2.1.1. The vote is open until Sun, April 2nd, 2017 at 16:30 PST and passes if a majority of at least 3 +1 PMC votes are cast.

[ ] +1 Release this package as Apache Spark 2.1.1
[ ] -1 Do not release this package because ...


To learn more about Apache Spark, please see http://spark.apache.org/

The tag to be voted on is v2.1.1-rc2 (02b165dcc2ee5245d1293a375a31660c9d4e1fa6)

List of JIRA tickets resolved can be found with this filter.

The release files, including signatures, digests, etc. can be found at:
http://home.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-bin/

Release artifacts are signed with the following key:
https://people.apache.org/keys/committer/pwendell.asc

The staging repository for this release can be found at:
https://repository.apache.org/content/repositories/orgapachespark-1227/

The documentation corresponding to this release can be found at:
http://people.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-docs/


FAQ

How can I help test this release?

If you are a Spark user, you can help us test this release by taking an existing Spark workload and running it on this release candidate, then reporting any regressions.

What should happen to JIRA tickets still targeting 2.1.1?

Committers should look at those and triage. Extremely important bug fixes, documentation, and API tweaks that impact compatibility should be worked on immediately. Everything else please retarget to 2.1.2 or 2.2.0.

But my bug isn't fixed!??!

In order to make timely releases, we will typically not hold the release unless the bug in question is a regression from 2.1.0.

What happened to RC1?

There were issues with the release packaging and as a result it was skipped.
