Build timeout -- continuous-integration/appveyor/pr — AppVeyor build failed


Build timeout -- continuous-integration/appveyor/pr — AppVeyor build failed

ifilonenko
Hi dev,

I recently updated an ongoing PR [https://github.com/apache/spark/pull/21092] with a merge that pulled in a lot of commits from master, and I got the following error:

continuous-integration/appveyor/pr — AppVeyor build failed

due to:

Build execution time has reached the maximum allowed time for your plan (90 minutes).

seen here: https://ci.appveyor.com/project/ApacheSoftwareFoundation/spark/build/2300-master

As this is the first time I am seeing this, I am wondering whether it is related to the large merge and, if so, whether the timeout can be increased.

Thanks!

Best,
Ilan Filonenko

Re: Build timeout -- continuous-integration/appveyor/pr — AppVeyor build failed

Hyukjin Kwon
From a very quick look, I believe that's just an occasional network issue in AppVeyor. For example, in this case the build took about 26 minutes, and downloading the jars seems to have taken much longer than usual.

FYI, the build usually takes 35-40 minutes and the R tests 25-30 minutes, so a run usually ends up around 1 hour 5 minutes.
I will take another look at reducing the time if the typical run approaches 1 hour 30 minutes (the current AppVeyor limit).

The timeout has already been increased from 1 hour to 1 hour 30 minutes, and AppVeyor does not appear willing to increase it any further.
I contacted them a few times and requested this manually.

For the best result, I believe we usually just rebase rather than merging in the commits, as mentioned in the contribution guide.
The test failure in the PR should be ignorable if it is not directly related to SparkR.


Thanks.





Re: Build timeout -- continuous-integration/appveyor/pr — AppVeyor build failed

Holden Karau
On Sun, May 13, 2018 at 9:43 PM Hyukjin Kwon <[hidden email]> wrote:
> For the best result, I believe we usually just rebase rather than merging in the commits, as mentioned in the contribution guide.

I don't recall this being something we actually go that far in encouraging. The guide says rebasing is one of the ways folks can keep their PRs up to date, but no actual preference is stated. I tend to see PRs from different folks doing either rebases or merges, since we squash commits anyway.

I know that for some developers, merge commits tend to be less effort for keeping a branch up to date, and provided the diff is still clear and the resulting merge is clean, I don't see an issue.

> The test failure in the PR should be ignorable if it is not directly related to SparkR.



Re: Build timeout -- continuous-integration/appveyor/pr — AppVeyor build failed

Hyukjin Kwon
Yup, I am not saying it's required, but it might be better, since that's what the guide says and rebasing seems to be the more frequent practice, at least from what I see.
Also, merging commits usually triggers the AppVeyor build if the merge includes some changes in R.
It's fine to merge the commits, but rebasing is better to save AppVeyor resources and prevent such confusion.

