What is d3kbcqa49mib13.cloudfront.net ?


Sean Owen
Not a big deal, but Mark noticed that this test now downloads Spark artifacts from the same 'direct download' link available on the downloads page:

https://github.com/apache/spark/blob/master/sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveExternalCatalogVersionsSuite.scala#L53

https://d3kbcqa49mib13.cloudfront.net/spark-$version-bin-hadoop2.7.tgz

I don't know of any particular problem with this, which is a parallel download option in addition to the Apache mirrors. It's also the default.

Does anyone know what this bucket is and if there's a strong reason we can't just use mirrors?

Re: What is d3kbcqa49mib13.cloudfront.net ?

Shivaram Venkataraman
The bucket is served by CloudFront, a CDN that's part of AWS. There was a
bunch of discussion about this back in 2013:
https://lists.apache.org/thread.html/9a72ff7ce913dd85a6b112b1b2de536dcda74b28b050f70646aba0ac@1380147885@%3Cdev.spark.apache.org%3E

Shivaram


---------------------------------------------------------------------
To unsubscribe e-mail: [hidden email]


Re: What is d3kbcqa49mib13.cloudfront.net ?

Sean Owen
Ah right, I know it's an S3 bucket; thanks for the context. I imagine the reasons it was set up no longer apply (you can now get a direct mirror download link), so it could probably be retired, though there's no big rush. I wasn't clear from that thread whether it was agreed that the non-Apache link should be the default, though.


Re: What is d3kbcqa49mib13.cloudfront.net ?

Mark Hamstra
In reply to this post by Shivaram Venkataraman
Yeah, but that discussion and use case are a bit different: providing an additional route to download final, released, approved artifacts that were built only from acceptable artifacts and sources, vs. building and checking prior to release using something that does not come from an Apache mirror. This new use case puts us in the position of approving Spark artifacts that weren't built entirely from canonical resources in presumably secure and monitored repositories. Incorporating something that is not fully trusted or approved into the process of building something we are then going to approve as trusted is different from the prior use of CloudFront.





Re: What is d3kbcqa49mib13.cloudfront.net ?

Shivaram Venkataraman
Mark, I agree with your point on the risks of using CloudFront while
building Spark. I was only trying to provide background on when we
started using CloudFront.

Personally, I don't have enough context about the test case in
question (e.g., why are we downloading Spark in a test case?).

Thanks
Shivaram




Re: What is d3kbcqa49mib13.cloudfront.net ?

cloud0fan
That test case checks the backward compatibility of `HiveExternalCatalog`: it downloads official Spark releases, creates tables with them, and then reads those tables with the current Spark.

About the download link, I just picked it from the Spark website, and this link is the default one when you choose "direct download". Do we have a better choice?
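For reference, a hedged sketch (in Python rather than the suite's Scala) of how the two URL choices differ. The CloudFront host is taken from this thread; the `spark/spark-$version/` path layout passed to Apache's `closer.lua` mirror-resolution endpoint is an assumption:

```python
# Sketch of the two download options discussed in this thread. The CloudFront
# host is the "direct download" default from the Spark website; closer.lua is
# Apache's mirror-resolution service. The dist path layout is an assumption.

CLOUDFRONT = "https://d3kbcqa49mib13.cloudfront.net"
CLOSER = "https://www.apache.org/dyn/closer.lua"

def direct_url(version: str) -> str:
    """CloudFront 'direct download' link, as the test uses today."""
    return f"{CLOUDFRONT}/spark-{version}-bin-hadoop2.7.tgz"

def mirror_url(version: str) -> str:
    """Same artifact, resolved through an Apache mirror instead."""
    path = f"spark/spark-{version}/spark-{version}-bin-hadoop2.7.tgz"
    return f"{CLOSER}?path={path}&action=download"
```

A mirror-based test could build the second URL instead, falling back to the Apache archive for old releases that have rotated off the mirrors.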





Re: What is d3kbcqa49mib13.cloudfront.net ?

Mark Hamstra
The problem is that it's not really an "official" download link, but rather just a supplemental convenience. While that may be ok when distributing artifacts, it's more of a problem when actually building and testing artifacts. In the latter case, the download should really only be from an Apache mirror.
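Whichever host ends up serving the bytes, one way to narrow the trust gap is to check the downloaded tarball against the SHA-512 checksum Apache publishes alongside each release. A minimal sketch of the comparison itself (that a `.sha512` file sits next to the release on the Apache dist server is an assumption here):

```python
import hashlib

def sha512_ok(data: bytes, expected_hex: str) -> bool:
    """True if the downloaded bytes match the published SHA-512 checksum.

    expected_hex would come from the checksum file Apache publishes next to
    each release (location assumed); strip/lower tolerates formatting noise.
    """
    return hashlib.sha512(data).hexdigest() == expected_hex.strip().lower()
```

This doesn't make a non-Apache host canonical, but it does mean a tampered artifact from any host would fail the check.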






Re: What is d3kbcqa49mib13.cloudfront.net ?

Sean Owen
I think the download could use the Apache mirror, yeah. I don't know of a reason that it must, though; what's good enough for releases is good enough for this purpose. People might not like the big download in the tests; if that really came up as an issue, we could find ways to cache it better locally. I brought it up more as a question than a problem to solve.
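A hedged sketch of the local-caching idea (again Python rather than the suite's Scala; the cache directory and the URL pattern are illustrative assumptions):

```python
import os
import urllib.request

def fetch_spark(version: str,
                cache_dir: str = os.path.expanduser("~/.cache/spark-dist")) -> str:
    """Download a Spark release tarball once and reuse it across test runs."""
    name = f"spark-{version}-bin-hadoop2.7.tgz"
    dest = os.path.join(cache_dir, name)
    if os.path.exists(dest):  # cache hit: skip the multi-hundred-MB download
        return dest
    os.makedirs(cache_dir, exist_ok=True)
    url = f"https://d3kbcqa49mib13.cloudfront.net/{name}"
    urllib.request.urlretrieve(url, dest + ".part")
    os.replace(dest + ".part", dest)  # rename last so partial files never count
    return dest
```

On a CI worker the cache directory would survive across builds, so each referenced release is fetched at most once per machine.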






Re: What is d3kbcqa49mib13.cloudfront.net ?

Shixiong(Ryan) Zhu
Can we just create those tables once locally using official Spark versions and commit them? Then the unit tests can just read those files and won't need to download Spark.







Re: What is d3kbcqa49mib13.cloudfront.net ?

cloud0fan
I'm afraid that would keep people away from contributing to this test suite, as they would need to download Spark at different versions to create the testing tables...

