[ANNOUNCE] Announcing Apache Spark 3.0.0-preview


[ANNOUNCE] Announcing Apache Spark 3.0.0-preview

Jiang Xingbo
Hi all,

To enable wide-scale community testing of the upcoming Spark 3.0 release, the Apache Spark community has posted a preview release of Spark 3.0. This preview is not a stable release in terms of either API or functionality, but it is meant to give the community early access to try the code that will become Spark 3.0. If you would like to test the release, please download it, and send feedback using either the mailing lists or JIRA.

Spark 3.0 adds many exciting new features, including Dynamic Partition Pruning, Adaptive Query Execution, Accelerator-aware Scheduling, Data Source API with Catalog Supports, Vectorization in SparkR, support for Hadoop 3/JDK 11/Scala 2.12, and many more. For a full list of major features and changes in Spark 3.0.0-preview, please check the thread: http://apache-spark-developers-list.1001551.n3.nabble.com/Spark-3-0-preview-release-feature-list-and-major-changes-td28050.html
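If you want a quick way to exercise a couple of the new optimizer features while testing the preview, a minimal sketch in Scala might look like the following. This is just one way to kick the tires against a local build; the config keys and the expected version string are assumptions based on the preview defaults, not part of the official announcement.

    import org.apache.spark.sql.SparkSession

    // Build a local session against the 3.0.0-preview jars and turn on two of
    // the headline optimizer features. Config names are assumptions based on
    // the preview build; double-check them if you are on a different snapshot.
    val spark = SparkSession.builder()
      .appName("spark-3.0.0-preview-smoke-test")
      .master("local[*]")
      .config("spark.sql.adaptive.enabled", "true")                          // Adaptive Query Execution
      .config("spark.sql.optimizer.dynamicPartitionPruning.enabled", "true") // Dynamic Partition Pruning
      .getOrCreate()

    // Sanity check that the preview build is actually on the classpath.
    println(spark.version) // expect something like 3.0.0-preview

    // A small aggregation so AQE has a shuffle to re-plan at runtime.
    spark.range(0, 1000000)
      .selectExpr("id % 10 AS key", "id AS value")
      .groupBy("key")
      .count()
      .show()

    spark.stop()

Comparing the explain() output of the aggregation with the flags on and off is an easy way to see whether the adaptive plan is kicking in on your build.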

We'd like to thank our contributors and users for their work and early feedback on this release. This release would not have been possible without you.

To download Spark 3.0.0-preview, head over to the download page: https://archive.apache.org/dist/spark/spark-3.0.0-preview

Thanks,

Xingbo

Re: [ANNOUNCE] Announcing Apache Spark 3.0.0-preview

Nicholas Chammas
> Data Source API with Catalog Supports

Where can we read more about this? The linked Nabble thread doesn't mention the word "Catalog".

On Thu, Nov 7, 2019 at 5:53 PM Xingbo Jiang <[hidden email]> wrote: