Re: FYI: The evolution on `CHAR` type behavior


Re: FYI: The evolution on `CHAR` type behavior

rxin
I looked up our usage logs (sorry I can't share this publicly) and trim has at least four orders of magnitude higher usage than char.


On Mon, Mar 16, 2020 at 5:27 PM, Dongjoon Hyun <[hidden email]> wrote:
Thank you, Stephen and Reynold.

To Reynold.

I see the following a little differently.

      > CHAR is an undocumented data type without clearly defined semantics.

Let me describe it from an Apache Spark user's point of view.

Apache Spark started to offer `HiveContext` (and the `hql`/`hiveql` functions) in Apache Spark 1.x without much documentation. In addition, there is still an ongoing effort to keep it alive in the 3.0.0 era.

       https://issues.apache.org/jira/browse/SPARK-31088
       Add back HiveContext and createExternalTable

Historically, we tried to help many SQL-based customers migrate their workloads from Apache Hive to Apache Spark through `HiveContext`.

Although Apache Spark didn't have good documentation about the inconsistent behavior among its data sources, Apache Hive has been providing documentation, and many customers rely on that behavior.

      - https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types

At that time, in on-prem Hadoop clusters from well-known vendors, many existing huge tables had been created by Apache Hive, not Apache Spark, and Apache Spark was used to boost SQL performance with its *caching*. This was the case because Apache Spark was added to the Hadoop vendors' products later than Apache Hive.

Until the turning point at Apache Spark 2.0, we tried to catch up on features so that, at least for Hive tables, Apache Hive and Apache Spark would be consistent, because the two SQL engines share the same tables.

For the following, technically, while Apache Hive hasn't changed its existing behavior in this area, Apache Spark has inevitably evolved by moving away from its original behaviors one by one.

      >  the value is already fucked up

The following is the change log (the sketch after the list shows the corresponding fallback settings).

      - Switching the default value of `convertMetastoreParquet` (at Apache Spark 1.2)
      - Switching the default value of `convertMetastoreOrc` (at Apache Spark 2.4)
      - Switching `CREATE TABLE` itself, from `TEXT` tables to `PARQUET` tables (at Apache Spark 3.0)
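
As a quick reference, these are the settings that restore the Hive behavior at each step (a minimal sketch; the property names are quoted verbatim from the summary email below):

      SET spark.sql.hive.convertMetastoreParquet=false;            -- Hive behavior for Parquet tables
      SET spark.sql.hive.convertMetastoreOrc=false;                -- Hive behavior for ORC tables
      SET spark.sql.legacy.createHiveTableByDefault.enabled=true;  -- Hive behavior for plain CREATE TABLE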

To sum up, this has been a well-known issue in the community and among customers.

Bests,
Dongjoon.

On Mon, Mar 16, 2020 at 5:24 PM Stephen Coy <[hidden email]> wrote:
Hi there,

I’m kind of new around here, but I have had experience with all of the so-called “big iron” databases such as Oracle, IBM DB2 and Microsoft SQL Server, as well as PostgreSQL.

They all support the notion of “ANSI padding” for CHAR columns - which means that such columns are always space-padded - and they default to having this enabled (for ANSI compliance).

MySQL also supports it, but it defaults to leaving it disabled for historical reasons not unlike what we have here.
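
(A minimal illustration of what ANSI padding means, with hypothetical table and column names; on an ANSI-padding database the stored value comes back space-padded to the declared width:)

    CREATE TABLE t(c CHAR(3));
    INSERT INTO t VALUES ('a');
    SELECT length(c) FROM t;   -- 3 with ANSI padding, 2 without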

In my opinion we should push toward standards compliance where possible and then document where it cannot work.

If users don’t like the padding on CHAR columns then they should change to VARCHAR - I believe that was its purpose in the first place, and it does not dictate any sort of “padding”.

I can see why you might “ban” the use of CHAR columns where they cannot be consistently supported, but VARCHAR is a different animal and I would expect it to work consistently everywhere.


Cheers,

Steve C

On 17 Mar 2020, at 10:01 am, Dongjoon Hyun <[hidden email]> wrote:

Hi, Reynold.
(And +Michael Armbrust)

If you think so, do you think it's okay that we change the return value silently? If so, I'm wondering why we reverted the `TRIM` functions.

> Are we sure "not padding" is "incorrect"?

Bests,
Dongjoon.


On Sun, Mar 15, 2020 at 11:15 PM Gourav Sengupta <[hidden email]> wrote:
Hi,

100% agree with Reynold.


Regards,
Gourav Sengupta

On Mon, Mar 16, 2020 at 3:31 AM Reynold Xin <[hidden email]> wrote:
Are we sure "not padding" is "incorrect"?

I don't know whether ANSI SQL actually requires padding, but plenty of databases don't actually pad.

https://docs.snowflake.net/manuals/sql-reference/data-types-text.html : "Snowflake currently deviates from common CHAR semantics in that strings shorter than the maximum length are not space-padded at the end."

On Sun, Mar 15, 2020 at 7:02 PM, Dongjoon Hyun <[hidden email]> wrote:
Hi, Reynold.

Please see the following for the context.

"Revert SPARK-30098 Use default datasource as provider for CREATE TABLE syntax"

I raised the above issue according to the new rubric, and the ban was the proposed alternative to reduce the potential for issues.

Please give us your opinion since it's still a PR.

Bests,
Dongjoon.

On Sat, Mar 14, 2020 at 17:54 Reynold Xin <[hidden email]> wrote:
I don’t understand this change. Wouldn’t this “ban” confuse the hell out of both new and old users?

For old users, their old code that was working for char(3) would now stop working. 

For new users, depending on the underlying metastore, char(3) is either supported but different from ANSI SQL (which is not that big of a deal if we explain it) or not supported at all.

On Sat, Mar 14, 2020 at 3:51 PM Dongjoon Hyun <[hidden email]> wrote:
Hi, All.

Apache Spark has suffered from a known consistency issue in `CHAR` type behavior across its usages and configurations. However, the evolution has been gradually moving toward being consistent inside Apache Spark, because we don't have `CHAR` officially. The following is a summary.

With 1.6.x ~ 2.3.x, `STORED AS PARQUET` produces a different result, as follows.
(`spark.sql.hive.convertMetastoreParquet=false` provides a fallback to Hive behavior.)

    spark-sql> CREATE TABLE t1(a CHAR(3));
    spark-sql> CREATE TABLE t2(a CHAR(3)) STORED AS ORC;
    spark-sql> CREATE TABLE t3(a CHAR(3)) STORED AS PARQUET;

    spark-sql> INSERT INTO TABLE t1 SELECT 'a ';
    spark-sql> INSERT INTO TABLE t2 SELECT 'a ';
    spark-sql> INSERT INTO TABLE t3 SELECT 'a ';

    spark-sql> SELECT a, length(a) FROM t1;
    a   3
    spark-sql> SELECT a, length(a) FROM t2;
    a   3
    spark-sql> SELECT a, length(a) FROM t3;
    a 2

Since 2.4.0, `STORED AS ORC` became consistent.
(`spark.sql.hive.convertMetastoreOrc=false` provides a fallback to Hive behavior.)

    spark-sql> SELECT a, length(a) FROM t1;
    a   3
    spark-sql> SELECT a, length(a) FROM t2;
    a 2
    spark-sql> SELECT a, length(a) FROM t3;
    a 2

Since 3.0.0-preview2, `CREATE TABLE` (without `STORED AS` clause) became consistent.
(`spark.sql.legacy.createHiveTableByDefault.enabled=true` provides a fallback to Hive behavior.)

    spark-sql> SELECT a, length(a) FROM t1;
    a 2
    spark-sql> SELECT a, length(a) FROM t2;
    a 2
    spark-sql> SELECT a, length(a) FROM t3;
    a 2

In addition, in 3.0.0, SPARK-31147 aims to ban the `CHAR`/`VARCHAR` types in the following syntax, to be safe.

    CREATE TABLE t(a CHAR(3));
    https://github.com/apache/spark/pull/27902
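
For illustration, under the proposed ban the first statement below would be rejected and the second would be the recommended equivalent (a hypothetical sketch; the exact error behavior is defined by the PR above):

    spark-sql> CREATE TABLE t(a CHAR(3));   -- rejected once SPARK-31147 lands
    spark-sql> CREATE TABLE t(a STRING);    -- recommended: Spark's native string type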

This email is sent out to inform you, based on the new policy we voted on.
The recommendation is to always use Apache Spark's native type `String`.

Bests,
Dongjoon.

References:
1. "CHAR implementation?", 2017/09/15
     https://lists.apache.org/thread.html/96b004331d9762e356053b5c8c97e953e398e489d15e1b49e775702f%40%3Cdev.spark.apache.org%3E
2. "FYI: SPARK-30098 Use default datasource as provider for CREATE TABLE syntax", 2019/12/06
    https://lists.apache.org/thread.html/493f88c10169680191791f9f6962fd16cd0ffa3b06726e92ed04cbe1%40%3Cdev.spark.apache.org%3E



Re: FYI: The evolution on `CHAR` type behavior

rxin
BTW I'm not opposing us sticking to SQL standard (I'm in general for it). I was merely pointing out that if we deviate away from SQL standard in any way we are considered "wrong" or "incorrect". That argument itself is flawed when plenty of other popular database systems also deviate away from the standard on this specific behavior.

Re: FYI: The evolution on `CHAR` type behavior

rxin
−User

char barely showed up (honestly negligible). I was comparing select vs select.



On Mon, Mar 16, 2020 at 5:40 PM, Dongjoon Hyun <[hidden email]> wrote:
Ur, are you comparing the number of SELECT statements with TRIM to the number of CREATE statements with `CHAR`?

> I looked up our usage logs (sorry I can't share this publicly) and trim has at least four orders of magnitude higher usage than char.

We need to discuss more about what to do. This thread is exactly what I expected. :)

> BTW I'm not opposing us sticking to SQL standard (I'm in general for it). I was merely pointing out that if we deviate away from SQL standard in any way we are considered "wrong" or "incorrect". That argument itself is flawed when plenty of other popular database systems also deviate away from the standard on this specific behavior.

Bests,
Dongjoon.


Re: FYI: The evolution on `CHAR` type behavior

Dongjoon Hyun-2
Thank you for sharing and confirming.

We had better consider all the heterogeneous customers in the world. And I also have experience with non-negligible cases on-prem.

Bests,
Dongjoon.


Re: FYI: The evolution on `CHAR` type behavior

rxin
For sure.

There's another reason I feel char is not that important, and it's more important to be internally consistent (e.g. all data sources supporting it with the same behavior, vs one data source doing one thing and another doing the other). char was created at a time when CPUs were slow and storage was expensive, and being able to pack things nicely at fixed length was highly useful. The fact that it was padded was initially done for performance, not for the padding itself. A lot has changed since char was invented, and with modern technologies (columnar, dictionary encoding, etc.) there is little reason to use a char data type for anything. As a matter of fact, Spark internally converts the char type to string to work with.


I see two solutions really.

1. We require padding, and ban all uses of char when it is not properly padded. This would ban char in all the native data sources, which are the primary way people use Spark, leaving char support only for tables going through Hive serdes, which are slow to begin with. It is basically Dongjoon and Wenchen's suggestion. This turns char support into a compatibility feature only for some Hive tables that cannot be converted into Spark native data sources. It has confusing end-user behavior, because depending on whether a Hive table is converted into a Spark native data source, we might or might not support the char type.

An extension to the above is to introduce padding for the char type across the board and make char a first-class data type. It is a lot of work to introduce another data type, especially one that has virtually no usage and whose usage will likely continue to decline in the future (just reasoning from first principles, based on why char was introduced in the first place).

Now I'm assuming it's a lot of work to do char properly. But if it is not the case (e.g. just a simple rule to insert padding at planning time), then maybe it's worth doing it this way. I'm totally OK with this too.
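
(For illustration only: such a planning-time rule could conceptually rewrite reads of a `CHAR(3)` column into an `rpad` call, as sketched below. `rpad` is an existing Spark SQL function, but this rewrite is a hypothetical sketch, not a concrete proposal.)

    spark-sql> SELECT rpad(a, 3, ' ') AS a, length(rpad(a, 3, ' ')) FROM t3;
    a   3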

What I'd oppose is to just ban char for the native data sources and not have a plan to address this problem systematically.


2. Just forget about padding, like what Snowflake and MySQL have done. Document that char(x) is just an alias for string. And then move on. Almost no work needs to be done...

Re: FYI: The evolution on `CHAR` type behavior

Stephen Coy
I don’t think I can recall any usage of type CHAR in any situation.

Really, its only use (on any traditional SQL database) would be when you *want* a fixed-width character column that has been right-padded with spaces.
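
(A hypothetical illustration of that classic fixed-width use case, assuming padding semantics; the table and column names are made up:)

    CREATE TABLE countries(code CHAR(2), name STRING);
    -- every `code` value would be stored at exactly 2 characters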


On 17 Mar 2020, at 12:13 pm, Reynold Xin <[hidden email]> wrote:

For sure.

There's another reason I feel char is not that important and it's more important to be internally consistent (e.g. all data sources support it with the same behavior, vs one data sources do one behavior and another do the other). char was created at a time when cpu was slow and storage was expensive, and being able to pack things nicely at fixed length was highly useful. The fact that it was padded was initially done for performance, not for the padding itself. A lot has changed since char was invented, and with modern technologies (columnar, dictionary encoding, etc) there is little reason to use a char data type for anything. As a matter of fact, Spark internally converts char type to string to work with.


I see two solutions really.

1. We require padding, and ban all uses of char when it is not properly padded. This would ban all the native data sources, which are the primarily way people are using Spark. This leaves only char support for tables going through Hive serdes, which are slow to begin with. It is basically Dongjoon and Wenchen's suggestion. This turns char support into a compatibility feature only for some Hive tables that cannot be converted into Spark native data sources. This has confusing end-user behavior because depending on whether that Hive table is converted into Spark native data sources, we might or might not support char type.

An extension to the above is to introduce padding for char type across the board, and make char type a first class data type. There are a lot of work to introduce another data type, especially for one that has virtually no usage and its usage will likely continue to decline in the future (just reason from first principle based on the reason char was introduced in the first place).

Now I'm assuming it's a lot of work to do char properly. But if it is not the case (e.g. just a simple rule to insert padding at planning time), then maybe it's worth doing it this way. I'm totally OK with this too.

What I'd oppose is to just ban char for the native data sources, and do not have a plan to address this problem systematically.


2. Just forget about padding, like what Snowflake and MySQL have done. Document that char(x) is just an alias for string. And then move on. Almost no work needs to be done...







On Mon, Mar 16, 2020 at 5:54 PM, Dongjoon Hyun <[hidden email]> wrote:
Thank you for sharing and confirming.

We had better consider all heterogeneous customers in the world. And, I also have experiences with the non-negligible cases in on-prem.

Bests,
Dongjoon.

On Mon, Mar 16, 2020 at 5:42 PM Reynold Xin <[hidden email]> wrote:
−User

char barely showed up (honestly negligible). I was comparing select vs select.



On Mon, Mar 16, 2020 at 5:40 PM, Dongjoon Hyun <[hidden email]> wrote:
Ur, are you comparing the number of SELECT statement with TRIM and CREATE statements with `CHAR`?

> I looked up our usage logs (sorry I can't share this publicly) and trim has at least four orders of magnitude higher usage than char.

We need to discuss more about what to do. This thread is what I expected exactly. :)

> BTW I'm not opposing us sticking to SQL standard (I'm in general for it). I was merely pointing out that if we deviate away from SQL standard in any way we are considered "wrong" or "incorrect". That argument itself is flawed when plenty of other popular database systems also deviate away from the standard on this specific behavior.

Bests,
Dongjoon.

On Mon, Mar 16, 2020 at 5:35 PM Reynold Xin <[hidden email]> wrote:
BTW I'm not opposing us sticking to SQL standard (I'm in general for it). I was merely pointing out that if we deviate away from SQL standard in any way we are considered "wrong" or "incorrect". That argument itself is flawed when plenty of other popular database systems also deviate away from the standard on this specific behavior.




On Mon, Mar 16, 2020 at 5:29 PM, Reynold Xin <[hidden email]> wrote:
I looked up our usage logs (sorry I can't share this publicly) and trim has at least four orders of magnitude higher usage than char.



Re: FYI: The evolution on `CHAR` type behavior

cloud0fan
I agree that Spark can define the semantics of CHAR(x) differently from the SQL standard (no padding) and ask the data sources to follow it. But the problem is that some data sources may not be able to skip padding, such as Hive serde tables.

On the other hand, it's easier to require padding for CHAR(x). Even if some data sources don't support padding, Spark can simply do the padding at read time using the `rpad` function. However, if CHAR(x) is rarely used, maybe we should just ban it and keep it only for Hive compatibility, to save ourselves the work.
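
For illustration, a minimal sketch of read-time padding with `rpad`, assuming a hypothetical Parquet table `p` whose file stores the unpadded value (a real implementation would add this projection automatically during planning, rather than asking the user to write it):

    spark-sql> CREATE TABLE p(a CHAR(3)) USING parquet;
    spark-sql> INSERT INTO TABLE p SELECT 'a ';    -- stored unpadded, length 2
    spark-sql> SELECT rpad(a, 3, ' ') AS a, length(rpad(a, 3, ' ')) FROM p;
    a   3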

VARCHAR(x) is a different story, as it's a commonly used data type in databases. It has a length limit that can help the backing engine make better decisions when dealing with it. Currently Spark just treats VARCHAR(x) as the string type, which works fine in most cases, but different data sources may behave differently during writes. For example, the pgsql JDBC data source fails the write if the length limit is exceeded, Hive serde tables simply truncate characters that exceed the length limit, and the Parquet data source writes whatever string it gets.
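
As a concrete sketch of these write-side differences (the tables and outputs below are hypothetical and only mirror the behaviors described above, assuming the Hive serde write path is used for `w1`):

    spark-sql> CREATE TABLE w1(a VARCHAR(3)) STORED AS ORC;   -- Hive serde: truncates on write
    spark-sql> CREATE TABLE w2(a VARCHAR(3)) USING parquet;   -- native Parquet: writes the string as-is
    spark-sql> INSERT INTO TABLE w1 SELECT 'abcd';
    spark-sql> INSERT INTO TABLE w2 SELECT 'abcd';
    spark-sql> SELECT a, length(a) FROM w1;
    abc 3
    spark-sql> SELECT a, length(a) FROM w2;
    abcd    4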

We can just document that the underlying data source may or may not enforce the length limit of VARCHAR(x). Or we can make VARCHAR(x) a first-class data type, which requires many more changes (type coercion, type casts, etc.).

Before we make a final decision, I think it's reasonable to ban CHAR/VARCHAR in non-Hive-serde tables in 3.0, so that we don't introduce silent result changes here.

Any ideas are welcome!

Thanks,
Wenchen


Re: FYI: The evolution on `CHAR` type behavior

Maryann Xue
It would be super weird for a SQL engine not to support VARCHAR. Banning CHAR is probably fine, as its semantics are genuinely confusing.
We can issue a warning when parsing VARCHAR with a length limit and suggest using String instead.
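
For instance, such a warning could look like this (the message wording is entirely hypothetical):

    spark-sql> CREATE TABLE t(v VARCHAR(10));
    WARN: VARCHAR(10) is treated as STRING; the length limit may not be enforced. Consider using STRING instead.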


Re: FYI: The evolution on `CHAR` type behavior

Michael Armbrust
In reply to this post by rxin
> What I'd oppose is to just ban char for the native data sources, and do not have a plan to address this problem systematically.

+1

> Just forget about padding, like what Snowflake and MySQL have done. Document that char(x) is just an alias for string. And then move on. Almost no work needs to be done...

+1

Re: FYI: The evolution on `CHAR` type behavior

cloud0fan
OK let me put a proposal here:

1. Permanently ban CHAR for native data source tables, and only keep it for Hive compatibility.
It's OK to forget about padding, as Snowflake and MySQL have done. But it's hard for Spark to require consistent CHAR-type behavior across all data sources. Since the CHAR type is not that useful nowadays, it seems OK to just ban it. Another option is to document that the padding of the CHAR type is data source dependent, but it's a bit weird to leave this inconsistency in Spark.
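
A sketch of what the ban could look like in practice (the error wording is hypothetical):

    spark-sql> CREATE TABLE t1(a CHAR(3)) USING parquet;
    Error: CHAR type is not allowed for native data source tables; use STRING instead
    spark-sql> CREATE TABLE t2(a CHAR(3)) STORED AS ORC;   -- Hive-compatibility path would still accept it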

2. Leave VARCHAR unchanged in 3.0
The VARCHAR type is so widely used in databases that it would be weird if Spark didn't support it. VARCHAR(x) behaves exactly like Spark's StringType as long as the length limit is not hit, and I'm fine with temporarily leaving this flaw in 3.0; users may hit behavior changes when string values exceed the VARCHAR length limit.

3. Finalize the VARCHAR behavior in 3.1
For now I have 2 ideas:
a) Make VARCHAR(x) a first-class data type. This means Spark data sources should support VARCHAR, and CREATE TABLE should fail if a column is of VARCHAR type and the underlying data source doesn't support it (e.g. JSON/CSV); see the sketch after this list. Type casts, type coercion, table insertion, etc. would need to be updated as well.
b) Simply document that the underlying data source may or may not enforce the length limit of VARCHAR(x).
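
Under option (a), the fail-fast behavior might look like this (hypothetical error wording):

    spark-sql> CREATE TABLE t(v VARCHAR(10)) USING csv;
    Error: data source 'csv' does not support the VARCHAR type; use STRING instead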

Please let me know if you have different ideas.

Thanks,
Wenchen


Re: FYI: The evolution on `CHAR` type behavior

Dongjoon Hyun-2
+1 for Wenchen's suggestion.

I believe that the differences and their effects have been communicated widely and discussed in many ways, twice.

First, this was shared last December.

    "FYI: SPARK-30098 Use default datasource as provider for CREATE TABLE syntax", 2019/12/06
    https://lists.apache.org/thread.html/493f88c10169680191791f9f6962fd16cd0ffa3b06726e92ed04cbe1%40%3Cdev.spark.apache.org%3E

Second (this time, in this thread), it has been discussed according to the new community rubric.

    - https://spark.apache.org/versioning-policy.html (Section: "Considerations When Breaking APIs")

Thank you all.

Bests,
Dongjoon.


Re: FYI: The evolution on `CHAR` type behavior

rxin
You are joking when you said "informed widely and discussed in many ways twice", right?


(Yes it talked about changing the default data source provider, but that's just one of the ways we are exposing this char/varchar issue).




Re: FYI: The evolution on `CHAR` type behavior

Dongjoon Hyun-2
Technically, I have been suffering from (1), the `CREATE TABLE` differences, for a long time (since 2017). So I had made a wrong assumption about the implication of (2), "FYI: SPARK-30098 Use default datasource as provider for CREATE TABLE syntax", Reynold. I admit that. You may not feel the same way; however, it was a lot to me. Also, switching `convertMetastoreOrc` at 2.4 was a big change to me, although there is no difference for Parquet-only users.

Dongjoon.

> References:
> 1. "CHAR implementation?", 2017/09/15
>      https://lists.apache.org/thread.html/96b004331d9762e356053b5c8c97e953e398e489d15e1b49e775702f%40%3Cdev.spark.apache.org%3E
> 2. "FYI: SPARK-30098 Use default datasource as provider for CREATE TABLE syntax", 2019/12/06
>    https://lists.apache.org/thread.html/493f88c10169680191791f9f6962fd16cd0ffa3b06726e92ed04cbe1%40%3Cdev.spark.apache.org%3E




Re: FYI: The evolution on `CHAR` type behavior

rxin
I agree it sucks. We started with a decision that might have made sense back in 2013 (let's use Hive as the default source and, guess what, pick the slowest possible serde by default). We have been paying off that debt ever since.

Thanks for bringing this thread up though. We don't have a clear solution yet, but at least it made a lot of people aware of the issues.


