Why use EMPTY_DATA_SCHEMA when creating a datasource table


JackyLee
Hi everyone,

    I have some questions about creating a datasource table.
    In HiveExternalCatalog.createDataSourceTable,
newSparkSQLSpecificMetastoreTable replaces the table schema with
EMPTY_DATA_SCHEMA plus table.partitionSchema.
    So, why do we use EMPTY_DATA_SCHEMA? Why not declare the schema in
some other way?
    Many datasource tables have no partitionSchema, so will their
schemas simply be replaced with EMPTY_DATA_SCHEMA?
    Even if Spark itself can parse the table correctly, what happens if
a user views the table information from the Hive side?

Can anyone help me?
Thanks.
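For context, here is a rough sketch of the mechanism as I understand it (the property names follow HiveExternalCatalog, but the chunk size and helper functions below are illustrative assumptions, not Spark's actual code): the real Spark SQL schema is serialized to JSON and stored in chunked table properties, while the metastore itself only sees a placeholder data schema, since Hive cannot faithfully represent every Spark SQL type. Spark then rebuilds the schema from the properties when it reads the table back.

```python
# Hypothetical sketch of the schema-in-properties trick used for Spark
# datasource tables. The property keys mirror what HiveExternalCatalog
# writes; the chunk size and helpers are assumptions for illustration.
import json

SCHEMA_KEY = "spark.sql.sources.schema"
CHUNK = 4000  # long JSON is split to stay under metastore value limits


def schema_to_properties(schema_json: str, chunk: int = CHUNK) -> dict:
    """Split the schema JSON into numbered table-property parts."""
    parts = [schema_json[i:i + chunk] for i in range(0, len(schema_json), chunk)]
    props = {f"{SCHEMA_KEY}.numParts": str(len(parts))}
    for i, part in enumerate(parts):
        props[f"{SCHEMA_KEY}.part.{i}"] = part
    return props


def properties_to_schema(props: dict) -> dict:
    """Reassemble the parts and parse the schema JSON back out."""
    num_parts = int(props[f"{SCHEMA_KEY}.numParts"])
    raw = "".join(props[f"{SCHEMA_KEY}.part.{i}"] for i in range(num_parts))
    return json.loads(raw)


# Round-trip a toy schema through the property encoding.
schema = {"type": "struct", "fields": [{"name": "id", "type": "long"}]}
props = schema_to_properties(json.dumps(schema))
assert properties_to_schema(props) == schema
```

This would also explain the Hive-side behavior asked about above: a user inspecting the table from Hive sees only the placeholder columns, because the real schema lives in the table properties that only Spark interprets.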



--
Sent from: http://apache-spark-developers-list.1001551.n3.nabble.com/
