Automatically mount the user-specific configurations on a K8s cluster, using a ConfigMap.

Prashant Sharma
Hi All,

This is regarding the improvement SPARK-30985 (PR: https://github.com/apache/spark/pull/27735). Has it caught anyone's attention yet?

Basically, SPARK_CONF_DIR hosts all the user-specific configuration files (see the sketch after this list), e.g.

  1. spark-defaults.conf - all the Spark properties.
  2. log4j.properties - logger configuration.
  3. core-site.xml - Hadoop-related configuration.
  4. fairscheduler.xml - Spark's fair scheduling policy at the job level.
  5. metrics.properties - Spark metrics.
  6. Any other user-specific, library-, or framework-specific configuration files.
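
For illustration, a hand-rolled equivalent of what I'm proposing would be to pack these files into a ConfigMap and mount it at SPARK_CONF_DIR in the driver pod. The sketch below shows the idea; the names (spark-conf, spark-driver), the mount path, and the sample property values are made up for this example, not what the PR actually generates:

# Illustrative only: a ConfigMap holding the SPARK_CONF_DIR files,
# mounted read-only at the driver's SPARK_CONF_DIR.
apiVersion: v1
kind: ConfigMap
metadata:
  name: spark-conf                  # hypothetical name
data:
  spark-defaults.conf: |
    spark.executor.instances  2
    spark.eventLog.enabled    true
  log4j.properties: |
    log4j.rootCategory=INFO, console
---
apiVersion: v1
kind: Pod
metadata:
  name: spark-driver                # hypothetical driver pod
spec:
  containers:
    - name: spark-kubernetes-driver
      image: apache/spark:latest    # any Spark image would do here
      env:
        - name: SPARK_CONF_DIR      # point Spark at the mounted conf
          value: /opt/spark/conf
      volumeMounts:
        - name: spark-conf-volume
          mountPath: /opt/spark/conf
          readOnly: true
  volumes:
    - name: spark-conf-volume
      configMap:
        name: spark-conf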

I wanted to know: how do users of Spark on K8s propagate these configurations to the Spark driver in cluster mode? And what if some configuration needs to be picked up by the executors as well (e.g. Hadoop's core-site.xml, etc.)?
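
As far as I can tell, today this has to be wired up by hand, for example by creating a ConfigMap from the local conf directory and pointing spark-submit at pod templates that mount it. Again a sketch, assuming the existing pod-template support; the ConfigMap name and template file names are made up:

# Manually create a ConfigMap from the local SPARK_CONF_DIR
# ("my-spark-conf" is an illustrative name):
kubectl create configmap my-spark-conf --from-file="$SPARK_CONF_DIR"

# Then submit with pod templates that declare the configMap volume
# and mount (driver-template.yaml / executor-template.yaml are
# assumed files, written along the lines of the YAML sketch above):
spark-submit \
  --master k8s://https://<api-server>:6443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.driver.podTemplateFile=driver-template.yaml \
  --conf spark.kubernetes.executor.podTemplateFile=executor-template.yaml \
  --class org.apache.spark.examples.SparkPi \
  local:///opt/spark/examples/jars/spark-examples.jar

This works, but it has to be repeated every time the conf directory changes, which is exactly the friction the PR tries to remove.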

Please take a look at: https://github.com/apache/spark/pull/27735

Thanks,

Prashant Sharma