Graceful shutdown of slave hosts in a Spark Standalone cluster
We use EC2 to run batch Spark jobs that filter and process our data, and sometimes we need to replace a host or deploy a new fleet. We run the driver in cluster mode, so if the host goes down mid-job the impact is severe. We also use some native code to ensure our table is modified by only one customer at a time, and that prevents us from using supervise mode.
I was wondering whether the Standalone cluster has a way to shut down slaves gracefully: let the currently running driver and executors finish while accepting no new work from the master. Then we could implement a sidecar that, once every running driver and executor has finished, allows the host to shut down.
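To make the sidecar idea concrete, here is a minimal sketch of its "wait until idle" half. It assumes (based on how Spark Standalone launches work, which you should verify with `jps` on your own workers) that executors run as JVMs whose main class is `CoarseGrainedExecutorBackend` and cluster-mode drivers as `DriverWrapper`; the first step of refusing new work from the master is not shown here.

```python
import subprocess
import time

# Assumed main-class names for Spark Standalone work; check `jps -l`
# output on a real worker before relying on these.
SPARK_WORK_CLASSES = ("CoarseGrainedExecutorBackend", "DriverWrapper")


def has_running_spark_work(process_lines):
    """Return True if any `jps -l` line shows a running executor or driver."""
    return any(cls in line
               for line in process_lines
               for cls in SPARK_WORK_CLASSES)


def list_java_processes():
    # `jps -l` lists JVM pids with fully qualified main classes;
    # it requires a JDK (not just a JRE) on the host.
    out = subprocess.run(["jps", "-l"], capture_output=True, text=True)
    return out.stdout.splitlines()


def wait_until_idle(poll_seconds=30):
    """Block until no executor or driver JVMs remain on this host."""
    while has_running_spark_work(list_java_processes()):
        time.sleep(poll_seconds)


if __name__ == "__main__":
    # Precondition (not shown): the worker no longer accepts new work
    # from the master. Then drain and signal that shutdown is safe.
    wait_until_idle()
    print("no running drivers or executors; safe to shut down")
```

The same check could instead poll the worker web UI if it exposes machine-readable state, but scanning JVM process names keeps the sidecar independent of Spark's HTTP endpoints.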