My Spark SQL jobs keep failing with the error "Container exited with a non-zero exit code 143". I know this is caused by memory usage exceeding the limit set by spark.yarn.executor.memoryOverhead. As shown below, a memory allocation request failed at 18/11/08 17:36:05, and the executor then RECEIVED SIGNAL TERM. Is there any way for the Spark executor to avoid the fate of being destroyed?
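For reference, what I'm planning to try next is simply raising the overhead headroom. A minimal sketch of the config (the app name and the memory values here are placeholders I made up, not what my actual job uses):

```scala
import org.apache.spark.sql.SparkSession

// Sketch: give YARN containers more off-heap headroom so the executor
// is less likely to breach its container limit and get SIGTERM'd.
// The values below are guesses, not tuned numbers.
val spark = SparkSession.builder()
  .appName("overhead-demo")                              // placeholder name
  .config("spark.executor.memory", "4g")                 // on-heap executor memory
  .config("spark.yarn.executor.memoryOverhead", "1024")  // extra headroom, in MiB
  .getOrCreate()
```

In practice these would usually be passed on the command line instead, e.g. via `spark-submit --conf spark.yarn.executor.memoryOverhead=1024`, so that executor sizing is fixed before containers are requested. But is raising the overhead the only option, or can the executor survive some other way?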