YARN log aggregation for a killed Spark Streaming job
We are seeing an issue with log aggregation under YARN for Spark Streaming jobs. The specific case below is an example: we had to kill a Spark Streaming job, and we would like to see the consumer's logs to find out what happened before we had to kill it.
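For reference, this is roughly the workflow we follow, sketched with the standard YARN CLI (`<applicationId>` is a placeholder for the streaming job's application id):

```shell
# Kill the long-running streaming application
yarn application -kill <applicationId>

# Afterwards, attempt to fetch the aggregated container logs
yarn logs -applicationId <applicationId>
```

It is the second command that we expect to return the container logs once aggregation has completed.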
YARN reports a "log aggregation status" of N/A for the killed Spark Streaming job. YARN seems to handle log aggregation correctly for all other jobs - those that either aborted or terminated normally after finishing.
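For completeness, log aggregation is enabled on our cluster; the relevant `yarn-site.xml` settings look roughly like this (the retention value is illustrative):

```xml
<!-- yarn-site.xml: log aggregation settings -->
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
<property>
  <!-- How long to keep aggregated logs; -1 keeps them forever -->
  <name>yarn.log-aggregation.retain-seconds</name>
  <value>604800</value>
</property>
```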
Any clues as to what may be happening? We are using Spark 1.5.2. Is there a fix for this behavior in a later release?