Spark Worker Questions

Nicholas Marion

Hello,

Hope this is the right place for this kind of question:

We have been deep diving into the Spark Worker and how it spawns executors. We notice that, upon killing the worker through ./sbin/stop-slave.sh, the driver continues to request executors over and over until the Worker process is actually down, because an executor is still in the process of being terminated at that point. We were hoping to indicate to the master that the worker is coming down so that it does not try to spawn new executors, even using the available WorkerState.DECOMMISSIONED, but that state never seems to be used.
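For reference, the sequence we use to reproduce this (the standard standalone-mode scripts from $SPARK_HOME; the master host name here is ours):

```
# Start a standalone master and a single worker (our 1-master/1-slave setup)
./sbin/start-master.sh
./sbin/start-slave.sh spark://master-host:7077

# With an application running, kill the worker:
./sbin/stop-slave.sh
# -> the driver keeps requesting executors until the Worker process is
#    actually gone, since a terminating executor is still registered
```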

The questions are:
Was there an original intention for WorkerState.DECOMMISSIONED that was either removed or never implemented?

Do you know the code path that is taken when a Worker is killed through stop-slave.sh? We thought it might be onStop, but that seems to be used only for tests. Our thought was to add a message from the Worker to the Master saying: I'm DECOMMISSIONED, do not schedule executors against me.
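To make the second idea concrete, here is a minimal sketch of what we had in mind. The DecommissionWorker message and handleDecommission handler are hypothetical names of ours, not existing Spark API; we are only assuming the Master keeps some worker-id-to-WorkerInfo map and a mutable worker state, as it does today:

```scala
// Hypothetical message, modeled on the existing DeployMessages in
// org.apache.spark.deploy -- NOT part of current Spark.
case class DecommissionWorker(workerId: String)

// Sketch of a Master-side handler: mark the worker DECOMMISSIONED so the
// scheduler skips it, rather than letting the driver keep requesting
// executors against a worker that is on its way down.
def handleDecommission(workerId: String): Unit = {
  idToWorker.get(workerId).foreach { worker =>
    // WorkerState.DECOMMISSIONED exists in the enum today but is never set
    worker.state = WorkerState.DECOMMISSIONED
    // scheduling would then filter on state == ALIVE, so no new executors
    // land here; executors already terminating are left to drain
  }
}
```

The Worker would send DecommissionWorker to the Master early in its shutdown path (e.g. from the stop-slave.sh-triggered shutdown), before its executors begin terminating.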

Note:
We only have 1 master, 1 slave.



Regards,

NICHOLAS T. MARION
IBM Open Data Analytics for z/OS Service Team Lead

Phone: 1-845-433-5010 | Tie-Line: 293-5010
E-mail:
[hidden email]
Find me on:
LinkedIn: http://www.linkedin.com/in/nicholasmarion
IBM

2455 South Rd
Poughkeepsie, New York 12601-5400
United States
IBM Redbooks Silver Author | Data Science Foundations - Level 1