[DISCUSS][K8S] Local dependencies with Kubernetes


[DISCUSS][K8S] Local dependencies with Kubernetes

Rob Vesse

Folks

 

One of the big limitations of the current Spark on K8S implementation is that it isn’t possible to use local dependencies (SPARK-23153 [1]), i.e. code, JARs, data etc. that only live on the submission client.  This basically leaves end users with several options for how to actually run their Spark jobs under K8S:

 

  1. Store local dependencies on some external distributed file system e.g. HDFS
  2. Build custom images with their local dependencies
  3. Mount local dependencies into volumes that are mounted by the K8S pods

 

In all cases the onus is on the end user to do the prep work.  Option 1 is unfortunately rare in the environments where we’re looking to deploy Spark, and Option 2 tends to be a non-starter as many of our customers whitelist approved images, i.e. custom images are not permitted.

 

Option 3 is more workable but still requires users to provide a bunch of extra config options for simple cases, or to rely upon the pending pod template feature for complex cases.
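For reference, a rough sketch of the kind of configuration Option 3 involves, using the spark.kubernetes.*.volumes.* properties and shown here as SparkConf settings (the same keys can be passed via spark-submit --conf); the volume name and paths are purely hypothetical and should be checked against the docs for your Spark version:

    import org.apache.spark.SparkConf

    // Hypothetical hostPath volume "deps" exposing /opt/job-deps from the node
    // into both the driver and executor pods
    val conf = new SparkConf()
      .set("spark.kubernetes.driver.volumes.hostPath.deps.mount.path", "/opt/job-deps")
      .set("spark.kubernetes.driver.volumes.hostPath.deps.options.path", "/opt/job-deps")
      .set("spark.kubernetes.executor.volumes.hostPath.deps.mount.path", "/opt/job-deps")
      .set("spark.kubernetes.executor.volumes.hostPath.deps.options.path", "/opt/job-deps")
      // dependencies can then be referenced as local:// (already in the pod) URIs
      .set("spark.jars", "local:///opt/job-deps/my-app.jar")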

 

Ideally this would all just be handled automatically for users in the way that all other resource managers do.  The K8S backend even did this at one point in the downstream fork, but after a long discussion [2] this got dropped in favour of using standard Spark mechanisms, i.e. spark-submit.  Unfortunately this apparently was never followed through on, as it doesn’t work with master as of today.  Moreover I am unclear how this would work in the case of Spark on K8S cluster mode where the driver itself is inside a pod, since the spark-submit mechanism is based upon copying from the driver’s filesystem to the executors via a file server on the driver; if the driver is inside a pod it won’t be able to see local files on the submission client.  I think this may work out of the box with client mode but I haven’t dug into that enough to verify yet.

 

I would like to start work on addressing this problem but to be honest I am unclear where to start with this.  It seems using the standard spark-submit mechanism is the way to go but I’m not sure how to get around the driver pod issue.  I would appreciate any pointers from folks who’ve looked at this previously on how and where to start on this.

 

Cheers,

 

Rob

 

[1] https://issues.apache.org/jira/browse/SPARK-23153

[2] https://lists.apache.org/thread.html/82b4ae9a2eb5ddeb3f7240ebf154f06f19b830f8b3120038e5d687a1@%3Cdev.spark.apache.org%3E


Re: [DISCUSS][K8S] Local dependencies with Kubernetes

Stavros Kontopoulos-3
Hi Rob,

Interesting topic, and it affects UX a lot. I provided my thoughts in the related JIRA.

Best,
Stavros






Re: [DISCUSS][K8S] Local dependencies with Kubernetes

Marcelo Vanzin-2
In reply to this post by Rob Vesse
On Fri, Oct 5, 2018 at 7:54 AM Rob Vesse <[hidden email]> wrote:
> Ideally this would all just be handled automatically for users in the way that all other resource managers do

I think you're giving other resource managers too much credit. In
cluster mode, only YARN really distributes local dependencies, because
YARN has that feature (its distributed cache) and Spark just uses it.

Standalone doesn't do it (see SPARK-4160) and I don't remember seeing
anything similar on the Mesos side.

There are things that could be done; e.g. if you have HDFS you could
do a restricted version of what YARN does (upload files to HDFS, and
change the "spark.jars" and "spark.files" URLs to point to HDFS
instead). Or you could turn the submission client into a file server
that the cluster-mode driver downloads files from - although that
requires connectivity from the driver back to the client.

Neither is great, but better than not having that feature.
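For concreteness, a very rough sketch of the first idea (upload to a reachable HDFS and rewrite the URLs); the staging directory and helper here are purely illustrative:

    import java.net.URI
    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}

    // Copy each client-local dependency into a staging directory on HDFS and
    // return the rewritten URIs that spark.jars / spark.files could then point at
    def stageToHdfs(localPaths: Seq[String], stagingDir: String): Seq[String] = {
      val fs = FileSystem.get(URI.create(stagingDir), new Configuration())
      fs.mkdirs(new Path(stagingDir))
      localPaths.map { local =>
        val dest = new Path(stagingDir, new Path(local).getName)
        fs.copyFromLocalFile(false, true, new Path(local), dest) // keep source, overwrite dest
        dest.toUri.toString
      }
    }

    // e.g. conf.set("spark.jars",
    //   stageToHdfs(localJars, "hdfs://namenode:8020/user/me/.sparkStaging/app-123").mkString(","))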

Just to be clear: in client mode things work right? (Although I'm not
really familiar with how client mode works in k8s - never tried it.)

--
Marcelo



Re: [DISCUSS][K8S] Local dependencies with Kubernetes

liyinan926
Agreed with Marcelo that this is not a problem unique to Spark on K8S. For a lot of organizations, hosting dependencies on HDFS seems to be the choice. One option the Spark Operator offers is to automatically upload application dependencies from the submission client machine to a user-specified S3 or GCS bucket and substitute the local dependencies with the remote ones. But regardless of which option is used to stage local dependencies, this generally only works for small ones like JARs or small config/data files.

Yinan   



Re: [DISCUSS][K8S] Local dependencies with Kubernetes

Stavros Kontopoulos-3
In reply to this post by Marcelo Vanzin-2
@Marcelo is correct. Mesos does not have something similar; only YARN does, thanks to its distributed cache.
I have described most of the above in the JIRA; there are also some other options.

Best,
Stavros






Re: [DISCUSS][K8S] Local dependencies with Kubernetes

liyinan926
> Just to be clear: in client mode things work right? (Although I'm not
really familiar with how client mode works in k8s - never tried it.)

If the driver runs on the submission client machine, yes, it should just work. If the driver runs in a pod, however, it faces the same problem as in cluster mode.
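For illustration, a minimal sketch of that first case, i.e. the driver running directly on the submission client in client mode against a K8S cluster (the master URL, image and file paths are placeholders, and the executor pods need to be able to connect back to this driver JVM):

    import org.apache.spark.sql.SparkSession

    // Driver runs in this JVM on the submission client; only executors run in pods
    val spark = SparkSession.builder()
      .master("k8s://https://my-apiserver:6443")                         // placeholder API server URL
      .appName("client-mode-example")
      .config("spark.kubernetes.container.image", "myrepo/spark:latest") // placeholder executor image
      .config("spark.jars", "/home/me/extra-lib.jar")                    // client-local JAR
      .config("spark.files", "/home/me/lookup.csv")                      // client-local data file
      .getOrCreate()
    // executors fetch these files from the file server running inside this driver JVM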

Yinan






Re: [DISCUSS][K8S] Local dependencies with Kubernetes

Felix Cheung
Jars and libraries that are only accessible locally at the driver seem fairly limited? Don’t you want the same on all executors?


 






Re: [DISCUSS][K8S] Local dependencies with Kubernetes

Rob Vesse

Folks, thanks for all the great input. Responding to various points raised:

 

Marcelo/Yinan/Felix –

 

Yes, client mode will work.  The main JAR will be automatically distributed, and --jars/--files specified dependencies are also distributed, though for --files user code needs to use the appropriate Spark APIs to resolve the actual path, i.e. SparkFiles.get().
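For example (a tiny illustration; the file name is hypothetical and sc is an already-running SparkContext):

    import org.apache.spark.{SparkContext, SparkFiles}

    // Given a dependency submitted with "--files data.csv", resolve the
    // task-local copy on the executor rather than hard-coding the path the
    // file had on the submission client
    def firstLineOnExecutor(sc: SparkContext): String =
      sc.parallelize(Seq(1), 1).map { _ =>
        scala.io.Source.fromFile(SparkFiles.get("data.csv")).getLines().next()
      }.first()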

 

However client mode can be awkward if you want to mix spark-submit distribution with mounting dependencies via volumes since you may need to ensure that dependencies appear at the same path both on the local submission client and when mounted into the executors.  This mainly applies to the case where user code does not use SparkFiles.get() and simply tries to access the path directly.

 

Marcelo/Stavros –

 

Yes, I did give the other resource managers too much credit.  From my past experience with Mesos and Standalone I had thought this wasn’t an issue, but going back and looking at what we did for both of those, it appears we were entirely reliant on a shared file system (whether HDFS, NFS or other POSIX-compliant filesystems, e.g. Lustre).

 

Since connectivity back to the client is a potential stumbling block for cluster mode I wonder if it would be better to think in reverse, i.e. rather than having the driver pull from the client, have the client push to the driver pod?

 

You can do this manually yourself via kubectl cp, so it should be possible to do this programmatically since it looks like this is just a tar piped into a kubectl exec.  This would keep the relevant logic in the Kubernetes-specific client, which may or may not be desirable depending on whether we’re looking to just fix this for K8S or more generally.  Of course there is probably a fair bit of complexity in making this work, but does that sound like something worth exploring?
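A rough sketch of what the push could look like if we simply shelled out to kubectl (a real implementation would presumably drive the Kubernetes client library directly rather than forking a process; the pod, namespace and directory names are placeholders):

    import scala.sys.process._

    // Copy a client-local file into a waiting driver pod using "kubectl cp",
    // which under the hood is a tar stream piped into "kubectl exec"
    def pushToDriverPod(localPath: String,
                        namespace: String,
                        driverPod: String,
                        remoteDir: String = "/opt/spark/work-dir"): Unit = {
      val fileName = new java.io.File(localPath).getName
      val cmd = Seq("kubectl", "cp", localPath, s"$namespace/$driverPod:$remoteDir/$fileName")
      val exitCode = cmd.!  // runs kubectl and returns its exit code
      require(exitCode == 0, s"kubectl cp failed with exit code $exitCode")
    }

    // e.g. pushToDriverPod("/home/me/my-app.jar", "spark-jobs", "my-app-driver")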

 

I hadn’t really considered the HA aspect; a first step would be to get the basics working and then look at HA.  Although if the above theoretical approach is practical, that could simply be part of restarting the driver.

 

Rob

 

 




 


Re: [DISCUSS][K8S] Local dependencies with Kubernetes

Marcelo Vanzin-2
On Mon, Oct 8, 2018 at 6:36 AM Rob Vesse <[hidden email]> wrote:
> Since connectivity back to the client is a potential stumbling block for cluster mode I wonder if it would be better to think in reverse i.e. rather than having the driver pull from the client have the client push to the driver pod?
>
> You can do this manually yourself via kubectl cp so it should be possible to programmatically do this since it looks like this is just a tar piped into a kubectl exec.   This would keep the relevant logic in the Kubernetes specific client which may/may not be desirable depending on whether we’re looking to just fix this for K8S or more generally.  Of course there is probably a fair bit of complexity in making this work but does that sound like something worth exploring?

That sounds like a good solution especially if there's a programmatic
API for it, instead of having to fork a sub-process to upload the
files.

>  I hadn’t really considered the HA aspect

When you say HA here what do you mean exactly? I don't really expect
two drivers for the same app running at the same time, so my first
guess is you mean "reattempts" just like YARN supports - re-running
the driver if the first one fails?

That can be tricky without some shared storage mechanism, because in
cluster mode the submission client doesn't need to stay alive after
the application starts. Or at least it doesn't with other cluster
managers.


--
Marcelo



Re: [DISCUSS][K8S] Local dependencies with Kubernetes

liyinan926
In reply to this post by Rob Vesse
> You can do this manually yourself via kubectl cp so it should be possible to programmatically do this since it looks like this is just a tar piped into a kubectl exec.   This would keep the relevant logic in the Kubernetes specific client which may/may not be desirable depending on whether we’re looking to just fix this for K8S or more generally.  Of course there is probably a fair bit of complexity in making this work but does that sound like something worth exploring?

Yes, kubectl cp is able to copy files from your local machine into a container in a pod. However, the pod must be up and running for this to work. So if you want to use this to upload dependencies to the driver pod, the driver pod must already be up and running, and by then you may not even have a chance to upload the dependencies before the driver needs them.




 


Re: [DISCUSS][K8S] Local dependencies with Kubernetes

Rob Vesse

Well yes.  However the submission client is already able to monitor the driver pod status, so it can see when the pod is up and running.  And couldn’t we potentially modify the K8S entry points, e.g. KubernetesClientApplication, that run inside the driver pods to wait for dependencies to be uploaded?
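To make that concrete, one purely illustrative way the driver-side entry point could block is to poll for a sentinel file that the submission client writes last (the marker path and timeouts below are hypothetical):

    import java.nio.file.{Files, Paths}

    // Hypothetical helper for the driver-pod entry point: block until the
    // submission client has finished pushing dependencies and dropped a
    // marker file (e.g. via kubectl cp), or give up after a timeout
    def awaitUploadedDependencies(markerPath: String = "/opt/spark/work-dir/.deps-ready",
                                  timeoutMs: Long = 120000L,
                                  pollMs: Long = 1000L): Unit = {
      val deadline = System.currentTimeMillis() + timeoutMs
      while (!Files.exists(Paths.get(markerPath))) {
        if (System.currentTimeMillis() > deadline) {
          throw new IllegalStateException(s"Timed out waiting for $markerPath")
        }
        Thread.sleep(pollMs)
      }
    }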

 

I guess at this stage I am just throwing ideas out there and trying to figure out what’s practical/reasonable

 

Rob

 



Re: [DISCUSS][K8S] Local dependencies with Kubernetes

Matt Cheah

Relying on kubectl exec may not be the best solution because clusters with locked-down security will not grant users permission to execute arbitrary code in pods. I can’t think of a great alternative right now, but I wanted to bring this to our attention for the time being.

 

-Matt Cheah

 


