Self join


Self join

Marco Gaido
Hi all,

I'd like to bring to the attention of more people a problem which has been around for a long time: self joins. Currently, we have many problems with them. This has been reported to the community several times and seems to affect many people, but so far no solution has been accepted.

I created a PR some time ago to address the problem (https://github.com/apache/spark/pull/21449), but Wenchen mentioned that he had tried to fix this problem too, and that so far no attempt had been successful because there is no clear semantics (https://github.com/apache/spark/pull/21449#issuecomment-393554552).

So I'd like to propose that we discuss here the best approach for tackling this issue. I think it would be great to fix it for 3.0.0, so that if we decide to introduce breaking changes in the design, we can do so.

Thoughts on this?

Thanks,
Marco

Re: Self join

Ryan Blue
Marco,

Thanks for starting the discussion! I think it would be great to have a clear description of the problem and a proposed solution. Do you have anything like that? It would help bring the rest of us up to speed without reading different pull requests.

Thanks!

rb

On Tue, Dec 11, 2018 at 3:54 AM Marco Gaido <[hidden email]> wrote:


--
Ryan Blue
Software Engineer
Netflix

Re: Self join

Xiao Li-2
This is a long-standing and well-known issue; multiple Spark committers (including me) have tried to resolve it since Spark 1.x. We might need to revisit the Column APIs and see whether we need to deprecate some of them in Spark 3.0.

Thanks,

Xiao

On Tue, Dec 11, 2018 at 8:49 AM Ryan Blue <[hidden email]> wrote:



Re: Self join

rxin
Do you have a proposal? This is not an issue that only Spark faces. It's an inherent issue with self-joins in programmatic languages, in which different variables can reference the same underlying object.


On Tue, Dec 11, 2018 at 8:58 AM, Xiao Li <[hidden email]> wrote:



Re: Self join

Jörn Franke
In reply to this post by Marco Gaido
I don't know your exact underlying business problem, but maybe a graph solution, such as Spark GraphX, would better meet your requirements. Self-joins are usually done to address some kind of graph problem (even if you would not describe it as such), and a graph engine is much more efficient for these kinds of problems.

On Dec 11, 2018, at 12:44, Marco Gaido <[hidden email]> wrote:


Re: Self join

Marco Gaido
Thank you all for your answers.

[hidden email] sure, let me state the problem more clearly: imagine you have two DataFrames with a common lineage (for instance, one is derived from the other by some filtering, or anything you prefer), and imagine you want to join them. Currently, there is a fix by Reynold which deduplicates the join condition when the condition is an equality (notice that in this case it doesn't matter which side is the left and which is the right). But if the condition involves other comparisons, such as ">" or "<", this results in an analysis error, because the attributes on both sides are the same (e.g. you have the same id#3 attribute on both sides), and you cannot deduplicate them blindly, since which side an attribute belongs to matters.
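To make the ambiguity concrete, here is a toy sketch (plain Python, not Spark internals; all names are made up). Both sides of a self-join expose the same attribute reference, so the analyzer has two possible resolutions of each operand; equality is symmetric, so both resolutions agree, while ">" is not:

```python
# Toy model: a comparison between the two sides of a self-join, where
# the analyzer cannot tell which side an operand belongs to, so there
# are two possible resolutions (normal and with the sides swapped).

def evaluate(op, left_val, right_val, swapped):
    """Evaluate the join condition under one of the two possible
    resolutions of the ambiguous attribute references."""
    a, b = (right_val, left_val) if swapped else (left_val, right_val)
    if op == "==":
        return a == b
    if op == ">":
        return a > b
    raise ValueError(f"unsupported operator: {op}")

# For equality, both resolutions agree, so deduplicating blindly is safe:
assert evaluate("==", 1, 2, swapped=False) == evaluate("==", 1, 2, swapped=True)

# For ">", the two resolutions disagree, so no safe choice exists:
assert evaluate(">", 1, 2, swapped=False) != evaluate(">", 1, 2, swapped=True)
```

This is why the existing fix can handle `df1("a") === df2("a")` but must raise an analysis error for `df1("a") > df2("a")`.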

[hidden email] my proposal was to add a dataset id to the metadata of each attribute, so that we can distinguish which DataFrame an attribute comes from; i.e., given DataFrames `df1` and `df2`, where `df2` is derived from `df1`, `df1.join(df2, df1("a") > df2("a"))` could be resolved because we would know that the first attribute is taken from `df1` and therefore has to be resolved against it, and likewise for the other. But I am open to any approach to this problem, if other people have better ideas/suggestions.
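A minimal sketch of that idea (again plain Python with hypothetical names, not the actual Catalyst representation): tag every attribute with the id of the DataFrame it was selected from, so two references to the same underlying column stay distinguishable:

```python
# Sketch of the proposal: attributes carry the id of the originating
# DataFrame in their metadata, which lets the analyzer decide which
# join side each reference belongs to.

from dataclasses import dataclass

@dataclass(frozen=True)
class Attribute:
    name: str        # column name, e.g. "a"
    expr_id: int     # internal expression id (identical on both sides)
    dataset_id: int  # id of the DataFrame the user selected it from

def resolve(attr, left_id, right_id):
    """Pick a join side for an attribute using its dataset id."""
    if attr.dataset_id == left_id:
        return "left"
    if attr.dataset_id == right_id:
        return "right"
    raise ValueError(f"cannot resolve {attr.name}#{attr.expr_id}")

# df2 is derived from df1, so both references share expr_id 3,
# but the dataset ids differ and break the tie:
a_from_df1 = Attribute("a", expr_id=3, dataset_id=1)
a_from_df2 = Attribute("a", expr_id=3, dataset_id=2)

assert resolve(a_from_df1, left_id=1, right_id=2) == "left"
assert resolve(a_from_df2, left_id=1, right_id=2) == "right"
```

With this metadata in place, `df1("a") > df2("a")` is no longer ambiguous even though both sides carry the same expression id.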

Thanks,
Marco

On Tue, Dec 11, 2018 at 6:31 PM Jörn Franke <[hidden email]> wrote:

Re: Self join

Ryan Blue
Marco,

I'm actually asking for a design doc that clearly states the problem and proposes a solution. This is a substantial change and probably should be an SPIP.

I think that would be more likely to generate discussion than referring to PRs or a quick paragraph on the dev list, because the only people that are looking at it now are the ones already familiar with the problem.

rb

On Wed, Dec 12, 2018 at 2:05 AM Marco Gaido <[hidden email]> wrote:
--
Ryan Blue
Software Engineer
Netflix

Re: Self join

Marco Gaido
Hi Ryan,

My goal with this email thread is to discuss with the community whether there are better ideas (as I was told many other people have tried to address this). I'd consider this a brainstorming thread. Once we have a good proposal, we can go ahead with an SPIP.

Thanks,
Marco

On Wed, Dec 12, 2018 at 7:13 PM Ryan Blue <[hidden email]> wrote:

Re: Self join

Ryan Blue
Thanks for the extra context, Marco. I thought you were trying to propose a solution.

On Thu, Dec 13, 2018 at 2:45 AM Marco Gaido <[hidden email]> wrote:


--
Ryan Blue
Software Engineer
Netflix