Caused by: org.apache.spark.sql.AnalysisException: Detected cartesian product for INNER join between logical plans
Error -
Caused by: org.apache.spark.sql.AnalysisException: Detected cartesian product for INNER join between logical plans
RepartitionByExpression [src_db_id#90, ssd_id#83, last_upd_dt#89], 200
+- Filter (isnotnull(ldy#94) && (ldy#94 = 2019))
+- HiveTableRelation `db2`.`t2`, org.apache.hadoop.hive.ql.io.orc.OrcSerde, [ssd_id#83, flctr_bu_id#84, group_name#85, eval_level#86, last_chg_actn#87, last_upd_oper#88, last_upd_dt#89, src_db_id#90, across_skill_type_eval_type#91, dlf_batch_id#92, dlf_load_dttm#93], [ldy#94]
and
Aggregate [2019]
+- Project
+- HiveTableRelation `db1`.`t1`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [ssd_id#36, flctr_bu_id#37, group_name#38, eval_level#39, last_chg_actn#40, last_upd_oper#41, last_upd_dt#42, src_db_id#43, across_skill_type_eval_type#44]
Join condition is missing or trivial.
Use the CROSS JOIN syntax to allow cartesian products between these relations.;
at org.apache.spark.sql.catalyst.optimizer.CheckCartesianProducts$$anonfun$apply$20.applyOrElse(Optimizer.scala:1066)
at org.apache.spark.sql.catalyst.optimizer.CheckCartesianProducts$$anonfun$apply$20.applyOrElse(Optimizer.scala:1063)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:267)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:267)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
Analysis -
- This happens because, when Spark builds the physical plan, the optimizer concludes that the inner join specified in the code is really a cartesian / cross join: after optimization, the join condition is missing or trivial.
- Say we have a source table whose ldy column only contains the value 2019 (this is the Aggregate [2019] node in the plan above).
- We are doing an inner join with the target table on the ldy column.
- In this scenario the join amounts to selecting from the target table where ldy = 2019, so the optimizer pushes the condition down as a filter and nothing is left to join on; a query shape that reproduces this is sketched after this list.
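A minimal sketch of a query that produces this class of plan. The table and column names come from the plan above, but the original query is not shown in the error, so treat this as an assumed reconstruction (Spark 2.x, SparkSession named spark):

// Assumed repro: the subquery's ldy is known to be the constant 2019,
// so constant propagation rewrites the join condition t2.ldy = s.ldy
// into a plain filter ldy = 2019 on t2 and reduces the condition
// itself to a trivial literal. CheckCartesianProducts then sees an
// INNER join with no real condition and raises the AnalysisException.
val failing = spark.sql("""
  SELECT t2.*
  FROM db2.t2 t2
  INNER JOIN (SELECT DISTINCT ldy FROM db1.t1 WHERE ldy = 2019) s
    ON t2.ldy = s.ldy
""")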
Solution -
- Instead of the inner join, select the data directly from the target table.
- Or, rewrite it as a subquery or an explicit CROSS JOIN.
- Or, set the following Spark property: set spark.sql.crossJoin.enabled=true; (each option is sketched below).
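Hedged sketches of each option in Scala, reusing the table names from the plan and the filter value 2019 from the analysis above:

// Option 1: skip the join entirely and filter the target table,
// since the join key is effectively the constant 2019.
val direct = spark.sql("SELECT * FROM db2.t2 WHERE ldy = 2019")

// Option 2a: rewrite the join as an IN subquery.
val subquery = spark.sql("""
  SELECT * FROM db2.t2
  WHERE ldy IN (SELECT ldy FROM db1.t1 WHERE ldy = 2019)
""")

// Option 2b: state the intent explicitly with CROSS JOIN syntax,
// keeping the original predicate as a WHERE clause.
val crossJoin = spark.sql("""
  SELECT t2.*
  FROM db2.t2 t2
  CROSS JOIN (SELECT DISTINCT ldy FROM db1.t1) s
  WHERE t2.ldy = s.ldy
""")

// Option 3: allow cartesian products for the whole session.
// Use with care: this disables a safety check for every query.
spark.conf.set("spark.sql.crossJoin.enabled", "true")
// equivalently, in SQL: SET spark.sql.crossJoin.enabled=true;

Prefer the query rewrites where possible; the session property lifts the check globally rather than fixing the underlying join.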