
Hive - Create a Table Delimited with a Regex Expression

Example - I wanted to create a table whose fields are delimited by the string "~*". That can be done with the RegexSerDe, like below -

CREATE EXTERNAL TABLE db1.t1 (
  a string COMMENT '',
  b string COMMENT '',
  c string COMMENT ''
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  'serialization.format' = '1',
  'input.regex' = '(.*?)~\\*(.*?)~\\*(.*)'
)
STORED AS
  INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION '\abc';

Note - In the regex I have to repeat a capturing group for every column in the table definition; here, three groups for three columns. Also, RegexSerDe supports deserialization (reading) only, not serialization: if I try to INSERT data into this table, Hive throws an exception. Another way to do the same thing is to read each line as a single column and split it later -

create external TABLE `db1`.`t2` (col string) row format delimited location '\abc';
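As a quick sanity check of the input.regex above, the same pattern can be tried outside Hive. This is a minimal Python sketch (not part of the original post) showing how the three capturing groups split a sample "~*"-delimited line; note that Python source needs only a single backslash before the *, whereas the Hive property value uses \\* because Hive unescapes the string first.

```python
import re

# The pattern from SERDEPROPERTIES, after Hive unescapes '\\*' to '\*':
# one capturing group per table column (3 columns -> 3 groups).
pattern = re.compile(r'(.*?)~\*(.*?)~\*(.*)')

line = "val1~*val2~*val3"
m = pattern.match(line)
print(m.groups())  # -> ('val1', 'val2', 'val3')
```

Each group value maps positionally to one column of the table, which is why the number of groups must match the number of columns.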

Caused by: org.apache.spark.sql.AnalysisException: Detected cartesian product for INNER join between logical plans

Error -

Caused by: org.apache.spark.sql.AnalysisException: Detected cartesian product for INNER join between logical plans
RepartitionByExpression [src_db_id#90, ssd_id#83, last_upd_dt#89], 200
+- Filter (isnotnull(ldy#94) && (ldy#94 = 2019))
   +- HiveTableRelation `db2`.`t2`, org.apache.hadoop.hive.ql.io.orc.OrcSerde, [ssd_id#83, flctr_bu_id#84, group_name#85, eval_level#86, last_chg_actn#87, last_upd_oper#88, last_upd_dt#89, src_db_id#90, across_skill_type_eval_type#91, dlf_batch_id#92, dlf_load_dttm#93], [ldy#94]
and
Aggregate [2019]
+- Project
   +- HiveTableRelation `db1`.`t1`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [ssd_id#36, flctr_bu_id#37, group_name#38, eval_level#39, last_chg_actn#40, last_upd_oper#41, last_upd_dt#42, src_db_id#43, across_skill_type_eval_type#44]
Join condition is missing or trivial. Use the CROSS JOIN syntax to allow cartesian products between these relations.;
at org.apache.spark.sql.catalyst.optimizer.CheckCartesianProd
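The exception message itself points at the two workarounds. A hedged sketch (the table aliases below are illustrative, not taken from the plan above): either state the cartesian product explicitly with CROSS JOIN syntax, or, in Spark 2.x, allow implicit cartesian products for the session. In most real cases, though, the better fix is to find out why Spark considered the join condition trivial in the first place.

```sql
-- Option 1: make the cartesian product explicit
SELECT a.*, b.*
FROM db1.t1 a
CROSS JOIN db2.t2 b;

-- Option 2 (Spark 2.x): allow implicit cartesian products for the session
SET spark.sql.crossJoin.enabled=true;
```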

Error - org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.io.IntWritable

In case your Spark application is failing with the error below -

Caused by: java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.io.IntWritable
at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableIntObjectInspector.get(WritableIntObjectInspector.java:36)
at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$14$$anonfun$apply$5.apply(TableReader.scala:399)
at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$14$$anonfun$apply$5.apply(TableReader.scala:399)

Analysis & Cause - This is a data-reading error. It usually means the ORC data files store a column as Text (i.e., String), but the Hive table defined on top of them declares that column as Int. Solution - Update the datatype in either the ORC files or the Hive metadata so that both are in sync. To verify this behavior, execute the same query from:
Hive shell - it should work fine.
Spark shell - it should fail.
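To take the metadata-side option above, the column type can be redeclared in place with ALTER TABLE. A minimal sketch, assuming the mismatched column is named id in table db1.t1 (the names are illustrative, not from the post):

```sql
-- Redeclare the Int column as string so it matches the Text data in the ORC files
ALTER TABLE db1.t1 CHANGE COLUMN id id string;
```

This changes only the Hive metadata, not the files; afterwards both Hive and Spark read the column as a string.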

Spark Error : Unsupported data type NullType.

Spark Job failing with an exception like -

Caused by: org.apache.spark.SparkException: Task failed while writing rows
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:270)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:189)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:188)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:108)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:
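The post stops at the stack trace, so the following is an assumption, not taken from it: a common trigger for "Unsupported data type NullType" is a column written as a bare NULL literal, which Spark types as NullType and which the file writers cannot store. Casting the literal to a concrete type avoids it:

```sql
-- Fails on write: the column's type is inferred as NullType
-- SELECT NULL AS col1 FROM some_table;

-- Works: give the literal a concrete type
SELECT CAST(NULL AS STRING) AS col1 FROM some_table;
```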

Thinking in Hive or Spark SQL - Interview questions

1) Table 1 has 100 records and Table 2 has 10 records. If I do a LEFT JOIN, I expect to get 100 records. But I'm getting more records. How?

2) There are 2 data sets:
Table 1
1,a
2,b
3,c
Table 2
1
2
I need to select all the records from Table 1 that are also in Table 2. How can I do that?
Solution 1 - INNER JOIN
Solution 2 - Sub-query: select * from Table1 where ID in (select ID from Table2)
Which one is the better solution, and why?

3) If a table has too many partitions, does it impact performance? If yes, how can we solve the problem of too many partitions in Hive?

4) How will you solve the "large number of small files" problem in Hive / Hadoop? What is the impact of having too many files?

5) How will you write a pipe ('|') delimited file in Hive?

6) What is the difference between the ROW_NUMBER and RANK window functions?

7) I have a table as follows -
TABLE1
c1,c2,c3
1  ,2  ,3
How will I transpose columns to rows? So, that output comes i
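For question 1, the usual answer is duplicate join keys on the right-hand side: each matching row on the right multiplies the left row. A small self-contained sketch using SQLite (illustrative, not Hive - the semantics of LEFT JOIN are the same) shows a 3-row left table producing 4 output rows when one key appears twice on the right:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t1 (id INTEGER, v TEXT)")
cur.execute("CREATE TABLE t2 (id INTEGER, w TEXT)")
cur.executemany("INSERT INTO t1 VALUES (?, ?)", [(1, "a"), (2, "b"), (3, "c")])
# id=1 appears twice in t2, so the LEFT JOIN fans out for that key
cur.executemany("INSERT INTO t2 VALUES (?, ?)", [(1, "x"), (1, "y"), (2, "z")])
rows = cur.execute(
    "SELECT t1.id, t1.v, t2.w FROM t1 LEFT JOIN t2 ON t1.id = t2.id"
).fetchall()
print(len(rows))  # 4 rows out of a 3-row left table
```

The same fan-out happens in Hive and Spark SQL whenever the right side is not unique on the join key, which is why a LEFT JOIN can return more rows than the left table has.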