
Posts

Showing posts from September, 2019

Error - org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.io.IntWritable

In case your Spark application is failing with the error below -

Caused by: java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.io.IntWritable
        at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableIntObjectInspector.get(WritableIntObjectInspector.java:36)
        at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$14$$anonfun$apply$5.apply(TableReader.scala:399)
        at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$14$$anonfun$apply$5.apply(TableReader.scala:399)

Analysis & Cause - This is a data reading error. It typically happens when the ORC data files store a column as Text (i.e. String), while the Hive table defined on top of those files declares the same column as Int.

Solution - Update either the datatype in the ORC files or the Hive metadata, so that both are in sync.

To verify the above behavior, execute the same query from -
Hive Shell - it should work fine.
Spark Shell - it should fail.
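A minimal sketch of how you might confirm the mismatch before fixing it; the database, table, column name (id) and warehouse path used here are hypothetical, and changing the column type itself is best done from the Hive shell, since many Spark versions do not allow ALTER TABLE CHANGE COLUMN to change a type.

import org.apache.spark.sql.SparkSession

object OrcTypeMismatchCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("OrcTypeMismatchCheck")
      .enableHiveSupport()
      .getOrCreate()

    // Schema as the Hive metastore sees it (column declared INT here).
    spark.sql("DESCRIBE my_db.my_table").show(false)

    // Schema as actually written in the ORC files, bypassing the metastore.
    // If this prints the column as string, the cast error above is expected.
    spark.read.orc("/warehouse/my_db.db/my_table").printSchema()

    spark.stop()
  }
}

From the Hive shell you could then align the metastore with the files, for example with ALTER TABLE my_db.my_table CHANGE COLUMN id id STRING; alternatively, rewrite the ORC data using the type the table already declares.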

Spark Error: Unsupported data type NullType.

Spark job failing with an exception like -

Caused by: org.apache.spark.SparkException: Task failed while writing rows
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:270)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:189)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:188)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:108)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:
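The excerpt above is truncated before the root cause, but one common way to hit "Unsupported data type NullType" is writing out a DataFrame that contains a column built from an untyped null literal. The sketch below reproduces that pattern and shows the usual fix of casting the literal to a concrete type; the column names and output path are invented for illustration.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.lit
import org.apache.spark.sql.types.StringType

object NullTypeDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("NullTypeDemo")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val df = Seq(1, 2, 3).toDF("id")

    // lit(null) has no concrete type, so this column is NullType;
    // writing it out can fail with "Unsupported data type NullType."
    val bad = df.withColumn("label", lit(null))

    // Casting the literal to a concrete type avoids the problem.
    val good = df.withColumn("label", lit(null).cast(StringType))
    good.write.mode("overwrite").parquet("/tmp/nulltype_demo")

    spark.stop()
  }
}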

Thinking in Hive or Spark SQL - Interview questions

1) Table 1 has 100 records and Table 2 has 10 records. If I do a LEFT JOIN I expect to get 100 records, but I am getting more records. How?

2) There are 2 data sets -
Table 1
1,a
2,b
3,c
Table 2
1
2
I need to select all the records from Table 1 that are also in Table 2. How can I do that?
Solution 1 - Inner JOIN
Solution 2 - Sub Query (select * from Table 1 where ID in (select ID from Table 2))
Which one is the better solution and why? (see the sketch after this list)

3) If a table has too many partitions, does it impact performance? If yes, how can we solve the problem of too many partitions in Hive?

4) How will you solve the "large number of small files" problem in Hive / Hadoop? What is the impact of having too many files?

5) How will you write a pipe '|' character delimited file in Hive?

6) What is the difference between the ROW_NUMBER and RANK window functions?

7) I have a table as follows -
TABLE1
c1,c2,c3
1  ,2  ,3
How will I transpose columns to rows, so that the output comes i
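For question 2 above, here is a minimal sketch in Scala / Spark SQL showing both approaches on two tiny in-memory tables; the view names t1 and t2 and the column names are invented for illustration.

import org.apache.spark.sql.SparkSession

object SemiJoinDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("SemiJoinDemo")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    Seq((1, "a"), (2, "b"), (3, "c")).toDF("id", "value").createOrReplaceTempView("t1")
    Seq(1, 2).toDF("id").createOrReplaceTempView("t2")

    // Solution 1 - inner join: keeps only t1 rows whose id exists in t2,
    // but can duplicate t1 rows if t2 contains duplicate ids.
    spark.sql("SELECT t1.* FROM t1 JOIN t2 ON t1.id = t2.id").show()

    // Solution 2 - IN sub-query: no duplication of t1 rows;
    // check the plan with EXPLAIN to see how it is executed.
    spark.sql("SELECT * FROM t1 WHERE id IN (SELECT id FROM t2)").show()

    spark.stop()
  }
}

Note that the duplication behavior of the join is also the usual answer to question 1: a LEFT JOIN returns more rows than the left table whenever the join key repeats on the right side.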