Posts

Spark Error: Unsupported data type NullType.

A Spark job fails with an exception like:

Caused by: org.apache.spark.SparkException: Task failed while writing rows
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:270)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:189)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:188)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:108)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:
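This error usually appears when a DataFrame column has no concrete type, most often a bare null literal, because the writer cannot map Spark's internal NullType to a Parquet/ORC storage type. A minimal sketch of the failure and the usual fix (the DataFrame df, the column name new_col, and the output path are hypothetical):

import org.apache.spark.sql.functions.lit
import org.apache.spark.sql.types.StringType

// lit(null) carries NullType, which file formats such as Parquet cannot store:
val bad = df.withColumn("new_col", lit(null))
// bad.write.parquet("/tmp/out")   // fails: Unsupported data type NullType

// Casting the null literal to a concrete type avoids the error:
val good = df.withColumn("new_col", lit(null).cast(StringType))
good.write.parquet("/tmp/out")     // writes a nullable string column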

Thinking in Hive or Spark SQL - Interview questions

1) Table 1 has 100 records and Table 2 has 10 records. If I do a LEFT JOIN, I expect to get 100 records, but I am getting more. How?

2) There are two data sets:

Table 1:
1,a
2,b
3,c

Table 2:
1
2

I need to select all the records from Table 1 that are also present in Table 2. How can I do that?
Solution 1: INNER JOIN
Solution 2: a subquery (select * from Table1 where ID in (select ID from Table2))
Which solution is better, and why?

3) If a table has too many partitions, does it impact performance? If yes, how can we solve the problem of too many partitions in Hive?

4) How will you solve the "large number of small files" problem in Hive/Hadoop? What is the impact of having too many files?

5) How will you write a pipe ('|') delimited file in Hive?

6) What is the difference between the ROW_NUMBER and RANK window functions?

7) I have a table as follows:

TABLE1
c1,c2,c3
1,2,3

How will I transpose the columns to rows, so that each value comes out on its own row? (One approach is sketched below.)
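For question 7, one way to transpose columns to rows in Hive or Spark SQL is the stack UDTF. A minimal sketch, assuming the table TABLE1 from the question exists in the metastore:

spark.sql("""
  SELECT t.col_name, t.value
  FROM TABLE1
  LATERAL VIEW stack(3, 'c1', c1, 'c2', c2, 'c3', c3) t AS col_name, value
""").show()
// Produces three rows: (c1, 1), (c2, 2), (c3, 3)

The same LATERAL VIEW clause works unchanged in HiveQL.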

Read from a Hive table and write back to it using Spark SQL

With Spark 2.2, if we read from a Hive table and write back to the same table, we get the following exception:

scala> dy.write.mode("overwrite").insertInto("incremental.test2")
org.apache.spark.sql.AnalysisException: Cannot insert overwrite into table that is also being read from.;

1. This error means that our process is reading from and writing to the same table.
2. Normally this should work, since the process first writes to a staging directory (.hiveStaging...).
3. The error occurs with the saveAsTable method, because it overwrites the entire table instead of individual partitions.
4. The error should not occur with the insertInto method, because it overwrites partitions rather than the whole table.
5. One reason it can still happen is that the Hive table has certain Spark TBLProperties in its definition. The problem can then be solved for the insertInto method
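A common workaround, not shown in the excerpt, is to materialize the data into an intermediate table first and only then overwrite the source. A sketch, reusing the table name from the excerpt (incremental.test2_tmp is a hypothetical staging table):

val dy = spark.table("incremental.test2")
// Materialize into a staging table so the subsequent overwrite no longer
// reads from the table it is writing to:
dy.write.mode("overwrite").saveAsTable("incremental.test2_tmp")
spark.table("incremental.test2_tmp")
  .write.mode("overwrite")
  .insertInto("incremental.test2")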

Spark method to save as table

Spark 2.2's saveAsTable method (def saveAsTable(tableName: String): Unit) cannot read data from and write data to the same table; that is, the input source table and the output target table cannot be the same. If this is attempted, Spark throws an exception:

Caused by: org.apache.spark.sql.AnalysisException: Cannot overwrite table XXX that is also being read from;
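A minimal sketch of the failure mode (the table names db.t and db.t_copy are hypothetical):

val df = spark.table("db.t")
// Same table as source and target: throws
// org.apache.spark.sql.AnalysisException: Cannot overwrite table db.t that is also being read from;
// df.write.mode("overwrite").saveAsTable("db.t")

// A different target table works fine:
df.write.mode("overwrite").saveAsTable("db.t_copy")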

Snappy Error using Spark/Hive

We received the following errors using Spark:

ERROR -

1) java.lang.NoClassDefFoundError: Could not initialize class org.xerial.snappy.Snappy
        at org.apache.parquet.hadoop.codec.SnappyDecompressor.decompress(SnappyDecompressor.java:62)
        at org.apache.parquet.hadoop.codec.NonBlockedDecompressorStream.read(NonBlockedDecompressorStream.java:51)

2) Caused by: java.lang.UnsatisfiedLinkError: /tmp/snappy-1.1.2-d5273c94-b734-4a61-b631-b68a9e859151-libsnappyjava.so: /tmp/snappy-1.1.2-d5273c94-b734-4a61-b631-b68a9e859151-libsnappyjava.so: failed to map segment from shared object: Operation not permitted
        at java.lang.ClassLoader$NativeLibrary.load(Native Method)
        at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
        at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
        at java.lang.Runtime.load0(Runtime.java:809)

CAUSE - /tmp does not have execute permissions (for example, it is mounted noexec), so the snappy native library extracted there cannot be loaded.

SOLUTION - Update the tmp dir to point to a location that has execute permissions.
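One way to do this, assuming the job is launched with spark-submit (the path /home/user/tmp is a placeholder for any writable directory that allows execution), is to point the JVM temp dir, and optionally snappy-java's own org.xerial.snappy.tempdir property, away from the noexec /tmp:

# Extract the snappy native library to a directory with execute permission:
spark-submit \
  --conf "spark.driver.extraJavaOptions=-Djava.io.tmpdir=/home/user/tmp -Dorg.xerial.snappy.tempdir=/home/user/tmp" \
  --conf "spark.executor.extraJavaOptions=-Djava.io.tmpdir=/home/user/tmp -Dorg.xerial.snappy.tempdir=/home/user/tmp" \
  your-app.jar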