
Spark 2 Application Errors & Solutions

Exception - 

Exception in thread "broadcast-exchange-0" java.lang.OutOfMemoryError: Not enough memory to build and broadcast

This is a driver-side exception, because the table being broadcast is first built on the driver. It can be resolved by:

  • setting spark.sql.autoBroadcastJoinThreshold to -1, which disables broadcast joins (see the sketch below)
  • or, increasing --driver-memory
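
A minimal sketch of the first option at session creation; the app name is illustrative, and the driver memory itself must be raised on the spark-submit command line because the driver JVM is already running by the time application code executes.

  import org.apache.spark.sql.SparkSession

  // Hedged sketch: stop Spark from auto-broadcasting tables for joins.
  val spark = SparkSession.builder()
    .appName("broadcast-oom-fix")                          // illustrative name
    .config("spark.sql.autoBroadcastJoinThreshold", "-1")  // -1 disables broadcast joins
    .getOrCreate()

  // For more driver memory, set it at submit time:
  //   spark-submit --driver-memory 4g ...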

Exception - 
Container is running beyond physical memory limits.
Current usage: X GB of Y GB physical memory used; X GB of Y GB virtual memory used. Killing container

YARN killed the container because it exceeded its memory limit. Increase (as sketched below):
  • --driver-memory, if the driver's container was killed
  • --executor-memory, if an executor's container was killed
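
On YARN the container size is the JVM heap plus a memory overhead allowance, so a container can be killed even when the heap itself is not full. A sketch with illustrative sizes; spark.executor.memory can be set in code because executors start after the SparkContext, but driver memory must come from spark-submit.

  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder()
    .config("spark.executor.memory", "8g")  // heap per executor; YARN adds the overhead on top
    .getOrCreate()

  // Driver memory at submit time:
  //   spark-submit --driver-memory 8g --executor-memory 8g ...
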
Exception -
ERROR Executor: Exception in task 600 in stage X.X (TID 12345)
java.lang.OutOfMemoryError: GC overhead limit exceeded

This means the executor JVM was spending more time in garbage collection than in actual execution.
  • This JVM safeguard can be disabled by adding -XX:-UseGCOverheadLimit (this treats the symptom, not the cause)
  • Increasing executor memory may help: --executor-memory
  • Make the data more evenly distributed so that it is not skewed onto one executor
  • Try an alternative garbage collector: -XX:+UseParallelGC or -XX:+UseConcMarkSweepGC (see the sketch below)
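
These JVM flags are passed through the executor Java options. A sketch, assuming the CMS collector is the variant being tried and 8g is only an illustrative size:

  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder()
    .config("spark.executor.memory", "8g")
    .config("spark.executor.extraJavaOptions",
            "-XX:+UseConcMarkSweepGC")  // or -XX:+UseParallelGC / -XX:-UseGCOverheadLimit
    .getOrCreate()
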
Exception -

org.apache.spark.shuffle.FetchFailedException: failed to allocate 65536 byte(s) of direct
memory (used: 1073699840, max: 1073741824)
at org.apache.spark.storage.ShuffleBlockFetcherIterator.throwFetchFailedException(ShuffleBlockFetcherIterator.scala:442)

This means the executor ran out of direct (off-heap) memory while fetching shuffle blocks.

  • Increase executor memory: --executor-memory
  • Make the data more evenly distributed so that it is not skewed onto one executor.
  • Increase shuffle partitions: spark.sql.shuffle.partitions (sketched below)
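
More shuffle partitions mean smaller blocks per fetch. A sketch; spark is your SparkSession, and 2000 is an illustrative value (the default is 200):

  // Runtime-settable, so it can go in application code:
  spark.conf.set("spark.sql.shuffle.partitions", "2000")

  // Or at submit time:
  //   spark-submit --conf spark.sql.shuffle.partitions=2000 ...
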
Exception -
ExecutorLostFailure (executor 525 exited unrelated to the running tasks) Reason: Container container_1495825717937_0056_01_000916 on host: 10.0.0.14 was preempted.

This means your job was running above the YARN queue capacity assigned to it, so YARN preempted the container.
  • Ask for more YARN resources, or schedule the job for when resources are available to you.
Exception - 
WARN TaskSetManager: Lost task 49.2 in stage 6.0 (TID xxx, xxx.xxx.xxx.compute.internal): ExecutorLostFailure (executor 16 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 10.4 GB of 10.4 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.

This means the executor's YARN container exceeded the memory it was allotted; the off-heap overhead allowance was too small.
  • Increase spark.yarn.executor.memoryOverhead (see the sketch below)
  • Increase --executor-memory
  • Try reducing the number of cores per executor: --executor-cores (fewer concurrent tasks means less memory pressure)
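
A sketch of these settings at session creation; the values are illustrative, and spark.yarn.executor.memoryOverhead (in MB) is the Spark 2.x property name.

  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder()
    .config("spark.yarn.executor.memoryOverhead", "2048") // MB of off-heap headroom per executor
    .config("spark.executor.cores", "3")                  // fewer concurrent tasks per executor
    .getOrCreate()
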
Exception -
org.apache.spark.SparkException: Job aborted due to stage failure: Total size of serialized results of 16 tasks (1048.5 MB) is bigger than spark.driver.maxResultSize (1024.0 MB)

This means the total size of serialized task results exceeds the spark.driver.maxResultSize limit. It does not necessarily mean you are doing a collect that accumulates results on the driver; it may simply be that the job is huge and produces a very large number of tasks, and every task sends a serialized result back to the driver, which adds up.
  • Consider boosting spark.driver.maxResultSize (sketched below)
  • Or break the job into multiple smaller jobs.
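
This limit must be in place before the SparkContext starts, so set it at session creation or submit time. The 2g value is illustrative; "0" removes the limit entirely but risks a driver OOM.

  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder()
    .config("spark.driver.maxResultSize", "2g") // default is 1g
    .getOrCreate()
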
Exception - 
Caused by: org.apache.spark.shuffle.FetchFailedException: Too large frame: 5454002341

This occurs when the size of a shuffle block exceeds 2 GB, the maximum frame size Spark can handle; it is usually a symptom of data skew.
  • Identify the skewed keys and repartition the DataFrame (sketched below)
  • Increase parallelism: spark.sql.shuffle.partitions
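
A sketch of the repartition approach; df stands for your skewed DataFrame, custId is a hypothetical key column, and 2000 is an illustrative partition count.

  // Spread the rows across more, smaller shuffle blocks.
  val balanced = df.repartition(2000, df("custId"))

  // And raise the shuffle parallelism for the rest of the plan:
  spark.conf.set("spark.sql.shuffle.partitions", "2000")
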
Exception - 
Caused by: java.lang.RuntimeException: Unsupported data type NullType.
at scala.sys.package$.error(package.scala:27)

This can result from the SQL itself. In some cases a query must select a literal NULL, for example: SELECT NULL AS col1 FROM Table1. Spark cannot determine a data type for a bare NULL, so the job fails with the above error.
  • Update the query to cast the NULL to the appropriate data type, for example: cast(NULL as string) as col1 (shown below)
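
The same fix end to end, reusing the Table1 example from above:

  // Fails on write: a bare NULL resolves to the unsupported NullType.
  // val broken = spark.sql("SELECT NULL AS col1 FROM Table1")

  // Works: the cast pins the column to StringType.
  val fixed = spark.sql("SELECT CAST(NULL AS STRING) AS col1 FROM Table1")
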
Exception - 
Caused by: org.apache.spark.sql.AnalysisException: Cannot overwrite table XXX that is also being read from;

This means the job reads from and overwrites the same table. It typically occurs with the saveAsTable method, which overwrites the entire table, including the location being read; the insertInto method overwrites individual partitions rather than the whole table, so it avoids this check (as sketched below).
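A sketch of the partition-level alternative; dy stands for your DataFrame and db.t is a hypothetical partitioned table.

  // saveAsTable overwrites the whole table, including the path being read, and fails:
  // dy.write.mode("overwrite").saveAsTable("db.t")

  // insertInto overwrites the matching partitions instead of the table itself:
  dy.write.mode("overwrite").insertInto("db.t")
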
Exception - 
java.lang.NoClassDefFoundError: Could not initialize class org.xerial.snappy.Snappy
        at org.apache.parquet.hadoop.codec.SnappyDecompressor.decompress(SnappyDecompressor.java:62)

OR 

Caused by: java.lang.UnsatisfiedLinkError: /tmp/snappy-1.1.2-d5273c94-b734-4a61-b631-b68a9e859151-libsnappyjava.so: /tmp/snappy-1.1.2-d5273c94-b734-4a61-b631-b68a9e859151-libsnappyjava.so: failed to map segment from shared object: Operation not permitted
        at java.lang.ClassLoader$NativeLibrary.load(Native Method)

This happens because /tmp does not have execute permission (for example, it is mounted noexec), so the Snappy native library extracted there cannot be loaded.
  • Point the JVM temporary directory at a location that allows execution:
    --conf "spark.driver.extraJavaOptions=-Djava.io.tmpdir=/a/b/ctmp"
    --conf "spark.executor.extraJavaOptions=-Djava.io.tmpdir=/a/b/ctmp"
Exception -
org.apache.spark.sql.AnalysisException: Detected cartesian product for INNER join between logical plans

If the join key is a static value (for example, a constant partition column), Spark's analyzer concludes that the inner join is effectively a cross join and rejects the plan.
  • Set the property: set spark.sql.crossJoin.enabled=true; (see the sketch below)
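
A sketch of both ways to allow it; df1 and df2 stand for the two input DataFrames.

  // Session-wide: permit Spark to plan cartesian products.
  spark.conf.set("spark.sql.crossJoin.enabled", "true")

  // Or make the intent explicit on the DataFrames involved:
  val joined = df1.crossJoin(df2)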
