Posts

spark-sftp com.springml.spark.sftp org.apache.spark.sql.AnalysisException: Path does not exist:

  We had been facing an issue with the spark-sftp jar (com.springml.spark.sftp) while downloading a remote SFTP file and creating a DataFrame.

  Code being executed:

    val df = spark.read.format("com.springml.spark.sftp")
      .option("host", HOST)
      .option("port", PORT)
      .option("username", UN)
      .option("password", PWD)
      .option("fileType", "csv")
      .option("inferSchema", "true")
      .option("header", "true")
      .load(FILENAME)

  Error response:

    org.apache.spark.sql.AnalysisException: Path does not exist: wasb://...
      at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:612)
      at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:595)
      at scala.collection.TraversableLike$$anonfun$flatMap$1.ap...
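
  For reference, a minimal runnable sketch of the same read in Scala. The host, credentials, and remote path are placeholders, and the tempLocation/hdfsTempLocation options are an assumption based on the connector's documented option names for controlling where the downloaded copy is staged; they are not taken from the post.

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("sftp-read-sketch")        // hypothetical application name
      .getOrCreate()

    val df = spark.read
      .format("com.springml.spark.sftp")
      .option("host", "sftp.example.com")          // placeholder SFTP host
      .option("port", "22")                        // placeholder port
      .option("username", "user")                  // placeholder credentials
      .option("password", "secret")
      .option("fileType", "csv")
      .option("inferSchema", "true")
      .option("header", "true")
      .option("tempLocation", "/tmp/sftp")         // assumed connector option
      .option("hdfsTempLocation", "/tmp/sftp")     // assumed connector option
      .load("/remote/path/data.csv")               // placeholder remote path

    df.show()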

Spark - Distributed Computing and Multi-Threading

Spark on Hadoop works on the concept of data locality: the framework tries to ship code to the data rather than shuffling data to the code. We specify the following resources when creating a Spark job: executor cores, executor memory, driver cores, driver memory, and the number of executors. Together these properties determine the number of executors, and each executor is a JVM process with the specified cores and memory. The executors connect back to your driver program, and the driver can then send them commands. Data is distributed on HDFS in blocks, and an input split roughly determines the size of a data partition. YARN tries to collocate executors as close to the data as possible to minimize network transfer. A task is a command sent from the driver to an executor by serializing your Function object; the executor deserializes the command and executes it on a partition. The number of Spark tasks in a single stage equals the number of RDD partitions. Now, "executor cores" or "spark.ex...
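
  As a rough illustration, a minimal Scala sketch of setting the executor-side resources when building a session; the figures are placeholders, not recommendations.

    import org.apache.spark.sql.SparkSession

    // Placeholder resource figures; tune per cluster and workload.
    // Driver cores/memory usually have to be supplied at launch time instead
    // (e.g. spark-submit --driver-memory 4g --driver-cores 2), since the
    // driver JVM is already running by the time this code executes.
    val spark = SparkSession.builder()
      .appName("resource-sketch")                 // hypothetical application name
      .config("spark.executor.instances", "4")    // number of executors
      .config("spark.executor.cores", "4")        // cores per executor
      .config("spark.executor.memory", "8g")      // heap per executor JVM
      .getOrCreate()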

Spark Error - Caused by: org.apache.spark.SparkException: Could not execute broadcast in 300 secs

  While running Spark applications you may have seen the error below:

    Caused by: org.apache.spark.SparkException: Could not execute broadcast in 300 secs. You can increase the timeout for broadcasts via spark.sql.broadcastTimeout or disable broadcast join by setting spark.sql.autoBroadcastJoinThreshold to -1
      at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec.doExecuteBroadcast(BroadcastExchangeExec.scala:150)
      at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:154)
      at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:150)
      at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:165)
      at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
      at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:162)
      at org.apache.spark.sql.execution.SparkPlan.executeBroadcast(SparkPlan.scala:150)
      at org.apache.spark.s...
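
  A minimal sketch of the two workarounds named in the message itself, applied on an existing SparkSession called spark; the 600-second value is an arbitrary example.

    // Raise the broadcast timeout (in seconds) ...
    spark.conf.set("spark.sql.broadcastTimeout", "600")

    // ... or disable automatic broadcast joins altogether.
    spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "-1")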

Load Balance or dynamic discovery of HiveServer2 Connection from Beeline or Hive Shell

To provide high availability or load balancing for HiveServer2, Hive offers a feature called dynamic service discovery, where multiple HiveServer2 instances register themselves with Zookeeper. Instead of connecting to a specific HiveServer2 directly, clients connect to Zookeeper, which returns a randomly selected registered HiveServer2 instance. For example:

The command below connects to the Hive Server on machineA:

    beeline -u "jdbc:hive2://machineA:10000"

The command below connects to the Zookeeper nodes, which pick one of the available Hive Servers to make the connection:

    beeline -u "jdbc:hive2://machineA:2181,machineB:2181,machineC:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-mob-batch?tez.queue.name=myyarnqueue"

We can create the ZNode in Zookeeper as follows:

Open the Zookeeper command line interface:

    zookeeper-client

Connect to the Zookeeper server:

    connect machineA:2181,machineB:2181,machineC:2181

Create the ZNode:

    create /hiveserver2-mob-batch

Manually, ...
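
  For context, a minimal Scala/JDBC sketch that connects through the same Zookeeper discovery URL shown in the beeline example above; the credentials and the query are placeholders, and the Hive JDBC driver is assumed to be on the classpath.

    import java.sql.DriverManager

    // Zookeeper quorum and namespace mirror the beeline URL above.
    val url = "jdbc:hive2://machineA:2181,machineB:2181,machineC:2181/;" +
      "serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-mob-batch"

    Class.forName("org.apache.hive.jdbc.HiveDriver")           // Hive JDBC driver
    val conn = DriverManager.getConnection(url, "user", "")    // placeholder credentials
    val stmt = conn.createStatement()
    val rs   = stmt.executeQuery("SHOW DATABASES")             // placeholder query
    while (rs.next()) println(rs.getString(1))
    rs.close(); stmt.close(); conn.close()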

org.apache.spark.SparkException: Kryo serialization failed: Buffer overflow

  We were running an application that was failing with the error below:

    Job aborted due to stage failure: Task 137 in stage 5.0 failed 4 times, most recent failure: Lost task 137.3 in stage 5.0 (TID 2090, ncABC.hadoop.com, executor 1): org.apache.spark.SparkException: Kryo serialization failed: Buffer overflow. Available: 0, required: 59606960. To avoid this, increase spark.kryoserializer.buffer.max value.
      at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:330)
      at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:456)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      at java.lang.Thread.run(Thread.java:748)
    Caused by: com.esotericsoftware.kryo.KryoException: Buffer overflow. Available: 0, required: 59606960
      at com.esotericsoftware.kryo.io.Output.require(Output.java:167)
      at com.esotericsoftware.kryo.io.Output.wr...
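
  A minimal sketch of the fix suggested by the message, set when building the session. The ceiling must be large enough to hold the biggest object being serialized (roughly 60 MB in the failure above); 256m is an arbitrary example value.

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("kryo-buffer-sketch")            // hypothetical application name
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .config("spark.kryoserializer.buffer.max", "256m")   // raise the max buffer
      .getOrCreate()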