
SPARK running explained - 1

Spark runtime components
The main Spark components running in a cluster are the client, the driver, and the executors.
The client process starts the driver program. It can be spark-submit, spark-shell, spark-sql, or a custom application. The client process:
1.       Prepares the classpath and all configuration options for the Spark application
2.       Passes application arguments to the application running in the driver.
There is always one driver per Spark application. The driver orchestrates and monitors the execution of the application. Subcomponents of the driver:
1.       Spark context
2.       Scheduler
These subcomponents are responsible for:
1.       Requesting memory and CPU resources from the cluster manager
2.       Breaking application logic into stages and tasks
3.       Sending tasks to executors
4.       Collecting the results
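To make the client and driver roles above concrete, here is a minimal sketch of a driver program (the object name, the input-path argument, and the job logic are made up for illustration). The client, e.g. spark-submit, launches this main method and forwards the application arguments to it; the driver then creates the SparkContext, and its scheduler breaks the RDD logic into stages and tasks, sends them to executors, and collects the result.

import org.apache.spark.{SparkConf, SparkContext}

object WordLengthApp {
  def main(args: Array[String]): Unit = {
    val inputPath = args(0)                        // argument forwarded by the client process

    val conf = new SparkConf().setAppName("word-length-app")
    val sc = new SparkContext(conf)                // the driver creates the Spark context

    // The driver's scheduler splits this logic into stages and tasks,
    // ships the tasks to executors, and collects the result back here.
    val avgLength = sc.textFile(inputPath)
      .flatMap(_.split("\\s+"))
      .map(_.length.toDouble)
      .mean()

    println(s"Average word length: $avgLength")
    sc.stop()
  }
}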

The driver program can run in two deploy modes:
1.       Cluster-deploy mode – The driver runs as a separate JVM process in the cluster, and the cluster manager manages its resources.
2.       Client-deploy mode – The driver runs in the client's JVM process and communicates with the executors managed by the cluster.
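As a rough illustration of the two modes (assuming a running SparkContext named sc, as in spark-shell), the effective deploy mode can be inspected at runtime; spark.submit.deployMode is the configuration property that spark-submit sets behind the scenes:

// Reports whether this driver runs in "client" or "cluster" deploy mode.
println(sc.deployMode)
println(sc.getConf.get("spark.submit.deployMode", "client"))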


The executors are the JVM processes that:
1.       Accept tasks from the driver
2.       Execute those tasks
3.       Return the results to the driver.
4.       Each executor has several task slots for running tasks in parallel
5.       Although these task slots are often referred to as CPU cores in Spark, they’re implemented as threads and don’t have to correspond to the number of physical CPU cores on the machine.
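As a sizing sketch (the numbers below are only illustrative), executors and their task slots are requested through standard configuration properties; with these settings the application would get 4 executors x 2 task slots = 8 tasks running in parallel:

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("executor-sizing-example")          // hypothetical application name
  .set("spark.executor.instances", "4")           // number of executor JVMs (on YARN)
  .set("spark.executor.cores", "2")               // task slots (threads) per executor
  .set("spark.executor.memory", "2g")             // heap allocated to each executor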

Once the driver is started, it starts and configures an instance of SparkContext. There can be only one SparkContext per JVM. Although Spark can run in local mode, in production it runs with one of the supported cluster managers: YARN, Mesos, or Spark Standalone.
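Because only one SparkContext can live in a JVM, code that may run where a context already exists (for example inside spark-shell) can use SparkContext.getOrCreate instead of the constructor. A small sketch, with a made-up application name:

import org.apache.spark.{SparkConf, SparkContext}

val sc1 = SparkContext.getOrCreate(
  new SparkConf().setAppName("demo").setMaster("local[*]"))
val sc2 = SparkContext.getOrCreate()               // returns the existing instance
println(sc1 eq sc2)                                // true: same context, one per JVM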
A Spark standalone cluster is a Spark-specific cluster. Compared with YARN:
1.       Spark standalone – Built specifically for Spark applications, so it doesn't support communication with HDFS secured with the Kerberos authentication protocol (use YARN for that). Provides faster job startup.
2.       YARN – Supports Kerberos-secured HDFS, but job startup is slower than on a standalone cluster.

YARN is Hadoop's resource manager and execution system. Its advantages are:
1.       Many organizations already have Hadoop clusters with YARN as the resource manager.
2.       YARN allows running all kinds of applications, not just Spark.
3.       It provides methods for isolating and prioritizing applications.
4.       It supports Kerberos-secured HDFS.
5.       You don't have to install Spark on all nodes in the cluster.
Mesos is a scalable and fault-tolerant distributed systems kernel. Unlike the other cluster managers, which schedule only memory, Mesos also schedules other types of resources (CPU, disk, ports), and it offers fine-grained job scheduling. Mesos is a "scheduler of scheduler frameworks" because of its two-level scheduling architecture; for example, with the Myriad project you can run YARN on top of Mesos.
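The cluster manager is chosen through the master URL; a quick sketch of the standard URL forms (the host names and ports below are placeholders):

import org.apache.spark.SparkConf

val standalone = new SparkConf().setMaster("spark://master-host:7077")   // Spark standalone
val onYarn     = new SparkConf().setMaster("yarn")                       // YARN; cluster details come from the Hadoop configuration
val onMesos    = new SparkConf().setMaster("mesos://mesos-master:5050")  // Mesos
val localMode  = new SparkConf().setMaster("local[*]")                   // local mode, one thread per CPU core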


Job and resource scheduling
Resources for Spark applications are scheduled as executors (JVM processes) and CPUs (task slots), and then memory is allocated to them. The cluster manager:
1.       Starts the executor processes requested by the driver.
2.       Also starts the driver process in cluster-deploy mode.
3.       Can restart and stop processes.
4.       Can set the maximum number of CPUs that executors can use, as sketched below.
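For point 4, on the standalone and Mesos cluster managers the usual way to cap the CPU given to one application is the spark.cores.max property; the value here is only an example:

import org.apache.spark.SparkConf

val conf = new SparkConf().set("spark.cores.max", "16")   // at most 16 cores across all executors of this application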
The Spark scheduler communicates with the driver and executors and decides which executors will run which tasks. This is called job scheduling, and it affects resource usage in the cluster.
There are two types of scheduling:
1.       Cluster resource scheduling
2.       Spark resource scheduling – set spark.scheduler.mode to FAIR or FIFO, as sketched below
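A minimal sketch of switching Spark's own job scheduling to fair mode (FIFO is the default):

import org.apache.spark.SparkConf

val conf = new SparkConf().set("spark.scheduler.mode", "FAIR")   // jobs within this application share task slots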

Note - SparkContext is thread-safe
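Since SparkContext is thread-safe, several threads of the same application can submit jobs concurrently; combined with the FAIR scheduler they share the executors' task slots. A sketch (the pool names and toy jobs are made up):

import org.apache.spark.{SparkConf, SparkContext}

val sc = SparkContext.getOrCreate(
  new SparkConf().setAppName("concurrent-jobs").setMaster("local[4]")
    .set("spark.scheduler.mode", "FAIR"))

val threads = Seq("pool1", "pool2").map { pool =>
  new Thread(new Runnable {
    override def run(): Unit = {
      sc.setLocalProperty("spark.scheduler.pool", pool)   // optional: route this thread's jobs to a named pool
      println(pool + " sum = " + sc.parallelize(1 to 1000000).sum())
    }
  })
}
threads.foreach(_.start())
threads.foreach(_.join())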
