
Posts

HBase Bulk Delete Column Qualifiers

Refer to the MapReduce program below, which can be used to delete a column qualifier from an HBase table.

--------
# Set HBase class path
export HADOOP_CLASSPATH=`hbase classpath`

# Execute MR
hadoop jar Test-0.0.1-SNAPSHOT.jar com.test.mymr.DeleteHBaseColumns
--------

Refer - https://github.com/dinesh028/engineering/blob/master/src/com/test/mymr/DeleteHBaseColumns.java
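The linked DeleteHBaseColumns class is the actual program. For orientation only, below is a minimal sketch of how such a map-only bulk-delete job can be structured, assuming the HBase 1.x+ client API and a placeholder table ns:tbl1, column family cf, and qualifier col_to_remove (hypothetical names, not taken from the repository):

--------
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.mapreduce.Job;

public class DeleteColumnQualifierJob {

  // Hypothetical values - replace with your own table, family and qualifier.
  static final String TABLE = "ns:tbl1";
  static final byte[] CF = Bytes.toBytes("cf");
  static final byte[] QUALIFIER = Bytes.toBytes("col_to_remove");

  // Mapper reads each row and emits a Delete for the unwanted qualifier.
  static class DeleteMapper extends TableMapper<ImmutableBytesWritable, Delete> {
    @Override
    protected void map(ImmutableBytesWritable rowKey, Result result, Context context)
        throws IOException, InterruptedException {
      if (result.containsColumn(CF, QUALIFIER)) {
        Delete delete = new Delete(rowKey.get());
        delete.addColumns(CF, QUALIFIER);      // remove all versions of cf:col_to_remove
        context.write(rowKey, delete);
      }
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "delete-hbase-column-qualifier");
    job.setJarByClass(DeleteColumnQualifierJob.class);

    Scan scan = new Scan();
    scan.addColumn(CF, QUALIFIER);             // only scan rows carrying the qualifier
    scan.setCacheBlocks(false);                // recommended for MapReduce scans

    TableMapReduceUtil.initTableMapperJob(
        TABLE, scan, DeleteMapper.class,
        ImmutableBytesWritable.class, Delete.class, job);
    TableMapReduceUtil.initTableReducerJob(TABLE, null, job);  // Deletes are written back to the table
    job.setNumReduceTasks(0);                  // map-only job

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
--------

The job scans only the rows that carry the qualifier and writes one Delete per row back to the same table through TableOutputFormat, which is what lets it spread the work across the cluster instead of issuing deletes from a single client.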

HBase Phoenix Cause and Solution for dummy column "_0" or "Column Family:_0"

Cause - This dummy column is added by Phoenix when a Phoenix table is created on top of an existing HBase table.

Solutions - The following approaches can be used to delete the _0 column from each row.

Execute a Unix command like the one below:

echo "scan 'ns:tbl1', {COLUMNS => 'cf:_0'}" | hbase shell | grep "column=cf:_0" | cut -d' ' -f 2 | awk '{$1=$1};1' | sed -e 's/^/delete '"'"'ns:tbl1'"'"', '"'"'/' -e 's/$/'"'"', '"'"'cf:_0'"'"'/' | hbase shell

The command above scans the rows that contain this column, prepares the corresponding delete statements, and runs them to remove the _0 column. However, it will not perform well on larger tables.

-------------------------------------------------------------------------------------------------------------------------

Other Solution can b...
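For reference, the same per-row deletes that the shell pipeline generates can also be issued through the HBase Java client. The sketch below is an illustration only (it is not the truncated "other solution" above, and like the shell command it runs from a single client, so for very large tables a MapReduce job such as the one in the first post is still the approach that scales). Table ns:tbl1 and family cf are taken from the example; the batch size of 1000 is arbitrary:

--------
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class DeletePhoenixDummyColumn {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    byte[] cf = Bytes.toBytes("cf");
    byte[] dummy = Bytes.toBytes("_0");

    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("ns:tbl1"))) {

      // Only fetch rows that actually have cf:_0, mirroring the shell scan above.
      Scan scan = new Scan();
      scan.addColumn(cf, dummy);

      List<Delete> batch = new ArrayList<>();
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result row : scanner) {
          Delete delete = new Delete(row.getRow());
          delete.addColumns(cf, dummy);     // drop all versions of cf:_0 for this row
          batch.add(delete);
          if (batch.size() == 1000) {       // flush deletes in batches (size is arbitrary)
            table.delete(batch);
            batch.clear();
          }
        }
      }
      if (!batch.isEmpty()) {
        table.delete(batch);
      }
    }
  }
}
--------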

Hive - Create a Table delimited with Regex expression.

Example - I wanted to create a table with the delimiter "~*". That can be done as below:

CREATE external TABLE db1.t1(a string COMMENT '', b string COMMENT '', c string COMMENT '')
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  'serialization.format' = '1',
  'input.regex' = '(.*?)~\\*(.*?)~\\*(.*)'
)
STORED AS
  INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION '\abc'

Note - In the regex, the capture group has to be repeated once per column in the table definition; here we had 3 columns, hence 3 groups. Also, this SerDe cannot be used for serialization: if I want to insert data into the table, it will throw an exception.

Another way to do the same is:

create external TABLE `db1`.`t2` (col string) row format delimited location '\abc'; ...
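To see how the input.regex splits a row into columns, the small standalone Java check below applies the same pattern to a made-up sample line "a~*b~*c" (the class name and sample data are illustrative only):

--------
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexSerDeCheck {
  public static void main(String[] args) {
    // Same pattern as the Hive 'input.regex' property: one capture group per column,
    // with the literal delimiter "~*" escaped between groups.
    Pattern rowPattern = Pattern.compile("(.*?)~\\*(.*?)~\\*(.*)");

    String sampleLine = "a~*b~*c";   // made-up sample row
    Matcher m = rowPattern.matcher(sampleLine);
    if (m.matches()) {
      System.out.println("a = " + m.group(1));  // prints: a = a
      System.out.println("b = " + m.group(2));  // prints: b = b
      System.out.println("c = " + m.group(3));  // prints: c = c
    }
  }
}
--------

Each lazy group (.*?) captures one column and the final (.*) takes the rest of the line, which is why the number of groups must match the number of columns in the table definition.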

Caused by: org.apache.spark.sql.AnalysisException: Detected cartesian product for INNER join between logical plans

Error -

Caused by: org.apache.spark.sql.AnalysisException: Detected cartesian product for INNER join between logical plans
RepartitionByExpression [src_db_id#90, ssd_id#83, last_upd_dt#89], 200
+- Filter (isnotnull(ldy#94) && (ldy#94 = 2019))
   +- HiveTableRelation `db2`.`t2`, org.apache.hadoop.hive.ql.io.orc.OrcSerde, [ssd_id#83, flctr_bu_id#84, group_name#85, eval_level#86, last_chg_actn#87, last_upd_oper#88, last_upd_dt#89, src_db_id#90, across_skill_type_eval_type#91, dlf_batch_id#92, dlf_load_dttm#93], [ldy#94]
and
Aggregate [2019]
+- Project
   +- HiveTableRelation `db1`.`t1`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [ssd_id#36, flctr_bu_id#37, group_name#38, eval_level#39, last_chg_actn#40, last_upd_oper#41, last_upd_dt#42, src_db_id#43, across_skill_type_eval_type#44]
Join condition is missing or trivial. Use the CROSS JOIN syntax to allow cartesian products between these relations.;
at org.apache.spark.sql.catalyst.optimizer...
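As the message says, if the cartesian product is actually intended, it has to be declared explicitly; in practice the better fix is usually to supply the missing or non-trivial join condition. A minimal Java sketch of the two ways to allow the product (the table names below are illustrative stand-ins, not taken from the original job):

--------
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class AllowCrossJoinExample {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("allow-cross-join-example")
        .enableHiveSupport()
        .getOrCreate();

    // Illustrative tables - stand-ins for db1.t1 and db2.t2 from the plan above.
    Dataset<Row> df1 = spark.table("db1.t1");
    Dataset<Row> df2 = spark.table("db2.t2");

    // Option 1: declare the cartesian product explicitly (Spark 2.1+).
    Dataset<Row> product = df1.crossJoin(df2);
    product.show(10);

    // Option 2: allow the optimizer to plan cartesian products globally (Spark 2.x flag),
    // after which an inner join with a trivial condition is no longer rejected.
    spark.conf().set("spark.sql.crossJoin.enabled", "true");

    spark.stop();
  }
}
--------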

Error - org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.io.IntWritable

In case your Spark application is failing with the error below:

Caused by: java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.io.IntWritable
at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableIntObjectInspector.get(WritableIntObjectInspector.java:36)
at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$14$$anonfun$apply$5.apply(TableReader.scala:399)
at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$14$$anonfun$apply$5.apply(TableReader.scala:399)

Analysis & Cause - This is a data-reading error. It usually means the ORC data files store a column as Text (i.e. String), while the Hive table defined on top of them declares that column as Int.

Solution - Update the datatype either in the ORC files or in the Hive metadata, so that both are in sync.

To verify this behavior, execute the same query from:
Hive shell - it should work fine.
Spark shell - it should fail.
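To check which type the ORC files actually store, you can dump their schema, either with the hive --orcfiledump <path> utility or programmatically with the ORC reader API, and compare it against DESCRIBE on the Hive table. A minimal sketch, assuming orc-core is on the classpath and using a placeholder file path:

--------
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.orc.OrcFile;
import org.apache.orc.Reader;
import org.apache.orc.TypeDescription;

public class PrintOrcSchema {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Placeholder path - point this at one of the table's ORC data files in HDFS.
    Path orcFile = new Path("/warehouse/db1/t1/000000_0");

    Reader reader = OrcFile.createReader(orcFile, OrcFile.readerOptions(conf));
    TypeDescription schema = reader.getSchema();

    // Prints something like: struct<col1:string,col2:int,...>
    // Compare each column's type here with the Hive definition (DESCRIBE on the table);
    // a string column here versus an int column in Hive reproduces the ClassCastException.
    System.out.println(schema);
  }
}
--------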