Wednesday, July 27, 2016

Interview Support for all Technologies with 100% success rate

We are providing Hadoop, DevOps, Informatica, QlikView, BA, all Testing, Oracle SQL Developer, Java, MSBI, and UNIX Developer training, along with project training, interview, and certification support for experts and beginners. If anyone is interested, please reply to my email, ratrainings@gmail.com, or call me at +91 9640156134. WhatsApp ID: +91 9640156134

Tuesday, May 24, 2016

Free Hadoop Online Training and Job Support, Proxy (Interview Support) @ +91 9640156134

We are providing Big Data Hadoop online training with a real-time project, certification, and interview support. Interested folks, contact me at hadooptoall@gmail.com (or) +91 9640156134 for more details.

Saturday, May 14, 2016

Hadoop proxy for male and female available

Hi,

We are providing Hadoop Developer and Admin proxy support for male and female candidates. Please contact hadooptoall@gmail.com (or) +91 9640156134 for more details.

Regards,
BigData Hadoop Team,
+91 9640156134

Wednesday, November 5, 2014

Performance tuning of Hive queries

Hive performance optimization is a large topic in its own right and is very specific to the queries you are using. In fact, each query in a query file needs separate performance tuning to get the best results.

I'll list a few general approaches used for performance optimization.
Limit the data flow down the queries
In a Hive query, the volume of data that flows down to each level is the factor that decides performance. So if you are executing a script that contains a sequence of HiveQL statements, make sure that data filtering happens in the first few stages rather than carrying unwanted data to the bottom. This gives you significant performance gains, as the queries down the line have much less data to crunch.

This is a common bottleneck when existing SQL jobs are ported to Hive: we just execute the same sequence of SQL steps in Hive as well, which becomes a drag on performance. Understand the requirement behind the existing SQL script and design your Hive job with the data flow in mind, as in the sketch below.
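A minimal sketch of the idea, using hypothetical tables and columns (sales_raw, sale_year, and so on are illustrative, not from any particular schema): apply the filter in the first stage so every later stage works on the reduced data set.

-- Stage 1: filter as early as possible.
CREATE TABLE sales_2014 AS
SELECT store_id, product_id, amount
FROM sales_raw
WHERE sale_year = 2014;

-- Stage 2 and beyond now crunch only the filtered rows.
CREATE TABLE sales_2014_by_store AS
SELECT store_id, SUM(amount) AS total_amount
FROM sales_2014
GROUP BY store_id;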

Use hive merge files
Hive queries are parsed into map-only and map-reduce jobs, and a Hive script will contain lots of queries. Assume one of your queries is parsed into a MapReduce job and the output files from the job are very small, say 10 MB. In such a case, the subsequent query that consumes this data may generate a larger number of map tasks (roughly one per small file) and would be inefficient. If more jobs run on the same data set, all of those jobs become inefficient. In such scenarios, if you enable merge files in Hive, the first query runs a merge job at the end, thereby merging the small files into larger ones. This is controlled using the following parameters:

hive.merge.mapredfiles=true
hive.merge.mapfiles=true (true by default in Hive)

For more control over merge files, you can tweak these properties as well:
hive.merge.size.per.task (the maximum final size of a file after the merge task)
hive.merge.smallfiles.avgsize (the merge job is triggered only if the average output file size is less than the specified value)

The default values for the above properties are
hive.merge.size.per.task=256000000
hive.merge.smallfiles.avgsize=16000000

When you enable merge, an extra map-only job is triggered; whether this job gets you an optimization or an overhead depends entirely on your use case and your queries.
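A minimal sketch of enabling these settings at the top of a Hive script (the values shown are just the defaults listed above):

-- Merge small output files of map-only and map-reduce jobs.
SET hive.merge.mapfiles=true;
SET hive.merge.mapredfiles=true;
-- Target size of each merged file, and the average output-file
-- size below which the merge job is triggered.
SET hive.merge.size.per.task=256000000;
SET hive.merge.smallfiles.avgsize=16000000;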

Join Optimizations
Joins are very expensive. Avoid them if possible. If a join is required, try join optimizations such as map joins, bucketed map joins, etc.
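A minimal sketch of a map join, assuming a small dimension table (dim_store is a hypothetical name) that fits in memory. Hive can convert a join against a small enough table into a map join automatically, or you can force one with the MAPJOIN hint on older versions:

-- Let Hive convert joins against small tables into map joins.
SET hive.auto.convert.join=true;

-- Or force it with a hint (older Hive versions):
SELECT /*+ MAPJOIN(d) */ s.store_id, d.store_name, SUM(s.amount)
FROM sales_2014 s
JOIN dim_store d ON (s.store_id = d.store_id)
GROUP BY s.store_id, d.store_name;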


There is still more to cover on Hive query performance optimization; take this post as a first step. More will be added to this post soon. :)

Monday, September 9, 2013

HADOOP IMPORTANT LINKS

http://www.techspritz.com/hadoop-single-node-cluster-setup/

http://atbrox.com/2011/05/16/mapreduce-hadoop-algorithms-in-academic-papers-4th-update-may-2011/

http://hortonworks.com/blog/hadoop-hadoop-hurrah-hdp-for-windows-is-now-ga/

http://bigdatastudio.com/2013/05/19/big-data-jobs/

http://hadoopblog.blogspot.in/2010/05/facebook-has-worlds-largest-hadoop.html?goback=.gde_4244719_member_243018706

http://www.youtube.com/watch?v=A02SRdyoshM

http://jugnu-life.blogspot.com/2012/03/installing-pig-apache-hadoop-pig.html

http://www.aptibook.com/Technical/Hadoop-interview-questions-and-answers?id=2

http://www.pappupass.com/class/index.php/hadoop/hadoop-interview-questions

http://www.rohitmenon.com/index.php/cloudera-certified-hadoop-developer-ccd-410/

http://kickstarthadoop.blogspot.in/2011/04/word-count-hadoop-map-reduce-example.html

Partitions

Partition: a way to categorize the data in a table.
- When we request a piece of data, partitions let Hive read only the relevant part of the table. By default, a table is non-partitioned.

Types: 1. Partitioned
       2. Non-partitioned (the default)

EX: Non-Partitioned:
Syntax: create table <table name>(col1 data type, col2 data type, ...) row format delimited fields terminated by ',';
Loading: load data local inpath '<local file name>' into table <table name>;
EX: Partitioned:


Syntax EX: hive> create table sales_day(prid int, prname string, quantity int, price double, branch string) partitioned by (day int, month int, year int) row format delimited fields terminated by ',';


hive> load data local inpath 'sales' into table sales_day partition(day=12,month=2,year=2013);

hive> load data local inpath 'sales2' into table sales_day partition(day=13,month=2,year=2013);

hive> select * from sales_day;

Note:
- In Hive, partitions are logical, whereas in an RDBMS partitions are physical.
- We use the technique of partitions to manage incremental loads.
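A short sketch of why partitioning pays off at query time, using the sales_day table created above: filtering on the partition columns lets Hive scan only the matching partition directories (partition pruning) instead of the whole table.

-- Reads only the day=12/month=2/year=2013 partition.
hive> select prid, prname, quantity, price from sales_day where year = 2013 and month = 2 and day = 12;

-- List the partitions Hive knows about.
hive> show partitions sales_day;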

Managed Tables and External Tables

When you create a table in Hive, by default Hive will manage the data, which means that Hive moves the data into its warehouse directory.
Alternatively, you may create an external table, which tells Hive to refer to the data that is at an existing location outside the warehouse directory.

The difference between the two types of table is seen in the LOAD and DROP semantics.

CREATE TABLE managed_table(dummy STRING);
LOAD DATA INPATH '/user/tom/data.txt' INTO TABLE managed_table;

CREATE EXTERNAL TABLE external_table(dummy STRING)
LOCATION '/user/tom/external_table';

LOAD DATA INPATH '/user/tom/data.txt' INTO TABLE external_table;
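The DROP semantics are where the two differ most: dropping a managed table deletes both the metadata and the data in the warehouse directory, while dropping an external table removes only the metadata. A minimal illustration, continuing the example above:

-- Managed table: metadata AND the data in the warehouse directory are deleted.
DROP TABLE managed_table;

-- External table: only the metadata is deleted; the files at
-- /user/tom/external_table are left untouched.
DROP TABLE external_table;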


Which one to use?
As a rule of thumb, if you are doing all your processing with Hive, then use managed tables, but if you wish to use Hive and other tools on the same dataset, then use external tables. A common pattern is to use an external table to access an initial dataset stored in HDFS (created by another process), then use a Hive transform to move the data into a managed Hive table. This works the other way around, too—an external table (not necessarily on HDFS) can be used to export data from Hive for other applications to use.
Another reason for using external tables is when you wish to associate multiple schemas with the same dataset.
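A minimal sketch of the multiple-schemas idea, with hypothetical table names and locations: two external tables can point at the same HDFS directory and expose different schemas over the same files.

-- Both tables read the same underlying files; neither owns the data.
CREATE EXTERNAL TABLE logs_raw(line STRING)
LOCATION '/user/tom/logs';

CREATE EXTERNAL TABLE logs_parsed(ts STRING, level STRING, msg STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/user/tom/logs';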