
Friday, July 31, 2015

Reg: Need Hadoop Developer

Hi Partners, 
 
Please go through the requirement below and send an updated resume to

Title: Hadoop Developer
Location: Atlanta, GA
Duration: 12 months
Interview: Phone and Skype


The Hadoop Developer will:
Design, develop, and deliver Big Data solutions that fulfill the strategic vision for enterprise applications and successfully support the business.
Perform the full deployment lifecycle, from on-premises to the cloud, including installation, configuration, initial production deployment, recovery, security and data governance for Hadoop.
Evaluate and provide technical solutions to design, develop, and support business units that wish to implement an information technology solution.
Refine raw data into actionable insight using visualization and statistics with innovative analytics applications and systems.
Develop applications that interact with data in the most appropriate way using the latest tools; Hortonworks Data Platform (HDP) preferred.
Lead the implementation (installation and configuration) of HDP 2.2 with a complete cluster deployment layout (replication factors, NFS Gateway setup for access to HDFS data, resource managers, node managers, and the various phases of MapReduce jobs).
Participate in the design, development, validation, and maintenance of the Big Data platform and associated applications.  
Provide architectural oversight of how the platform is built to ensure that it supports high-volume / high-velocity data streams and scales to meet growth expectations.
Monitor workflows and job execution using the Ambari UI, Ganglia or any equivalent tools.  
Assist administrators with commissioning and decommissioning nodes.
Back up and recover Hadoop data using snapshots and high availability.
Develop, implement, and participate in designing column family schemas for Hive and HBase within HDFS (see the HBase sketch after this list).
Develop the data layer for performance-critical reporting systems.
Recommend and assist with the design and development of HDFS data layouts (Hive data partitioning, vectorization, and bucketing with Hortonworks Big Insights query tools); see the Hive DDL sketch after this list.
Perform day-to-day operational tasks, using Flume and Sqoop to move data to and from different RDBMSs.
Develop guidelines and plans for performance tuning of a Hadoop/NoSQL environment, including impact analysis of MapReduce jobs using the cost-based optimizer (CBO) and analytical conversions.
Implement a mixed batch / near-real-time architecture to analyze, index, and publish data for applications.
Write a custom reducer that reduces the number of underlying MapReduce jobs generated from a Hive query (see the reducer sketch after this list).
Develop efficient Hive scripts with joins on datasets using a variety of techniques (map-side and sort-merge joins with various analytical functions).
Develop jobs to capture CDC (Change Data Capture) from Hive-based internal, external, and managed systems.
Partner with key internal teams to ensure that the Big Data solution identifies all the data points in upstream systems and classifies them appropriately to support analytic objectives.
Identify and implement appropriate information delivery mechanisms that improve decision-making capability for our customers.
Design, develop, and troubleshoot transformations to ingest and manipulate data from various sources within the company and its extended environment using native Hadoop tools or other ETL tools.
Design and set up exception-handling jobs.
Write Oracle scripts, functions, stored procedures, complex SQL queries, PL/SQL analytical functions, and hierarchical parent-child queries to support application systems (see the Oracle sketch after this list).
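
For the column-family item above, here is a minimal sketch of HBase schema creation, assuming the HBase 1.x Java client; the "web_events" table and its two column families are made-up names for illustration, not a prescribed design.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CreateEventTable {
    public static void main(String[] args) throws Exception {
        // Reads hbase-site.xml from the classpath for ZooKeeper/cluster details.
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            // Hypothetical table: one family for raw ingested data, one for
            // derived attributes, each with its own version retention.
            HTableDescriptor table = new HTableDescriptor(TableName.valueOf("web_events"));
            table.addFamily(new HColumnDescriptor("raw").setMaxVersions(1));
            table.addFamily(new HColumnDescriptor("derived").setMaxVersions(3));
            if (!admin.tableExists(table.getTableName())) {
                admin.createTable(table);
            }
        }
    }
}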
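
For the partitioning, vectorization, and bucketing item, the sketch below issues Hive DDL through the HiveServer2 JDBC driver. The host name, credentials, and the "sales" table definition are assumptions made for the example.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreatePartitionedSalesTable {
    public static void main(String[] args) throws Exception {
        // Older Hive JDBC jars may need the driver registered explicitly.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        // HiveServer2 endpoint; host, port, and database are assumptions.
        String url = "jdbc:hive2://hiveserver2.example.com:10000/default";
        try (Connection conn = DriverManager.getConnection(url, "hive", "");
             Statement stmt = conn.createStatement()) {
            // Partition by date so queries can prune directories, and bucket by
            // customer_id so map-side / sort-merge joins can use the buckets.
            stmt.execute(
                "CREATE TABLE IF NOT EXISTS sales ("
                + " customer_id BIGINT, amount DOUBLE)"
                + " PARTITIONED BY (sale_date STRING)"
                + " CLUSTERED BY (customer_id) SORTED BY (customer_id) INTO 32 BUCKETS"
                + " STORED AS ORC");
            // Enable vectorized execution for later queries in this session.
            stmt.execute("SET hive.vectorized.execution.enabled=true");
        }
    }
}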
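
The custom-reducer item concerns hand-written MapReduce logic behind Hive workloads. As a generic illustration only (it does not implement any particular Hive-plan optimization), this sketch sums all values observed for a key:

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Generic aggregation reducer: sums all values observed for a key.
public class SumReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
    private final LongWritable total = new LongWritable();

    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context)
            throws IOException, InterruptedException {
        long sum = 0;
        for (LongWritable value : values) {
            sum += value.get();
        }
        total.set(sum);
        context.write(key, total);
    }
}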
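
For the Oracle item, the sketch below runs a hierarchical parent-child query (CONNECT BY) together with an analytical function over JDBC. The connection URL, credentials, and the "employees" table are assumptions for the example.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class OrgHierarchyReport {
    public static void main(String[] args) throws Exception {
        // Requires the Oracle JDBC driver (ojdbc) on the classpath.
        String url = "jdbc:oracle:thin:@//db.example.com:1521/ORCL";
        try (Connection conn = DriverManager.getConnection(url, "app_user", "secret");
             Statement stmt = conn.createStatement();
             // CONNECT BY walks the manager -> employee parent-child relationship;
             // the analytic SUM adds a per-department salary total to each row.
             ResultSet rs = stmt.executeQuery(
                 "SELECT LEVEL AS depth, employee_id, manager_id,"
                 + " SUM(salary) OVER (PARTITION BY department_id) AS dept_salary"
                 + " FROM employees"
                 + " START WITH manager_id IS NULL"
                 + " CONNECT BY PRIOR employee_id = manager_id")) {
            while (rs.next()) {
                System.out.printf("%d %d %.2f%n",
                        rs.getInt("depth"), rs.getInt("employee_id"), rs.getDouble("dept_salary"));
            }
        }
    }
}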

Preferred Background:

A degree in computer science or a similar discipline.
3+ years of development experience in technologies such as Hadoop (HDP preferred), Pentaho (or any ETL tool), and Oracle databases.
A very strong SQL/data analysis or data mining background
Experience with Business Intelligence and Data Warehousing.
Comprehension of large-scale data management environments (relational and/or NoSQL), audit controls, and ETL frameworks is expected.
Prior experience in building scalable distributed data processing solutions with Hadoop using tools such as HBase (NoSQL), Hive, Flume, and Sqoop.
Proficiency with MapReduce/HDFS architecture, Linux or Unix system management, and a variety of scripting languages.
Hortonworks-certified developers strongly preferred; Cloudera certification is also acceptable.
Experience with configuring workflows and deployment using tools such as Apache Oozie (see the submission sketch after this list).
Comprehension of rack awareness and topology is preferred.
Experience designing Hadoop flat and star models with MapReduce impact analysis.
Experience with real-time big data reporting systems.
Experience with advanced Hive features such as windowing, the cost-based optimizer (CBO), views, ORC files, and compression techniques.
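
As a rough illustration of Oozie-based workflow configuration and deployment, the sketch below submits a workflow with the Oozie Java client and polls for completion. The Oozie URL, HDFS application path, and cluster addresses are placeholders.

import java.util.Properties;
import org.apache.oozie.client.OozieClient;
import org.apache.oozie.client.WorkflowJob;

public class SubmitWorkflow {
    public static void main(String[] args) throws Exception {
        // Oozie server URL is a placeholder.
        OozieClient oozie = new OozieClient("http://oozie.example.com:11000/oozie");

        // Workflow properties; the HDFS paths and addresses are placeholders.
        Properties conf = oozie.createConfiguration();
        conf.setProperty(OozieClient.APP_PATH, "hdfs://namenode:8020/apps/etl/workflow.xml");
        conf.setProperty("nameNode", "hdfs://namenode:8020");
        conf.setProperty("jobTracker", "resourcemanager.example.com:8050");

        // Submit and start the workflow, then poll until it leaves RUNNING.
        String jobId = oozie.run(conf);
        while (oozie.getJobInfo(jobId).getStatus() == WorkflowJob.Status.RUNNING) {
            Thread.sleep(10000);
        }
        System.out.println("Workflow " + jobId + " finished as " + oozie.getJobInfo(jobId).getStatus());
    }
}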

Thanks & Regards
 
Steve
 
Competent Systems, Inc
Desk: (678)-885-9500, Ext: 34

