
Big Data Developer

  • On request
  • 60326 Frankfurt am Main
  • Worldwide
  • ta  |  en  |  ml
  • 06.11.2018

Brief introduction

Nearly 10 years of experience across the software development life cycle: design, development and support of systems and application architecture. Cloudera Certified Hadoop Developer.

Excerpt of references (3)

"Mr. [...] [...] is a very sincere and competant data engineer , he has worked in capacity in our organisation."
Data Engineer (permanent position)
Client name anonymized
Period of activity

8/2017 – 1/2018

Description of activities

Analyzed the data available with the client and provided meaningful solutions and predictions based on the existing data. Designed a system to bring data held in various source systems together at one common point for analysis.

Skills used

Apache Hadoop

"He was sincere, hardworking, prompt while working for us. I am happy with the performance and highly recommend and happy work with him in future"
Software Developer
Client name anonymized
Period of activity

6/2014 – 5/2015

Description of activities

• Installed and configured Hadoop MapReduce and HDFS, and developed multiple MapReduce jobs in Java for data cleansing and preprocessing.
• Imported and exported data into HDFS and Hive using Sqoop.
• Involved in defining job flows and in managing and reviewing log files.
• Extracted files from MySQL through Sqoop, placed them in HDFS and processed them.
• Loaded and transformed large sets of structured, semi-structured and unstructured data.
• Responsible for managing data coming from different sources.
• Supported MapReduce programs running on the cluster.
• Involved in loading data from the UNIX file system into HDFS.
• Involved in creating Hive tables, loading them with data and writing Hive queries that run internally as MapReduce jobs.
• Designed MySQL tables and created stored procedures for various operations.

Skills used

Apache Hadoop

"Mr. [...] [...] M was working in our firm as Software Developer (Big Data). He was sincere,hardworking competent in his duties, and efficient."
Senior Developer
Mahesh Kumar
Period of activity

4/2013 – 5/2013

Description of activities

Project 1: Analysis of products on various e-commerce sites
Analysis of products across various e-commerce sites and comparison of their prices, so that customers can choose the best-priced option.
Environment: Pig, MapReduce, Linux, Flume, Custom Spider, Hive.
Roles and responsibilities:
• Involved in creating Hive tables, loading them with data and writing Hive queries that run internally as MapReduce jobs.
• Wrote Pig UDFs for analysis.

Project 2: Twitter tweet analysis
Analysis of Twitter tweets on various hashtags, reporting on the latest trends based on the data collected.
Environment: Pig, MapReduce, Linux, Flume, Hive.
Roles and responsibilities:
• Involved in creating Hive tables, loading them with data and writing Hive queries that run internally as MapReduce jobs.
• Wrote Pig UDFs for analysis.
• Connected to Twitter using Flume, collected the data and moved it to HDFS.

Project 3: Patient appointment management
Design and creation of a web-based patient appointment and management system that lets patients and doctors schedule, cancel and reschedule appointments online.
Environment: Java, Struts, JavaScript, MySQL 10.3.
Roles and responsibilities:
• Responsible for and active in the analysis, design, implementation and deployment phases of the full software development life cycle (SDLC) of the project.
• Extensively used the Struts framework as the controller to handle client requests and invoke the model based on user requests.
• Defined the search criteria, pulled the customer's record from the database, made the required changes and saved the updated record back to the database.

Skills used

Apache Hadoop

Qualifications

  • Apache Hadoop (3 years)

Project & professional experience

Data Engineer (permanent position)
Client name anonymized, Kuala Lumpur
8/2017 – 1/2018 (6 months)
Public administration
Period of activity

8/2017 – 1/2018

Description of activities

Analyzed the data available with the client and provided meaningful solutions and predictions based on the existing data. Designed a system to bring data held in various source systems together at one common point for analysis.

Skills used

Apache Hadoop

Hadoop Developer
Datalynx Sdn Bhd, Kuala Lumpur
7/2016 – 8/2017 (1 year, 2 months)
Healthcare
Period of activity

7/2016 – 8/2017

Description of activities

• Handled the import of data (ETL) from various data sources, performed transformations using Hive and MapReduce, and loaded the data into HDFS. Extracted data from SAP HANA into HDFS using Sqoop at regular, automated intervals.
• Analyzed the data by running Hive queries, and used Drill to understand user behaviour.
• Continuously monitored and managed the Hadoop cluster through HDPManager.
• Installed the Quartz automation engine to run multiple Hive and Sqoop jobs.
• Developed Hive queries to process the data and generate data cubes for visualization, moving the results to Elasticsearch via Kafka (a minimal sketch follows after this list).
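
A minimal, hypothetical Java sketch of that last step, pushing exported Hive cube rows onto a Kafka topic for downstream indexing into Elasticsearch. The broker address, topic name and record format are assumptions, not details from the project.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class CubeRowPublisher {

    public static void main(String[] args) {
        // Broker address and serializers; the host name is a placeholder.
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-broker:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Each exported Hive result row (e.g. one line of a data cube) becomes one
            // message; a downstream consumer indexes these messages into Elasticsearch.
            String cubeRow = "2017-03,total_visits,4521";
            producer.send(new ProducerRecord<>("hive-cube-rows", cubeRow));
            producer.flush();
        }
    }
}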

Skills used

Apache Hadoop

Big Data Hadoop Developer
IDFC Bank Pvt Ltd, Mumbai
6/2015 – 6/2016 (1 year, 1 month)
Banking
Period of activity

6/2015 – 6/2016

Description of activities

• Handled the import of data from various vendor systems onto a common platform, validated the data for missing information and brought it into a consistent type and order before sending it to the recon calculation (an illustrative normalization sketch follows after this list).
• Responsible for cluster maintenance: adding and removing cluster nodes, cluster monitoring and troubleshooting, managing and reviewing data backups, and managing and reviewing Hadoop log files.
• Optimized the existing process to reduce its runtime.
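
Purely as an illustration of that validation and normalization step, the Java sketch below reorders a delimited vendor record into a common field order, normalizes the date format and rejects records with missing mandatory fields. Every field name, delimiter and date format here is an assumption, not the bank's actual schema.

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.Arrays;
import java.util.List;

/** Illustrative only: field names, order and formats are assumptions, not the real schema. */
public class VendorRecordNormalizer {

    private static final DateTimeFormatter RECON_DATE = DateTimeFormatter.ISO_LOCAL_DATE;

    /** Reorders a delimited vendor record into (tradeId, isin, amount, valueDate). */
    public static String normalize(String rawLine, String delimiterRegex,
                                    int[] fieldOrder, DateTimeFormatter vendorDate) {
        String[] parts = rawLine.split(delimiterRegex, -1);
        String tradeId   = parts[fieldOrder[0]].trim();
        String isin      = parts[fieldOrder[1]].trim();
        String amount    = parts[fieldOrder[2]].trim();
        String valueDate = LocalDate.parse(parts[fieldOrder[3]].trim(), vendorDate).format(RECON_DATE);

        List<String> fields = Arrays.asList(tradeId, isin, amount, valueDate);
        if (fields.stream().anyMatch(String::isEmpty)) {
            throw new IllegalArgumentException("Missing mandatory field in record: " + rawLine);
        }
        return String.join(",", fields);
    }

    public static void main(String[] args) {
        // A hypothetical vendor sends pipe-delimited records with the date as dd/MM/yyyy.
        String normalized = normalize("T-1001|ISIN-TEST-01|2500.00|15/06/2015", "\\|",
                new int[]{0, 1, 2, 3}, DateTimeFormatter.ofPattern("dd/MM/yyyy"));
        System.out.println(normalized); // T-1001,ISIN-TEST-01,2500.00,2015-06-15
    }
}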

Skills used

Apache Hadoop

Software Developer
Vuelogix Technologies Pvt Ltd, Bangalore
6/2014 – 5/2015 (1 year)
Banking
Period of activity

6/2014 – 5/2015

Description of activities

• Installed and configured Hadoop MapReduce and HDFS, and developed multiple MapReduce jobs in Java for data cleansing and preprocessing (a minimal mapper sketch follows after this list).
• Imported and exported data into HDFS and Hive using Sqoop.
• Involved in defining job flows and in managing and reviewing log files.
• Extracted files from MySQL through Sqoop, placed them in HDFS and processed them.
• Loaded and transformed large sets of structured, semi-structured and unstructured data.
• Responsible for managing data coming from different sources.
• Supported MapReduce programs running on the cluster.
• Involved in loading data from the UNIX file system into HDFS.
• Involved in creating Hive tables, loading them with data and writing Hive queries that run internally as MapReduce jobs.
• Designed MySQL tables and created stored procedures for various operations.
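
A minimal sketch of such a cleansing job is shown below, assuming comma-delimited input with a fixed column count (both assumptions rather than project specifics); it drops malformed rows and normalizes empty fields.

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

/** Illustrative cleansing mapper; column count, delimiter and rules are assumptions. */
public class CleansingMapper extends Mapper<LongWritable, Text, Text, NullWritable> {

    private static final int EXPECTED_COLUMNS = 5;
    private final Text cleaned = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] columns = value.toString().split(",", -1);

        // Drop rows that do not have the expected number of columns.
        if (columns.length != EXPECTED_COLUMNS) {
            return;
        }
        // Trim whitespace and normalize empty strings to the literal "NULL".
        StringBuilder row = new StringBuilder();
        for (int i = 0; i < columns.length; i++) {
            String col = columns[i].trim();
            row.append(col.isEmpty() ? "NULL" : col);
            if (i < columns.length - 1) {
                row.append(',');
            }
        }
        cleaned.set(row.toString());
        context.write(cleaned, NullWritable.get());
    }
}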

Skills used

Apache Hadoop

Senior Developer
LysInfotech, Bangalore
4/2013 – 5/2013 (2 months)
Social institutions
Period of activity

4/2013 – 5/2013

Description of activities

Project 1: Analysis of products on various e-commerce sites
Analysis of products across various e-commerce sites and comparison of their prices, so that customers can choose the best-priced option.
Environment: Pig, MapReduce, Linux, Flume, Custom Spider, Hive.
Roles and responsibilities:
• Involved in creating Hive tables, loading them with data and writing Hive queries that run internally as MapReduce jobs.
• Wrote Pig UDFs for analysis (an illustrative UDF sketch follows after this list).
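
As an illustration of the kind of Pig UDF referred to above (not the original project code), the sketch below normalizes a product price string so that prices scraped from different sites can be compared numerically.

import java.io.IOException;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;

/** Illustrative UDF: turns a price string such as "Rs. 1,299.00" into "1299.00". */
public class NormalizePrice extends EvalFunc<String> {

    @Override
    public String exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0 || input.get(0) == null) {
            return null;
        }
        String raw = input.get(0).toString();
        // Keep digits and the decimal point only, dropping currency symbols and separators.
        String cleaned = raw.replaceAll("[^0-9.]", "");
        return cleaned.isEmpty() ? null : cleaned;
    }
}

In a Pig script, such a UDF would be registered with REGISTER and then invoked like a built-in function, e.g. NormalizePrice(price).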

Project 2: Twitter tweet analysis
Analysis of Twitter tweets on various hashtags, reporting on the latest trends based on the data collected.
Environment: Pig, MapReduce, Linux, Flume, Hive.
Roles and responsibilities:
• Involved in creating Hive tables, loading them with data and writing Hive queries that run internally as MapReduce jobs.
• Wrote Pig UDFs for analysis.
• Connected to Twitter using Flume, collected the data and moved it to HDFS.

Project 3: Patient appointment management
Design and creation of a web-based patient appointment and management system that lets patients and doctors schedule, cancel and reschedule appointments online.
Environment: Java, Struts, JavaScript, MySQL 10.3.
Roles and responsibilities:
• Responsible for and active in the analysis, design, implementation and deployment phases of the full software development life cycle (SDLC) of the project.
• Extensively used the Struts framework as the controller to handle client requests and invoke the model based on user requests.
• Defined the search criteria, pulled the customer's record from the database, made the required changes and saved the updated record back to the database (a sketch of such a controller action follows after this list).
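
Below is a sketch, under stated assumptions, of how such a search might be handled in a Struts 1 action class; the request parameter name, the DAO stub and the forward name are placeholders rather than the original implementation.

import java.util.Collections;
import java.util.List;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.struts.action.Action;
import org.apache.struts.action.ActionForm;
import org.apache.struts.action.ActionForward;
import org.apache.struts.action.ActionMapping;

public class SearchAppointmentAction extends Action {

    @Override
    public ActionForward execute(ActionMapping mapping, ActionForm form,
                                 HttpServletRequest request, HttpServletResponse response)
            throws Exception {
        // Read the search criterion posted by the JSP; the parameter name is an assumption.
        String patientId = request.getParameter("patientId");

        // Hypothetical DAO; the real project would query the MySQL tables here.
        List<String> appointments = new AppointmentDao().findByPatientId(patientId);
        request.setAttribute("appointments", appointments);

        // Forward to the result page configured in struts-config.xml.
        return mapping.findForward("success");
    }

    /** Placeholder DAO used only to keep the sketch self-contained. */
    static class AppointmentDao {
        List<String> findByPatientId(String patientId) {
            return Collections.singletonList("2013-04-15 10:00 (patient " + patientId + ")");
        }
    }
}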

Skills used

Apache Hadoop

Certificates

Cloudera Certified Developer for Apache Hadoop
2015
Oracle Certified Java Programmer
2013

Additional skills

Cloudera Certified Big Data Developer with more than 10 years of experience.

Personal data

Languages
  • English (fluent)
  • Tamil (native)
  • Malayalam (fluent)
  • Hindi (good)
Willingness to travel
Worldwide
Work permit
  • European Union
Profile views
1600
Age
40
Professional experience
16 years and 6 months (since 06/2008)
Project management
2 years
