Big Data Developer
- 3 references
- On request
- 60326 Frankfurt am Main
- Worldwide
- ta | en | ml
- 06.11.2018
Short profile
References excerpt (3)
"Mr. [...] [...] is a very sincere and competant data engineer , he has worked in capacity in our organisation."
8/2017 – 1/2018
Skills used: Apache Hadoop
"He was sincere, hardworking, prompt while working for us. I am happy with the performance and highly recommend and happy work with him in future"
6/2014 – 5/2015
Skills used: Apache Hadoop
"Mr. [...] [...] M was working in our firm as Software Developer (Big Data). He was sincere,hardworking competent in his duties, and efficient."
4/2013 – 5/2013
Skills used: Apache Hadoop
Qualifications
Project & work experience
8/2017 – 1/2018
Job description
Analysed the data available with the client and provided a meaningful solution or prediction from it. Designed a system to bring data available from various sources together at one common point for analysis.
Skills used: Apache Hadoop
7/2016 – 8/2017
Job description
• Handled importing of data (ETL) from various data sources, performed transformations using Hive and MapReduce, and loaded the data into HDFS. Extracted data from SAP HANA into HDFS using Sqoop at regular, automated intervals.
• Analyzed the data with Hive queries and Apache Drill to understand user behaviour.
• Continuously monitored and managed the Hadoop cluster through HDP Manager.
• Installed the Quartz automation engine to run multiple Hive and Sqoop jobs.
• Developed Hive queries to process the data and generate data cubes for visualization, moving the results to Elasticsearch via Kafka.
Skills used: Apache Hadoop
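Illustrative only: the "data cube" step above is, at its core, grouping events by dimension fields and aggregating a measure, which Hive expresses as a GROUP BY. A minimal pure-Python sketch of that roll-up (sample data and field names are invented, not from the project):

```python
from collections import defaultdict

# Hypothetical click events; in the project this data came from SAP HANA via Sqoop.
events = [
    {"region": "EU", "product": "A", "clicks": 10},
    {"region": "EU", "product": "A", "clicks": 5},
    {"region": "US", "product": "B", "clicks": 7},
]

def roll_up(rows, dims, measure):
    """Group rows by the given dimension fields and sum the measure,
    mimicking a HiveQL GROUP BY used to build a data cube."""
    cube = defaultdict(int)
    for row in rows:
        key = tuple(row[d] for d in dims)
        cube[key] += row[measure]
    return dict(cube)

cube = roll_up(events, ["region", "product"], "clicks")
# In the described pipeline, each resulting cell would then be published
# to Elasticsearch via Kafka for visualization.
```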
6/2015 – 6/2016
Job description
• Handled importing of data from various vendor systems into a common platform; validated the data for missing information and brought it into a common type and order before sending it to the recon calculation.
• Responsible for cluster maintenance: adding and removing cluster nodes, cluster monitoring and troubleshooting, and managing and reviewing data backups and Hadoop log files.
• Optimised the existing process to reduce its runtime.
Skills used: Apache Hadoop
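The vendor-normalization step described above (map differing vendor field names onto one schema and flag missing information before reconciliation) can be sketched in a few lines of Python. The vendor names, field maps and schema here are hypothetical, purely to illustrate the technique:

```python
# Common schema expected by the (hypothetical) recon calculation.
REQUIRED = ("trade_id", "amount", "currency")

# Hypothetical per-vendor field renamings.
FIELD_MAP = {
    "vendor_a": {"id": "trade_id", "amt": "amount", "ccy": "currency"},
    "vendor_b": {"trade": "trade_id", "value": "amount", "cur": "currency"},
}

def normalize(vendor, record):
    """Rename vendor-specific fields to the common schema and report
    which required fields are missing or empty."""
    mapped = {FIELD_MAP[vendor].get(k, k): v for k, v in record.items()}
    missing = [f for f in REQUIRED if mapped.get(f) in (None, "")]
    return {f: mapped.get(f) for f in REQUIRED}, missing

row, missing = normalize("vendor_a", {"id": "T1", "amt": 100, "ccy": "EUR"})
```

Records with a non-empty `missing` list would be held back for correction instead of being passed on to the recon calculation.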
6/2014 – 5/2015
Job description
• Installed and configured Hadoop MapReduce and HDFS, and developed multiple MapReduce jobs in Java for data cleansing and preprocessing.
• Imported and exported data into HDFS and Hive using Sqoop.
• Involved in defining job flows and in managing and reviewing log files.
• Extracted files from MySQL through Sqoop, placed them in HDFS, and processed them.
• Loaded and transformed large sets of structured, semi-structured and unstructured data.
• Responsible for managing data coming from different sources.
• Supported MapReduce programs running on the cluster.
• Involved in loading data from the UNIX file system into HDFS.
• Involved in creating Hive tables, loading them with data, and writing Hive queries that execute internally as MapReduce jobs.
• Designed MySQL tables and created stored procedures for various operations.
Skills used: Apache Hadoop
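The point about Hive queries executing internally as MapReduce jobs can be illustrated with a toy map/reduce pipeline in plain Python. This is a sketch of the execution model, not of Hive itself; the data and field names are invented:

```python
from itertools import groupby
from operator import itemgetter

def map_phase(records):
    # Emit (key, 1) pairs, as the map task of a compiled
    # "SELECT country, COUNT(*) ... GROUP BY country" would.
    return [(r["country"], 1) for r in records]

def reduce_phase(pairs):
    # Shuffle/sort by key, then sum per key, as the reduce task would.
    pairs = sorted(pairs, key=itemgetter(0))
    return {k: sum(v for _, v in grp) for k, grp in groupby(pairs, key=itemgetter(0))}

users = [{"country": "DE"}, {"country": "DE"}, {"country": "IN"}]
counts = reduce_phase(map_phase(users))
```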
4/2013 – 5/2013
Job description
Project 1: Analysis of products on various e-commerce sites
Analysed products across various e-commerce sites and compared their prices so that customers could choose the best-priced option.
Environment: Pig, MapReduce, Linux, Flume, custom spider, Hive.
Roles and responsibilities:
• Involved in creating Hive tables, loading them with data, and writing Hive queries that execute internally as MapReduce jobs.
• Wrote Pig UDFs for analysis.
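The heart of the price comparison in Project 1 is picking the cheapest listing per product. A minimal Python sketch of that selection (the actual project did this in Pig/Hive; site names and prices here are made up):

```python
# Hypothetical scraped listings for one product across sites; the real project
# collected such data with a custom spider and Flume before analysis.
listings = [
    {"site": "shop-a.example", "price": 249.99},
    {"site": "shop-b.example", "price": 239.50},
    {"site": "shop-c.example", "price": 259.00},
]

def best_offer(rows):
    """Return the listing with the lowest price,
    equivalent to an ORDER BY price LIMIT 1 in HiveQL."""
    return min(rows, key=lambda r: r["price"])

best = best_offer(listings)
```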
Project 2: Twitter tweet analysis
Analysed tweets on various hashtags and reported on the latest trends based on the collected data.
Environment: Pig, MapReduce, Linux, Flume, Hive.
Roles and responsibilities:
• Involved in creating Hive tables, loading them with data, and writing Hive queries that execute internally as MapReduce jobs.
• Wrote Pig UDFs for analysis.
• Connected to Twitter using Flume, collected the data, and moved it to HDFS.
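The trend report in Project 2 boils down to counting hashtag occurrences and ranking them. A Python sketch of that core step (the real pipeline did this at scale in Pig/Hive over Flume-ingested data; the sample tweets are invented):

```python
from collections import Counter
import re

# Hypothetical tweets; the described pipeline streamed them into HDFS via Flume.
tweets = [
    "Loving the new release #bigdata #hadoop",
    "Great meetup tonight #bigdata",
    "#hive tips and tricks #bigdata",
]

def trending(texts, top=2):
    """Extract hashtags, count occurrences, and return the most frequent,
    i.e. the core of the trend report described above."""
    tags = Counter(tag.lower() for t in texts for tag in re.findall(r"#\w+", t))
    return tags.most_common(top)

top_tags = trending(tweets)
```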
Project 3: Patient appointment management
Designed and created a web-based patient appointment and management system that lets patients and doctors schedule, cancel and reschedule appointments online.
Environment: Java, Struts, JavaScript, MySQL 10.3.
Roles and responsibilities:
• Actively responsible for analysis, design, implementation and deployment across the full Software Development Life Cycle (SDLC) of the project.
• Extensively used the Struts framework as the controller to handle client requests and invoke the model based on user requests.
• Defined search criteria to pull customer records from the database, make the required changes, and save the updated records back to the database.
Skills used: Apache Hadoop
Certificates
Further skills
Personal data
- English (fluent)
- Tamil (native)
- Malayalam (fluent)
- Hindi (good)
- European Union
Contact details