
DHL Hiring Data Engineers | 2018-2022 Batch | 9 LPA Salary | Wingineers

DHL is Hiring 2018-2022 Batch Pass-out Students

DHL Data Engineer Jobs

About Company


DHL is a logistics company providing courier, package delivery and express mail services, delivering over 1.7 billion parcels per year.

 

The founders of DHL, Adrian Dalsey, Larry Hillblom, and Robert Lynn, had no idea that their company would transform the logistics industry when they created it in 1969. DHL is currently the top logistics firm in the world. Every day, its 600,000 employees in more than 220 countries and territories strive to help customers expand their companies, enter new markets, and transcend borders, or simply send a letter to their loved ones.

DHL Data Engineer Recruitment Details

Company DHL
Post Data Engineer
Degree BE/B.Tech/M.Tech
Branch CS/IT/DS/DA
Batch 2018/2019/2020/2021/2022
CTC Rs 9 LPA (Expected)
Location Pan India


Job Description

You will be initially responsible for:

 

• Designing, developing and maintaining near-real time ingestion pipelines through Qlik Replicate (or alternative technology) replicating data from transactional Databases to our Data Eco-system powered by Azure Data Lake and Snowflake.

• Monitoring and supporting batch data pipelines from transactional Databases to our Data Eco-system powered by Azure Data Lake and Snowflake.

• Setting up new metrics and monitoring existing ones, analyzing data, performing root cause analysis and proposing issue resolutions.

• Managing the lifecycle of all incidents, ensuring that normal service operation is restored as quickly as possible and minimizing the impact on business operations.

• Documenting data pipelines, data models, and data integration processes to facilitate knowledge sharing and maintain data lineage.

• Cooperating with other Data Platform & Operations team members and our stakeholders to identify and implement system and process improvements.

• Leveraging DevOps framework and CI/CD.

• Supporting and promoting Agile way of working using SCRUM framework.
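To make the first responsibility concrete, here is a minimal, hypothetical sketch of the incremental (CDC-style) loading idea behind such pipelines: rows changed in the source since the last watermark are merged into the target. This is not DHL's actual stack; `sqlite3` stands in for both the transactional database and the Snowflake/Data Lake target, and names like `incremental_load`, `orders` and `change_seq` are invented for illustration.

```python
import sqlite3

def incremental_load(src, tgt, watermark):
    """Copy rows with change_seq > watermark from src into tgt (upsert by id)."""
    rows = src.execute(
        "SELECT id, name, change_seq FROM orders WHERE change_seq > ?",
        (watermark,),
    ).fetchall()
    for r in rows:
        # Upsert so re-delivered or updated rows overwrite earlier versions.
        tgt.execute(
            "INSERT INTO orders (id, name, change_seq) VALUES (?, ?, ?) "
            "ON CONFLICT(id) DO UPDATE SET name=excluded.name, "
            "change_seq=excluded.change_seq",
            r,
        )
    # New watermark = highest change sequence seen, or the old one if no rows.
    return max([r[2] for r in rows], default=watermark)

src = sqlite3.connect(":memory:")
tgt = sqlite3.connect(":memory:")
for db in (src, tgt):
    db.execute(
        "CREATE TABLE orders (id INTEGER PRIMARY KEY, name TEXT, change_seq INTEGER)"
    )
src.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, "alpha", 1), (2, "beta", 2), (3, "gamma", 3)])

wm = incremental_load(src, tgt, watermark=1)  # only rows with change_seq > 1 move
print(wm)  # 3
print(tgt.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # 2
```

A real tool like Qlik Replicate reads the database transaction log instead of polling a sequence column, but the watermark-and-upsert shape of the work is similar.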

Who Can Apply


You must have:

You must have:

• Bachelor’s degree in Engineering/Technology;
• Minimum 2-3 years of experience in a Data Engineer role;
• Expertise using relational database systems such as Oracle, MS/Azure SQL, MySQL, etc.;
• Expert SQL knowledge. It’s great if you have experience with the Snowflake SaaS data warehouse or alternative solutions;
• Practical experience developing and/or supporting CDC data pipelines – we use Qlik Replicate but any other technology is welcome!
• Excellent problem-solving, communication, and collaboration skills to provide effective support and assistance in data engineering projects;
• Experience with development and/or support of Lakehouse architectures – we use Parquet/Delta, Synapse Serverless and Databricks/Databricks SQL;
• Proficiency in Python programming and experience with PySpark libraries and APIs;
• Very good understanding of the Software Development Lifecycle, source code management, code reviews, etc.;
• Experience managing the incident lifecycle from ticket creation to closure (we use ServiceNow and JIRA);
• Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement;
• Experience building processes supporting data transformation, data structures, metadata, dependency and workload management.

You should have:

• Experience with Data Lake/Big Data project implementation in Cloud (preferably MS Azure) and/or on-premise platforms:
  o Cloud – Azure technology stack: ADLS Gen2, Databricks (proven experience is a big plus!), EventHub, Stream Analytics, Synapse Analytics, AKS, Key Vault;
  o On-premise: Spark, HDFS, Hive, Hadoop distributions (Cloudera or MapR), Kafka, Airflow (or any other scheduler);
• Ability to develop, maintain and distribute code in a modularized fashion;
• Working experience with a DevOps framework;
• Ability to collaborate across different teams/geographies/stakeholders/levels of seniority;
• Energetic, enthusiastic and results-oriented personality;
• Customer focus with an eye on continuous improvement;
• Motivation and ability to perform as a consultant in data engineering projects;
• Ability to work independently but also within a team – you must be a team player!
• Strong will to overcome the complexities involved in developing and supporting data pipelines;
• Agile mindset.

Language requirements:

• English – fluent spoken and written (C1 level)
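As an illustration of the "expert SQL" the role calls for, here is a common CDC-deduplication pattern: keeping only the latest version of each key using a window function. This is a generic sketch, not DHL's code; `sqlite3` and the `events`/`change_seq` names are stand-ins chosen so the example is self-contained.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (id INTEGER, status TEXT, change_seq INTEGER)")
# Several change events per key; only the newest per id should survive.
db.executemany("INSERT INTO events VALUES (?, ?, ?)", [
    (1, "created", 1), (1, "shipped", 4),
    (2, "created", 2), (2, "cancelled", 3),
])

# ROW_NUMBER over each id, newest change first; rn = 1 is the latest version.
latest = db.execute("""
    SELECT id, status FROM (
        SELECT id, status,
               ROW_NUMBER() OVER (PARTITION BY id ORDER BY change_seq DESC) AS rn
        FROM events
    ) WHERE rn = 1
    ORDER BY id
""").fetchall()
print(latest)  # [(1, 'shipped'), (2, 'cancelled')]
```

The same `ROW_NUMBER() OVER (PARTITION BY … ORDER BY …)` shape works in Snowflake, Databricks SQL and most relational databases mentioned above.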


Other Jobs :- Click Here

2024 Batch Passouts Jobs :- Click Here

Internships :- Click Here

Google Jobs :- Click Here

Placement Materials

TCS Study Material :- Coming Soon

Wipro Study Material :- Coming Soon

Infosys Study Material :- Coming Soon

Accenture Study Material :- Coming Soon

Aptitude Study Material :- Coming Soon

Selection Process

Round 1

The first round is an Aptitude Round that checks the candidate's capabilities through a variety of questions, such as:

  • Verbal reasoning questions
  • Verbal comprehension questions
  • English language questions
  • Questions testing abstract reasoning


Round 2

The second round is generally a Technical Round, in which the candidate is asked technical questions relevant to the job role. Here the interviewer checks the candidate's technical ability and the depth and breadth of their knowledge.

Round 3

Round 3 is an HR round; in some cases, the TR and HR rounds occur concurrently. This round is conducted by a Human Resources professional who ensures that you are the right candidate for the role.


Join Whatsapp Group

Apply Now






If you want to know how to apply for this job, click here.


