AWS Data Engineer (Python)

Job Description

Type: Full Time

Location: Gurgaon

Experience Required: 3 Year(s)

Industry: Banking/Financial Services/Broking

Preferred Skills: Programming and Design

Job Description:

At Sun Life, we work together, share common values and encourage growth and achievement. We offer many career paths that attract a wide variety of talent and skills. Follow a path that lets your talents shine. 


Designation: Sr. Specialist / Specialist

Experience: 2.5-10 years


Minimum Qualification

Bachelor of Technology (B.Tech) or Master of Computer Applications (MCA)


Mandatory Skills

Awareness of the Software Development Life Cycle (SDLC) and quality concepts

Good experience in the Python programming language for data analysis.

Good experience with the Spark framework.

Good to have: experience with AWS Glue (ETL tool).

Good understanding of AWS S3, Redshift, and PostgreSQL (see the sketch after this list).

Good analytical and problem-solving skills.

Ability to work using an agile approach and comfort with ambiguous requirements.

Should be able to understand existing design documents (HLD and DLD) and create or alter design documents when required.

Excellent verbal and written communication skills; strong interpersonal skills.

Demonstrated problem-solving skills, with the ability to analyze situations and problems systematically and deliver effective, right-sized solutions in a timely manner.

Strong organizational, multitasking, and time-management skills to work effectively in a changing environment, balancing operations and project delivery.

Ability to communicate effectively to technical and non-technical audiences.

Knowledge of the financial and insurance industries

Experience with developing, publishing and supporting Tableau dashboards
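
As an illustration of the AWS skills listed above, here is a minimal Python sketch of moving a file from S3 into PostgreSQL using boto3 and psycopg2. The bucket, key, table, and connection details are hypothetical placeholders, not specifics of this role:

    # Minimal sketch: bulk-load a CSV from S3 into a PostgreSQL table.
    # Bucket, key, table, and connection details are hypothetical placeholders.
    import boto3
    import psycopg2

    s3 = boto3.client("s3")
    s3.download_file("example-bucket", "exports/policies.csv", "/tmp/policies.csv")

    conn = psycopg2.connect(
        host="example-host", dbname="analytics", user="etl_user", password="..."
    )
    with conn, conn.cursor() as cur:  # commits on success, rolls back on error
        with open("/tmp/policies.csv") as f:
            # COPY is PostgreSQL's fast bulk-load path
            cur.copy_expert("COPY policies FROM STDIN WITH CSV HEADER", f)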


Role Summary

The incumbent should have:

Strong working experience in Python for data engineering (ETL) and Spark.

Strong working experience with relational databases such as Oracle, SQL Server, and DB2.

Working experience with Informatica and associated tools.

Working knowledge of AWS environment (S3, CLI, RDS).

The incumbent will be responsible for leading ETL development using Python and Spark, and will work towards creating a positive, innovation-friendly environment.
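
As an illustration only, a minimal PySpark sketch of the kind of ETL described here; the S3 paths and column names are hypothetical placeholders:

    # Minimal ETL sketch in PySpark; paths and columns are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("policy-etl").getOrCreate()

    # Extract: read raw CSV data from S3 (the s3:// scheme works on EMR/Glue)
    raw = spark.read.csv("s3://example-bucket/raw/policies/",
                         header=True, inferSchema=True)

    # Transform: de-duplicate and apply basic type cleansing
    clean = (
        raw.dropDuplicates(["policy_id"])
           .withColumn("premium", F.col("premium").cast("double"))
           .filter(F.col("premium").isNotNull())
    )

    # Load: write curated Parquet back to S3, partitioned for downstream queries
    clean.write.mode("overwrite").partitionBy("policy_year") \
         .parquet("s3://example-bucket/curated/policies/")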


Core Responsibility

Strong experience in delivering projects using Python, Spark, and Hive.

Exposure to working in a global environment, having delivered at least 1-2 projects on Big Data.

Delivery collaboration & coordination with multiple business partners.

Must have good experience in leading projects.


Preferred Skills

Experience with System Development Life Cycle methodologies (CMMI)

Basic understanding of insurance data concepts

Basic data warehouse and data modelling concepts

Experience working in a multi-site, multi-time zone environment

Hands-on experience with Business Intelligence technologies such as Tableau and Qlik Sense would be a plus.


Salary: Not Disclosed by Recruiter

Industry: IT-Software / Software Services

Functional Area: IT Software - Application Programming, Maintenance

Role Category: Programming & Design

Role: Software Developer

Employment Type: Permanent Job, Full Time