Description
5+ years of relevant work experience showing growth as a Data Engineer.
Hands-on programming experience.
Implementation experience with Kafka, Kinesis, Spark, AWS Glue, and AWS Lake Formation.
Experience with performance optimization in batch and real-time processing applications.
Expertise in data governance and data security implementation.
Experience with scheduling tools such as Airflow.
Good hands-on design and programming skills for building reusable tools and products.
Experience developing on AWS or similar cloud platforms; ECS, EKS, S3, EMR, DynamoDB, Aurora, Redshift, QuickSight, or similar preferred.
Good knowledge of Python; Java/Scala is a plus.
Familiarity with high-transaction-volume systems, microservice design, or data processing pipelines (Spark).
Experience in efficient data processing using open table formats such as Delta Lake and Iceberg.
Knowledge of and hands-on experience with serverless technologies such as Lambda, MSK, MWAA, and Kinesis Data Analytics is a plus.
Expertise in practices such as Agile, peer reviews, and continuous integration.
Capable of planning and executing on both short-term and long-term goals, individually and with the team.
Design and architecture experience (high-level and low-level design).
Experience implementing data catalog solutions, data observability frameworks, and Audit Balance Control frameworks.
Certification in Data Engineering, AWS, etc.
Experience implementing microservice APIs that process thousands of events per second.
Primary Location
Pune, Maharashtra, India
Job Type
Experienced
Skill
Data - AWS Glue with EMR, Spark, Redshift, Kinesis, S3, and AWS native data services.