Qualification Required: B.Tech/B.E. – CSE or IT.
Relevant Experience Required:
1. 6+ years of experience in the data engineering field, focused on the Azure stack.
2. Good experience in PySpark, including DataFrame core functions and Spark SQL (a short sketch follows this list).
3. Good experience with SQL databases; able to write queries of fair complexity.
4. Experience in Big Data programming for data transformation and aggregation.
5. Good at ETL architecture, including business-rules processing and extraction of data from the Data Lake into data streams for business consumption.
6. Good customer communication skills.
7. Good analytical skills.
8. Technical Skills: PySpark, Python, Spark, and SQL.
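
A minimal sketch of the kind of PySpark DataFrame and Spark SQL work this role involves; the table, column names, and path (orders, amount, region, /mnt/datalake/orders) are hypothetical examples, not part of this role's actual data model.

    # Illustrative only: DataFrame core functions and an equivalent Spark SQL query.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders-aggregation").getOrCreate()

    # DataFrame core functions: read, filter, group, aggregate
    orders = spark.read.parquet("/mnt/datalake/orders")  # hypothetical path
    regional_totals = (
        orders
        .filter(F.col("status") == "COMPLETED")
        .groupBy("region")
        .agg(F.sum("amount").alias("total_amount"),
             F.count("*").alias("order_count"))
    )

    # The same aggregation expressed in Spark SQL
    orders.createOrReplaceTempView("orders")
    regional_totals_sql = spark.sql("""
        SELECT region,
               SUM(amount) AS total_amount,
               COUNT(*)    AS order_count
        FROM orders
        WHERE status = 'COMPLETED'
        GROUP BY region
    """)

    regional_totals.show()
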
Job Responsibilities:
1. Develop and maintain applications with PySpark on Databricks.
2. Contribute to the overall design and architecture of the applications developed and deployed.
3. Performance tuning with respect to executor sizing and other environment parameters, code optimization, partition tuning, etc. (see the configuration sketch after this list).
4. Interact with business users to understand requirements and troubleshoot issues.
5. Implement projects based on functional specifications.
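
The configuration sketch below illustrates the tuning levers referenced in responsibility 3 (executor sizing, shuffle partition tuning); all values are hypothetical and depend on cluster size and data volume. On Databricks, executor sizing is typically set at the cluster level rather than in application code.

    # Illustrative only: example Spark settings for executor sizing and partition tuning.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("tuned-etl-job")
        # Executor sizing (environment parameters) - hypothetical values
        .config("spark.executor.instances", "8")
        .config("spark.executor.cores", "4")
        .config("spark.executor.memory", "16g")
        # Shuffle partition tuning - match partition count to data volume
        .config("spark.sql.shuffle.partitions", "200")
        # Adaptive query execution can coalesce small shuffle partitions at runtime
        .config("spark.sql.adaptive.enabled", "true")
        .getOrCreate()
    )

    # Repartitioning before a wide transformation is another common tuning lever
    df = spark.read.parquet("/mnt/datalake/events")  # hypothetical path
    df = df.repartition(64, "event_date")
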