Job description
Specialist - Big Data Engineering - IN
Professionals in this group design and implement high-performance, scalable, and optimized data solutions for large-scale, enterprise-wide data mining and processing.
Job Description:
Educational Qualification: BE, Bachelor's, or Master's degree with a Computer Science background
Experience: 4 to 7 years (Associate level)
Career Level: 6IC/7IC
Primary Technical Skills: Snowflake, AWS, Python, Datastage/Talend/Snaplogic, Teradata, Oracle, UNIX
Job Profile:
1. Provide production support in an L2/L3 capacity; efficiently debug and troubleshoot production issues.
2. Evaluate existing data solutions, write scalable ETLs, develop documentation, and train and assist team members.
3. Collaborate with business, development, and infrastructure teams on L3 issues and follow tasks through to completion.
4. Participate in and support releases, risk assessments, mitigation planning, and regular DR exercises for project rollouts.
5. Drive automation and permanent fixes to prevent issues from recurring.
6. Manage service level agreements (SLAs).
7. Deliver continuous improvements that reduce time-to-resolve for production incidents.
8. Perform root cause analysis and identify and implement corrective and preventive measures.
9. Document standards, processes and procedures relating to best practices, issues and resolutions.
10. Continuously upskill in tools and technologies to meet the organization's future needs.
11. Be available on call (on rotation) in a support role.
12. Effectively manage multiple, competing priorities.
Primary:
1. Extensive experience with Python & UNIX scripting.
2. Extensive experience developing and supporting complex ETL solutions with DataStage/Talend/Snaplogic.
3. Hands-on experience with Snowflake utilities, SnowSQL, and Snowpipe (a brief illustrative sketch follows this list).
4. Experience in Snowflake performance optimization.
5. Experience administering and monitoring the Snowflake computing platform.
6. Solid knowledge of relational database platforms (Teradata, Oracle) and languages (SQL, PL/SQL).
7. Proficiency in UNIX and shell scripting.
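
As a flavor of the hands-on Snowflake work items 3-5 describe, here is a minimal sketch of creating a Snowpipe and pulling recent query timings through the Snowflake Python connector. All connection parameters, stage, table, and pipe names are hypothetical placeholders, not details of this role's environment.

```python
# Minimal sketch: Snowpipe setup and a basic performance check via the
# Snowflake Python connector. All names (account, database, stage, pipe)
# are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # placeholder account identifier
    user="etl_user",        # placeholder credentials
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)
cur = conn.cursor()

# Create a pipe that auto-ingests files as they land on an external stage.
cur.execute("""
    CREATE PIPE IF NOT EXISTS orders_pipe AUTO_INGEST = TRUE AS
    COPY INTO orders_raw
    FROM @orders_stage
    FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
""")

# A simple performance-monitoring query: the slowest recent statements.
cur.execute("""
    SELECT query_id, total_elapsed_time, query_text
    FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY())
    ORDER BY total_elapsed_time DESC
    LIMIT 10
""")
for row in cur.fetchall():
    print(row)

cur.close()
conn.close()
```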
Secondary:
1. Experience configuring big data solutions in a cloud (AWS) environment.
2. Good understanding of AWS concepts.
3. Hands-on experience with the Hadoop ecosystem (Kafka, Hive, Spark, etc.).
4. Experience with, or knowledge of, at least one mainstream distributed messaging system such as Kafka (see the sketch after this list).
5. Experience building logical and physical data models for an enterprise data warehouse (EDW), with deep expertise in dimensional modelling, ETL, API integrations, and data governance.
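
As a flavor of the Kafka experience item 4 refers to, here is a minimal produce/consume sketch using the kafka-python package; the broker address and topic name are hypothetical placeholders.

```python
# Minimal sketch of producing to and consuming from a Kafka topic with
# the kafka-python package. Broker address and topic name are
# hypothetical placeholders.
from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("orders_events", b'{"order_id": 1, "status": "NEW"}')
producer.flush()  # block until the message is actually delivered

consumer = KafkaConsumer(
    "orders_events",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",  # start from the beginning of the topic
    consumer_timeout_ms=5000,      # stop iterating after 5s with no messages
)
for message in consumer:
    print(message.offset, message.value)
```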
Soft Skills:
1. Team player: the ability to work in a global team environment and partner with development teams, infrastructure teams, and business users.
2. Flexibility to handle conflicting priorities.
3. Ability to work under pressure on production priorities.
4. Willingness to upskill in line with strategic roadmaps.
5. Good communication skills.
6. Demonstrated ownership and a strong sense of urgency on production issues.
7. Readiness to work weekends and in shifts as needed (no night shifts).
Related Skills
Collaboration, Continuous Improvement Mindset, Data-Driven Business Intelligence, Data Engineering/Analytics, Data Visualization, Predictive Modeling, Problem Solving, Programming, Resourcefulness, Statistics, Storytelling