Job description
Summary of Position
As part of the Analytics team, you will work closely with business stakeholders to understand analytics needs, build data pipelines, perform data cleaning and transformation, and support or enhance analytics products that align with Alcon’s Data and Analytics strategy and standards.
Key Responsibilities:
· Deliver business solutions on Alcon’s analytics platform through end-to-end implementation, including data security, governance, cataloging, preparation, automated testing, and data quality metrics.
· Participate in backlog grooming, sprint planning, and effort estimation.
· Build data pipelines using Python, PySpark, and Spark SQL.
· Automate, optimize, migrate and enhance existing solutions.
· Adopt AWS data lake and related data services to implement end-to-end solutions.
· Perform data modeling and data analysis, and provide insights using various tools.
Key Requirements/Minimum Qualifications:
· Education: B.Tech/B.E/M.Tech/MCA in Computer Science or Data Science
· Experience: 5 to 7 years of experience with big data (Hadoop) or data lakes in an AWS cloud environment
· 3+ years of experience writing code for the Spark engine in Python, Scala, or Java
· Good experience with relational databases such as MS SQL Server and PostgreSQL
· Good working knowledge of AWS services such as Glue, EMR, S3, SNS, SQS, Athena, Redshift, Lambda, and Step Functions
· Experience performing data modeling and data analysis, and providing insights using various tools
· Experience working in agile teams and using agile and collaboration tools such as Jira and Confluence
· Ability to work with the business to capture, groom, prioritize, plan, and demo user stories
· Strong collaboration skills for effective communication across multiple teams and stakeholders, both internal and external
· Data-savvy individual with hands-on experience preparing, analyzing, and deriving insights from data
· Strong problem-solving skills
· High learning agility