Associate Data Engineer
Zscaler
Bengaluru, Karnataka, India
About Zscaler

Serving thousands of enterprise customers around the world, including 40% of Fortune 500 companies, Zscaler (NASDAQ: ZS) was founded in 2007 with a mission to make the cloud a safe place to do business and a more enjoyable experience for enterprise users. As the operator of the world's largest security cloud, Zscaler accelerates digital transformation so enterprises can be more agile, efficient, resilient, and secure. The pioneering, AI-powered Zscaler Zero Trust Exchange™ platform, which is found in our SASE and SSE offerings, protects thousands of enterprise customers from cyberattacks and data loss by securely connecting users, devices, and applications in any location. Named a Best Workplace in Technology by Fortune and others, Zscaler fosters an inclusive and supportive culture that is home to some of the brightest minds in the industry. If you thrive in an environment that is fast-paced and collaborative, and you are passionate about building and innovating for the greater good, come make your next move with Zscaler.

Our Engineering team built the world's largest cloud security platform from the ground up, and we keep building. With more than 100 patents and big plans for enhancing services and increasing our global footprint, the team has made us and our multitenant architecture today's cloud security leader, with more than 15 million users in 185 countries. Bring your vision and passion to our team of cloud architects, software engineers, security experts, and more who are enabling organizations worldwide to harness speed and agility with a cloud-first strategy.

We're looking for an experienced Associate Data Engineer, Enterprise Data Platform to join our IT/Data Strategy team.
Reporting to the Staff Data Engineer, you will be responsible for:

- Designing, constructing, and maintaining efficient data pipelines and integrations to address the organization's analytics and reporting needs
- Partnering with architects, integration, and engineering teams to gather requirements and deliver impactful and reliable data solutions
- Identifying and sourcing data from multiple systems, and profiling datasets to ensure they support informed decision-making processes
- Optimizing existing pipelines and data models while creating new features aligned with business objectives and improved functionality
- Implementing data standards, leveraging cloud and big data advancements, and using tools like Snowflake, DBT, and AWS to deliver innovative data solutions

What We're Looking For (Minimum Qualifications)

- 0-2 years of experience in data warehouse design, development, SQL, and data modeling, with proficiency in writing efficient queries for large datasets
- Skilled in Python for API integration and data workflows, with hands-on expertise in ELT tools (e.g., Matillion, Fivetran, DBT)
- Experience with AWS services (EC2, S3, Lambda, Glue), CI/CD processes, Git, and Snowflake concepts
- Eagerness to learn GenAI technologies
- Strong analytical skills and an ability to manage multiple projects simultaneously

What Will Make You Stand Out (Preferred Qualifications)

- Experience with Data Mesh architecture
- Knowledge of foundational concepts of machine learning models