Associate Data Engineer I

About the role:

The Performance Data Management (PDM) team sits at the core of the Performance Management Hub (PMH) and provides the data assets used to analyse and derive insights on the financial performance of the Swiss Re Group. Its core objective is to enable performance analytics by sourcing various internal and external data sets, maintaining a data environment, and gradually building out the scope of the analytics performed.

The core activities of the Data Engineer in the PDM team include:

Building a data lake leveraging Palantir and BI tools: performing data modelling and applying transformations to shape the data for reporting and analysis.

Building executive dashboards and creating visually appealing, story-driven reports for senior management.

Contributing to ad-hoc projects and POCs, and supporting other team members as required.

Performing change management activities as necessary
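The data-shaping work described above can be sketched in miniature. The following is a hypothetical illustration using Python's built-in sqlite3 rather than the Palantir/PySpark stack the team actually uses; the premiums table, its columns, and all figures are invented for the example:

```python
import sqlite3

# Hypothetical raw premium records, as they might arrive from a source system.
raw_rows = [
    ("2024-Q1", "EMEA", 120.0),
    ("2024-Q1", "APAC", 80.0),
    ("2024-Q2", "EMEA", 150.0),
    ("2024-Q2", "APAC", 95.0),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE premiums (quarter TEXT, region TEXT, amount REAL)")
conn.executemany("INSERT INTO premiums VALUES (?, ?, ?)", raw_rows)

# Shape the raw records into a reporting-ready aggregate:
# total premium per quarter, ordered for presentation.
report = conn.execute(
    "SELECT quarter, SUM(amount) AS total "
    "FROM premiums GROUP BY quarter ORDER BY quarter"
).fetchall()

for quarter, total in report:
    print(quarter, total)  # e.g. 2024-Q1 200.0
```

The aggregate-and-order pattern shown here (GROUP BY plus an ordering suited to the report) is the same shaping step a PySpark pipeline would express with `groupBy(...).agg(...)` before feeding a dashboard.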

About you:

Passionate about data engineering and programming, with 0-1 year of relevant experience

Basic understanding of PySpark & SQL

Able to translate business problems into technical implementations

Willingness to upskill in advanced scripting (e.g. PySpark, Python, JavaScript)

Knowledge of the reinsurance industry and finance domain is a plus

Apply best practices on projects, drawing on experience and consulting experts, appropriately tailored to the client and their culture

University degree (or equivalent) in a quantitative field (e.g. Mathematics, Statistics, Computer Science Engineering, Information Technology, or Electronics & Communications)

Specific soft skills:

Excellent command of spoken and written English and ability to present to senior management.

Standout colleague with the ability to build proactive, collaborative working relationships with peers and key partners, based on respect and teamwork

Inquisitive, proactive, and willing to learn new technologies and to understand the insurance business and its economics

Process- and delivery-minded, aspiring to methodological and operational improvements

Ability to drive the reporting aspects of multiple ad-hoc projects and to manage partner expectations well

Company: Swiss Re
Job Posted: 3 months ago
Job Type: Full-time
Work Mode: On-site
Experience Level: 0-2 Years
Category: Data & Analytics
Location: Bengaluru, Karnataka, India
Qualification: Bachelor
Applicants: 50

Related Jobs

Data Analyst

Swiss Re

Bengaluru, Karnataka, India

Posted: 9 months ago

As a Data Analyst at Swiss Re in Bengaluru, you will enhance internal auditing efficiency by delivering technology-driven solutions. You will collaborate with internal teams, utilize existing applications, and ensure compliance with internal policies. This role involves working in an agile environment and supporting the Group's internal control systems.

Data Engineer, I

Zebra Technologies

Bengaluru, Karnataka, India

Posted: 4 months ago

Remote Work: Hybrid

Overview:
At Zebra, we are a community of innovators who come together to create new ways of working to make everyday life better. United by curiosity and care, we develop dynamic solutions that anticipate our customers' and partners' needs and solve their challenges. Being a part of Zebra Nation means being seen, heard, valued, and respected. Drawing from our diverse perspectives, we collaborate to deliver on our purpose. Here you are a part of a team pushing boundaries to redefine the work of tomorrow for organizations, their employees, and those they serve. You have opportunities to learn and lead at a forward-thinking company, defining your path to a fulfilling career while channeling your skills toward causes that you care about – locally and globally. We've only begun reimagining the future – for our people, our customers, and the world. Let's create tomorrow together.

The Data Engineer will be responsible for understanding the client's technical requirements and for designing and building data pipelines to support them. In this role, the Data Engineer, besides developing the solution, will also oversee other engineers' development. This role requires strong verbal and written communication skills to communicate effectively with the client and the internal team. A strong understanding of databases, SQL, cloud technologies, and modern data integration and orchestration tools like Azure Data Factory (ADF), Informatica, and Airflow is required to succeed in this role.

Responsibilities:
• Play a critical role in the design and implementation of data platforms for AI products.
• Develop productized and parameterized data pipelines that feed AI products leveraging GPUs and CPUs.
• Develop efficient data transformation code in Spark (Python and Scala) and Dask.
• Build workflows to automate data pipelines using Python and Argo.
• Develop data validation tests to assess the quality of the input data.
• Conduct performance testing and profiling of the code using a variety of tools and techniques.
• Build data pipeline frameworks to automate high-volume and real-time data delivery for our data hub.
• Operationalize scalable data pipelines to support data science and advanced analytics.
• Optimize customer data science workloads and manage cloud services costs/utilization.

Qualifications:
• Minimum Education: Bachelor's, Master's, or Ph.D. degree in Computer Science or Engineering.
• Minimum Work Experience (years):
  o 1+ years of experience programming with at least one of the following languages: Python, Scala, Go.
  o 1+ years of experience in SQL and data transformation.
  o 1+ years of experience developing distributed systems using open-source technologies such as Spark and Dask.
  o 1+ years of experience with relational or NoSQL databases running in Linux environments (MySQL, MariaDB, PostgreSQL, MongoDB, Redis).
• Key Skills and Competencies:
  o Experience working with AWS / Azure / GCP environments is highly desired.
  o Experience with data models in the Retail and Consumer Products industry is desired.
  o Experience working on agile projects and an understanding of agile concepts is desired.
  o Demonstrated ability to learn new technologies quickly and independently.
  o Excellent verbal and written communication skills, especially in technical communications.
  o Ability to work toward and achieve stretch goals in a very innovative and fast-paced environment.
  o Ability to work collaboratively in a diverse team environment.
  o Ability to telework.
  o Expected travel: Not expected.

Associate Data Engineer

Zscaler

Bengaluru, Karnataka, India

Posted: 8 days ago

About Zscaler
Serving thousands of enterprise customers around the world, including 40% of Fortune 500 companies, Zscaler (NASDAQ: ZS) was founded in 2007 with a mission to make the cloud a safe place to do business and a more enjoyable experience for enterprise users. As the operator of the world's largest security cloud, Zscaler accelerates digital transformation so enterprises can be more agile, efficient, resilient, and secure. The pioneering, AI-powered Zscaler Zero Trust Exchange™ platform, which is found in our SASE and SSE offerings, protects thousands of enterprise customers from cyberattacks and data loss by securely connecting users, devices, and applications in any location. Named a Best Workplace in Technology by Fortune and others, Zscaler fosters an inclusive and supportive culture that is home to some of the brightest minds in the industry. If you thrive in an environment that is fast-paced and collaborative, and you are passionate about building and innovating for the greater good, come make your next move with Zscaler.

Our Engineering team built the world's largest cloud security platform from the ground up, and we keep building. With more than 100 patents and big plans for enhancing services and increasing our global footprint, the team has made us and our multitenant architecture today's cloud security leader, with more than 15 million users in 185 countries. Bring your vision and passion to our team of cloud architects, software engineers, security experts, and more who are enabling organizations worldwide to harness speed and agility with a cloud-first strategy. We're looking for an Associate Data Engineer, Enterprise Data Platform to join our IT/Data Strategy team.

Reporting to the Staff Data Engineer, you will be responsible for:
• Designing, constructing, and maintaining efficient data pipelines and integrations to address the organization's analytics and reporting needs
• Partnering with architects, integration, and engineering teams to gather requirements and deliver impactful and reliable data solutions
• Identifying and sourcing data from multiple systems, profiling datasets to ensure they support informed decision-making processes
• Optimizing existing pipelines and data models while creating new features aligned with business objectives and improved functionality
• Implementing data standards, leveraging cloud and big data advancements, and using tools like Snowflake, DBT, and AWS to deliver innovative data solutions

What We're Looking For (Minimum Qualifications)
• 0-2 years in data warehouse design, development, SQL, and data modeling, with proficiency in efficient query writing for large datasets
• Skilled in Python for API integration and data workflows, with hands-on expertise with ELT tools (e.g., Matillion, Fivetran, DBT)
• Extensive experience with AWS services (EC2, S3, Lambda, Glue), CI/CD processes, Git, and Snowflake concepts
• Eagerness to learn GenAI technologies
• Strong analytical skills and an ability to manage multiple projects simultaneously

What Will Make You Stand Out (Preferred Qualifications)
• Experience with Data Mesh architecture
• Knowledge of foundational concepts of machine learning models

Associate Data Engineer, Gradient Specialist

66degrees

Bengaluru, Karnataka, India

Posted: a month ago

Role Overview
The Gradient Specialist Program is our differentiating training program, focused on preparing recent graduates for careers in technology consulting. Learning from our own Google Cloud Certified experts, we nurture your talent and accelerate your learning through structured training, hands-on building, and mentorship. Completing this program means you are a GCP Certified 66degrees Specialist, prepared for a career unlocking business value for our clients while accelerating AI adoption and delivering with speed and scale. This exciting opportunity is based out of our Bengaluru office.

Responsibilities
A Gradient Specialist's responsibilities and duties are as follows:
• Complete the Gradient Development Program training.
• Pursue and obtain Google Cloud Platform certifications based on your matched career track.
• Work with technical and business leads to translate global business requirements into sound solutions.
• Work 2-3 days in the office each week as part of a hybrid work arrangement.

Qualifications
• Currently pursuing a Bachelor's degree in Computer Science, Computer Engineering, Data, or similar; 2024 graduate or graduating in March-May 2025. Master's degree preferred.
• Python and SQL experience required.
• Experience working with structured, semi-structured, and unstructured data sources.
• Strong interpersonal, verbal, and written communication skills.
• Strong problem-solving, logical reasoning, and analytical skills to tackle complex problems.
• Strong organizational skills, including the ability to prioritize, handle multiple projects simultaneously, and meet deadlines.
• Self-motivated and able to work independently or as part of a team.
• Ability to commit to our Gradient Development Program and the technical career that follows.

Data Engineer

Capgemini

Bengaluru, Karnataka, India

Posted: 8 months ago

At Capgemini Engineering, the world leader in engineering services, we bring together a global team of engineers, scientists, and architects to help the world's most innovative companies unleash their potential. From autonomous cars to life-saving robots, our digital and software technology experts think outside the box as they provide unique R&D and engineering services across all industries. Join us for a career full of opportunities. Where you can make a difference. Where no two days are the same.

Job Description:
• Expert knowledge of Python.
• Expert knowledge of popular machine learning libraries and frameworks, such as TensorFlow, Keras, and scikit-learn.
• Proficient understanding and application of clustering algorithms (e.g., K-means, hierarchical clustering) for grouping similar data points.
• Expertise in classification algorithms (e.g., decision trees, support vector machines, random forests) for tasks such as image recognition, natural language processing, and recommendation systems.
• Proficiency in working with databases, both relational and non-relational (e.g., MySQL), with experience in designing database schemas and optimizing queries for efficient data retrieval.
• Strong knowledge of areas like object-oriented analysis and design, multi-threading, multi-process handling, and memory management.
• Good knowledge of model evaluation metrics and techniques.
• Experience deploying machine learning models to production environments.
• Currently working in an Agile Scrum team and proficient in using version control systems (e.g., Git) for collaborative development.

Primary Skills:
• Excellent Python coding.
• Excellent communication skills.
• Good data modelling skills and knowledge of popular machine learning libraries and frameworks.