Data Engineer


Summary
Join Magna Powertrain as a Data Engineer to design, build, and maintain data pipelines using Databricks. Collaborate with teams to ensure data availability and scalability. Troubleshoot data-related issues and recommend improvements. This is a full-time, on-site role in Bengaluru, Karnataka.

About us

We see a future where everyone can live and move without limitations. That’s why we are developing technologies, systems and concepts that make vehicles safer and cleaner, while serving our communities, the planet and, above all, people.

Forward. For all.

 

Group Summary

Transforming mobility. Making automotive technology that is smarter, cleaner, safer and lighter. That’s what we’re passionate about at Magna Powertrain, and we do it by creating world-class powertrain systems. We are a premier supplier for the global automotive industry with full capabilities in design, development, testing and manufacturing of complex powertrain systems. Our name stands for quality, environmental consciousness, and safety. Innovation is what drives us and we drive innovation. Dream big and create the future of mobility at Magna Powertrain.

 

About the Role

We are seeking a highly skilled and motivated Data Engineer with expertise in Databricks to join our team. The Data Engineer will be responsible for designing, developing, and maintaining our data infrastructure and systems using Databricks. This role involves working closely with cross-functional teams to ensure the availability, reliability, and scalability of our data pipelines and analytics platforms.

Your Responsibilities

•    Design, build, and maintain scalable data pipelines and ETL processes using Databricks to collect, transform, and load data from various sources into our data warehouse (a minimal pipeline sketch follows this list).
•    Collaborate with data scientists, analysts, and other stakeholders to understand their data requirements and implement solutions to meet their needs using Databricks.
•    Prepare data for advanced analysis, e.g., ML modeling.
•    Optimize and tune data pipelines and ETL processes in Databricks for performance and efficiency.
•    Develop and maintain data models, schemas, and data dictionaries in Databricks.
•    Implement data quality checks and monitoring in Databricks to ensure data accuracy and integrity.
•    Troubleshoot and resolve data-related issues and incidents in Databricks.
•    Stay up to date with the latest trends and technologies in data engineering, specifically related to Databricks, and recommend improvements to our data infrastructure and systems.
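The posting itself contains no code, but as a rough, hypothetical sketch of the pipeline work described above: a PySpark job in the extract-transform-load shape that Databricks jobs often take. All paths, table names, and columns here are invented, and the Delta write assumes a Databricks runtime (or the delta-spark package installed locally).

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw files (on Databricks this is typically a cloud storage path or a table)
raw = spark.read.option("header", True).csv("/data/raw_orders.csv")  # hypothetical path

# Transform: fix types, drop rows without a key, derive a proper date column
clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("order_date"))
)

# Data quality check: failed casts become nulls, so fail fast if any slipped through
bad_amounts = clean.filter(F.col("amount").isNull()).count()
assert bad_amounts == 0, f"{bad_amounts} rows have a non-numeric amount"

# Load: write to the warehouse layer (Delta is the default table format on Databricks)
clean.write.format("delta").mode("overwrite").saveAsTable("analytics.clean_orders")
```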
 

Who we are looking for


•    Bachelor's degree in Computer Science, Engineering, or a related field.
•    Proven experience as a Data Engineer or similar role, with expertise in Databricks.
•    Technical expertise with data models, data mining, and segmentation techniques.
•    Strong programming skills in languages such as Python.
•    Experience with big data technologies such as Hadoop, Spark, or Kafka.
•    Proficiency in SQL and database technologies (e.g., MySQL, PostgreSQL, or Oracle).
•    Familiarity with cloud platforms (e.g., AWS, Azure, or GCP) and their data services.
•    Knowledge of data warehousing concepts and dimensional modeling (a small star-schema example follows this list).
•    Excellent problem-solving and analytical skills.
•    Strong communication and collaboration skills.
•    A data engineering certification (Databricks Certified Data Engineer Associate) is a plus.
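To make the dimensional-modeling requirement concrete (this example is not part of the posting): a star schema keeps measurements in a fact table and descriptive context in dimension tables, and analytical queries join and aggregate across them. A self-contained sketch using Python's built-in sqlite3, with invented table and column names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- One dimension table (context) and one fact table (measurements)
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales (sale_id INTEGER PRIMARY KEY, product_id INTEGER, amount REAL);
    INSERT INTO dim_product VALUES (1, 'widgets'), (2, 'gadgets');
    INSERT INTO fact_sales VALUES (1, 1, 10.0), (2, 1, 5.0), (3, 2, 7.5);
""")

# The typical dimensional query: aggregate the fact table, sliced by a dimension attribute
query = """
    SELECT p.category, SUM(f.amount) AS revenue
    FROM fact_sales f JOIN dim_product p USING (product_id)
    GROUP BY p.category ORDER BY p.category
"""
for category, revenue in con.execute(query):
    print(category, revenue)  # gadgets 7.5, then widgets 15.0
```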


Company: Magna International
Job Posted: 9 months ago
Job Type: Full-time
Work Mode: On-site
Experience Level: 0-2 Years
Category: Engineering
Location: Bengaluru, Karnataka, India
Qualification: Bachelor's degree
Applicants: 44

Related Jobs


Data Engineer

Advarra

Bengaluru, Karnataka, India

Posted: 8 months ago

Description

Company Information

At Advarra, we are passionate about making a difference in the world of clinical research and advancing human health. With a rich history rooted in ethical review services combined with innovative technology solutions and deep industry expertise, we are at the forefront of industry change. A market leader and pioneer, Advarra breaks the silos that impede clinical research, aligning patients, sites, sponsors, and CROs in a connected ecosystem to accelerate trials.

Company Culture

Our employees are the heart of Advarra. They are the key to our success and the driving force behind our mission and vision. Our values (Patient-Centric, Ethical, Quality Focused, Collaborative) guide our actions and decisions. Knowing the impact of our work on trial participants and patients, we act with urgency and purpose to advance clinical research so that people can live happier, healthier lives.

At Advarra, we seek to foster an inclusive and collaborative environment where everyone is treated with respect and diverse perspectives are embraced. Treating one another, our clients, and clinical trial participants with empathy and care is a key tenet of our culture at Advarra; we are committed to creating a workplace where each employee is not only valued but empowered to thrive and make a meaningful impact.

Job Duties & Responsibilities

•    Develop and optimize complex SQL queries, stored procedures, and user-defined functions within Snowflake.
•    Develop efficient data transformation pipelines using Snowflake's native SQL capabilities and CTEs.
•    Leverage Snowflake's unique features, such as Time Travel, Zero-Copy Cloning, and Data Sharing, to build scalable and robust data solutions.
•    Develop and maintain data models using dbt, ensuring they are scalable, reliable, and optimized for performance.
•    Monitor and manage Fivetran connections to ensure data is being extracted and loaded into Snowflake accurately and on schedule.
•    Troubleshoot and resolve issues related to SQL queries, data syncs, connection errors, and performance bottlenecks.
•    Design, develop, and deploy AWS Lambda functions to support data processing, ETL workflows, and real-time data streaming (a minimal handler sketch follows this listing).

Location

This role is open to candidates working in Bengaluru, India (hybrid).

Basic Qualifications

•    Bachelor's degree or equivalent combination of education and related work experience.
•    0-1+ years of experience in writing and optimizing complex SQL queries, stored procedures, and user-defined functions.
•    Experience designing and implementing efficient data transformation pipelines.
•    Experience writing data transformations with tools such as dbt and Matillion.
•    Experience building and managing data pipelines.
•    Experience writing data transformation test automation scripts.
•    Working experience with version control platforms (e.g., GitHub), Agile methodologies, and supporting tools such as Jira.

Preferred Qualifications

•    Snowflake certifications are a plus.
•    dbt certifications are a plus.
•    AWS certifications are a plus.

Physical and Mental Requirements

•    Sit or stand for extended periods of time at a stationary workstation.
•    Regularly carry, raise, and lower objects of up to 10 lbs.
•    Learn and comprehend basic instructions.
•    Focus and attention to tasks and responsibilities.
•    Verbal communication: listening and understanding, responding, and speaking.
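As a hedged illustration of the AWS Lambda responsibility above (not part of the listing): a minimal handler that reacts to an S3 upload notification and reads the new CSV object. The event shape is the standard S3 notification; the row count stands in for real processing.

```python
import csv
import io
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # Each record in an S3 notification describes one object that was just uploaded
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        rows = list(csv.DictReader(io.StringIO(body)))
        # Placeholder processing: a real pipeline would transform and load these rows
        print(f"{key}: {len(rows)} rows")
    return {"status": "ok"}
```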


Data Engineer

Capgemini

Bengaluru, Karnataka, India

Posted: 7 months ago

At Capgemini Engineering, the world leader in engineering services, we bring together a global team of engineers, scientists, and architects to help the world's most innovative companies unleash their potential. From autonomous cars to life-saving robots, our digital and software technology experts think outside the box as they provide unique R&D and engineering services across all industries. Join us for a career full of opportunities. Where you can make a difference. Where no two days are the same.

Job Description:

•    Expert knowledge of Python.
•    Expert knowledge of popular machine learning libraries and frameworks, such as TensorFlow, Keras, and scikit-learn.
•    Proficient understanding and application of clustering algorithms (e.g., K-means, hierarchical clustering) for grouping similar data points (a small sketch follows this description).
•    Expertise in classification algorithms (e.g., decision trees, support vector machines, random forests) for tasks such as image recognition, natural language processing, and recommendation systems.
•    Proficiency in working with databases, both relational and non-relational (e.g., MySQL), with experience designing database schemas and optimizing queries for efficient data retrieval.
•    Strong knowledge in areas like object-oriented analysis and design, multi-threading, multi-process handling, and memory management.
•    Good knowledge of model evaluation metrics and techniques.
•    Experience deploying machine learning models to production environments.
•    Currently working in an Agile scrum team and proficient in using version control systems (e.g., Git) for collaborative development.

Primary Skills:

•    Excellent Python coding.
•    Excellent communication skills.
•    Good at data modelling and popular machine learning libraries and frameworks.
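To ground the clustering and classification requirements, a small, self-contained scikit-learn sketch (not from the listing): K-means groups unlabeled synthetic points, and a random forest is trained and evaluated on the same data.

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic 2-D data standing in for real features
X, y = make_blobs(n_samples=300, centers=3, random_state=42)

# Clustering: group similar points without using the labels
cluster_ids = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)

# Classification: learn the known labels and evaluate on held-out data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
clf = RandomForestClassifier(random_state=42).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```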



Data Engineer

NatWest Group

Bengaluru, Karnataka, India

Posted: 22 days ago

Job description

Join us as a Data Engineer
•    This is an exciting opportunity to use your technical expertise to collaborate with colleagues and build effortless, digital-first customer experiences.
•    You'll be simplifying the bank by developing innovative data-driven solutions, using insight to be commercially successful, and keeping our customers' and the bank's data safe and secure.
•    Participating actively in the data engineering community, you'll deliver opportunities to support the bank's strategic direction while building your network across the bank.
•    We're offering this role at associate level.

What you'll do

As a Data Engineer, you'll play a key role in driving value for our customers by building data solutions. You'll be carrying out data engineering tasks to build, maintain, test, and optimise a scalable data architecture, as well as carrying out data extractions, transforming data to make it usable to data analysts and scientists, and loading data into data platforms.

You'll also be:
•    Developing comprehensive knowledge of the bank's data structures and metrics, advocating change where needed for product development.
•    Practicing DevOps adoption in the delivery of data engineering, proactively performing root cause analysis and resolving issues.
•    Collaborating closely with core technology and architecture teams in the bank to build data knowledge and data solutions.
•    Developing a clear understanding of data platform cost levers to build cost-effective and strategic solutions.
•    Sourcing new data using the most appropriate tooling and integrating it into the overall solution to deliver for our customers.

The skills you'll need

To be successful in this role, you'll need a good understanding of data usage and dependencies with wider teams and the end customer, as well as experience of extracting value and features from large-scale data. You'll have experience of data warehouse and data lake projects and strong knowledge of a data engineering tech stack such as Spark architecture, SQL, Python, and PySpark. You'll also need a good understanding of a cloud tech stack and AWS services like EMR, IAM, and S3, plus DevOps practices such as CI/CD, GitLab, and GitLab runners.

You'll also demonstrate:
•    Experience of ETL technical design, including data quality testing, cleansing and monitoring, and data warehousing and data modelling capabilities (a minimal quality-check sketch follows this description).
•    Experience of using programming languages alongside knowledge of data and software engineering fundamentals.
•    Good knowledge of modern code development practices.
•    Strong communication skills with the ability to proactively engage with a wide range of stakeholders.
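As an illustration of the data quality testing mentioned above (not part of the listing), a minimal sketch of a reusable quality check using pandas; the column names and rules are invented.

```python
import pandas as pd

def check_quality(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality problems found in the frame."""
    problems = []
    if df["customer_id"].isna().any():
        problems.append("null customer_id values")
    if df["customer_id"].duplicated().any():
        problems.append("duplicate customer_id values")
    if (df["balance"] < 0).any():
        problems.append("negative balances")
    return problems

# A frame with two deliberate defects, to show the check firing
df = pd.DataFrame({"customer_id": [1, 2, 2], "balance": [100.0, -5.0, 30.0]})
print(check_quality(df))  # ['duplicate customer_id values', 'negative balances']
```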