
Senior Data Engineer


DESCRIPTION

North America Consumer is one of Amazon's largest businesses, spanning North America retail and third-party sellers as well as an array of innovative new programs, including Alexa Shopping and Amazon Business. The NA Consumer Finance organization is looking for an outstanding Data Engineer to build a robust and scalable data platform that supports the team's analytical and data capabilities.

As a Data Engineer, you will work in one of the world's largest and most complex data warehouse environments using the latest suite of AWS toolsets. You should have deep expertise in the design, creation, management, and business use of extremely large datasets, and be an expert at designing, implementing, and maintaining stable, scalable, low-cost data infrastructure. In this role, you will build the datasets that analysts and BIEs use to generate actionable insights. You should be passionate about working with huge data sets and love bringing datasets together to answer business questions and drive change.

The successful candidate will be an expert in SQL and ETL (and general data wrangling) and have exemplary communication skills. The candidate will need to be a self-starter, comfortable with ambiguity in a fast-paced, ever-changing environment, and able to think big while paying careful attention to detail.

Responsibilities

You know and love working with data engineering tools, can model multidimensional datasets, and can understand how to make appropriate data trade-offs. You will also have the opportunity to display your skills in the following areas:
· Design, implement, and support a platform providing access to large datasets
· Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL
· Own, maintain, and scale a Redshift cluster and manage other AWS resources
· Model data and metadata for ad hoc and pre-built reporting
· Collaborate with Business Intelligence Engineers and Analysts to deliver high quality data architecture and pipelines
· Recognize and adopt best practices in engineering and operational excellence: data integrity, validation, automation and documentation
· Create coherent Logical Data Models that drive physical design
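As a rough illustration of the SQL-driven extract-transform-load work described above, here is a minimal sketch in Python. It uses an in-memory SQLite database as a stand-in for a warehouse (a real Redshift pipeline would use a driver such as psycopg2), and all table and column names are hypothetical:

```python
import sqlite3

# Minimal ETL sketch: extract raw order rows, aggregate them with SQL,
# and load the result into a pre-built reporting table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Extract: a raw source table (in practice, populated from an upstream system)
cur.execute("CREATE TABLE raw_orders (order_id INTEGER, region TEXT, amount REAL)")
cur.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, "NA", 120.0), (2, "NA", 80.0), (3, "EU", 50.0)],
)

# Transform + load: aggregate into a reporting table for ad hoc analysis
cur.execute("CREATE TABLE sales_by_region (region TEXT, total_amount REAL)")
cur.execute(
    "INSERT INTO sales_by_region "
    "SELECT region, SUM(amount) FROM raw_orders GROUP BY region"
)

print(sorted(cur.execute("SELECT * FROM sales_by_region").fetchall()))
# → [('EU', 50.0), ('NA', 200.0)]
```

In a production pipeline the same pattern would be scheduled, validated, and documented rather than run ad hoc.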

We are open to hiring candidates to work out of one of the following locations:

Bangalore, KA, IND

BASIC QUALIFICATIONS

- 5+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with SQL
- Experience with at least one modern scripting or programming language, such as Python, Java, Scala, or Node.js
- Experience mentoring team members on best practices

PREFERRED QUALIFICATIONS

- Experience with big data technologies such as: Hadoop, Hive, Spark, EMR
- Experience operating large data warehouses


Company: Amazon

Job Posted: a year ago

Job Type: Full-time

Work Mode: On-site

Experience Level: 3-7 years

Category: Data & Analytics

Location: Bengaluru, Karnataka, India

Qualification: Bachelor's degree

Related Jobs


Data Engineer

KPIT

Bengaluru, Karnataka, India

Posted: a year ago

Implement data pipelines that are efficient, scalable, and maintainable. Implement best practices including source control, code reviews, validation, and testing. Act as a mentor and help junior data engineers. Experience in designing big data processing pipelines. Familiarity with tools like Hadoop, Spark, and Kafka, and databases like PostgreSQL and MySQL. Strong programming skills in Python, Java, or Scala. Familiarity with data visualization tools like Tableau and Power BI. Experience working with AWS, Azure, or GCP. Knowledge of CI/CD, machine learning, deep learning, and optimization in the automotive domain.


Data Engineer

Caterpillar Inc.

Bengaluru, Karnataka, India

Posted: a month ago

Job Description: Your Work Shapes the World at Caterpillar Inc. When you join Caterpillar, you're joining a global team who cares not just about the work we do, but also about each other. We are the makers, problem solvers, and future world builders who are creating stronger, more sustainable communities. We don't just talk about progress and innovation here; we make it happen, with our customers, where we work and live. Together, we are building a better world, so we can all enjoy living in it.

Job Summary
We are seeking a skilled Data Scientist (Data Engineer) to join our Pricing Analytics Team. The incumbent will be responsible for building scalable, high-performance infrastructure and data-driven analytics applications that provide actionable insights. The position is part of Caterpillar's fast-moving Global Parts Pricing organization, driving action and tackling challenges and problems that are critical to realizing superior business outcomes. The data engineer will work with data scientists, business intelligence analysts, and others on a team that assembles large, complex data sets, pipelines, apps, and data infrastructure that provide competitive advantage. The preference for this role is to be based out of the Bangalore, Whitefield office.

What you will do
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using AWS tools
- Design, develop, and maintain performant and scalable applications
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability
- Perform debugging, troubleshooting, modification, and testing of integration solutions
- Operationalize developed jobs and processes
- Create and maintain database infrastructure to process data at scale
- Create solutions and methods to monitor systems and processes
- Automate code testing and pipelines
- Engage directly with business partners to participate in the design and development of data integration/transformation solutions
- Actively seek out industry best practices for continuous improvement

What you will have
- BS in Computer Science, Data Science, Computer Engineering, or a related quantitative field
- Development experience, preferably using Python and/or PySpark
- Understanding of data structures, algorithms, profiling, and optimization
- Understanding of SQL, ETL/ELT design, and data modeling techniques
- Strong verbal and written communication skills to collaborate cross-functionally and drive action
- Ability to thrive in a fast-paced environment that delivers results
- Passion for acquiring, analyzing, and transforming data to generate insights
- Strong analytical ability, judgment, and problem-analysis techniques

This position may require 10% travel. Shift timing: 01:00 PM - 10:00 PM IST (EMEA shift).

Desired Skills
- MS in Computer Science, Data Science, Computer Engineering, or a related quantitative field
- Experience with AWS cloud services (e.g., EMR, EC2, Lambda, Glue, CloudFormation, CloudWatch/EventBridge, ECR, ECS, Athena, Fargate)
- Experience administering and/or developing in Snowflake
- Strong background working with version control systems (Git, etc.)
- Experience managing continuous integration systems; Azure Pipelines is a plus
- Advanced experience with programming, data structures, and algorithms
- Working knowledge of Agile software development methodology
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- A successful history of manipulating, processing, and extracting value from large disconnected datasets

Skills desired:

Business Statistics: Knowledge of the statistical tools, processes, and practices used to describe business results in measurable scales; ability to use statistical tools and processes to assist in making business decisions. Level: Working Knowledge
- Explains the basic decision process associated with specific statistics
- Works with basic statistical functions on a spreadsheet or a calculator
- Explains reasons for common statistical errors, misinterpretations, and misrepresentations
- Describes characteristics of sample size, normal distributions, and standard deviation
- Generates and interprets basic statistical data

Accuracy and Attention to Detail: Understanding the necessity and value of accuracy; ability to complete tasks with high levels of precision. Level: Working Knowledge
- Accurately gauges the impact and cost of errors, omissions, and oversights
- Utilizes specific approaches and tools for checking and cross-checking outputs
- Processes limited amounts of detailed information with good accuracy
- Learns from mistakes and applies lessons learned
- Develops and uses checklists to ensure that information goes out error-free

Analytical Thinking: Knowledge of techniques and tools that promote effective analysis; ability to determine the root cause of organizational problems and create alternative solutions that resolve them. Level: Working Knowledge
- Approaches a situation or problem by defining the problem or issue and determining its significance
- Makes a systematic comparison of two or more alternative solutions
- Uses flow charts, Pareto charts, fishbone diagrams, etc. to disclose meaningful data patterns
- Identifies the major forces, events, and people impacting and impacted by the situation at hand
- Uses logic and intuition to make inferences about the meaning of the data and arrive at conclusions

What you will get:
- Work-Life Harmony: earned and medical leave; flexible work arrangements; relocation assistance
- Holistic Development: personal and professional development through Caterpillar's employee resource groups across the globe; career development opportunities with global prospects
- Health and Wellness: medical, life, and personal accident coverage; employee mental wellness assistance program
- Financial Wellness: employee investment plan; pay-for-performance annual incentive bonus plan



Senior Data Scientist, AWS

Amazon

Bengaluru, Karnataka, India

Posted: a year ago

DESCRIPTION

The Generative AI Innovation Center team at AWS provides opportunities to innovate in a fast-paced organization that contributes to game-changing projects and technologies leveraging cutting-edge generative AI algorithms. As a Data Scientist, you'll partner with technology and business teams to build solutions that surprise and delight our customers. We're looking for Data Scientists capable of using generative AI and other ML techniques to design, evangelize, and implement state-of-the-art solutions for never-before-solved problems. Here at AWS, we welcome all builders. We believe that technology should be built in a way that's inclusive, accessible, and equitable. We're committed to putting in the work for more equal representation.

Key job responsibilities
- Collaborate with scientists and engineers to research, design, and develop cutting-edge generative AI algorithms to address real-world challenges
- Work across customer engagements to understand which adoption patterns for generative AI are working and rapidly share them across teams and leadership
- Interact with customers directly to understand the business problem, aid them in implementing generative AI solutions, deliver briefing and deep-dive sessions, and guide customers on adoption patterns and paths for generative AI
- Create and deliver best-practice recommendations, tutorials, blog posts, sample code, and presentations adapted to technical, business, and executive stakeholders
- Provide customer and market feedback to Product and Engineering teams to help define product direction

About the team
You will work with a diverse team of Architects, ML Scientists, and Strategists to help and guide AWS customers across Asia Pacific, Japan, China, and India in their journey to adopt generative AI.

We are open to hiring candidates to work out of one of the following locations: Bangalore, KA, IND

BASIC QUALIFICATIONS
- 10+ years of experience in a data scientist or similar role involving data extraction, analysis, statistical modeling, and communication
- Bachelor's degree
- Experience working collaboratively with data engineers and business intelligence engineers
- Knowledge of programming languages such as C/C++, Python, Java, or Perl
- Experience communicating across technical and non-technical audiences, including executive-level stakeholders or clients
- Proven knowledge of deep learning and experience hosting and deploying ML solutions (e.g., for training, tuning, and inference)

PREFERRED QUALIFICATIONS
- Experience managing data pipelines
- Experience as a leader and mentor on a data science team
- Master's degree in engineering, technology, computer science, machine learning, robotics, operations research, statistics, mathematics, or an equivalent quantitative field
- Working knowledge of generative AI and hands-on experience deploying and hosting large foundation models
- Hands-on experience building models with deep learning frameworks like TensorFlow, PyTorch, or MXNet