
Data Engineer


Summary
As a Data Engineer at Nielsen, you will be responsible for accelerating data acquisition, ensuring data reliability, enhancing data accessibility, and collaborating with teams. This role involves designing and automating data pipelines, monitoring data systems, and implementing data governance processes. The ideal candidate should have a Bachelor's degree in a relevant field with 3-5 years of experience in Python, PySpark, SQL, AWS, EC2, GitLab, and Airflow. Strong problem-solving skills, attention to detail, and effective communication are essential for this full-time hybrid opportunity in Bengaluru, Karnataka, India.

Job description 

Role Details

  • Analytics team specific: This role drives the acceleration of data acquisition, ensures the reliability and accuracy of data outputs, and enhances the accessibility and discoverability of data. The engineer offers technical guidance and mentorship, upholds data integrity and reusability, and collaborates with diverse teams to meet our extensive data needs.
  • Meters team specific: In this role, you will learn and become an expert in Nielsen’s systems and TV panel data, with a focus on underlying meter hardware and infrastructure used for data collection and crediting. You will develop and test software and related database infrastructure used to support both research and production pipelines, all while working within the Software Development Life Cycle framework and applying software development best practices.

Responsibilities

  • Design and automate essential data pipelines and inputs, ensuring seamless integration with downstream analytics and production systems. 
  • Collaborate with cross-functional teams to integrate new functionalities into existing data pipelines, including lower test environments to help validate and assess impact prior to Production integration, where applicable. 
  • Implement data governance and quality processes to ensure the integrity and accuracy of data throughout its lifecycle.
  • Monitor data systems and processes to identify issues and proactively implement improvements to prevent future problems.
  • Participate in code reviews with senior developers before pushing code to production, ensuring it meets accuracy and best-practice standards.
  • Implement data pipeline Directed Acyclic Graphs (DAGs) and maintenance DAGs. Configure and set up DAGs to run Spark jobs in parallel or sequentially, as the data requires.
  • Perform unit testing using test cases and fix any bugs.
  • Optimize code to meet product SLAs.
  • Support multiple projects and communicate with stakeholders in various organizations. This includes regularly providing status updates, developing timelines, providing insights, etc.
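The DAG-configuration responsibility above can be sketched in plain Python. This is an illustrative sketch only (the task names and pipeline shape are hypothetical, not Nielsen's actual setup): a scheduler groups tasks into stages by topological order, so independent Spark jobs can run in parallel while dependent ones run sequentially.

```python
def parallel_stages(deps):
    """Group DAG tasks into stages: tasks within a stage share no
    unmet dependencies and could run in parallel; stages themselves
    run sequentially. `deps` maps each task to the tasks it depends on."""
    remaining = {task: set(d) for task, d in deps.items()}
    stages = []
    while remaining:
        # Tasks whose dependencies are all satisfied can run now.
        ready = {t for t, d in remaining.items() if not d}
        if not ready:
            raise ValueError("cycle detected; not a DAG")
        stages.append(sorted(ready))
        remaining = {t: d - ready for t, d in remaining.items() if t not in ready}
    return stages

# Hypothetical pipeline: two independent ingest jobs feed a transform,
# which feeds a publish step.
pipeline = {
    "ingest_meters": set(),
    "ingest_panel": set(),
    "transform": {"ingest_meters", "ingest_panel"},
    "publish": {"transform"},
}
print(parallel_stages(pipeline))
# → [['ingest_meters', 'ingest_panel'], ['transform'], ['publish']]
```

In a real deployment this staging is what an orchestrator such as Airflow derives from declared task dependencies; the sketch just makes the parallel-versus-sequential grouping explicit.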

Key Skills

  • Bachelor’s Degree in Computer Science, Data Science, Analytics or related field 
  • 3-5 years of experience with the following:
      • Coding in Python, PySpark, and SQL
      • Hive data storage technologies
      • Working within cloud-based infrastructure and tools such as AWS (e.g., EC2), GitLab, and Airflow
      • Working within the Software Development Life Cycle framework and applying software development best practices
      • Building monitoring checks and tools to ensure infrastructure and related processes are working as expected
  • Solid understanding of system design, data structures and performance optimization techniques
  • Excellent problem solving skills and attention to detail
  • Well-organized and able to handle and prioritize multiple assignments
  • Able to communicate effectively both orally and in writing
  • (Preferred) 2+ years of experience with visualization and reporting tools, e.g., Tableau
  • (Preferred) Experience deploying and maintaining Machine Learning models within Production environments
  • (Preferred) Experience working with Jira, Confluence, and Smartsheets
  • (Preferred) Experience with Alteryx, Databricks platforms
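The monitoring-checks skill listed above often amounts to simple batch-level data-quality assertions. This minimal sketch (field names and thresholds are hypothetical, chosen only for illustration) shows the shape such a check might take before data is promoted downstream.

```python
def check_batch(rows, required_fields, min_rows=1, max_null_rate=0.05):
    """Return a list of data-quality failures for a batch of records.
    An empty list means the batch passes the check."""
    failures = []
    if len(rows) < min_rows:
        failures.append(f"row count {len(rows)} below minimum {min_rows}")
        return failures
    for field in required_fields:
        nulls = sum(1 for r in rows if r.get(field) is None)
        rate = nulls / len(rows)
        if rate > max_null_rate:
            failures.append(f"{field}: null rate {rate:.1%} exceeds {max_null_rate:.1%}")
    return failures

# Hypothetical TV-panel records: one has a missing measurement.
batch = [
    {"household_id": "h1", "minutes_viewed": 42},
    {"household_id": "h2", "minutes_viewed": None},
]
print(check_batch(batch, ["household_id", "minutes_viewed"]))
# minutes_viewed has a 50% null rate, so one failure is reported
```

In production such checks typically run as a maintenance DAG task, with failures routed to alerting rather than printed.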

Company

Nielsen

Job Posted

4 months ago

Job Type

Full-time

Work Mode

Hybrid

Experience Level

3-7 Years

Category

Data & Analytics

Locations

Bengaluru, Karnataka, India

Qualification

Bachelor's or Master's

