Data Integration Engineer


Smart Summary
Join our team as a Senior Data Integration Engineer at RevonextSoft Technologies. We are looking for someone with 5 to 7 years of experience in integrating complex systems using AppFlow, PySpark, Python, AWS Lambda, API ingestions, Terraform, and Informatica Cloud or SnapLogic.

REQUIREMENTS:

We are seeking a highly skilled and experienced Senior Data Integration Engineer to join our dynamic team.

The ideal candidate will have a strong background in AppFlow, PySpark, Python, AWS Lambda, API ingestions, and Terraform, along with experience in Informatica Cloud or SnapLogic.

This role requires a professional with 5 to 7 years of relevant experience who is passionate about integrating complex systems and driving efficiency through innovative solutions.
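The API-ingestion side of the stack above can be illustrated with a minimal, hypothetical AWS Lambda handler that validates an ingestion event and derives an S3 landing key. The event shape, bucket name, and key convention are illustrative assumptions, not details from this posting; the actual S3 write (boto3) is omitted so the sketch stays self-contained.

```python
import json
from datetime import datetime, timezone

# Hypothetical landing bucket; a real deployment would inject this via a
# Terraform-managed environment variable rather than a constant.
LANDING_BUCKET = "example-raw-landing"

def build_landing_key(source: str, entity: str, ts: datetime) -> str:
    """Partition raw API payloads by source/entity/date (assumed convention)."""
    return f"{source}/{entity}/dt={ts:%Y-%m-%d}/{ts:%H%M%S}.json"

def handler(event: dict, context=None) -> dict:
    """Lambda-style entry point: validate the event and return the write plan."""
    for field in ("source", "entity", "payload"):
        if field not in event:
            return {"statusCode": 400, "error": f"missing field: {field}"}
    key = build_landing_key(event["source"], event["entity"],
                            datetime.now(timezone.utc))
    return {"statusCode": 200, "bucket": LANDING_BUCKET, "key": key,
            "bytes": len(json.dumps(event["payload"]).encode())}
```

In a real pipeline this handler would sit behind an API Gateway or EventBridge trigger, with downstream PySpark jobs reading from the partitioned landing prefix.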


Company

REVONEXTSOFT TECHNOLOGIES

Job Posted

2 months ago

Job Type

Contract

Work Mode

Remote

Experience Level

3-7 Years

Career Level

Senior Level

Category

Data & Analytics

Locations

Bengaluru, Karnataka, India

Qualification

Bachelor


Related Jobs

Test Engineer

REVONEXTSOFT TECHNOLOGIES

Bengaluru, Karnataka, India

Posted: 2 months ago

The Test Engineer position at Revonextsoft Technologies requires 3-5 years of experience with SWIFT messages for investment banking products and automation testing using Selenium. The role involves payments testing, defect tracking, and Agile methodologies. This is a remote contract opportunity based in Bengaluru, Karnataka, India.


Data Engineer-Data Analytics

Baker Hughes

Bengaluru, Karnataka, India

+4 more

Posted: a month ago

Data Engineer

Are you passionate about working with the team on commercially-facing development projects, typically involving large, complex data sets? Are you interested in driving business analytics to a new level of predictive analytics while leveraging big data tools and technologies? Join our Data Engineering Team!

The Data Engineering team helps solve our customers' toughest challenges, making flights safer, power cheaper, and oil & gas production safer for people and the environment by leveraging data and analytics. The Data Engineer will work with the team to create state-of-the-art data- and analytics-driven solutions, working across Baker Hughes to drive business analytics to a new level of predictive analytics while leveraging big data tools and technologies.

Partner with the best

As a Data Engineer, you will be part of a data engineering or cross-disciplinary team on commercially-facing development projects, typically involving large, complex data sets. These teams typically include statisticians, computer scientists, software developers, engineers, product managers, and end users, working in concert with partners in Baker Hughes business units. Potential application areas include remote monitoring and diagnostics across infrastructure and industrial sectors, financial portfolio risk assessment, and operations optimization.

As a Data Engineer, you will be responsible for:

- Performing a variety of data loads and data transformations.
- Applying working knowledge of methods for parsing, formatting, and transforming data into units consistent with analytical needs.
- Demonstrating proficiency in implementing logical/physical data models that support MDM best practices.
- Performing integration of multiple data source formats into a master data load.
- Being proficient in the use of at least one ETL tool.
- Communicating well, both orally and in writing.

Fuel your passion

To be successful in this role you will:

- Have a Bachelor's degree with a minimum of 2 years of working experience.
- Have exposure to scripting (Pig, Python, Perl, etc.).
- Be able to translate analytics problems into data requirements.
- Understand logical and physical data models, big data storage architecture, data modeling methodologies, metadata management, master data management, data lineage, and data profiling.
- Understand the technology landscape, stay up to date on current technology trends and new technology, and bring new ideas to the team.
- Demonstrate awareness of how to leverage curiosity and creativity to drive business impact, asking follow-up questions when presented with new data or projects.
- See the broader implications of an idea, present new ideas and concepts, and make connections among previously unrelated ideas.
- Have a deep passion for learning.

Work in a way that works for you

We recognize that everyone is different and that the way in which people want to work and deliver at their best is different for everyone too. In this role, we can offer the following flexible working pattern: working remotely from home or any other work location.

Working with us

Our people are at the heart of what we do at Baker Hughes. We know we are better when all of our people are developed, engaged and able to bring their whole authentic selves to work. We invest in the health and well-being of our workforce, train and reward talent and develop leaders at all levels to bring out the best in each other.
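The "parsing, formatting, and transforming data into units consistent with analytical needs" responsibility can be sketched as a small pure-Python transform. The field names and the pressure-unit conversion table are illustrative assumptions, not anything specified in the posting.

```python
# Normalize heterogeneous sensor readings to a single unit (bar) before
# loading. Conversion factors and record shape are illustrative assumptions.
PRESSURE_TO_BAR = {"bar": 1.0, "psi": 0.0689476, "kpa": 0.01}

def normalize_pressure(record: dict) -> dict:
    """Parse a raw record like {'value': '14.5', 'unit': 'psi'} into bar."""
    unit = record["unit"].lower()
    if unit not in PRESSURE_TO_BAR:
        raise ValueError(f"unknown pressure unit: {unit}")
    return {"value_bar": round(float(record["value"]) * PRESSURE_TO_BAR[unit], 4),
            "source_unit": unit}

def transform_batch(records):
    """Apply the normalization across a load, routing unparseable rows aside."""
    out, rejects = [], []
    for r in records:
        try:
            out.append(normalize_pressure(r))
        except (KeyError, ValueError):
            rejects.append(r)
    return out, rejects
```

Keeping a reject lane rather than failing the whole load is a common ETL convention; rejected rows can then be profiled separately, in line with the data-profiling skills the posting asks for.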


Data Engineer

Caterpillar Inc.

Bengaluru, Karnataka, India

Posted: 5 months ago

Job Description: Your Work Shapes the World at Caterpillar Inc.

When you join Caterpillar, you're joining a global team who cares not just about the work we do – but also about each other. We are the makers, problem solvers, and future world builders who are creating stronger, more sustainable communities. We don't just talk about progress and innovation here – we make it happen, with our customers, where we work and live. Together, we are building a better world, so we can all enjoy living in it.

Job Summary

We are seeking a skilled Data Scientist (Data Engineer) to join our Pricing Analytics Team. The incumbent will be responsible for building scalable, high-performance infrastructure and data-driven analytics applications that provide actionable insights. The position will be part of Caterpillar's fast-moving Global Parts Pricing organization, driving action and tackling challenges and problems that are critical to realizing superior business outcomes. The data engineer will work with data scientists, business intelligence analysts, and others as part of a team that assembles large, complex data sets, pipelines, apps, and data infrastructure that provide competitive advantage.

The preference for this role is to be based out of the Bangalore – Whitefield office.

What you will do

- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using AWS tools.
- Design, develop, and maintain performant and scalable applications.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability.
- Perform debugging, troubleshooting, modification, and testing of integration solutions.
- Operationalize developed jobs and processes.
- Create and maintain database infrastructure to process data at scale.
- Create solutions and methods to monitor systems and processes.
- Automate code testing and pipelines.
- Engage directly with business partners to participate in the design and development of data integration/transformation solutions.
- Engage with and actively seek industry best practice for continuous improvement.

What you will have

- BS in Computer Science, Data Science, Computer Engineering, or a related quantitative field.
- Development experience, preferably using Python and/or PySpark.
- Understanding of data structures, algorithms, profiling, and optimization.
- Understanding of SQL, ETL/ELT design, and data modeling techniques.
- Great verbal and written communication skills to collaborate cross-functionally and drive action.
- Ability to thrive in a fast-paced environment that delivers results.
- Passion for acquiring, analyzing, and transforming data to generate insights.
- Strong analytical ability, judgment, and problem analysis techniques.

This position may require 10% travel. Shift timing: 01:00 PM - 10:00 PM IST (EMEA shift).

Desired skills:

- MS in Computer Science, Data Science, Computer Engineering, or a related quantitative field.
- Experience with AWS cloud services (e.g. EMR, EC2, Lambda, Glue, CloudFormation, CloudWatch/EventBridge, ECR, ECS, Athena, Fargate, etc.).
- Experience administering and/or developing in Snowflake.
- Strong background working with version control systems (Git, etc.).
- Experience managing continuous integration systems; Azure Pipelines is a plus.
- Advanced level of experience with programming, data structures, and algorithms.
- Working knowledge of Agile software development methodology.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- A successful history of manipulating, processing, and extracting value from large disconnected datasets.

Skills desired:

Business Statistics: Knowledge of the statistical tools, processes, and practices to describe business results in measurable scales; ability to use statistical tools and processes to assist in making business decisions. Level Working Knowledge:
• Explains the basic decision process associated with specific statistics.
• Works with basic statistical functions on a spreadsheet or a calculator.
• Explains reasons for common statistical errors, misinterpretations, and misrepresentations.
• Describes characteristics of sample size, normal distributions, and standard deviation.
• Generates and interprets basic statistical data.

Accuracy and Attention to Detail: Understanding the necessity and value of accuracy; ability to complete tasks with high levels of precision. Level Working Knowledge:
• Accurately gauges the impact and cost of errors, omissions, and oversights.
• Utilizes specific approaches and tools for checking and cross-checking outputs.
• Processes limited amounts of detailed information with good accuracy.
• Learns from mistakes and applies lessons learned.
• Develops and uses checklists to ensure that information goes out error-free.

Analytical Thinking: Knowledge of techniques and tools that promote effective analysis; ability to determine the root cause of organizational problems and create alternative solutions that resolve these problems. Level Working Knowledge:
• Approaches a situation or problem by defining the problem or issue and determining its significance.
• Makes a systematic comparison of two or more alternative solutions.
• Uses flow charts, Pareto charts, fish diagrams, etc. to disclose meaningful data patterns.
• Identifies the major forces, events, and people impacting and impacted by the situation at hand.
• Uses logic and intuition to make inferences about the meaning of the data and arrive at conclusions.

What you will get:

- Work-Life Harmony: earned and medical leave; flexible work arrangements; relocation assistance.
- Holistic Development: personal and professional development through Caterpillar's employee resource groups across the globe; career development opportunities with global prospects.
- Health and Wellness: medical, life, and personal accident coverage; employee mental wellness assistance program.
- Financial Wellness: employee investment plan; pay for performance - annual incentive bonus plan.
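The "SQL, ETL/ELT design" requirement in the Caterpillar posting can be sketched compactly: land raw rows first, then transform with SQL inside the database (the ELT pattern). Here sqlite3 stands in for a warehouse such as Snowflake or Athena, and the table and column names are illustrative assumptions, not Caterpillar's schema.

```python
import sqlite3

# Extract/Load step: land raw, untyped rows exactly as received.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_parts (part_id TEXT, list_price TEXT, region TEXT)")
conn.executemany("INSERT INTO raw_parts VALUES (?, ?, ?)",
                 [("P1", "10.50", "EMEA"), ("P2", "bad", "EMEA"),
                  ("P3", "7.25", "APAC")])

# Transform step, in SQL: cast prices, drop unparseable rows, aggregate.
conn.execute("""
    CREATE TABLE region_price AS
    SELECT region, AVG(CAST(list_price AS REAL)) AS avg_price
    FROM raw_parts
    WHERE list_price GLOB '[0-9]*'
    GROUP BY region
""")
rows = dict(conn.execute("SELECT region, avg_price FROM region_price"))
```

Keeping the transform in SQL rather than application code is what distinguishes ELT from classic ETL, and is the usual approach when the warehouse (Snowflake, Athena) does the heavy lifting.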


Senior Data Engineer

Amazon

Bengaluru, Karnataka, India

Posted: a year ago

DESCRIPTION

North America Consumer is one of Amazon's largest businesses, including North America retail and third-party sellers, as well as an array of innovative new programs including Alexa Shopping and Amazon Business. The NA Consumer Finance organization is looking for an outstanding Data Engineer to build a robust and scalable data platform to support analytical and data capability for the team.

As a Data Engineer, you will be working in one of the world's largest and most complex data warehouse environments using the latest suite of AWS toolsets. You should have deep expertise in the design, creation, management, and business use of extremely large datasets, and be expert at designing, implementing, and maintaining stable, scalable, low-cost data infrastructure. In this role, you will build datasets that analysts and BIEs use to generate actionable insights. You should be passionate about working with huge data sets and love bringing datasets together to answer business questions and drive change. The successful candidate will be an expert with SQL and ETL (and general data wrangling) and have exemplary communication skills. The candidate will need to be a self-starter, comfortable with ambiguity in a fast-paced and ever-changing environment, and able to think big while paying careful attention to detail.

Responsibilities

You know and love working with data engineering tools, can model multidimensional datasets, and understand how to make appropriate data trade-offs. You will also have the opportunity to display your skills in the following areas:

· Design, implement, and support a platform providing access to large datasets
· Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL
· Own, maintain, and scale a Redshift cluster and manage other AWS resources
· Model data and metadata for ad hoc and pre-built reporting
· Collaborate with Business Intelligence Engineers and Analysts to deliver high-quality data architecture and pipelines
· Recognize and adopt best practices in engineering and operational excellence: data integrity, validation, automation, and documentation
· Create coherent logical data models that drive physical design

We are open to hiring candidates to work out of the following location: Bangalore, KA, IND.

BASIC QUALIFICATIONS

· 5+ years of data engineering experience
· Experience with data modeling, warehousing, and building ETL pipelines
· Experience with SQL
· Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
· Experience mentoring team members on best practices

PREFERRED QUALIFICATIONS

· Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
· Experience operating large data warehouses
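The "model multidimensional datasets" skill the Amazon posting asks for typically means star schemas: a fact table joined to dimension tables. A minimal sketch, with sqlite3 standing in for Redshift and all table, column, and category names invented for illustration:

```python
import sqlite3

# Tiny star schema: one fact table keyed to one dimension, queried the way
# a BIE-facing reporting dataset would be. Names are illustrative only.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales (product_key INTEGER, units INTEGER, revenue REAL);
    INSERT INTO dim_product VALUES (1, 'Devices'), (2, 'Retail');
    INSERT INTO fact_sales VALUES (1, 3, 90.0), (2, 5, 50.0), (1, 2, 60.0);
""")

# Roll the fact table up by a dimension attribute -- the canonical
# star-schema query shape behind ad hoc and pre-built reports.
report = dict(db.execute("""
    SELECT d.category, SUM(f.revenue)
    FROM fact_sales f JOIN dim_product d USING (product_key)
    GROUP BY d.category
"""))
```

Separating facts (additive measures) from dimensions (descriptive attributes) is the data trade-off the responsibilities bullet alludes to: wider dimensions cost storage but keep report queries to a single join.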