Data Engineer II - SUPPORT SERVICES - CTO Head


Smart Summary (powered by Roshi)
Join Kotak Group as a Data Engineer II to help build a cutting-edge data platform for Kotak Bank's digital transformation journey. Work on revamping the data platform to a scalable AWS cloud-based solution. This role involves software development in Python on AWS, data engineering with Spark, advanced SQL, and data modeling for analytics. Opportunity to work on greenfield projects and be part of the fintech domain.

Data Engineer II (Experience: 2-5 years)

What we offer

Our mission is simple: building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking with a technology-first approach in everything we do, aiming to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.

About our team

DEX, Kotak's Data Exchange, is the central data org for Kotak Bank and manages the bank's entire data experience. The org comprises the Data Platform, Data Engineering, and Data Governance charters and works closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which gives engineers a great opportunity to build things from scratch and deliver a best-in-class data lakehouse solution. The primary skills for this team are software development, preferably in Python, for platform building on AWS; data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development; and advanced SQL and data modelling for analytics.
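
To give a concrete flavour of the Spark-based ETL work described above, here is a minimal PySpark sketch. The bucket names, table layout, and columns are illustrative assumptions, not Kotak's actual data model.

```python
# Minimal PySpark ETL sketch. Paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw transactions from an assumed S3 landing zone.
raw = spark.read.parquet("s3://example-landing-zone/transactions/")

# Transform: basic cleansing and a daily aggregate per account.
daily = (
    raw.filter(F.col("amount").isNotNull())
       .withColumn("txn_date", F.to_date("txn_ts"))
       .groupBy("account_id", "txn_date")
       .agg(F.sum("amount").alias("total_amount"),
            F.count("*").alias("txn_count"))
)

# Load: write partitioned Parquet to the curated layer of the lake.
(daily.write.mode("overwrite")
      .partitionBy("txn_date")
      .parquet("s3://example-curated-zone/daily_account_summary/"))
```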

The org is expected to grow to a 100+ member team, primarily based out of Bangalore, comprising ~10 sub-teams independently driving their charters.

As a member of this team, you get the opportunity to learn the fintech space, one of the most sought-after domains today; to be an early member of Kotak's digital transformation journey; to learn and leverage technology to build complex data platform solutions, including real-time, micro-batch, batch, and analytics solutions, in a programmatic way; and to think ahead, building systems that can be operated by machines using AI technologies.
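
As one illustration of the micro-batch style mentioned above, the following Spark Structured Streaming sketch recomputes a one-minute aggregate on each trigger. The source path, schema, and checkpoint location are hypothetical.

```python
# Illustrative micro-batch sketch with Spark Structured Streaming.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("example-micro-batch").getOrCreate()

schema = (StructType()
          .add("account_id", StringType())
          .add("amount", DoubleType())
          .add("txn_ts", TimestampType()))

# Treat newly arriving JSON files as an unbounded stream.
events = spark.readStream.schema(schema).json("s3://example-landing-zone/events/")

# A one-minute windowed aggregate, recomputed each micro-batch.
per_minute = (events
              .withWatermark("txn_ts", "5 minutes")
              .groupBy(F.window("txn_ts", "1 minute"), "account_id")
              .agg(F.sum("amount").alias("total_amount")))

query = (per_minute.writeStream
         .outputMode("append")
         .format("parquet")
         .option("path", "s3://example-curated-zone/per_minute/")
         .option("checkpointLocation", "s3://example-checkpoints/per_minute/")
         .trigger(processingTime="60 seconds")  # one micro-batch per minute
         .start())
```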

 

The data platform org is divided into 3 key verticals:

Data Platform

This vertical is responsible for building the data platform, which includes optimized storage for the entire bank and a centralized data lake; managed compute and orchestration frameworks, including serverless data solutions; a central data warehouse for extremely high-concurrency use cases; connectors for different sources; a customer feature repository; cost-optimization solutions such as EMR optimizers; and automation and observability capabilities for Kotak's data platform. The team will also be the center for Data Engineering excellence, driving trainings and knowledge-sharing sessions with the large data consumer base within Kotak.
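
For a sense of the platform automation involved, here is a small boto3 sketch that starts a managed, serverless Glue job run and polls its state. The job name and region are assumptions for illustration.

```python
# Hypothetical platform-automation sketch: trigger a Glue job
# (serverless Spark) and wait for it to reach a terminal state.
import time
import boto3

glue = boto3.client("glue", region_name="ap-south-1")  # assumed region

run = glue.start_job_run(JobName="example-curation-job")  # assumed job
run_id = run["JobRunId"]

while True:
    state = glue.get_job_run(JobName="example-curation-job", RunId=run_id)
    status = state["JobRun"]["JobRunState"]
    if status in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        print(f"Run {run_id} finished with state {status}")
        break
    time.sleep(30)  # poll every 30 seconds
```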

 

Data Engineering

This team will own data pipelines for thousands of datasets, be skilled at sourcing data from 100+ source systems, and enable data consumption for 30+ data analytics products. The team will learn and build data models in a config-based, programmatic way, and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank that cuts across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, Branch Managers, and all analytics use cases.
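
The config-based, programmatic modelling mentioned above might look something like the sketch below, where a dataset is defined entirely by a small YAML entry. The config layout, dataset name, and paths are hypothetical assumptions.

```python
# Sketch of a config-driven pipeline: one dataset, one config entry.
import yaml  # requires PyYAML (pip install pyyaml)
from pyspark.sql import SparkSession

CONFIG = """
dataset: daily_account_summary
source: s3://example-landing-zone/transactions/
target: s3://example-curated-zone/daily_account_summary/
partition_by: [txn_date]
sql: |
  SELECT account_id,
         to_date(txn_ts)  AS txn_date,
         SUM(amount)      AS total_amount
  FROM   source
  GROUP  BY account_id, to_date(txn_ts)
"""

def run_pipeline(cfg: dict) -> None:
    """Build one dataset purely from its config entry."""
    spark = SparkSession.builder.appName(cfg["dataset"]).getOrCreate()
    spark.read.parquet(cfg["source"]).createOrReplaceTempView("source")
    result = spark.sql(cfg["sql"])
    (result.write.mode("overwrite")
           .partitionBy(*cfg["partition_by"])
           .parquet(cfg["target"]))

run_pipeline(yaml.safe_load(CONFIG))
```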

 

Data Governance

This team will be the central data governance team for Kotak Bank, managing metadata platforms, data privacy, data security, data stewardship, and the data quality platform.

If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems spanning multiple source systems, then this is the team for you.

 

Your day-to-day role will include:

  • Drive business decisions with technical input and lead the team.
  • Design, implement, and support data infrastructure built from scratch.
  • Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA (see the orchestration sketch after this list).
  • Extract, transform, and load data from various sources using SQL and AWS big data technologies.
  • Explore and learn the latest AWS technologies to enhance capabilities and efficiency.
  • Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis.
  • Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
  • Build data platforms, data pipelines, or data management and governance tools.
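
As referenced in the list above, an MWAA-scheduled orchestration might look like the following Airflow sketch, chaining a Glue ETL step into a Redshift load. The DAG, job, connection, and table names are assumptions, and the `schedule` argument assumes Airflow 2.4+.

```python
# Hypothetical Airflow DAG of the kind MWAA would schedule.
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator
from airflow.providers.amazon.aws.transfers.s3_to_redshift import S3ToRedshiftOperator

with DAG(
    dag_id="example_daily_curation",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Step 1: run the (assumed) Glue curation job.
    curate = GlueJobOperator(
        task_id="curate_transactions",
        job_name="example-curation-job",
    )

    # Step 2: COPY the curated Parquet into a Redshift table.
    load = S3ToRedshiftOperator(
        task_id="load_to_redshift",
        schema="analytics",
        table="daily_account_summary",
        s3_bucket="example-curated-zone",
        s3_key="daily_account_summary/",
        copy_options=["FORMAT AS PARQUET"],
        redshift_conn_id="redshift_default",
    )

    curate >> load
```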

 

BASIC QUALIFICATIONS for Data Engineer / SDE in Data

  • Bachelor's degree in Computer Science, Engineering, or a related field
  • Experience in data engineering
  • Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR
  • Experience with data pipeline tools such as Airflow and Spark
  • Experience with data modeling and data quality best practices
  • Excellent problem-solving and analytical skills
  • Strong communication and teamwork skills
  • Experience in at least one modern scripting or programming language, such as Python, Java, or Scala
  • Strong advanced SQL skills (see the brief example after this list)
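
As noted in the list above, here is a brief example of the kind of advanced SQL in scope, run through Spark SQL: a window function ranking each account's largest daily totals. The table name is a hypothetical carried over from the earlier sketches.

```python
# Window-function example via Spark SQL; table name is hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("example-sql").getOrCreate()

top_days = spark.sql("""
    SELECT account_id,
           txn_date,
           total_amount,
           RANK() OVER (PARTITION BY account_id
                        ORDER BY total_amount DESC) AS spend_rank
    FROM   analytics.daily_account_summary
""").filter("spend_rank <= 3")  # keep each account's top three days

top_days.show()
```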

 

PREFERRED QUALIFICATIONS

  • AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow
  • Prior experience in the Indian banking segment and/or fintech is desired
  • Experience with Non-relational databases and data stores
  • Building and operating highly available, distributed data processing systems for large datasets
  • Professional software engineering and best practices for the full software development life cycle
  • Designing, developing, and implementing different types of data warehousing layers
  • Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions
  • Building scalable data infrastructure and understanding distributed systems concepts
  • SQL, ETL, and data modelling
  • Ensuring the accuracy and availability of data to customers
  • Proficient in at least one scripting or programming language for handling large volume data processing
  • Strong presentation and communication skills
Company: Kotak Mahindra Bank
Job Posted: 6 months ago
Job Type: Full-time
Work Mode: On-site
Experience Level: 3-7 Years
Category: Data & Analytics
Location: Bengaluru, Karnataka, India
Qualification: Bachelor's degree

