
Data Integration Engineer

The Data Integration Engineer is responsible for architecting and implementing data ingestion, validation, and transformation pipelines at Syndigo. In collaboration with the Product, Development, and Enterprise Data teams, the Data Integration Engineer will design and maintain batch and streaming integrations across a variety of data domains and platforms. The ideal candidate is experienced in big data and cloud architecture and is excited to advance innovative analytics solutions! The Data Integration Engineer should be able to communicate ideas and concepts effectively to peers and have experience leading projects that support business objectives and goals.

ESSENTIAL DUTIES AND RESPONSIBILITIES:

  • Take ownership of building solutions and proposing architectural designs for efficient and timely data ingestion and transformation processes geared toward analytics workloads
  • Manage code deployment to various environments
  • Be proficient at positively critiquing and suggesting improvements via code reviews
  • Work with stakeholders to define and develop data ingestion, validation, and transformation pipelines (a minimal sketch of such a pipeline follows this list)
  • Troubleshoot data pipelines and resolve issues in alignment with SDLC
  • Diagnose and troubleshoot data issues, recognizing common data integration and transformation patterns
  • Estimate, track, and communicate status of assigned items to a diverse group of stakeholders
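
For illustration, a minimal sketch of the kind of ingestion, validation, and transformation pipeline described above, assuming Spark on Databricks; the paths, column names, and table locations are hypothetical placeholders:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object ProductFeedPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("product-feed-ingest").getOrCreate()

    // Ingest: read a raw batch drop from a landing zone (e.g. ADLS Gen2).
    val raw = spark.read
      .option("header", "true")
      .csv("abfss://landing@example.dfs.core.windows.net/product_feed/")

    // Validate: quarantine rows missing required identifiers.
    val valid = raw.filter(col("sku").isNotNull && col("price").isNotNull)
    val rejected = raw.exceptAll(valid)

    // Transform: normalize types and add load metadata for analytics workloads.
    val curated = valid
      .withColumn("price", col("price").cast("decimal(18,2)"))
      .withColumn("ingested_at", current_timestamp())

    // Persist curated and rejected sets as Delta tables.
    curated.write.format("delta").mode("append").save("/mnt/curated/product_feed")
    rejected.write.format("delta").mode("append").save("/mnt/quarantine/product_feed")
  }
}
```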

REQUIREMENTS:

  • 2-4 years of experience in developing and architecting large-scale data pipelines in a cloud environment
  • Demonstrated expertise in object-oriented programming in Scala or Python (Scala preferred), and in Spark SQL
  • Experience with Databricks, including Delta Lake
  • Experience with Azure and cloud environments, including Azure Data Lake Storage (Gen2), Azure Blob Storage, Azure Tables, Azure SQL Database, Azure Data Factory
  • Experience with ETL/ELT patterns, preferably using Azure Data Factory and Databricks jobs
  • Fundamental knowledge of distributed data processing and storage
  • Fundamental knowledge of working with structured, unstructured, and semi-structured data
  • Excellent analytical and problem-solving skills
  • Ability to effectively manage time and adjust to changing priorities
  • Bachelor’s degree preferred, but not required.
  • Design, develop, and test highly scalable software applications using software best practices and various design patterns
  • Develop innovative, robust reporting capabilities for the platform using various insights and analytics.
  • Design, develop, and enhance REST APIs, testing them with tools such as Postman.
  • Design and architect end-to-end solutions and be responsible for their deployment and maintenance in our AWS cloud.
  • Implement configurable data mappings to allow custom data transformations (see the sketch after this list).
  • Troubleshoot software and resolve issues.
  • Write automated unit tests and conduct code reviews.
  • Collaborate with Product, DevOps and QA teams on requirements, operations, and automation.
  • Practice Agile software development using Jira/Confluence.
  • Collaborate with team members across multiple geographic locations as well as time zones.
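
For illustration, a minimal sketch of a configurable data mapping applied in Spark; the rule format (target column plus SQL expression) and the sample columns are assumptions for this sketch, not an established internal format:

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions.expr

// One mapping rule: the target column and the SQL expression that produces it.
case class MappingRule(target: String, expression: String)

object ConfigurableMapper {
  // Apply each rule in order so custom transformations stay configuration-driven.
  def applyMapping(input: DataFrame, rules: Seq[MappingRule]): DataFrame =
    rules.foldLeft(input) { (df, rule) => df.withColumn(rule.target, expr(rule.expression)) }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("configurable-mapping").getOrCreate()
    import spark.implicits._

    val source = Seq(("sku-1", "19.99", "usd")).toDF("sku", "list_price", "currency")

    // Rules could equally be loaded from JSON or a control table.
    val rules = Seq(
      MappingRule("price", "cast(list_price as decimal(18,2))"),
      MappingRule("currency", "upper(currency)")
    )

    applyMapping(source, rules).show()
  }
}
```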

Technical Skills and Experience:

  • 2+ years of experience in designing and developing software using Microsoft technologies
  • Required:
    • Proficient in JavaScript
    • Experience with REST API
    • Experience with GitHub and Azure DevOps
    • Experience with SQL
  • Plus:
    • DevOps experience in AWS
    • Knowledge of e-commerce platforms.
    • Knowledge of data analysis.
    • Experience with Elasticsearch.
Company: Syndigo
Job Posted: a year ago
Job Type: Full-time
Work Mode: Remote
Experience Level: 3-7 Years
Category: Engineering
Location: Gurgaon, Haryana, India
Qualification: Bachelor's degree
