

Software Engineer for Data (Azure Databricks)



Summary
Seeking a Software Engineer for Data with expertise in Azure Databricks. The ideal candidate should be proficient in software development, testing, data modeling, data engineering, data architecture, cloud computing, big data technologies, scripting, Databricks, Jupyter notebooks, Spark clusters, data ingestion, security, governance, and possess strong problem-solving, communication, collaboration, leadership, and management skills. This full-time on-site role at Philips, Chennai, India, requires 8-12 years of experience.

Job description 


Experience: 8-14 years

 

Software Engineer with Data Skills:


An extremely hands-on individual, with at least minimal knowledge of software architecture and engineering in the Azure or AWS area.

Hard Skills:

Strong software development skills:

Have a solid understanding of software development principles and be able to write code in multiple programming languages. This skill is crucial for creating practical and realistic architectural designs.

 

Knowledge of software design patterns:

Deep understanding of software design patterns and how they can be applied in different contexts. This skill allows them to identify potential problems and provide effective solutions.
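
For illustration, a minimal sketch of one such pattern, the Strategy pattern, in Python (the class names, such as CsvExporter, are hypothetical and chosen only for this example):

    from abc import ABC, abstractmethod
    import json

    class ExportStrategy(ABC):
        """Interface that every export strategy must implement."""
        @abstractmethod
        def export(self, rows: list[dict]) -> str: ...

    class CsvExporter(ExportStrategy):
        def export(self, rows: list[dict]) -> str:
            header = ",".join(rows[0].keys())
            lines = [",".join(str(v) for v in row.values()) for row in rows]
            return "\n".join([header, *lines])

    class JsonExporter(ExportStrategy):
        def export(self, rows: list[dict]) -> str:
            return json.dumps(rows)

    class ReportWriter:
        """Context object: the export format can be swapped without changing this class."""
        def __init__(self, strategy: ExportStrategy):
            self.strategy = strategy

        def write(self, rows: list[dict]) -> str:
            return self.strategy.export(rows)

    writer = ReportWriter(CsvExporter())
    print(writer.write([{"id": 1, "name": "sensor-a"}]))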

 

Knowledge of software testing, testing frameworks, and automated testing:

Deep understanding of testing, testing frameworks, and automated testing, as applied in different contexts.

Software testing:

Have a solid understanding of testing methodologies, including manual testing, automated testing, and test-driven development. They should be able to create comprehensive test plans and execute tests to ensure the quality of the software.

 

Testing frameworks:

Testing frameworks such as Selenium, JUnit, TestNG, NUnit, and PyTest are widely used in software testing. Should be familiar with these frameworks and have experience implementing them to automate tests, validate code changes, and ensure software quality.
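
As a sketch of what hands-on PyTest experience looks like (the function under test, normalize_email, is hypothetical):

    import pytest

    def normalize_email(raw: str) -> str:
        """Function under test: trims whitespace and lowercases an email address."""
        return raw.strip().lower()

    @pytest.mark.parametrize("raw, expected", [
        ("  User@Example.COM ", "user@example.com"),
        ("plain@example.com", "plain@example.com"),
    ])
    def test_normalize_email(raw, expected):
        assert normalize_email(raw) == expected

    def test_normalize_email_rejects_none():
        # None has no .strip(), so the function raises AttributeError.
        with pytest.raises(AttributeError):
            normalize_email(None)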

 

Automated testing:

Automated testing involves using tools and frameworks to automate the testing process. Should have experience with continuous integration and continuous testing, which involves automatically running tests after each code change to ensure the software remains functional. They should also be familiar with test automation frameworks such as Cucumber, Behave, and Robot Framework.
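
A minimal sketch of how a framework such as Behave binds plain-language scenarios to Python step definitions (the feature text and application logic here are hypothetical):

    # features/steps/login_steps.py -- step definitions for a feature file such as:
    #   Scenario: Successful login
    #     Given a registered user "alice"
    #     When she logs in with the correct password
    #     Then she sees her dashboard
    from behave import given, when, then

    @given('a registered user "{username}"')
    def step_registered_user(context, username):
        context.user = {"name": username, "password": "secret"}

    @when("she logs in with the correct password")
    def step_login(context):
        # A real suite would drive the application here (e.g. through Selenium).
        context.logged_in = context.user["password"] == "secret"

    @then("she sees her dashboard")
    def step_dashboard(context):
        assert context.logged_in, "login flow failed"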


Experience with data modeling, data engineering, and data architecture:

A strong understanding of data modeling and data architecture is essential. Be able to design and develop data models that accurately reflect the needs of the business and ensure that the data architecture supports the company's goals.

Data processing and transformation:

Responsible for designing and implementing data processing pipelines to extract, transform, and load (ETL) data from various sources. Have experience with tools such as Apache Spark, Apache Kafka, and Azure Data Factory to build data processing pipelines.
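
A minimal PySpark sketch of such an ETL pipeline (the paths, columns, and table layout are hypothetical):

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders-etl").getOrCreate()

    # Extract: read raw orders and a customer dimension.
    orders = spark.read.json("/raw/orders/")
    customers = spark.read.parquet("/curated/customers/")

    # Transform: drop cancelled orders, join, and aggregate daily revenue.
    daily_revenue = (
        orders.filter(F.col("status") != "cancelled")
              .join(customers, "customer_id")
              .groupBy("order_date", "country")
              .agg(F.sum("amount").alias("revenue"))
    )

    # Load: write the result partitioned by date for downstream consumers.
    daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet("/curated/daily_revenue/")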

 

Database design and management:

Have a deep understanding of database design principles and be able to design and manage databases such as MySQL, Oracle, and PostgreSQL. They should also be familiar with NoSQL databases such as MongoDB and Cassandra.
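
For illustration, a small normalized schema, shown with Python's built-in sqlite3 module for portability; the same DDL carries over to MySQL, Oracle, or PostgreSQL with minor dialect changes (table and column names are illustrative):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        -- One row per customer; the natural key is kept unique.
        CREATE TABLE customers (
            customer_id INTEGER PRIMARY KEY,
            email       TEXT NOT NULL UNIQUE
        );
        -- Orders reference customers via a foreign key (third normal form).
        CREATE TABLE orders (
            order_id    INTEGER PRIMARY KEY,
            customer_id INTEGER NOT NULL REFERENCES customers(customer_id),
            order_date  TEXT NOT NULL,
            amount_usd  REAL NOT NULL
        );
        CREATE INDEX idx_orders_customer ON orders(customer_id);
    """)
    conn.execute("INSERT INTO customers (email) VALUES ('alice@example.com')")
    conn.execute("INSERT INTO orders (customer_id, order_date, amount_usd) VALUES (1, '2024-01-15', 42.0)")
    print(conn.execute("SELECT COUNT(*) FROM orders").fetchone())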

 

Cloud computing:

Be familiar with cloud computing platforms such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) to leverage cloud services for data storage, processing, and analysis.

 

Big data technologies:

Expertise in big data technologies such as Hadoop, Hive, Pig, and Spark to manage and process large volumes of data.

 

Data warehousing:

Have experience designing and building data warehousing solutions on top of Azure and Databricks, including Azure Synapse and Databricks data warehousing concepts.
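
A minimal sketch of materializing a fact table as a Delta table on Databricks (the database, table, and path names are hypothetical, and the example assumes the warehouse schema already exists):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("warehouse-load").getOrCreate()

    # Build a fact table from curated data.
    fact_sales = spark.read.parquet("/curated/daily_revenue/")

    # Delta is the default table format on Databricks; features such as
    # time travel and OPTIMIZE become available once data lands in a Delta table.
    (fact_sales.write
        .format("delta")
        .mode("overwrite")
        .saveAsTable("warehouse.fact_daily_revenue"))

    # Downstream BI tools can query the table with plain SQL.
    spark.sql(
        "SELECT country, SUM(revenue) FROM warehouse.fact_daily_revenue GROUP BY country"
    ).show()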

 

Scripting and programming:

Have strong scripting and programming skills to automate data processes and build custom tools. This involves using languages such as Python, Java, Scala, and SQL.


Knowledge of data lakes, ETL/ELT, and Data-as-a-Service:

Have extensive experience with data technologies such as data lakes, ETL/ELT, and Data-as-a-Service. This knowledge enables them to make informed decisions about the company's data architecture and how it can be used to provide value.

Databricks knowledge:

Candidates with expertise in data lakes should have a strong understanding of Databricks, a cloud-based platform used for data engineering, machine learning, and analytics. They should be able to use Databricks to build and deploy data pipelines, create data models, and perform advanced analytics.

 

Jupyter notebooks:

Be familiar with Jupyter notebooks, which are interactive coding environments that allow users to create and share code, visualizations, and data analyses. They should be able to use Jupyter notebooks to develop and test data pipelines, create and execute machine learning models, and perform exploratory data analysis.
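
A typical notebook cell for exploratory data analysis might look like the following pandas sketch (the file and column names are hypothetical; the plot assumes matplotlib is installed, as it is in most notebook environments):

    import pandas as pd

    # Load a sample, inspect structure, and summarize.
    df = pd.read_csv("sensor_readings.csv", parse_dates=["timestamp"])

    df.info()                      # column types and null counts
    print(df.describe())           # summary statistics for numeric columns
    print(df["device_id"].value_counts().head())

    # Quick visual check of a time series; in Jupyter the plot renders inline.
    df.set_index("timestamp")["temperature"].resample("1h").mean().plot(
        title="Hourly mean temperature"
    )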

 

Spark clusters:

Have extensive experience working with Spark clusters, which are high-performance computing clusters used for big data processing. They should be able to configure and manage Spark clusters, optimize their performance, and use them to process large volumes of data.
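
A sketch of the kind of cluster-level knobs involved; the values shown are illustrative and would be tuned per workload and cluster size:

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
            .appName("tuned-batch-job")
            .config("spark.executor.memory", "8g")          # heap per executor
            .config("spark.executor.cores", "4")            # concurrent tasks per executor
            .config("spark.sql.shuffle.partitions", "400")  # parallelism after shuffles
            .config("spark.sql.adaptive.enabled", "true")   # let AQE coalesce skewed shuffles
            .getOrCreate()
    )

    df = spark.range(0, 10_000_000)
    print(df.selectExpr("sum(id)").first())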

 

Data ingestion and processing:

Deep understanding of data ingestion and processing techniques used in data lakes. They should be able to design and develop data pipelines that can ingest data from various sources, transform and process data using tools such as Apache Spark, and store data in data lakes such as Amazon S3, Azure Data Lake Storage, or Google Cloud Storage.
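
A minimal PySpark ingestion sketch that enforces a schema at the boundary and lands partitioned data in Azure Data Lake Storage (the abfss URI, container, and columns are placeholders):

    from pyspark.sql import SparkSession
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

    spark = SparkSession.builder.appName("ingest-events").getOrCreate()

    # Enforce a schema at the boundary instead of relying on inference.
    schema = StructType([
        StructField("event_id", StringType(), nullable=False),
        StructField("event_time", TimestampType(), nullable=False),
        StructField("device_id", StringType(), nullable=True),
        StructField("value", DoubleType(), nullable=True),
    ])

    raw = spark.read.schema(schema).json("/landing/events/")

    # Land the data in the lake, partitioned for pruning.
    (raw.write
        .mode("append")
        .partitionBy("device_id")
        .parquet("abfss://datalake@youraccount.dfs.core.windows.net/bronze/events/"))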

 

Data security:

Be familiar with data security best practices, including access control, data encryption, and network security. They should be able to design and implement data security measures to protect data in transit and at rest.
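
For encryption at rest, a minimal sketch using the widely used cryptography package (the plaintext is illustrative; in practice the key would come from a secrets manager such as Azure Key Vault, never stored alongside the data):

    from cryptography.fernet import Fernet

    # Symmetric encryption of a sensitive field before it is persisted.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    token = cipher.encrypt(b"ssn=123-45-6789")
    print(token)                   # safe to persist
    print(cipher.decrypt(token))   # recoverable only with the key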

 

Data governance:

Have a good understanding of data governance principles and practices, and experience with data quality and governance practices to ensure the accuracy, completeness, and reliability of data. They should be able to implement data quality checks, create and enforce data policies, establish data quality standards, and ensure compliance with relevant data regulations.
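
A minimal sketch of automated data quality checks in pandas (the rule set and column names are hypothetical):

    import pandas as pd

    def run_quality_checks(df: pd.DataFrame) -> list[str]:
        """Return a list of human-readable data-quality violations."""
        failures = []
        if df["order_id"].duplicated().any():
            failures.append("order_id is not unique")
        if df["amount"].lt(0).any():
            failures.append("negative amounts found")
        if df["customer_id"].isna().any():
            failures.append("missing customer_id values")
        return failures

    df = pd.DataFrame({
        "order_id": [1, 2, 2],
        "customer_id": [10, None, 12],
        "amount": [19.99, -5.0, 42.0],
    })
    for failure in run_quality_checks(df):
        print("FAILED:", failure)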

 

Cloud computing:

Have experience working with cloud computing platforms such as Amazon Web Services, Microsoft Azure, and Google Cloud Platform. They should be able to leverage cloud services to create scalable and cost-effective data lake architectures.

 

Big data technologies:

Be familiar with other big data technologies such as Hadoop, Hive, Pig, and Kafka. They should be able to use these technologies in conjunction with data lakes to create robust and scalable data processing solutions.
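
A minimal sketch of consuming events from Kafka with the kafka-python client, as a feeder for a data lake (the topic, broker address, and message shape are placeholders):

    import json
    from kafka import KafkaConsumer  # kafka-python package

    consumer = KafkaConsumer(
        "orders",
        bootstrap_servers="localhost:9092",
        group_id="lake-ingest",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
        auto_offset_reset="earliest",
    )

    for message in consumer:
        record = message.value
        # A real pipeline would buffer records and flush them to the lake in batches.
        print(record["order_id"], record.get("amount"))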


Soft Skills:

Excellent problem-solving skills:

Have excellent problem-solving skills and be able to identify and address potential problems with the software architecture before they become major issues.

 

Communication and collaboration:

Effective communication and collaboration skills are critical. Be able to clearly communicate complex technical concepts to non-technical stakeholders and collaborate effectively with cross-functional teams.

 

Leadership and management:

As a key member of the technical team, have strong leadership and management skills. Be able to lead technical teams and manage projects effectively, ensuring that the software architecture is delivered on time and to a high standard.

 


Company: Philips

Job Posted: 9 months ago

Job Type: Full-time

Work Mode: On-site

Experience Level: 8-12 Years

Category: Software Engineering

Location: Chennai, Tamil Nadu, India

Qualification: Bachelor's or Master's degree

Related Jobs


Software Data Engineer (Azure and SQL)

HP

Chennai, Tamil Nadu, India

Posted: 8 months ago

Software Data Engineer role at HP in Chennai, Tamil Nadu, India. Responsible for data collection, analysis, and ensuring data integrity. Leads project teams in designing secure and performant data solutions. Requires 7-10 years of work experience in the data analytics or engineering field.


Lead Software Engineer - Test (Performance)

Freshworks

Chennai, Tamil Nadu, India

Posted: a year ago

Job Description

About the Role

Involved in every phase of the SDLC, the Lead Software Engineer in Test (Performance) at Freshworks takes complete ownership of ensuring the performance and scalability of web applications and microservices by performance-testing the organization's cutting-edge projects. A performance test engineer's primary responsibilities are creating and maintaining performance test plans, using load testing tools to inject load, analyzing metrics from application and system logs, and simulating system behavior to improve the performance and reliability of the applications. The candidate should also have enthusiasm for troubleshooting, analyzing, and resolving complex problems, must demonstrate strong problem-solving and communication skills, and be prepared to be an expert performance engineering resource on multiple initiatives of diverse scopes. This position offers the candidate several opportunities to learn and test world-class B2B SaaS products built using cutting-edge technologies.

Responsibilities

- Gather performance testing requirements; analyze and design performance specifications, define the performance test strategy, create performance test plans, and develop performance scripts for both web (front end and back end) and microservices.
- Execute performance tests for benchmarking, identifying bottlenecks, and determining limits of critical factors.
- Identify and isolate performance issues on all layers of the application stack, including network, OS, application, and database; analyze root causes of performance issues and provide corrective actions.
- Identify memory-level and thread-level issues using heap/thread dumps, and analyze garbage collection logs using GC analysis tools.
- Apply deep knowledge of SRE activities for business-function health, alerting, notification, and monitoring through continuous engagement with architects, product engineering, and DevOps.
- Analyze system memory, CPU, and run queue; identify performance bottlenecks and remedies.
- Set up performance test infrastructure by understanding system environments: shared resources, components, services, CPU, memory, storage, network, etc.
- Create continuous integration and continuous delivery (CI/CD) infrastructure and processes to run QA performance scripts.
- Analyze performance test results, provide clear and concise reports with recommendations and improvement plans, and generate performance test summary reports for every release.
- Work closely with development teams, architects, and engineers to test their products under load and make recommendations to improve performance, reliability, and scalability.
- Suggest new tools and techniques to improve performance testing efficiency, and implement best-in-class performance testing practices for Freshworks.
- Coordinate with cross-product teams and provide solutions based on their performance testing requirements.

Qualifications

- 7 to 10 years of strong experience in performance testing/engineering, with a good understanding of performance testing concepts.
- Solid experience assessing the performance, scalability, and resiliency of large-scale web applications, APIs, and backend services, with an understanding of multi-tiered and microservice architectures.
- Extensive knowledge and hands-on experience with any of the performance testing and monitoring tools (JMeter, HP LoadRunner, Gatling).
- Experience with APM toolsets for monitoring, profiling, and tuning, such as AppDynamics, New Relic, Grafana, ELK, and similar tools.
- Understanding of various performance metrics (CPU, memory, disk, and network).
- Good knowledge of cloud computing platforms (AWS in particular), containers (Docker), Kubernetes, web/UI JavaScript frameworks (e.g., AngularJS, NodeJS, ReactJS), REST, JSON, and XML.
- Good to have: experience creating monitoring dashboards in Grafana.
- Experience with databases/SQL (e.g., MySQL, RDS, Elasticsearch, Postgres, MongoDB, DynamoDB).
- Experience with message brokers (e.g., Kafka, RabbitMQ).
- Experience testing with containers, cloud, virtualization, and configuration management.
- Experience setting up a high-volume load model by understanding the product architecture.
- Solid data analysis and problem-solving skills.
- A strong, self-driven collaborator with the ability to work in diverse teams as a contributing member.