
Consultant - Snowflake - T&H


Job Summary

As a Data Scientist, you will solve some of the most impactful business problems for our clients using a variety of AI and ML technologies. You will collaborate with business partners and domain experts to design and develop innovative data-driven solutions that achieve predefined outcomes.

Job Location

Gurgaon, Pune, Bengaluru, or Chennai, India

Roles & Responsibilities

  • Engage with clients to understand current and future business goals and translate business problems into analytical frameworks
  • Develop custom models based on in-depth understanding of underlying data, data structures, and business problems to ensure deliverables meet client needs
  • Create repeatable, interpretable and scalable models
  • Effectively communicate the analytics approach and insights to a larger business audience
  • Collaborate with team members, peers and leadership at Tredence and client companies

Qualification & Experience

  1. Bachelor's or Master's degree in a quantitative field (CS, machine learning, mathematics, statistics) or equivalent experience.
  2. 2-4 years of hands-on experience in data science, building ML models.
  3. Experience leading the end-to-end design, development, and deployment of predictive modeling solutions.
  4. Excellent programming skills in Python, with strong working knowledge of Python's numerical, data analysis, and AI frameworks such as NumPy, Pandas, scikit-learn, and Jupyter (a minimal sketch follows this list).
  5. Advanced SQL skills, with SQL Server and Spark experience.
  6. Knowledge of predictive/prescriptive analytics, including machine learning algorithms (supervised and unsupervised), deep learning, and artificial neural networks.
  7. Experience with Natural Language Processing (e.g., NLTK) and text analytics for information extraction, parsing, and topic modeling.
  8. Excellent verbal and written communication, strong troubleshooting and problem-solving skills, and the ability to thrive in a fast-paced, innovative environment.
  9. Experience with data visualization tools such as Power BI, Tableau, or R Shiny preferred.
  10. Experience with cloud platforms such as Azure or AWS preferred but not required.
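For illustration only (not part of the original posting), a minimal sketch of the kind of Python/scikit-learn supervised-modeling workflow items 2-6 describe; the file name, the "cancelled" target column, and all parameters are hypothetical placeholders:

# Illustrative sketch only: a minimal supervised-modeling workflow using
# the libraries named above (Pandas, scikit-learn). The dataset, target
# column, and parameters are hypothetical; assumes numeric feature columns.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("bookings.csv")           # hypothetical tabular dataset
X = df.drop(columns=["cancelled"])         # feature columns
y = df["cancelled"]                        # binary target (supervised learning)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Per-class precision/recall: an interpretable summary for a business audience
print(classification_report(y_test, model.predict(X_test)))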

Company

Tredence Inc.

Job Posted

a year ago

Job Type

Full-time

Work Mode

On-site

Experience Level

3-7 Years

Category

Consulting

Locations

Gurgaon, Haryana, India

Pune, Maharashtra, India

Bengaluru, Karnataka, India

Chennai, Tamil Nadu, India

Qualification

Bachelor

Applicants

Be an early applicant

Related Jobs

Consultant-DE

Tredence Inc.

Bengaluru, Karnataka, India

+3 more

Posted: a year ago

We are seeking an experienced data scientist with mathematical and statistical expertise, as well as the curiosity and creativity to uncover hidden opportunities in data. You will be responsible for developing data engineering solutions, building ETL pipelines, and fulfilling reporting needs. Technical skills in Databricks, AWS/Azure, SQL, Python, and Spark are necessary. Experience with batch processing, streaming, and other big data technologies is desired. Strong communication, analytical, and problem-solving skills are important.

Associate Manager - Lead DE

Tredence Inc.

Gurgaon, Haryana, India

+3 more

Posted: a year ago

Job Summary

  • Develop Modern Data Warehouse solutions using Databricks and the Azure stack
  • Provide forward-thinking solutions in the data engineering and analytics space
  • Collaborate with DW/BI leads to understand new ETL pipeline development requirements
  • Triage issues to find gaps in existing pipelines and fix them
  • Work with the business to understand reporting-layer needs and develop data models to fulfill them
  • Help junior team members resolve issues and technical challenges
  • Drive technical discussions with client architects and team members
  • Orchestrate data pipelines in a scheduler via Airflow

Job Location

Bangalore / Chennai / Gurgaon / Pune

Roles & Responsibilities

  • Bachelor's and/or master's degree in computer science or equivalent experience
  • Must have 4+ years of total IT experience, including data warehouse/ETL projects
  • Deep understanding of Star and Snowflake dimensional modelling
  • Strong knowledge of data management principles
  • Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture
  • Hands-on experience in SQL, Python, and Spark (PySpark); a minimal sketch follows this list
  • Must have experience with the Azure stack
  • ETL with batch and streaming (Kinesis) desirable
  • Experience building ETL / data warehouse transformation processes
  • Experience with Apache Kafka for streaming / event-based data
  • Experience with other open-source big data products such as Hadoop (incl. Hive, Pig, Impala)
  • Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j)
  • Experience working with structured and unstructured data, including imaging and geospatial data
  • Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git
  • Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting
  • Databricks Certified Data Engineer Associate/Professional certification desirable
  • Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects
  • Experience working in Agile methodology
  • Strong verbal and written communication skills
  • Strong analytical and problem-solving skills with high attention to detail

Mandatory Skills: Python / PySpark / Spark with Azure/AWS Databricks
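For illustration only (not part of the original posting), a minimal sketch of the kind of batch PySpark transformation and Delta-table write the stack above suggests; the session setup, table names, and column names are hypothetical placeholders:

# Illustrative sketch only: a small batch ETL step in PySpark writing to
# a Delta table, matching the posting's stack (Databricks, PySpark,
# Delta Lake). Table and column names ("raw.orders", "mart.daily_revenue",
# "status", "order_ts", "amount") are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

orders = spark.read.table("raw.orders")    # hypothetical source table

# Aggregate completed orders into daily revenue for the reporting layer
daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETE")
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"))
)

# Overwrite the reporting table in Delta format (assumes a Delta-enabled
# environment such as Databricks)
daily_revenue.write.format("delta").mode("overwrite").saveAsTable("mart.daily_revenue")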