Data Engineer
Advarra
Bengaluru, Karnataka, India
Description

Company Information

At Advarra, we are passionate about making a difference in the world of clinical research and advancing human health. With a rich history rooted in ethical review services, combined with innovative technology solutions and deep industry expertise, we are at the forefront of industry change. A market leader and pioneer, Advarra breaks the silos that impede clinical research, aligning patients, sites, sponsors, and CROs in a connected ecosystem to accelerate trials.

Company Culture

Our employees are the heart of Advarra. They are the key to our success and the driving force behind our mission and vision. Our values (Patient-Centric, Ethical, Quality Focused, Collaborative) guide our actions and decisions. Knowing the impact of our work on trial participants and patients, we act with urgency and purpose to advance clinical research so that people can live happier, healthier lives. At Advarra, we seek to foster an inclusive and collaborative environment where everyone is treated with respect and diverse perspectives are embraced. Treating one another, our clients, and clinical trial participants with empathy and care is a key tenet of our culture at Advarra; we are committed to creating a workplace where each employee is not only valued but empowered to thrive and make a meaningful impact.

Job Duties & Responsibilities

- Develop and optimize complex SQL queries, stored procedures, and user-defined functions within Snowflake.
- Develop efficient data transformation pipelines using Snowflake's native SQL capabilities and CTEs.
- Leverage Snowflake's unique features, such as Time Travel, Zero-Copy Cloning, and Data Sharing, to build scalable and robust data solutions.
- Develop and maintain data models using dbt, ensuring they are scalable, reliable, and optimized for performance.
- Monitor and manage Fivetran connections to ensure data is extracted and loaded into Snowflake accurately and on schedule.
- Troubleshoot and resolve issues related to SQL queries, data syncs, connection errors, and performance bottlenecks.
- Design, develop, and deploy AWS Lambda functions to support data processing, ETL workflows, and real-time data streaming.

Location

This role is open to candidates working in Bengaluru, India (hybrid).

Basic Qualifications

- Bachelor's degree or an equivalent combination of education and related work experience
- 0-1+ years of experience writing and optimizing complex SQL queries, stored procedures, and user-defined functions
- Experience designing and implementing efficient data transformation pipelines
- Experience writing data transformations with tools such as dbt and Matillion
- Experience building and managing data pipelines
- Experience writing data transformation test automation scripts
- Working experience with version control platforms (e.g., GitHub) and with Agile methodologies and supporting tools (e.g., JIRA)

Preferred Qualifications

- Snowflake certifications are a plus
- dbt certifications are a plus
- AWS certifications are a plus

Physical and Mental Requirements

- Sit or stand for extended periods of time at a stationary workstation
- Regularly carry, raise, and lower objects of up to 10 lbs.
- Learn and comprehend basic instructions
- Focus and attend to tasks and responsibilities
- Verbal communication: listening and understanding, responding, and speaking