Sr. Cloud Optimization Engineer
Thermo Fisher Scientific
Bengaluru, Karnataka, India
What will you do?
- Build standards of operations and process workflows to approve, trigger, and validate automation activities.
- Establish and maintain key performance indicators in alignment with department and product supply goals; maintain and report applicable area or organizational metrics.
- Partner with operations, engineering, and technical teams to support quality and business objectives.
- Participate in the configuration, tuning, and implementation of FinOps tools.
- Participate in deep architectural discussions to help IT teams make the most cost-efficient choices.
- Run, develop, and deploy automation technologies in AWS and other public and private clouds (Azure, Google Cloud, VMware).
- Run code and infrastructure-as-code deployments using Python, Terraform, Ansible, and various SCM tools.
- Provide domain expertise on the technical details of our products and automation hardware/systems, including leading and participating in design reviews.
- Participate in Agile methodologies, including the use of Jira and Confluence.
- Find opportunities to improve the corporate network by implementing the network standards established by the Global Networking Architecture team.
- Demonstrate the ability to deliver high-quality product designs.
- Show innovation and creativity in a fast-paced, global environment.
- Work with senior engineers to understand and solve complex design challenges.
- Be accountable and responsible for ensuring all personal activities are effectively closed on time, in line with agreed targets and timelines (e.g. PPM, Deviations, Change Controls, Training).
- Participate in PPI (Practical Process Improvement) and projects that support achievement of business metrics, driving improvement of Quality within Thermo Fisher Scientific and at our key suppliers.
- Identify and raise technical, business, and operational risks and opportunities.
- Build awareness of, and drive change in, overall cloud infrastructure costs.
How will you get here?

Education:
- Master's degree (or Bachelor's degree plus proven experience) in biology, forensic sciences, or equivalent; human identification, application customer support for a regulated market, or equivalent experience.

Knowledge, Skills, Abilities, and Expertise:
- Minimum of 7-10 years of meaningful experience in a related science, engineering, and/or customer-facing role involving the application of analytical technology.
- AWS certifications (SysOps, Developer, and/or Architecture) preferred.
- Experience participating in and leading Continuous Improvement / Practical Process Improvement (PPI) events and problem solving.
- Experience driving the PPI business system by encouraging employees to submit JDIs and using the PPI process to drive improvements.
- Experience in some or all of the following platforms and technologies:
  - Python / JSON / RESTful services
  - Amazon AWS, Azure, Google Cloud
  - Atlassian Confluence, Jira, and Bitbucket
  - HashiCorp suite (Vault, Terraform, Consul, Packer)
  - Ansible or other configuration management tools
  - Container platforms (examples: Docker, Kubernetes, EKS)
  - NoSQL platforms (examples: DynamoDB)
- Excellent understanding of change management, testing requirements, techniques, and tools to ensure high availability of systems and automation.
- Excellent verbal and written communication skills for a wide range of audiences, including executives, business stakeholders, and IT teams.
- Excellent analytical, troubleshooting, and problem-solving skills.
- Ability to translate business requirements into technical designs that meet business needs.
- Experience with cost management tactics such as enterprise-wide discount programs and commitment plans (Reserved Instances, Savings Plans, Spot, Sustained Use / Committed Use Discounts).