Job description
Responsible for the design, development, modification, debugging and/or maintenance of software systems
What will your job look like?
• You will design, develop, modify, debug and/or maintain software code according to functional, non-functional and technical design specifications.
• You will follow Amdocs software engineering standards, the applicable software development methodology, and release processes to ensure code is maintainable, scalable, and supportable, and demo the software products to stakeholders.
• You will investigate issues by reviewing/debugging code, provide fixes and workarounds, and review changes for operability to maintain existing software solutions.
• You will work within a team, collaborate and add value through participation in peer code reviews, provide comments and suggestions, and work with cross-functional teams to achieve goals.
• You will assume technical accountability for your specific work products within an application and provide technical support during solution design for new requirements.
• You will be encouraged to actively look for innovation, continuous improvement, and efficiency in all assigned tasks.
All you need is...
Experience with:
• 2–4+ years of experience implementing high-end software products.
• Sound knowledge of Kubernetes and deployment methodologies
• Big Data (HDFS/Hive/HBase/Kafka); either Spark or PySpark is mandatory, along with strong problem-solving skills and the ability to thrive with minimal supervision.
• Knowledge of database principles, SQL, and experience working with large databases.
• Basic Unix commands.
Key responsibilities:
• Perform Development & Support activities for Data warehousing domain using Big Data Technologies
• Understand High Level Design, Application Interface Design & build Low Level Design. Perform application analysis & propose technical solution for application enhancement or resolve production issues
• Perform development & deployment. Should be able to code, unit test, and deploy.
• Create necessary documentation for all project deliverable phases
• Handle Production Issues (Tier 2 Support, weekend on-call rotation) to resolve production issues & ensure SLAs are met
Technical Skills:
Mandatory
• Either Spark or PySpark is mandatory.
• Should have a good programming background with expertise in Scala, Java, or Python.
• Should have worked with the Kafka-Spark streaming framework.
• Experience with Big Data technologies such as Hadoop and its related ecosystem (Cloudera & Hortonworks)
• Experience with the complete SDLC and exposure to build and release management.
• Experience with, interest in, and adaptability to working in an Agile delivery environment.
• Ability to select the right tool for the job.
Good to have
• Cloud skills – should have knowledge of AWS, Azure, or GCP.
• Should have worked on REST APIs using a Kafka cluster.
Behavioral skills:
• Eagerness and hunger to learn
• Good problem-solving and decision-making skills
• Good communication skills within the team, across sites, and with the customer
• Ability to extend working hours when necessary to support business needs
• Ability to work independently and drive issues to closure
• Consult with relevant parties when necessary and raise risks in a timely manner
• Effectively handle multiple and complex work assignments while consistently delivering high-quality work