Job Description

Overview of the position

We are looking for Cloud DevOps Engineers for our Managed Services and Projects Delivery teams with experience supporting both containerized and monolithic applications. The role is responsible for managing DevOps (CI/CD) components on the cloud. These components could be cloud-native, such as AWS CodePipeline, CodeBuild, and CodeDeploy, or self-managed tools such as Jenkins. You will be part of a highly skilled cloud operations team responsible for overall IaaS and PaaS management at a global scale. This pivotal role will give you the opportunity to leverage your diverse IT background, including Linux, scripting, security, networking, and cloud knowledge, to deliver our next-gen IT platform services to a broad set of internal and external customers. Take your technical skills to the next level working on diverse and complex technical challenges in a dynamic and fast-paced environment.

Responsibilities

What the right candidate will do

  • Demonstrate technical leadership with incident handling and troubleshooting.
  • Provide software delivery operations and application release management support, including scripting, automated build and deployment processes, and process re-engineering.
  • Build automated deployments for consistent software releases with zero downtime.
  • Deploy new modules, upgrades and fixes to the production environment.
  • Participate in the development of contingency plans including reliable backup and restore procedures.
  • Participate in the development of the end-to-end CI/CD process and follow through with other team members to ensure high-quality and predictable delivery.
  • Participate in the development of advanced CI/CD processes such as canary deployments.
  • Work on implementing DevSecOps practices
  • Work with the Engineering team to integrate more complex testing into a containerized pipeline to ensure minimal regressions and comprehensive customer-centric testing
  • Participate in internal team meetings, scoping, decision making and technical documentation
  • Build platform tools that the rest of the engineering teams can use.
  • Analyze current configuration and deployment of technology infrastructure with both performance and security awareness
  • Develop, build, test, and implement ways to automate and improve key systems
  • Actively troubleshoot any infrastructure issues that arise during testing and production and assist other engineers with corrective actions
  • Automate manual activities with scripting and other tools
  • Monitor all server environments; ensure each is healthy and has the latest patches.
  • Provide system documentation as needed, including architectural diagrams and process and data flowcharts.
  • Assist in maintenance of the application environments
  • Build system automation for deployment, to scale our engineering delivery for the cloud data platform on Azure, Kafka, and Snowflake technologies.
  • Experience with Terraform is a must.
  • Bring in Continuous deployment practices to enhance our Agile posture.
  • Design and implement Dependency Management strategy and tools
  • Maintain documentation to describe functionality, configuration, testing and changes as applicable
  • Work closely with key stakeholders to capture, analyze, and derive DevOps and DataOps requirements.
  • Language proficiency in Python or other scripting tools on the Linux platform.
  • Hands-on experience with Jenkins, Git, and at least one artifact repository tool.
  • Experience implementing containerized solutions using Docker, Kubernetes, etc.
  • Take over ownership of our existing infrastructure (in AWS), following the Infrastructure-as-Code philosophy.
  • Maintain and improve our Card Data Environment (PCI DSS certified). Help us clear PCI DSS and other security (re)certifications with flying colors.
  • Adopt new architectures; re-design the infrastructure to solve problems we'll be facing as we scale.
  • Handle our infrastructure security. Be the first to break our own network defences, and fix them. Leave no door open.
  • Make our existing data stores (Kafka, Elasticsearch, Redis, Postgres, Redshift, etc.) more scalable and reliable.
  • Implement new clusters as we scale up and need them. They could be Cassandra, Druid, Aerospike, and a lot more.
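The canary-deployment responsibility above can be sketched as a minimal promotion check in Python. This is an illustrative assumption about how such a gate might work, not a prescribed implementation; the threshold values and the error-rate metric are placeholders:

```python
# Minimal sketch of a canary promotion gate: compare the canary's
# error rate against the baseline and decide whether to promote.
# The 10% relative tolerance and 0.1% absolute floor are
# illustrative assumptions, not recommended production values.

def should_promote(baseline_errors, baseline_total,
                   canary_errors, canary_total,
                   max_relative_increase=0.10):
    """Promote the canary only if its error rate does not exceed the
    baseline error rate by more than the allowed relative increase."""
    if canary_total == 0 or baseline_total == 0:
        return False  # not enough traffic to judge safely
    baseline_rate = baseline_errors / baseline_total
    canary_rate = canary_errors / canary_total
    # A small absolute floor so a zero-error baseline does not
    # block every canary that sees a single transient error.
    allowed = max(baseline_rate * (1 + max_relative_increase), 0.001)
    return canary_rate <= allowed
```

In practice a check like this would be fed by a monitoring system (e.g. Prometheus queries over the two deployment labels) and wired into the pipeline as the decision step between the canary and full-rollout stages.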


Join For The Revolution

We’re multiplying each day with passion

Eligibility Criteria

Benchmarks for selection

  • MCA, BE/B.Tech, BCA, BSc IT or equivalent, or any diploma in CS.
  • Minimum 2 years’ experience is mandatory.
Skills

Competencies required for this vacancy

  • 2+ years of software development/technical support experience
  • 1+ years of software development and operations experience deploying and maintaining multi-tiered infrastructure and applications at scale.
  • 2+ years of experience in public cloud services: AWS (preferred) / GCP / Azure
  • Ability to review deployment and operational environments, i.e., execute initiatives to reduce failure, troubleshoot issues across the entire infrastructure stack, expand monitoring capabilities, and manage technical operations.
  • Experience managing any distributed NoSQL or streaming system (Kafka, Cassandra, etc.).
  • Experience with Containers, Microservices, deployment, and service orchestration using Kubernetes, EKS (preferred), AKS, or GKE.
  • Experience and a deep understanding of Kubernetes, Service Mesh (Istio preferred), API Gateways, Network proxies (Envoy preferred), etc.
  • Effective cross-functional leadership skills: working with engineering and operational teams to ensure systems are secure, scalable, and reliable.
  • Experience creating and using the Jenkins REST API to create pipelines in hybrid environments, and in creating and maintaining Kubernetes environments.
  • Experience with containers (Docker); comfortable with Python and Ansible for scripting.
  • Experience working with JIRA workflows and SonarQube.
  • Knowledge of Git.
  • Strong operational experience in Linux/Unix environments and scripting languages (shell scripting), and experience with artifact management systems such as JFrog Artifactory.
  • Knowledge of tools like Prometheus, and of microservices architecture patterns.
  • Knowledge of operational activities for infrastructure management.
  • Automation-friendly: if it can be codified, it can be automated.
  • Experience in Agile development methodologies and release management techniques
  • Ability to continuously learn and make decisions with minimal supervision.
  • You understand that making mistakes means that you are learning
  • Excellent analytical and troubleshooting skills.
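As an illustration of the Jenkins REST API skill listed above, here is a hedged Python-stdlib sketch that builds, but does not send, the request Jenkins documents for triggering a parameterized job. The host, job name, parameters, and credentials are placeholders; the `/buildWithParameters` endpoint and API-token Basic authentication are part of Jenkins' remote-access API:

```python
# Sketch: construct a request to trigger a parameterized Jenkins job
# via its REST API. All names and credentials below are placeholders.
import base64
import urllib.parse
import urllib.request

def build_trigger_request(base_url, job, params, user, api_token):
    """Return a urllib Request that would start `job` with `params`."""
    url = "{}/job/{}/buildWithParameters?{}".format(
        base_url.rstrip("/"),
        urllib.parse.quote(job),
        urllib.parse.urlencode(params),
    )
    req = urllib.request.Request(url, method="POST")
    # Jenkins accepts user:API-token via HTTP Basic auth.
    token = base64.b64encode(f"{user}:{api_token}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req

# Example (not executed against a real server):
req = build_trigger_request(
    "https://jenkins.example.com", "deploy-api",
    {"ENV": "staging", "VERSION": "1.4.2"},
    "ci-bot", "secret-token",
)
```

Sending the request with `urllib.request.urlopen(req)` would queue the build; a real integration would also handle the queue-item `Location` header Jenkins returns and, depending on security settings, a CSRF crumb.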

