1.0 - 6.0 years
3 - 6 Lacs
Hyderabad
Work from Office
Responsibilities:
- Design, implement, and ship user-centric features spanning frontend, backend, and database systems under guidance.
- Define and implement RESTful/GraphQL APIs and efficient, scalable database schemas (see the sketch below).
- Build reusable, maintainable frontend components using modern state management practices.
- Develop backend services in Node.js or Python, adhering to clean-architecture principles.
- Write and maintain unit, integration, and end-to-end tests to ensure code quality and reliability.
- Containerize applications and configure CI/CD pipelines for automated builds and deployments.
- Enforce secure coding practices, accessibility standards (WCAG), and SEO fundamentals.
- Collaborate effectively with Product, Design, and engineering teams to understand and implement feature requirements.
- Own feature delivery from planning through production, and mentor interns or junior developers.

Qualifications & Skills:
- 1+ years of experience building full-stack web applications.
- Proficiency in JavaScript (ES6+), TypeScript, HTML5, and CSS3 (Flexbox/Grid).
- Advanced experience with React (Hooks, Context, Router) or an equivalent modern UI framework.
- Hands-on with state management patterns (Redux, MobX, or custom solutions).
- Strong backend skills in Node.js (Express/Fastify) or Python (Django/Flask/FastAPI).
- Expertise in designing REST and/or GraphQL APIs and integrating with backend services.
- Solid knowledge of MySQL/PostgreSQL and familiarity with NoSQL stores (Elasticsearch, Redis).
- Experience using build tools (Webpack, Vite), package managers (npm/Yarn), and Git workflows.
- Skilled in writing and maintaining tests with Jest, React Testing Library, Pytest, and Cypress.
- Familiar with Docker, CI/CD tools (GitHub Actions, Jenkins), and basic cloud deployments.
- Product-first thinker with strong problem-solving, debugging, and communication skills.

Qualities we'd love to find in you:
- The attitude to always strive for the best outcomes and an enthusiasm to deliver high-quality software
- Strong collaboration abilities and a flexible, friendly approach to working with teams
- Strong determination with a constant eye on solutions
- Creative ideas and a problem-solving mindset
- Openness to receiving objective criticism and improving upon it
- Eagerness to learn and zeal to grow
- Strong communication skills are a huge plus
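As a flavor of the REST API work listed above, here is a minimal sketch assuming FastAPI, one of the backend frameworks the posting names; the resource model, routes, and in-memory store are invented for illustration and are not part of the role.

```python
# Minimal REST endpoint sketch using FastAPI (an assumption; the role
# equally allows Node.js with Express/Fastify). Job model and routes are
# illustrative only.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Job(BaseModel):
    title: str
    location: str

_jobs: dict[int, Job] = {}  # in-memory store, purely for demonstration

@app.post("/jobs/{job_id}")
def create_job(job_id: int, job: Job) -> Job:
    _jobs[job_id] = job
    return job

@app.get("/jobs/{job_id}")
def read_job(job_id: int) -> Job:
    if job_id not in _jobs:
        raise HTTPException(status_code=404, detail="Job not found")
    return _jobs[job_id]
```

Run with `uvicorn app:app --reload`; FastAPI's auto-generated docs at /docs exercise both routes.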
Posted 1 week ago
6.0 - 12.0 years
12 - 24 Lacs
Hyderabad / Secunderabad, Telangana, India
On-site
This role is for one of Weekday's clients.
Salary range: Rs 1200000 - Rs 2400000 (i.e., INR 12-24 LPA)
Min Experience: 6 years
Location: Hyderabad
Job Type: full-time

We are looking for a seasoned Azure DevOps Engineer to lead the design, implementation, and management of DevOps practices within the Microsoft Azure ecosystem. The ideal candidate will bring deep expertise in automation, CI/CD pipelines, infrastructure as code (IaC), cloud-native tools, and security best practices. This position will collaborate closely with cross-functional teams to drive efficient, secure, and scalable DevOps workflows.

Requirements

Key Responsibilities:

DevOps & CI/CD Implementation
- Build and maintain scalable CI/CD pipelines using Azure DevOps, GitHub Actions, or Jenkins (see the sketch below).
- Automate software build, testing, and deployment processes to improve release cycles.
- Integrate automated testing, security scanning, and code quality checks into the pipeline.

Infrastructure as Code (IaC) & Cloud Automation
- Develop and maintain IaC templates using Terraform, Bicep, or ARM templates.
- Automate infrastructure provisioning, scaling, and monitoring across Azure environments.
- Ensure cloud cost optimization and resource efficiency.

Monitoring, Logging & Security
- Configure monitoring tools like Azure Monitor, App Insights, and Log Analytics.
- Apply Azure security best practices in CI/CD workflows and cloud architecture.
- Implement RBAC and Key Vault usage, and ensure policy and compliance adherence.

Collaboration & Continuous Improvement
- Work with development, QA, and IT teams to enhance DevOps processes and workflows.
- Identify and resolve bottlenecks in deployment and infrastructure automation.
- Stay informed about industry trends and the latest features in Azure DevOps and IaC tooling.

Required Skills & Experience:
- 5-7 years of hands-on experience in Azure DevOps and cloud automation
- Strong knowledge of:
  - Azure DevOps Services (Pipelines, Repos, Boards, Artifacts, Test Plans)
  - CI/CD tools: YAML Pipelines, GitHub Actions, Jenkins
  - Version control: Git (Azure Repos, GitHub, Bitbucket)
  - IaC: Terraform, Bicep, ARM templates
  - Containerization & Orchestration: Docker, Kubernetes (AKS)
  - Monitoring: Azure Monitor, App Insights, Prometheus, Grafana
  - Security: Azure Security Center, RBAC, Key Vault, Compliance Policy Management
- Familiarity with configuration management tools like Ansible, Puppet, or Chef (optional)
- Strong analytical and troubleshooting skills
- Excellent communication skills and ability to work in Agile/Scrum environments

Preferred Certifications:
- Microsoft Certified: Azure DevOps Engineer Expert (AZ-400)
- Microsoft Certified: Azure Administrator Associate (AZ-104)
- Certified Kubernetes Administrator (CKA) - optional

Skills: Azure | DevOps | CI/CD | GitHub Actions | Terraform | Infrastructure as Code | Kubernetes | Docker | Monitoring | Cloud Security
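Much of the pipeline-health work above comes down to small scripts against the Azure DevOps REST API. A hedged sketch follows, using the `requests` library; the organization, project, and personal access token are placeholders you would supply yourself.

```python
# Listing the five most recent pipeline runs via the Azure DevOps REST
# API (builds endpoint, api-version 7.0). ORG/PROJECT/PAT are placeholders.
import base64
import requests

ORG = "my-org"                       # placeholder
PROJECT = "my-project"               # placeholder
PAT = "your-personal-access-token"   # placeholder; keep in a secret store

url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/build/builds?api-version=7.0&$top=5"
# PAT auth uses HTTP Basic with an empty username.
auth = base64.b64encode(f":{PAT}".encode()).decode()

resp = requests.get(url, headers={"Authorization": f"Basic {auth}"})
resp.raise_for_status()

for build in resp.json()["value"]:
    print(build["buildNumber"], build["status"], build.get("result"))
```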
Posted 1 week ago
5.0 - 9.0 years
12 - 22 Lacs
Hyderabad, Bengaluru
Hybrid
Position: PySpark Data Engineer
Location: Bangalore / Hyderabad
Experience: 5 to 9 Yrs
Job Type: On Role

Job Description - PySpark Data Engineer:
1. API Development: Design, develop, and maintain robust APIs using FastAPI and RESTful principles for scalable backend systems.
2. Big Data Processing: Leverage PySpark to process and analyze large datasets efficiently, ensuring optimal performance in big data environments (see the sketch below).
3. Full-Stack Integration: Develop seamless backend-to-frontend feature integrations, collaborating with front-end developers for cohesive user experiences.
4. CI/CD Pipelines: Implement and manage CI/CD pipelines using GitHub Actions and Azure DevOps to streamline deployments and ensure system reliability.
5. Containerization: Utilize Docker for building and deploying containerized applications in development and production environments.
6. Team Leadership: Lead and mentor a team of developers, providing guidance, code reviews, and support to junior team members to ensure high-quality deliverables.
7. Code Optimization: Write clean, maintainable, and efficient Python code, with a focus on scalability, reusability, and performance.
8. Cloud Deployment: Deploy and manage applications on cloud platforms like Azure, ensuring high availability and fault tolerance.
9. Collaboration: Work closely with cross-functional teams, including product managers and designers, to translate business requirements into technical solutions.
10. Documentation: Maintain thorough documentation for APIs, processes, and systems to ensure transparency and ease of maintenance.

Highlighted Skillset:
- Big Data: Strong PySpark skills for processing large datasets.
- DevOps: Proficiency in GitHub Actions, CI/CD pipelines, Azure DevOps, and Docker.
- Integration: Experience in backend-to-frontend feature connectivity.
- Leadership: Proven ability to lead and mentor development teams.
- Cloud: Knowledge of deploying and managing applications in Azure or other cloud environments.
- Team Collaboration: Strong interpersonal and communication skills for working in cross-functional teams.
- Best Practices: Emphasis on clean code, performance optimization, and robust documentation.

Interested candidates, kindly share your CV and the details below to usha.sundar@adecco.com
1) Present CTC (Fixed + VP)
2) Expected CTC
3) No. of years' experience
4) Notice Period
5) Offer in hand
6) Reason for change
7) Present location
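As a taste of the big data processing in point 2, here is a minimal PySpark sketch assuming a CSV drop zone on S3; the paths, column names, and aggregation are illustrative, not from the posting.

```python
# Minimal PySpark aggregation sketch: read raw CSVs, derive a date column,
# and write daily totals as Parquet. Paths and columns are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-aggregation").getOrCreate()

orders = spark.read.csv("s3://example-bucket/orders/", header=True, inferSchema=True)

daily_totals = (
    orders
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"),
         F.count("*").alias("order_count"))
)

# Repartitioning before the write keeps output file sizes manageable.
daily_totals.repartition(8).write.mode("overwrite").parquet(
    "s3://example-bucket/daily_totals/"
)
```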
Posted 1 week ago
3.0 - 5.0 years
1 - 3 Lacs
Chennai
Work from Office
**AWS Infrastructure Management:**
- Design, implement, and maintain scalable, secure cloud infrastructure using AWS services (EC2, Lambda, S3, RDS, CloudFormation/Terraform, etc.)
- Monitor and optimize cloud resource usage and costs

**CI/CD Pipeline Automation:**
- Set up and maintain robust CI/CD pipelines using tools such as GitHub Actions, GitLab CI, Jenkins, or AWS CodePipeline
- Ensure smooth deployment processes for staging and production environments

**Git Workflow Management:**
- Implement and enforce best practices for version control and branching strategies (Gitflow, trunk-based development, etc.)
- Support development teams in resolving Git issues and improving workflows

**Twilio Integration & Support:**
- Manage and maintain Twilio-based communication systems (SMS, Voice, WhatsApp, Programmable Messaging); see the sketch below
- Develop and deploy Twilio Functions and Studio Flows for customer engagement
- Monitor communication systems and troubleshoot delivery or quality issues

**Infrastructure as Code & Automation:**
- Use tools like Terraform, CloudFormation, or Pulumi for reproducible infrastructure
- Create scripts and automation tools to streamline routine DevOps tasks

**Monitoring, Logging & Security:**
- Implement and maintain monitoring/logging tools (CloudWatch, Datadog, ELK, etc.)
- Ensure adherence to best practices around IAM, secrets management, and compliance

**Requirements**
- 3-5+ years of experience in DevOps or a similar role
- Expert-level experience with **Amazon Web Services (AWS)**
- Strong command of **Git** and Git-based CI/CD practices
- Experience building and supporting solutions using **Twilio APIs** (SMS, Voice, Programmable Messaging, etc.)
- Proficiency in scripting languages (Bash, Python, etc.)
- Hands-on experience with containerization (Docker) and orchestration tools (ECS, EKS, Kubernetes)
- Familiarity with Agile/Scrum workflows and collaborative development environments

**Preferred Qualifications**
- AWS Certifications (e.g., Solutions Architect, DevOps Engineer)
- Experience with serverless frameworks and event-driven architectures
- Previous work with other communication platforms (e.g., SendGrid, Nexmo) a plus
- Knowledge of RESTful API development and integration
- Experience working in high-availability, production-grade systems
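For the Twilio messaging work named above, a hedged sketch using the official twilio Python helper library; the account SID, auth token, and phone numbers are placeholders, not real credentials.

```python
# Sending an SMS with the Twilio Python SDK. All identifiers below are
# placeholders; real credentials belong in a secrets manager, not in code.
from twilio.rest import Client

client = Client("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",  # Account SID (placeholder)
                "your_auth_token")                      # Auth token (placeholder)

message = client.messages.create(
    body="Your appointment is confirmed.",
    from_="+15005550006",  # a Twilio-provisioned number (placeholder)
    to="+15551234567",     # recipient (placeholder)
)
print(message.sid, message.status)
```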
Posted 1 week ago
2.0 - 4.0 years
4 - 6 Lacs
Pune
Work from Office
What You'll Do

Job Title: Security Automation Engineer - Integrated Engineering Systems
Location: #LI-Hybrid
Eligibility: 2-3 years of software engineering experience

Avalara is looking for a Security Automation Engineer to join our Integrated Engineering Systems team. In this role, you'll build and scale automated security tooling and integrate scanning pipelines into Avalara's core engineering systems. You will work closely with platform engineers, product teams, and DevSecOps to design scalable services and analytics dashboards that detect, track, and remediate vulnerabilities. This role is perfect for engineers who are passionate about security through automation, scaling secure development practices, and enabling teams to build safer software faster.

What Your Responsibilities Will Be
- Design, develop, and maintain microservices and security automation pipelines that integrate into Avalara's CI/CD and engineering systems.
- Build tools and services in GoLang to automate SAST, DAST, and SCA scanning workflows (see the sketch below).
- Build internal tooling to identify gaps in security coverage and automate remediation recommendations.
- Partner with service owners to provide secure development guidance, build remediation playbooks, and enforce policy via automation.
- Implement dashboards using Snowflake, Hex, and Grafana to ingest and analyse security data, monitor pipeline health, and provide real-time visibility into scan reliability and security metrics for both engineering teams and leadership.

What You'll Need to be Successful

Core Qualifications
- B.Tech or B.E in Computer Science, Engineering, Math, or a related technical discipline.
- 2-5 years of software engineering experience, with 2 years of direct experience in platform security or DevSecOps teams.
- Proficiency in Golang, Python, Java, or .NET, with the ability to write clean, secure, and maintainable code.
- Understanding of OWASP Top 10, CWE Top 25, and secure software development practices.
- Experience with integrating and operating SAST, DAST, and SCA tools in CI/CD pipelines (e.g., GitHub Actions, Jenkins, GitLab).
- Knowledge of AWS or GCP security services and infrastructure-as-code best practices.

Preferred Bonus Qualifications:
- Proven hands-on experience with Snowflake, Hex, and Grafana to build observability dashboards with alerts and SLA tracking.
- Security certifications.
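To illustrate the scan-automation idea above, here is a hedged sketch that shells out to Semgrep (one example of a SAST scanner; the posting's in-house tooling is in GoLang, but Python is used here to keep every sketch in this document in one language). The scan path and failure threshold are illustrative.

```python
# Run a SAST scan in a CI stage and fail the job on high-severity findings.
# Semgrep's --json output exposes a "results" list whose items carry
# extra.severity (INFO/WARNING/ERROR).
import json
import subprocess
import sys

result = subprocess.run(
    ["semgrep", "--config", "auto", "--json", "src/"],  # path is illustrative
    capture_output=True, text=True,
)
findings = json.loads(result.stdout)["results"]

high = [f for f in findings if f["extra"]["severity"] == "ERROR"]
print(f"{len(findings)} findings, {len(high)} high severity")
if high:
    sys.exit(1)  # a non-zero exit fails the CI job
```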
Posted 1 week ago
2.0 - 4.0 years
4 - 6 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Job Summary:
We are looking for a skilled and proactive DevOps Engineer with 2+ years of experience in managing and automating cloud infrastructure, ensuring deployment security, and supporting CI/CD pipelines. The ideal candidate is proficient in tools like Docker, Kubernetes, and Terraform, and has hands-on experience with observability stacks such as Prometheus and Grafana. You will work closely with engineering teams to maintain uptime for media services, support ML model pipelines, and drive full-cycle Dev & Ops best practices.

Key Responsibilities:
- Design, deploy, and manage containerized applications using Docker and Kubernetes.
- Automate infrastructure provisioning and management using Terraform on AWS or GCP.
- Implement and maintain CI/CD pipelines with tools like Jenkins, ArgoCD, or GitHub Actions.
- Set up and manage monitoring, logging, and alerting systems using Prometheus, Grafana, and related tools (see the sketch below).
- Ensure high availability and uptime for critical services, including media processing pipelines and APIs.
- Collaborate with development and ML teams to support model deployment workflows and infrastructure needs.
- Drive secure deployment practices, access control, and environment isolation.
- Troubleshoot production issues and participate in on-call rotations where required.
- Contribute to documentation and DevOps process optimization for better agility and resilience.

Qualifications:
- 2+ years of experience in DevOps, SRE, or cloud infrastructure roles.
- Hands-on experience with Docker, Kubernetes, and Terraform.
- Solid knowledge of CI/CD tooling (e.g., Jenkins, ArgoCD, GitHub Actions).
- Experience with observability tools such as Prometheus and Grafana.
- Familiarity with AWS or GCP infrastructure, including networking, compute, and IAM.
- Strong understanding of deployment security, versioning, and full lifecycle support.

Preferred Qualifications:
- Experience supporting media pipelines or AI/ML model deployment infrastructure.
- Understanding of DevSecOps practices and container security tools (e.g., Trivy, Aqua).
- Scripting skills (Bash, Python) for automation and tooling.
- Experience in managing incident response and performance optimization.

Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, India
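On the monitoring side, Prometheus scrapes metrics that the application itself exposes. A minimal sketch using the official prometheus_client library; the metric names, port, and simulated workload are illustrative.

```python
# Expose request count and latency metrics on /metrics for Prometheus to
# scrape; Grafana would then chart them. Names and port are placeholders.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("media_requests_total", "Total media API requests")
LATENCY = Histogram("media_request_seconds", "Media request latency")

def handle_request() -> None:
    REQUESTS.inc()
    with LATENCY.time():            # records elapsed time into the histogram
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work

if __name__ == "__main__":
    start_http_server(8000)         # serves http://localhost:8000/metrics
    while True:
        handle_request()
```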
Posted 1 week ago
5.0 - 8.0 years
4 - 7 Lacs
Hyderabad
Work from Office
Responsibilities:
- Build and maintain an innovative web application.
- Analyse requirements, prepare high-level/low-level designs, and realize them with the project team.
- Lead a team of React.js and Node.js software engineers and take delivery responsibility for the team.
- Ensure quality of deliverables from the team through stringent reviews and coaching.
- Provide estimates for complex and large projects; support the Project Manager in arriving at project planning.
- Form the bridge between the Software Engineers and the Solution Analysts and IT Architects: discuss technical topics (with the SEs, as a specialist) as well as holistic architecture topics (with the IT Architects, as a generalist).
- Translate complex content for different stakeholders, both technical (like software engineers) and functional (business), in a convincing and well-founded way, adapted to the target audience.
- Work in a support environment with an eye for detail and a keen interest in optimization.

Profile Description:
- Able to take care of all responsibilities mentioned in the section above.
- 5 to 8 years of experience working on full-stack web applications, of which at least 6 years in React.js and Node.js.
- Minimum 6 years of experience with React.js and Node.js (be able to demonstrate contribution to a product build).
- Good knowledge of building reusable web components.
- Experience with deploying software using a CI/CD approach.
- Able to write well-documented and clean TypeScript code.
- Affinity with maintaining and evolving a codebase to nourish high-quality code.
- Knowing how to make the app accessible to all users is an expectation, as is knowledge of the underlying frameworks Fastify and Remix.
- Experience with automated testing suites like Jest.
- Familiar with one or more CI/CD environments: GitLab CI, GitHub Actions, Circle CI, etc.
- Strong problem-solving and critical-thinking abilities.
- Good communication skills that facilitate interaction.
- Confident, detail-oriented, and highly motivated to be part of a high-performing team.
- A positive mindset and a continuous-learning attitude.
- Ability to work under pressure and adhere to tight deadlines.
- Familiar with SCRUM and Agile collaboration.
- Ensures compliance of project deliverables in line with project management methodologies.
- Exchanges expertise with other team members (knowledge sharing).
- Strong customer affinity, delivering highly performant applications and quick turnaround on bug fixes.
- Works in project teams and goes for success as a team; leads by example to drive the team's success on a technical level.
- Willing to work on both project and maintenance activities.
- Open to travel to Belgium.

Nice-to-have Competencies:
- Working experience in a SAFe environment is a plus.
- Experience working with European clients.
Posted 1 week ago
3.0 - 6.0 years
6 - 16 Lacs
Pune
Work from Office
Skills: Performance Testing, Databricks Pipeline

Key Responsibilities:
- Design and execute performance testing strategies specifically for Databricks-based data pipelines.
- Identify performance bottlenecks and provide optimization recommendations across Spark/Databricks workloads.
- Collaborate with development and DevOps teams to integrate performance testing into CI/CD pipelines.
- Analyze job execution metrics, cluster utilization, memory/storage usage, and latency across various stages of data pipeline processing.
- Create and maintain performance test scripts, frameworks, and dashboards using tools like JMeter, Locust, or custom Python utilities.
- Generate detailed performance reports and suggest tuning at the code, configuration, and platform levels.
- Conduct root cause analysis for slow-running ETL/ELT jobs and recommend remediation steps.
- Participate in production issue resolution related to performance and contribute to RCA documentation.

Technical Skills:

Mandatory:
- Strong understanding of Databricks, Apache Spark, and performance tuning techniques for distributed data processing systems.
- Hands-on experience in Spark (PySpark/Scala) performance profiling, partitioning strategies, and job parallelization (see the sketch below).
- 2+ years of experience in performance testing and load simulation of data pipelines.
- Solid skills in SQL, Snowflake, and analyzing performance via query plans and optimization hints.
- Familiarity with Azure Databricks, Azure Monitor, Log Analytics, or similar observability tools.
- Proficient in scripting (Python/Shell) for test automation and pipeline instrumentation.
- Experience with DevOps tools such as Azure DevOps, GitHub Actions, or Jenkins for automated testing.
- Comfortable working in Unix/Linux environments and writing shell scripts for monitoring and debugging.

Good to Have:
- Experience with job schedulers like Control-M, Autosys, or Azure Data Factory trigger flows.
- Exposure to CI/CD integration for automated performance validation.
- Understanding of network/storage I/O tuning parameters in cloud-based environments.
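Two routine first steps in the Spark profiling work described above are checking partitioning and reading the physical plan. A hedged PySpark sketch; the dataset path and columns are illustrative.

```python
# Inspect partition count and the physical plan of a candidate query
# before tuning. Paths and column names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("perf-check").getOrCreate()

df = spark.read.parquet("/mnt/data/events")

# Too few (or heavily skewed) partitions is a common Databricks bottleneck.
print("partitions:", df.rdd.getNumPartitions())

# The formatted plan reveals scans, shuffles, and join strategies to tune.
df.groupBy("event_type").count().explain(mode="formatted")
```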
Posted 1 week ago
5.0 - 10.0 years
16 - 25 Lacs
Hyderabad, Bengaluru
Work from Office
Urgent Hiring for PySpark Data Engineer
Job Location: Bangalore and Hyderabad
Experience: 5-9 yrs
Share your CV to Mohini.sharma@adecco.com or call 9740521948

Job Description:
1. API Development: Design, develop, and maintain robust APIs using FastAPI and RESTful principles for scalable backend systems.
2. Big Data Processing: Leverage PySpark to process and analyze large datasets efficiently, ensuring optimal performance in big data environments.
3. Full-Stack Integration: Develop seamless backend-to-frontend feature integrations, collaborating with front-end developers for cohesive user experiences.
4. CI/CD Pipelines: Implement and manage CI/CD pipelines using GitHub Actions and Azure DevOps to streamline deployments and ensure system reliability.
5. Containerization: Utilize Docker for building and deploying containerized applications in development and production environments.
6. Team Leadership: Lead and mentor a team of developers, providing guidance, code reviews, and support to junior team members to ensure high-quality deliverables.
7. Code Optimization: Write clean, maintainable, and efficient Python code, with a focus on scalability, reusability, and performance.
8. Cloud Deployment: Deploy and manage applications on cloud platforms like Azure, ensuring high availability and fault tolerance.
9. Collaboration: Work closely with cross-functional teams, including product managers and designers, to translate business requirements into technical solutions.
10. Documentation: Maintain thorough documentation for APIs, processes, and systems to ensure transparency and ease of maintenance.

Highlighted Skillset:
- Big Data: Strong PySpark skills for processing large datasets.
- DevOps: Proficiency in GitHub Actions, CI/CD pipelines, Azure DevOps, and Docker.
- Integration: Experience in backend-to-frontend feature connectivity.
- Leadership: Proven ability to lead and mentor development teams.
- Cloud: Knowledge of deploying and managing applications in Azure or other cloud environments.
- Team Collaboration: Strong interpersonal and communication skills for working in cross-functional teams.
- Best Practices: Emphasis on clean code, performance optimization, and robust documentation.
Posted 1 week ago
5.0 - 10.0 years
16 - 25 Lacs
Hyderabad, Bengaluru
Work from Office
PySpark Data Engineer

Job Description:
1. API Development: Design, develop, and maintain robust APIs using FastAPI and RESTful principles for scalable backend systems.
2. Big Data Processing: Leverage PySpark to process and analyze large datasets efficiently, ensuring optimal performance in big data environments.
3. Full-Stack Integration: Develop seamless backend-to-frontend feature integrations, collaborating with front-end developers for cohesive user experiences.
4. CI/CD Pipelines: Implement and manage CI/CD pipelines using GitHub Actions and Azure DevOps to streamline deployments and ensure system reliability.
5. Containerization: Utilize Docker for building and deploying containerized applications in development and production environments.
6. Team Leadership: Lead and mentor a team of developers, providing guidance, code reviews, and support to junior team members to ensure high-quality deliverables.
7. Code Optimization: Write clean, maintainable, and efficient Python code, with a focus on scalability, reusability, and performance.
8. Cloud Deployment: Deploy and manage applications on cloud platforms like Azure, ensuring high availability and fault tolerance.
9. Collaboration: Work closely with cross-functional teams, including product managers and designers, to translate business requirements into technical solutions.
10. Documentation: Maintain thorough documentation for APIs, processes, and systems to ensure transparency and ease of maintenance.

Highlighted Skillset:
- Big Data: Strong PySpark skills for processing large datasets.
- DevOps: Proficiency in GitHub Actions, CI/CD pipelines, Azure DevOps, and Docker.
- Integration: Experience in backend-to-frontend feature connectivity.
- Leadership: Proven ability to lead and mentor development teams.
- Cloud: Knowledge of deploying and managing applications in Azure or other cloud environments.
- Team Collaboration: Strong interpersonal and communication skills for working in cross-functional teams.
- Best Practices: Emphasis on clean code, performance optimization, and robust documentation.

Share your updated resume at siddhi.pandey@adecco.com or WhatsApp at 6366783349
Posted 1 week ago
0.0 years
0 Lacs
Remote, India
On-site
Sr. Azure Cloud Engineer
Location: India

We are seeking an experienced Azure Cloud Engineer who specializes in migrating and modernizing applications to the cloud. The ideal candidate will have deep expertise in Azure Cloud, Terraform (Enterprise), containers (Docker), Kubernetes (AKS), CI/CD with GitHub Actions, and Python scripting. Strong soft skills are essential to communicate effectively with technical and non-technical stakeholders during migration and modernization projects.

Key Responsibilities:
- Lead and execute the migration and modernization of applications to Azure Cloud using containerization and re-platforming.
- Re-platform, optimize, and manage containerized applications using Docker and orchestrate them through Azure Kubernetes Service (AKS).
- Implement and maintain robust CI/CD pipelines using GitHub Actions to facilitate seamless application migration and deployment.
- Automate infrastructure and application deployments to ensure consistent, reliable, and scalable cloud environments.
- Write Python scripts to support migration automation, integration tasks, and tooling.
- Collaborate closely with cross-functional teams to ensure successful application migration, modernization, and adoption of cloud solutions.
- Define and implement best practices for DevOps, security, migration strategies, and the software development lifecycle (SDLC).
- Deploy infrastructure via Terraform (IAM, networking, security, etc.).

Non-Functional Responsibilities:
- Configure and manage comprehensive logging, monitoring, and observability solutions.
- Develop, test, and maintain Disaster Recovery (DR) plans and backup solutions to ensure cloud resilience.
- Ensure adherence to all applicable non-functional requirements, including performance, scalability, reliability, and security during migrations.

Required Skills and Experience:
- Expert-level proficiency in migrating and modernizing applications to Microsoft Azure Cloud services.
- Strong expertise in Terraform (Enterprise) for infrastructure automation.
- Proven experience with containerization technologies (Docker) and orchestration platforms (AKS).
- Extensive hands-on experience with GitHub Actions and building CI/CD pipelines specifically for cloud migration and modernization efforts.
- Proficient scripting skills in Python for automation and tooling.
- Comprehensive understanding of DevOps methodologies and the software development lifecycle (SDLC).
- Excellent communication, interpersonal, and collaboration skills.
- Demonstrable experience in implementing logging, monitoring, backups, and disaster recovery solutions within cloud environments.
Posted 1 week ago
8.0 - 13.0 years
10 - 15 Lacs
Bengaluru
Work from Office
We are seeking a Senior DevOps Engineer to build pipeline automation, integrating DevSecOps principles into the operation of product builds and releases, and to mentor and guide DevOps teams, fostering a culture of technical excellence and continuous learning.

What You'll Do
- Design & Architecture: Architect and implement scalable, resilient, and secure Kubernetes-based solutions on Amazon EKS.
- Deployment & Management: Deploy and manage containerized applications, ensuring high availability, performance, and security.
- Infrastructure as Code (IaC): Develop and maintain Terraform scripts for provisioning cloud infrastructure and Kubernetes resources.
- CI/CD Pipelines: Design and optimize CI/CD pipelines using tools like Jenkins, GitHub Actions, GitLab CI/CD, or ArgoCD, along with automated builds, tests (unit, regression), and deployments.
- Monitoring & Logging: Implement monitoring, logging, and alerting solutions using Prometheus, Grafana, the ELK stack, or CloudWatch.
- Security & Compliance: Ensure security best practices in Kubernetes, including RBAC, IAM policies, network policies, and vulnerability scanning.
- Automation & Scripting: Automate operational tasks using Bash, Python, or Go for improved efficiency (see the sketch below).
- Performance Optimization: Tune Kubernetes workloads and optimize cost/performance of Amazon EKS clusters.
- Test Automation & Regression Pipelines: Integrate automated regression testing and build sanity checks into pipelines to ensure high-quality releases.
- Security & Resource Optimization: Manage Kubernetes security (RBAC, network policies) and optimize resource usage with Horizontal Pod Autoscalers (HPA) and Vertical Pod Autoscalers (VPA).
- Collaboration: Work closely with development, security, and infrastructure teams to enhance DevOps processes.

Minimum Qualifications
- Bachelor's degree (or above) in Engineering/Computer Science.
- 8+ years of experience in DevOps, cloud, and infrastructure automation in a DevOps engineer role.
- Expertise with Helm charts, Kubernetes Operators, and Service Mesh (Istio, Linkerd, etc.)
- Strong expertise in Amazon EKS and Kubernetes (design, deployment, and management)
- Expertise in Terraform, Jenkins, and Ansible
- Expertise with CI/CD tools (Jenkins, GitHub Actions, GitLab CI/CD, ArgoCD, etc.)
- Strong experience with monitoring and logging tools (Prometheus, Grafana, ELK, CloudWatch)
- Proficiency in Bash and Python for automation and scripting
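A small example of the scripting side of this role: enumerating EKS clusters and their Kubernetes versions with boto3. A hedged sketch; the region is a placeholder and credentials are assumed to come from the environment.

```python
# List EKS clusters with version and status via boto3. Region is a
# placeholder; AWS credentials are resolved from the usual provider chain.
import boto3

eks = boto3.client("eks", region_name="ap-south-1")  # placeholder region

for name in eks.list_clusters()["clusters"]:
    cluster = eks.describe_cluster(name=name)["cluster"]
    print(name, cluster["version"], cluster["status"])
```

Scripts like this feed version-drift checks or upgrade planning in a scheduled CI job.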
Posted 1 week ago
4.0 - 8.0 years
13 - 18 Lacs
Hyderabad, Bengaluru, Mumbai (All Areas)
Hybrid
PreAuth & DRG AI Transformation (Hybrid mode)
Location: Mumbai, Pune, Goa, Nagpur, Indore, Ahmedabad, Noida, Gurgaon, Bangalore, Hyderabad, Chennai, Jaipur, Kolkata, Kochi
Experience: 4-8 years

Primary Responsibilities:
- Collaborate with developers, managers, and other stakeholders to understand feature requirements.
- Design and execute detailed test cases and perform various testing types: functional, regression, integration, system, smoke, and sanity testing.
- Design and execute API testing.
- Build and maintain automated test suites using frameworks like Selenium and TestNG (see the sketch below).
- Log and track defects with comprehensive details.
- Collaborate with developers to ensure timely resolution and retesting of bugs.

Must-Have Skills:
- Manual Testing
- Automation Tools: Selenium, TestNG
- API Testing Tools: Postman, SoapUI
- Test Management and Defect Tracking: JIRA, TestRail

Nice-to-Have Skills: Jenkins, GitHub Actions, JMeter
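A minimal sketch of the Selenium automation named above, written in Python with pytest rather than the Java/TestNG stack the posting mentions, to keep all sketches in this document in one language. The URL and locator are illustrative.

```python
# Two tiny browser tests with Selenium + pytest. Assumes a local Chrome
# install; Selenium 4.6+ manages the driver binary automatically.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

@pytest.fixture
def driver():
    d = webdriver.Chrome()
    yield d
    d.quit()

def test_homepage_title(driver):
    driver.get("https://example.com")       # placeholder URL
    assert "Example" in driver.title

def test_first_link_visible(driver):
    driver.get("https://example.com")
    link = driver.find_element(By.TAG_NAME, "a")
    assert link.is_displayed()
```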
Posted 1 week ago
5.0 - 7.0 years
15 - 18 Lacs
Thane, Mumbai (All Areas)
Work from Office
As a Software Developer, you will be responsible for the design and development of microservices, along with the implementation of desktop user interfaces, taking features from design through to deployment while ensuring the reliability, security, and maintainability of the codebase.
Posted 1 week ago
5.0 - 10.0 years
20 - 30 Lacs
Noida, Gurugram, Delhi / NCR
Hybrid
Greetings from BCforward INDIA TECHNOLOGIES PRIVATE LIMITED.

Contract To Hire (C2H) Role
Location: Gurgaon
Payroll: BCforward
Work Mode: Hybrid

JD: QA - Selenium, Playwright, Rest Assured, GitHub Actions, SQL

Please share your updated resume, PAN card soft copy, passport-size photo, and UAN history. Interested applicants can share an updated resume to g.sreekanth@bcforward.com

Note: Looking for immediate to 15-day joiners at most. All the best!
Posted 1 week ago
3.0 - 6.0 years
3 - 6 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
DevOps Engineer (I04):

AT&T is one of the leading service providers in the telecommunication sector, and propelling it into the data- and AI-driven era is powered by the CDO (Chief Data Office). CDO is empowering AT&T, through execution, self-service, and as a data and AI center of excellence, to unlock transformative insights and actions that drive value for the company and its customers. Employees at CDO imagine, innovate, and unlock data- and AI-driven insights and actions that create value for our customers and the enterprise. As part of the work, we govern data collection and use, mitigate potential bias in machine learning models, and encourage an enterprise culture of responsible AI. AT&T's Chief Data Office (CDO) is harnessing data and making AT&T's data assets and ground-breaking AI functionality accessible to employees across the firm. In addition, our talented employees are a significant component that contributes to AT&T's place as the U.S. company with the sixth most AI-related patents. CDO also maintains academic and tech partnerships to cultivate the next generation of experts in statistics and machine learning, statistical computing, data visualization, text mining, time series modelling, data stream and database management, data quality and anomaly detection, data privacy, and more.

Job Description: DevOps Engineer

Position Overview:
We are seeking a skilled and passionate DevOps Engineer to join our dynamic team. The ideal candidate will possess extensive experience in Linux systems, cloud platforms, automation tools, and a variety of scripting and coding languages. As a DevOps Engineer, you will be responsible for the design, development, security compliance, and recurring maintenance of our infrastructure, ensuring robust, scalable, and secure solutions. The ideal candidate should excel in a dynamic business setting, effectively manage multiple projects, demonstrate a willingness to learn, possess self-motivation, and work collaboratively as part of a team.

Key Responsibilities:
This role requires proficiency in working within both Azure cloud and on-premises Linux environments, along with a solid understanding of cloud security and cloud administration. The candidate will be responsible for performing recurring and ad-hoc security compliance, image updates, and security remediation. DevOps responsibilities will include building and maintaining CI/CD pipelines using tools such as Terraform, Ansible, or Jenkins, as well as implementing updates to existing deployment pipelines and handling the deployments. Strong scripting skills are essential, with experience in languages such as Perl, Python, and Bash. Familiarity with containerization platforms like OpenShift or Kubernetes in a Linux environment is highly preferred. Knowledge of database deployment and support within cloud environments is also desirable.

Desired Skills & Qualifications:
- Advanced knowledge and hands-on experience with RedHat, Rocky, and Ubuntu.
- Strong understanding of Azure security, administration, architecture, ADO Pipeline CI/CD, Terraform, GitHub Actions, PowerShell, Databricks, Snowflake, and Event Hubs.
- Experience with Ansible and Jenkins for automated deployments.
- Expertise in KSH/BASH, Python, C/C++, and Java.
- Proficient in using Visual Studio Code, GitHub, and JFrog.
- Proficient in RHEL KVM, OpenShift, Kubernetes, Redis, and Kafka.
- Strong problem-solving skills and the ability to troubleshoot complex issues.
- Excellent communication and collaboration skills to work effectively within a team.
- Ability to manage multiple tasks and prioritize effectively in a fast-paced environment.
- Flexible to work from the office 3 days a week, from 12:30 PM to 9:30 PM.

Location: IND:KA:Bengaluru / Innovator Building, ITPB, Whitefield Rd - Adm: Intl Tech Park, Innovator Bldg
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Req ID: 327620

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a DevOps engineer with the .NET framework to join our team in Chennai, Tamil Nadu (IN-TN), India (IN).

Key Responsibilities:
- Design, build, and maintain CI/CD pipelines for .NET applications using tools like Azure DevOps, GitHub Actions, or Jenkins
- Automate build, test, and deployment processes with a strong emphasis on security and reliability
- Collaborate with software engineers and QA teams to ensure automated testing and code coverage practices are embedded into the pipelines
- Monitor and troubleshoot build failures and deployment issues
- Manage build artifacts, versioning strategies, and release orchestration
- Integrate static code analysis, security scanning (SAST/DAST), and compliance checks into the pipelines
- Support infrastructure-as-code deployments using tools like Terraform, Azure Bicep, or ARM templates
- Maintain and improve documentation for build/release processes, infrastructure, and tooling standards
- Contribute to DevOps best practices and help shape our CI/CD strategy as we move toward cloud-native architecture and Cloud 3.0 adoption

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience)
- 3+ years of experience in DevOps, CI/CD, or build and release engineering
- Hands-on experience with .NET Core / .NET Framework build and deployment processes
- Strong experience with Azure DevOps Pipelines (YAML and Classic)
- Familiarity with Git, NuGet, NUnit/xUnit, SonarQube, OWASP/ZAP, etc.
- Experience deploying to Azure App Services, Azure Kubernetes Service (AKS), or Azure Functions
- Experience with Docker and container-based deployments
- Working knowledge of infrastructure-as-code (Terraform, Bicep, or similar)
- Understanding of release management and software development lifecycle (SDLC) best practices
- Excellent problem-solving, collaboration, and communication skills

Sample interview questions:
- Can you walk us through how you would design a CI/CD pipeline for a .NET application using Azure DevOps, from build to deployment?
- How do you handle secrets and sensitive configuration values in a CI/CD pipeline?
- Have you ever had to troubleshoot a failing deployment in production? What tools did you use to diagnose the issue, and how did you resolve it?

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future.

NTT DATA endeavors to make its website accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
Posted 2 weeks ago
4.0 - 6.0 years
6 - 8 Lacs
Bengaluru
Work from Office
Design and implement cloud-native data architectures on AWS, including data lakes, data warehouses, and streaming pipelines using services like S3, Glue, Redshift, Athena, EMR, Lake Formation, and Kinesis. Develop and orchestrate ETL/ELT pipelines.

Required Candidate Profile:
Participate in pre-sales and consulting activities, such as engaging with clients to gather requirements and propose AWS-based data engineering solutions, and supporting RFPs/RFIs and technical proposals.
Posted 2 weeks ago
5.0 - 7.0 years
12 - 18 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
We are hiring an experienced Integration Engineer with deep expertise in Dell Boomi and proven skills in Python, AWS, and automation frameworks. This role focuses on building and maintaining robust integration pipelines between enterprise systems like Salesforce, Snowflake, and EDI platforms, enabling seamless data flow and test automation.

Key Responsibilities:
- Design, develop, and maintain integration workflows using Dell Boomi.
- Build and enhance backend utilities and services using Python to support Boomi integrations.
- Integrate test frameworks with AWS services such as Lambda, API Gateway, CloudWatch, etc.
- Develop utilities for EDI document automation (e.g., generating and validating EDI 850 purchase orders); see the sketch below.
- Perform data syncing and transformation between systems like Salesforce, Boomi, and Snowflake.
- Automate post-test data cleanup and validation within Salesforce using Boomi and Python.
- Implement infrastructure-as-code using Terraform to manage cloud resources.
- Create and execute API tests using Postman, and automate test cases using Cucumber and Gherkin.
- Integrate test results into Jira and X-Ray for traceability and reporting.

Must-Have Qualifications:
- 5 to 7 years of professional experience in software or integration development.
- Strong hands-on experience with Dell Boomi (Atoms, Integration Processes, Connectors, APIs).
- Solid programming experience with Python.
- Experience working with AWS services: Lambda, API Gateway, CloudWatch, S3, etc.
- Working knowledge of Terraform for cloud infrastructure automation.
- Familiarity with SQL and modern data platforms (e.g., Snowflake).
- Experience working with Salesforce and writing SOQL queries.
- Understanding of EDI document standards and related integration use cases.
- Test automation experience using Cucumber, Gherkin, and Postman.
- Integration of QA/test reports with Jira, X-Ray, or similar platforms.

Tools & Technologies:
- Integration: Dell Boomi, REST/SOAP APIs
- Languages: Python, SQL
- Cloud: AWS (Lambda, API Gateway, CloudWatch, S3)
- Infrastructure: Terraform
- Data Platforms: Snowflake, Salesforce
- Automation & Testing: Cucumber, Gherkin, Postman
- DevOps: Git, GitHub Actions
- Tracking/Reporting: Jira, X-Ray

Location: Remote; Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
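A hedged sketch of one such utility: an AWS Lambda handler that performs a minimal structural check on an inbound EDI 850 payload before handing it downstream. The segment list is a simplified, illustrative subset; real EDI validation is far stricter and would typically use a dedicated parser.

```python
# Minimal Lambda-style gatekeeper for EDI 850 payloads. The segment
# checks are illustrative only, not a complete X12 validation.
import json

REQUIRED_SEGMENTS = ("ISA", "GS", "ST*850", "BEG")  # illustrative subset

def lambda_handler(event, context):
    payload = event.get("body", "")
    missing = [seg for seg in REQUIRED_SEGMENTS if seg not in payload]
    if missing:
        return {"statusCode": 400,
                "body": json.dumps({"missing_segments": missing})}
    return {"statusCode": 200, "body": json.dumps({"status": "accepted"})}
```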
Posted 2 weeks ago
5.0 - 10.0 years
14 - 21 Lacs
Pune
Work from Office
Lead Mobile Developers (Android & iOS)

Android: Kotlin, Jetpack Compose, MVVM, biometric auth; 6+ years building financial-grade Android apps
iOS: Swift, Combine, CoreData, Keychain; 6+ years in Swift development with banking security compliance

Flexible working; work from home available.
Posted 2 weeks ago
5.0 - 10.0 years
8 - 10 Lacs
Pune
Work from Office
Responsibilities:
- Assist in developing AI-powered QA automation tools using LLMs
- Collaborate with QA engineers to understand test requirements and translate them into automated scripts
- Use LLMs to generate test cases, validation scripts, and test data automatically (see the sketch below)
- Maintain, debug, and optimize automated test scripts and frameworks
- Integrate LLM-based solutions into existing CI/CD pipelines and QA workflows
- Document automation processes and assist with training teams on new AI-driven QA tools
- Continuously research and apply advancements in LLMs and AI for QA improvements

To ensure you're set up for success, you will bring the following skillset & experience:
- You have 3-5 years of software development experience
- You have basic programming skills in Python, JavaScript, or relevant languages for automation
- You have an understanding of software testing concepts and QA automation tools (e.g., Selenium, Cypress, JUnit)
- You have some experience in AI, NLP, or machine learning concepts
- You are familiar with version control systems such as Git
- You have strong problem-solving skills and the ability to learn quickly in a dynamic environment

Whilst these are nice to have, our team can help you develop the following skills:
- Experience with AI/ML frameworks like PyTorch or TensorFlow
- Familiarity with Large Language Models such as GPT, BERT, or OpenAI APIs
- Knowledge of continuous integration tools (Jenkins, GitHub Actions)
- Exposure to writing or maintaining automated test frameworks
- Understanding of cloud environments for deploying automation solutions
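A hedged sketch of LLM-assisted test-case generation as described above, using the official openai library; the model name, prompt, and feature description are illustrative, and in practice the output would still be reviewed by a QA engineer before use.

```python
# Ask an LLM to draft test cases for a feature description. Requires
# OPENAI_API_KEY in the environment; model choice is a placeholder.
from openai import OpenAI

client = OpenAI()

feature = "Users can reset their password via an emailed one-time link."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "system",
         "content": "You are a QA engineer. Return concise, numbered test cases."},
        {"role": "user",
         "content": f"Write functional and negative test cases for: {feature}"},
    ],
)
print(response.choices[0].message.content)
```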
Posted 2 weeks ago
5.0 - 10.0 years
5 - 10 Lacs
Chennai, Tamil Nadu, India
On-site
- Design and implement scalable AI platform solutions to support machine learning workflows.
- Experience building and delivering software using the Python programming language; exceptional ability in other programming languages will be considered.
- Demonstrable experience deploying the underlying infrastructure and tooling for running Machine Learning or Data Science at scale using Infrastructure as Code.
- Experience using DevOps to enable automation strategies.
- Experience or awareness of MLOps practices and building pipelines to accelerate and automate machine learning will be looked upon favorably.
- Manage and optimize the deployment of applications on Amazon EKS (Elastic Kubernetes Service).
- Implement Infrastructure as Code using tools like Terraform or AWS CloudFormation.
- Provision and scale AI platforms such as Domino Data Labs, Databricks, or similar systems.
- Collaborate with cross-functional teams to integrate AI solutions into the AWS cloud infrastructure.
- Drive automation and develop DevOps pipelines using GitHub and GitHub Actions.
- Ensure high availability and reliability of AI platform services.
- Monitor and troubleshoot system performance, providing quick resolutions.
- Stay updated with the latest industry trends and advancements in AI and cloud technologies.
- Experience working with GxP-compliant life science systems will be looked upon favorably.

Qualifications:
- Proven hands-on experience with Amazon EKS and AWS cloud services.
- Strong expertise in Infrastructure as Code with Terraform and AWS CloudFormation.
- Strong expertise with Python programming.
- Experience in provisioning and scaling AI platforms like Domino Data Labs, Databricks, or similar systems.
- Solid understanding of DevOps principles and experience with CI/CD tools like GitHub Actions.
- Familiarity with version control using Git and GitHub.
- Excellent problem-solving skills and the ability to work independently and in a team.
- Strong communication and collaboration skills.
Posted 2 weeks ago
5.0 - 8.0 years
5 - 8 Lacs
Chennai, Tamil Nadu, India
On-site
- Design, implement, and manage cloud infrastructure on AWS using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation.
- Maintain and enhance CI/CD pipelines using tools like GitHub Actions, AWS CodePipeline, Jenkins, or ArgoCD.
- Ensure platform reliability, scalability, and high availability across development, staging, and production environments.
- Automate operational tasks, environment provisioning, and deployments using scripting languages such as Python, Bash, or PowerShell.
- Enable and maintain Amazon SageMaker environments for scalable ML model training, hosting, and pipelines.
- Integrate AWS Bedrock to provide foundation model access for generative AI applications, ensuring security and cost control.
- Manage and publish curated infrastructure templates through AWS Service Catalog to enable consistent and compliant provisioning.
- Collaborate with security and compliance teams to implement best practices around IAM, encryption, logging, monitoring, and cost optimization.
- Implement and manage observability tools like Amazon CloudWatch, Prometheus/Grafana, or ELK for monitoring and alerting.
- Support container orchestration environments using EKS (Kubernetes), ECS, or Fargate.
- Contribute to incident response, post-mortems, and continuous improvement of the platform's operational excellence.

Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
- 5+ years of hands-on experience with AWS cloud services.
- Strong experience with Terraform, AWS CDK, or CloudFormation.
- Proficiency in Linux system administration and networking fundamentals.
- Solid understanding of IAM policies, VPC design, security groups, and encryption.
- Experience with Docker and container orchestration using Kubernetes (EKS preferred).
- Hands-on experience with CI/CD tools and version control (Git).
- Experience with monitoring, logging, and alerting systems.
- Strong troubleshooting skills and the ability to work independently or in a team.

Preferred Qualifications (Nice to Have):
- AWS Certification (e.g., AWS Certified DevOps Engineer, Solutions Architect Associate/Professional).
- Experience with serverless technologies like AWS Lambda, Step Functions, and EventBridge.
- Experience supporting machine learning or big data workloads on AWS.
Posted 2 weeks ago
5.0 - 8.0 years
7 - 10 Lacs
Pune
Remote
What You'll Do

Reports to: Manager - Security Engineering

Avalara is seeking a Security Automation Engineer to join our Security Automation & Platform Enhancement Team (SAPET). You will be at the intersection of cybersecurity, automation, and AI, focusing on designing and implementing scalable security solutions that enhance Avalara's security posture. You will have expertise in programming, cloud technologies, security automation, and modern software engineering practices, with experience in using Generative AI to improve security processes.

What Makes This Role Unique at Avalara?
- Cutting-Edge Security Automation: You will work on advanced cybersecurity automation projects, including fraud detection, AI-based security document analysis, and IT security process automation.
- AI-Powered Innovation: We integrate Generative AI to identify risks, analyze security documents, and automate compliance tasks.
- Impact Across Multiple Security Domains: Your work will support AML, fraud detection, IT security, and vendor risk management.

What Your Responsibilities Will Be
As a Security Automation Engineer, your primary focus will be to develop automation solutions that improve efficiency across several security teams.
- Develop and maintain security automation solutions to streamline security operations and reduce manual effort.
- Work on automation projects that augment security teams, enabling them to work more efficiently.
- Design and implement scalable security frameworks for security teams.

What You'll Need to be Successful
- 5+ years of experience
- Programming & Scripting: Python, GoLang, Bash
- Infrastructure as Code & Orchestration: Terraform, Kubernetes, Docker
- Security & CI/CD Pipelines: Jenkins, GitHub Actions, CI/CD tools
- Database & Data Analysis: SQL, security data analytics tools
- Experience with RDBMS and SQL, including database design, normalization, and query optimization.
- Hands-on experience with security automation tools, SIEM, SOAR, or threat intelligence platforms.
Posted 2 weeks ago
1.0 - 3.0 years
3 - 5 Lacs
New Delhi, Chennai, Bengaluru
Hybrid
Your day at NTT DATA
We are seeking an experienced Data Engineer to join our team in delivering cutting-edge Generative AI (GenAI) solutions to clients. The successful candidate will be responsible for designing, developing, and deploying data pipelines and architectures that support the training, fine-tuning, and deployment of LLMs for various industries. This role requires strong technical expertise in data engineering, problem-solving skills, and the ability to work effectively with clients and internal teams.

What you'll be doing

Key Responsibilities:
- Design, develop, and manage data pipelines and architectures to support GenAI model training, fine-tuning, and deployment.
- Data Ingestion and Integration: Develop data ingestion frameworks to collect data from various sources, transform it, and integrate it into a unified data platform for GenAI model training and deployment.
- GenAI Model Integration: Collaborate with data scientists to integrate GenAI models into production-ready applications, ensuring seamless model deployment, monitoring, and maintenance.
- Cloud Infrastructure Management: Design, implement, and manage cloud-based data infrastructure (e.g., AWS, GCP, Azure) to support large-scale GenAI workloads, ensuring cost-effectiveness, security, and compliance.
- Write scalable, readable, and maintainable code using object-oriented programming concepts in languages like Python, and utilize libraries like Hugging Face Transformers, PyTorch, or TensorFlow.
- Performance Optimization: Optimize data pipelines, GenAI model performance, and infrastructure for scalability, efficiency, and cost-effectiveness.
- Data Security and Compliance: Ensure data security, privacy, and compliance with regulatory requirements (e.g., GDPR, HIPAA) across data pipelines and GenAI applications.
- Client Collaboration: Collaborate with clients to understand their GenAI needs, design solutions, and deliver high-quality data engineering services.
- Innovation and R&D: Stay up to date with the latest GenAI trends, technologies, and innovations, applying research and development skills to improve data engineering services.
- Knowledge Sharing: Share knowledge, best practices, and expertise with team members, contributing to the growth and development of the team.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or related fields (Master's recommended).
- Experience with vector databases (e.g., Pinecone, Weaviate, Faiss, Annoy) for efficient similarity search and storage of dense vectors in GenAI applications (see the sketch below).
- 5+ years of experience in data engineering, with a strong emphasis on cloud environments (AWS, GCP, Azure, or cloud-native platforms).
- Proficiency in programming languages like SQL, Python, and PySpark.
- Strong data architecture, data modeling, and data governance skills.
- Experience with Big Data platforms (Hadoop, Databricks, Hive, Kafka, Apache Iceberg), data warehouses (Teradata, Snowflake, BigQuery), and lakehouses (Delta Lake, Apache Hudi).
- Knowledge of DevOps practices, including Git workflows and CI/CD pipelines (Azure DevOps, Jenkins, GitHub Actions).
- Experience with GenAI frameworks and tools (e.g., TensorFlow, PyTorch, Keras).

Nice to have:
- Experience with containerization and orchestration tools like Docker and Kubernetes.
- Ability to integrate vector databases and implement similarity search techniques, with a focus on GraphRAG, is a plus.
- Familiarity with API gateway and service mesh architectures.
- Experience with low-latency/streaming, batch, and micro-batch processing.
- Familiarity with Linux-based operating systems and REST APIs.
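For the vector-similarity requirement above, a hedged sketch using FAISS, one of the libraries the posting names; the embedding dimension and data are synthetic stand-ins for real model embeddings.

```python
# Exact nearest-neighbor search over synthetic embeddings with FAISS.
# In a real RAG pipeline the vectors would come from an embedding model.
import faiss
import numpy as np

dim = 128
rng = np.random.default_rng(0)
corpus = rng.random((1000, dim), dtype="float32")  # stand-in corpus embeddings
query = rng.random((1, dim), dtype="float32")      # stand-in query embedding

index = faiss.IndexFlatL2(dim)  # brute-force L2; fine at this scale
index.add(corpus)

distances, ids = index.search(query, 5)  # top-5 nearest neighbors
print(ids[0], distances[0])
```

At larger scale you would swap IndexFlatL2 for an approximate index (e.g., IVF or HNSW variants) to trade a little recall for much faster search.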
Posted 2 weeks ago