0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Responsibilities Automation: Automate routine tasks, including system configuration, application deployment, and environment setup. Infrastructure as Code (IaC): Design, implement, and manage infrastructure using tools such as Terraform, CDK, or CloudFormation. CI/CD Pipeline Management: Develop, maintain, and optimize CI/CD pipelines using Jenkins, GitLab CI, or other relevant tools. Cloud Services Management: Deploy and manage applications in cloud environments like AWS, Azure, or Google Cloud, ensuring cost efficiency, security, and scalability. Monitoring and Logging: Implement and maintain monitoring and logging systems using tools such as Prometheus, Grafana, ELK Stack, or Splunk. Security: Implement and manage security best practices, including vulnerability scanning, patch management, and access control. Collaboration: Work closely with development, QA, and operations teams to ensure seamless integration and continuous delivery of new features and fixes. Troubleshooting: Investigate and resolve issues related to infrastructure, application performance, and deployment processes. Documentation: Create and maintain detailed documentation of infrastructure, processes, and systems for internal use and knowledge sharing. This job was posted by Anshika Agarwal from CloudTechner.
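As an illustration of the routine-task automation this role describes, here is a minimal sketch (not part of the posting) that flags running EC2 instances missing a required tag, using boto3; the tag key "owner" and the ap-south-1 region are assumptions.

```python
# Hypothetical audit script: flag EC2 instances missing an "owner" tag.
# The tag key and region are assumptions, not requirements from the posting.
import boto3

REQUIRED_TAG = "owner"                                 # assumed tagging convention
ec2 = boto3.client("ec2", region_name="ap-south-1")    # assumed region

def untagged_instances():
    """Yield IDs of running instances that lack the required tag."""
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    yield instance["InstanceId"]

if __name__ == "__main__":
    for instance_id in untagged_instances():
        print(f"Missing '{REQUIRED_TAG}' tag: {instance_id}")
```

A report like this could feed a scheduled job or a CI step that enforces the tagging policy.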
Posted 1 week ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
We are looking for a QA Engineer with solid experience in manual, automation, and performance testing to join our dynamic team. You'll play a key role in ensuring the quality, performance, and reliability of our software products, collaborating closely with cross-functional teams. Responsibilities Analyze technical documentation and requirements to plan, design, and implement comprehensive test strategies. Build and maintain automated test suites for backend, API, and frontend using tools like Selenium, Playwright, or AI-driven tools, following POM, BDD, or data-driven approaches. Utilize AI and machine learning techniques in testing workflows, for example in predictive test case selection, self-healing test scripts, visual testing, or anomaly detection, to increase efficiency and reduce test maintenance. Perform manual testing and write clear, thorough test cases. Conduct performance and load testing using tools like JMeter to evaluate scalability and responsiveness. Use SQL and related tools for data validation and database testing. Integrate and maintain CI/CD pipelines, preferably using Jenkins. Collaborate closely with developers and cross-functional teams to improve product quality and user experience. Diagnose issues, identify performance bottlenecks, and ensure timely resolution. Support debugging and issue triaging efforts, and contribute to the continuous improvement of QA processes and best practices. Requirements Bonus: Experience with cloud platforms such as AWS, GCP, or Azure. Advantage: Working knowledge of DevOps practices, including infrastructure as code (e.g., Terraform), containerization (e.g., Docker), and monitoring/logging tools. 4+ years of experience in QA, with strong exposure to test automation and performance testing. Proficiency in functional, regression, integration, and system testing methodologies. Experience with defect management tools like JIRA. Strong analytical, debugging, and communication skills. Quick learner with a proactive mindset. Committed to timelines and adaptable in a dynamic, collaborative environment. This job was posted by Nibila Manikandan from Opsera.
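To make the POM and data-driven approaches mentioned above concrete, here is a hedged sketch using Selenium with pytest; the login URL, locators, and credentials are placeholders, not details from the posting.

```python
# Minimal page-object + data-driven sketch with Selenium and pytest.
# The URL, locators, and credentials are placeholders, not from the posting.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    """Page object wrapping the login form's locators and actions."""
    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get("https://example.com/login")  # placeholder URL

    def login(self, user, password):
        self.driver.find_element(By.NAME, "username").send_keys(user)
        self.driver.find_element(By.NAME, "password").send_keys(password)
        self.driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()

@pytest.fixture
def driver():
    drv = webdriver.Chrome()
    yield drv
    drv.quit()

@pytest.mark.parametrize("user,password,expect_dashboard", [
    ("valid_user", "valid_pass", True),    # happy path
    ("valid_user", "wrong_pass", False),   # negative case
])
def test_login(driver, user, password, expect_dashboard):
    page = LoginPage(driver)
    page.open()
    page.login(user, password)
    assert ("dashboard" in driver.current_url) == expect_dashboard
```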
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are seeking a highly skilled .NET Developer to design, develop, and maintain robust, scalable, and high-performance RESTful APIs and web applications. The ideal candidate will have expertise in .NET Core 7+, Angular 17+, MS SQL Server, and experience working in an Agile development environment. Responsibilities Develop and maintain RESTful APIs using .NET Core Web API. Optimize and create stored procedures in MS SQL Server for efficient data handling. Implement Entity Framework Core for seamless database interactions. Ensure high performance, scalability, and responsiveness of APIs. Implement security best practices for API development and data protection. Design and develop concurrent and scalable API solutions. Participate in code reviews and contribute to development best practices. Troubleshoot, debug, and resolve API-related issues efficiently. Implement logging and monitoring solutions to ensure API performance. Stay updated with the latest .NET technologies and industry best practices. Requirements Knowledge of Agile development environments. Strong problem-solving and debugging skills. Skills: .NET Core 7+ and Web API development. Angular 17+ for front-end development. MS SQL Server (stored procedures, performance tuning). Entity Framework Core. Security practices in API development. Git repositories (version control). This job was posted by Human Resource from Codeline Tech.
Posted 1 week ago
0 years
0 Lacs
India
Remote
What You Can Expect You will be part of Zoom's Integration Engineering team responsible for designing and building strategic integration features. About The Team You will join IT Engineering to help lead an integration shared services team. The team manages 250+ system-to-system integrations. As a Software Development Engineer, you will work with a team of 9 software engineers in a multi-faceted domain. You will provide exceptional leadership and support for all integrations, internal and external. The integrations are built leveraging REST APIs, Mulesoft, pub-sub, and file exchange using S3 or SFTP. This engineering position would play a pivotal role in architecting, designing, building and supporting the full-stack cloud-native solution. What We're Looking For Have a Bachelor's degree in Computer Science, Engineering, MIS, or a related field, with 7-10 years of overall experience. Have experience in delivering end-to-end services or products, from requirement gathering and design through production deployment. Have experience with application monitoring and logging tools such as Splunk, ELK/Kibana, Datadog, and Prometheus, along with practices to ensure performance and system observability. Have knowledge of cloud-native system design with exposure to reliability patterns like failovers, circuit breakers, auto-scaling, etc. Have the ability to apply code optimization techniques to improve the performance, reliability, and maintainability of services. Open to leveraging AI tools, such as GitHub Copilot, to improve development speed, code quality, and team productivity. Responsibilities Strong development background in Java, including experience with JVM internals, multithreading, concurrency, I/O, and network programming. Experience with Kafka, RESTful APIs, and Microservices is required. Software development experience building Microservices, implementing message queues and process services via enterprise middleware such as Mulesoft. Solid understanding of DevOps practices, with hands-on experience using tools like Git, Maven/Gradle, Jenkins, and CI/CD workflows. Experience mentoring junior engineers, conducting effective code reviews, and supporting career growth through structured feedback and guidance. #India #Remote Ways of Working Our structured hybrid approach is centered around our offices and remote work environments. The work style of each role, Hybrid, Remote, or In-Person, is indicated in the job description/posting. Benefits As part of our award-winning workplace culture and commitment to delivering happiness, our benefits program offers a variety of perks, benefits, and options to help employees maintain their physical, mental, emotional, and financial health; support work-life balance; and contribute to their community in meaningful ways. About Us Zoomies help people stay connected so they can get more done together. We set out to build the best collaboration platform for the enterprise, and today help people communicate better with products like Zoom Contact Center, Zoom Phone, Zoom Events, Zoom Apps, Zoom Rooms, and Zoom Webinars. We're problem-solvers, working at a fast pace to design solutions with our customers and users in mind. Find room to grow with opportunities to stretch your skills and advance your career in a collaborative, growth-focused environment. Our Commitment At Zoom, we believe great work happens when people feel supported and empowered. 
We’re committed to fair hiring practices that ensure every candidate is evaluated based on skills, experience, and potential. If you require an accommodation during the hiring process, let us know—we’re here to support you at every step. If you need assistance navigating the interview process due to a medical disability, please submit an Accommodations Request Form and someone from our team will reach out soon. This form is solely for applicants who require an accommodation due to a qualifying medical disability. Non-accommodation-related requests, such as application follow-ups or technical issues, will not be addressed.
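The reliability patterns named above (circuit breakers in particular) can be illustrated with a small language-agnostic sketch; the posting's stack is Java, but the toy below is in Python, and its thresholds and timings are arbitrary assumptions.

```python
# Toy circuit breaker: pure-Python illustration of the reliability pattern
# mentioned above; thresholds and timings are arbitrary assumptions.
import time

class CircuitBreaker:
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None              # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()   # trip the breaker
            raise
        self.failures = 0                      # success resets the count
        return result
```

Wrapping an outbound call with breaker.call(fn, ...) lets repeated failures fail fast instead of piling up on a struggling dependency.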
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Responsibilities Lead design and delivery of complex end-to-end features across frontend, backend, and data layers. Make strategic architectural decisions on frameworks, datastores, and performance patterns. Review and approve pull requests, enforcing clean-code guidelines, SOLID principles, and design patterns. Build and maintain shared UI component libraries and backend service frameworks for team reuse. Identify and eliminate performance bottlenecks in both browser rendering and server throughput. Instrument services with metrics and logging, driving SLIs, SLAs, and observability. Define and enforce comprehensive testing strategies: unit, integration, and end-to-end. Own CI/CD pipelines, automating builds, deployments, and rollback procedures. Ensure OWASP Top-10 mitigations, WCAG accessibility, and SEO best practices. Partner with Product, UX, and Ops to translate business objectives into technical roadmaps. Facilitate sprint planning, estimation, and retrospectives for predictable deliveries. Mentor and guide SDE-1s and interns; participate in hiring. Requirements 3-5 years building production full-stack applications end-to-end with measurable impact. Proven leadership in Agile/Scrum environments with a passion for continuous learning. Deep expertise in React (or Angular/Vue) with TypeScript and modern CSS methodologies. Proficient in Node.js (Express/NestJS), Python (Django/Flask/FastAPI), or Java (Spring Boot). Expert in designing RESTful and GraphQL APIs and scalable database schemas. Knowledge of MySQL/PostgreSQL indexing, NoSQL (ElasticSearch/DynamoDB), and caching (Redis). Knowledge of containerization (Docker) and commonly used AWS services such as Lambda, EC2, S3, API Gateway, etc. Skilled in unit/integration (Jest, pytest) and E2E testing (Cypress, Playwright). Frontend profiling (Lighthouse) and backend tracing for performance tuning. Secure coding: OAuth2/JWT, XSS/CSRF protection, and familiarity with compliance regimes. Strong communicator able to convey technical trade-offs to non-technical stakeholders. Experience in reviewing pull requests and providing constructive feedback to the team. Qualities We'd Love To Find In You The attitude to always strive for the best outcomes and an enthusiasm to deliver high-quality software. Strong collaboration abilities and a flexible and friendly approach to working with teams. Strong determination with a constant eye on solutions. Creative ideas with a problem-solving mindset. Openness to receiving objective criticism and improving upon it. Eagerness to learn and zeal to grow. Strong communication skills are a huge plus. This job was posted by Shikha Mittal from NxtWave.
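As a sketch of the caching requirement above, here is a minimal cache-aside pattern with Redis; the key format, TTL, and the stubbed database lookup are assumptions for illustration only.

```python
# Cache-aside sketch with Redis: illustrative only; key names, TTL, and the
# fetch_user_from_db stub are assumptions, not part of the posting.
import json
import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_user_from_db(user_id: int) -> dict:
    # stand-in for a real PostgreSQL/MySQL query
    return {"id": user_id, "name": "example"}

def get_user(user_id: int, ttl_seconds: int = 300) -> dict:
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)            # cache hit
    user = fetch_user_from_db(user_id)       # cache miss: go to the database
    cache.setex(key, ttl_seconds, json.dumps(user))
    return user
```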
Posted 1 week ago
5.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
We are looking for a highly motivated DevSecOps Engineer with 5+ years of hands-on experience in integrating security into the DevOps lifecycle. The ideal candidate will work closely with development, security, and operations teams to ensure our applications and infrastructure are secure, scalable, and efficient from development through deployment. Responsibilities Integrate security best practices into CI/CD pipelines (GitLab, Jenkins, GitHub Actions, etc.). Automate security scans (SAST, DAST, dependency checks) and enforce policies. Implement Infrastructure as Code (IaC) using tools like Terraform, CloudFormation, or Ansible. Collaborate with development teams to remediate vulnerabilities and conduct threat modeling. Monitor infrastructure and application security with tools like Wazuh/OSSEC or equivalent. Manage secrets and credentials securely using Vault, AWS Secrets Manager, etc. Perform regular security audits and assessments for cloud environments (AWS, GCP, Azure). Improve logging, monitoring, and alerting for security anomalies (e.g., using ELK, Prometheus, Loki, or SIEM tools). Stay current on security trends, vulnerabilities, and compliance requirements. Requirements 5+ years of experience in DevOps/Security engineering or a related role. Strong understanding of CI/CD practices with experience automating security checks. Hands-on experience with container security (Docker, Kubernetes, image scanning). Familiarity with cloud platforms (AWS/GCP) and cloud security principles. Experience with tools like SonarQube, OWASP ZAP, Trivy, Checkov, or Snyk. Proficiency in scripting (Python, Bash, or similar). Knowledge of IAM, RBAC, and least-privilege principles. Good understanding of network and application security fundamentals. Strong collaboration and communication skills. Preferred Qualifications Certifications: AWS Security, Certified DevSecOps Professional, CEH, or similar. Experience with compliance frameworks (SOC 2, ISO 27001, HIPAA, etc.). Familiarity with Zero Trust Architecture and Secure SDLC concepts. This job was posted by Parvinder Kaur from Snapmint.
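One way to automate the image-scanning step described above is to gate the pipeline on a scanner's exit code; the sketch below shells out to the Trivy CLI and is illustrative only, with the image name and severity threshold as assumptions.

```python
# Sketch of a pipeline gate that shells out to Trivy (assumes the trivy CLI
# is installed); the image name and severity threshold are placeholders.
import subprocess
import sys

def scan_image(image: str) -> int:
    """Return Trivy's exit code: non-zero means HIGH/CRITICAL findings."""
    result = subprocess.run(
        ["trivy", "image", "--severity", "HIGH,CRITICAL",
         "--exit-code", "1", image],
        check=False,
    )
    return result.returncode

if __name__ == "__main__":
    image = sys.argv[1] if len(sys.argv) > 1 else "myapp:latest"  # placeholder
    sys.exit(scan_image(image))  # non-zero exit fails the CI stage
```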
Posted 1 week ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Requirements Bachelor's degree in Computer Science or a related field, or equivalent experience. 3+ years of experience in a similar role. Proficiency in at least one backend programming language (e.g., Java, Python, Go). Experience with cloud platforms (e.g., AWS, Azure, GCP). Strong understanding of DevOps principles and practices. Experience with containerization technologies (e.g., Docker, Kubernetes). Experience with configuration management tools (e.g., Ansible, Puppet, Chef). Experience with CI/CD tools (e.g., Jenkins, GitLab CI). Excellent problem-solving and troubleshooting skills. Strong communication and collaboration skills. Experience with databases (e.g., PostgreSQL, MySQL). Experience with monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack). Experience with security best practices. Experience with serverless technologies. This job was posted by Ashok Kumar Samal from HDIP.
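To ground the monitoring requirement, here is a minimal sketch that exposes Prometheus metrics from a Python service using prometheus_client; the metric names, port, and simulated workload are assumptions.

```python
# Minimal Prometheus instrumentation sketch using prometheus_client;
# metric names, port, and the handle_request stub are assumptions.
import random
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_latency_seconds", "Request latency")

@LATENCY.time()
def handle_request():
    REQUESTS.inc()
    time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work

if __name__ == "__main__":
    start_http_server(8000)   # metrics exposed at http://localhost:8000/
    while True:
        handle_request()
```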
Posted 1 week ago
5.0 years
0 Lacs
Hyderābād
Remote
Role – Cloud Infra DevOps Engineer Location – India Remote (Hyderabad Preferred but not Required) Job Summary We are seeking an experienced Cloud DevOps Engineer with a minimum of 5 years of expertise in automating cloud infrastructure and platform development. The ideal candidate should possess a deep understanding of Infrastructure as Code automation and containerization using Kubernetes, and strong knowledge of REST APIs. Roles & Responsibilities: Design, build, and maintain full-stack internal tooling and platforms that improve engineering productivity and automation. Develop and support RESTful APIs, microservices, and front-end interfaces that integrate with both third-party and custom-built tools. Operate and scale cloud-based infrastructure across GCP, AWS, and Azure; contribute to infrastructure provisioning, configuration management, and monitoring. Implement and support Kubernetes-based containerization workflows, including service deployment and resource optimization. Automate infrastructure and operations using tools like Terraform, Helm, and CI/CD pipelines. Own the full lifecycle of systems—from requirements gathering to deployment and post-release support. Troubleshoot performance and reliability issues across the stack and work on platform hardening. Collaborate with product owners, site reliability engineers, and developers to continuously improve platform usability and reliability. Contribute to architectural decisions, technical design reviews, and code reviews; mentor junior team members where applicable. Qualifications: 5+ years of experience in software engineering and cloud platform operations in a large-scale or enterprise environment. Strong programming skills in Java (preferred), with experience in Spring Boot or similar frameworks. Bonus for front-end experience (React, Angular, or similar). Experience designing, deploying, and managing applications in Kubernetes and Dockerized environments. Solid background in multi-cloud architecture and automation across GCP, AWS, or Azure. Familiarity with internal developer platforms and tools like JIRA, GitHub, Artifactory, SonarQube, and related systems. Comfortable with monitoring/logging stacks (e.g., Prometheus, Grafana, ELK, or similar). Experience with infrastructure-as-code using tools like Terraform, Ansible, or CloudFormation. Strong debugging and problem-solving skills across distributed systems, APIs, and infrastructure layers. Ability to translate platform needs into maintainable, performant code and configuration. Excellent communication skills with the ability to work across engineering and infrastructure domains.
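As one small example of the Kubernetes-focused automation described above, the sketch below generates a Deployment manifest with PyYAML; the application name, image, replica count, and resource limits are placeholder values, not requirements of the role.

```python
# Illustrative generator for a Kubernetes Deployment manifest using PyYAML;
# names, image, replica count, and resource limits are placeholder values.
import yaml

def deployment_manifest(name: str, image: str, replicas: int = 2) -> str:
    manifest = {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "ports": [{"containerPort": 8080}],
                        "resources": {"limits": {"cpu": "500m", "memory": "256Mi"}},
                    }]
                },
            },
        },
    }
    return yaml.safe_dump(manifest, sort_keys=False)

if __name__ == "__main__":
    print(deployment_manifest("internal-tooling-api", "registry.example.com/tooling:1.0"))
```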
Posted 1 week ago
4.0 - 6.0 years
0 Lacs
Pune
On-site
Job description Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Senior Software Engineer. In this role, you will: Demonstrate good expertise in Java and Spring Boot; knowledge of AWS is an added advantage. Be expert in the analysis of requirements and able to provide technical inputs and estimates quickly. Follow all processes and guidelines for zero defects and for the quality and delivery standards set by the organisation. Show an ability to innovate and improve the velocity of the team. Solve queries and issues quickly and correctly by unblocking blockers. Provide good support for production issues and CR deployments. Communicate effectively with stakeholders and external teams. Highlight risks and blockers in advance, with their impact on timelines and delivery. Learn new technology as needed by the project. Be well versed with JIRA and keep track of its hygiene. Be well versed with Scrum and the SAFe agile framework. Be a hands-on coder who can work individually on deliverables. Collaborate with Enterprise/Solution Architects and Business Analysts to deliver high-quality APIs to enable reusability in the Group. Provide professional consultancy/support in a timely manner for application teams’ queries/requests. Ensure the code structure is technically coherent, future-proof and compliant with technology standards and regulatory obligations. Work with Java/COBOL/RPG experienced developers. Requirements To be successful in this role, you should meet the following requirements: Solid and proficient skills in Java, Spring Framework, Micro Services and RAML, with 4-6 years of relevant experience. Should be ready to be based in Pune. Should be ready to provide on-call support if any issues occur in production between 1:30 pm and 1:30 am IST on weekdays and weekends. This support would be on a rotational basis, where the rota may come around every 3 to 4 weeks. Strong foundation in RESTful design practices. Experience in working with an API management platform. Experience in modelling data in JSON. Experience in Scrum and Agile. Knowledge of DevOps tooling (e.g. Jenkins, Git, Maven). Experience in Unit Testing, Data Mockup and Automation Testing. Strong communication, analytical, design and problem-solving skills. Knowledge of source code scanning and security (e.g. Checkmarx, Sonar). Knowledge of logging & monitoring tools like Splunk and AppDynamics. Experience in performance tuning is a plus. Exposure to JWT and SAML. Cloud experience is a plus (e.g. Docker, Kubernetes, AWS). Knowledge or experience of any ESB tools (e.g. IIB, MQ, Spring Boot) is a plus. Willingness to learn/explore various technologies. Excellent team player with the ability to work under minimal supervision. You’ll achieve more when you join HSBC. www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued, respected and opinions count. 
We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India
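The role's stack is Java and Spring, but as a language-neutral illustration of the RESTful design and JSON modelling mentioned above, here is a small contract check written in Python with requests; the endpoint, field names, and timeout are assumptions.

```python
# Hedged sketch of a simple REST contract check with requests; the endpoint,
# expected fields, and timeout are illustrative assumptions only.
import requests

EXPECTED_FIELDS = {"accountId", "currency", "balance"}   # assumed JSON model

def check_account_contract(base_url: str, account_id: str) -> bool:
    resp = requests.get(f"{base_url}/accounts/{account_id}", timeout=5)
    if resp.status_code != 200:
        return False
    payload = resp.json()
    return EXPECTED_FIELDS.issubset(payload.keys())   # shape check only

if __name__ == "__main__":
    ok = check_account_contract("https://api.example.com/v1", "12345")
    print("contract ok" if ok else "contract violated")
```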
Posted 1 week ago
2.0 years
3 - 5 Lacs
India
On-site
AWS Cloud Engineer with extensive experience of 2 years in designing highly available, cost-efficient, fault-tolerant and scalable distributed systems on AWS; exposure to AWS deployment and management services. Monitoring deployments across environments, debugging deployment issues and resolving them in a timely manner to reduce downtime. Experience in AWS Cloud and DevOps tools. Experienced working with AWS infrastructure and its services like IAM, VPC, EC2, EBS, S3, ALB, NACL, Security Groups, Auto Scaling, RDS, SNS, EFS, CloudWatch, CloudFront. Good hands-on experience with IaC tools like Terraform and CloudFormation. Good experience with source code management tools Git and GitHub and source control management concepts like branches and merges. Good experience automating CI/CD pipelines using Jenkins. Good hands-on experience with configuration management tools like Ansible. Experience creating custom Docker images using Dockerfiles and pushing Docker images to Docker Hub. Setting up Kubernetes clusters using EKS and kubeadm. Writing manifest files to create deployments and services for microservice applications. Configuring Persistent Volumes (PVs) and PVCs for persistent database environments. Managed Deployments, ReplicaSets, StatefulSets and autoscaling for Kubernetes clusters. Good experience with ELK for log aggregation and log monitoring. Implemented, maintained and monitored alarms and notifications for AWS services using CloudWatch and SNS. Experienced in deploying and monitoring applications on various platforms and setting up lifecycle policies to back up data from AWS S3. Configured CloudWatch alarm rules for operational and performance metrics for AWS resources and applications. Provisioned AWS resources using the AWS Management Console and Command Line Interface (CLI). Planned, built, and configured network infrastructure within VPC and other components. Responsible for implementing and supporting cloud-based infrastructure and its solutions. Launching and configuring EC2 instances using AMIs (Linux). Created IAM users and policies for application access. Installing and configuring Apache web server on Windows and Linux. Initiating alarms in the CloudWatch service for monitoring server performance, CPU utilization, disk usage, etc., to take recommended actions for better performance. Creating/managing instance images/snapshots and managing volumes. Setup/managing VPCs and subnets, making connections between different availability zones. Monitoring access logs and error logs in AWS CloudWatch. Configuring EFS for EC2 instances. Creating and configuring Elastic Load Balancers to distribute traffic. Administration of the Jenkins server, including setup of Jenkins, parameterized builds and deployment automation. Experience in creating Jenkins jobs, plug-in installations, setting up the distributed builds concept and other Jenkins administration activities. Experience in managing microservice applications using Docker and Kubernetes. Increasing EBS volume storage capacity using AWS EBS volume features. Creating/managing buckets on S3 and assigning access permissions. Software installations, troubleshooting and updates. Building and releasing EC2 instances (Amazon Linux) for development and production environments. Moving EC2 logs into S3. Experience in S3 versioning, server access logging & lifecycle policies on S3 buckets. Creating & maintaining user accounts, groups and permissions. Created SNS notifications for multiple services in AWS. 
Creating and attaching Elastic IPs to EC2 instances. Assigning access permissions for files and directories to users and groups. Creating and managing user accounts/groups, assigning roles and policies using IAM. Experience with AWS Cloud services like IAM, S3, VPC, EC2, CloudWatch, CloudFront, CloudTrail, Route 53, EFS, AWS Auto Scaling, EBS, SNS, SES, SQS, KMS, RDS, Security Groups, Lambda, ECS, EKS, Tag Editor and more. Involved in designing and developing with Amazon EC2, Amazon S3, Amazon RDS, Lambda and other services. Creating containers in Docker and pulling images for deployment. Creating networks, nodes and pods in Kubernetes. Deployments using Jenkins through CI/CD pipelines. Creating infrastructure using Terraform. Responsible for designing and deploying best SCM processes and procedures. Responsible for branching, merging and resolving various conflicts arising in Git. Set up and created CI/CD pipelines in Jenkins and scheduled jobs. Established a complete Jenkins CI/CD pipeline and the complete workflow of build and delivery pipelines. Involved in writing Dockerfiles to build customized Docker images for creating Docker containers and pushing Docker images to Docker Hub. Created and managed multiple containers using Kubernetes, and created deployments using YAML code. Used Kubernetes to orchestrate the deployment, scaling and management of Docker containers. Experience with monitoring tools like Prometheus and Grafana. Responsible for establishing a complete pipeline workflow, from pulling source code from the Git repository through deploying the end product into the Kubernetes cluster. Managing client infrastructure on both Windows and Linux. Creation of files and directories. Creating users and groups. Assigning access permissions for files and directories to users and groups. Installing and managing web servers. Installation of packages using YUM (HTTP, HTTPS). Monitoring system performance, disk utilization and CPU utilization. Technical Skills Operating Systems: Linux, CentOS, Ubuntu and Windows. AWS: EC2, VPC, S3, EBS, IAM, Load Balancing, Auto Scaling, CloudFormation, CloudWatch, CloudFront, SNS, EFS, Route 53. DevOps Tools: Git, Ansible, Chef, Docker, Jenkins, Kubernetes, Terraform. Scripting Languages: Shell, Python. Monitoring Tools: CloudWatch, Grafana, Prometheus. Job Types: Full-time, Permanent, Fresher Pay: ₹345,405.87 - ₹500,000.00 per year Benefits: Health insurance Provident Fund Schedule: Day shift Morning shift Rotational shift Supplemental Pay: Performance bonus Yearly bonus Work Location: In person Speak with the employer +91 8668118196
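As a concrete example of the CloudWatch alarm work described above, here is a hedged boto3 sketch that creates a CPU-utilization alarm notifying an SNS topic; the instance ID, topic ARN, threshold, and region are placeholders.

```python
# Sketch of the CloudWatch CPU alarm described above, via boto3; the instance
# ID, SNS topic ARN, threshold, and region are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")  # assumed region

def create_cpu_alarm(instance_id: str, sns_topic_arn: str, threshold: float = 80.0):
    cloudwatch.put_metric_alarm(
        AlarmName=f"high-cpu-{instance_id}",
        AlarmDescription="CPU utilization above threshold",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        Statistic="Average",
        Period=300,                 # 5-minute datapoints
        EvaluationPeriods=2,        # two consecutive breaches
        Threshold=threshold,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[sns_topic_arn],   # notify via SNS
    )

if __name__ == "__main__":
    create_cpu_alarm("i-0123456789abcdef0",
                     "arn:aws:sns:ap-south-1:111122223333:ops-alerts")
```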
Posted 1 week ago
8.0 years
0 Lacs
Bengaluru
On-site
Imagine what you could do here. At Apple, new ideas have a way of becoming extraordinary products, services, and customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish. The people here at Apple don’t just create products - they create the kind of wonder that’s revolutionized entire industries. It’s the diversity of those people and their ideas that inspires the innovation that runs through everything we do, from amazing technology to industry-leading environmental efforts. Join Apple, and help us leave the world better than we found it. We are looking for an Automation SME to drive automation activities for stringent enclosure and module manufacturing processes as part of Manufacturing Design’s Automation SME team. Will also work with integrators, technology suppliers, contract manufacturers, and internal customers to develop advanced automation capabilities and implement them in the supply chain to exacting standards on aggressive project timelines. Description Join a dynamic team of Subject Matter Experts and help drive automation activities (including machine vision and robotics) for key manufacturing processes in the enclosure and module spaces. We look forward to these contributions in this role : - Work with cross-function operations teams and contract manufacturers to evaluate automation implementation risk, create technical requirements/RFP package, assess capabilities of automation integrators, provide DFM feedback, support integrator down-select activities, and estimate automation costs and capabilities. - Work onsite with integrators to drive design reviews, implement proof of concept, maintain and resolve issues lists, implement data logging and traceability requirements, review control logic and programming, and define qualification requirements for FAT/SAT. - Provide onsite technical support at FAT/SAT, development builds, production builds, and production ramp as needed to ensure equipment capability, availability, and productivity. - Coordinate above activities with Cupertino counterparts to push process capability, ensure timely completion of deliverables, and identify/mitigate risk ahead of schedule. - Present detailed analyses to key decision makers to enable commercial and design decisions. - Drive pioneering research and development activities both in-house and with industry partners to deliver step-change improvements to process capability. - Maintain deep knowledge of equipment capabilities and key process drivers; keep abreast of the latest industrial developments and leverage new technologies in novel ways to enable improved capabilities; serve as subject matter expert (SME) for project teams. Minimum Qualifications 8+ years of professional experience working on automation and robotics applications for mass production applications. BTech/ MTech in engineering (mechanical, manufacturing, automation, robotics or equivalent) from a premier school Preferred Qualifications Experience designing, deploying, qualifying, supporting, and project managing automated manufacturing process lines that incorporate machine vision (including cameras, lighting, and programming), motion control (including actuators, motors, pneumatics) and sensors, with expertise in most of those component groups. Strong mechanical design, development, and FA skills. 
Ability to reconcile DFM feedback with design goals, anticipate risk and identify mitigation opportunities, drive root cause/corrective action and process optimization activities to closure efficiently with proper design of experiments (DOE) practices and detailed data analysis using statistical methods such as process capability, correlation, GR&R, etc., and software such as JMP, MATLAB, etc. Ability to work independently on a variety of demanding, concurrent projects that often require creative solutions. A positive attitude with a hands-on approach and a desire to drive innovation and create world-class products. Expertise with six-axis industrial robots from one or more brands (ABB, KUKA, Fanuc, etc.), their textual programming languages (RAPID, KRL, Karel, etc.), and their offline programming (OLP) packages (RobotStudio, KUKA.Sim, Roboguide, etc.), or equivalents. Extensive experience using CAD to review and design mechanical components and assemblies, and create proper engineering part drawings using GD&T practices. Expertise with PLCs (Mitsubishi, Panasonic, Beckhoff, Siemens, Allen-Bradley, etc.) and various PLC programming languages including Ladder Diagram, Structured Text and Function Block Diagram. Experience working overseas with international supply chains and contract manufacturers. Fluent spoken and written English skills with an ability to articulate complex, technical concepts and issues in a clear and concise manner to audiences of mixed backgrounds. Consistent domestic travel required (~40%).
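As a toy illustration of the machine-vision side of this role, the sketch below does a simple pass/fail defect count with OpenCV (version 4 API); the image path, preprocessing choices, and area threshold are assumptions, and production inspection systems would be far more involved.

```python
# Toy machine-vision pass/fail check with OpenCV: counts blob-like defects on
# a grayscale image; the image path and area threshold are assumptions.
import cv2

def count_defects(image_path: str, min_area: float = 50.0) -> int:
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Otsu thresholding separates dark defects from the bright background
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return sum(1 for c in contours if cv2.contourArea(c) >= min_area)

if __name__ == "__main__":
    defects = count_defects("enclosure_top.png")   # placeholder image
    print("FAIL" if defects > 0 else "PASS", f"({defects} defects)")
```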
Posted 1 week ago
6.0 - 8.0 years
0 Lacs
Gurugram, Haryana, India
Remote
About Tala Tala is on a mission to unleash the economic power of the Global Majority – the 4 billion people overlooked by existing financial systems. With nearly half a billion dollars raised from equity and debt, we are serving millions of customers across three continents. Tala has been named to the Fortune Impact 20 list, CNBC's Disruptor 50 five years in a row, CNBC's World's Top Fintech Company, Forbes' Fintech 50 list for eight years running, and Chief's The New Era of Leadership Award. We are expanding across product offerings, countries and crypto and are looking for people who have an entrepreneurial spirit and are passionate about our mission. By creating a unique platform that enables lending and other financial services around the globe, people in emerging markets are able to start and expand small businesses, manage day-to-day needs, and pursue their financial goals with confidence. Currently, over nine million people across Kenya, the Philippines, Mexico, and India have used Tala products. Due to our global team, we have a remote-first approach, and also have offices in Santa Monica, CA (HQ); Nairobi, Kenya; Mexico City, Mexico; Manila, the Philippines; and Bangalore, India. Most Talazens join us because they connect with our mission. If you are energized by the impact you can make at Tala, we'd love to hear from you! The Role We're seeking a visionary Senior Cloud Infrastructure Engineer to spearhead technological innovation and revolutionize our DevOps landscape. You will bring world-class cloud-native infrastructure & automation expertise to implement solutions for deployment, monitoring & remediation in an automated fashion. What You'll Do Provide technical leadership to the team in driving automation of infrastructure & platform services in public clouds (AWS, GCP, and Azure) using Terraform and Ansible. Architect new solutions with development for infra & platform. Design and manage continuous deployment using Kubernetes, ArgoCD and Jenkins. Monitor applications and services within the environments & be part of the on-call rotation to resolve issues and implement strategies to prevent future occurrences. Set up intelligent application performance alerts in Datadog and ElasticSearch to find and fix issues before they impact business services and end-users. Learn about technologies outside of your realm of expertise that help drive What You'll Need Understanding of how cloud-based web applications work and interest in measuring, analyzing, and improving distributed systems B.S. 
Degree in Computer Science or related field or equivalent combination of professional development training and experience 6 - 8 years of previous experience deploying and automating infrastructure in public cloud environments, using Infrastructure as Code such as Terraform or Ansible In-depth hands-on experience with at least one public Cloud platform (AWS or GCP) Prior experience as a technical lead working closely with Product, Engineering and SecOps on shift-left strategies, CI/CD tools and solutions needed Experience with Docker and Kubernetes in production Experience with Continuous Deployment tools such as Jenkins or ArgoCD Experience with Logging and Monitoring tools for SaaS such as Sumo, Splunk, Datadog etc Excellent verbal and written communication skills and ability to document and explain technical details and concepts clearly and concisely Flexibility to pitch in where needed across program and team Strong influence and teamwork skills; sound problem resolution, judgment, negotiating, and decision-making skills Experience working effectively with global teams in multiple time zones Our vision is to build a new financial ecosystem where everyone can participate on equal footing and access the tools they need to be financially healthy. We strongly believe that inclusion fosters innovation and we’re proud to have a diverse global team that represents a multitude of backgrounds, cultures, and experience. We hire talented people regardless of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status.
Posted 1 week ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
As a Data Engineer at Meta, you will shape the future of people-facing and business-facing products we build across our entire family of applications (Facebook, Instagram, Messenger, WhatsApp, Reality Labs, Threads). Your technical skills and analytical mindset will be utilized designing and building some of the world's most extensive data sets, helping to craft experiences for billions of people and hundreds of millions of businesses worldwide. In this role, you will collaborate with software engineering, data science, and product management teams to design/build scalable data solutions across Meta to optimize growth, strategy, and user experience for our 3 billion plus users, as well as our internal employee community. You will be at the forefront of identifying and solving some of the most interesting data challenges at a scale few companies can match. By joining Meta, you will become part of a world-class data engineering community dedicated to skill development and career growth in data engineering and beyond. Data Engineering: You will guide teams by building optimal data artifacts (including datasets and visualizations) to address key questions. You will refine our systems, design logging solutions, and create scalable data models. Ensuring data security and quality, and with a focus on efficiency, you will suggest architecture and development approaches and data management standards to address complex analytical problems. Product leadership: You will use data to shape product development, identify new opportunities, and tackle upcoming challenges. You'll ensure our products add value for users and businesses, by prioritizing projects, and driving innovative solutions to respond to challenges or opportunities. Communication and influence: You won't simply present data, but tell data-driven stories. You will convince and influence your partners using clear insights and recommendations. You will build credibility through structure and clarity, and be a trusted strategic partner. Data Engineer, Product Analytics Responsibilities: Collaborate with engineers, product managers, and data scientists to understand data needs, representing key data insights in a meaningful way Design, build, and launch collections of sophisticated data models and visualizations that support multiple use cases across different products or domains Define and manage Service Level Agreements for all data sets in allocated areas of ownership Solve challenging data integration problems, utilizing optimal Extract, Transform, Load (ETL) patterns, frameworks, query techniques, sourcing from structured and unstructured data sources Improve logging Assist in owning existing processes running in production, optimizing complex code through advanced algorithmic concepts Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts Influence product and cross-functional teams to identify data opportunities to drive impact Minimum Qualifications: Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent 2+ years of experience where the primary responsibility involves working with data. This could include roles such as data analyst, data scientist, data engineer, or similar positions 2+ years of experience with SQL, ETL, data modeling, and at least one programming language (e.g., Python, C++, C#, Scala or others.) 
Preferred Qualifications: Master's or Ph.D degree in a STEM field About Meta: Meta builds technologies that help people connect, find communities, and grow businesses. When Facebook launched in 2004, it changed the way people connect. Apps like Messenger, Instagram and WhatsApp further empowered billions around the world. Now, Meta is moving beyond 2D screens toward immersive experiences like augmented and virtual reality to help build the next evolution in social technology. People who choose to build their careers by building with us at Meta help shape a future that will take us beyond what digital connection makes possible today—beyond the constraints of screens, the limits of distance, and even the rules of physics. Individual compensation is determined by skills, qualifications, experience, and location. Compensation details listed in this posting reflect the base hourly rate, monthly rate, or annual salary only, and do not include bonus, equity or sales incentives, if applicable. In addition to base compensation, Meta offers benefits. Learn more about benefits at Meta.
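To make the SQL/ETL expectations concrete, here is a tiny self-contained sketch using sqlite3 that loads raw events and builds a daily-active-users style aggregate; the table and column names are made up for illustration.

```python
# Tiny in-memory ETL illustration with sqlite3: load raw events, then build a
# daily-active-users style aggregate; table and column names are made up.
import sqlite3

events = [
    ("2024-05-01", 1), ("2024-05-01", 2), ("2024-05-01", 1),
    ("2024-05-02", 2), ("2024-05-02", 3),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (event_date TEXT, user_id INTEGER)")
conn.executemany("INSERT INTO events VALUES (?, ?)", events)

# Transform: one row per day with the distinct-user count
conn.execute("""
    CREATE TABLE daily_active_users AS
    SELECT event_date, COUNT(DISTINCT user_id) AS dau
    FROM events
    GROUP BY event_date
""")

for row in conn.execute("SELECT * FROM daily_active_users ORDER BY event_date"):
    print(row)   # ('2024-05-01', 2) then ('2024-05-02', 2)
```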
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Us We are a rapidly growing furniture and modular solutions manufacturer, combining craftsmanship with cutting-edge processes. As we expand our capacity and ERP-enable our operations, we’re looking for a driven In-Process Quality Supervisor to join our team and ensure every piece we make meets the highest standards. What You’ll Do Inspect Incoming Materials: Perform 100% inward inspections within 24 hrs of receipt Manage quarantine/release decisions, targeting ≤ 4 hrs per lot Monitor In-Process Quality: Execute all production checkpoints per SOP Track and report in-process defects (≤ 2 per 100 units) Ensure first-pass yield on assemblies ≥ 97% Verify Final Products: Conduct final inspections on ≥ 95% of shipped units Log all inspection data in ERP on-time (100% compliance) Aim for < 50 ppm customer-facing defects Drive Corrective Actions & Continuous Improvement: Raise and close ≥ 90% of corrective actions within 7 days Reduce repeat defects by ≥ 5% month-over-month Lead at least two quality Kaizen initiatives per quarter Coach & Train the Team: Deliver ≥ 2 shop-floor quality trainings per month Host daily “quality huddles” to review metrics and blockers Maintain inspection instrument calibration schedules Who You Are Bachelor’s degree in Engineering, Industrial Technology, or related field 3–5 years’ hands-on quality supervision in furniture, wood, or modular manufacturing Strong analytical toolkit (Pareto charts, 5-Why problem solving) ERP experience—comfort with logging inspections and generating reports Excellent communicator who can coach shop-floor teams to a “right-first-time” mindset Detail-oriented, proactive, and committed to continuous improvement
Posted 1 week ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are looking for a dedicated and proficient Senior Systems Engineer with extensive Data DevOps/MLOps knowledge to enhance our team. The ideal candidate should possess a comprehensive knowledge of data engineering, data pipeline automation, and machine learning model operationalization. The role demands a cooperative professional skilled in designing, deploying, and managing extensive data and ML pipelines in alignment with organizational objectives. Responsibilities Develop, deploy, and manage Continuous Integration/Continuous Deployment (CI/CD) pipelines for data integration and machine learning model deployment Set up and sustain infrastructure for data processing and model training through cloud-based resources and services Automate processes for data validation, transformation, and workflow orchestration Work closely with data scientists, software engineers, and product teams for a smooth integration of ML models into production Enhance model serving and monitoring to boost performance and dependability Manage data versioning, lineage tracking, and the reproducibility of ML experiments Actively search for enhancements in deployment processes, scalability, and infrastructure resilience Implement stringent security protocols to safeguard data integrity and compliance with regulations Troubleshoot and solve issues throughout the data and ML pipeline lifecycle Requirements Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field 4+ years of experience in Data DevOps, MLOps, or similar roles Proficiency in cloud platforms such as Azure, AWS, or GCP Background in Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or Ansible Expertise in containerization and orchestration technologies including Docker and Kubernetes Hands-on experience with data processing frameworks such as Apache Spark and Databricks Proficiency in programming languages including Python with an understanding of data manipulation and ML libraries like Pandas, TensorFlow, and PyTorch Familiarity with CI/CD tools including Jenkins, GitLab CI/CD, and GitHub Actions Experience with version control tools and MLOps platforms such as Git, MLflow, and Kubeflow Strong understanding of monitoring, logging, and alerting systems including Prometheus and Grafana Excellent problem-solving abilities with capability to work independently and in teams Strong skills in communication and documentation Nice to have Background in DataOps concepts and tools such as Airflow and dbt Knowledge of data governance platforms like Collibra Familiarity with Big Data technologies including Hadoop and Hive Certifications in cloud platforms or data engineering
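As a minimal illustration of the experiment-tracking side of MLOps mentioned above, here is a hedged MLflow sketch; the tracking URI, experiment name, parameters, and metric are placeholders.

```python
# Minimal experiment-tracking sketch with MLflow; the parameter names, metric,
# and tracking URI are placeholders, not prescribed by the posting.
import mlflow

mlflow.set_tracking_uri("http://localhost:5000")   # assumed tracking server
mlflow.set_experiment("churn-model")               # placeholder experiment

def train_and_log(learning_rate: float, n_estimators: int) -> None:
    with mlflow.start_run():
        mlflow.log_param("learning_rate", learning_rate)
        mlflow.log_param("n_estimators", n_estimators)
        accuracy = 0.90 + learning_rate * 0.1      # stand-in for real training
        mlflow.log_metric("accuracy", accuracy)

if __name__ == "__main__":
    train_and_log(learning_rate=0.05, n_estimators=200)
```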
Posted 1 week ago
4.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Dear Candidate! Greetings from TCS !!! Role: Service Desk Location: Chennai Experience Range: 4 to 7 Years Job Description: Experience of working with customers in a Global Delivery Model Proactive, performance-driven, customer-service-oriented, well-organized individual with expertise in IT Service Desk. Manage the workload coming into the Service Desk using the resources available (onsite, offshore) and ensure effective delivery. Responsible for delivery of all Risk and Audit activities. Propose improvements to the Service Desk in line with business requirements and with a vision for changing technology. Work with various IT teams to identify ServiceNow automation and integration opportunities to improve the existing state of IT operations Ability to coach and encourage less experienced members of the team Clear understanding and ability to articulate the IT services provided, how they are used and the impact of non-availability Take a hands-on approach to managing the daily workload of the Service Desk, ensuring all calls are being properly handled, prioritized, and progressed, customers are kept informed, and communications and customer service are of the highest standard. Own, review, and revise the ITIL Service Operation policies, processes, and procedures pertaining to the role and regularly report on their performance using a range of KPIs and metrics. These include Incident, Event and Access Management and Request Fulfilment. In collaboration with other team managers across onsite and offshore, manage the configuration of the IT Service Management and call logging tool so that all call types are managed as efficiently and effectively as possible. Compile and maintain the workflows, processes, and procedures used by Support Analysts in their day-to-day roles following ITIL standards. TCS has been a great pioneer in feeding the fire of young techies like you. We are a global leader in the technology arena and there's nothing that can stop us from growing together.
Posted 1 week ago
0 years
0 Lacs
Kolkata metropolitan area, West Bengal, India
On-site
Company Description Roopya provides a SaaS Lending Infrastructure that powers lenders with tools for Origination, Underwriting, Analytics, Early Warning, and Collection. As a specified user under the RBI CICRA Act 2005, Roopya offers cutting-edge solutions to streamline and enhance lending processes, ensuring compliance and efficiency in financial operations. Role Description This is a full-time, on-site role for an AWS Administrator, located in the Kolkata metropolitan area. The AWS Administrator will be responsible for managing and maintaining the organization's AWS infrastructure. Day-to-day tasks include configuring and monitoring AWS services, troubleshooting issues, ensuring security and compliance, performing regular backups, managing disaster recovery, and optimizing performance and cost-efficiency. Collaboration with development teams to support deployment processes will also be a key aspect of the role. Qualifications Experience with AWS services such as EC2, S3, RDS, and VPC Strong skills in cloud automation tools and scripting languages (e.g., Terraform, CloudFormation, Python, Bash) Knowledge of network administration, security best practices, and compliance standards Proficiency in monitoring and logging tools like CloudWatch, ELK stack, or equivalent Ability to troubleshoot and resolve technical issues in a timely manner Excellent communication and collaboration skills Bachelor's degree in Computer Science, Information Technology, or a related field Relevant AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified SysOps Administrator) are a plus
Posted 1 week ago
140.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About NCR VOYIX NCR VOYIX Corporation (NYSE: VYX) is a leading global provider of digital commerce solutions for the retail, restaurant and banking industries. NCR VOYIX is headquartered in Atlanta, Georgia, with approximately 16,000 employees in 35 countries across the globe. For nearly 140 years, we have been the global leader in consumer transaction technologies, turning everyday consumer interactions into meaningful moments. Today, NCR VOYIX transforms the stores, restaurants and digital banking experiences with cloud-based, platform-led SaaS and services capabilities. Not only are we the leader in the market segments we serve and the technology we deliver, but we create exceptional consumer experiences in partnership with the world’s leading retailers, restaurants and financial institutions. We leverage our expertise, R&D capabilities and unique platform to help navigate, simplify and run our customers’ technology systems. Our customers are at the center of everything we do. Our mission is to enable stores, restaurants and financial institutions to exceed their goals – from customer satisfaction to revenue growth, to operational excellence, to reduced costs and profit growth. Our solutions empower our customers to succeed in today’s competitive landscape. Our unique perspective brings innovative, industry-leading tech to all the moving parts of business across industries. NCR VOYIX has earned the trust of businesses large and small — from the best-known brands around the world to your local favorite around the corner. The primary responsibility is to develop high-quality software solutions as a contributing member of a highly motivated team of Engineers. You should understand what goes into building complex, resilient, scalable enterprise products and should contribute through design and development. This individual will hold the title “Software Engineer III”, with the expectation to solve complex technical challenges and assist in laying out the technical roadmap. Should have hands-on experience with complex applications/solutions that integrate with various components. Experience with production systems and migrating customers from legacy systems to later versions is preferred. Advanced knowledge of best practices for enterprise applications – logging, communication, coding, testing and CI/CD pipelines – is expected. The primary solution stack technology for this position is Java, with other preferred skills referred to below. Responsibilities include: Develop high-quality software which meets requirements, promotes re-use of software components, and facilitates ease of support. Diagnose, isolate, and implement remedies for system failures caused by errors in software code. Identify and implement process improvements in Engineering practices. Utilize software-based system maintenance and tracking tools. Provide input and technical content for technical documentation, user help materials and customer training. Conduct unit tests, track problems, and implement changes to ensure adherence to test plans and functional/nonfunctional requirements. Analyze, design and implement software mechanisms to improve code stability, performance, and reusability. Participate in and lead code review sessions. Create high-fidelity estimates of their own work efforts. Assist others in estimating task effort and dependencies; responsible for team commitments within the Sprint. May be asked to lead and advise other Engineering resources as part of project activities. 
Considered a subject matter expert in their chosen field. Participates in industry groups, stays current with technology and industry trends, disseminates knowledge to team members, and forms best practices. Communicates with Solution Management and other internal teams. Participates in cross-functional collaboration within the organization. Works with developers to assist detailed problem resolution for problems which are proving difficult for Lead Developers to resolve. Works on improving the use of tools relating to AMS development. BASIC QUALIFICATIONS: Bachelor’s Degree in computer science or a related field. A minimum of 7 years of experience in software design and development. A minimum of 7 years of experience in the preferred technology stack. Must Have: OOPS concepts. Very strong development experience in Java; Spring Framework; Spring Boot; Spring Security. Multi-threading concepts. REST API development and documentation. Unit testing with JUnit and/or BDD with Cucumber. Messaging services and caching – RabbitMQ or similar. Strong understanding of and affinity towards building scalable and robust solutions. Very strong understanding of SQL or PostgreSQL databases. In-depth understanding of design patterns and the ability to design a class model and data model for a given requirement. Strong in debugging, memory leaks, profiling, crashes, etc. Good to Have: Hands-on development experience with Linux OS. Good understanding of NFT (performance, scalability and availability) and familiarity with related tools. Cloud-native application development. Linux OS and scripting. Familiarity with HTTPS/SSL. Networking concepts such as how to set up and configure name servers and network interfaces. Load balancers. Must have hands-on experience with any two of the following skill sets: Docker and K8s; Azure / GCP; Cucumber; Selenium / UI automation; JMeter; Terraform; Helm; Ansible; ARM templates. Deep understanding of Software Development and Quality Assurance best practices. Excellent written and verbal communication skills. Excellent teamwork and collaboration skills. Experience operating in an Agile environment, with a deep understanding of agile development principles. Familiarity with Continuous Improvement and Six Sigma Lean principles. PREFERRED QUALIFICATIONS: Knowledge of software development standards and protocols: Secured development knowledge. DevOps for cloud deployments. CI/CD pipelines. Cloud development knowledge on Azure or GCP. Offers of employment are conditional upon passage of screening criteria applicable to the job. EEO Statement Integrated into our shared values is NCR Voyix’s commitment to diversity and equal employment opportunity. All qualified applicants will receive consideration for employment without regard to sex, age, race, color, creed, religion, national origin, disability, sexual orientation, gender identity, veteran status, military service, genetic information, or any other characteristic or conduct protected by law. NCR Voyix is committed to being a globally inclusive company where all people are treated fairly, recognized for their individuality, promoted based on performance and encouraged to strive to reach their full potential. We believe in understanding and respecting differences among all people. Every individual at NCR Voyix has an ongoing responsibility to respect and support a globally diverse environment. Statement to Third Party Agencies To ALL recruitment agencies: NCR Voyix only accepts resumes from agencies on the preferred supplier list. 
Please do not forward resumes to our applicant tracking system, NCR Voyix employees, or any NCR Voyix facility. NCR Voyix is not responsible for any fees or charges associated with unsolicited resumes. “When applying for a job, please make sure to only open emails that you will receive during your application process that come from an @ncrvoyix.com email domain.”
Posted 1 week ago
3.0 years
7 - 14 Lacs
Ranga Reddy District, Telangana
Remote
Job Title: AWS DevOps Engineer | Client: Amazon Inc | Experience: 3+ Years | Location: Remote/On-site/Hybrid | Employment Type: Full-time Job Summary: We are looking for an AWS DevOps Engineer with 3+ years of hands-on experience in designing, implementing, and managing cloud infrastructure and CI/CD pipelines. The ideal candidate should have strong expertise in AWS services, automation, infrastructure-as-code (IaC), and DevOps best practices to enhance scalability, security, and performance. Key Responsibilities: - Design, deploy, and manage highly available, scalable, and secure AWS cloud infrastructure. - Implement and optimize CI/CD pipelines using AWS CodePipeline, Jenkins, GitHub Actions, or GitLab CI/CD. - Automate infrastructure provisioning using Terraform, AWS CloudFormation, or CDK. - Configure and manage containerized applications using Docker, Kubernetes (EKS), or AWS ECS/Fargate. - Monitor and troubleshoot cloud environments using AWS CloudWatch, Prometheus, Grafana, or ELK Stack. - Ensure security best practices with IAM, VPC, Security Groups, KMS, and AWS WAF. - Collaborate with development teams to improve deployment processes, performance tuning, and cost optimization. - Implement infrastructure-as-code (IaC) and configuration management using Ansible, Chef, or Puppet. - Work on serverless architectures using AWS Lambda, API Gateway, and DynamoDB. - Troubleshoot and resolve issues in Linux/Windows-based cloud environments. Required Skills: - 3+ years of experience in DevOps/AWS Cloud Engineering. - Strong expertise in AWS services (EC2, S3, RDS, VPC, Lambda, IAM, CloudFront, etc.). - Hands-on experience with CI/CD tools (Jenkins, GitHub Actions, AWS CodeSuite). - Proficiency in IaC tools like Terraform, CloudFormation, or AWS CDK. - Experience with containerization & orchestration (Docker, Kubernetes, ECS/EKS). - Knowledge of scripting languages (Bash, Python, PowerShell). - Familiarity with monitoring & logging tools (CloudWatch, Prometheus, ELK, Datadog). - Understanding of networking, security, and compliance in AWS. - Experience with Linux/Windows server administration. - Knowledge of Agile/Scrum methodologies. Interview Process: - Round 1 (Screening) - Pentagram Infotech - Round 2 (Technical) - Amazon - Round 3 (HR) - Pentagram Infotech Job Types: Full-time, Permanent Pay: ₹700,000.00 - ₹1,400,000.00 per year Benefits: Commuter assistance, Health insurance, Internet reimbursement, Leave encashment, Paid sick time, Paid time off, Provident Fund, Work from home Schedule: Day shift, Monday to Friday, Rotational shift Supplemental Pay: Joining bonus, Shift allowance Work Location: Hybrid remote in Rangareddy, Telangana
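To illustrate the kind of serverless work this posting describes, here is a minimal sketch (not from the posting itself) of an AWS Lambda handler that writes an item to DynamoDB with boto3. The table name, environment variable, and event fields are assumptions made for the example.

    import json
    import os

    import boto3

    # Hypothetical table name for illustration; in practice it would come from
    # the infrastructure-as-code stack (Terraform/CloudFormation/CDK) outputs.
    TABLE_NAME = os.environ.get("ORDERS_TABLE", "orders")

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table(TABLE_NAME)

    def handler(event, context):
        # Minimal API Gateway -> Lambda -> DynamoDB write path (sketch only).
        body = json.loads(event.get("body") or "{}")
        item = {
            "order_id": body.get("order_id", "unknown"),
            "status": body.get("status", "created"),
        }
        table.put_item(Item=item)
        return {"statusCode": 200, "body": json.dumps({"saved": item})}

A real deployment would add input validation, structured logging, and IAM permissions scoped to the single table, all of which this sketch omits.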
Posted 1 week ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Lead Engineer - Agentic AI/Python/Cloud/Architecture As an Architect/Technical Lead, you will be responsible for designing, developing, and deploying cutting-edge AI-powered solutions. You will lead the technical direction, ensuring our platforms are scalable and reliable, and leverage the latest advancements in Generative AI and cloud computing. You will work closely with product managers, engineers, and data scientists to build world-class AI experiences. Responsibilities: Architectural Design: Design and implement robust, scalable, and secure cloud-based architectures. Define technology stacks, including AI/ML frameworks, cloud services, and related technologies. Ensure seamless integration with existing systems and third-party services. Technical Leadership: Lead a team of engineers, providing technical guidance and mentorship. Drive the development process, ensuring adherence to best practices and coding standards. Conduct code reviews and ensure high-quality deliverables. Gen AI Expertise: Stay up-to-date with the latest advancements in Generative AI and apply them to product development. Evaluate and select appropriate LLMs and AI models. Develop and optimize AI models for performance and accuracy. Implement fine-tuning and training strategies for AI models. Cloud Infrastructure: Design and implement scalable and cost-effective cloud infrastructure (e.g., AWS, Azure, GCP). Ensure high availability, reliability, and security of cloud platforms. Optimize cloud resources for performance and cost. Technology Implementation: Design and implement robust APIs and microservices. Optimize system performance and identify areas for improvement. Implement monitoring and logging systems to ensure system health. Collaboration and Communication: Work closely with product managers to define product requirements and roadmaps. Communicate technical concepts effectively to both technical and non-technical stakeholders. Collaborate with data scientists to improve AI model performance. Troubleshooting and Optimization: Troubleshoot complex technical issues and provide timely solutions. Experience: Minimum of 8+ years of experience in software development, with a focus on cloud-based systems. Proven experience in architecting and leading the development of complex AI-powered products. Extensive experience with cloud platforms (AWS, Azure, GCP). Strong experience with Gen AI technologies, including LLMs, and related AI/ML concepts. Technical Skills: Proficiency in programming languages such as Python, Java, or Node.js. Deep understanding of cloud architecture and services. Expertise in AI/ML frameworks and tools (e.g., TensorFlow, PyTorch). Strong knowledge of RESTful APIs and microservices architecture. Experience with containerization and orchestration technologies (Docker, Kubernetes). Why Zenoti? Be part of an innovative company that is revolutionizing the wellness and beauty industry. Work with a dynamic and diverse team that values collaboration, creativity, and growth. Opportunity to lead impactful projects and help shape the global success of Zenoti’s platform. Attractive compensation. Medical coverage for yourself and your immediate family. Access to regular yoga, meditation, breathwork, and stress management sessions. We also include your family in benefit awareness initiatives. Regular social activities, and opportunities to give back through social work and community initiatives.
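As an illustration of the "robust APIs and microservices" work described above, here is a minimal sketch of a FastAPI endpoint that forwards a prompt to a Generative AI model. This is not Zenoti's actual stack; the endpoint path, request fields, and the generate_text helper are placeholders standing in for a real LLM SDK or HTTP call.

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class PromptRequest(BaseModel):
        prompt: str
        max_tokens: int = 256

    def generate_text(prompt: str, max_tokens: int) -> str:
        # Placeholder for a call to an LLM provider or a self-hosted model.
        # A real implementation would wrap an SDK or HTTP call with retries,
        # timeouts, prompt templating, and logging.
        return f"[model output for: {prompt[:40]}]"

    @app.post("/generate")
    def generate(req: PromptRequest) -> dict:
        # A production version would add auth, rate limiting, and monitoring.
        return {"completion": generate_text(req.prompt, req.max_tokens)}

The sketch keeps the model call behind a single function so that model selection, fine-tuned variants, or provider swaps (one of the responsibilities listed above) stay isolated from the API surface.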
Posted 1 week ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Sr. Fullstack Developer (ReactJS + NodeJS or Java) SDE3 Work Timing: 10am to 7pm (Monday to Friday) Location: Pune, India Description We are seeking a highly skilled Senior Fullstack Developer to join our team in Pune, India. The ideal candidate will have hands-on experience in building and maintaining scalable web applications using ReactJS for the frontend and Node.js or Java for the backend. You will play a key role in designing, developing, and optimizing high-performance applications while collaborating with cross-functional teams. Key Responsibilities ● Design, build, and maintain web applications using Node.js or Java, and React.js ● Build and optimize RESTful APIs and backend services ● Collaborate with cross-functional teams to define and deliver new features ● Ensure the technical feasibility and performance of UI/UX implementations ● Write clean, maintainable, and testable code following best practices ● Participate in system design and architecture for scalable solutions ● Maintain thorough documentation for code, APIs, and system flows ● Contribute to testing strategies including unit, integration, and end-to-end tests Mandatory Skills ● 6+ years of experience in software engineering (preferably full-stack or backend-heavy roles) ● Strong proficiency in backend development with Node.js or Java ● Frontend experience with React, or similar frameworks ● Familiarity with PostgreSQL, Redis, and messaging systems like Kafka or ActiveMQ ● Experience with cloud-based architecture, preferably AWS (ECS, S3, etc.) ● Solid understanding of clean code practices, testing, and CI/CD pipelines ● Experience with Git and CI/CD tools like GitHub Actions ● Familiarity with testing frameworks such as Jest, Cucumber, or Playwright ● Strong system design skills and ability to build for scale ● Excellent problem-solving skills and attention to detail ● Ability to work independently and manage multiple priorities ● Strong communication and collaboration skills Nice-to-Have ● Familiarity with mobile app architecture or cross-platform frameworks ● Experience in high-availability or event-driven systems ● Knowledge of infrastructure-as-code tools (e.g., Terraform) ● Familiarity with monitoring, observability, or logging systems You’ll Thrive Here If You ● Enjoy working across multiple projects and wearing multiple hats ● Are a strong communicator and collaborator in distributed teams ● Take initiative and ownership of your work ● Believe in documentation and clean handoffs
Posted 1 week ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About The Role The Core Analytics & Science Team (CAS) is Uber's primary science organisation, covering both our main lines of business as well as the underlying platform technologies on which those businesses are built. We are a key part of Uber's cross-functional product development teams, helping to drive every stage of product development through data-analytic, statistical, and algorithmic expertise. CAS owns the experience and algorithms powering Uber's global Mobility and Delivery products. We optimise and personalise the rider experience, target incentives and introduce customizations for routing and matching for products and use cases that go beyond the core Uber capabilities. What the Candidate Will Do ---- Refine ambiguous questions, generate new hypotheses, and design ML-based solutions that benefit the product through a deep understanding of the data, our customers, and our business. Deliver end-to-end solutions rather than algorithms, working closely with the engineers on the team to productionize, scale, and deploy models worldwide. Use statistical techniques to measure success, and develop north-star metrics and KPIs to help provide a more rigorous data-driven approach in close partnership with Product and other subject areas such as engineering, operations and marketing. Design experiments and interpret the results to draw detailed and impactful conclusions. Collaborate with data scientists and engineers to build and improve on the availability, integrity, accuracy, and reliability of data logging and data pipelines. Develop data-driven business insights and work with cross-functional partners to find opportunities and recommend prioritisation of product, growth, and optimisation initiatives. Present findings to senior leadership to drive business decisions. Basic Qualifications ---- Undergraduate and/or graduate degree in Math, Economics, Statistics, Engineering, Computer Science, or other quantitative fields. 4+ years of experience as a Data Scientist, Machine Learning Engineer, or in other data science-focused functions. Knowledge of the underlying mathematical foundations of machine learning, statistics, optimization, economics, and analytics. Hands-on experience building and deploying ML models. Ability to use a language like Python or R to work efficiently at scale with large data sets. Significant experience in setting up and evaluating complex experiments. Experience with exploratory data analysis, statistical analysis and testing, and model development. Knowledge of modern machine learning techniques applicable to marketplaces and platforms. Proficiency in one or more of the following technologies: SQL, Spark, Hadoop. Preferred Qualifications Advanced SQL expertise. Proven track record of wrangling large datasets, extracting insights from data, and summarising learnings/takeaways. Proven aptitude toward Data Storytelling and Root Cause Analysis using data. Advanced understanding of statistics, causal inference, and machine learning. Experience designing and analyzing large-scale online experiments. Ability to deliver on tight timelines and prioritise multiple tasks while maintaining quality and detail. Ability to work in a self-guided manner. Ability to mentor, coach and develop junior team members. Superb communication and organisation skills.
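To illustrate the experiment-analysis work mentioned above, here is a minimal, generic sketch of a two-sample proportion z-test for comparing conversion rates between a control and a treatment group. This is not Uber's methodology; the counts are invented for the example.

    import math
    from scipy.stats import norm

    def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
        # Two-sided z-test for the difference of two conversion rates (sketch).
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - norm.cdf(abs(z)))
        return z, p_value

    # Hypothetical counts: 1,200/20,000 conversions in control vs 1,340/20,000 in treatment.
    z, p = two_proportion_ztest(1200, 20000, 1340, 20000)
    print(f"z = {z:.3f}, p-value = {p:.4f}")

In practice a data scientist would also check sample-size assumptions, guard against peeking, and report confidence intervals rather than a bare p-value.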
Posted 1 week ago
0.0 - 2.0 years
0 Lacs
Hyderabad, Telangana
On-site
Basic Qualifications - Diploma/Graduation or higher in a Supply Chain or Logistics discipline - Proficient with computers and Microsoft Office (Outlook, Word, Excel) - 2 years of experience in inventory management and warehouse management - Experience in maintaining inventory levels including receiving, storage, and auditing - Good communication skills AWS Infrastructure Services owns the design, planning, delivery, and operation of all AWS global infrastructure. In other words, we’re the people who keep the cloud running. We support all AWS data centers and all of the servers, storage, networking, power, and cooling equipment that ensure our customers have continual access to the innovation they rely on. We work on the most challenging problems, with thousands of variables impacting the supply chain — and we’re looking for talented people who want to help. You’ll join a diverse team of software, hardware, and network engineers, supply chain specialists, security experts, operations managers, and other vital roles. You’ll collaborate with people across AWS to help us deliver the highest standards for safety and security while providing seemingly infinite capacity at the lowest possible cost for our customers. And you’ll experience an inclusive culture that welcomes bold ideas and empowers you to own them to completion. We are looking for a Logistics Specialist to join our Data Centers within the expanding Infrastructure Operations team. Amazon data centers are large-scale, high-density centers where you will be working on changing the face of Cloud technology in the region. Logistics Specialists maintain the onsite parts room in the Indian Data Centers within Amazon Data Centre Services Private Limited (‘ADSIPL’), managing the custody of computer hardware from arrival through to departure. This involves cycle counts, picking and delivering parts to the hardware technicians, shipping faulty hardware back to the vendor, and then receiving and logging replacement parts. Responsibilities - Monitoring inventory levels to ensure that proper stock levels are maintained - Receiving parts, maintaining inventory, and checking out parts as needed - Loading and unloading shipments and transporting parts between different locations - Keeping precise records of all commodities going in and out - Maintaining the cleanliness, organization, and safety of all workspaces - Maintaining inventory accuracy - Creation of Delivery Challan and E-way bill The role will require the successful candidate to travel to other sites in the Hyderabad metro area to assist in conducting cycle counts, booking stock into the inventory control system, replenishing stock and returning defective stock to the main stocking locations. Travel between sites will require your own transport; all fuel costs will be reimbursed. Physical / Environmental Demands - Loading and unloading shipments. - Standing, sitting, and walking (including stairs) for prolonged periods of time. - Willing and able to frequently push, pull, squat, bend and kneel. - Reach and stretch to position equipment and fixtures while maintaining balance. - Push or pull heavy objects of up to 16kg into position and participate in group lifts of 20kg or more. - Coordinate body movements when using tools or equipment. - Work with and/or around moving mechanical parts. - Work is in an office and warehouse environment where the noise level is low to moderate. About the team About AWS Diverse Experiences Amazon values diverse experiences.
Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Why AWS Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Work/Life Balance We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve. Inclusive Team Culture AWS values curiosity and connection. Our employee-led and company-sponsored affinity groups promote inclusion and empower our people to take pride in what makes us unique. Our inclusion events foster stronger, more collaborative teams. Our continual innovation is fueled by the bold ideas, fresh perspectives, and passionate voices our teams bring to everything we do. Mentorship and Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Preferred Qualifications Experience in customer service Ability to prioritize work based on department and production objectives Current driver’s license. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 1 week ago
8.0 - 13.0 years
15 - 25 Lacs
Hyderabad
Work from Office
Role Summary Akrivia HCM is seeking an experienced Site Reliability Engineer to safeguard the performance, scalability, and availability of our global HR tech platform. You will define service-level objectives, automate infrastructure, lead incident response, and partner with engineering squads to deliver reliable releases at high velocity. Key Responsibilities Define and track SLIs/SLOs for latency, availability, and error budgets. Build and maintain Terraform/Helm/ArgoCD stacks; convert manual toil into code. Instrument services with Prometheus, Grafana, Datadog, and OpenTelemetry; create actionable alerts & dashboards. Serve in the on-call rotation, lead rapid mitigation, run blameless post-mortems, and close action items. Model load growth, tune autoscaling policies, run load tests, and drive cost-optimisation reviews. Design chaos game-days and fault-injection experiments to validate fail-over and recovery paths. Review designs/PRs for reliability anti-patterns and coach development teams on SRE best practices. Must-Have Qualifications 5+ years operating large-scale, user-facing SaaS systems on AWS, GCP, or Azure (Kubernetes/EKS preferred). Proficiency with Infrastructure-as-Code (Terraform, Helm, Pulumi, or CloudFormation) and GitOps (ArgoCD/Flux). Hands-on experience building observability stacks (Prometheus, Grafana, Datadog, New Relic, etc.). Proven track record reducing MTTR and change-failure rate through automation and robust incident processes. Strong scripting or programming skills in Go, Python, or TypeScript. Deep debugging skills across Linux, networking, containers, databases, and web/API layers. Excellent written and verbal communication skills. Good-to-Have Skills Exposure to AWS Well-Architected reviews, FinOps, or cost-optimisation initiatives. Experience with service mesh (Istio/Linkerd), event-driven systems (Kafka/NATS), or serverless (Lambda). Familiarity with SOC 2 / ISO 27001 controls and secrets management (AWS KMS, Vault). Chaos engineering tools (ChaosMesh, Gremlin) and performance testing (k6, Gatling). Certifications such as AWS DevOps Pro, CKA/CKAD, or Google Cloud SRE.
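As a sketch of the SLO and error-budget tracking described in this role, the arithmetic below shows how much of an error budget remains for a measurement window. The SLO target, window, and request counts are illustrative assumptions, not Akrivia's actual figures.

    def error_budget_remaining(slo, good_events, total_events):
        # Fraction of the error budget left for the current window (sketch).
        # slo: availability target, e.g. 0.999; event counts are requests in the window.
        allowed_bad = (1 - slo) * total_events   # failures the budget allows
        actual_bad = total_events - good_events  # failures actually observed
        if allowed_bad == 0:
            return 0.0
        return max(0.0, 1 - actual_bad / allowed_bad)

    # Illustrative numbers: 99.9% SLO, 10M requests, 7,200 failures in a 30-day window.
    total, bad = 10_000_000, 7_200
    print(f"{error_budget_remaining(0.999, total - bad, total):.1%} of the error budget remains")

Alerting on the burn rate of this number (rather than on raw error counts) is a common way to decide when to slow releases, which is the kind of call an SRE on this team would own.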
Posted 1 week ago
4.0 years
0 Lacs
Punjab, India
Remote
About Us: We turn customer challenges into growth opportunities. Material is a global strategy partner to the world’s most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences. We use deep human insights, design innovation and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve. Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using their deep technology expertise and leveraging strategic partnerships with top-tier technology partners. Be a Part of an Awesome Tribe: Senior DevOps Engineer. Material+ is hiring for DevOps/SRE. We are looking for a Senior DevOps/SRE engineer with strong automation skills and a good understanding of how to build and run secure and reliable platforms for cloud-native applications. Minimum Experience: 4+ years as a Senior DevOps/SRE Engineer. Job Description: The focus of this role is to build scalable, resilient, secure infrastructure for cloud-native applications while automating every mundane task you could think of, building observability dashboards, setting up alerts, etc., to provide visibility to relevant stakeholders. In a nutshell: you are the keepers of production environments. You must be a problem solver with the ability to multitask, and come with strong collaboration and communication skills. Key Responsibilities: Proactively monitor and review application performance. Handle on-call and emergency support. Ensure software has good logging and diagnostics. Create and maintain operational runbooks. Contribute to solution design and to evaluating technical debt. Establish the right practices for a well-defined architecture and to minimize toil. Own SLI/SLO configuration as per the error budget. Maintain production services through measuring and monitoring availability, latency, and overall system health. Practice sustainable incident response and blameless postmortems. Not be afraid to contribute changes back to the software engineering team to improve the systems. Manage the delivery pipeline into production. Be able to mentor junior members on a regular basis. Strong troubleshooting of issues with web applications. Understanding of security principles and best practices. Ensuring that critical data is backed up. Configuration of monitoring systems, including infrastructure monitoring and Application Performance Monitoring systems such as New Relic, Dynatrace and Grafana. Ensuring that web application infrastructure is built. Ability to act as a Customer Technical Advocate and negotiate well with peers on technical fronts. Flexible enough to work in different shifts for urgent business requirements. Ability to handle multiple global clients on the technical front and generate the desired reports to represent system health. SRE: A key skill of a DevOps/SRE engineer is deep knowledge of the application, the code, and how it runs, is configured, and scales. That knowledge is what makes them so valuable at also monitoring and supporting it as site reliability engineers. System administration, security, and networking: a DevOps/SRE Engineer is expected to have a good understanding of system administration (Linux or Windows) and networking, covering essential commands, user and group management, service configuration and storage management, along with a good grasp of fundamental security concepts.
Good understanding of infrastructure-as-code principles. Knowledge of a scripting language such as Bash. Ability to configure infrastructure using a configuration management technology such as Puppet, Chef, or Ansible. Familiarity with Jenkins or any other CI/CD tool. Should have hands-on experience with Splunk. Proficiency in scripting with Terraform or Python. Understanding of container technologies such as Docker and Kubernetes. Should have experience with GitHub or GitLab. Hands-on experience with container orchestration technologies such as ECS, EKS, AKS or Kubernetes would be beneficial. Use Terraform and other IaC to deploy cloud infrastructure. Cloud Technologies: Experience designing available, cost-efficient, fault-tolerant, and scalable distributed systems on Azure. Hands-on experience using compute, networking, storage, and database Azure services. 4+ years of hands-on experience with Azure deployment and management services. Ability to identify and define technical requirements for an Azure-based application. Ability to identify which Azure services meet a given technical requirement. Knowledge of recommended best practices for building secure and reliable applications on the Azure platform. An understanding of the Azure global infrastructure. An understanding of network technologies as they relate to Azure. An understanding of security features and tools that Azure provides and how they relate to traditional services. What We Offer: Professional development and mentorship. Hybrid work mode with a remote-friendly workplace (6 times in a row Great Place To Work Certified). Health and family insurance. 40+ leaves per year along with maternity & paternity leaves. Wellness, meditation and counselling sessions. (ref:hirist.tech)
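The posting above names New Relic, Dynatrace and Grafana for monitoring; as a stand-in illustration of service instrumentation in general, here is a minimal sketch using the open-source Python prometheus_client library to expose request metrics that a dashboard or alert rule could consume. The metric names, port, and simulated workload are assumptions for the example, not part of the role's actual stack.

    import random
    import time

    from prometheus_client import Counter, Histogram, start_http_server

    # Metric names are illustrative; real services follow a team naming convention.
    REQUESTS = Counter("app_requests_total", "Total requests handled", ["status"])
    LATENCY = Histogram("app_request_duration_seconds", "Request latency in seconds")

    @LATENCY.time()
    def handle_request():
        time.sleep(random.uniform(0.01, 0.2))               # stand-in for real work
        status = "200" if random.random() > 0.05 else "500"
        REQUESTS.labels(status=status).inc()

    if __name__ == "__main__":
        start_http_server(8000)   # exposes /metrics for a scrape job to collect
        while True:
            handle_request()

The same counters and histograms back the SLI/SLO configuration mentioned in the responsibilities: availability from the status-labelled counter, latency from the histogram quantiles.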
Posted 1 week ago