
23044 Automate Jobs

Set up a job alert
JobPe aggregates job results for easy access, but you apply directly on the original job portal.

6.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

About Us
MyRemoteTeam, Inc. is a fast-growing distributed workforce enabler, helping companies scale with top global talent. We empower businesses by providing world-class software engineers, operations support, and infrastructure to help them grow faster and better.

Job Title: Senior Java Spring Boot Developer
Experience: 6+ Years
Location: Mysore/Bangalore

Job Description:
We are seeking an experienced Senior Java Spring Boot Developer with 6+ years of hands-on experience building scalable, high-performance microservices using Java, Spring Boot, and Spring JPA. The ideal candidate will have strong expertise in designing and developing RESTful APIs, microservices architecture, and cloud-native applications. As part of our team, you will work on enterprise-grade applications, collaborate with cross-functional teams, and contribute to the full software development lifecycle.

Mandatory Skills:
✔ 6+ years of Java development (Java 8/11/17).
✔ Strong Spring Boot & Spring JPA experience.
✔ Microservices architecture (design, development, deployment).
✔ RESTful API development & integration.
✔ Database expertise (SQL/NoSQL – PostgreSQL, MySQL, MongoDB).
✔ Testing frameworks (JUnit, Mockito).
✔ Agile methodologies & CI/CD pipelines.

Key Responsibilities:

Design & Development:
- Develop high-performance, scalable microservices using Spring Boot.
- Design and implement RESTful APIs following best practices.
- Use Spring JPA/Hibernate for database interactions (SQL/NoSQL).
- Implement caching mechanisms (Redis, Ehcache) for performance optimization.

Microservices Architecture:
- Build and maintain cloud-native microservices (Docker, Kubernetes).
- Integrate with message brokers (Kafka, RabbitMQ) for event-driven systems.
- Ensure fault tolerance, resilience, and scalability (circuit breakers, retry mechanisms).

Database & Performance:
- Optimize database queries (PostgreSQL, MySQL, MongoDB).
- Implement connection pooling, indexing, and caching strategies.
- Monitor and improve application performance (JVM tuning, profiling).

Testing & Quality Assurance:
- Write unit & integration tests (JUnit, Mockito, Testcontainers).
- Follow TDD/BDD practices for robust code quality.
- Perform code reviews and ensure adherence to best practices.

DevOps & CI/CD:
- Work with Docker, Kubernetes, and cloud platforms (AWS/Azure).
- Set up and maintain CI/CD pipelines (Jenkins, GitHub Actions).
- Automate deployments and monitoring (Prometheus, Grafana).

Collaboration & Agile:
- Work in Agile/Scrum teams with daily standups, sprint planning, and retrospectives.
- Collaborate with frontend, QA, and DevOps teams for seamless delivery.

Posted 2 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the Company
They balance innovation with an open, friendly culture and the backing of a long-established parent company known for its ethical reputation. We guide customers from what’s now to what’s next by unlocking the value of their data and applications to solve their digital challenges, achieving outcomes that benefit both business and society.

About the Client
Our client is a global digital solutions and technology consulting company headquartered in Mumbai, India. The company generates annual revenue of over $4.29 billion (₹35,517 crore), reflecting 4.4% year-over-year growth in USD terms. It has a workforce of around 86,000 professionals operating in more than 40 countries and serves a global client base of over 700 organizations. Our client operates across several major industry sectors, including Banking, Financial Services & Insurance (BFSI); Technology, Media & Telecommunications (TMT); Healthcare & Life Sciences; and Manufacturing & Consumer. In the past year, the company achieved a net profit of $553.4 million (₹4,584.6 crore), marking a 1.4% increase from the previous year. It also recorded a strong order inflow of $5.6 billion, up 15.7% year-over-year, highlighting growing demand across its service lines. Key focus areas include Digital Transformation, Enterprise AI, Data & Analytics, and Product Engineering, reflecting its strategic commitment to driving innovation and value for clients across industries.

Job Title: Java + React Developer
Location: Hyderabad
Experience: 5-8 years

About the Role
Java, React, Spring Boot, Microservices

Responsibilities:

Backend Development:
- Design, develop, and maintain scalable and efficient backend services using Java and Spring Boot.
- Develop RESTful APIs and integrate them with front-end applications.
- Design and implement microservices architectures for scalability and modularity.
- Ensure secure and optimized handling of data, with high performance, scalability, and fault tolerance.

Frontend Development:
- Build and maintain interactive and responsive UIs using React.js.
- Collaborate with designers and backend developers to deliver seamless user experiences.
- Optimize web applications for maximum speed and scalability.

Microservices & Cloud:
- Implement and manage distributed microservices architecture.
- Work with cloud platforms (AWS, Azure, or GCP) to deploy, monitor, and manage applications.
- Automate deployment and integration with CI/CD pipelines.

Collaboration & Code Quality:
- Collaborate with cross-functional teams to deliver high-quality solutions.
- Write clean, maintainable, and well-documented code.
- Participate in code reviews and ensure adherence to best practices.

Testing & Debugging:
- Write unit and integration tests for both backend and frontend services.
- Debug and optimize code to ensure high performance and reliability.

Qualifications:

Technical Skills:
- Strong proficiency in Java and Spring Boot.
- Expertise in building and deploying microservices.
- Experience with React.js for building modern, responsive web applications.
- Hands-on experience with relational and NoSQL databases (e.g., MySQL, MongoDB).
- Familiarity with API integration, RESTful services, and JSON.

Tools & Frameworks:
- Familiarity with Docker and container orchestration platforms (e.g., Kubernetes).
- Experience with CI/CD tools (e.g., Jenkins, GitLab CI, CircleCI).
- Knowledge of cloud platforms (AWS) is a plus.

Soft Skills:
- Strong problem-solving and analytical skills.
- Excellent communication and teamwork abilities.
- Self-motivated, with the ability to work independently and manage time effectively.

Preferred Skills:
- Experience with GraphQL.
- Knowledge of microservices security patterns and best practices.
- Familiarity with Agile/Scrum methodologies.
- Experience with message brokers (Kafka, RabbitMQ, etc.).

Posted 2 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Location: Pune (Hybrid)
Experience: 5-8 Years
Budget: up to 25 LPA

Must-Have Technologies:
· Python 3
· Angular
· Cloud technology (GCP / Azure / AWS)
· Docker and Kubernetes

Essential Responsibilities and Duties:
· Strong Experience in Python and/or Java: Proven experience (5+ years) in backend development with Python or Java, focusing on building scalable and maintainable applications.
· Angular Development Expertise: Strong hands-on experience in developing modern, responsive web applications using Angular.
· Microservices Architecture: In-depth knowledge of designing, developing, and deploying microservices-based architectures.
· DevOps Understanding: Good understanding of DevOps practices, CI/CD pipelines, and tools to automate deployment and operations.
· Problem-Solving Skills: Ability to investigate, analyse, and resolve complex technical issues efficiently.
· Adaptability: Strong aptitude for learning and applying new technologies in a fast-paced environment.
· Cloud Environments Knowledge: Hands-on experience with at least one cloud platform (GCP, Azure, AWS).
· Containerization Technologies: Experience working with container technologies like Kubernetes and Docker for application deployment and orchestration.

Previous Experience and Competencies:
· Bachelor’s degree in an IT-related discipline.
· Strong computer literacy with aptitude and readiness for multidiscipline training.
· 5-8 years of seniority (senior and hands-on).

Preferred Qualifications:
· Strong software engineering fundamentals.
· Interest in designing, analysing, and troubleshooting large-scale distributed systems.
· Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.
· Ability to debug and optimize code and automate routine tasks.
· Good to have: Familiarity with data integration platforms like Dataiku or industrial data platforms like Cognite.

Behaviour:
· Fosters and maintains excellent internal, client, and third-party relationships.
· Possesses a high degree of initiative.
· Adaptable and willing to learn new technologies; keeps abreast of key developments in relevant technologies.
· Able to work under pressure.
· Excellent oral and written communication and interpersonal skills.
· Practices effective listening techniques.
· Able to work independently or as part of a team.
· Effectively analyses and solves problems with attention to the root cause.
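The duties above stress debugging, resilience, and automating routine tasks. As a purely illustrative sketch (not part of this posting), a minimal retry-with-backoff helper of the kind such Python backend roles often call for might look like this:

```python
import time

def retry(times=3, delay=0.1, backoff=2.0, exceptions=(Exception,)):
    """Retry a flaky call with exponential backoff (sketch, not production-ready)."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            wait = delay
            for attempt in range(1, times + 1):
                try:
                    return fn(*args, **kwargs)
                except exceptions:
                    if attempt == times:
                        raise          # out of retries: propagate the error
                    time.sleep(wait)   # back off before the next attempt
                    wait *= backoff
        return wrapper
    return decorator

# Hypothetical flaky call for illustration: fails twice, then succeeds.
calls = {"n": 0}

@retry(times=3, delay=0.01)
def fetch_status():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient cloud API error")
    return "ok"
```

Here `fetch_status` and its failure behaviour are invented for the example; in practice the decorated function would wrap a real cloud API call.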

Posted 2 days ago

Apply

2.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Cohesity is the leader in AI-powered data security. Over 13,600 enterprise customers, including over 85 of the Fortune 100 and nearly 70% of the Global 500, rely on Cohesity to strengthen their resilience while providing Gen AI insights into their vast amounts of data. Formed from the combination of Cohesity with Veritas’ enterprise data protection business, the company’s solutions secure and protect data on-premises, in the cloud, and at the edge. Backed by NVIDIA, IBM, HPE, Cisco, AWS, Google Cloud, and others, Cohesity is headquartered in Santa Clara, CA, with offices around the globe. We’ve been named a Leader by multiple analyst firms and have been globally recognized for Innovation, Product Strength, Simplicity in Design, and our culture.

Want to join the leader in AI-powered data security?

About The Team
Join a cutting-edge team at the intersection of AI, automation, and global user experience. We work closely with engineering, AI/ML, DevOps, and product teams to ensure seamless, multilingual product quality. We're building intelligent automation frameworks that scale across languages and platforms, delivering globally ready products with precision, speed, and cultural relevance. If you're passionate about AI-driven testing, cloud technologies, and automation at scale, we’d love to have you on board.

About The Role
As a Software Quality Engineer, you’ll drive automation and validation for multilingual, cloud-native storage and backup products. This hybrid role blends QA expertise with domain knowledge in storage, backup, virtualization, and localization. You’ll build scalable automation (Python, Robot Framework), simulate real-world language environments, and validate AI/ML-powered content, ensuring fast, accurate, and high-quality global releases. It is ideal for those who thrive on technical depth, cross-functional teamwork, and innovation in AI-driven localization testing.

How You’ll Spend Your Time Here
- Design, develop, and maintain automated localization test frameworks for multilingual UI and content validation using Python, Robot Framework, or similar tools.
- Manage and generate test datasets using your understanding of NAS, SAN, object storage, and backup/restore configurations.
- Integrate AI/ML models for advanced analysis of localized content, including text, voice, and image-based quality validation.
- Build and execute test strategies across Storage, Backup, and Virtualization product lines with global reach.
- Automate localization quality checks to validate UI/UX consistency, layout integrity, cultural appropriateness, and linguistic accuracy.
- Simulate localized user environments using virtualization tools such as VMware, KVM, and Hyper-V.
- Define and implement end-to-end test methodologies and test plans that reflect both user expectations and technical requirements.
- Perform root cause analysis, log defects using tools like JIRA, and ensure timely resolution with the relevant teams.
- Test REST APIs, microservices, and containerized applications using tools like Postman, REST Assured, etc.

We’d Love to Talk to You If You Have Many of the Following
- B.Tech/M.Tech in Computer Science or a related field with 2-8 years of experience in QA automation and localization testing.
- Proven experience testing system-level products involving storage, networking, or virtualization.
- A strong commitment to product quality and detail orientation.
- Expertise in Python programming and experience with tools like Robot Framework, Selenium, or Appium.
- Applied experience with AI/ML tools/libraries such as spaCy, Transformers, TensorFlow, OpenAI, or Google Translate APIs.
- Strong understanding of storage, backup, and virtualization technologies.
- Experience working on cloud platforms (AWS, Azure, GCP), including CI/CD integrations.
- Familiarity with i18n and l10n practices, and experience with localization tooling and workflows (e.g., Crowdin, Smartling, SDL Trados).
- Familiarity with hypervisors (e.g., ESXi, Hyper-V, KVM).
- Knowledge of containerization tools like Docker and orchestration via Kubernetes.
- Comfort working in Linux environments, with hands-on shell scripting skills.
- Strong test planning, execution, and problem-solving skills.
- Demonstrated ability to work independently, manage priorities, and adapt to fast-paced environments.
- Fluency in English; knowledge of additional languages like Japanese, French, or Chinese is a plus.
- Experience using or validating LLM-based translation or testing tools like XTM is highly desirable.
- Demonstrated ability to leverage AI tools to enhance productivity, streamline workflows, and support decision making.

Data Privacy Notice for Job Candidates
For information on personal data processing, please see our Privacy Policy.

Equal Employment Opportunity Employer (EEOE)
Cohesity is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, creed, religion, sex, sexual orientation, national origin or nationality, ancestry, age, disability, gender identity or expression, marital status, veteran status, or any other category protected by law. If you are an individual with a disability and require a reasonable accommodation to complete any part of the application process, or are limited in the ability or unable to access or use this online application process and need an alternative method for applying, you may contact us at 1-855-9COHESITY or talent@cohesity.com for assistance.

In-Office Expectations
Cohesity employees who are within a reasonable commute (e.g., within a forty-five (45) minute average travel time) work out of our core offices 2-3 days a week of their choosing.
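One concrete flavor of the localization quality checks this posting describes is verifying that format placeholders survive translation. The sketch below is illustrative only; the strings and the check itself are hypothetical examples, not Cohesity tooling:

```python
import re

# Matches brace-style placeholders such as {job} or {user_name}.
PLACEHOLDER = re.compile(r"\{[a-zA-Z_][a-zA-Z0-9_]*\}")

def placeholder_mismatches(source: str, translated: str) -> set:
    """Return placeholders present in one string but not the other."""
    return set(PLACEHOLDER.findall(source)) ^ set(PLACEHOLDER.findall(translated))

# Hypothetical English/Japanese UI strings.
ok = placeholder_mismatches("Backup {job} finished", "バックアップ {job} が完了しました")
bad = placeholder_mismatches("Restore {job} failed", "リストアが失敗しました")
```

A real framework would run a check like this (and layout/encoding checks) over every string in a translation catalog and log failures as defects.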

Posted 2 days ago

Apply

0.0 - 6.0 years

0 - 0 Lacs

Moti Nagar, Delhi, Delhi

Remote

Job Title: IT Automation & Internal Tools Engineer
Location: Moti Nagar, Delhi (Hybrid/On-site)
Department: Technology / Engineering
Reports To: Directors
Experience Level: 3-6 years

About Admissify
Admissify is a leading overseas education consultancy empowering students through smart technology and expert guidance. As we scale our operations and internal capabilities, we’re looking to streamline workflows, enhance productivity, and automate repetitive tasks across departments with robust in-house tooling. If you're passionate about creating intelligent automation systems and love solving organizational problems with code, this role is for you.

Role Overview
As an IT Automation & Internal Tools Engineer, you will design, build, and maintain internal systems and tools to enhance team efficiency, reduce manual workload, and ensure robust data integrity across business operations. You’ll work closely with cross-functional teams like Operations, Sales, HR, and Customer Success to identify pain points and deliver scalable tech solutions.

Key Responsibilities

Automation & Tool Development
- Develop and maintain internal web-based tools and dashboards (admin panels, lead management modules, reporting tools, etc.).
- Automate recurring tasks such as data entry, reporting, alerts, form processing, and CRM integrations (e.g., Zoho, HubSpot).
- Build scripts to integrate SaaS platforms and APIs (Google Workspace, Slack, WhatsApp APIs, payment gateways, etc.).

Workflow Optimization
- Collaborate with stakeholders to identify bottlenecks in manual workflows and propose tech-driven improvements.
- Implement task automation using platforms like Zapier, Make (Integromat), or custom scripts in Python/Node.js.

Data Pipelines & Monitoring
- Build automation pipelines to collect, clean, and sync data across tools (Google Sheets, internal DBs, CRM).
- Set up internal alert systems for failures, deadline breaches, or anomalies using webhooks and cron jobs.

Maintenance & Support
- Own the uptime and performance of internal tools; debug issues proactively and ensure long-term reliability.
- Maintain documentation for each tool and automation flow for ease of future development and training.

Qualifications

Must-Have:
- 2+ years of experience in backend scripting (Python, Node.js, etc.)
- Experience building and deploying internal dashboards or admin panels (React, Flask, Express, etc.)
- Hands-on experience with automation tools (Zapier, Make, n8n) or writing custom automation scripts
- API integration experience (RESTful APIs, webhooks)
- Understanding of databases (SQL or NoSQL) and version control (Git)
- Familiarity with cloud platforms (AWS, GCP) and deployment tools

Good to Have:
- Prior experience in education, SaaS, or consulting environments
- Knowledge of chatbot integrations (WhatsApp, Telegram, Messenger)
- Understanding of microservices and containerization (Docker)

Soft Skills
- Strong analytical and problem-solving mindset
- Effective communicator with technical and non-technical stakeholders
- Self-starter attitude with the ability to manage multiple priorities
- Willingness to take ownership of projects end-to-end

How to Apply
Email your resume and GitHub/portfolio (if any) to priyanka.k@admissify.com with the subject: “Application – IT Automation & Tools Engineer”.

Job Type: Full-time
Pay: ₹50,000.00 - ₹70,000.00 per month
Benefits: Flexible schedule; work from home
Work Location: In person
Speak with the employer: +91 9319228283
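One of the responsibilities this posting names is alerting on deadline breaches via webhooks and cron jobs. A minimal sketch of that pattern follows; the record shape and field names are hypothetical, and the actual HTTP post is left as a comment so the alert logic itself stays testable:

```python
from datetime import date

def breached(tasks, today):
    """Return the names of tasks whose due date has passed and that are still open."""
    return [t["name"] for t in tasks
            if t["due"] < today and t["status"] != "done"]

def alert_message(names):
    """Format a short alert suitable for posting to a chat webhook."""
    return "Deadline breached: " + ", ".join(names) if names else ""

# Hypothetical CRM records; a cron job would load these from the real source.
tasks = [
    {"name": "Visa docs", "due": date(2025, 1, 10), "status": "open"},
    {"name": "SOP review", "due": date(2025, 1, 20), "status": "done"},
]
msg = alert_message(breached(tasks, today=date(2025, 1, 15)))
# A real cron job would now POST `msg` to the team's webhook URL.
```

The same separation (pure breach-detection function, thin webhook wrapper) keeps the alerting easy to unit test and to rewire to Slack, WhatsApp, or email.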

Posted 2 days ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Cyber Threat Intelligence Analyst
Job Location: UniOps Bangalore

About Unilever
Be part of the world’s most successful, purpose-led business. Work with brands that are well-loved around the world, that improve the lives of our consumers and the communities around us. We promote innovation, big and small, to make our business win and grow; and we believe in business as a force for good. Unleash your curiosity, challenge ideas, and disrupt processes; use your energy to make this happen. Our brilliant business leaders and colleagues provide mentorship and inspiration, so you can be at your best. Every day, nine out of ten Indian households use our products to feel good, look good, and get more out of life, giving us a unique opportunity to build a brighter future. Every individual here can bring their purpose to life through their work. Join us and you’ll be surrounded by inspiring leaders and supportive peers. Among them, you’ll channel your purpose, bring fresh ideas to the table, and simply be you. As you work to make a real impact on the business and the world, we’ll work to help you become a better you.

About UniOps
Unilever Operations (UniOps) is the global technology and operations engine of Unilever, offering business services, technology, and enterprise solutions. UniOps serves over 190 locations and, through a network of specialized service lines and partners, delivers insights and innovations, user experiences, and end-to-end seamless delivery, making Unilever purpose-led and future-fit. Unilever is one of the world’s leading consumer goods companies, with operations in over 190 countries and serving 3.4 billion consumers every day. Unilever delivers best-in-class performance with market-making, unmissably superior brands, which include Dove, Knorr, Domestos, Hellmann’s, Marmite, and Lynx. Our strategy begins with a purpose that places our consumers at the heart of everything we do: “Brighten everyday life for all”.

Role Purpose
This role will support the Cyber Threat Intelligence (CTI) team in proactively collecting cyber security information and events and converting them into actionable intelligence that will be used by various technologies and stakeholders to secure Unilever. The ideal candidate will have a strong understanding of cyber threat intelligence processes, tools, and technologies, and will play a key role in identifying, analysing, and reporting on cyber threats that could impact our organization.

Role Summary
The Threat Intel Analyst will play a key role in the identification, interpretation, transformation, and dissemination of threat intelligence crucial to the protection of Unilever. The candidate will support the daily operations of the CTI team in areas ranging across strategic, tactical, and operational intelligence. The role requires the analytical skills to assess and prioritize signals from the noise so that resources are utilized optimally by CTI and dependent teams. It involves continuous monitoring of the threat landscape, profiling threat actors and malware, tracking vulnerabilities, producing actionable intelligence to support decision-making, and keeping stakeholders informed of threats that could have an adverse impact on the organization. The role is key to transforming the produced intelligence to cater to audiences ranging from technical to business stakeholders. It is also crucial to Unilever's overall cyber threat management efforts, as it helps drive the right focus on cyber threats and instils confidence that adequate countermeasures are in place, in line with the NIST Cyber Security Framework (version 2.0).

Main Accountabilities

Threat Profiling
- Monitor the surface, deep, and dark web for cyber threats impacting the manufacturing sector and Unilever specifically.
- Ensure 0-days and critical vulnerabilities are analysed and raised with the Threat and Vulnerability Management team to identify exposure and drive remediation.
- Support campaigns with the human risk team to increase threat awareness across the organization.

Tools and Technology Management
- Work with the Security Engineering team to maintain the technology stack used by the CTI team.
- Drive innovative integrations using the existing toolsets to automate workflows, resulting in efficient ways of working.

Incident Response Support
- Work with the Security Operations Centre (SOC) and Cyber Emergency Response Team (CERT) to support cyber investigations.
- Enrich and contextualize threat intelligence to support investigations and containment efforts.

VIP Protection
- Support investigations to ensure scams, frauds, and impersonation attempts against Executives are thwarted quickly and efficiently.
- Support the creation of digital footprints for Executives to raise awareness of their sensitive information present in publicly accessible forums.

Metrics and Reporting (Including Cloud Resilience)
- Create and maintain cyber threat intelligence content in Unilever’s central collaboration spaces.
- Collaborate with Unilever’s Cyber Security Analytics (CSA) team for alignment on reporting of CTI metrics.

Key Skills and Relevant Experience
- The role is highly responsive, and responsible for the identification, analysis, processing, and distribution of intelligence related to threats and vulnerabilities.
- Stay up to date on the threat landscape.
- Excellent analytical, problem-solving, and presentation skills, with a flair for the technical aspects of cyber security.
- Prioritize and use information derived from open and commercial intelligence disciplines to determine new or changed actor activity, capabilities, intent, and resources.
- Lead research efforts tracking threats and actors across industry verticals.
- Perform and add structured intelligence analysis to the Threat Intelligence Platform (TIP).
- Technical analysis of Tactics, Techniques, and Procedures (TTPs) used in cyber incidents and campaigns: analyzing attack vectors, finding adversary infrastructure, establishing the intrusion chain, and structured documentation of findings on the TIP.
- Focus on integration and automation of threat intelligence into security tools using STIX/TAXII.
- Provide intelligence support to Incident Response teams in Security Operations, Cyber Security teams, and business stakeholders.
- Engage with IT and Security teams to apprise them of threats to the technology landscape and drive remediation.
- Produce intel reports on incidents, campaigns, and emerging threats for technical and Executive audiences.
- Use AI to simplify and automate CTI activities, with working knowledge of automation using API integrations and webhooks.

Experience
- Minimum 4-5 years of experience in the Information/Cyber Security domain, with at least 3 years as a Threat Intelligence Analyst.
- Strong experience analyzing and synthesizing actionable threat intelligence via open-source tools.
- Solid understanding of the threat intelligence lifecycle, the cyber kill chain, and the MITRE ATT&CK framework.
- Experience with cloud platforms (Azure, Google Cloud) and their resilience features.
- Solid understanding of network and endpoint security concepts in on-prem and cloud environments.
- Solid understanding of vulnerabilities, how they affect systems and organizations, and their corresponding context and severity (CVEs, CVSS, CPE, and vulnerability disclosures).
- Ability to identify, create, execute, and adjust standard operating procedures for day-to-day operations.
- Ability to document technical analysis and articulate outcomes to non-technical audiences.
- Understanding of current events in the security and threat intelligence world.
- Strong experience with SIEM, EDR, and NDR tools.
- Good to have, but not mandatory: cyber security certifications.

Note: All official offers from Unilever are issued only via our Applicant Tracking System (ATS). Offers from individuals or unofficial sources may be fraudulent; please verify before proceeding.
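A small, common example of the CTI automation this posting mentions is normalizing ("refanging") defanged indicators of compromise before pushing them to downstream tools via API. This sketch is illustrative only, with invented indicator values, and is not Unilever tooling:

```python
def refang(ioc: str) -> str:
    """Convert a defanged IOC (hxxp, [.], [:]) back to its machine-usable form."""
    return (ioc.replace("hxxps://", "https://")
               .replace("hxxp://", "http://")
               .replace("[.]", ".")
               .replace("[:]", ":"))

# Hypothetical defanged indicators as they might appear in an intel report.
iocs = ["hxxp://evil[.]example[.]com/payload", "203[.]0[.]113[.]7"]
clean = [refang(i) for i in iocs]
```

In a real pipeline the cleaned indicators would then be wrapped as STIX objects and shipped over TAXII to the SIEM/EDR stack, rather than handled as bare strings.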

Posted 2 days ago

Apply

0.0 - 5.0 years

0 - 0 Lacs

Mumbai, Maharashtra

On-site

Job Title: MIS Executive / Data Management Executive (DME)
Location: Mumbai, India
Company: Voraco Distributors Pvt. Ltd.
Employment Type: Full-time

Job Summary:
Voraco Distributors Pvt. Ltd. is looking for a skilled Data Management Executive (DME) to manage and enhance our data management systems. The ideal candidate should be proficient in Excel, Google Sheets, and data analysis, with strong mathematical and analytical skills. The role involves automation, data processing, and reporting to support business operations and decision-making.

Key Responsibilities:
- Develop and maintain MIS reports, dashboards, and automation solutions using Google Sheets, Excel, and macros (if possible).
- Automate data processing and reporting tasks to improve efficiency.
- Ensure data accuracy and consistency across systems.
- Analyse data and generate insights to support management decisions.
- Work closely with different teams to understand data requirements and provide analytical solutions.
- Troubleshoot and resolve any data-related issues.
- Maintain documentation of processes, reports, and automation scripts.

Key Requirements:

Education: Bachelor's degree in Computer Science, IT, Mathematics, or a related field.

Technical Skills:
- Thorough knowledge of Excel and Google Sheets.
- Proficiency in a range of Excel formulas and functions.
- Knowledge of pivot tables.
- Experience with macros (preferred).
- Strong mathematical and analytical skills.

Experience:
- 3-5 years of working experience as an MIS Executive or in a similar data management role.
- Experience in handling large datasets and automating reporting processes.

Soft Skills:
- Strong problem-solving and analytical thinking.
- Ability to work independently and in a team environment.
- Good communication and documentation skills.

Preferred Qualifications:
- Experience with database management and SQL is a plus.
- Knowledge of ERP systems or working experience in a distribution company is an added advantage.

Why Join Voraco Distributors?
- Opportunity to work in a dynamic and growing organisation.
- Exposure to advanced data analytics and automation techniques.
- Collaborative work environment with learning and growth opportunities.

We would love to hear from you if you are passionate about data management, automation, and analytics!

How to Apply:
Please send your resume to hr@voraco.in with the subject line "Application for Data Management Executive - Voraco Distributors."

Job Types: Full-time, Permanent
Pay: ₹35,000.00 - ₹40,000.00 per month
Benefits: Health insurance; leave encashment; Provident Fund
Ability to commute/relocate: Mumbai, Maharashtra: reliably commute or plan to relocate before starting work (required).
Application Question(s): Are you okay with the proposed CTC?
Work Location: In person
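As a rough illustration of the report automation this role involves (a pivot-table-style summary, here scripted outside Excel/Sheets on purely hypothetical distributor data):

```python
from collections import defaultdict

def pivot_sum(rows, key, value):
    """Group rows by `key` and total `value`: a minimal pivot-table analogue."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[key]] += row[value]
    return dict(totals)

# Hypothetical sales records of the kind an MIS report would aggregate.
sales = [
    {"region": "Mumbai", "amount": 1200.0},
    {"region": "Pune",   "amount": 800.0},
    {"region": "Mumbai", "amount": 300.0},
]
by_region = pivot_sum(sales, key="region", value="amount")
```

The same grouping that a spreadsheet pivot table performs interactively is done here in a script, which is what makes recurring reports automatable.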

Posted 2 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Name: Administrator, Systems – Senior

Position Summary
We are seeking a skilled Systems Administrator to manage, maintain, and optimize our database systems and IT infrastructure, as well as cloud-based systems and services. The ideal candidate will be responsible for deploying, managing, and supporting databases, cloud infrastructure, and IT systems, ensuring the high availability, security, and performance of critical IT resources.

Key Responsibilities
- Install, configure, and upgrade database management systems, servers, and cloud infrastructure.
- Troubleshoot and resolve issues, and provide technical support to ensure optimal operation.
- Monitor workload and implement tuning measures to optimize the performance of databases, servers, and cloud services.
- Implement and manage security measures to protect servers, systems, and data.
- Design, implement, and manage cloud solutions using platforms such as AWS, Azure, or Google Cloud.
- Optimize resource utilization to ensure cost-effectiveness and efficiency.
- Implement security best practices to protect cloud environments, databases, and servers.
- Automate deployment and management processes.
- Collaborate with development and operations teams to support CI/CD pipelines and application deployments.
- Collaborate with IT teams to design and implement system upgrades and enhancements.
- Document system configurations, procedures, and policies for reference and compliance.
- Stay current with industry trends and emerging technologies to improve system performance and security.

Required Skills and Abilities
- Proven experience as a Systems, Cloud, or Database Administrator.
- Strong knowledge of operating systems (Windows, Linux) and server management.
- Profound understanding of database design and database management systems such as MySQL, SQL Server, or PostgreSQL.
- Experience with cloud service models (IaaS, PaaS, SaaS) and cloud architecture.
- Experience with backup solutions and recovery tools and techniques.
- Knowledge of server virtualization technologies.
- Understanding of networking concepts and protocols (TCP/IP, DNS, DHCP).
- Familiarity with cloud database solutions (e.g., AWS RDS, Azure SQL Database).
- Experience with monitoring and logging tools (e.g., SCOM, CloudWatch, Azure Monitor).
- Knowledge of programming or scripting languages such as Python or Java is a plus.
- Excellent problem-solving skills and attention to detail.
- Strong communication and teamwork abilities.

Education and Qualifications
- Bachelor’s degree in Computer Science, Information Technology, or equivalent experience.
- Proven experience as a Systems Administrator, Database Administrator, or in a similar role.
- Certification in system administration, database management, or cloud technologies.
- Experience in an international business environment.
- Strong written and verbal communication skills in English; additional language skills are an advantage.
- Must reflect a courteous and professional attitude and be able to communicate with all levels of company personnel, as well as Cooper-Standard customers and vendors.
- Driver's license.

Work Environment/Work Conditions
- Normal working hours are as defined, but additional off-shift hours may be required depending on projects and maintenance tasks.
- Will be on call 24x7 for emergency situations that arise.
- Some domestic and international travel might occasionally be required (less than 20%).
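For the monitoring duties described above, a minimal sketch of a disk-capacity threshold check (the kind a cron job might run before paging anyone) could look like this; the paths, threshold, and injected fake usage numbers are all hypothetical:

```python
import shutil

def disk_alerts(paths, threshold=0.9, usage_fn=shutil.disk_usage):
    """Return the paths whose used fraction exceeds `threshold`."""
    alerts = []
    for path in paths:
        total, used, _free = usage_fn(path)  # (total, used, free) in bytes
        if total and used / total > threshold:
            alerts.append(path)
    return alerts

# Injected fake usage so the sketch runs anywhere; real code would rely on
# the default shutil.disk_usage and feed alerts into SCOM/CloudWatch/etc.
fake = {"/var": (100, 95, 5), "/home": (100, 40, 60)}
full = disk_alerts(["/var", "/home"], threshold=0.9,
                   usage_fn=lambda p: fake[p])
```

Passing the usage function as a parameter keeps the check unit-testable without touching real filesystems, which matters for scripts that run unattended 24x7.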

Posted 2 days ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Description The FBA Inventory and Capacity Management team is responsible for the challenging task of controlling FBA inventory volume sent to and stored in the Amazon Fulfillment Network (AFN), while still allowing sellers to grow at the pace they desire. Our team is uniquely positioned to identify opportunities to better align incentives between Amazon and sellers, for example where selection or sellers are systematically unprofitable. As a Senior Program Manager for FBA Capacity Management, you will play a key role in developing and managing processes that will improve the seller experience when it comes to managing their FBA inventory. You will work with Product Managers, Sales & Operations Planning, Fulfillment Technologies, and Selling Partner Support leaders to determine the right system configurations to execute capacity plans for global marketplaces. You will work with tech teams to develop ways to automate key processes and create seller-level analytics to identify further opportunities for improving seller experience. This is also a highly cross-functional role which will require you to work with stakeholders worldwide and scale product adoption. This is an opportunity to work in a startup-like environment within Amazon, and we seek a Program leader who is motivated by a fast-paced and highly entrepreneurial environment. You will leverage your deep expertise to work backwards from our customers, identify the right opportunities to help us accelerate at scale, and innovate faster for our customers. If you have a passion for innovation, for thinking big to tackle ambiguous problems, for solving some of the biggest technical challenges in the industry, and for building elegant products that delight our customers, we need you!
Key job responsibilities Owner of capacity formation decisions, operations and operational excellence Manage day-to-day processes needed for inventory capacity management Work with SCOT S&OP, Capacity and FBA Inbound teams to configure FBA tools Make process improvements and innovations to reduce seller contacts Work with tech teams on automating processes and improving operational excellence Deep dive the impact processes have on the seller experience and work with Product Managers on prioritizing features Develop and track operational excellence metrics and goals Basic Qualifications 5+ years of program or project management experience Experience using data and metrics to determine and drive improvements Experience owning program strategy, end-to-end delivery, and communicating results to senior leadership Preferred Qualifications 2+ years of driving process improvements experience Master's degree or MBA in BI, finance, engineering, statistics, computer science, mathematics, or an equivalent quantitative field Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI - BLR - DTA - I99 Job ID: A2996113

Posted 2 days ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Description Are you passionate about helping people solve IT problems? Love being a part of an exciting and innovative environment? Join Amazon Global IT Support! We’re looking for people who strive to “Work Hard. Have Fun. Make History.” Amazon is seeking bright, adaptable, and hardworking applicants to work at our Corporate Offices in the National Capital Region, India. IT Support Managers work with Amazon teams to provide and support the IT equipment and services they need. We treat Amazon employees as our customers and provide timely, accurate, and professional support. A successful IT Support Manager II excels in a fast-paced, team environment and possesses excellent communication skills. They have a high degree of leadership skill and technical aptitude over a large scope of IT software, hardware, and networking disciplines. About The Role As an IT Support Manager II, you will use your leadership and technical knowledge and specialized skills to support, build, implement, and improve technology solutions. You are able to manage large projects with minimal guidance that affect multiple locations in a region. You are able to lead a technical IT team that takes care of customer issues in times of crisis to get them working again. You actively expand your scope based on customer needs. Responsibilities include, but are not limited to Managing a team of ITSEs Coordination with other internal & external stakeholders Support virtual or physical events and town halls for India Manage Audio Visual devices and services in India Service-level governance for the team, ensuring service uptime for customers Helping and leading the team to troubleshoot difficult IT problems Lead continuous improvement efforts Audit the quality of work performed and provide constructive feedback when necessary Automate manual tasks; create/improve small tools that help make team operations more efficient Be the first point of escalation Senior-level customer support.
Participate in hiring, training and development of the team. Basic Qualifications Bachelor’s degree in Computer science or IT related field. 4+ years of experience in two or more of the following: Microsoft Administration, Linux Administration, or Cisco IOS (CLI) 4+ years of troubleshooting experience in a multi-user high availability environment Experience in Audio Visual devices and services 4+ years of experience in virtual or physical events and town halls 4+ years of experience with networking concepts such as DNS, DHCP, SSL, OSI Model, and TCP/IP 4+ years of experience leading a technical team. Preferred Qualifications Bachelor’s degree in Computer science or IT related field. Microsoft MCSE, MCITP Systems Administrator (Active Directory) experience ITIL certification Experience in Audio Visual devices and services Experience in Linux, Microsoft, and network systems administration Strong troubleshooting skills of very complex systems Ability to explain complex IT concepts in simple terms Excellent written and verbal communication skills Ability to manage high priority projects Basic Qualifications 5+ years of developing a team of technical professionals across multiple locations experience 2+ years of leading technology teams as an information technology operations manager experience Bachelor's degree, or 4+ years of professional or military experience Knowledge of Linux or Unix systems administration Preferred Qualifications Knowledge of hardware architectures Experience with system management tools and client/server environments Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information.
If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI - Karnataka Job ID: A3043307

Posted 2 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Description ShipTech is the connective tissue which connects Transportation Service Providers, First Mile, Middle Mile, and Last Mile to facilitate the shipping of billions of packages each year. Our technology solutions power Amazon's complex shipping network, ensuring seamless coordination across the entire delivery chain. We are seeking a Business Intelligence Engineer II to join our ShipTech Program and Product Growth team, focusing on driving data-driven improvements for our ecosystem. This role will be instrumental in building the right data pipeline, analyzing and optimizing the program requests, scan related data, customer experience data, trans performance metrics and product adoption/growth patterns to enable data-driven decision making for our Program and Product teams. Key job responsibilities Analysis of historical data to identify trends and support decision making, including written and verbal presentation of results and recommendations Collaborating with product and software development teams to implement analytics systems and data structures to support large-scale data analysis and delivery of analytical and machine learning models Mining and manipulating data from database tables, simulation results, and log files Identifying data needs and driving data quality improvement projects Understanding the broad range of Amazon’s and ShipTech's data resources, which to use, how, and when Thought leadership on data mining and analysis Helping to automate processes by developing deep-dive tools, metrics, and dashboards to communicate insights to the business teams Collaborating effectively with internal end-users, cross-functional software development teams, and technical support/sustaining engineering teams to solve problems and implement new solutions Develop ETL pipelines to process and analyze cross-network data. 
A day in the life ShipTech Program and Product Growth team is hiring for a BIE to own generating insights, defining metrics to measure and monitor, building analytical products, automation and self-serve, and overall driving business improvements. The role involves a combination of data mining, data analysis, visualization, statistics, scripting, a bit of machine learning, and use of AWS services. Basic Qualifications 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc. Experience with data visualization using Tableau, Quicksight, or similar tools Experience with data modeling, warehousing and building ETL pipelines Experience in statistical analysis packages such as R, SAS and Matlab Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling Preferred Qualifications Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift Experience in data mining, ETL, etc. and using databases in a business environment with large-scale, complex datasets Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI HYD 13 SEZ Job ID: A3043691
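The ETL-pipeline responsibility above follows a common load-transform-aggregate shape. A minimal sketch using an in-memory SQLite database illustrates the pattern; the `events` table, column names, and rows are hypothetical and not ShipTech's actual schema:

```python
import sqlite3

# Hypothetical shipment events: (package id, event kind, weight in kg).
ROWS = [
    ("PKG1", "scan", 2.5), ("PKG1", "deliver", 0.0),
    ("PKG2", "scan", 1.2), ("PKG3", "scan", 3.1),
]

def build_pipeline(rows):
    """Load raw rows, then aggregate event counts and weight per event kind."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (pkg TEXT, kind TEXT, weight REAL)")
    conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)
    cur = conn.execute(
        "SELECT kind, COUNT(*) AS n, ROUND(SUM(weight), 2) AS total_weight "
        "FROM events GROUP BY kind ORDER BY kind"
    )
    return cur.fetchall()
```

A production version of this would target Redshift and feed a Tableau or QuickSight dashboard, but the query structure is the same.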

Posted 2 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

At ZoomInfo, we encourage creativity, value innovation, demand teamwork, expect accountability and cherish results. We value your take-charge, take-initiative, get-stuff-done attitude and will help you unlock your growth potential. One great choice can change everything. Thrive with us at ZoomInfo. About the role : We are seeking a detail-oriented and data-driven Partnership Operations Manager to support our growing partnerships function. This is an individual contributor role focused on operational excellence, where you will own the day-to-day administration, reporting, and process management of our partnerships and partner-led sales motions. Your role will be pivotal in ensuring our internal systems, reports, and workflows run smoothly, enabling strategic decision-making across the partnerships organization. What You'll Do: Operational Support for Partnerships & Partner Sales Manage all administrative and backend processes related to partnership onboarding, enablement tracking, and pipeline hygiene. Support partner-led sales workflows including data management, opportunity tracking, and internal system alignment (Google Sheets, SFDC). Coordinate updates across cross-functional teams to ensure process consistency and data accuracy. Reporting & Analytics Build and maintain Tableau dashboards and Google Sheet trackers to report on partner performance, sales contribution, and key metrics. Provide timely, reliable, and actionable insights to the partnerships and sales leadership teams. Automate recurring reports and streamline manual processes wherever possible. CRM & Data Management (SFDC) Maintain clean and structured partner-related data in Salesforce. Partner with RevOps to ensure accurate mapping of partner accounts, contacts, and deal attribution. Identify and resolve data inconsistencies that impact reporting and business operations.
Cross-functional Coordination Liaise with Sales, RevOps, Marketing, Finance, and Enablement teams to align on partnership processes and reporting needs. Own documentation and knowledge management for partner operations workflows. What You Bring: Full-time Bachelor's in Science/Engineering 3–5 years of experience in Sales/Partnership Operations, Revenue Operations, or similar roles. Shift : Mandatory Night Shift - 5 PM to 2 AM IST Hands-on experience with Tableau and advanced Google Sheets for reporting and visualization. Strong working knowledge of Salesforce (SFDC) for managing and reporting on partner-related data. High attention to detail and an organized, self-driven approach to managing multiple concurrent tasks. Exposure to SaaS or technology industry environments. Excellent communication skills and comfort collaborating across teams. Prior experience supporting partnerships or channel sales would be a great plus. Understanding of co-selling motions or partner lifecycle stages. About us: ZoomInfo (NASDAQ: GTM) is the Go-To-Market Intelligence Platform that empowers businesses to grow faster with AI-ready insights, trusted data, and advanced automation. Its solutions provide more than 35,000 companies worldwide with a complete view of their customers, making every seller their best seller. ZoomInfo may use a software-based assessment as part of the recruitment process. More information about this tool, including the results of the most recent bias audit, is available here. ZoomInfo is proud to be an equal opportunity employer, hiring based on qualifications, merit, and business needs, and does not discriminate based on protected status.
We welcome all applicants and are committed to providing equal employment opportunities regardless of sex, race, age, color, national origin, sexual orientation, gender identity, marital status, disability status, religion, protected military or veteran status, medical condition, or any other characteristic protected by applicable law. We also consider qualified candidates with criminal histories in accordance with legal requirements. For Massachusetts Applicants: It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability. ZoomInfo does not administer lie detector tests to applicants in any location.

Posted 2 days ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

iFlow is hiring a Hardware Embedded Penetration Tester. Key Responsibilities Extract firmware directly from embedded devices and systems Interact directly with hardware components and interfaces Perform firmware reverse engineering and analysis Audit the security of hardware protocols and communication interfaces Extract and analyze content from SPI flash and other on-board memory Interact with and test JTAG, UART, and other hardware debug interfaces Conduct penetration testing and vulnerability research on embedded systems Develop custom tools and scripts to automate and enhance testing capabilities Analyze findings, document vulnerabilities, and provide remediation recommendations Required Skills Proficient in firmware extraction and analysis Hands-on experience with hardware hacking and reverse engineering Strong understanding of embedded hardware interfaces and protocols Expertise in conducting JTAG, UART, and SPI-based testing Ability to identify and bypass hardware security mechanisms Familiarity with embedded operating systems and architectures Proficiency in programming and scripting (e.g., Python, C, Bash) Experience with hardware debug tools and test equipment Solid understanding of network security and penetration testing methodologies Ability to research, discover, and document hardware vulnerabilities Strong analytical and problem-solving skills
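A routine first step in the firmware analysis described above is an entropy scan of the dumped image: high-entropy windows (near 8 bits/byte) usually indicate encrypted or compressed regions, while low-entropy windows are plain code or data. This is a generic sketch of that technique, not a tool named in the listing:

```python
import math

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; ~8.0 suggests encrypted/compressed data."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def entropy_map(blob: bytes, window: int = 1024):
    """Yield (offset, entropy) for fixed-size windows across a firmware image."""
    for off in range(0, len(blob), window):
        yield off, round(shannon_entropy(blob[off:off + window]), 2)
```

Running `entropy_map` over an SPI flash dump quickly localizes filesystems, keys, and compressed kernels worth deeper reverse engineering.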

Posted 2 days ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Why join Stryker? We are proud to be named one of the World’s Best Workplaces and a Best Workplace for Diversity by Fortune Magazine! Learn more about our award-winning organization by visiting stryker.com Our total rewards package offering includes bonuses, healthcare, insurance benefits, retirement programs, wellness programs, as well as service and performance awards – not to mention various social and recreational activities, all of which are location specific. Know someone at Stryker? Be sure to have them submit you as a referral prior to applying for this position. Learn more about our employee referral program Who we want: Analytical problem solvers. People who go beyond just fixing issues to identify root causes, evaluate optimal solutions, and recommend comprehensive upgrades to prevent future issues. Strategic thinkers. People who enjoy analyzing data or trends for the purposes of planning, forecasting, advising, budgeting, reporting. Collaborative partners. People who build and leverage cross-functional relationships to bring together ideas, data and insights to drive continuous improvement in functions. What you will do: Developing and maintaining catalog forms, workflows, automations, integrations and configurations pertaining to the requests, order guides and record producer applications in ServiceNow. Implementing Flow Designer, Client Scripts, business rules, UI Actions, approval workflows and all other configurations for the creation, management and maintenance of catalog forms, order guides and record producers. Designing and implementing automations within ServiceNow. Implementing best practices for development while understanding and utilizing update sets to move configurations and development work between instances. Participating in requirements gathering and workshops. Ensuring compliance with ITIL best practices. Provide input and direction to stakeholders and requestors as an expert in service catalog design and delivery.
Experience with testing best practices, creating test scripts, regression testing and user acceptance testing. What you will need: Education & special training: Bachelor’s degree required or equivalent work experience Qualifications & experience: 4+ years of ServiceNow development experience ServiceNow Expertise: A strong understanding of the ServiceNow platform and its capabilities. Including but not limited to, ITSM, Service Requests, Change Requests, Record Producers, Order Guides, Reporting, Workflow and Flow Designer Experience with Orchestration, AD/LDAP integrations, EntraID (Azure) integrations, API Integrations, including REST/SOAP. Strong proficiency in JavaScript, GlideScript, REST, XML, and other relevant technologies. Ability to troubleshoot, analyze, and resolve technical issues, including complex workflows, custom table references and automations. Strong communication and collaboration skills to work effectively with cross-functional teams. Familiarity with the ITIL framework and its application in service management. Experience with SOX enforced policies/procedures and working in a regulated environment is preferred. Ability to manage multiple tasks, projects and priorities in a fast-paced environment. Ability to support different time zones based on the project/business stakeholders being engaged. ServiceNow certifications (e.g., CSA, CIS) are often preferred. Experience with Microsoft Power Automate preferred. ServiceNow architecture and design experience preferred. About Stryker Stryker is one of the world’s leading medical technology companies and, together with our customers, is driven to make healthcare better. The company offers innovative products and services in Medical and Surgical, Neurotechnology, Orthopedics, and Spine that help improve patient and healthcare outcomes. Alongside its customers around the world, Stryker impacts more than 100 million patients annually.
More information is available at stryker.com
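The REST/SOAP integration work in the role above typically goes through ServiceNow's Table API (`/api/now/table/{table}` with `sysparm_*` query parameters). A minimal sketch of building such a request URL in Python — the instance name, table, and field list here are placeholders, not values from the listing:

```python
from urllib.parse import urlencode

def table_api_url(instance: str, table: str, query: str, limit: int = 10) -> str:
    """Build a ServiceNow Table API GET URL for a given encoded query."""
    params = urlencode({
        "sysparm_query": query,        # encoded query, e.g. active=true^category=hardware
        "sysparm_limit": str(limit),
        "sysparm_fields": "sys_id,name",
    })
    return f"https://{instance}.service-now.com/api/now/table/{table}?{params}"
```

The resulting URL would then be fetched with an authenticated HTTP client; `urlencode` takes care of escaping the `^` and `=` characters that ServiceNow encoded queries use.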

Posted 2 days ago

Apply

2.0 - 5.0 years

0 Lacs

Tamil Nadu, India

Remote

Job Title : Web App Developer (AI-Accelerated Development & Internal Automation) Location : Chennai, India (Remote/Hybrid options available) Department : Technology & Automation Reporting To : CEO / Operations Head Experience : 2 to 5 Years Employment Type : Full-time / Contract-to-Hire About The Company Vanan Online Services is a multi-brand digital service provider operating across transcription, translation, captioning, and typing domains, with a global presence in the U.S. and India. We build internal tools and workflows that automate our daily operations, improve turnaround times, and enhance visibility across teams. To keep pace with our growing operational needs, we're hiring a Web App Developer with strong AI-assisted development skills, capable of delivering multiple lightweight web apps under quick turnaround times. Position Overview We are seeking a Web App Developer with hands-on experience in building functional web applications quickly and independently. The ideal candidate is skilled in using modern development frameworks and comfortable leveraging AI tools to speed up coding, troubleshooting, and feature development. This role involves working closely with leadership to understand operational pain points and transforming them into lightweight, efficient web tools that support internal workflows. You'll have the flexibility to recommend and use both full-code and no-code stacks based on the nature of the requirement. Key Responsibilities Collaborate with leadership to gather requirements and understand business use cases. Design and develop internal-use web applications tailored to specific workflows. Build and deploy tools such as internal dashboards, task and project trackers, lead/order management modules, and notification and reporting utilities. Use ChatGPT (Advanced) to generate boilerplate, functional, and test code; plan features, troubleshoot logic, and identify bugs; refactor code based on AI feedback; and optimize workflows.
Build faster through prompt engineering. Select appropriate tech stacks (code or no-code) depending on the app scope. Manage lightweight backends using platforms like Firebase or Supabase. Conduct basic QA, testing, and validation prior to deployment. Ensure apps are intuitive, reliable, and meet internal usability expectations. Handle multiple small-to-mid scale development requests simultaneously. Required Skills & Qualifications Solid understanding of HTML, CSS, JavaScript, and frameworks such as React.js, Vue.js, or Svelte. Working knowledge of Node.js, Firebase, Supabase, or similar backend services. Experience with RESTful APIs and third-party service integration. Proficiency in using AI-based development tools (e.g., ChatGPT) to assist in code writing, refactoring, and debugging. Strong analytical and communication skills to interpret requirements and translate them into functional software. Ability to work independently in a fast-paced environment without day-to-day oversight. Track record of shipping usable products or tools under tight timelines Preferred Skills Familiarity with no-code/low-code platforms (e.g., Bubble, Webflow, Glide, AppGyver, Retool). Experience using automation platforms like Zapier, Make.com, or n8n. Basic UI/UX design sense for creating clean, user-friendly interfaces. Exposure to internal systems such as CRMs, ERPs, or other business tools. Why Join Us ? Immediate impact : Your work will be used daily across teams to enhance productivity and workflow efficiency. Innovation-driven environment : Use cutting-edge tools like ChatGPT to accelerate development. Autonomy & ownership : You'll have the freedom to choose tech stacks and manage your own development process. Collaborative leadership : Work directly with decision-makers, gaining insight and context to build better products. Flexible work model : Options for remote or hybrid work tailored to your preferences. (ref:hirist.tech)

Posted 2 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are seeking a highly skilled and self-driven Site Reliability Engineer to join our dynamic team. This role is ideal for someone with a strong foundation in Kubernetes, DevOps, and observability who can also support machine learning infrastructure, GPU optimization, and Big Data ecosystems. You will play a pivotal role in ensuring the reliability, scalability, and performance of our production systems, while also enabling innovation across ML and data teams. Key Responsibilities Automation & Reliability : Design, build, and maintain Kubernetes clusters across hybrid or cloud environments (e.g., EKS, GKE, AKS). Implement and optimize CI/CD pipelines using tools like Jenkins, ArgoCD, and GitHub Actions. Develop and maintain Infrastructure as Code (IaC) using Ansible, Terraform, or similar. Monitoring & Observability : Deploy and maintain monitoring, logging, and tracing tools (e.g., Thanos, Prometheus, Grafana, Loki, Jaeger). Establish proactive alerting and observability practices to identify and address issues before they impact users. ML Ops & GPU Optimization : Support and scale ML workflows using tools like Kubeflow, MLflow, and TensorFlow Serving. Work with data scientists to ensure efficient use of GPU resources, optimizing training and inference. Troubleshooting & Incident Management : Lead root cause analysis for infrastructure and application-level incidents. Participate in the on-call rotation and improve incident response processes. Scripting & Automation : Automate operational tasks and service deployment using Python, Shell, Groovy, or Ansible. Write reusable scripts and tools to improve team productivity and reduce manual effort. Learning & Collaboration : Stay up-to-date with emerging technologies in SRE, ML Ops, and observability. Collaborate with cross-functional teams including engineering, data science, and security to ensure system integrity and performance. Qualifications : 3+ years of experience as an SRE, DevOps Engineer, or equivalent role. Strong experience with Kubernetes ecosystem and container orchestration.
Proficiency in DevOps tooling including Jenkins, ArgoCD, and GitOps workflows. Deep understanding of observability tools, with hands-on experience using the Thanos and Prometheus stack. Experience with ML platforms (MLflow, Kubeflow) and supporting GPU workloads. Strong scripting skills in Python, Shell, Ansible, or Groovy. Preferred : CKS (Certified Kubernetes Security Specialist) certification. Exposure to Big Data platforms (e.g., Spark, Kafka, Hadoop). Experience with cloud-native environments (AWS, GCP, or Azure). Background in infrastructure security and compliance. (ref:hirist.tech)
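The Prometheus-based alerting work described above usually means consuming the HTTP query API, whose instant-query responses have a stable JSON shape (`status`, `data.result`, per-series `metric` labels and a `value` pair). A small hedged sketch of parsing such a response — the pod names, metric values, and threshold below are invented sample data, not anything from this listing:

```python
import json

# A canned response in the shape returned by Prometheus's /api/v1/query endpoint.
SAMPLE = json.dumps({
    "status": "success",
    "data": {"resultType": "vector", "result": [
        {"metric": {"pod": "web-0"}, "value": [1700000000, "0.93"]},
        {"metric": {"pod": "web-1"}, "value": [1700000000, "0.41"]},
    ]},
})

def pods_over(raw: str, threshold: float):
    """Return pod names whose instant metric value exceeds `threshold`."""
    payload = json.loads(raw)
    if payload["status"] != "success":
        raise RuntimeError("query failed")
    return [s["metric"]["pod"]
            for s in payload["data"]["result"]
            if float(s["value"][1]) > threshold]
```

In a real setup the JSON would come from an HTTP GET against the Prometheus (or Thanos Querier) endpoint, and the result would feed an alert or a dashboard annotation.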

Posted 2 days ago

Apply

8.0 - 12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About The Job We're looking for a highly skilled and self-driven Site Reliability Engineer (SRE-2) to join our team in Hyderabad. This is a full-time, work-from-office role (5 days a week) perfect for someone with 8-12 years of experience who thrives on challenges and is passionate about building robust, scalable, and highly available systems. You'll play a crucial role in ensuring the reliability, performance, and efficiency of our critical infrastructure and applications, with a particular focus on Kubernetes, DevOps, and observability. If you have hands-on experience with ML applications, GPU optimization, and Big Data systems, you'll be an ideal fit. Key Responsibilities As a Site Reliability Engineer (SRE-2), you will : Design, deploy, and manage highly available and scalable Kubernetes clusters and robust DevOps pipelines. Troubleshoot and resolve complex infrastructure and application issues across various environments. Implement, maintain, and enhance comprehensive observability solutions, with a strong emphasis on Thanos and related monitoring and alerting tools. Provide expert support for machine learning (ML) workflows, leveraging tools like MLflow and Kubeflow. Optimize applications to maximize performance in GPU-accelerated environments. Contribute individually to projects and proactively learn and adopt new technologies to stay ahead of industry trends. Automate repetitive tasks and streamline operational processes using a diverse set of scripting and automation tools including Python, Ansible, Groovy, and Shell scripting. Qualifications To be successful in this role, you should have : Strong, hands-on experience with Kubernetes and a deep understanding of core DevOps principles and tools. Proven expertise in observability and monitoring solutions, with a strong preference for experience with Thanos. Demonstrable experience working with ML platforms and optimizing applications for GPU-based environments. 
CKS (Certified Kubernetes Security Specialist) certification is preferred. Experience with Big Data systems is a significant plus. Proficiency in multiple scripting and automation languages : Python, Ansible, Groovy, and Shell scripting. Hands-on experience with CI/CD tools such as Jenkins, Ansible, and ArgoCD. (ref:hirist.tech)

Posted 2 days ago

Apply

2.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title : Site Reliability Engineer Experience : 2 - 5 Years Location : Hyderabad Work Mode : Work From Office (5 Days a Week) Overview We are seeking a proactive and technically skilled Site Reliability Engineer with a strong background in Kubernetes and DevOps practices. This role requires a self-starter who is enthusiastic about automation, observability, and enhancing infrastructure reliability. Key Responsibilities Manage, monitor, and troubleshoot Kubernetes environments in production. Design, implement, and maintain CI/CD pipelines using tools like Jenkins, ArgoCD, and Ansible. Implement and maintain observability solutions (metrics, logs, traces). Automate infrastructure and operational tasks using scripting languages such as Python, Shell, Groovy, or Ansible. Support and optimize ML workflows, including platforms like MLflow and Kubeflow. Collaborate with cross-functional teams to ensure infrastructure scalability, availability, and performance. Qualifications Strong hands-on experience with Kubernetes and container orchestration. Solid understanding of DevOps tools and practices. Experience with observability platforms. Familiarity with MLflow and Kubeflow is a strong plus. CKS (Certified Kubernetes Security Specialist) certification is preferred. Exposure to Big Data environments is an added advantage. Proficient in scripting with Python, Shell, Groovy, or Ansible. Hands-on experience with tools like Jenkins, Ansible, and ArgoCD. (ref:hirist.tech)

Posted 2 days ago

Apply

5.0 - 7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title : DevOps Engineer Experience : 5 - 7 Years Location : Hyderabad Work Mode : Work From Office (5 Days a Week) Overview We are looking for a skilled and proactive DevOps Engineer with hands-on experience in Kubernetes, CI/CD pipeline development, and automation. The ideal candidate should be capable of independently managing infrastructure, ensuring system reliability, and contributing to continuous delivery and improvement initiatives. Key Responsibilities Design, implement, and maintain robust CI/CD pipelines using Jenkins, Ansible, and ArgoCD. Manage, monitor, and troubleshoot Kubernetes clusters and deployments. Automate infrastructure tasks and application deployments using modern scripting and DevOps tools. Collaborate with development, QA, and operations teams to streamline delivery processes. Drive efforts to continuously improve system performance, scalability, and security. Integrate security tools and best practices throughout the CI/CD pipeline lifecycle. Qualifications 5 - 7 years of experience in DevOps, with strong expertise in Kubernetes and containerized environments. Proven hands-on experience with Jenkins, Ansible, ArgoCD, and related DevOps tools. Solid scripting knowledge in Python, Shell, Groovy, and Ansible. Experience with security integrations within CI/CD pipelines is highly desirable. Familiarity with Big Data systems is a plus. CKS (Certified Kubernetes Security Specialist) certification preferred. Strong understanding of system reliability, observability, and performance tuning. (ref:hirist.tech)

Posted 2 days ago

Apply

5.0 - 7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

PLSQL Developer - Pune

Job Title: PL/SQL Developer
Experience: 5 to 7 years
Location: Pune/Hybrid
Notice Period: Immediate to 15 days

Mandatory Skills
Languages: SQL, T-SQL, PL/SQL, Python libraries (PySpark, Pandas, NumPy, Matplotlib, Seaborn)

Roles & Responsibilities
Design and maintain efficient data pipelines and ETL processes using SQL and Python.
Write optimized queries (T-SQL, PL/SQL) for data manipulation across multiple RDBMS.
Use Python libraries for data processing, analysis, and visualization.
Perform EOD (end-of-day) data aggregation and reporting based on business needs.
Work on Azure Synapse Analytics for scalable data transformations.
Monitor and manage database performance across Oracle, SQL Server, Synapse, and PostgreSQL.
Collaborate with cross-functional teams to understand and translate reporting requirements.
Ensure secure data handling and compliance with organizational data policies.
Debug Unix-based scripts and automate batch jobs as needed.

Qualifications
Bachelor's/Master's degree in Computer Science, IT, or a related field.
5-8 years of hands-on experience in data engineering and analytics.
Solid understanding of database architecture and performance tuning.
Experience in end-of-day reporting setups and cloud-based analytics platforms. (ref:hirist.tech)

Posted 2 days ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

The Role
The Data Engineer is accountable for developing high-quality data products to support the Bank’s regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.

Responsibilities
Developing and supporting scalable, extensible, and highly available data solutions.
Deliver on critical business priorities while ensuring alignment with the wider architectural vision.
Identify and help address potential risks in the data supply chain.
Follow and contribute to technical standards.
Design and develop analytical data models.

Required Qualifications & Work Experience
First Class Degree in Engineering/Technology (4-year graduate course).
4-6 years’ experience implementing data-intensive solutions using agile methodologies.
Experience of relational databases and using SQL for data querying, transformation, and manipulation.
Experience of modelling data for analytical consumers.
Ability to automate and streamline the build, test, and deployment of data pipelines.
Experience in cloud-native technologies and patterns.
A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training.
Excellent communication and problem-solving skills.

Technical Skills (Must Have)
ETL: Hands-on experience of building data pipelines. Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend, and Informatica.
Big Data: Experience of ‘big data’ platforms such as Hadoop, Hive, or Snowflake for data storage and processing.
Data Warehousing & Database Management: Understanding of Data Warehousing concepts; Relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design.
Data Modeling & Design: Good exposure to data modeling techniques; design, optimization, and maintenance of data models and data structures.
Languages: Proficient in one or more programming languages commonly used in data engineering, such as Python, Java, or Scala.
DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management.

Technical Skills (Valuable)
Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows.
Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs.
Data Quality & Controls: Exposure to data validation, cleansing, enrichment, and data controls.
Containerization: Fair understanding of containerization platforms like Docker and Kubernetes.
File Formats: Exposure to working with event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, and Delta.
Others: Basics of job schedulers like Autosys. Basics of entitlement management.
Certification on any of the above topics would be an advantage.
------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Digital Software Engineering ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.

Posted 2 days ago

Apply

1.0 - 4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: RPA Developer (Entry Level)
Experience: 1 - 4 Years
Location: Pune
Skills: UiPath, Automation Anywhere (Basics)
Employment Type: Full-Time

Job Description
We are seeking a motivated and detail-oriented entry-level RPA Developer with 1-4 years of experience to join our automation team in Pune. The ideal candidate should have hands-on experience in UiPath and a basic understanding of Automation Anywhere platforms. You will be responsible for developing, testing, and maintaining RPA workflows and solutions to automate business processes across various departments. This is an excellent opportunity to grow your career in Robotic Process Automation while working on impactful projects in a collaborative environment.

Roles And Responsibilities
Analyze business processes to identify automation opportunities.
Design, develop, test, and deploy RPA bots using UiPath.
Support the configuration of bots with Automation Anywhere for basic tasks as required.
Collaborate with business analysts and process owners to understand process workflows.
Create and maintain technical documentation for RPA processes.
Perform code reviews, testing, and debugging of RPA solutions.
Monitor bots in production and handle incidents or enhancements.
Maintain RPA platform best practices and follow SDLC procedures.
Assist in evaluating and implementing new automation tools and solutions.
Provide support for deployed bots; troubleshoot and resolve issues promptly.

Key Skills Required
1-4 years of hands-on experience in RPA development using UiPath.
Basic working knowledge of Automation Anywhere.
Strong analytical and problem-solving skills.
Familiarity with workflow design, exception handling, and logging in RPA.
Knowledge of scripting (VBScript, Python, or JavaScript) is a plus.
Understanding of APIs, SQL, and web services is an advantage.
Good communication and documentation skills.
Ability to work independently as well as in a team environment.

Preferred Qualifications
UiPath RPA Developer Foundation or Advanced Certification.
Bachelor's degree in Computer Science, Engineering, or a related field.
Exposure to Agile methodology or DevOps tools is a plus. (ref:hirist.tech)

Posted 2 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Us
EverExpanse is a dynamic technology-driven organization specializing in modern web and e-commerce solutions. We pride ourselves on building scalable, high-performance applications that drive user engagement and business success. Our development team thrives on innovation and collaboration, delivering impactful digital experiences across diverse domains.

Description
We are seeking experienced Deployment Development Engineers to join our team in Pune. The ideal candidate will have a strong background in Core Java, deployment pipelines, and backend API integrations. This role demands hands-on experience in modern Java (JDK 21), API development, build tools, and AI-assisted development.

Responsibilities
Design and develop scalable backend systems using Core Java (JDK 21).
Develop and integrate REST and SOAP APIs.
Work with Groovy, AWT/Swing, and Spring MVC for UI and service-layer components.
Deploy and manage applications on Tomcat web server.
Implement persistence logic using JDBC and Hibernate.
Work with RDBMS systems, especially PostgreSQL.
Configure and manage builds using Gradle and Maven.
Manage source code versioning using Git, Gerrit, and CVS.
Automate and orchestrate using Ansible.
Leverage AI development tools such as GitHub Copilot, Cursor, Claude Code, etc.
Apply unit testing practices using JUnit.
Utilize MSI Builder for deployment.

Skills
Strong problem-solving and debugging skills.
Hands-on experience in deployment automation.
Familiarity with the software development lifecycle and agile methodologies.
Experience integrating AI-based developer productivity tools.

Why Join Us?
Opportunity to work with cutting-edge technologies, including AI-assisted development.
Collaborative and learning-friendly work culture.
Exposure to enterprise-grade systems and deployment environments. (ref:hirist.tech)

Posted 2 days ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Description
66degrees is seeking a Senior Consultant with specialized expertise in AWS. This resource will lead and scale cloud infrastructure, ensuring high availability, automation, and security across AWS, GCP, and Kubernetes environments. You will be responsible for designing and maintaining highly scalable, resilient, and cost-optimized infrastructure while implementing best-in-class DevOps practices, CI/CD pipelines, and observability solutions. As a key part of our client's platform engineering team, you will collaborate closely with developers, SREs, and security teams to automate workflows, optimize cloud performance, and build the backbone of their microservices platform. Candidates should have the ability to overlap with US working hours, be open to occasional weekend work, and be local to offices in either Noida or Gurgaon, India, as this is an in-office opportunity.

Qualifications
7+ years of hands-on DevOps experience with proven expertise in AWS; involvement in SRE or Platform Engineering roles is desirable.
Experience handling high-throughput workloads with occasional spikes.
Prior industry experience with live sports and media streaming.
Deep knowledge of Kubernetes architecture, managing workloads, networking, RBAC, and autoscaling is required.
Expertise in the AWS platform with hands-on VPC, IAM, EC2, Lambda, RDS, EKS, and S3 experience is required; the ability to learn GCP with GKE is desired.
Experience with Terraform for automated cloud provisioning; Helm is desired.
Experience with FinOps principles for cost optimization in cloud environments is required.
Hands-on experience building highly automated CI/CD pipelines using Jenkins, ArgoCD, and GitHub Actions.
Hands-on experience with service mesh technologies (Istio, Linkerd, Consul) is required.
Knowledge of monitoring tools such as CloudWatch, Google Cloud Logging, and distributed tracing tools like Jaeger; experience with Prometheus and Grafana is desirable.
Proficiency in Python and/or Go for automation, infrastructure tooling, and performance tuning is highly desirable.
Strong knowledge of DNS, routing, load balancing, VPN, firewalls, WAF, TLS, and IAM.
Experience managing MongoDB, Kafka, or Pulsar for large-scale data processing is desirable.
Proven ability to troubleshoot production issues, optimize system performance, and prevent downtime.
Knowledge of multi-region disaster recovery and high-availability architectures.

Desired
Contributions to open-source DevOps projects or a strong technical blogging presence.
Experience with KEDA-based autoscaling in Kubernetes. (ref:hirist.tech)

Posted 2 days ago

Apply

2.0 - 5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: AI/ML Engineer
Experience: 2 - 5 Years
Location: Gurgaon
Key Skills: Python, TensorFlow, Machine Learning Algorithms
Employment Type: Full-Time
Compensation: 8 - 15 LPA

Job Description
We are seeking a highly motivated and skilled AI/ML Engineer to join our growing team in Gurgaon. The ideal candidate should have hands-on experience in developing and deploying machine learning models using Python and TensorFlow. You will work on designing intelligent systems and solving real-world problems using cutting-edge ML algorithms.

Key Responsibilities
Design, develop, and deploy robust ML models for classification, regression, recommendation, and anomaly detection tasks.
Implement and optimize deep learning models using TensorFlow or related frameworks.
Work with cross-functional teams to gather requirements, understand data pipelines, and deliver ML-powered features.
Clean, preprocess, and explore large datasets to uncover patterns and extract insights.
Evaluate model performance using standard metrics and implement strategies for model improvement.
Automate model training and deployment pipelines using best practices in MLOps.
Collaborate with data scientists, software developers, and product managers to bring AI features into production.
Document model architecture, data workflows, and code in a clear and organized manner.
Stay updated with the latest research and advancements in machine learning and AI.

Requirements
Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field.
2-5 years of hands-on experience in developing and deploying ML models in real-world projects.
Strong programming skills in Python and proficiency in ML libraries like TensorFlow, scikit-learn, NumPy, and Pandas.
Solid understanding of supervised, unsupervised, and deep learning algorithms.
Experience with data wrangling, feature engineering, and model evaluation techniques.
Familiarity with version control tools (Git) and deployment tools is a plus.
Good communication and problem-solving skills.

Preferred (Nice To Have)
Experience with cloud platforms (AWS, GCP, Azure) and ML services.
Exposure to NLP, computer vision, or reinforcement learning.
Familiarity with Docker, Kubernetes, or CI/CD pipelines for ML projects. (ref:hirist.tech)

Posted 2 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies