
20958 Automate Jobs - Page 25

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

0.0 years

0 Lacs

Mohali, Punjab

On-site

Job Title: Software Engineer (SDE) Intern
Company: DataTroops
Location: Mohali (Punjab) - Work From Office
Shift: Day shift, as per client decision

About the Role: We are looking for a highly motivated Software Development Engineer (SDE) Intern to join our dynamic team. As an intern, you will have the opportunity to work on challenging projects, solve real-world problems, and gain exposure to full-stack development. You will work closely with experienced engineers and learn best practices in software development while honing your problem-solving and technical skills.

Key Responsibilities: Investigate, debug, and resolve customer-reported bugs and production issues under mentorship. Write clean and efficient code for minor enhancements, internal tools, and support utilities. Collaborate with developers, QA, and product teams to ensure timely resolution of issues. Document solutions, support guides, and FAQs for common issues. Perform data validations, write SQL queries, and automate recurring support tasks. Assist in maintaining internal dashboards, monitoring alerts, and performing routine checks. Participate in stand-ups, sprint meetings, and code reviews to learn agile workflows.

Who Can Apply: Pursuing BCA (Bachelor of Computer Applications), graduating in 2025 or 2026. Enthusiasm to learn technologies related to AI, Big Data, event-driven systems, etc.

Good to Have: Good understanding of basic programming concepts (e.g., Java, Python, JavaScript). Familiarity with databases and writing simple SQL queries. Exposure to any web development or backend frameworks is a plus. Strong willingness to learn, problem-solving attitude, and attention to detail.

Why Join Us? Competitive salary with room for negotiation based on experience. Opportunity to work on cutting-edge technologies and enhance your skills. Supportive and collaborative work environment. On-site work at our Mohali office, allowing for hands-on collaboration and learning.

Compensation: The salary for this internship position will be determined based on the candidate's experience, skills, and performance during the interview process.

How to Apply: If you're ready to take on new challenges and grow with us, send your resume to hr@datatroops.io

Note: Only candidates based in the Tricity area or willing to relocate to Mohali will be considered for this role.

Job Types: Full-time, Fresher, Internship
Pay: ₹1.00 per hour
Schedule: Monday to Friday, with weekend availability
Work Location: In person

Posted 3 days ago

Apply

5.0 - 7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Python Developer
Experience Level: 5-7 Years
Location: Hyderabad

Job Description: We are seeking an experienced Lead Python Developer with a proven track record of building scalable and secure applications, specifically in the travel and tourism industry. The ideal candidate should possess in-depth knowledge of Python, modern development frameworks, and expertise in integrating third-party travel APIs. This role demands a leader who can foster innovation while adhering to industry standards for security, scalability, and performance.

Roles and Responsibilities:
Application Development: Architect and develop robust, high-performance applications using Python frameworks such as Django, Flask, and FastAPI.
API Integration: Design and implement seamless integration with third-party APIs, including GDS, CRS, OTA, and airline-specific APIs, to enable real-time data retrieval for booking, pricing, and availability.
Data Management: Develop and optimize complex data pipelines to manage structured and unstructured data, utilizing ETL processes, data lakes, and distributed storage solutions.
Microservices Architecture: Build modular applications using microservices principles to ensure scalability, independent deployment, and high availability.
Performance Optimization: Enhance application performance through efficient resource management, load balancing, and faster query handling to deliver an exceptional user experience.
Security and Compliance: Implement secure coding practices, manage data encryption, and ensure compliance with industry standards such as PCI DSS and GDPR.
Automation and Deployment: Leverage CI/CD pipelines, containerization, and orchestration tools to automate testing, deployment, and monitoring processes.
Collaboration: Work closely with front-end developers, product managers, and stakeholders to deliver high-quality, user-centric solutions aligned with business goals.

Requirements:
Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Technical Expertise: At least 4 years of hands-on experience with Python frameworks like Django, Flask, and FastAPI. Proficiency in RESTful APIs, GraphQL, and asynchronous programming. Strong knowledge of SQL/NoSQL databases (PostgreSQL, MongoDB) and big data tools (e.g., Spark, Kafka). Experience with cloud platforms (AWS, Azure, Google Cloud), containerization (Docker, Kubernetes), and CI/CD tools (e.g., Jenkins, GitLab CI). Familiarity with testing tools such as PyTest, Selenium, and SonarQube. Expertise in travel APIs, booking flows, and payment gateway integrations.
Soft Skills: Excellent problem-solving and analytical abilities. Strong communication, presentation, and teamwork skills. A proactive attitude with a willingness to take ownership and perform under pressure.
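The posting above centers on asynchronous integration with third-party travel APIs using frameworks such as FastAPI. As a purely illustrative sketch, such an endpoint might look like the snippet below; the supplier URL, query parameters, and response fields are assumptions, not any real GDS/OTA interface.

```python
# Minimal sketch: an async FastAPI endpoint that proxies a hypothetical
# third-party availability API and reshapes the payload into an internal format.
# The supplier URL, parameters, and field names are placeholders.
import httpx
from fastapi import FastAPI, HTTPException

app = FastAPI()
SUPPLIER_URL = "https://supplier.example.test/v1/availability"  # assumed endpoint

@app.get("/availability/{route}")
async def availability(route: str, date: str):
    async with httpx.AsyncClient(timeout=10.0) as client:
        resp = await client.get(SUPPLIER_URL, params={"route": route, "date": date})
    if resp.status_code != 200:
        raise HTTPException(status_code=502, detail="Supplier API error")
    payload = resp.json()
    # Internal response format (assumed fields).
    return {"route": route, "date": date, "offers": payload.get("offers", [])}
```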

Posted 3 days ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Coupa makes margins multiply through its community-generated AI and industry-leading total spend management platform for businesses large and small. Coupa AI is informed by trillions of dollars of direct and indirect spend data across a global network of 10M+ buyers and suppliers. We empower you with the ability to predict, prescribe, and automate smarter, more profitable business decisions to improve operating margins.

Why join Coupa? 🔹 Pioneering Technology: At Coupa, we're at the forefront of innovation, leveraging the latest technology to empower our customers with greater efficiency and visibility in their spend. 🔹 Collaborative Culture: We value collaboration and teamwork, and our culture is driven by transparency, openness, and a shared commitment to excellence. 🔹 Global Impact: Join a company where your work has a global, measurable impact on our clients, the business, and each other. Learn more on the Life at Coupa blog and hear from our employees about their experiences working at Coupa.

The Impact of Sr. Software Engineer to Coupa: At a technical level, your development team will offer application and infrastructure support for customer environments. You'll have the opportunity to collaborate across software products with engineers all over the company and globe to plan and deploy product releases.

What you'll do: As a Sr. Software Engineer, you will help scale our Coupa platforms as we expand and find the right balance between the power of a consolidated codebase and the flexibility of microservices. You will collaborate with Product and Development teams to build new features and find creative and elegant solutions to complex problems. You will have the ability to participate in code reviews to create robust and maintainable code and work in an agile environment where quick iterations and good feedback are a way of life.

What you will bring to Coupa: Bachelor's Degree in Computer Science, Information Technology or related field. 4+ years of software development experience (preferably with Java, Python or Ruby). Strong object-oriented design and analysis skills. Experience building REST APIs and microservices. Good understanding of common design patterns. Experience with React.js (or a similar JavaScript framework) and CSS. MySQL and general database knowledge, including performance and optimization. Critical thinker with a curious, passionate and growth-oriented mindset.

Coupa complies with relevant laws and regulations regarding equal opportunity and offers a welcoming and inclusive work environment. Decisions related to hiring, compensation, training, or evaluating performance are made fairly, and we provide equal employment opportunities to all qualified candidates and employees. Please be advised that inquiries or resumes from recruiters will not be accepted. By submitting your application, you acknowledge that you have read Coupa's Privacy Policy and understand that Coupa receives/collects your application, including your personal data, for the purposes of managing Coupa's ongoing recruitment and placement activities, including for employment purposes in the event of a successful application and for notification of future job opportunities if you did not succeed the first time. You will find more details about how your application is processed, the purposes of processing, and how long we retain your application in our Privacy Policy.

Posted 3 days ago

Apply

0.0 - 1.0 years

0 Lacs

Delhi, Delhi

On-site

Position: Accounts Executive (MIS & Reporting)
Location: Lajpat Nagar, Delhi
Salary: Up to ₹3.6 LPA
Experience: 6 Months to 1 Year
Joining: Immediate preferred

About the Role: We are looking for a proactive and detail-oriented Accounts Executive with strong Excel proficiency and a foundational understanding of accounting and taxation. This role is ideal for someone who can manage data, build reports and dashboards, and support internal teams with timely financial insights and MIS reporting.

Key Responsibilities: Develop, maintain, and manage daily, weekly, and monthly reports and dashboards. Collect and analyze data to identify trends, variances, and improvement opportunities. Cross-verify data from multiple sources to ensure accuracy and consistency of reports. Automate repetitive reporting tasks using Excel tools like VBA or Power Query. Coordinate with sales, finance, and operations teams for data collection and report generation. Prepare MIS reports for leadership and assist in business performance reviews. Provide actionable insights based on historical performance and trends. Support audits and data reconciliation activities as required.

Candidate Requirements: 6 months to 1 year of relevant experience. Strong proficiency in Microsoft Excel (formulas, pivot tables, charts, etc.). Basic knowledge of accounting principles and taxation. Experience with Excel-based automation tools (VBA, Power Query) is a plus. Good analytical and communication skills. Attention to detail and ability to meet deadlines.

Job Types: Full-time, Permanent
Pay: Up to ₹300,000.00 per year
Benefits: Health insurance, Life insurance, Paid sick time, Provident Fund
Work Location: In person

Posted 3 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

ManpowerGroup is embarking on a significant program to fundamentally transform technology across the company. At the heart of the transformation is a significant strengthening and globalization of the company's technology infrastructure. It will involve the establishment of an enterprise infrastructure organization through the centralization and consolidation of diverse, federated legacy solutions in place in more than 60 countries representing revenues of $20+ bn. The resultant technology infrastructure and application landscape should provide best-in-class service (resilient, elastic, stable) in a cost-effective manner with strong operational controls and information security.

To build a strong technology foundation, ManpowerGroup is considering key major programs in technology areas such as business delivery management and achieving delivery excellence by leveraging core infrastructure and application platforms such as public cloud, data center migrations, or automation capability. There is also significant investment in application solutions and technology to drive digitization and business process changes. Each one of the above, in itself, is a significant project, but when combined they represent a large and complex global transformation program which will fundamentally change the technology landscape in the organization and provide long-term benefits.

Purpose of the role: This role will be a part of the Global Technology Infrastructure function and will design and develop sustainable solutions suitable for IT Operations and Security Operations, and design and implement robust discovery and a multi-source CMDB for Asset Management and ITOM.

Responsibilities:
• SME in conducting the full range of technical design, development/configuration, and delivery of Service Management solutions and management capabilities.
• Create high-level solution models and architectures for all aspects of Service Management which align with organizational requirements, meeting the company's Information Technology and Security policies and standards, and operational and integration needs.
• Own and manage solution engineering roadmaps and blueprints, defining and delivering an ongoing continuous improvement and maintenance program.
• Drive implementation and continuous management of the Service Management tool for global rollout across 60+ countries.
• Deliver ServiceNow to enable and automate IT Service, Operations, Customer Service, and Risk management processes.
• Define and document business process responsibilities and ownership of the controls.
• Measure and report on compliance with mandatory technology control standards and processes.
• Lead periodic meetings with technology teams to discuss remediation status, roadblocks, and development plans.
• Perform the role of senior technical expert for the planning, design, and delivery of the ServiceNow solution, and make recommendations for improvements.
• Provide a comprehensive end-to-end architecture for the stakeholders' business and technical requirements, ensuring the solution aligns across the business for people, process, and systems domains.
• Gather business and technical requirements from stakeholders in order to produce a requirements specification document.
• Analyze and investigate possible solutions in order to meet the business and technical requirements.
• Produce professional high-level solution descriptions and obtain customer acceptance of these solutions.
• Be responsible for the technical solution through all phases of the project, support the project managers, and work within a defined change management process.
• Manage the overall highly complex, multi-country, multi-entity, multi-year, strategic engineering roadmap.
• Ensure the overall requirements are captured and documented (in-scope and out-of-scope), along with assumptions and exceptions.
• Develop workflows to support BaU requirements in ITSM, ITBM/ITOM/CMDB, and CSM modules.
• Get involved in RFPs in coordination with the Strategic Partner Management Team, PMO, and Technology Infrastructure teams.
• Manage and engage closely with architecture, technical lead, and engineering partners from various vendor and partner organizations.
• Ensure compliance with remediation workflows, policies, procedures, and controls.

Required:
• Breadth and depth of Technology Infrastructure domains, industry, and organizational knowledge across multiple functions, platforms, and services.
• Driving Service Management standards and processes.
• Ability to work in a collaborative, agile environment, and excitement to learn.
• Experience in developing strategic business solutions as a part of creating engineering solutions.
• Multiple years of experience leading end-to-end engineering teams.
• Experience in communicating with end users and technical & business teams to collect requirements, describe product features, and technical designs.
• Excellent understanding of and implementation experience with a variety of ITSM tools (e.g., ServiceNow, JIRA, Remedy, etc.)

Posted 3 days ago

Apply

4.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Your Journey at Crowe Starts Here: At Crowe, you can build a meaningful and rewarding career. With real flexibility to balance work with life moments, you're trusted to deliver results and make an impact. We embrace you for who you are, care for your well-being, and nurture your career. Everyone has equitable access to opportunities for career growth and leadership. Over our 80-year history, delivering excellent service through innovation has been a core part of our DNA across our audit, tax, and consulting groups. That's why we continuously invest in innovative ideas, such as AI-enabled insights and technology-powered solutions, to enhance our services. Join us at Crowe and embark on a career where you can help shape the future of our industry.

Job Description: The Senior Software Engineer role works within our agile/scrum Product Engineering team to develop software via assigned tasks. This role leads initiatives with little guidance to the definition of done, working with immediate peers and communicating across the immediate team.

Qualifications/Requirements: Bachelor's degree in computer science, MCA, MIS, Information Systems or engineering fields, or equivalent experience. 4 – 7+ years of relevant experience. Understanding of object-oriented programming (OOP) and design patterns. Experience with CRM/Dataverse customizations with basic and advanced configurations. 2+ years of experience with C#, HTML, CSS, JS/jQuery. Experience with CRM Plugins / Workflows / Actions concepts. Experience with SQL Server Reporting Services and/or Power BI reporting concepts. Experience with Dynamics CRM / Dataverse product modules (Sales/Marketing/Service). Understanding of CRM solution management concepts. Understanding of security management concepts (security roles, privileges, teams, access teams, field-level security, hierarchy security, etc.). Experience with a JavaScript framework such as React, Angular, or similar, a plus. Experience with data migration (SSIS packages using Kingsway, the out-of-the-box wizard, or the console). Experience working with Power Automate (MS Flow). Experience working with MS Portal/Power Portal/Power Pages. Familiarity with building SaaS solutions using Azure services such as Service Bus, Azure SQL Database, Azure Data Factory, Azure Functions, API Manager, and Azure App Services. 2+ years of experience with agile environments. Experience leveraging Azure DevOps (ADO) or Jira for work item management and CI/CD automation, as well as Git for version control. Technology certifications, a plus.

Responsibilities: Applying current and learning new technical skills and understanding required to complete tasks to the definition of done. Conducts and facilitates code reviews at times, ensuring coding standards are being met. Writes technical documentation for application development. Proactively seeks in-depth knowledge of all the applications and code worked on, even code not written by the individual. Contributes to product demonstrations. Conducts proof-of-concepts for approaches when asked and helps to provide pro/con inputs to the team for decision making. Contributes to work process improvement concepts.

Skills: Communication – Communicates with the wider team. Provides feedback on communication of others. Empathy & Humility – Constantly pushes for a better understanding of the needs and perspectives of those outside your viewpoint. Makes sound decisions keeping the customer in the forefront. Initiative – Assesses and initiates tasks and projects independently. Objectivity & Adaptability – Can change your mind over strongly held beliefs and pursue a new path with no loss of velocity. Growth Mindset – Open to learning new skills and recognizes weakness in themselves. Writing Code – Consistently writes testable, readable code across larger, more complex projects. You are an advocate for quality. Testing – Independently tests and advises the rest of the team on quality of tests. Debugging & Monitoring – Systematically debugs issues located within a single service, while taking greater responsibility for the monitoring systems. Technical Understanding & Prioritization – Displays clear technical confidence and understanding, prioritizes tasks and acts accordingly. Security – Understands the importance of security and starts to see work through a security lens. Software Architecture – Has a good understanding of, and designs functions that are aligned with, the overall service architecture. You understand changes may have an effect beyond the immediate change. Also understands changes may impact external integrations and/or other dependencies and is conscious to plan and communicate accordingly. Business Context – Knows how the business operates on a high level as well as their core team metrics and can use that knowledge in daily decisions with help. Product Knowledge – Understands the purpose of the product. Learning how it can be adapted to meet different needs. Culture & Togetherness – Is conscious of signaling and tries to act as they would expect other team members to act. Works to develop good positive relationships. Participates in team activities. Developing Others – Recognizes strengths of peers and looks for ways to support those strengths through project work. Invests time in materials or processes to support team growth. Peers see you as an informal coach. Hiring & Org Design – A competent interviewer. Follows a structured hiring process and contributes to a decision. Stakeholder Management – Is able to keep tangential teams informed and expectations managed around everyday work updates. Uses judgement about others' reactions when disclosing information and opinions. Team Leadership – Capable of informally managing interns and other staff. Possibly manages one or two junior team members. Does not look for glory and does not complain about work that needs to be done. Assumes good decisions in others' work. Broadly, you do what you say you are going to do.

We expect the candidate to uphold Crowe's values of Care, Trust, Courage, and Stewardship. These values define who we are. We expect all of our people to act ethically and with integrity at all times.

Our Benefits: At Crowe, we know that great people are what makes a great firm. We value our people and offer employees a comprehensive benefits package. Learn more about what working at Crowe can mean for you!

How You Can Grow: We will nurture your talent in an inclusive culture that values diversity. You will have the chance to meet on a consistent basis with your Career Coach that will guide you in your career goals and aspirations. Learn more about where talent can prosper!

More about Crowe: C3 India Delivery Centre LLP, formerly known as Crowe Howarth IT Services LLP, is a wholly owned subsidiary of Crowe LLP (U.S.A.), a public accounting, consulting and technology firm with offices around the world. Crowe LLP is an independent member firm of Crowe Global, one of the largest global accounting networks in the world.
The network consists of more than 200 independent accounting and advisory firms in more than 130 countries around the world. Crowe does not accept unsolicited candidates, referrals or resumes from any staffing agency, recruiting service, sourcing entity or any other third-party paid service at any time. Any referrals, resumes or candidates submitted to Crowe, or any employee or owner of Crowe without a pre-existing agreement signed by both parties covering the submission will be considered the property of Crowe, and free of charge.

Posted 3 days ago

Apply

4.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

Remote

Position: Python Full Stack Developer
Location: Remote (India-based preferred)
Engagement: Full-time (8-9 hours/day)
Duration: 3 months (extendable by 6 months based on performance and outcomes)
Experience: 4-6 years (5 years of relevant experience preferred)

About Us: We are a small, fast-moving startup building AI agents for the trading sector, with tight deadlines and a strong culture of ownership. We are looking for passionate individual contributors who can think independently, handle end-to-end tasks, and collaborate in a small, focused team.

What You Will Do:
● Design and develop end-to-end features across backend and frontend.
● Integrate external APIs and transform data into internal formats.
● Build middleware services with authentication and clean architecture.
● Work with Python for backend services and data processing.
● Design and implement React + TypeScript frontend pages with responsive UI.
● Leverage AWS services for deployment and scaling.
● Use PostgreSQL for structured data storage and efficient querying.
● Apply knowledge of system design to plan scalable, maintainable solutions.
● Use AI tools and APIs to automate tasks and improve workflow efficiency.

Key Skills Required:
● Strong experience with Python for backend development with Django or Flask.
● Solid AWS knowledge for deploying and managing services, and awareness of pipelines.
● Proficiency with React and TypeScript for frontend development.
● Experience with REST APIs or FastAPI: building, consuming, and securing them.
● Good understanding of PostgreSQL or similar relational databases.
● Ability to design clean, maintainable middleware and API integrations.
● Exposure to system design concepts for small-scale, production-ready systems.
● Familiarity with AI/ML tools (e.g., LLM APIs, automation frameworks) is a strong plus.

Who We Are Looking For:
● Passionate problem-solvers who take ownership of features end-to-end.
● Comfortable working in a small team with minimal hand-holding.
● Ready to work with tight deadlines.
● Self-driven, disciplined, and able to prioritize independently.
● Eager to experiment with AI tools to speed up and automate work.
● Strong communication and a collaborative mindset despite being remote.

Why Join Us:
● Direct impact on building AI-driven products for trading and finance.
● Opportunity to work on cutting-edge API integrations and automation.
● Ownership of features from system design to production.
● Tight-knit, passionate team with fast feedback cycles.
● Learning environment with exposure to modern AI, cloud, and web tech.

Posted 3 days ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job description:

Role Purpose: The purpose of this role is to design, test, and maintain software programs for operating systems or applications which need to be deployed at a client end, and to ensure they meet 100% quality assurance parameters.

Do:
1. Instrumental in understanding the requirements and design of the product/software. Develop software solutions by studying information needs, systems flow, data usage, and work processes. Investigate problem areas through the software development life cycle. Facilitate root cause analysis of system issues and the problem statement. Identify ideas to improve system performance and impact availability. Analyze client requirements and convert requirements to feasible design. Collaborate with functional teams or systems analysts who carry out the detailed investigation into software requirements. Confer with project managers to obtain information on software capabilities.

2. Perform coding and ensure optimal software/module development. Determine operational feasibility by evaluating analysis, problem definition, requirements, software development, and proposed software. Develop and automate processes for software validation by setting up and designing test cases/scenarios/usage cases and executing these cases. Modify software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces. Analyze information to recommend and plan the installation of new systems or modifications of an existing system. Ensure that code is error-free, with no bugs or test failures. Prepare reports on programming project specifications, activities, and status. Ensure all the codes are raised as per the norm defined for the project/program/account with a clear description and replication patterns. Compile timely, comprehensive, and accurate documentation and reports as requested. Coordinate with the team on daily project status and progress, and document it. Provide feedback on usability and serviceability, trace the result to quality risk, and report it to concerned stakeholders.

3. Status reporting and customer focus on an ongoing basis with respect to the project and its execution. Capture all the requirements and clarifications from the client for better quality work. Take feedback on a regular basis to ensure smooth and on-time delivery. Participate in continuing education and training to remain current on best practices, learn new programming languages, and better assist other team members. Consult with engineering staff to evaluate software-hardware interfaces and develop specifications and performance requirements. Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments, and clear code. Document necessary details and reports in a formal way for proper understanding of the software, from client proposal to implementation. Ensure good quality of interaction with the customer w.r.t. e-mail content, fault report tracking, voice calls, business etiquette, etc. Respond to customer requests on time, with no instances of complaints either internally or externally.

Deliver:
No. | Performance Parameter | Measure
1 | Continuous integration, deployment & monitoring of software | 100% error-free onboarding & implementation, throughput %, adherence to the schedule/release plan
2 | Quality & CSAT | On-time delivery, manage software, troubleshoot queries, customer experience, completion of assigned certifications for skill upgradation
3 | MIS & reporting | 100% on-time MIS & report generation

Mandatory Skills: Generative AI.
Experience: 5-8 Years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 3 days ago

Apply

1.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Experience: 1+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Office (Noida)
Placement Type: Full-time Permanent Position
(Note: This is a requirement for one of Uplers' clients)

What do you need for this opportunity? Must-have skills required: Email Marketing, Lead Generation

Role Overview:

Key Responsibilities: Identify potential leads through online research, social platforms (LinkedIn, etc.), databases, and other sources. Qualify leads and maintain an organized and updated CRM with accurate lead information. Use tools like Lusha, Apollo.io, and Hunter.io for lead enrichment and verification. Conduct outbound campaigns using email marketing tools and CRM systems. Collaborate with the sales team to schedule meetings and follow up with prospects. Maintain a strong pipeline of high-quality leads to support business development targets. Track and report KPIs like conversion rate, response rate, etc.

Requirements: 1-3 years of experience in lead generation or a similar role (preferably in a SaaS or IT company). Familiarity with tools like Lusha, CRM platforms (HubSpot, Zoho, etc.), and email campaign tools. Strong communication skills, both written and verbal. Proactive, self-motivated, and target-driven mindset. Ability to work independently and manage time effectively.

About Client: Rannkly is a fast-growing SaaS platform helping brands manage their digital reputation, automate marketing, and drive customer engagement. We're expanding our business development team and looking for a passionate Lead Generation Executive who can help fuel our growth by identifying and connecting with potential clients.

How to apply for this opportunity? Step 1: Click on Apply and register or log in on our portal. Step 2: Complete the screening form and upload your updated resume. Step 3: Increase your chances of getting shortlisted and meet the client for the interview!

Posted 3 days ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Summary: Support the data analytics & audit program of the Internal Audit function. The data analytics and audit program involves identifying and creating computer-assisted audit techniques that increase the depth and breadth beyond conventional financial audit techniques for significant company applications. This role will be responsible for spearheading the function's use of AI (Artificial Intelligence) to perform these computer-assisted audit techniques and advance the department's adoption of AI.

What You'll Do: Utilize available data to drive the department's continuous risk identification program and work with department leadership to ensure efficient generation of actionable results and integration with the development of the risk-based audit plan. Support the data analysis needs of Audit teams and Audit Lead Data Analysts in planning and completing financial and operational audits, including: gaining an understanding of business processes, evaluating potential risks, and working with audit teams and customers to define data indicators of risk; identifying opportunities to partner with audit teams and provide analytic services; gaining an understanding of respective audit objectives and creatively defining analytics that can deliver efficient and effective audit approaches for the audit teams; supporting audit teams in the data acquisition process to ensure the requirements for scoped analytics are met within reasonable timeframes; executing analytic service requests while maintaining acceptable cycle time/quality standards and effectively leveraging consulting teams. Continuously improve analytic operations and audit efficiency/effectiveness by soliciting feedback, critically assessing processes, and defining improvement requirements. Assist in driving adoption of AuditBoard usage in alignment with the department's Operational Excellence (OPEX) program. Create and maintain departmental reporting using data from AuditBoard and a variety of other sources. Demonstrate analytic proficiency for all assigned applications and an array of analytic techniques in order to independently define, develop, and execute complex yet meaningful analytics. Effectively communicate findings and complex analytical solutions to upper management. Support and drive transformation in how we work. Coordinate with department leadership to identify opportunities for continuous improvement of the department's processes and practices through the use of the Eaton Business System and other relevant process improvement tools. Participate in the implementation of new processes.

Key Responsibilities:

Data Analysis and AI Model Development: Develop and implement AI models, including machine learning and deep learning algorithms, to analyze large datasets and generate actionable insights. Perform data preprocessing, feature engineering, and model validation to ensure the accuracy and reliability of AI solutions. Continuously monitor and refine AI models to improve performance and adapt to changing business needs.

Business Collaboration: Work closely with business stakeholders to understand their requirements and translate them into data-driven solutions. Collaborate with IT, data science, and other departments to integrate AI technologies into existing workflows and systems. Communicate complex analytical findings and AI solutions to non-technical stakeholders in a clear and concise manner.

Data Management and Visualization: Manage and manipulate large datasets using advanced data analysis tools and techniques. Create data visualizations and dashboards to present insights and trends to stakeholders. Ensure data quality and integrity by implementing best practices in data governance and management.

Continuous Improvement: Stay up to date with the latest advancements in AI and data analytics technologies. Identify opportunities for process improvement and innovation through the application of AI and data analytics. Participate in the development and implementation of new processes and tools to enhance the efficiency and effectiveness of data analytics initiatives.

Special Projects and Ad-Hoc Analysis: Perform special projects and ad-hoc data analysis requests as needed. Support the organization in addressing complex business challenges through data-driven approaches.

Qualifications: BS degree in Information Systems, Computer Science, Finance, Accounting, or Mathematical/Statistical disciplines. At least 3 years of prior auditing and data analytics experience.

Technical Knowledge: Experience analyzing manufacturing business processes, including business process flowcharting and risk/control analysis and assessment. Knowledge of database structures, data mapping, and experience extracting/analyzing data from common Enterprise Resource Planning (ERP) systems such as SAP, Oracle, and MFG/PRO. Strong analytical skills and advanced knowledge of one or more common data analysis tools and CAAT (Computer Assisted Audit Technique) technologies (e.g., ACL, Python, R, SQL, Alteryx). Experience with the Microsoft Power Platform, including Power BI, Power Apps, Power Automate, or similar data transformation tools. Strong knowledge and working experience with data manipulation tools to query large databases and manipulate large data files. Experience with common data analysis/mining techniques (e.g., trend analysis, data regression, data modeling). Adept in using advanced features of MS Excel. Experience working with data visualization tools (e.g., Power BI, Tableau, QlikView). Working knowledge of key auditing and accounting concepts (GAAS, GAAP) and experience in supporting/participating in an audit activity. Professional certification (CPA, CA, CIA, CMA, CFE, etc.) preferred.

Soft Skills: Strong attention to detail and an ability to prioritize and work in a highly fluid and fast-paced environment. Strong communication skills, both written and verbal. Strong interpersonal skills, with the ability to promote ideas and work effectively with all levels within the organization. Ability to deliver meaningful results that clearly and succinctly report and present key issues, business impact, and recommendations for improvement. A proactive "can do" attitude, with the desire to have an impact, add value to the organization, and a mindset for continuous improvement. Demonstrated ability to negotiate timelines and delivery dates and to resolve conflict between partners. Experience with computer forensics work is desired, but not required.
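For a concrete sense of the computer-assisted audit techniques (CAATs) this role describes, a minimal sketch follows; the file name and column names are assumptions for illustration, not this employer's actual data model.

```python
# Minimal sketch of a CAAT-style check in pandas: flag potential duplicate
# vendor payments (same vendor, amount, and date). The file name and columns
# ("vendor_id", "amount", "payment_date") are assumed placeholders.
import pandas as pd

payments = pd.read_csv("payments_extract.csv", parse_dates=["payment_date"])

dupes = payments[
    payments.duplicated(subset=["vendor_id", "amount", "payment_date"], keep=False)
].sort_values(["vendor_id", "payment_date"])

print(f"{len(dupes)} rows flagged for review")
dupes.to_csv("potential_duplicate_payments.csv", index=False)
```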

Posted 3 days ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Position: Sr Data Operations
Years of Experience: 6-8 years
Job Location: S.B. Road, Pune; remote for other locations

The Position: We are seeking a seasoned engineer with a passion for changing the way millions of people save energy. You'll work within the Deliver and Operate team to build and improve our platforms to deliver flexible and creative solutions to our utility partners and end users, and help us achieve our ambitious goals for our business and the planet. We are seeking a highly skilled and detail-oriented Software Engineer II for the Data Operations team to maintain our data infrastructure, pipelines, and workflows. You will play a key role in ensuring the smooth ingestion, transformation, validation, and delivery of data across systems. This role is ideal for someone with a strong understanding of data engineering and operational best practices who thrives in high-availability environments.

Responsibilities & Skills: You should: Monitor and maintain data pipelines and ETL processes to ensure reliability and performance. Automate routine data operations tasks and optimize workflows for scalability and efficiency. Troubleshoot and resolve data-related issues, ensuring data quality and integrity. Collaborate with data engineering, analytics, and DevOps teams to support data infrastructure. Implement monitoring, alerting, and logging systems for data pipelines. Maintain and improve data governance, access controls, and compliance with data policies. Support deployment and configuration of data tools, services, and platforms. Participate in on-call rotation and incident response related to data system outages or failures.

Required Skills: 5+ years of experience in data operations, data engineering, or a related role. Strong SQL skills and experience with relational databases (e.g., PostgreSQL, MySQL). Proficiency with data pipeline tools (e.g., Apache Airflow). Experience with cloud platforms (AWS, GCP) and cloud-based data services (e.g., Redshift, BigQuery). Familiarity with scripting languages such as Python, Bash, or Shell. Knowledge of version control (e.g., Git) and CI/CD workflows.

Qualifications: Bachelor's degree in Computer Science, Engineering, Data Science, or a related field. Experience with data observability tools (e.g., Splunk, DataDog). Background in DevOps or SRE with a focus on data systems. Exposure to infrastructure-as-code (e.g., Terraform, CloudFormation). Knowledge of streaming data platforms (e.g., Kafka, Spark Streaming).
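As an illustration of the pipeline monitoring and automation work described above, here is a minimal Apache Airflow sketch; the DAG id, schedule, and task body are assumptions, not this employer's actual pipeline.

```python
# Minimal Airflow sketch: a daily DAG with one validated extract step and
# automatic retries. The DAG id, schedule, and placeholder extract are assumed.
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_validate(**context):
    # Placeholder extract (a real task would query a warehouse or API);
    # raising on an empty batch exercises the retry/alerting path.
    rows = [{"id": 1}]
    if not rows:
        raise ValueError("Empty batch: upstream extract produced no rows")
    return len(rows)

with DAG(
    dag_id="daily_usage_etl",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    PythonOperator(task_id="extract_and_validate", python_callable=extract_and_validate)
```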

Posted 3 days ago

Apply

0.0 - 1.0 years

0 - 0 Lacs

Noida, Uttar Pradesh

On-site

Read the JD carefully before applying!

Job Title: DevOps Engineer
Location: Noida (Sector 63)
Mode: Onsite
Experience: 1.6+ years (freshers and less experienced candidates should not apply)
Immediate joiners only.

We are seeking a DevOps Engineer with 1.6+ years of experience in DevOps practices and cloud technologies. The ideal candidate will have hands-on experience with CI/CD pipelines, cloud infrastructure, and automation. As part of our team, you'll play a key role in enhancing our system reliability, automating processes, and improving our deployment pipeline.

Key Responsibilities: Assist in maintaining and improving our CI/CD pipeline for automated testing, building, and deployment. Collaborate with the development team to ensure seamless integration of software components. Manage cloud infrastructure (AWS, GCP) and ensure systems are scalable, secure, and cost-effective. Automate and optimize processes for software delivery and deployment using tools like Docker, Kubernetes, and Jenkins. Monitor and maintain system performance, reliability, and uptime. Assist in troubleshooting, diagnosing, and resolving system issues. Ensure infrastructure and deployments comply with best practices for security and cost optimization.

Qualifications: 1.6+ years of experience as a DevOps Engineer or in a similar role. Experience with cloud platforms (AWS, GCP). Familiarity with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.). Knowledge of containerization technologies (Docker, Kubernetes). Basic scripting skills (Bash, Python, or similar). Familiarity with infrastructure as code (Terraform, CloudFormation). Strong communication and collaboration skills. Ability to work independently and within a team in a fast-paced environment.

Nice to Have: Experience with monitoring tools like Prometheus, Grafana, or AWS CloudWatch. Basic knowledge of networking, load balancing, and DNS. Experience with version control systems like Git.

Job Type: Full-time
Pay: ₹20,000.00 - ₹35,000.00 per month
Ability to commute/relocate: Noida, Uttar Pradesh: Reliably commute or planning to relocate before starting work (Preferred)
Application Question(s): Immediate joiner?
Experience: DevOps: 2 years (Required); AWS: 1 year (Required)
Location: Noida, Uttar Pradesh (Required)
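As a purely illustrative example of the scripting and cost-optimization work such a role involves, the snippet below lists unattached EBS volumes with boto3; the region and what is done with the results are assumptions.

```python
# Minimal sketch: find unattached (and therefore billable but unused) EBS
# volumes in one region with boto3. The region is an assumed example; a real
# script would feed this into reporting or cleanup automation.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # assumed region
paginator = ec2.get_paginator("describe_volumes")

for page in paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}]):
    for vol in page["Volumes"]:
        print(vol["VolumeId"], vol["Size"], "GiB", vol["CreateTime"])
```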

Posted 3 days ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Designation: ML / MLOps Engineer
Location: Noida (Sector 132)

Key Responsibilities:
• Model Development & Algorithm Optimization: Design, implement, and optimize ML models and algorithms using libraries and frameworks such as TensorFlow, PyTorch, and scikit-learn to solve complex business problems.
• Training & Evaluation: Train and evaluate models using historical data, ensuring accuracy, scalability, and efficiency while fine-tuning hyperparameters.
• Data Preprocessing & Cleaning: Clean, preprocess, and transform raw data into a suitable format for model training and evaluation, applying industry best practices to ensure data quality.
• Feature Engineering: Conduct feature engineering to extract meaningful features from data that enhance model performance and improve predictive capabilities.
• Model Deployment & Pipelines: Build end-to-end pipelines and workflows for deploying machine learning models into production environments, leveraging Azure Machine Learning and containerization technologies like Docker and Kubernetes.
• Production Deployment: Develop and deploy machine learning models to production environments, ensuring scalability and reliability using tools such as Azure Kubernetes Service (AKS).
• End-to-End ML Lifecycle Automation: Automate the end-to-end machine learning lifecycle, including data ingestion, model training, deployment, and monitoring, ensuring seamless operations and faster model iteration.
• Performance Optimization: Monitor and improve inference speed and latency to meet real-time processing requirements, ensuring efficient and scalable solutions.
• NLP, CV, GenAI Programming: Work on machine learning projects involving Natural Language Processing (NLP), Computer Vision (CV), and Generative AI (GenAI), applying state-of-the-art techniques and frameworks to improve model performance.
• Collaboration & CI/CD Integration: Collaborate with data scientists and engineers to integrate ML models into production workflows, building and maintaining continuous integration/continuous deployment (CI/CD) pipelines using tools like Azure DevOps, Git, and Jenkins.
• Monitoring & Optimization: Continuously monitor the performance of deployed models, adjusting parameters and optimizing algorithms to improve accuracy and efficiency.
• Security & Compliance: Ensure all machine learning models and processes adhere to industry security standards and compliance protocols, such as GDPR and HIPAA.
• Documentation & Reporting: Document machine learning processes, models, and results to ensure reproducibility and effective communication with stakeholders.

Required Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
• 3+ years of experience in machine learning operations (MLOps), cloud engineering, or similar roles.
• Proficiency in Python, with hands-on experience using libraries such as TensorFlow, PyTorch, scikit-learn, Pandas, and NumPy.
• Strong experience with Azure Machine Learning services, including Azure ML Studio, Azure Databricks, and Azure Kubernetes Service (AKS).
• Knowledge and experience in building end-to-end ML pipelines, deploying models, and automating the machine learning lifecycle.
• Expertise in Docker, Kubernetes, and container orchestration for deploying machine learning models at scale.
• Experience in data engineering practices and familiarity with cloud storage solutions like Azure Blob Storage and Azure Data Lake.
• Strong understanding of NLP, CV, or GenAI programming, along with the ability to apply these techniques to real-world business problems.
• Experience with Git, Azure DevOps, or similar tools to manage version control and CI/CD pipelines.
• Solid experience in machine learning algorithms, model training, evaluation, and hyperparameter tuning.
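To make the training, evaluation, and hyperparameter-tuning responsibilities above concrete, here is a minimal, self-contained scikit-learn sketch; the dataset and parameter grid are toy assumptions rather than a production pipeline.

```python
# Minimal sketch: train, tune, and evaluate a classifier with scikit-learn.
# The bundled dataset and small parameter grid are illustrative placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Hyperparameter tuning with cross-validation.
search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [100, 200], "max_depth": [None, 10]},
    cv=5,
    scoring="accuracy",
)
search.fit(X_train, y_train)

# Hold-out evaluation before any deployment step.
preds = search.best_estimator_.predict(X_test)
print("best params:", search.best_params_, "test accuracy:", accuracy_score(y_test, preds))
```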

Posted 3 days ago

Apply

7.0 years

0 Lacs

India

On-site

This job is with Allianz Commercial, an inclusive employer and a member of myGwork – the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly.

Experience: 7 to 12 years' experience in Middleware. Deep experience with JBoss and Tomcat, including customization for J2EE applications. Strong Linux/Unix knowledge. Experience with Ansible for automation. Understanding of firewall rules and network architectures. Expertise in TLS authentication and LDAP. Familiarity with application user login mechanisms. Experience with SAML authentication (i.e., igb2b / dxp / vpf support). Knowledge of HTTP sessions via cookies. Proficiency in Java JEE, Tomcat, JBoss, and HTTP protocols. Understanding of Web Application Firewalls (WAF). Experience with application logins. Web application development expertise. Proficiency in Eclipse, Maven, and Git. Experience with JUnit testing and test-driven development. Strong SQL and PL/SQL knowledge. Experience in DevOps to automate CI/CD pipelines and ensure smooth deployment processes.

Nice to Have: Experience with Spring Boot and Azure/cloud platforms. Knowledge of and experience with Agile development methodologies. Good communication skills. Code development and maintenance experience. Demonstrates a good analytical and systematic approach to problem solving. Understands and uses appropriate methods, tools, and applications. Willingness to continuously learn and upgrade skills. A basic understanding of or exposure to AI tools would be a plus.

Allianz Group is one of the most trusted insurance and asset management companies in the world. Caring for our employees, their ambitions, dreams and challenges, is what makes us a unique employer. Together we can build an environment where everyone feels empowered and has the confidence to explore, to grow and to shape a better future for our customers and the world around us. We at Allianz believe in a diverse and inclusive workforce and are proud to be an equal opportunity employer. We encourage you to bring your whole self to work, no matter where you are from, what you look like, who you love or what you believe in. We therefore welcome applications regardless of ethnicity or cultural background, age, gender, nationality, religion, disability or sexual orientation. Join us. Let's care for tomorrow.

Note: Diversity of minds is an integral part of Allianz' company culture. One means to achieve diverse teams is a regular rotation of Allianz Executive employees across functions, Allianz entities and geographies. Therefore, the company encourages its employees to have motivation in gaining varied skills from different positions and to collect experiences from across Allianz Group.

Posted 3 days ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Automation Lead – Infrastructure & Scripting
Location: Hyderabad (Hybrid)
Experience: 6+ years
Job Type: Contract – 6 months

About the Role: We are seeking a results-driven Automation Lead with strong expertise in Python, Ansible, and other scripting tools to drive automation initiatives across our IT infrastructure landscape. The ideal candidate will have a solid background in networking, Active Directory, and general infrastructure operations, along with a passion for solving complex problems through automation.

Key Responsibilities: Lead the design, development, and deployment of automation solutions for infrastructure operations using Python, Ansible, and other tools. Identify manual processes and develop scripts/playbooks to automate configuration, provisioning, and monitoring. Collaborate with network, server, and platform teams to understand requirements and develop end-to-end automation workflows. Maintain and enhance existing automation frameworks, ensuring scalability and maintainability. Implement and manage configuration management, compliance, and orchestration strategies. Mentor junior engineers and establish automation best practices across teams. Integrate with CI/CD pipelines to streamline delivery and deployment processes. Monitor automation performance and provide continuous improvements and updates.

Required Skills and Experience: 6+ years of experience in infrastructure engineering or automation. Strong hands-on experience with Python for scripting and automation. Expertise in Ansible for configuration management and orchestration. Experience with other scripting tools such as PowerShell, Bash, or shell scripting is a plus. Solid understanding of network fundamentals (switching, routing, VLANs, firewalls). Exposure to Active Directory, DNS, DHCP, and other Windows infrastructure services. Experience integrating with REST APIs for automation and monitoring purposes. Exposure to version control systems such as Git and CI/CD tools like Jenkins, GitLab CI, etc. Strong troubleshooting and analytical skills with an automation-first mindset.

Education: Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field (or equivalent practical experience).

If you are interested, please share your updated resume to prashanth@intellistaff.in
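As an illustration of the REST-API-driven automation this role calls for, here is a minimal Python sketch; the inventory service URL, endpoint, token, and response fields are all hypothetical placeholders rather than any specific product's API.

```python
# Minimal sketch: query a hypothetical infrastructure inventory REST API and
# print device name/status pairs. Base URL, endpoint, token, and field names
# are assumed for illustration only.
import requests

BASE_URL = "https://inventory.example.internal/api"  # assumed service
TOKEN = "REPLACE_ME"                                  # assumed auth token

def list_devices(site: str) -> list[dict]:
    resp = requests.get(
        f"{BASE_URL}/devices",
        params={"site": site},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return [{"name": d["name"], "status": d["status"]} for d in resp.json()]

if __name__ == "__main__":
    for device in list_devices("hyd-dc1"):  # assumed site code
        print(device)
```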

Posted 3 days ago

Apply

3.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job description:

Role Purpose: The purpose of this role is to prepare test cases and perform testing of the product/platform/solution to be deployed at a client end, and to ensure it meets 100% quality assurance parameters.

Do: Instrumental in understanding the test requirements and test case design of the product. Author test plans with appropriate knowledge of business requirements and corresponding testable requirements. Implement Wipro's way of testing using model-based testing and achieve efficient test generation. Ensure the test cases are peer reviewed to reduce rework. Work with the development team to identify and capture test cases, and ensure versioning. Set the criteria, parameters, and scope/out-of-scope of testing, and be involved in UAT (User Acceptance Testing). Automate the test life cycle process at the appropriate stages through VB macros, scheduling, GUI automation, etc. Design and execute the automation framework and reporting. Develop and automate tests for software validation by setting up test environments, designing test plans, developing test cases/scenarios/usage cases, and executing these cases. Ensure the test defects raised are as per the norm defined for the project/program/account with a clear description and replication patterns. Detect bug issues, prepare defect reports, and report test progress. No instances of rejection/slippage of delivered work items, and they are within the Wipro/customer SLAs and norms. Design and release the test status dashboard on time at the end of every test execution cycle to the stakeholders. Provide feedback on usability and serviceability, trace the result to quality risk, and report it to concerned stakeholders.

Status reporting and customer focus on an ongoing basis with respect to testing and its execution: Ensure good quality of interaction with the customer w.r.t. e-mail content, fault report tracking, voice calls, business etiquette, etc. On-time deliveries: WSRs, test execution reports, and relevant dashboard updates in the test management repository. Update accurate efforts in eCube, TMS, and other project-related trackers. Respond to customer requests on time, with no instances of complaints either internally or externally.

No. | Performance Parameter | Measure
1 | Understanding the test requirements and test case design of the product | Ensure error-free testing solutions, minimum process exceptions, 100% SLA compliance, # of automation done using VB macros
2 | Execute test cases and reporting | Testing efficiency & quality, on-time delivery, troubleshoot queries within TAT, CSAT score

Mandatory Skills: Tosca Testsuite - Test Automation.
Experience: 3-5 Years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 3 days ago

Apply

0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Company Description At Vibe6, we harness the power of AI, ML, IoT, Web3, and Blockchain to drive digital transformation. Our mission is to innovate, automate, and elevate businesses with cutting-edge technology. From custom software and mobile apps to next-gen decentralized solutions, we empower enterprises to stay ahead in a rapidly evolving digital world. Vibe6 blends innovation with expertise to deliver scalable, high-performance solutions that redefine industries. Role Description This is a full-time on-site role for a Digital Marketing Executive located in Indore. The Digital Marketing Executive will be responsible for developing and implementing digital marketing strategies, managing social media accounts, creating web content, and analyzing web analytics. The role involves creating compelling marketing content, enhancing online presence, and ensuring effective communication across digital platforms. Qualifications Marketing and Social Media Marketing skills Excellent Communication abilities Web Content Writing skills Proficiency in Web Analytics Ability to work collaboratively in an on-site environment Bachelor's degree in Marketing, Communications, or related field Experience in digital marketing and social media management is preferred Familiarity with video editing tools, community management, and Google Analytics

Posted 3 days ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: DevOps Engineer Location: Hybrid – Hyderabad/Mumbai/Pune/Bengaluru/Chennai About the Job: We are seeking a skilled and proactive DevOps Engineer with hands-on experience in AWS Cloud, CI/CD tools, containerization, and security scanning practices. This role requires a strong foundation in automation, cloud services, and end-to-end DevOps lifecycle support to ensure high availability, scalability, and security across environments. You will work closely with cross-functional teams to streamline deployment workflows, implement secure CI/CD pipelines, and support cloud-native solutions using modern infrastructure and monitoring tools. What you will do: Design, implement, and manage CI/CD pipelines using Jenkins or equivalent CI tools. Administer source control systems like Git, Bitbucket, SVN, and AWS CodeCommit. Integrate mandatory security tools like Acunetix and Veracode into pipelines; leverage SonarQube, Fortify, and MEND as additional security tools. Manage cloud infrastructure on AWS, including provisioning, configuration, and deployment using tools like Terraform and Ansible. Support container orchestration using AWS EKS and Kubernetes, including deployment, scaling, and monitoring of applications. Automate operational tasks and build scripts in Bash, Python, and other scripting languages. Utilize AWS services including Lambda, Load Balancer, S3, ECS, ECR, Cognito, SNS, CloudWatch, Route 53, API Gateway, EC2, Secrets Manager, WAF, ElastiCache, and Amazon EventBridge. Maintain and enhance middleware platforms including WebSphere, WebLogic, and JBoss. Use tools such as Postman, Microsoft Graph, and text editors like VI/VIM for testing and configuration. Collaborate with development, QA, and operations teams to ensure reliable and secure delivery of software. Document workflows, configurations, and support knowledge transfer across teams. Who you are: Education & Experience: Bachelor's degree in Computer Science, Engineering, or a related discipline. 4–6 years of experience in DevOps or infrastructure engineering roles. Proven experience working with AWS and CI/CD tools in production environments. DevOps or cloud certifications (AWS, Kubernetes, Terraform, etc.) are a strong plus. Technical Skills: Primary Tools: Jenkins, CodeCommit, Git, Bitbucket, Artifactory (JFrog), SVN. Security Scanning: Acunetix, Veracode (mandatory), SonarQube, MEND, Fortify (preferred). Infrastructure as Code: Terraform, Ansible. AWS Cloud Services: EKS (mandatory), Lambda, Load Balancer, S3, ECS, ECR, Cognito, SNS, CloudWatch, Route 53, API Gateway, EC2, Secrets Manager, WAF, ElastiCache, EventBridge. Containerization & Orchestration: Docker, Podman, Kubernetes. Build & Package: Maven, Ant, Gradle. Middleware Platforms: WebSphere, WebLogic, JBoss. Scripting: Shell, Bash, Perl, Python, Jenkins Shared Libraries, JSON, XML. Reporting & Tracking: JIRA, other ticketing systems. OS Environments: Linux, Windows. Databases: Oracle DB, MySQL. Soft Skills: Excellent problem-solving and analytical skills. Proactive mindset with attention to security, performance, and reliability. Ability to work independently and collaboratively in a dynamic environment. Strong organizational skills and attention to detail. Commitment to continuous learning and upskilling. English language proficiency is required to communicate effectively in a professional environment. Excellent communication skills are a must. Strong problem-solving skills and a creative mindset to bring fresh ideas to the table.
Should demonstrate confidence and self-assurance in their skills and expertise enabling them to contribute to team success and engage with colleagues and clients in a positive, assured manner. Should be accountable and responsible for deliverables and outcomes. Should demonstrate ownership of tasks, meet deadlines, and ensure high-quality results. Demonstrates strong collaboration skills by working effectively with cross-functional teams, sharing insights, and contributing to shared goals and solutions. Continuously explore emerging trends, technologies, and industry best practices to drive innovation and maintain a competitive edge.
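
A small illustrative sketch of the sort of AWS automation this role involves, using boto3 to flag running EC2 instances missing mandatory tags; the required tag keys and region are assumptions, not an organisational standard:

```python
import boto3

# Tag keys required by policy are assumptions; adjust to your tagging standard.
REQUIRED_TAGS = {"Owner", "Environment", "CostCenter"}

def untagged_instances(region: str = "ap-south-1") -> list[str]:
    """Return IDs of running EC2 instances missing any of the required tags."""
    ec2 = boto3.client("ec2", region_name=region)
    offenders = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tag_keys = {t["Key"] for t in instance.get("Tags", [])}
                if not REQUIRED_TAGS.issubset(tag_keys):
                    offenders.append(instance["InstanceId"])
    return offenders

if __name__ == "__main__":
    print(untagged_instances())
```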

Posted 3 days ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Roles & Responsibilities: Duties and responsibilities include, but are not limited to, the following. · Develop/build IT solutions to meet business requirements. · Manage, evolve, and build CI/CD pipelines. · Integrate solutions with other applications and platforms outside the framework. · Design, develop, and implement reusable IaC components. · Write scripts to automate builds and deployments on AWS Cloud and on-premises data centers. · Automate, build, and provide production systems support, which may include duties such as deployment, configuration, monitoring, and troubleshooting of Linux servers. · Automate deployment of and support Linux- and Windows-based infrastructure services (web, NFS, SFTP, DNS, LDAP, etc.). · Automate deployment of and support cloud-based network services (load balancers, routers, firewalls). · Orchestrate deployment of application and infrastructure clusters within a public cloud environment utilizing a Cloud Management Platform. · Monitor and tune the performance of the operating system and applications for optimal operational efficiency. · Document existing and new public cloud deployments using run books and cloud architecture diagrams. · Implement processes to standardize best practices and procedures, capacity planning, and risk mitigation. · Collaborate with technical/business teams to assess requirements and recommend solutions. · Maintain QA and production configurations using automation tools. · Code and document custom test automation frameworks. · Perform script maintenance and updates due to changes in requirements or implementations. · Set up and maintain the test environments for both manual and automated testing. · Build automated deployments using configuration management technology. · Automate deployment of new modules, upgrades, and fixes to the production environment. · Document and complete knowledge transfer to production support. · Work with Release Management to ensure modules are production ready. · Verify the functionality of components and services and ensure deployment meets the client's expectations. Technical Requirements: · Bachelor's degree in Programming/Systems or Computer Science, or equivalent experience. · Typically requires overall 8 to 12 years of analysis and programming experience. · Must have experience working in an IaC environment and with applications, systems, or IT operations. · Experience working in an agile team environment. · Experience working with the AWS public cloud is a must. · Experience working with a Cloud Management Platform (RightScale preferred). · Experience configuring and supporting Linux- and Windows-based infrastructure services (web, NFS, SFTP, DNS, LDAP, PGP, etc.). · Experience with Continuous Integration tools such as TeamCity or Jenkins preferred. · Experience with configuration management tools such as Chef, Ansible, or Puppet is a must. · Experience with some aspects of computer security: network security, application security, security protocols, cryptography, IAM, Active Directory, and ADFS. · Understanding of load balancers, TCP/IP, HTTP/HTTPS, SSL/TLS certificate management, DNS, and network routing. · Experience with AWS services and plugins. · Experience with container technologies (Docker, Kubernetes) is required. · Experience using the Elasticsearch/ELK stack for application monitoring. · Worked with at least 2 to 3 application servers such as JBoss, IIS, and WebLogic. · Must have used scripting automation with tools such as Ruby, Python, PowerShell, and JavaScript. · Knowledge of REST/SOAP APIs. · Knowledge of XML and JSON file formats.
· Demonstrated ability to analyze and interpret complex problems or processes, identify and understand requirements, and develop alternate solutions. · Excellent communications skills and the ability to effectively communicate findings both written and orally using both technical and non-technical terms. Preferred Qualities: · Bachelor’s degree in computer science, Engineering, or a related field (Master's degree preferred). · Typically requires overall 12+ Years of Experience in DevOps · Experience with AWS services and plugins · Experience configuring and supporting Linux and Windows based infrastructure services (web, nfs, sftp, DNS, LDAP, pgp etc) · Must have Experience working in IaC environment and applications, systems or IT operations. Additional Preferred Skills · Cloud management Platform using RightScale · Oracle Cloud Platform experience · AWS Associate or Professional level certification is a plus
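
One of the listed responsibilities is SSL/TLS certificate management; a minimal, standard-library-only sketch that flags certificates nearing expiry (the hostnames are placeholders):

```python
import socket
import ssl
import time

# Placeholder hostnames; replace with the endpoints you actually monitor.
ENDPOINTS = ["example.com", "example.org"]
WARN_DAYS = 30

def days_until_expiry(host: str, port: int = 443) -> int:
    """Return the number of days until the server's TLS certificate expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires_ts = ssl.cert_time_to_seconds(cert["notAfter"])
    return int((expires_ts - time.time()) // 86400)

if __name__ == "__main__":
    for host in ENDPOINTS:
        remaining = days_until_expiry(host)
        flag = "RENEW SOON" if remaining < WARN_DAYS else "OK"
        print(f"{host}: {remaining} days remaining [{flag}]")
```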

Posted 3 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Position: As a Senior Service Reliability Engineer at Proofpoint, you will develop a deep understanding of the various services and applications that come together to deliver Proofpoint's next-generation security products. You will be responsible for maintaining and extending the Elasticsearch and Splunk clusters used for critical near-real-time data analysis. Role: SRE Elasticsearch Administrator Location: Pune & Bengaluru Experience: 5 to 12 years Job Type: Full-Time Employment What You'll Do: In this role, you will be continually evaluating the performance of our Elasticsearch and Splunk clusters to spot developing problems, planning changes for upcoming high-load events, applying security fixes, testing and performing incremental upgrades, and extending and improving our monitoring and alert infrastructure. You'll also be involved in maintaining other parts of the data pipeline, which may include serverless or server-based systems for feeding data into the Elasticsearch pipeline. We're continually trying to optimize our cost-vs-performance position, and so testing new types of hosts or configurations is an ongoing focus. We do much of our work with declarative tools such as Puppet, and various scripting mechanisms (depending on the target environment). In general, we want to automate as much as possible and aim for a 'build once/run everywhere' system. Some of our Elasticsearch clusters are in the public cloud, some are in Kubernetes clusters, and some are in private datacenters. This will be an opportunity to work with a variety of types of infrastructure and operations teams. Build long-lasting, effective partnerships across the organization to foster collaboration between Product, Engineering, and Operations teams. Participate in an on-call rotation and be willing to jump on escalated issues as needed. Expertise You'll Bring: Bachelor's degree in Computer Science, Information Technology, Engineering, or a related discipline is required. Expertise in administration and management of Elasticsearch clusters (primary). Expertise in administration and management of Splunk clusters (secondary). Strong knowledge of provisioning and configuration management tools like Puppet, Ansible, Rundeck, etc. Experience building automation and Infrastructure as Code using Terraform, Packer, or CloudFormation templates (plus). Experience with monitoring and logging tools like Splunk, Prometheus, PagerDuty, etc. Experience with scripting languages like Python, Bash, Go, Ruby, Perl, etc. Experience with CI/CD tools like Jenkins, Pipelines, Artifactory, etc. An inquisitive mind with the ability to learn where the data exists in a large and disparate system and what that data means. The skills to do effective troubleshooting, following a problem wherever it may lead.
Benefits : Competitive salary and benefits package Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications Opportunity to work with cutting-edge technologies Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards Annual health check-ups Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents Inclusive Environment: Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive. Our company fosters a value-driven and people-centric work environment that enables our employees to: Accelerate growth, both professionally and personally Impact the world in powerful, positive ways, using the latest technologies Enjoy collaborative innovation, with diversity and work-life wellbeing at the core Unlock global opportunities to work and learn with the industry’s best Let’s unleash your full potential at Persistent “Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind.”
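
The posting emphasises continuous evaluation of Elasticsearch cluster health; a minimal, standard-library-only sketch of such a check, assuming an unauthenticated cluster endpoint (production clusters will normally require TLS and credentials):

```python
import json
import urllib.request

# Placeholder endpoint; production clusters typically also need auth headers and TLS.
ES_URL = "http://localhost:9200"

def cluster_health(base_url: str) -> dict:
    """Fetch Elasticsearch cluster health via the _cluster/health API."""
    with urllib.request.urlopen(f"{base_url}/_cluster/health", timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    health = cluster_health(ES_URL)
    status = health.get("status", "unknown")      # green / yellow / red
    unassigned = health.get("unassigned_shards", 0)
    print(f"status={status} nodes={health.get('number_of_nodes')} unassigned_shards={unassigned}")
    if status != "green" or unassigned:
        print("ALERT: cluster is degraded; investigate shard allocation and node health")
```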

Posted 3 days ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

You'll be part of global teams across Bupa markets You'll get to work on building innovative digital health solutions About Our Client Bupa is a leading international healthcare company, established in 1947. You may know us as Niva Bupa in India, but globally Bupa has over 80,000 employees, 50m customers and an ambition to reach 100m customers by 2027. Job Description Understand and apply Bupa testing standards and processes throughout the delivery of the project. Based on the adoption of approved quality testing practices, demonstrate that the software application meets the business and functional requirements and is fit for purpose. Perform a range of work activities with squad members to define, execute, document, and maintain test cases. Understand requirements and acceptance criteria, write comprehensive test cases and scenarios, automate tests, and create test data. Should adopt an automation-first mindset. Design, execute, and maintain UI, API, and mobile automation scripts. Assist the Test Lead with the creation of testing artifacts such as test plans and test reports based on defined scope and deliverables. Work collaboratively within the squad and the wider delivery teams during the build of new features/enhancements. Log and manage defects identified during test execution, track them to closure in a timely manner, and participate in defect triage meetings. Source the test data required for test execution based on the test scenarios/cases identified, by working with the requisite teams. Assist the Test Lead/Sr. Test Lead with the estimation of various testing activities. The Successful Applicant Bachelor's degree in Computer Science or equivalent. Good knowledge of agile development processes. Hands-on experience in writing, executing, and debugging automated scripts with tools like Playwright or Selenium (C# preferred). Strong experience with API automation (must-have). Experience with mobile automation tools such as Appium, TestProject, or Perfecto (nice to have). Automation experience on Windows-based applications (advantageous). Experience in various testing types such as functional, UI, API, and database testing. Basic to intermediate SQL skills, with the ability to write and execute SQL queries for database verification. Experience working in large-scale and complex environments (nice to have). What's on Offer Career Development: You'll be part of global teams across Bupa markets, supporting your own professional development through international collaborations, while at the same time invigorating the delivery of products and services to our customers and employees. Innovation and Learning: Our GCC offers a modern workspace, designed to support innovation and encourage our people to think big, take calculated risks, and continuously learn and grow. State-of-the-Art Technologies: You'll get to work on building innovative digital health solutions, using the latest technologies, including AI, machine learning and cloud computing, helping to support your own growth and development. Thriving at Work: We foster a work environment where employees can thrive while making a meaningful difference. We're creating a balanced and fulfilling workplace, where you'll feel valued, be encouraged to grow your career, and motivated to deliver our purpose - helping people to live longer, healthier lives and make a better world. Contact: Anusha Raina Quote job ref: JN-072025-6791627
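
API automation is the must-have skill here; a small illustrative pytest sketch of a functional API check (the base URL, resource path, and expected fields are hypothetical, not Bupa's actual API):

```python
import pytest
import requests

# Hypothetical endpoint and expected schema; replace with the real service under test.
BASE_URL = "https://api.example.com"

@pytest.fixture(scope="session")
def session():
    s = requests.Session()
    s.headers.update({"Accept": "application/json"})
    return s

def test_get_policy_returns_expected_fields(session):
    """Functional API check: status code, content type, and mandatory fields."""
    resp = session.get(f"{BASE_URL}/policies/12345", timeout=10)
    assert resp.status_code == 200
    assert resp.headers["Content-Type"].startswith("application/json")
    body = resp.json()
    for field in ("policyId", "status", "startDate"):
        assert field in body, f"missing field: {field}"
```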

Posted 3 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Please read the job description before applying. Qualifications: Bachelor's degree / Diploma or equivalent experience in Engineering (e.g., Electrical, Mechanical, Automation, Mechatronics, or related fields). Job Description & Skills Needed: Understanding of automation systems, including PLCs, SCADA, HMI, robotics, and industrial communication protocols (OPC, Modbus, Ethernet/IP). Proficiency in PLC programming (Siemens, Allen-Bradley) and automation software tools. Experience with SCADA reporting, data logging, and related functions. Ability to identify and troubleshoot malfunctions, replacing parts and components as needed. Good coordination and communication skills, maintaining strong relationships with team members and clients. Familiarity with safety procedures while carrying out electrical work. Competence in data analysis tools (Python, SQL) for monitoring and optimization. Competence in Microsoft 365 tools such as Power Apps, Power Automate, and Dataverse. Good knowledge of and user experience with various industrial communication protocols. Knowledge of application areas in the pharma and heat-treatment industries.
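
The posting calls for Python-based analysis of SCADA data logs; a minimal illustrative sketch, assuming a CSV export whose file name and columns are hypothetical:

```python
import pandas as pd

# Hypothetical SCADA data-log export; column names are assumptions.
LOG_FILE = "furnace_datalog.csv"  # columns: timestamp, zone, temperature_c, setpoint_c

def summarize_datalog(path: str) -> pd.DataFrame:
    """Per-zone deviation statistics between measured temperature and setpoint."""
    df = pd.read_csv(path, parse_dates=["timestamp"])
    df["deviation"] = df["temperature_c"] - df["setpoint_c"]
    return (
        df.groupby("zone")["deviation"]
          .agg(["mean", "std", "max", "min"])
          .round(2)
    )

if __name__ == "__main__":
    print(summarize_datalog(LOG_FILE))
```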

Posted 3 days ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Cloud/Linux Administrator For Gurgaon Location Job Summary: We are seeking a skilled and proactive Cloud/Linux Administrator to join our IT infrastructure team. The ideal candidate will have hands-on experience with AWS Cloud , RedHat Linux OS administration , and a strong background in patching and troubleshooting . You will be responsible for maintaining the health, performance, and security of our cloud and on-premise Linux environments. Key Responsibilities: Administer and maintain AWS cloud infrastructure , including EC2, S3, IAM, VPC, and related services. Perform RedHat Linux OS administration , including installation, configuration, upgrades, and performance tuning. Manage and automate patching processes for Linux systems to ensure compliance and security. Troubleshoot and resolve system issues related to performance, connectivity, and configuration. Monitor system health and respond to alerts and incidents in a timely manner. Implement and maintain system security, backup, and disaster recovery procedures. Collaborate with DevOps, Security, and Application teams to support deployment and infrastructure needs. Document system configurations, procedures, and troubleshooting steps. Required Skills & Qualifications: 3+ years of experience in Linux system administration , preferably with RedHat . Strong hands-on experience with AWS Cloud services . Proficiency in patch management tools and automation scripts (e.g., Ansible, Shell, Python). Solid understanding of networking concepts , firewalls, and security best practices. Experience with monitoring tools (e.g., CloudWatch, Nagios, Zabbix). Strong problem-solving and analytical skills. Excellent communication and documentation abilities. Preferred Qualifications: AWS Certification (e.g., AWS Certified SysOps Administrator, Solutions Architect). Red Hat Certified System Administrator (RHCSA) or Engineer (RHCE). Experience with Infrastructure as Code (IaC) tools like Terraform or CloudFormation.
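
For the patching responsibilities on RedHat systems, a minimal sketch that lists packages with pending updates via dnf; this assumes it runs on the RHEL host itself, whereas fleet-scale patching would normally go through Ansible or AWS Systems Manager:

```python
import subprocess

def pending_updates() -> list[str]:
    """Return package names with pending updates on a RHEL host (dnf)."""
    # 'dnf check-update' exits 100 when updates are available, 0 when none.
    result = subprocess.run(
        ["dnf", "check-update", "-q"], capture_output=True, text=True
    )
    if result.returncode not in (0, 100):
        raise RuntimeError(f"check-update failed: {result.stderr.strip()}")
    packages = []
    for line in result.stdout.splitlines():
        parts = line.split()
        if len(parts) >= 3:  # name, version, repo
            packages.append(parts[0])
    return packages

if __name__ == "__main__":
    updates = pending_updates()
    print(f"{len(updates)} packages have pending updates")
    for name in updates[:10]:
        print(" -", name)
```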

Posted 3 days ago

Apply

7.0 - 10.0 years

8 - 16 Lacs

Vijay Nagar, Indore, Madhya Pradesh

On-site

Role: Sr. Data Engineer Location: Indore, Madhya Pradesh Experience: 7-10 Years Job Type: Full-time Job Summary: As a Data Engineer with a focus on Python, you'll play a crucial role in designing, developing, and maintaining data pipelines and ETL processes. You will work with large-scale datasets and leverage modern tools like PySpark, Airflow, and AWS Glue to automate and orchestrate data processes. Your work will support critical decision-making by ensuring data accuracy, accessibility, and efficiency across the organization Key Responsibilities: Design, build, and maintain scalable data pipelines using Python. Develop ETL processes for extracting, transforming, and loading data. Optimise SQL queries and database schemas for enhanced performance. Collaborate with data scientists, analysts, and stakeholders to understand data needs. Implement and monitor data quality checks to resolve any issues. Automate data processing tasks with Python scripts and tools. Ensure data security, integrity, and regulatory compliance. Document data processes, workflows, and system designs. Primary Skills: Python Proficiency: Experience with Python, including libraries such as Pandas, NumPy, and SQLAlchemy. PySpark: Hands-on experience in distributed data processing using PySpark. AWS Glue: Practical knowledge of AWS Glue for building serverless ETL pipelines. SQL Expertise: Advanced knowledge of SQL and experience with relational databases (e.g., PostgreSQL, MySQL). Data Pipeline Development: Proven experience in building and maintaining data pipeline and ETL processes. Cloud Data Platforms: Familiarity with cloud-based platforms like AWS Redshift, Google BigQuery, or Azure Synapse Data Warehousing: Knowledge of data warehousing and data modelling best practices. Version Control: Proficiency with Git. Preferred Skills: Big Data Technologies: Experience with tools like Hadoop or Kafka Data Visualization: Familiarity with visualisation tools (e.g., Tableau, Power BI). DevOps Practices: Understanding of CI/CD pipelines and DevOps practices. Data Governance: Knowledge of data governance and security best practices. Job Type: Full-time Pay: ₹800,000.00 - ₹1,600,000.00 per year Work Location: In person Application Deadline: 30/07/2025 Expected Start Date: 19/07/2025
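
A minimal PySpark sketch of the kind of ETL step described here; the S3 paths and column names are placeholders, not an actual pipeline:

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical input/output locations and column names.
SOURCE_PATH = "s3://raw-bucket/orders/*.csv"
TARGET_PATH = "s3://curated-bucket/orders_daily/"

spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

# Extract: read raw CSV files with a header row.
orders = spark.read.option("header", True).csv(SOURCE_PATH)

# Transform: cast types, drop bad rows, and aggregate revenue per day.
daily = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .dropna(subset=["order_date", "amount"])
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_revenue"), F.count("*").alias("order_count"))
)

# Load: write partitioned Parquet for downstream consumers.
daily.write.mode("overwrite").partitionBy("order_date").parquet(TARGET_PATH)
```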

Posted 3 days ago

Apply

5.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Alteryx Server Platform Architect. Architect and administer distributed Alteryx Server deployments with multiple worker nodes, ensuring high availability, elasticity, and role-based execution queues. Automate end-to-end analytics pipelines leveraging Alteryx workflows, macros, and analytic apps, tightly integrated with Denodo, SAP, REST APIs, SFTP, and enterprise data lakes. Implement CI/CD for Alteryx workflows using version control (Git), API-based deployments, and scripting (Python/PowerShell) for Gallery object promotion and audit tracking. Create data quality gates and approval workflows for Alteryx job publication, aligning with enterprise SDLC and data stewardship protocols. Administer user tenancy, studio separation, and integration with Active Directory groups and external identity providers, ensuring least privilege and entitlement reviews. Continuously analyze server performance with internal telemetry, logs, and usage heatmaps to recommend compute scaling, job orchestration improvements, and node tuning. Unified Governance, Security & Compliance: Enforce enterprise-wide data access policies, lineage tracking, audit logging, and regulatory compliance (GDPR, HIPAA, SOX) within both the Denodo and Alteryx platforms. Collaborate with the Data Governance Office to define and implement data classification, sensitivity tagging, and usage controls for virtualized and transformed datasets. Lead periodic platform risk assessments, including penetration testing coordination, vulnerability scanning, and remediation planning in alignment with InfoSec policies. Define KPIs and service-level indicators for platform health, job success rates, query latency, and data provisioning time; produce executive-level dashboards for operational transparency. Strategic Enablement & Stakeholder Management: Act as SME (Subject Matter Expert) and trusted advisor to enterprise data teams, enabling domain-driven architecture adoption using Denodo and Alteryx. Design and deliver advanced training programs, certifications, and reusable templates for developers, analysts, and business users to democratize data access while ensuring governance. Evaluate emerging features from the Denodo and Alteryx roadmaps, lead POCs, and drive platform evolution in partnership with vendors, procurement, and architecture governance boards. Facilitate platform onboarding for new business units, including use case discovery, integration scoping, provisioning, and self-service enablement. Experience: 5 to 10 Years Location: Pan India
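
The CI/CD responsibility here centres on API-based promotion of workflows to the Alteryx Server Gallery; the sketch below is purely illustrative: the endpoint path, authentication style, and field names are assumptions, and the real Alteryx Server API requires OAuth-signed requests whose details vary by version, so consult your server's API documentation:

```python
import requests

# All of the following are assumptions for illustration only: the real Alteryx
# Server (Gallery) API uses OAuth-signed requests, and its endpoint paths and
# fields differ by version. Consult your server's API docs before adapting this.
GALLERY_URL = "https://alteryx.example.com/gallery/api"
API_TOKEN = "REPLACE_ME"                   # placeholder credential
PACKAGE = "workflows/sales_refresh.yxzp"   # hypothetical packaged workflow

def publish_workflow(package_path: str, name: str) -> str:
    """Upload a packaged workflow as part of a Git-driven promotion pipeline (sketch)."""
    with open(package_path, "rb") as fh:
        resp = requests.post(
            f"{GALLERY_URL}/workflows",     # hypothetical endpoint
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            files={"file": fh},
            data={"name": name},
            timeout=60,
        )
    resp.raise_for_status()
    return resp.json().get("id", "")

if __name__ == "__main__":
    workflow_id = publish_workflow(PACKAGE, "sales_refresh")
    print("published workflow id:", workflow_id)
```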

Posted 3 days ago

Apply