3.0 - 6.0 years
5 - 8 Lacs
Gurgaon
On-site
Work Experience: 3-6 years post-graduation

Key responsibilities:
- Monitor development timelines and ensure development is in line with New Model Trials planned at MSIL; coordinate with different stakeholders within the company.
- Costing, negotiation, and sourcing for New Model parts as per the costing targets and sourcing timelines.
- Supply de-risking through alternate source introduction, localization, multiple plants and lines, etc.
- Perform risk management to minimize project risks and develop a risk mitigation plan.
- Follow up, monitor progress, and adjust as needed to ensure successful completion of projects.
- Ensure effective monitoring and governance of all third-party arrangements, including timely completion of applicable risk assessments.
- Ensure the right set of TPRM controls is in place for day-to-day operations and that they are effective in the normal course of business.
- Monitor critical services for upcoming periodic risk reviews; assess and manage any ad hoc risk reviews triggered by market events.
- Maintain important policies and procedures for ISPL TPRM Cybersecurity.
- Governance: prepare and organize meetings across the global TPRM cybersecurity community to provide updates on GCP controls adherence.
- Develop and deliver content to senior management, risk SMEs, and audit and regulatory representatives summarizing the results of controls execution activities.
- Oversee and challenge the TPRM BAU process, including: plan; identify and assess; control and mitigate; test and validate; monitor and report.

Competencies:
- Sound knowledge of concepts related to systems and processes
- Good understanding and development know-how of various manufacturing processes
- Knowledge of manufacturing processes, plant functioning, and logistics
- Data-driven approach: analyze and propose strategies
- Strategic thinker able to analyze and propose short-term and long-term solutions
- Strong execution orientation and problem-solving approach
- Proficiency in MS Office (MS Excel, Word, PPT, Power BI)
Posted 1 day ago
15.0 years
24 - 36 Lacs
Pitampura
On-site
We are seeking a Senior PHP Developer with extensive experience in product-based environments, strong team leadership, and advanced problem-solving skills. The ideal candidate is a hands-on coder with deep knowledge of Meta & Facebook APIs, AI integration, and server-side architecture. If you're passionate about building scalable systems, leading teams, and working on innovative projects, we'd love to hear from you.

Experience: 15+ years

Job description

Responsibilities:
- Lead, mentor, and manage a team of PHP developers (minimum 4 years of team management experience).
- Architect, develop, and maintain robust and scalable PHP applications in a product-based setup.
- Integrate Meta/Facebook APIs for real-time data operations and social integrations.
- Collaborate with cross-functional teams including Product Managers, Designers, and DevOps.
- Identify and resolve complex technical problems with a strong focus on performance, scalability, and reliability.
- Implement and maintain AI modules and machine learning integrations within PHP applications.
- Manage and optimize server configurations and ensure system security and stability.
- Write clean, maintainable code and enforce best coding practices and code reviews.
- Stay current with the latest technologies, trends, and best practices in PHP, AI, and cloud/server management.
- Develop, test, and maintain web applications using PHP, Laravel, Core PHP, MySQL, HTML, CSS, and JavaScript.
- Collaborate with designers and stakeholders to comprehend project requirements and translate them into technical specifications.
- Engage in the entire software development lifecycle: planning, designing, coding, testing, debugging, and deploying applications.
- Construct responsive and intuitive interfaces, ensuring compatibility across different browsers and optimal performance.
- Implement and integrate APIs, web services, and third-party libraries as needed.
- Conduct code reviews, resolve bugs, and enhance application performance.
- Work closely with the team to refine development processes and implement best practices.
- Stay informed about emerging technologies and industry trends, applying them to enhance development practices and solutions.
- Contribute to documentation, including technical specifications, user guides, and test cases.

Requirements:
- 15+ years of experience in PHP development with a strong background in product-based companies.
- Proven expertise in Meta & Facebook APIs (Graph API, Marketing API, etc.).
- At least 4 years of experience in team leadership or project management roles.
- Strong understanding of OOP, MVC architecture, RESTful APIs, and Laravel/Symfony/CodeIgniter frameworks.
- Deep understanding of server-side architecture, Linux, Apache/Nginx, MySQL, and cloud services (AWS/GCP preferred).
- Hands-on experience with AI tools or services (ChatGPT, ML APIs, AI integrations with web apps).
- Excellent problem-solving skills with a logical and analytical mindset.
- High level of commitment, reliability, and ability to work independently or in a team.
- Strong knowledge of PHP frameworks such as Laravel, Core PHP, and CodeIgniter.
- Excellent communication skills, both verbal and written.

Kindly fill in the form below and our team will contact you as soon as possible: https://forms.gle/sPy45dQYjMRS42wP6
Interested candidates may WhatsApp their CV to 9990931144.
Regards, HR Team

Job Types: Full-time, Permanent
Pay: ₹2,400,000.00 - ₹3,600,000.00 per year
Benefits: Cell phone reimbursement, paid sick time, paid time off
Schedule: Day shift, morning shift
Supplemental Pay: Performance bonus, yearly bonus

Application Question(s):
- Do you have server knowledge?
- Do you know AI?
- What is your current monthly salary?
- What is your expected salary?
- What is your notice period?
- What is your current location?

Experience:
- Team management: 1 year (Required)
- Meta (Facebook API): 1 year (Required)
- Product-based company: 4 years (Preferred)

Work Location: In person
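The posting above asks for hands-on Graph API expertise. As a hedged illustration (in Python rather than PHP, and with hypothetical token/secret values), here is one way server-side callers commonly authenticate Graph API requests: alongside the access token, they attach an `appsecret_proof`, the HMAC-SHA256 of the access token keyed with the app secret. The API version in the URL is an assumption.

```python
import hashlib
import hmac
from urllib.parse import urlencode

GRAPH_BASE = "https://graph.facebook.com/v19.0"  # version string is an assumption


def appsecret_proof(access_token: str, app_secret: str) -> str:
    """Compute the appsecret_proof for server-side Graph API calls:
    HMAC-SHA256 of the access token, keyed with the app secret."""
    return hmac.new(
        app_secret.encode("utf-8"),
        access_token.encode("utf-8"),
        hashlib.sha256,
    ).hexdigest()


def build_graph_url(path: str, access_token: str, app_secret: str, **params) -> str:
    """Compose a Graph API GET URL with the token and proof attached.
    Extra keyword arguments become query parameters (e.g. fields=...)."""
    query = {
        "access_token": access_token,
        "appsecret_proof": appsecret_proof(access_token, app_secret),
        **params,
    }
    return f"{GRAPH_BASE}/{path.lstrip('/')}?{urlencode(query)}"
```

The same pattern carries over directly to PHP with `hash_hmac('sha256', $token, $secret)`.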
Posted 1 day ago
8.0 - 12.0 years
3 - 9 Lacs
Hyderābād
Remote
Category: Project Management
Main location: India, Andhra Pradesh, Hyderabad
Position ID: J0625-0579
Employment Type: Full Time

Position Description:
Company Profile: Founded in 1976, CGI is among the largest independent IT and business consulting services firms in the world. With 94,000 consultants and professionals across the globe, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI Fiscal 2024 reported revenue is CA$14.68 billion and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at cgi.com.

Job Title: IAM ForgeRock
Position: Lead Analyst / IAM ForgeRock
Experience: 8-12 years
Category: Software Development / Engineering
Shift: General/Rotational
Main locations: Hyderabad, Bangalore, Chennai, Mumbai, and Pune
Position ID: J0625-0579
Employment Type: Full Time
Education Qualification: Any graduation in a related field or higher, with a minimum of 3 years of relevant experience.

Position Description: We are looking for a skilled ForgeRock Developer who is experienced in identity and access management (IAM) and proficient in software development best practices. The ideal candidate should have strong Java expertise, data analysis capabilities, and a solid understanding of the ForgeRock Identity Platform (AM, IDM, DS, IG).

Your future duties and responsibilities:
- Configure and manage authentication and authorization services within ForgeRock Advanced Identity Cloud (AIC).
- Design, implement, and optimize authentication journeys including multi-factor authentication (MFA) and risk-based flows.
- Set up, manage, and troubleshoot Remote Connector Servers (RCS) for external system integrations.
- Develop and maintain custom connectors using Groovy, Java, or JavaScript to sync identity data across systems.
- Define schema structures and attribute mappings, ensuring accurate identity data transformations and synchronizations.
- Create and debug scripted nodes to enhance authentication tree functionality and performance.
- Deploy ForgeRock components via CI/CD pipelines and manage infrastructure through Kubernetes.
- Ensure secure and scalable identity services using OAuth 2.0, OIDC, SAML, and federation protocols.
- Collaborate with cross-functional teams to troubleshoot issues and ensure smooth IAM operations in production environments.
- Continuously enhance operational performance and support incident response for identity services.

Required qualifications to be successful in this role:

Must-Have Skills
- ForgeRock AIC (Advanced Identity Cloud): hands-on experience configuring and managing services in ForgeRock AIC.
- Authentication Journeys & Scripted Nodes: proficiency in designing custom flows using scripted nodes (JavaScript).
- Multi-Factor Authentication (MFA): implementing TOTP, Push, SMS, WebAuthn/FIDO2, and adaptive MFA strategies.
- Remote Connector Server (RCS): setup, management, and troubleshooting of RCS.
- Custom Connectors: development using Groovy, Java, or JavaScript for data sync and integration.
- Schema Creation & Attribute Mapping: defining object classes, custom attributes, and mapping rules.
- OAuth 2.0, OIDC, SAML: strong knowledge of identity federation and token lifecycle management.
- DevOps Pipelines: CI/CD deployment experience using Azure DevOps or equivalent.
- Kubernetes & Containerization: deploying and maintaining ForgeRock in Kubernetes (e.g., ArgoCD, Helm).

Good-to-Have Skills
- Operational IAM Experience: previous experience in enterprise-level IAM operations and support.
- Troubleshooting in Production: incident handling and issue resolution in live environments.
- Identity Governance: exposure to identity lifecycle management and governance frameworks.
- Cloud Infrastructure: familiarity with Azure, AWS, or GCP IAM-related services.
- ArgoCD & GitOps: experience using GitOps methodologies for identity deployment.
- Performance Tuning: optimizing scripted nodes, connectors, and RCS for performance.
- Collaboration & Communication: strong interpersonal skills to work with security teams, developers, and customers.

CGI is an equal opportunity employer. In addition, CGI is committed to providing accommodation for people with disabilities in accordance with provincial legislation. Please let us know if you require reasonable accommodation due to a disability during any aspect of the recruitment process and we will work with you to address your needs.

Skills: English, Java, SQLite

What you can expect from us: Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you'll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction. Your work creates value. You'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You'll shape your career by joining a company built to grow and last. You'll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.
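The OIDC/token-lifecycle skills this role lists can be illustrated with a small, hedged sketch: decoding the claims segment of an OIDC ID token (JWT payloads are base64url without padding) and running the basic issuer/audience/expiry checks. This deliberately does NOT verify the signature; a real implementation must also validate the token against the provider's JWKS. All claim values below are made up.

```python
import base64
import json
import time


def decode_jwt_payload(token: str) -> dict:
    """Decode (NOT cryptographically verify) the payload segment of a JWT.
    JWT segments are base64url-encoded without '=' padding, so re-pad first."""
    payload_b64 = token.split(".")[1]
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))


def basic_claim_checks(claims: dict, expected_issuer: str, expected_audience: str, now=None):
    """Return a list of claim problems (empty list means the basic checks pass).
    Signature verification against the provider's JWKS is still required."""
    now = time.time() if now is None else now
    problems = []
    if claims.get("iss") != expected_issuer:
        problems.append("issuer mismatch")
    aud = claims.get("aud")
    audiences = aud if isinstance(aud, list) else [aud]
    if expected_audience not in audiences:
        problems.append("audience mismatch")
    if claims.get("exp", 0) <= now:
        problems.append("token expired")
    return problems
```

In ForgeRock scripted nodes the same checks would be written in JavaScript, but the claim semantics (`iss`, `aud`, `exp`) are identical.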
Posted 1 day ago
5.0 years
3 - 8 Lacs
Hyderābād
On-site
Role: Senior DevOps Engineer
Experience: 5+ years
Location: Hyderabad / Coimbatore / Gurgaon

Key Responsibilities:
- Maintain and evolve Terraform modules across core infrastructure services.
- Enhance GitHub Actions and GitLab CI pipelines with policy-as-code integrations.
- Automate Kubernetes secret management; transition from shared init containers to native mechanisms.
- Review, deploy, and manage Helm charts for service releases; own rollback reliability.
- Track and resolve environment drift; automate consistency checks across staging/production.
- Drive incident response tooling using Datadog and PagerDuty; actively participate in post-incident reviews.
- Assist with cost-optimization initiatives through ongoing resource sizing reviews.
- Implement and monitor SLA/SLO targets for critical services to ensure operational excellence.

Skill Requirements: We encourage candidates with a strong foundational understanding and a willingness to grow, even if not all skills are met.

Must-Have:
- Minimum 5 years of hands-on experience in DevOps or Platform Engineering roles.
- Deep expertise in Terraform, Terraform Cloud, and modular infrastructure design.
- Production experience managing Kubernetes clusters, preferably on Google Kubernetes Engine (GKE).
- Strong knowledge of CI/CD automation using GitHub Actions, ArgoCD, and Helm.
- Experience securing cloud-native environments using Google Secret Manager or HashiCorp Vault.
- Hands-on expertise in observability tooling (especially Datadog).
- Solid grasp of GCP networking, container workload security, and service configurations.
- Demonstrated ability to lead infrastructure initiatives and work cross-functionally on roadmap delivery.

Desirable:
- Experience with GitOps and automated infrastructure policy enforcement.
- Familiarity with service mesh, workload identity, and multi-cluster deployments.
- Background in building DevOps functions or maturing legacy cloud/on-prem environments.
Tools & Expectations:
- IaC (Terraform / Terraform Cloud): maintain reusable infra components, handle drift/versioning across workspaces.
- CI/CD (GitHub / GitLab / GitHub Actions): build secure pipelines, create reusable workflows, integrate scanning tools.
- App Packaging (Helm): manage structured app packaging, configure upgrades and rollback strategies.
- Kubernetes (GKE): operate core workloads, enforce RBAC and quotas, monitor pod lifecycles.
- Secrets (Google Secret Manager / Kubernetes Secrets): automate sync, monitor access, enforce namespace boundaries.
- Observability (Datadog / PagerDuty): implement alerting, support incident response and escalation mapping.
- Ingress & DNS (Cloudflare / DNS / WAF): manage exposure policies and ingress routing via IaC.
- Security & Quality (Snyk / SonarQube / Wiz): define thresholds, enforce secure and high-quality deployments.
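One of the responsibilities above is automating drift consistency checks across staging and production. As a minimal sketch (the config keys and values are invented for illustration; real drift detection would compare rendered Terraform state or Helm values), a flat comparison with an allow-list of expected differences looks like this:

```python
def find_drift(staging: dict, production: dict, ignore=frozenset()) -> dict:
    """Compare two flat environment-config mappings and report drift.

    Returns {key: (staging_value, production_value)} for every mismatch.
    Keys in `ignore` (expected differences, e.g. replica counts) are skipped,
    and a key present in only one environment is reported as '<missing>'.
    """
    drift = {}
    for key in sorted(set(staging) | set(production)):
        if key in ignore:
            continue
        s_val = staging.get(key, "<missing>")
        p_val = production.get(key, "<missing>")
        if s_val != p_val:
            drift[key] = (s_val, p_val)
    return drift
```

A nightly CI job could run this over exported environment values and fail (or page) when the returned mapping is non-empty.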
Posted 1 day ago
3.0 years
0 Lacs
New Delhi, Delhi, India
Remote
About Us
At Web5 Solution, we're building scalable, high-performance systems that power world-class products. Our team thrives on innovation, collaboration, and a strong engineering culture. As we expand, we're looking for a Full Stack Developer with 3+ years of experience, primarily focused on backend development, who's ready to take the next step in their career and grow into a Tech Lead role.

What You'll Do
- Design, develop, and maintain robust backend systems and APIs.
- Collaborate with product managers and frontend developers to deliver high-quality solutions.
- Lead by example through clean code, solid design patterns, and engineering best practices.
- Participate in architectural decisions and help define the technical direction of the team.
- Mentor junior engineers and prepare for a transition into a Tech Lead role.
- Ensure system scalability, performance, and security across backend services.

Must-Have Skills
- 3+ years of hands-on backend development experience in [e.g., Node.js, Python, Go, Java, etc.]
- Proficiency in building RESTful APIs, microservices, and working with databases (SQL/NoSQL)
- Strong understanding of system design, architecture, and cloud infrastructure (e.g., AWS, GCP, or Azure)
- Experience with CI/CD pipelines, version control (Git), and containerization (Docker/Kubernetes)
- Excellent problem-solving skills and a proactive mindset
- Strong communication skills and leadership potential

Nice to Have
- Experience mentoring or leading small teams
- Exposure to agile methodologies
- Familiarity with DevOps or SRE practices
- Contributions to open-source or technical blogs

Why Join Us?
- Opportunity to grow into a Tech Lead role within 6–12 months
- Work with a smart, driven, and supportive team
- Flexible work hours and remote-friendly culture
- Competitive salary and performance-based growth
- Learning budget and professional development support
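The RESTful API work described above rests on a simple request/response contract. As a hedged sketch (routes and payloads are invented; a production service would use a framework such as Flask or FastAPI), here is a minimal WSGI application, the interface every Python web framework ultimately builds on:

```python
import json


def app(environ, start_response):
    """A minimal WSGI application with one JSON route.

    environ carries request data (here only PATH_INFO is used);
    start_response receives the status line and headers;
    the return value is an iterable of body bytes.
    """
    path = environ.get("PATH_INFO", "/")
    if path == "/health":
        body = json.dumps({"status": "ok"}).encode()
        status = "200 OK"
    else:
        body = json.dumps({"error": "not found"}).encode()
        status = "404 Not Found"
    start_response(status, [
        ("Content-Type", "application/json"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

Because a WSGI app is just a callable, it can be unit-tested by invoking it directly with a fake `environ`, no server required.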
Posted 1 day ago
0 years
4 - 8 Lacs
Hyderābād
On-site
Make an impact with NTT DATA
Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion – it's a place where you can grow, belong and thrive.

Your day at NTT DATA
The Senior Associate Software Development Engineer is a developing subject matter expert, tasked with supporting the design, development, and testing of software systems, modules, or applications for software enhancements and new products, including cloud-based or internet-related tools. This role supports detailed design for certain modules/sub-systems, builds prototypes for multi-vendor infrastructure, and showcases them internally or externally to clients. This role designs and develops functionality in a microservices environment, working with APIs and telemetry data and running ML/AI algorithms on both structured and unstructured data.

Key responsibilities:
- Receives instructions to design and develop solutions and functionality that drive the growth of the business.
- Contributes to writing and testing code.
- Supports the execution of automated testing.
- Receives instructions from various stakeholders to participate in software deployment.
- Supports the delivery of software components while working in collaboration with the product team.
- Supports the integration and building of solutions through automation and coding, using 3rd-party software.
- Receives instructions to craft, build, and debug large-scale distributed systems.
- Supports writing, updating, and maintaining the technical program, end-user documentation, and operational procedures.
- Assists with refactoring code.
- Contributes to the review of code written by other developers.
- Performs any other related task as required.
To thrive in this role, you need to have:
- Developing understanding of cloud architecture and services in multiple public clouds, such as AWS, GCP, Microsoft Azure, and Microsoft Office 365.
- Subject matter expertise in programming languages such as C/C++, C#, Java, JavaScript, Python, and Node.js, and their libraries and frameworks.
- Developing expertise in data structures, algorithms, and software design, with strong analytical and debugging skills.
- Developing knowledge of microservices-based software architecture and experience with API product development.
- Developing expertise in SQL and NoSQL data stores, including Elasticsearch, MongoDB, and Cassandra.
- Developing understanding of container runtimes (Kubernetes, Docker, LXC/LXD).
- Developing proficiency with agile and lean practices, and a belief in test-driven development.
- A can-do attitude and initiative.
- Excellent ability to work well in a diverse team with different backgrounds and experience levels.
- Excellent ability to thrive in a dynamic, fast-paced environment.
- Developing proficiency with CI/CD concepts and tools.
- Developing proficiency with cloud-based infrastructure and deployments.
- Excellent attention to detail.

Academic qualifications and certifications:
- Bachelor's degree or equivalent in Computer Science, Engineering, or a related field.
- Microsoft Certified: Azure Fundamentals preferred.
- Relevant agile certifications preferred.

Required experience:
- Moderate-level experience working with geo-distributed teams through innovation, bootstrapping, pilot, and production phases with multiple stakeholders, to the highest levels of quality and performance.
- Moderate-level experience with tools across the full software delivery lifecycle, for example: IDE, source control, CI, test, mocking, work tracking, and defect management.
- Moderate-level experience in Agile and Lean methodologies, Continuous Delivery / DevOps, and analytics / data-driven processes.
- Familiarity with working with large data sets and the ability to apply appropriate ML/AI algorithms.
- Moderate-level experience in developing microservices and RESTful APIs.
- Moderate-level experience in software development.

Workplace type: Hybrid Working

About NTT DATA
NTT DATA is a $30+ billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. We invest over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure, and connectivity. We are also one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group and headquartered in Tokyo.

Equal Opportunity Employer
NTT DATA is proud to be an Equal Opportunity Employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, colour, gender, sexual orientation, religion, nationality, disability, pregnancy, marital status, veteran status, or any other protected category. Join our growing global team and accelerate your career with us. Apply today.
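The role above involves running ML/AI algorithms over telemetry data. As a hedged, minimal sketch of that kind of work (the data and threshold are invented; production systems would use heavier models and streaming pipelines), a z-score pass is a common first step for flagging anomalous telemetry samples:

```python
import statistics


def zscore_anomalies(values, threshold=3.0):
    """Return indices of samples deviating from the mean by more than
    `threshold` population standard deviations.

    A cheap first pass over a telemetry series before reaching for
    heavier ML models; degenerate inputs (too short, or flat) yield [].
    """
    if len(values) < 2:
        return []
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]
```

Note that a large outlier inflates both the mean and the standard deviation, so for small windows robust variants (median and MAD) are often preferred; the structure stays the same.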
Posted 1 day ago
0 years
0 Lacs
Hyderābād
Remote
Working with Us
Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible.

Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more: careers.bms.com/working-with-us.

Summary: As a Data Engineer based at our BMS Hyderabad site, you are part of the Data Platform team and support the larger Data Engineering community, which delivers data and analytics capabilities for data platforms across the organization. The ideal candidate will have a strong background in data engineering, DataOps, and cloud-native services, and will be comfortable working with both structured and unstructured data.

Key Responsibilities
- Design, build, and maintain data products, drive their evolution, and select the most suitable data architecture for the organization's data needs.
- Serve as the Subject Matter Expert on Data & Analytics Solutions.
- Be accountable for delivering high-quality data products and analytics-ready data solutions.
- Develop and maintain ETL/ELT pipelines for ingesting data from various sources into our data warehouse.
- Develop and maintain data models to support our reporting and analysis needs.
- Optimize data storage and retrieval to ensure efficient performance and scalability.
- Collaborate with data architects, data analysts, and data scientists to understand their data needs and ensure that the data infrastructure supports their requirements.
- Ensure data quality and integrity through data validation and testing.
- Implement and maintain security protocols to protect sensitive data.
- Stay up to date with emerging trends and technologies in data engineering and analytics.
- Closely partner with the Enterprise Data and Analytics Platform team, other functional data teams, and the Data Community lead to shape and adopt data and technology strategy.
- Be accountable for evaluating data enhancements and initiatives, assessing capacity and prioritization along with onshore and vendor teams.
- Stay knowledgeable about evolving trends in data platforms and product-based implementation.
- Manage and provide guidance for the data engineers supporting projects, enhancements, and break/fix efforts.
- Bring an end-to-end ownership mindset in driving initiatives through completion.
- Be comfortable working in a fast-paced environment with minimal oversight.
- Mentor and provide career guidance to other team members to unlock their full potential.
- Prior experience working in an Agile/product-based environment.
- Provide strategic feedback to vendors on service delivery and balance workload with vendor teams.

Qualifications & Experience
- Hands-on experience implementing and operating data capabilities and cutting-edge data solutions, preferably in a cloud environment.
- Breadth of experience in technology capabilities spanning the full data management lifecycle, including data lakehouses, master/reference data management, data quality, and analytics/AI-ML.
- Ability to craft and architect data solutions and automation pipelines to productionize solutions.
- Hands-on experience developing and delivering data and ETL solutions with technologies such as AWS data services (Glue, Redshift, Athena, Lake Formation, etc.); Cloudera Data Platform and Tableau experience is a plus.
- Create and maintain optimal data pipeline architecture; assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Strong programming skills in languages and libraries such as Python, PySpark, R, PyTorch, Pandas, and Scala.
- Experience with SQL and database technologies such as MySQL, PostgreSQL, Presto, etc.
- Experience with cloud-based data technologies such as AWS, Azure, or GCP (preferably strong in AWS).
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration skills.
- Functional knowledge of or prior experience in the Life Sciences Research and Development domain is a plus.
- Experience and expertise in establishing agile, product-oriented teams that work effectively with teams in the US and other global BMS sites.
- Initiates challenging opportunities that build strong capabilities for self and team.
- Demonstrates a focus on improving processes, structures, and knowledge within the team.
- Leads in analyzing current states, delivers strong recommendations, understands complexity in the environment, and executes to bring complex solutions to completion.
- AWS Data Engineering/Analytics certification is a plus.

If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career.

Uniquely Interesting Work, Life-changing Careers
With a single vision as inspiring as Transforming patients' lives through science™, every BMS employee plays an integral role in work that goes far beyond ordinary.
Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues.

On-site Protocol
BMS has an occupancy structure that determines where an employee is required to conduct their work. This structure includes site-essential, site-by-design, field-based and remote-by-design jobs. The occupancy type that you are assigned is determined by the nature and responsibilities of your role: Site-essential roles require 100% of shifts onsite at your assigned facility. Site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility. For these roles, onsite presence is considered an essential job function and is critical to collaboration, innovation, productivity, and a positive Company culture. For field-based and remote-by-design roles the ability to physically travel to visit customers, patients or business partners and to attend meetings on behalf of BMS as directed is an essential job function.

BMS is dedicated to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments and ongoing support in their roles. Applicants can request a reasonable workplace accommodation/adjustment prior to accepting a job offer. If you require reasonable accommodations/adjustments in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement.

BMS cares about your well-being and the well-being of our staff, customers, patients, and communities.
As a result, the Company strongly recommends that all employees be fully vaccinated for Covid-19 and keep up to date with Covid-19 boosters. BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area. If you live in or expect to work from Los Angeles County if hired for this position, please visit this page for important additional information: https://careers.bms.com/california-residents/ Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations.
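Returning to the technical core of this posting: the ETL/ELT and data-quality responsibilities it lists can be sketched minimally as a validate-and-transform step that coerces types and routes bad records to a reject list instead of failing the whole batch. The column names (`patient_id`, `measure`) are hypothetical illustrations, not fields from any BMS system; a production pipeline would run this logic in Glue or PySpark rather than plain Python.

```python
import csv
import io


def transform_rows(raw_csv: str):
    """One hedged sketch of a validate-and-transform ETL step.

    Parses raw CSV, coerces types, and routes bad records to a reject
    list (with a reason) so one malformed row cannot fail the batch.
    Returns (good_records, rejected_records).
    """
    good, rejected = [], []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        try:
            record = {
                "patient_id": row["patient_id"].strip(),
                "measure": float(row["measure"]),  # raises ValueError on junk
            }
            if not record["patient_id"]:
                raise ValueError("empty patient_id")
            good.append(record)
        except (KeyError, ValueError) as exc:
            rejected.append({"row": dict(row), "reason": str(exc)})
    return good, rejected
```

Keeping a reject stream with reasons is what makes the "data quality through validation and testing" responsibility auditable downstream.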
Posted 1 day ago
3.0 years
4 - 5 Lacs
Hyderābād
On-site
About Us: HighRadius, a renowned provider of cloud-based Autonomous Software for the Office of the CFO, has transformed critical financial processes for over 800 leading companies worldwide. Trusted by prestigious organizations like 3M, Unilever, Anheuser-Busch InBev, Sanofi, Kellogg Company, Danone, Hershey's, and many others, HighRadius optimizes order-to-cash, treasury, and record-to-report processes, earning us back-to-back recognition in Gartner's Magic Quadrant and a prestigious spot in the Forbes Cloud 100 list for three consecutive years. With a remarkable valuation of $3.1B and an impressive annual recurring revenue exceeding $100M, we experience robust year-over-year growth of 24%. With a global presence spanning 8+ locations, we're in the pre-IPO stage, poised for rapid growth. We invite passionate and diverse individuals to join us on this exciting path to becoming a publicly traded company and to shape our promising future.

Job Summary: We are seeking a skilled and proactive DevOps Engineer to join our dynamic team. The ideal candidate will have hands-on experience designing, implementing, and maintaining robust CI/CD pipelines, working with cloud platforms, and automating infrastructure and deployments. This role requires a high level of technical proficiency and problem-solving skills in a Linux-based environment.
Location: Hyderabad
Work Mode: Work from Office (5 days a week)

Key Responsibilities
- Design, develop, and maintain CI/CD pipelines using industry-standard tools
- Implement and manage automation scripts using Shell, Python, or Groovy
- Administer and optimize tools like Jenkins, Maven, SonarQube, Artifactory, and Nexus
- Collaborate with development and QA teams to streamline release processes
- Manage infrastructure and deployment on AWS or GCP
- Monitor system performance and troubleshoot issues in Linux-based environments
- Ensure security, scalability, and high availability of DevOps processes

Required Skills & Experience:
- 3-6 years of experience
- Proficient in Shell scripting and at least one of Python or Groovy
- Hands-on experience with CI/CD tools (Jenkins, Maven, SonarQube) and artifact management (Artifactory or Nexus)
- Experience with cloud platforms: AWS or GCP (at least one is mandatory)
- Strong background in Linux system administration
- Ability to troubleshoot complex deployment and environment issues
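The automation-scripting work this role describes often includes small release-gating helpers. As a hedged sketch (the gating policy is an invented example, and it only handles plain MAJOR.MINOR.PATCH versions, not pre-release tags), a pipeline step might refuse to deploy anything that is not strictly newer than what is running:

```python
def parse_semver(version: str):
    """Parse a plain MAJOR.MINOR.PATCH string (no pre-release/build tags).
    A leading 'v' prefix, as in 'v1.4.2', is tolerated."""
    major, minor, patch = version.strip().lstrip("v").split(".")
    return (int(major), int(minor), int(patch))


def is_safe_upgrade(current: str, candidate: str) -> bool:
    """Example pipeline gate: allow a deploy only if the candidate
    version is strictly newer than the currently running version.
    Tuple comparison gives the correct lexicographic ordering."""
    return parse_semver(candidate) > parse_semver(current)
```

A Jenkins or shell wrapper would call this and exit non-zero to block the stage when the check fails.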
Posted 1 day ago
12.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Position: Capability Lead – Databricks (Director / Enterprise Architect)
Location: Chennai, Hyderabad, Bangalore, or Noida (Hybrid); no remote option available
Duration: Full Time
Reporting: Practice Head
Budget: 30–65 LPA (depending on level of expertise)
Notice Period: Immediate joiner / currently serving / notice of less than 60 days
Level of Experience: 12+ years
Shift Timings: 2 pm – 11 pm IST, with overlap with the EST time zone

Job Summary
As part of the leadership team for the Data business, this role is responsible for building and growing the Databricks capability within the organization. The role entails driving technical strategy, innovation, and solution delivery on the Databricks Unified Data Analytics platform. This leader will work closely with clients, delivery teams, technology partners, and internal stakeholders to define and deliver scalable, high-performance solutions using Databricks.

Areas of Responsibility
1. Offering and Capability Development
- Design and enhance Databricks-based solutions, accelerators, and reusable frameworks
- Define architectural patterns and best practices for Lakehouse implementations
- Collaborate with Databricks alliance teams to grow partnership and co-sell opportunities
2. Technical Leadership
- Provide architectural oversight for Databricks engagements including Delta Lake, Unity Catalog, MLflow, and Structured Streaming
- Lead solution architecture and design in proposals, RFPs, and strategic pursuits
- Establish technical governance and conduct reviews to ensure standards compliance
- Act as a technical escalation point for complex use cases in big data and machine learning
3. Delivery Oversight
- Guide delivery teams through implementation best practices, optimization, and troubleshooting
- Ensure high-quality execution across Databricks programs with a focus on performance, scalability, and governance
- Drive consistent use of CI/CD, automation, and monitoring for Databricks workloads
4. Talent Development
- Build a specialized Databricks talent pool through recruitment, training, and mentoring
- Define certification and career paths aligned with Databricks and related ecosystem tools
- Lead internal community-of-practice sessions and promote knowledge sharing
5. Business Development Support
- Partner with sales and pre-sales to position Databricks-based analytics and AI/ML solutions
- Identify new use cases and business opportunities using the Databricks platform
- Participate in client discussions, workshops, and architecture review boards
6. Thought Leadership and Innovation
- Develop whitepapers, blogs, and PoVs showcasing advanced analytics and AI/ML use cases on Databricks
- Stay abreast of the Databricks roadmap, product features, and industry developments
- Drive innovative solutioning using Databricks with modern data stack components

Job Requirements
- 12–15 years of experience in Data Engineering, Analytics, or AI/ML, with 3–5 years of focused experience on Databricks
- Proficiency in cloud architectural best practices around user management, data privacy, data security, performance, and other non-functional requirements
- Proven experience delivering large-scale data and AI/ML workloads on Databricks
- Deep knowledge of Spark, Delta Lake, Python/Scala, SQL, and data pipeline orchestration
- Experience with MLOps, Feature Store, Unity Catalog, and model lifecycle management
- Certification preferred (e.g., Lakehouse Fundamentals, Data Engineer Associate/Professional)
- Experience integrating Databricks with cloud platforms (Azure, AWS, or GCP) and BI tools
- Strong background in solution architecture, presales, and client advisory
- Excellent communication, stakeholder engagement, and leadership skills
- Exposure to data governance, security, compliance, and cost optimization in cloud analytics

About Mastech InfoTrellis
Mastech InfoTrellis is the Data and Analytics unit of Mastech Digital.
At Mastech InfoTrellis, we have built intelligent Data Modernization practices and solutions to help companies harness the true potential of their data. Our expertise lies in providing timely insights from your data so you can make better decisions, faster. With our proven strategies and cutting-edge technologies, we foster intelligent decision-making, increase operational efficiency, and drive substantial business growth. With an unwavering commitment to building a better future, we are driven by the purpose of transforming businesses through data-powered innovation. (Who We Are | Mastech InfoTrellis) Mastech Digital is an Equal Opportunity Employer: all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or protected veteran status.
Posted 1 day ago
8.0 years
5 - 10 Lacs
Hyderābād
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

The Dir. Software Engineer position is accountable for delivering the highest-quality technology solutions for UHG/Optum next-generation platforms, applications, and solutions. As an integral member of the technology delivery team, the Dir. Software Engineer will be a primary owner of the design, coding, testing, debugging, documentation, and support of software applications consistent with established specifications and business requirements to deliver business value. This individual will also be responsible for establishing solid relationships with business leadership and serving as a trusted partner to the business.
Primary Responsibilities:
- Lead a global team (across India & US) for Enterprise Technology, providing software engineering solutions, emerging technology POCs, and architectural blueprints
- Develops a robust understanding of business, market, and customer needs and leverages that knowledge to propose and drive new or improved solutions to business partners
- Drives the application of modern software engineering practices to improve team delivery and solution outcomes; ensures that teams adopt an "AI First" approach to all designs and solutions
- Champions the implementation of instrumentation and diagnostics to support maintainability and operations
- Champions the use of AI in solution designs as well as everyday tasks across the entire SDLC
- Applies broad domain knowledge to optimize choices in architecture and design
- Leads the team in identifying and implementing modern components, tools, and technologies
- Champions an "automation first" mentality throughout engineering and operations practices
- Accountable for the implementation of high-quality technology solutions
- Drives code reuse and contributes to strategy for span of control; minimizes redundant development
- Establishes partnerships across business and technology stakeholders to ensure the right problems are being solved the right way to meet customer needs
- Owns the improvement of user and customer metrics (NPS etc.) within span of control
- Identifies initiatives that result in customer metric improvement and drives adoption across the portfolio
- Initiates opportunities and influences stakeholders with ideas that drive business value
- Influences stakeholders to assess and manage the impacts of changes and drive decision making
- Recommends innovation in digital and channel strategies; engages appropriate parties to influence change
- Designs solutions for every channel
- Explores possibilities to discover unknowns (looks for what you can't see)
- Coaches and provides feedback to others on development of problem-solving skills
- Identifies checks and balances to confirm effectiveness of solutions
- Leads quality improvement processes
- Drives a quality culture by example, encouraging quality practices in others
- Initiates and implements strategies to embed quality practices in our team culture
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
- Bachelor's degree in Computer Science or a related technical field
- 8+ years of experience overseeing teams that build software solutions and services in large-scale environments
- 8+ years of experience with design patterns, data structures, and test automation
- 8+ years of experience in building APIs and highly scalable applications
- Knowledge of professional software engineering practices and best practices for the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations.
Also has success in integrating AI throughout the SDLC.
- Solid technical leadership with experience building software engineering teams
- Background in building platforms and products in large, complex, distributed enterprise environments
- Proven excellent verbal and written communication skills

Preferred Qualifications:
- Experience in health care
- Experience working with US stakeholders and working in matrix organizations
- Hands-on experience with cloud computing platforms such as Azure, AWS, or GCP, and technologies like Node.js, GraphQL, Docker, Kubernetes, Kafka, React, Elasticsearch, and SQL or NoSQL databases

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location, and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
Posted 1 day ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are a fast-growing, product-led fintech company on a mission to transform the financial ecosystem using cutting-edge technology, user-centric design, and deep domain expertise. We are looking for an experienced Engineering Manager to lead and scale our MERN stack development team while building high-performance, scalable, and secure financial applications.

Key Responsibilities
Team Leadership & Management
- Lead, mentor, and manage a team of 6–10 full-stack engineers (MERN stack).
- Drive performance management, team development, and career progression.
- Collaborate with product managers, designers, and QA to ensure timely delivery of high-quality features.
Technical Ownership
- Own the architecture, design, and implementation of core fintech product modules using MongoDB, Express.js, React.js, and Node.js.
- Review and enforce coding standards, CI/CD pipelines, and software quality best practices.
- Ensure system scalability, security, performance, and availability.
Project Delivery
- Translate business requirements into scalable tech solutions.
- Ensure sprint planning, estimation, execution, and stakeholder communication.
- Proactively identify risks and bottlenecks and implement mitigations.
Product & Innovation Focus
- Work closely with leadership on technology strategy and product roadmaps.
- Foster a culture of innovation, continuous learning, and engineering excellence.

Requirements
- 10+ years of software development experience, with at least 3+ years in team leadership roles.
- Proven track record of working in product-based fintech companies.
- Deep hands-on experience in the MERN stack (MongoDB, Express.js, React.js, Node.js).
- Strong understanding of cloud-native architectures (AWS/GCP/Azure).
- Proficiency in REST APIs, microservices, Docker, and container orchestration.
- Exposure to security best practices in fintech (e.g., data encryption, secure auth flows).
- Strong debugging, optimization, and analytical skills.
- Excellent communication, interpersonal, and stakeholder management skills.
- Ability to work in a fast-paced, startup-like environment with a product ownership mindset.

Nice to Have
- Experience with TypeScript, GraphQL, or WebSockets.
- Exposure to DevOps practices and observability tools.
- Prior experience building lending, payments, or investment platforms.

What You Can Expect In Return
- ESOPs based on performance
- Health insurance
- Statutory benefits like PF & Gratuity
- Flexible working structure
- Professional development opportunities
- Collaborative and inclusive work culture

About The Company
EduFund is an early-stage platform that helps Indian parents plan for their child's higher education in advance. Our unique technology is inspired by years of asset management experience as well as personal experience in funding higher education. The EduFund team is filled with chai lovers, problem solvers, ridiculous jokes, and immeasurable passion for our work. Our founding team has had the privilege of working at companies like Reliance, Goldman Sachs, CRISIL, InCred, Upstoxx, LeverageEdu, HDFC, and many others. We have raised a seed round from notable investors such as Anchorage Capital Partners, ViewTrade, AngelList, and other angels. We are headquartered in Ahmedabad, with teams in Mumbai and Pune. Website: https://www.edufund.in/

Skills: node.js, container orchestration, leadership, typescript, express.js, devops practices, gcp, fintech, mern stack, microservices, mongodb, docker, secure auth flows, azure, data encryption, websockets, graphql, observability tools, aws, rest apis, communication, react.js
Posted 1 day ago
0 years
0 Lacs
Mohali
On-site
Job Title: Associate Software Engineer (Trainee)
Location: Mohali (On-site)
Company: Prologic Technologies
Experience Required: 0–6 months
Employment Type: Full-time (in office)

About the Role
Prologic Technologies is hiring a passionate Python Developer with experience in Machine Learning & AI to build intelligent and scalable solutions for global clients. You'll collaborate with data scientists and product teams to bring ML models into production, automate workflows, and deliver measurable impact using modern AI tools and frameworks.

Key Responsibilities
Development & Integration
- Design, develop, and deploy ML/AI models using Python
- Integrate AI solutions into web apps, APIs, and internal tools
- Work on NLP, recommendation systems, classification, and predictive models
Data Handling
- Build pipelines for data collection, cleaning, and transformation
- Work with structured and unstructured data (text, images, logs)
- Optimize performance for large-scale datasets
Model Training & Optimization
- Fine-tune models using real-world data
- Track and improve model accuracy, latency, and efficiency
- Use libraries like TensorFlow, PyTorch, scikit-learn, and Hugging Face
AI Automation & Deployment
- Use tools like MLflow, FastAPI, and Docker for model lifecycle management
- Automate tasks with AI agents or LLMs (OpenAI, LangChain, etc.)
- Build reusable modules, scripts, and workflows

Must-Have Skills
- Strong Python coding skills
- Hands-on experience with ML/AI frameworks: TensorFlow, PyTorch, scikit-learn
- Experience with model deployment using APIs, Docker, or the cloud
- Familiarity with NLP, computer vision, or deep learning
- Git, Jupyter, pandas, NumPy, REST APIs

Nice to Have
- Experience with OpenAI APIs, Hugging Face Transformers, or LangChain
- Exposure to cloud platforms (AWS/GCP/Azure)
- Knowledge of MLOps or AutoML pipelines

Soft Skills We Value
- Curiosity to learn and experiment with AI
- Ownership mindset and attention to detail
- Ability to work independently and in teams
- Clear communication and documentation habits

Ready to Build the Future with AI? Email your resume and a brief write-up (max 300 words) on:
- A machine learning model you've built and deployed
- Tools or techniques you used for optimization or automation

Job Type: Full-time
Schedule: Day shift
Work Location: In person
Posted 1 day ago
3.0 years
7 - 9 Lacs
Mohali
On-site
Responsibilities:
· Develop and deploy machine learning models to optimize HVAC setpoints, energy consumption, and operational performance.
· Design algorithms for predictive maintenance, fault detection, and dynamic thresholding specific to chillers, pumps, cooling towers, and air handling units.
· Work with domain experts and controls engineers to integrate data-driven solutions into existing BMS, SCADA, or edge computing platforms.
· Analyze historical and real-time sensor data (temperature, pressure, flow, energy) to identify patterns, anomalies, and optimization opportunities.
· Build scalable pipelines for data ingestion, cleaning, normalization, and feature engineering using time-series data from HVAC systems.
· Conduct what-if analyses and energy simulations to validate model outputs and savings estimates.
· Create visualizations, dashboards, and reports that clearly communicate insights and recommendations to technical and non-technical stakeholders.
· Collaborate with software engineers to productize algorithms within cloud or on-prem solutions.

Requirements:
· Bachelor's or Master's degree in Data Science, Computer Science, Mechanical Engineering, Energy Systems, or a related field.
· 3+ years of experience applying data science techniques to industrial systems, preferably on HVAC or energy optimization projects.
· Strong skills in Python/R/SQL, with libraries such as pandas, scikit-learn, TensorFlow/PyTorch, and XGBoost, plus neural network and RL model development.
· Proven experience with time-series analysis, forecasting, and anomaly detection.
· Knowledge of HVAC equipment, control strategies, and energy efficiency principles.
· Hands-on experience with BMS/SCADA/OPC/Modbus/OPC UA data integration.
· Familiarity with cloud platforms (AWS, GCP, Azure) and/or edge computing frameworks.
· Excellent problem-solving skills, with the ability to translate operational challenges into data science solutions.
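The dynamic thresholding and anomaly detection mentioned above often start from something as simple as a z-score rule on a window of sensor readings before graduating to learned models. A minimal stdlib sketch (the three-sigma threshold and the example readings are illustrative assumptions, not values from the posting):

```python
from statistics import mean, stdev

def zscore_anomalies(readings, threshold=3.0):
    """Return indices of readings more than `threshold` standard
    deviations from the window mean -- a crude dynamic-threshold
    detector for, say, a window of chiller supply temperatures."""
    if len(readings) < 2:
        return []
    mu, sigma = mean(readings), stdev(readings)
    if sigma == 0:
        return []  # flat signal: nothing to flag
    return [i for i, r in enumerate(readings)
            if abs(r - mu) / sigma > threshold]
```

A production detector would at least use a rolling window and robust statistics (median/MAD) so a single spike does not inflate the threshold, but the shape of the computation is the same.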
Job Type: Full-time
Pay: ₹700,000.00 - ₹900,000.00 per year
Schedule: Day shift
Application Question(s): What is your expected CTC and notice period?
Experience:
- Total: 3 years (Required)
- Python: 2 years (Required)
- SQL: 1 year (Required)
Work Location: In person
Posted 1 day ago
5.0 years
0 Lacs
Thiruvananthapuram Taluk, India
Remote
Position: Associate Technical Product Manager (IoT)
Experience: 5+ years in development, with leadership experience
Location: Remote
Department: Product Management

About the Role
We are looking for an Associate Product Manager with a strong technical background in IoT development, backend systems, and databases, along with familiarity with DevOps practices. The ideal candidate will have 5+ years of hands-on development experience, including leading engineering teams, and will now transition into a product-focused role. You will work closely with engineering, design, and business teams to drive the development of scalable IoT and backend solutions.

Key Responsibilities
- Product Strategy & Roadmap: Define and execute the product vision for IoT and backend systems, ensuring alignment with business goals.
- Technical Leadership: Leverage deep expertise in IoT protocols (MQTT, CoAP), cloud platforms (AWS/Azure/GCP), backend services (APIs, microservices), and databases (SQL/NoSQL) to guide engineering decisions.
- Cross-functional Collaboration: Work with engineering, DevOps, and QA teams to ensure seamless product development and deployment.
- Requirement Gathering: Translate business needs into technical specs, user stories, and actionable tasks for development teams.
- Performance Optimization: Monitor system performance, scalability, and reliability, suggesting improvements in architecture and DevOps pipelines.
- Market & Tech Research: Stay updated on IoT trends, backend technologies, and DevOps best practices to drive innovation.
- Agile Execution: Lead sprint planning, backlog grooming, and release management in an Agile/Scrum environment.

Required Skills & Qualifications
- 5+ years of experience in software development, with at least 2 years in IoT and backend systems.
- Strong expertise in:
  - IoT development (embedded systems, sensor integration, edge computing).
  - Backend technologies (Node.js/Python/Java, REST/GraphQL APIs, microservices).
  - Databases (PostgreSQL, MongoDB, TimescaleDB, or similar).
  - Cloud platforms (AWS IoT Core, Azure IoT Hub, Google Cloud IoT).
- Familiarity with DevOps tools (Docker, Kubernetes, CI/CD pipelines, Terraform).
- Experience leading development teams and mentoring engineers.
- Strong analytical skills with expertise in data-driven decision-making.
- Excellent communication skills to bridge technical and non-technical stakeholders.

Preferred Qualifications
- Prior experience in product management or technical product ownership.
- Knowledge of AI/ML applications in IoT (predictive maintenance, anomaly detection).
- Certifications in AWS/Azure/GCP, DevOps, or IoT.

Why Join Us?
- Opportunity to shape cutting-edge IoT and backend products.
- Work with a talented, cross-functional team in a fast-paced environment.
- Career growth in product management with a strong technical foundation.
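MQTT, listed among the IoT protocols above, routes messages by matching published topics against subscription filters, where `+` matches exactly one topic level and `#` matches all remaining levels. A hedged, simplified sketch of that matching rule in plain Python (it ignores edge cases such as `$`-prefixed system topics and filter validation):

```python
def topic_matches(filter_: str, topic: str) -> bool:
    """Check an MQTT topic against a subscription filter.

    `+` matches exactly one level; `#` (placed last in the filter)
    matches the remaining levels, including zero of them.
    """
    f_parts, t_parts = filter_.split("/"), topic.split("/")
    for i, part in enumerate(f_parts):
        if part == "#":
            return True  # multi-level wildcard swallows the rest
        if i >= len(t_parts) or (part != "+" and part != t_parts[i]):
            return False
    # All filter levels matched; topic must not have extra levels
    return len(f_parts) == len(t_parts)
```

Understanding this rule is what lets a product manager reason about topic-hierarchy design, e.g. `site/+/chiller/#` subscribing across buildings.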
Posted 1 day ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are a technical team specializing in building scalable data engineering platforms, developing cloud-native tools, and driving business transformation. Join us to lead the next wave of innovation, optimization, and automation in the data space.

Please note that the job will close at 12 am on the posting close date, so please submit your application prior to the close date.

Grade: T5

As a Full Stack Lead, you will spearhead the design, development, and delivery of end-to-end technical solutions. You'll work closely with cross-functional teams, including data engineering, product management, and stakeholders, to create robust, scalable platforms and tools that optimize workflows and enhance operational efficiency. This role requires a balance of technical expertise, leadership acumen, and a strategic mindset to drive innovation and align with organizational goals.

Key Responsibilities
Leadership and Strategy
- Provide technical leadership to a team of full-stack developers and engineers.
- Define the architectural vision for projects and align with business goals like automation, optimization, and transformation.
- Collaborate with stakeholders to prioritize, plan, and execute high-impact initiatives.
- Mentor team members to elevate technical capabilities and foster a growth mindset.
End-to-End Development
- Lead the design and development of scalable web applications, tools, and dashboards.
- Build and maintain highly available APIs, microservices, and cloud-based solutions.
- Design and optimize front-end (React/Angular) and back-end (Node.js/Java/Python) solutions with a focus on performance, security, and scalability.
Cloud and DevOps Integration
- Architect cloud-native solutions using platforms like Azure, AWS, or GCP.
- Oversee CI/CD pipelines to ensure rapid deployment cycles with high-quality outcomes.
- Implement containerization (Docker, Kubernetes) for seamless deployment and scalability.
Data and Automation Focus
- Collaborate with data engineering teams to integrate data pipelines, warehouses, and visualization tools.
- Identify automation opportunities to streamline workflows and eliminate manual dependencies.
- Champion the use of data analytics to drive decision-making and optimization.
Continuous Improvement
- Research emerging technologies to recommend new tools or frameworks.
- Establish and uphold coding standards, best practices, and technical governance.
- Conduct code reviews, troubleshoot issues, and manage technical debt proactively.

Required Skills and Experience
Technical Expertise
- Frontend: Expertise in JavaScript/TypeScript, React/Angular, HTML/CSS.
- Backend: Proficiency in Node.js, Java, Python, .NET, or equivalent technologies.
- Cloud: Strong experience with AWS, Azure, or GCP (compute, storage, networking).
- DevOps: Familiarity with CI/CD pipelines, Docker, Kubernetes, and GitOps.
- Databases: Experience with relational (PostgreSQL/MySQL) and NoSQL databases.
- Testing: Strong knowledge of unit testing, integration testing, and automated testing frameworks.
Leadership and Communication
- Proven experience leading and mentoring technical teams.
- Ability to communicate complex technical concepts to non-technical stakeholders.
- Track record of managing end-to-end project lifecycles, ensuring timely and quality delivery.
Mindset and Vision
- Strategic thinker with a focus on innovation, automation, and optimization.
- Passionate about building scalable and robust technical solutions.
- Eager to embrace and advocate for cutting-edge technologies and methodologies.

Preferred Qualifications
- Experience in the data engineering ecosystem (e.g., ETL pipelines, data warehouses, or real-time analytics).
- Familiarity with AI/ML integration or workflow automation tools.
- Knowledge of enterprise security best practices.
- Prior experience working in Agile/Scrum environments.

FedEx was built on a philosophy that puts people first, one we take seriously.
We are an equal opportunity/affirmative action employer and we are committed to a diverse, equitable, and inclusive workforce in which we enforce fair treatment, and provide growth opportunities for everyone. All qualified applicants will receive consideration for employment regardless of age, race, color, national origin, genetics, religion, gender, marital status, pregnancy (including childbirth or a related medical condition), physical or mental disability, or any other characteristic protected by applicable laws, regulations, and ordinances. Our Company FedEx is one of the world's largest express transportation companies and has consistently been selected as one of the top 10 World’s Most Admired Companies by "Fortune" magazine. Every day FedEx delivers for its customers with transportation and business solutions, serving more than 220 countries and territories around the globe. We can serve this global network due to our outstanding team of FedEx team members, who are tasked with making every FedEx experience outstanding. Our Philosophy The People-Service-Profit philosophy (P-S-P) describes the principles that govern every FedEx decision, policy, or activity. FedEx takes care of our people; they, in turn, deliver the impeccable service demanded by our customers, who reward us with the profitability necessary to secure our future. The essential element in making the People-Service-Profit philosophy such a positive force for the company is where we close the circle, and return these profits back into the business, and invest back in our people. Our success in the industry is attributed to our people. Through our P-S-P philosophy, we have a work environment that encourages team members to be innovative in delivering the highest possible quality of service to our customers. We care for their well-being, and value their contributions to the company. 
Our Culture
Our culture is important for many reasons, and we intentionally bring it to life through our behaviors, actions, and activities in every part of the world. The FedEx culture and values have been a cornerstone of our success and growth since we began in the early 1970s. While other companies can copy our systems, infrastructure, and processes, our culture makes us unique and is often a differentiating factor as we compete and grow in today's global marketplace.
Posted 1 day ago
12.0 years
25 - 35 Lacs
Madurai
On-site
Dear Candidate,

Greetings of the day! I am Kantha, and I'm reaching out to you regarding an exciting opportunity with TechMango. You can connect with me on LinkedIn (https://www.linkedin.com/in/kantha-m-ashwin-186ba3244/) or by email: kanthasanmugam.m@techmango.net

Techmango Technology Services is a full-scale software development services company founded in 2014 with a strong focus on emerging technologies. Its primary objective is delivering strategic technology solutions that advance the goals of its business partners, and we hold ourselves to that statement. Driven by the mantra "Client's Vision is our Mission", we aim to be a technologically advanced and well-regarded organization providing high-quality, cost-efficient services built on long-term client relationships. We are operational in the USA (Chicago, Atlanta), Dubai (UAE), and India (Bangalore, Chennai, Madurai, Trichy). Website: https://www.techmango.net/

Job Title: GCP Data Architect
Location: Madurai
Experience: 12+ years
Notice Period: Immediate

About TechMango
TechMango is a rapidly growing IT Services and SaaS Product company that helps global businesses with digital transformation, modern data platforms, product engineering, and cloud-first initiatives. We are seeking a GCP Data Architect to lead data modernization efforts for our prestigious client, Livingston, in a highly strategic project.

Role Summary
As a GCP Data Architect, you will be responsible for designing and implementing scalable, high-performance data solutions on Google Cloud Platform. You will work closely with stakeholders to define data architecture, implement data pipelines, modernize legacy data systems, and guide data strategy aligned with enterprise goals.
Key Responsibilities:
- Lead end-to-end design and implementation of scalable data architecture on Google Cloud Platform (GCP)
- Define data strategy, standards, and best practices for cloud data engineering and analytics
- Develop data ingestion pipelines using Dataflow, Pub/Sub, Apache Beam, Cloud Composer (Airflow), and BigQuery
- Migrate on-prem or legacy systems to GCP (e.g., from Hadoop, Teradata, or Oracle to BigQuery)
- Architect data lakes, warehouses, and real-time data platforms
- Ensure data governance, security, lineage, and compliance (using tools like Data Catalog, IAM, and DLP)
- Guide a team of data engineers and collaborate with business stakeholders, data scientists, and product managers
- Create documentation, high-level design (HLD) and low-level design (LLD), and oversee development standards
- Provide technical leadership in architectural decisions and future-proofing the data ecosystem

Required Skills & Qualifications:
- 10+ years of experience in data architecture, data engineering, or enterprise data platforms
- Minimum 3–5 years of hands-on experience with GCP data services
- Proficient in BigQuery, Cloud Storage, Dataflow, Pub/Sub, Composer, and Cloud SQL/Spanner
- Python / Java / SQL
- Data modeling (OLTP, OLAP, star/snowflake schema)
- Experience with real-time data processing, streaming architectures, and batch ETL pipelines
- Good understanding of IAM, networking, security models, and cost optimization on GCP
- Prior experience leading cloud data transformation projects
- Excellent communication and stakeholder management skills

Preferred Qualifications:
- GCP Professional Data Engineer / Architect certification
- Experience with Terraform, CI/CD, GitOps, and Looker / Data Studio / Tableau for analytics
- Exposure to AI/ML use cases and MLOps on GCP
- Experience working in agile environments and client-facing roles

What We Offer:
- Opportunity to work on large-scale data modernization projects with global clients
- A fast-growing company with a strong tech and people culture
Competitive salary, benefits, and flexibility. Collaborative environment that values innovation and leadership. Job Type: Full-time Pay: ₹2,500,000.00 - ₹3,500,000.00 per year Application Question(s): Current CTC? Expected CTC? Notice Period? (If you are serving your notice period, please mention your last working day) Experience: GCP Data Architecture: 3 years (Required) BigQuery: 3 years (Required) Cloud Composer (Airflow): 3 years (Required) Location: Madurai, Tamil Nadu (Required) Work Location: In person
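The ingestion work described in this posting centers on windowed aggregation of streaming events (Dataflow/Beam pipelines feeding BigQuery). As a rough illustration of that pattern, not the actual Beam API, here is a plain-Python sketch of tumbling (fixed) windows over timestamped events; all names and data below are invented:

```python
from collections import defaultdict

def fixed_window_counts(events, window_secs=60):
    """Group (timestamp, key) events into tumbling windows and count per key.

    Illustrates the fixed-window aggregation a streaming ingestion
    pipeline performs before loading results into a warehouse table.
    Timestamps are epoch seconds.
    """
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = ts - (ts % window_secs)  # align to window boundary
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

events = [(0, "a"), (30, "b"), (61, "a"), (90, "a"), (125, "b")]
print(fixed_window_counts(events))
# window 0 holds one "a" and one "b"; window 60 holds two "a"; window 120 one "b"
```

In an actual Beam pipeline the same grouping would likely be expressed with `beam.WindowInto(window.FixedWindows(60))` followed by a combine step; the sketch only shows the windowing arithmetic.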
Posted 1 day ago
3.0 years
3 - 4 Lacs
Chennai
On-site
Job Title: Freelance Technical Trainer Job Location: Chennai (on-site) Experience required: 3–5 years Job Description: We are seeking an experienced and enthusiastic Freelance Trainer to conduct on-site technical workshops and training programs at colleges. The trainer should have 3–5 years of industry or academic experience in one or more of the following areas: Embedded Systems, CAD Tools, AI & ML, Data Analytics, Generative AI, and DevOps. The role involves engaging with students through hands-on, practical sessions and bridging the gap between academic learning and industry needs. Key Responsibilities: Deliver interactive and hands-on training sessions at college campuses. Explain complex topics in a simplified and practical manner. Guide students through real-world projects, tools, and case studies. Coordinate with college faculty and administration for session planning. Evaluate student participation and provide feedback on learning outcomes. Stay updated with current trends and technologies to deliver relevant content. Technical Domains: Embedded Systems: Microcontrollers, Arduino, IoT, C/C++, Embedded C CAD Engineering: AutoCAD, SolidWorks, CATIA, Mechanical Design Projects Artificial Intelligence & Machine Learning: Python, Deep Learning, NLP, model building Data Analytics: Python, Power BI, SQL, Data Visualization, Excel-based dashboards Generative AI: ChatGPT, Prompt Engineering, OpenAI tools, use-case demos DevOps: Git, Docker, Jenkins, CI/CD, Cloud basics (AWS/GCP), automation tools. Required Skills: Strong subject knowledge with real-time project experience. Prior experience conducting workshops/seminars in colleges (preferred). Excellent presentation and communication skills. Ability to engage students and encourage active participation. 
Job Types: Full-time, Freelance Contract length: 3 months Pay: ₹30,000.00 - ₹40,000.00 per month Application Question(s): Need only offline Experience: Technical trainer: 3 years (Preferred) Location: Chennai, Tamil Nadu (Preferred) Willingness to travel: 25% (Preferred) Work Location: In person Application Deadline: 17/07/2025 Expected Start Date: 25/07/2025
Posted 1 day ago
15.0 years
3 - 8 Lacs
Chennai
On-site
Company Description NielsenIQ is a consumer intelligence company that delivers the Full View™, the world’s most complete and clear understanding of consumer buying behavior that reveals new pathways to growth. Since 1923, NIQ has moved measurement forward for industries and economies across the globe. We are putting the brightest and most dedicated minds together to accelerate progress. Our diversity brings out the best in each other so we can leave a lasting legacy on the work that we do and the people that we do it with. NielsenIQ offers a range of products and services that leverage Machine Learning and Artificial Intelligence to provide insights into consumer behavior and market trends. This position opens the opportunity to apply the latest state of the art in AI/ML and data science to global and key strategic projects. Job Description At NielsenIQ Technology, we are evolving the Discover platform, a unified, global, open data cloud ecosystem. Organizations around the world rely on our data and insights to innovate and grow. As a Platform Architect, you will play a crucial role in defining the architecture of our platforms to realize the company’s business strategy and objectives. You will collaborate closely with colleagues in Architecture and Engineering to take architecture designs from concept to delivery. As you gain knowledge of our platform, the scope of your role will expand to include an end-to-end focus. Key Responsibilities: Architect new products using NIQ’s core platforms. Assess the viability of new technologies as alternatives to existing platform selections. Drive innovations through proof of concepts and support technology migration from planning to production. Produce high-level approaches for platform components to guide component architects. Create, maintain, and promote reference architectures for key areas of the platform. 
Collaborate with Product Managers, Engineering Managers, Tech Leads, and Site Reliability Engineers (SREs) to govern the architecture design. Create High-Level Architectures (HLAs) and Architecture Decision Records (ADRs) for new requirements. Maintain architecture designs and diagrams. Provide architecture reviews for new intakes. Qualifications 15+ years of experience, including a strong engineering background with 5+ years in architecture/design roles. Hands-on experience building scalable enterprise platforms. Proficiency with SQL. Experience with relational databases such as PostgreSQL, document-oriented databases such as MongoDB, and search engines such as Elasticsearch. Proficiency in Java, Python, and/or JavaScript. Familiarity with common frameworks like Spring, OpenAPI, PySpark, React, Angular, etc. Background in TypeScript and Node.js a plus. Bachelor's degree in computer science or a related field (required; master’s preferred). Strong knowledge in Azure and GCP public cloud providers desirable. Good knowledge of Azure Cloud technologies, including Azure Databricks, Azure Data Factory, and Azure cloud storage (ADLS/Azure Blob). Experience with Snowflake is a definite plus. Good knowledge of Google Cloud Platform (GCP) services, including BigQuery, Workflows, Kubernetes Engine and Cloud Storage. Good understanding of Containers/Kubernetes and CI/CD. Knowledge of BI tools and analytics features is a plus. Advanced knowledge of data structures, algorithms, and designing for performance, scalability, and availability. Experience in agile software development practices. Additional Information Our Benefits Flexible working environment Volunteer time off LinkedIn Learning Employee-Assistance-Program (EAP) About NIQ NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. 
In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook Our commitment to Diversity, Equity, and Inclusion NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action-Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
Posted 1 day ago
7.0 years
0 Lacs
India
On-site
Job Description Join our team as a Generative AI Architect in the AI Practice! We are looking for a visionary individual to lead the design, development, and deployment of cutting-edge Generative AI solutions, driving business value across our products and services. Key Responsibilities : Lead generative AI initiatives in text, ASR, TTS & STT generation and develop AI pipelines and ML models. Design scalable AI systems using state-of-the-art generative models like LLMs and diffusion models. Collaborate with cross-functional teams to integrate generative AI into business workflows. Define best practices for model lifecycle management, focusing on data preparation, training, evaluation, deployment, and monitoring. Guide ethical AI development with a focus on fairness, interpretability, and compliance. Qualifications: Bachelor’s or Master’s degree in Computer Science or related field. 7+ years of AI/ML development experience, with 2+ years in generative AI. Hands-on experience with large language models and diffusion models. Strong proficiency in Python and ML frameworks. Experience with Azure, GCP, and containerization. Familiarity with AI architecture patterns and model fine-tuning. Knowledge of data privacy laws and AI ethics principles. Preferred: Experience deploying GenAI applications in production. Knowledge of vector databases and retrieval-augmented generation. Experience with CI/CD pipelines for ML Job Type: Full-time Work Location: In person
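The preferred qualifications above mention vector databases and retrieval-augmented generation. A minimal sketch of the retrieval step, assuming precomputed embeddings and cosine similarity (the document ids and vectors below are made up for illustration, and a real system would use an embedding model and a vector store):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors; 0.0 for a zero vector."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def retrieve(query_vec, index, k=2):
    """Return the k document ids whose embeddings are most similar to the query."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy "vector database": (doc_id, embedding) pairs.
index = [("refund-policy", [0.9, 0.1, 0.0]),
         ("shipping-faq",  [0.1, 0.8, 0.1]),
         ("api-guide",     [0.0, 0.2, 0.9])]

# The retrieved documents would be prepended to the LLM prompt as grounding context.
context_ids = retrieve([0.85, 0.2, 0.05], index, k=2)
print(context_ids)
```

The generation half of RAG (formatting the retrieved passages into the prompt and calling the model) is deliberately omitted; this only shows the nearest-neighbour lookup that a vector database accelerates.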
Posted 1 day ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Summary Position Summary The Artificial Intelligence & Engineering (AI&E) portfolio is an integrated set of offerings that addresses our clients’ heart-of-the-business issues. This portfolio combines our functional and technical capabilities to help clients transform, modernize, and run their existing technology platforms across industries. As our clients navigate dynamic and disruptive markets, these solutions are designed to help them drive product and service innovation, improve financial performance, accelerate speed to market, and operate their platforms to innovate continuously. Role: Testing - ETL or Data or DW Level: Consultant As a Consultant at Deloitte Consulting, you will be responsible for individually delivering high-quality work products within due timelines in an agile framework. On a need basis, consultants will mentor and/or direct junior team members and liaise with onsite/offshore teams to understand the functional requirements. As an ETL/DW Tester, you will analyze nonfunctional requirements, design test scenarios/cases/scripts and the RTM, perform test execution, document and triage defects, document results, and attain sign-off of the design and results. The work you will do includes: Participating in business requirements discussions with client, onshore, or offshore teams and obtaining any clarifications needed to design test scenarios/cases/scripts Designing clean, efficient, and well-documented ETL and integration test scenarios/scripts, maintaining industry and client standards, based on business requirement analysis. Understanding data mappings and being able to create them. Provisioning the test data required to perform test execution. 
Performing Integration/ETL/Data migration test execution, logging defects, tracking and triaging defects to closure, and documenting test results Creating performance test results/reports Tracking and resolving dependencies that impact test activities Reporting and escalating any risks/issues which are blocking test activities Qualification Skills / Project Experience: Must Have: 3-6 years of hands-on experience in testing data validation scenarios and data ingestion, pipelines, and transformation processes. Hands-on experience with cloud platforms like AWS, Azure, GCP, etc. Hands-on experience and strong knowledge of complex SQL queries, including aggregation logic. An understanding of big data engineering tools and how they can be used strategically (e.g. Spark, Hive, Hadoop, Dask). Experience in analyzing data mappings and functional requirements and converting them into test scenarios/cases. Experience in different test phases with standardized QA processes Experience with QA-specific tools for test management, test execution progress, and defect tracking and triaging. Experience in different SDLC/STLC lifecycles and methodologies. Understanding of test strategy and test planning Experience in defect logging, tracking, and closing defects Experience in test status reporting and managing a small team of 2-3 members (for tenured consultants) Strong understanding of different software development life cycles (Agile, waterfall) and contemporary software quality assurance processes and automated tools Knowledge and 3+ years of experience with modern testing development practices and integrated testing products such as PySpark-based automation and market tools like QuerySurge, Talend, ETL Validator, etc., and their integration with tools such as GitLab. Experience with scripting languages like Unix shell or Python for automating ETL testing. Hands-on experience with data modelling and data profiling tools. 
Perform analysis of data migration from various sources to target systems in cloud platform/on-prem environments. Flexibility to adapt and apply innovation to varied business domains and apply technical solutioning and learnings to use cases across business domains and industries Good to Have: 2+ years of experience in API testing (JSON, XML), API automation, creation of virtualized services, and service virtualization testing using any of the industry tools like CA DevTest, Parasoft, or CA LISA Ability to perform estimation of test activities using quantitative methods Knowledge and experience working with Microsoft Office tools Experience analyzing slow-performing database queries / execution plan analysis. Education: B.E./B.Tech/M.C.A./M.Sc (CS) degree or equivalent from an accredited university Prior Experience: 3–6 years of experience working with ETL testing and data migration testing. Location: BLR/HYD The team Deloitte Consulting LLP’s Technology Consulting practice is dedicated to helping our clients build tomorrow by solving today’s complex business problems involving strategy, procurement, design, delivery, and assurance of technology solutions. Our service areas include analytics and information management, delivery, cyber risk services, and technical strategy and architecture, as well as the spectrum of digital strategy, design, and development services. The Core Business Operations practice optimizes clients’ business operations and helps them take advantage of new technologies. It drives product and service innovation, improves financial performance, accelerates speed to market, and operates client platforms to innovate continuously. Learn more about our Technology Consulting practice on www.deloitte.com Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. 
Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 300589
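The core of the ETL/data-migration testing this role describes is reconciling source against target. A minimal sketch using an in-memory SQLite database in place of real source/target systems (table names and data are hypothetical) shows two standard checks, row-count and per-key aggregation reconciliation:

```python
import sqlite3

# Build toy "source" and "target" tables in one in-memory database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_orders (id INTEGER, region TEXT, amount REAL);
    CREATE TABLE tgt_orders (id INTEGER, region TEXT, amount REAL);
    INSERT INTO src_orders VALUES (1,'APAC',100.0),(2,'EMEA',250.0),(3,'APAC',75.0);
    INSERT INTO tgt_orders VALUES (1,'APAC',100.0),(2,'EMEA',250.0),(3,'APAC',75.0);
""")

# Check 1: row-count reconciliation between source and target.
src_count = conn.execute("SELECT COUNT(*) FROM src_orders").fetchone()[0]
tgt_count = conn.execute("SELECT COUNT(*) FROM tgt_orders").fetchone()[0]
assert src_count == tgt_count, f"row counts differ: {src_count} vs {tgt_count}"

# Check 2: aggregation reconciliation per business key (any row returned is a defect).
mismatches = conn.execute("""
    SELECT s.region, SUM(s.amount), SUM(t.amount)
    FROM src_orders s JOIN tgt_orders t ON s.id = t.id
    GROUP BY s.region
    HAVING SUM(s.amount) <> SUM(t.amount)
""").fetchall()
assert mismatches == [], f"aggregation drift: {mismatches}"
print("reconciliation passed")
```

Against real warehouses the same queries would run through the platform's own client (BigQuery, Redshift, etc.) or a tool like QuerySurge, but the assertion pattern is the same.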
Posted 1 day ago
10.0 years
42 - 49 Lacs
Bengaluru
On-site
We are seeking a Senior Manager / Cloud Infrastructure Architect with deep expertise in Azure and/or GCP, and a strategic understanding of multi-cloud environments. You will lead the design, implementation, and optimization of cloud platforms that power data-driven, AI/GenAI-enabled enterprises. You will drive engagements focused on platform modernization, infrastructure-as-code, DevSecOps, and security-first architectures — aligning with business goals across Fortune 500 and mid-size clients. Key Responsibilities: Cloud Strategy & Architecture Shape and lead enterprise-grade cloud transformation initiatives across Azure, GCP, and hybrid environments. Advise clients on cloud-native, multi-cloud, and hybrid architectures aligned to performance, scalability, cost, and compliance goals. Architect data platforms, Lakehouses, and AI-ready infrastructure leveraging cloud-native services. AI & Data Infrastructure Enablement Design and deploy scalable cloud platforms to support Generative AI, LLM workloads, and advanced analytics. Implement data mesh and lakehouse patterns on cloud using services like Azure Synapse, GCP BigQuery, Databricks, Vertex AI, etc. Required Skills & Experience: 10+ years in cloud, DevOps, or infrastructure roles, with at least 4+ years as a cloud architect or platform engineering leader. Deep knowledge of Azure or GCP services, architecture patterns, and platform ops; multi-cloud experience is a strong plus. Proven experience with Terraform, Terragrunt, CI/CD (Azure DevOps, GitHub Actions, Cloud Build), and Kubernetes. Exposure to AI/ML/GenAI infra needs (GPU setup, MLOps, hybrid clusters, etc.) Familiarity with data platform tools: Azure Synapse, Databricks, BigQuery, Delta Lake, etc. Hands-on with security tools like Vault, Key Vault, Secrets Manager, and governance via policies/IAM. Excellent communication and stakeholder management skills. 
Preferred Qualifications: Certifications: Azure Solutions Architect Expert, GCP Professional Cloud Architect, Terraform Associate Experience working in AI, Data Science, or Analytics-led organizations or consulting firms. Background in leading engagements in regulated industries (finance, healthcare, retail, etc.) Key Skills: Landing zone patterns, including data landing zones. Multi-cloud data platforms: Databricks, Snowflake, Dataproc, BigQuery, Azure HDInsight, AWS Redshift, EMR. AI: GenAI platforms, AI Foundry, AWS Bedrock, AWS SageMaker, GCP Vertex AI. Security, scalability, and cost-efficient, cloud-agnostic multi-cloud architecture. Job Type: Full-time Pay: ₹4,200,000.00 - ₹4,900,000.00 per year Schedule: Day shift Work Location: In person
Posted 1 day ago
7.0 years
0 Lacs
Bengaluru
On-site
About Us With electric vehicles expected to be nearly 30% of new vehicle sales by 2025 and more than 50% by 2040, electric mobility is becoming a reality. ChargePoint (NYSE: CHPT) is at the center of this revolution, powering one of the world's leading EV charging networks and a comprehensive set of hardware, software and mobile solutions for every charging need across North America and Europe. We bring together drivers, businesses, automakers, policymakers, utilities and other stakeholders to make e-mobility a global reality. Since our founding in 2007, ChargePoint has focused solely on making the transition to electric easy for businesses, fleets and drivers. ChargePoint offers a once-in-a-lifetime opportunity to create an all-electric future and a trillion-dollar market. At ChargePoint, we foster a positive and productive work environment by committing to live our values of Be Courageous, Charge Together, Love our Customers, Operate with Openness, and Relentlessly Pursue Awesome. These values guide how we show up every day, align, and work together to build a brighter future for all of us. Join the team that is building the EV charging industry and make your mark on how people and goods will get everywhere they need to go, in any context, for generations to come. Reports To Senior Software Engineering Manager What You Will Be Doing As a Staff Software Engineer on the Driver Services team, you will help architect and maintain mission-critical systems supporting our more than 800K EV drivers across multiple continents. In this role, you will: Contribute to the evolution of a modern, scalable microservices architecture serving global driver use cases. Build reliable APIs and services that integrate with SQL and NoSQL databases (MySQL, PostgreSQL, MongoDB, Elasticsearch). Develop asynchronous event pipelines using Kafka , RabbitMQ , and similar technologies. 
Collaborate with product managers, architects, QA, and engineers across the US, Europe, and India to deliver high-quality solutions. Embrace best practices in testing, observability, monitoring, and CI/CD. Participate in code reviews and technical design discussions. Our services process hundreds of thousands of EV driver transactions per day. This position offers the opportunity to work at the intersection of clean technology and enterprise software, helping to shape the future of electric transportation. As we expand our presence in India, this position presents significant growth potential for those interested in technical leadership. You'll play a key role in building and mentoring our Indian engineering team. What You Will Bring to ChargePoint Bachelor's degree in Computer Science or equivalent experience. 7+ years of backend software development experience. Strong expertise in Java and Spring Boot, including production-grade service development. Experience with microservices and understanding of distributed systems fundamentals. Solid knowledge of SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, Elasticsearch). Hands-on experience with message queues like Kafka or RabbitMQ. Strong problem-solving and debugging skills. Excellent communication skills and ability to collaborate across global, distributed teams. Nice to Have Experience with Golang or PHP . Familiarity with cloud-native platforms (AWS, GCP, or Azure) and container orchestration (Kubernetes). Prior experience in mobility, IoT, or infrastructure-focused products. Location Bangalore, India We are committed to an inclusive and diverse team. ChargePoint is an equal opportunity employer. We do not discriminate based on race, color, ethnicity, ancestry, national origin, religion, sex, gender, gender identity, gender expression, sexual orientation, age, disability, veteran status, genetic information, marital status or any legally protected status. 
If there is a match between your experiences/skills and the Company needs, we will contact you directly. ChargePoint is an equal opportunity employer. Applicants only - Recruiting agencies do not contact.
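The asynchronous event pipelines this role mentions (Kafka, RabbitMQ) follow a producer/consumer pattern with backpressure. A minimal sketch using an `asyncio.Queue` as a stand-in broker (the event fields and session ids are invented; a real service would use a Kafka client library):

```python
import asyncio

async def producer(queue, events):
    # Stand-in for a Kafka/RabbitMQ publisher emitting charging-session events.
    for event in events:
        await queue.put(event)
    await queue.put(None)  # sentinel: stream finished

async def consumer(queue, processed):
    # Stand-in for a consumer service; a real handler would bill and persist each session.
    while True:
        event = await queue.get()
        if event is None:
            break
        processed.append(event["session_id"])

async def main():
    queue = asyncio.Queue(maxsize=100)  # bounded queue gives simple backpressure
    events = [{"session_id": "s1", "kwh": 12.5},
              {"session_id": "s2", "kwh": 7.25}]
    processed = []
    await asyncio.gather(producer(queue, events), consumer(queue, processed))
    return processed

result = asyncio.run(main())
print(result)
```

With a real broker the sentinel is replaced by consumer-group offset management, and the bounded queue by the broker's own flow control, but the decoupling shown here is the point of the pattern.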
Posted 1 day ago
8.0 years
6 - 9 Lacs
Bengaluru
Remote
We help the world run better At SAP, we enable you to bring out your best. Our company culture is focused on collaboration and a shared passion to help the world run better. How? We focus every day on building the foundation for tomorrow and creating a workplace that embraces differences, values flexibility, and is aligned to our purpose-driven and future-focused work. We offer a highly collaborative, caring team environment with a strong focus on learning and development, recognition for your individual contributions, and a variety of benefit options for you to choose from. What you’ll do SAP Enterprise Cloud Services (ECS) organization is responsible for providing cloud hosted infrastructure, technical & application managed services to our SAP private cloud customers. The Client Delivery Manager is the main customer-facing representative of SAP ECS organization ensuring full delivery accountability of the engagement and customer satisfaction throughout the customer lifecycle. What you bring As Client Delivery Manager you will be responsible for the following tasks: Own and grow the client engagement for SAP Enterprise Cloud Services and act as the voice of the client within SAP. Accountable for entire SAP ECS engagement, lead the engagement with supporting ECS functions and roles to deliver as per contract scope and in line with customer expectations. Offer comprehensive knowledge on SAP S/4HANA architecture, conversion, migration path, methodology and tools. Understand the SAP high availability or disaster recovery architecture, network and virtual technologies (load-balancer, virtual machine) Setup proactive service plan and conduct regular service review meetings with clients (operational and strategic topics). Act as an (de-)escalation point for delivery-related topics (Incidents, Service Requests and other customer requirements). Ensure seamless alignment across multiple ECS and other SAP internal and external stakeholders. 
Identify top issues, define and execute service plan activities, and support commercial change request management in the client lifecycle; perform contract compliance and risk management (project and business risks). Support the positioning of additional ECS offerings and support contract renewal in alignment with SAP sales and presales teams. What you need to bring: Minimum 8+ years of SAP technical administration and operations of SAP solutions (preferably in the domain of SAP Basis) Fluency in English is mandatory Proven track record in managing client engagements, e.g. in Service Delivery Management, Consulting, or Pre-Sales settings. Strong customer orientation with a focus on relationship, expectation, and de-escalation management. Comprehensive knowledge of SAP S/4HANA architecture, conversion, migration paths, methodology, and tools. Knowledge of IT trends, their impact on business strategies, and SAP’s strategy and service portfolio. Ability to work effectively as a virtual member of a dynamic and dispersed team (remote) SAP Basis, IT Service Management, Project Management, Cloud, or hyperscaler certification (AWS, Azure, GCP) is a plus Meet your team The ECS organization is a global organization, and the regional CDM teams are located across 6 regions. We are a highly diverse and positive-spirited bunch of colleagues. Next to our obsession with customer satisfaction, we value internal knowledge sharing and collaboration, and we make the ECS organization a little better every day. #sapecscareers Bring out your best SAP innovations help more than four hundred thousand customers worldwide work together more efficiently and use business insight more effectively. Originally known for leadership in enterprise resource planning (ERP) software, SAP has evolved to become a market leader in end-to-end business application software and related services for database, analytics, intelligent technologies, and experience management. 
As a cloud company with two hundred million users and more than one hundred thousand employees worldwide, we are purpose-driven and future-focused, with a highly collaborative team ethic and commitment to personal development. Whether connecting global industries, people, or platforms, we help ensure every challenge gets the solution it deserves. At SAP, you can bring out your best. We win with inclusion SAP’s culture of inclusion, focus on health and well-being, and flexible working models help ensure that everyone – regardless of background – feels included and can run at their best. At SAP, we believe we are made stronger by the unique capabilities and qualities that each person brings to our company, and we invest in our employees to inspire confidence and help everyone realize their full potential. We ultimately believe in unleashing all talent and creating a better and more equitable world. SAP is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to the values of Equal Employment Opportunity and provide accessibility accommodations to applicants with physical and/or mental disabilities. If you are interested in applying for employment with SAP and are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to Recruiting Operations Team: Careers@sap.com For SAP employees: Only permanent roles are eligible for the SAP Employee Referral Program, according to the eligibility rules set in the SAP Referral Policy. Specific conditions may apply for roles in Vocational Training. EOE AA M/F/Vet/Disability: Qualified applicants will receive consideration for employment without regard to their age, race, religion, national origin, ethnicity, gender (including pregnancy, childbirth, et al), sexual orientation, gender identity or expression, protected veteran status, or disability. 
Successful candidates might be required to undergo a background verification with an external vendor. Requisition ID: 400071 | Work Area: Information Technology | Expected Travel: 0 - 10% | Career Status: Professional | Employment Type: Regular Full Time | Additional Locations: #LI-Hybrid.
Posted 1 day ago
7.0 - 10.0 years
4 - 7 Lacs
Bengaluru
On-site
Who We Are At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities. The Role Join Kyndryl as a Data Analyst where you will unlock the power of data to drive strategic decisions and shape the future of our business. As a key member of our team, you will harness your expertise in basic statistics, business fundamentals, and communication to uncover valuable insights and transform raw data into rigorous visualizations and compelling stories. In this role, you will have the opportunity to work closely with our customers as part of a top-notch team. You will dive deep into vast IT datasets, unraveling the mysteries hidden within, and discovering trends and patterns that will revolutionize our customers' understanding of their own landscapes. Armed with your advanced analytical skills, you will draw compelling conclusions and develop data-driven insights that will directly impact their decision-making processes. Your role goes beyond traditional data analysis. You will be a trusted advisor, utilizing your domain expertise, critical thinking, and consulting skills to unravel complex business problems and translate them into innovative solutions. Your proficiency in cutting-edge software tools and technologies will empower you to gather, explore, and prepare data – ensuring it is primed for analysis, business intelligence, and insightful visualizations. Collaboration will be at the heart of your work. As a Data Analyst at Kyndryl, you will collaborate closely with cross-functional teams, pooling together your collective expertise to gather, structure, organize, and clean data. Together, we will ensure the data is in its finest form, ready to deliver actionable insights. 
Your unique ability to communicate and empathize with stakeholders will be invaluable. By understanding the business objectives and success criteria of each project, you will align your data analysis efforts seamlessly with our overarching goals. With your mastery of business valuation, decision-making, project scoping, and storytelling, you will transform data into meaningful narratives that drive real-world impact.

At Kyndryl, we believe that data holds immense potential, and we are committed to helping you unlock that potential. You will have access to vast repositories of data, empowering you to delve deep to determine root causes of defects and variation. By gaining a comprehensive understanding of the data and its specific purpose, you will be at the forefront of driving innovation and making a difference. If you are ready to unleash your analytical ability, collaborate with industry experts, and shape the future of data-driven decision making, then join us as a Data Analyst at Kyndryl. Together, we will harness the power of data to redefine what is possible and create a future filled with limitless possibilities.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others.
Key Responsibilities:
· Design and develop interactive dashboards and reports using tools like Power BI, Tableau, or Looker.
· Build and maintain ETL/ELT pipelines to support analytics and reporting use cases.
· Work with stakeholders to gather business requirements and translate them into technical specifications.
· Perform data modeling (star/snowflake schemas), data wrangling, and transformation.
· Ensure data quality, accuracy, and integrity across reporting layers.
· Collaborate with Data Architects, Analysts, and Engineers to design and implement data solutions that scale.
· Automate report generation, data refreshes, and alerting workflows.
· Maintain version control, CI/CD practices, and documentation for dashboards and data pipelines.

Technical Skills Required:
· 7 to 10 years of overall experience, including at least 5 years of strong experience with BI tools: Power BI, Tableau, or similar
· Proficiency in SQL for data querying and transformations
· Hands-on experience with ETL tools (e.g., Azure Data Factory, SSIS, Talend, dbt)
· Experience with data warehouses (e.g., Snowflake, Azure Synapse, BigQuery, Redshift)
· Programming/scripting knowledge in Python or PySpark (for data wrangling or pipeline development)
· Familiarity with cloud platforms: Azure, AWS, or GCP (Azure preferred)
· Understanding of data governance, security, and role-based access control (RBAC)

Preferred Qualifications:
· Experience with CI/CD tools (e.g., GitHub Actions, Azure DevOps)
· Exposure to data lake architectures, real-time analytics, and streaming data (Kafka, Event Hub, etc.)
· Knowledge of DAX, MDX, or custom scripting for BI calculations
· Cloud certifications (e.g., Azure Data Engineer, AWS Data Analytics)
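The star-schema modeling and SQL skills listed above can be sketched minimally in plain Python with the standard-library sqlite3 module. The table and column names (fact_sales, dim_date, dim_product) are illustrative assumptions, not from any specific warehouse; a real deployment would target Snowflake, Synapse, or similar.

```python
import sqlite3

# Minimal star-schema sketch: one fact table joined to two dimension
# tables, then a typical BI aggregation. All names are hypothetical.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE fact_sales (date_key INTEGER, product_key INTEGER, amount REAL);
INSERT INTO dim_date VALUES (1, 2024, 1), (2, 2024, 2);
INSERT INTO dim_product VALUES (10, 'Hardware'), (20, 'Software');
INSERT INTO fact_sales VALUES (1, 10, 100.0), (1, 20, 50.0), (2, 10, 75.0);
""")

# Revenue by category and month: the kind of query that feeds a
# Power BI or Tableau dashboard layer.
rows = cur.execute("""
SELECT p.category, d.month, SUM(f.amount) AS revenue
FROM fact_sales f
JOIN dim_date d ON f.date_key = d.date_key
JOIN dim_product p ON f.product_key = p.product_key
GROUP BY p.category, d.month
ORDER BY p.category, d.month
""").fetchall()
```

The same fact/dimension split scales from this toy example to warehouse tables with millions of rows; only the engine and connection change.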
Soft Skills:
· Strong analytical mindset and attention to detail
· Effective communication and stakeholder management skills
· Ability to manage multiple tasks and priorities in fast-paced environments
· Self-starter with a proactive problem-solving approach

Preferred Skills and Experience:
· Degree in a quantitative discipline, such as industrial engineering, finance, or economics
· Knowledge of data analysis tools and programming languages (e.g., Looker, Power BI, QuickSight, BigQuery, Azure Synapse, Python, R, or SQL)
· Professional certification, e.g., ASQ Six Sigma
· Cloud platform certification, e.g., AWS Certified Data Analytics – Specialty, Google Cloud Looker Business Analyst, or Microsoft Certified: Power BI Data Analyst Associate

Being You
Diversity is a whole lot more than what we look like or where we come from; it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more.
Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone who works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
Posted 1 day ago
0 years
28 - 34 Lacs
Bengaluru
On-site
We are seeking a skilled and strategic Cloud Architect with deep expertise in Azure (preferred), GCP, and AWS to lead cloud transformation initiatives. The ideal candidate will have a strong background in Infrastructure as Code (IaC), cloud security, governance, and CI/CD pipelines, with a good understanding of Data and AI workloads. This role demands excellent stakeholder management and the ability to thrive in challenging and ambiguous environments.

Key Responsibilities:
· Design and implement scalable, secure, and cost-effective cloud architectures across Azure, GCP, and AWS.
· Lead cloud strategy and transformation initiatives aligned with business goals.
· Develop and maintain IaC using Terraform, Bicep, and ARM templates.
· Implement and manage cloud security using Azure Policy, Key Vault, and Defender for Cloud.
· Establish CI/CD pipelines using GitHub Actions and Azure DevOps.
· Define and enforce governance models including RBAC, custom policies, and Zero Trust architectures.
· Collaborate with data and AI teams to support infrastructure needs for advanced workloads.
· Optimize cloud cost management and ensure compliance with organizational policies.
· Provide technical leadership and mentorship to engineering teams.
· Engage with stakeholders to understand requirements, communicate solutions, and drive adoption.

Required Skills & Qualifications:
· Proven experience with Azure (preferred), GCP, and AWS.
· Strong proficiency in Terraform, Bicep, and ARM templates.
· Hands-on experience with Azure Policy, Key Vault, and Defender for Cloud.
· Expertise in CI/CD tools: GitHub Actions, Azure DevOps.
· Deep understanding of cloud governance, RBAC, and Zero Trust models.
· Familiarity with cloud infrastructure for Data and AI workloads (preferred).
· Excellent stakeholder management and communication skills.
· Ability to work effectively in challenging and ambiguous environments.
· Strong problem-solving and analytical skills.
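The governance duties above (enforcing required tags, allowed regions, and similar organizational policies) can be sketched as a small policy-check function. This is a hypothetical Python illustration, not the Azure Policy or Terraform schema; the resource shape, rule sets, and names (ALLOWED_REGIONS, REQUIRED_TAGS, policy_violations) are all assumptions for the sketch.

```python
# Hypothetical governance check: validate resource definitions against
# simple organizational policy rules before deployment. In practice
# this logic would live in Azure Policy definitions or a CI/CD gate.
ALLOWED_REGIONS = {"eastus", "westeurope"}
REQUIRED_TAGS = {"owner", "cost-center"}

def policy_violations(resource: dict) -> list:
    """Return human-readable policy violations for one resource dict."""
    issues = []
    if resource.get("region") not in ALLOWED_REGIONS:
        issues.append("region %r not allowed" % resource.get("region"))
    missing = REQUIRED_TAGS - set(resource.get("tags", {}))
    if missing:
        issues.append("missing required tags: %s" % sorted(missing))
    return issues

# Example: a VM definition that violates both rules.
vm = {"name": "vm-01", "region": "centralus", "tags": {"owner": "data-team"}}
violations = policy_violations(vm)
```

A CI/CD pipeline step could run such checks over every planned resource and fail the build on any non-empty violation list, which is the "shift-left" pattern behind policy-as-code gates.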
Preferred Certifications:
· Microsoft Certified: Azure Solutions Architect Expert
· Google Cloud Certified: Professional Cloud Architect
· AWS Certified Solutions Architect – Professional
· Certified Terraform Associate (HashiCorp)

Job Type: Full-time
Pay: ₹2,800,000.00 - ₹3,400,000.00 per year
Schedule: Day shift
Work Location: In person
Posted 1 day ago