0 years
0 Lacs
Gurugram, Haryana, India
On-site
Description
Do you want to join an innovative team of scientists who use machine learning and statistical techniques to create state-of-the-art solutions that provide better value to Amazon’s customers? Do you want to build and deploy advanced algorithmic systems that help optimize millions of transactions every day? Are you excited by the prospect of analyzing and modeling terabytes of data to solve real-world problems? Do you like to own end-to-end business problems/metrics and directly impact the profitability of the company? Do you like to innovate and simplify? If yes, you may be a great fit for the Machine Learning and Data Sciences team for India Consumer Businesses. If you have an entrepreneurial spirit, know how to deliver, love to work with data, are deeply technical and highly innovative, and long for the opportunity to build solutions to challenging problems that directly impact the company’s bottom line, we want to talk to you.

Major Responsibilities
• Use machine learning and analytical techniques to create scalable solutions for business problems
• Analyze and extract relevant information from large amounts of Amazon’s historical business data to help automate and optimize key processes
• Design, develop, evaluate, and deploy innovative, highly scalable models for predictive learning
• Research and implement novel machine learning and statistical approaches
• Work closely with software engineering teams to drive real-time model implementations and new feature creation
• Work closely with business owners and operations staff to optimize various business operations
• Establish scalable, efficient, automated processes for large-scale data analysis, model development, model validation, and model implementation
• Mentor other scientists and engineers in the use of ML techniques

Basic Qualifications
• Experience programming in Java, C++, Python, or a related language
• Experience with SQL and an RDBMS (e.g., Oracle) or a data warehouse

Preferred Qualifications
• Experience implementing algorithms using both toolkits and self-developed code
• Publications at top-tier peer-reviewed conferences or journals

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company: ADCI - Haryana
Job ID: A3020367
Posted 22 hours ago
5.0 years
0 Lacs
India
Remote
Job Title: Technical Hands-on AI Senior Developer/Lead
Location: Remote, India-based
Company: Data-Hat AI
Type: Full-time Contractor
Experience Level: Senior (5+ years)

About Data-Hat AI
Data-Hat AI is a cutting-edge AI company building enterprise-grade Generative and Agentic AI solutions for high-impact industries such as pharma, retail, finance, and government. As a recognized leader in AI innovation, we are seeking a Technical Hands-on AI Lead to spearhead the design and development of advanced AI systems. This is a deeply technical role (not a people-management position), ideal for someone who thrives in fast-paced environments and enjoys building from the ground up. AI agents, LLMs, RAG, voice interaction, and speech-to-text/text-to-speech (STT/TTS) technologies such as Whisper and related APIs are central to this role.

Role Description
This is a full-time remote role for an AI Tech Lead (Hands-On). The AI Tech Lead will join an elite team as a peer to others with similar qualifications. They will be responsible for leading technical AI projects, developing and implementing advanced AI models, and collaborating with cross-functional teams. Day-to-day tasks include designing AI solutions, coding, debugging, conducting research, and staying current with the latest AI advancements. The role also involves mentoring junior team members and ensuring the integration of best practices in AI development.

Key Responsibilities
• Design and build agentic AI systems using orchestration tools such as LangChain, LangGraph, or similar.
• Architect and implement end-to-end AI/ML systems, from data pipelines to model deployment at scale.
• Lead technical initiatives and collaborate with cross-functional teams to deliver production-ready AI solutions.
• Develop, refine, and maintain LLM orchestration pipelines, prompt engineering frameworks, and multi-agent architectures.
• Build voice-driven experiences using speech-to-text and text-to-speech (e.g., the Whisper API), Unreal Engine, avatars, and related technologies.
• Implement MLOps best practices, including automated model training and deployment, CI/CD pipelines for ML, and model versioning, monitoring, and retraining.
• Write clean, production-grade Python code and mentor others on code quality and debugging.
• Build and manage infrastructure in cloud environments (AWS, GCP, or Azure), using Docker and Kubernetes.
• Rapidly prototype, test, and optimize Generative AI and Agentic AI systems in agile sprints.
• Context-switch across multiple concurrent projects while maintaining high-quality output.
• Stay on the frontier of AI advancements and integrate cutting-edge tools and techniques.
• Contribute to technical strategy, architecture decisions, and organizational best practices.
• Present and communicate complex technical ideas to technical peers and executive stakeholders.

Required Skills & Experience
• 5+ years in AI/ML architecture and development, with a track record of end-to-end delivery.
• 1+ years in AI agent development.
• 4+ years of hands-on Python programming experience.
• 2+ years of experience in Generative AI (LLMs, image/audio generation) and agent-based AI systems.
• Strong expertise in AI and machine learning algorithms and frameworks.
• Strong command of MLOps tooling, ML CI/CD, and monitoring frameworks.
• Proficient with cloud-native development and containerized environments (Docker, Kubernetes).
• Proven expertise in AI orchestration, autonomous agent design, and task automation using modern frameworks.
• Comfortable operating in a fast-moving, startup-style environment.
• Exceptional communication and presentation skills, with the ability to explain complex AI systems clearly.
• Demonstrated analytical thinking and creative problem-solving skills.
• Experience with vector databases, RAG architectures, or hybrid AI search systems.

Why Join Us
• Work on cutting-edge AI products shaping the future of intelligent enterprises.
• Collaborate with a high-caliber team of AI experts and innovators.
• Flexible working environment with opportunities to impact real-world use cases globally.

To Apply: Send your CV and a brief note about your relevant experience to hiring@data-hat.com with the subject line: AI Lead Application - [Your Name]
Posted 23 hours ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Title: Senior Databricks Architect
Location: Hybrid (Chennai, TN)

Key Responsibilities:
The role requires a strong focus on engaging with clients and showcasing thought leadership. The ideal candidate will be skilled in developing insightful content and presenting it effectively to diverse audiences.

Cloud Architecture & Design:
• Understand customers’ overall data platform, business and IT priorities, and success measures to design data solutions that drive business value.
• Design scalable, high-performance, end-to-end data architectures, solutions, and pipelines using Databricks Lakehouse architecture and related technologies, covering data ingestion, processing, storage, and analytical capabilities.
• Work with major cloud providers to integrate Databricks solutions seamlessly into customers’ enterprise environments.
• Assess and validate non-functional attributes and build solutions that exhibit high levels of performance, security, scalability, maintainability, and reliability.
• Optimize Databricks clusters and queries for efficiency, reliability, and cost-effectiveness.

Technical Leadership:
• Guide technical teams in best practices for cloud adoption, migration, and application modernization, and provide thought leadership and insights into existing and emerging Databricks capabilities.
• Ensure the long-term technical viability and optimization of cloud deployments by identifying and resolving bottlenecks proactively, thereby ensuring high availability and efficient resource utilization.

Stakeholder Collaboration:
• Work closely with business leaders, developers, and operations teams to ensure alignment of technological solutions with business goals.
• Work with prospective and existing customers to implement POCs/MVPs and guide them through deployment, operationalization, and troubleshooting.
• Identify, communicate, and mitigate the assumptions, issues, and risks that occur throughout the project lifecycle.
• Judge and strike a balance between what is strategically logical and what can be accomplished realistically.

Innovation and Continuous Improvement:
• Stay updated on the latest advancements in Databricks and big data technologies and drive their adoption.
• Implement innovative solutions to improve data processing, storage, and analytics efficiency.
• Identify opportunities to enhance existing data engineering processes using AI and machine learning tools.

Documentation:
• Create comprehensive blueprints, architectural diagrams, technical collateral, assets, and implementation plans for Databricks solutions.

Required Qualifications:
Education:
• Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
Experience:
• Minimum of 8 years in architecture roles, including at least 3-5 years working with Databricks.
Technical Expertise:
• Strong proficiency in Spark, Python, Scala, SQL, and cloud platforms (Azure, AWS, GCP).
• Strong proficiency in Databricks workflows, Lakehouse architecture, Delta tables, and MLflow.
• Proficiency in cloud architectural best practices around user management, data privacy, data security, performance, and other non-functional requirements.
• Familiarity with building AI/ML models on cloud solutions using Databricks.
Soft Skills:
• Strong analytical, problem-solving, and troubleshooting skills.
• Excellent communication skills and the ability to mentor and inspire teams.
• Strong leadership abilities with experience managing cross-functional teams.

Preferred Skills:
• Databricks certification as a professional or architect.
• Experience in the BFSI, Healthcare, or Retail domain.
• Experience with hybrid cloud environments and multi-cloud strategies.
• Experience with data governance principles, data privacy, and security.
• Experience with data visualization tools like Power BI or Tableau.
Posted 23 hours ago
8.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Title: Senior Data Architect - GCP
Location: Chennai, TN (Hybrid)

Key Responsibilities:
The role requires a strong focus on engaging with clients and showcasing thought leadership. The ideal candidate will be skilled in developing insightful content and presenting it effectively to diverse audiences.

Cloud Architecture & Design:
• Understand customers’ overall data platform, business and IT priorities, and success measures to design data solutions that drive business value.
• Architect, develop, and implement robust, scalable, and secure end-to-end data engineering, integration, and warehousing solutions on the GCP platform.
• Assess and validate non-functional attributes and build solutions that exhibit high levels of performance, security, scalability, maintainability, and reliability.

Technical Leadership:
• Guide technical teams in best practices for cloud adoption, migration, and application modernization, and provide thought leadership and insights into emerging GCP services.
• Ensure the long-term technical viability and optimization of cloud deployments by identifying and resolving bottlenecks proactively, thereby ensuring high availability and efficient resource utilization.
• Collaborate with teams to assess the effort and feasibility of proposed solutions, leveraging technical expertise to predict potential challenges and variances.

Stakeholder Collaboration:
• Work closely with business leaders, developers, and operations teams to ensure alignment of technological solutions with business goals.
• Work with prospective and existing customers to implement POCs/MVPs and guide them through deployment, operationalization, and troubleshooting.
• Identify, communicate, and mitigate the assumptions, issues, and risks that occur throughout the project lifecycle.
• Judge and strike a balance between what is strategically logical and what can be accomplished realistically.

Innovation and Continuous Improvement:
• Stay updated on the latest advancements in GCP services and tools.
• Implement innovative solutions to improve data processing, storage, and analytics efficiency.
• Identify opportunities to enhance existing data engineering processes using AI and machine learning tools.

Documentation:
• Create comprehensive blueprints, architectural diagrams, technical collateral, assets, and implementation plans for GCP-based solutions.

Required Qualifications:
Education:
• Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
Experience:
• Minimum of 8-10 years in architecture roles, including at least 3-5 years working with GCP.
Technical Expertise:
• Strong proficiency in GCP data warehousing and engineering services (BigQuery, Dataproc, Cloud Data Fusion, Composer, etc.), containerization (Kubernetes, Docker), API-based microservices architecture, CI/CD pipelines, and infrastructure-as-code tools like Terraform.
• Strong proficiency in data modelling, ETL/ELT, data integration, and data warehousing concepts.
• Proficiency in cloud architectural best practices around user management, data privacy, data security, performance, and other non-functional requirements.
• Programming skills in languages such as PySpark, Python, Java, or Scala.
• Familiarity with building AI/ML models on cloud solutions built on GCP.
Soft Skills:
• Strong analytical, problem-solving, and troubleshooting skills.
• Excellent communication skills and the ability to mentor and inspire teams.
• Strong leadership abilities with experience managing cross-functional teams.

Preferred Skills:
• Solution architect and/or data engineer certifications from GCP.
• Experience in the BFSI, Healthcare, or Retail domain.
• Experience with hybrid cloud environments and multi-cloud strategies.
• Experience with data governance principles, data privacy, and security.
• Experience with data visualization tools like Power BI or Tableau.
Posted 23 hours ago
10.0 years
0 Lacs
Bengaluru East, Karnataka, India
Remote
All roles at JumpCloud are Remote unless otherwise specified in the Job Description.

About JumpCloud
JumpCloud® delivers a unified open directory platform that makes it easy to securely manage identities, devices, and access across your organization. With JumpCloud®, IT teams and MSPs enable users to work securely from anywhere and manage their Windows, Apple, Linux, and Android devices from a single platform. JumpCloud® is IT Simplified.

About the Role:
We are seeking a Staff Product Manager with deep expertise in AI, Data Science, and Cybersecurity to lead the development of a transformative Security Data Fabric and Exposure Management Platform (ISPM, ITDR, etc.). In a world of siloed security tools and scattered data, your mission is to turn data chaos into clarity, helping organizations see, understand, and act on their cyber risk with precision and speed.

The JumpCloud access and authentication team is changing the way IT admins and users authenticate to their JumpCloud-managed IT resources for a frictionless experience to get work done. The days of the traditional corporate security perimeter are over. Remote work – and the domainless enterprise – are here to stay. As such, we believe securing all endpoints is at the crux of establishing trust, granting resource access, and otherwise managing a modern workforce. Our Cloud Directory Platform supports diverse IT endpoints, from devices and SSO applications to infrastructure servers, RADIUS, and LDAP, making it easy for IT admins to manage the required authentication, from MFA to zero trust, using conditional access based on Identity Trust, Network Trust, Geolocation Trust, and Device Trust built on X.509 certificates. If you want to build on this success and drive the future of authentication at JumpCloud, come join us.

You’ll be at the forefront of designing a next-generation data platform that:
• Creates a Security Data Fabric to unify signals from across the attack surface
• Uses AI to resolve entities and uncover hidden relationships
• Drives real-time Exposure Management to reduce risk faster than adversaries can act

You will be responsible for:
• Define and drive the product strategy for the Security Data Fabric and Exposure Management platform (ISPM, ITDR, etc.), aligned with customer needs and business goals
• Engage with CISOs, security analysts, and risk leaders to deeply understand pain points in exposure management and cyber risk visibility
• Translate strategic objectives into clear, actionable product requirements that leverage AI/ML and data science to unify and contextualize security signals
• Collaborate closely with engineering, data science, UX, sales, and security research to deliver scalable and performant solutions
• Champion a data-centric mindset, shaping features like entity resolution, risk scoring, and automated remediation workflows powered by advanced analytics

You Have:
• 10+ years of experience in product management, with at least 5 years in cybersecurity or enterprise AI/data products
• Deep understanding of AI/ML, data science, entity resolution, and knowledge graphs in practical applications
• Experience building or integrating security analytics, threat detection, vulnerability management, or SIEM/XDR solutions
• Ability to untangle complex, interconnected authentication systems and simplify them to drive the cross-functional team in the same direction
• Proven ability to define and deliver complex B2B platforms, especially in data-heavy, high-stakes environments
• Excellent communication and storytelling skills to align cross-functional teams and influence stakeholders

Nice to have:
• Experience with graph databases, ontologies, or large-scale entity disambiguation
• Familiarity with security standards (MITRE ATT&CK, CVSS, etc.) and frameworks (NIST CSF, ISO 27001, etc.)
• Prior experience launching products in cloud-native or hybrid enterprise environments
• Degree in Computer Science, Information Systems, or Engineering; MBA is a plus

Where you’ll be working/Location:
JumpCloud is committed to being Remote First, meaning that you are able to work remotely within the country noted in the Job Description. This role is remote in India. You must be located in and authorized to work in India to be considered for this role.

Language:
JumpCloud® has teams in 15+ countries around the world and conducts its internal business in English. The interview and any additional screening process will take place primarily in English. To be considered for a role at JumpCloud®, you will be required to speak and write English fluently. Any additional language requirements will be included in the details of the job description.

Why JumpCloud?
If you thrive working in a fast, SaaS-based environment and you are passionate about solving challenging technical problems, we look forward to hearing from you! JumpCloud® is an incredible place to share and grow your expertise! You’ll work with amazing talent across each department who are passionate about our mission. We’re out-of-the-box thinkers, so your unique ideas and approaches for conceiving a product and/or feature will be welcome. You’ll have a voice in the organization as you work with a seasoned executive team, a supportive board, and a proven market that our customers are excited about.

One of JumpCloud®'s three core values is to “Build Connections.” To us that means creating “human connection with each other regardless of our backgrounds, orientations, geographies, religions, languages, gender, race, etc. We care deeply about the people that we work with and want to see everyone succeed.”
- Rajat Bhargava, CEO

Please submit your résumé and a brief explanation about yourself and why you would be a good fit for JumpCloud®. Please note JumpCloud® is not accepting third-party resumes at this time.

JumpCloud® is an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.

Scam Notice: Please be aware that there are individuals and organizations that may attempt to scam job seekers by offering fraudulent employment opportunities in the name of JumpCloud. These scams may involve fake job postings, unsolicited emails, or messages claiming to be from our recruiters or hiring managers. Please note that JumpCloud will never ask for any personal account information, such as credit card details or bank account numbers, during the recruitment process. Additionally, JumpCloud will never send you a check for any equipment prior to employment. All communication related to interviews and offers from our recruiters and hiring managers will come from official company email addresses (@jumpcloud.com) and will never ask for any payment, fee, or purchases to be made by the job seeker. If you are contacted by anyone claiming to represent JumpCloud and you are unsure of their authenticity, please do not provide any personal/financial information and contact us immediately at recruiting@jumpcloud.com with the subject line “Scam Notice”. #BI-Remote
Posted 23 hours ago
5.0 years
0 Lacs
India
Remote
We are looking for a hands-on Senior AI and Data Scientist with deep experience in time series analysis, signal processing, anomaly detection, or IoT. This is a 100% remote, Monday-to-Friday opportunity.

Responsibilities
• Develop the ecosystem of integrated AI models across our IoT devices
• Own the technical strategic vision for data science within connected products
• Collaborate with business stakeholders and cross-functional teams across the organization to identify and prioritize high-impact use cases for AI/ML applications
• Drive rapid prototyping efforts, focusing on learning from experiments and iterating on solutions
• Demonstrate innovative and creative thinking to develop novel AI/ML approaches and solutions
• Partner with product teams to embed AI/ML model solutions into physical and digital products
• Conduct in-depth data analysis and feature engineering to extract valuable insights and improve model performance
• Identify key insights from large data sources; interpret and communicate insights from data and analysis to product, service, and business managers
• Continuously monitor and optimize AI/ML models to ensure their accuracy, reliability, and performance in production environments
• Stay up to date with the latest advancements in AI/ML technologies and apply them to drive innovation and maintain a competitive edge
• Lead projects with both hands-on coding/analysis and thought leadership, with a minimum-viable-product and minimum-lovable-product mindset, leveraging the best technologies and methodologies appropriate for the business need and maturity
• Proactively identify and communicate the challenges, opportunities, risks, and threats associated with project work to ensure the timely completion of the entire product

Qualifications
• 5+ years of hands-on experience developing and deploying AI/ML models in a production environment
• 3+ years of experience as a technical lead driving the development of multiple related products and the technical strategic vision in collaboration with more junior data scientists
• Specialist in signal processing, machine learning on IoT devices, and/or anomaly detection within sensor networks
• Proven track record of delivering high-impact AI/ML projects across multiple projects and organizations
• Mastery of machine learning algorithms, deep learning frameworks, time-series modeling, and statistical modeling techniques
• Fluent in Python and SQL
• Seasoned in data science Python skills and modules: scikit-learn, pandas, TensorFlow, PyTorch, NumPy, etc.
• Experience with end-to-end feature development (owning feature definition, roadmap development, and experimentation)
• Experience distilling informal customer requirements into problem definitions, dealing with ambiguity and competing objectives
• Experience with big data technologies (e.g., Snowflake, Google BigQuery) and cloud computing platforms (e.g., AWS, GCP, Azure)
• Excellent problem-solving skills and the ability to think creatively to develop innovative AI/ML solutions
• Strong communication and collaboration skills to work effectively with cross-functional teams and stakeholders
• Master’s degree in Computer Science, Statistics, Mathematics, Engineering, Fluid Dynamics, or a related quantitative field (we will consider exceptional candidates without advanced degrees)
Posted 1 day ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
All roles at JumpCloud are Remote unless otherwise specified in the Job Description.

About JumpCloud
JumpCloud® delivers a unified open directory platform that makes it easy to securely manage identities, devices, and access across your organization. With JumpCloud®, IT teams and MSPs enable users to work securely from anywhere and manage their Windows, Apple, Linux, and Android devices from a single platform. JumpCloud® is IT Simplified.

About the Role:
We are seeking a Staff Product Manager with deep expertise in AI, Data Science, and Cybersecurity to lead the development of a transformative Security Data Fabric and Exposure Management Platform (ISPM, ITDR, etc.). In a world of siloed security tools and scattered data, your mission is to turn data chaos into clarity, helping organizations see, understand, and act on their cyber risk with precision and speed.

The JumpCloud access and authentication team is changing the way IT admins and users authenticate to their JumpCloud-managed IT resources for a frictionless experience to get work done. The days of the traditional corporate security perimeter are over. Remote work – and the domainless enterprise – are here to stay. As such, we believe securing all endpoints is at the crux of establishing trust, granting resource access, and otherwise managing a modern workforce. Our Cloud Directory Platform supports diverse IT endpoints, from devices and SSO applications to infrastructure servers, RADIUS, and LDAP, making it easy for IT admins to manage the required authentication, from MFA to zero trust, using conditional access based on Identity Trust, Network Trust, Geolocation Trust, and Device Trust built on X.509 certificates. If you want to build on this success and drive the future of authentication at JumpCloud, come join us.

You’ll be at the forefront of designing a next-generation data platform that:
• Creates a Security Data Fabric to unify signals from across the attack surface
• Uses AI to resolve entities and uncover hidden relationships
• Drives real-time Exposure Management to reduce risk faster than adversaries can act

You will be responsible for:
• Define and drive the product strategy for the Security Data Fabric and Exposure Management platform (ISPM, ITDR, etc.), aligned with customer needs and business goals
• Engage with CISOs, security analysts, and risk leaders to deeply understand pain points in exposure management and cyber risk visibility
• Translate strategic objectives into clear, actionable product requirements that leverage AI/ML and data science to unify and contextualize security signals
• Collaborate closely with engineering, data science, UX, sales, and security research to deliver scalable and performant solutions
• Champion a data-centric mindset, shaping features like entity resolution, risk scoring, and automated remediation workflows powered by advanced analytics

You Have:
• 10+ years of experience in product management, with at least 5 years in cybersecurity or enterprise AI/data products
• Deep understanding of AI/ML, data science, entity resolution, and knowledge graphs in practical applications
• Experience building or integrating security analytics, threat detection, vulnerability management, or SIEM/XDR solutions
• Ability to untangle complex, interconnected authentication systems and simplify them to drive the cross-functional team in the same direction
• Proven ability to define and deliver complex B2B platforms, especially in data-heavy, high-stakes environments
• Excellent communication and storytelling skills to align cross-functional teams and influence stakeholders

Nice to have:
• Experience with graph databases, ontologies, or large-scale entity disambiguation
• Familiarity with security standards (MITRE ATT&CK, CVSS, etc.) and frameworks (NIST CSF, ISO 27001, etc.)
• Prior experience launching products in cloud-native or hybrid enterprise environments
• Degree in Computer Science, Information Systems, or Engineering; MBA is a plus

Where you’ll be working/Location:
JumpCloud is committed to being Remote First, meaning that you are able to work remotely within the country noted in the Job Description. This role is remote in India. You must be located in and authorized to work in India to be considered for this role.

Language:
JumpCloud® has teams in 15+ countries around the world and conducts its internal business in English. The interview and any additional screening process will take place primarily in English. To be considered for a role at JumpCloud®, you will be required to speak and write English fluently. Any additional language requirements will be included in the details of the job description.

Why JumpCloud?
If you thrive working in a fast, SaaS-based environment and you are passionate about solving challenging technical problems, we look forward to hearing from you! JumpCloud® is an incredible place to share and grow your expertise! You’ll work with amazing talent across each department who are passionate about our mission. We’re out-of-the-box thinkers, so your unique ideas and approaches for conceiving a product and/or feature will be welcome. You’ll have a voice in the organization as you work with a seasoned executive team, a supportive board, and a proven market that our customers are excited about.

One of JumpCloud®'s three core values is to “Build Connections.” To us that means creating “human connection with each other regardless of our backgrounds, orientations, geographies, religions, languages, gender, race, etc. We care deeply about the people that we work with and want to see everyone succeed.”
- Rajat Bhargava, CEO

Please submit your résumé and a brief explanation about yourself and why you would be a good fit for JumpCloud®. Please note JumpCloud® is not accepting third-party resumes at this time.

JumpCloud® is an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.

Scam Notice: Please be aware that there are individuals and organizations that may attempt to scam job seekers by offering fraudulent employment opportunities in the name of JumpCloud. These scams may involve fake job postings, unsolicited emails, or messages claiming to be from our recruiters or hiring managers. Please note that JumpCloud will never ask for any personal account information, such as credit card details or bank account numbers, during the recruitment process. Additionally, JumpCloud will never send you a check for any equipment prior to employment. All communication related to interviews and offers from our recruiters and hiring managers will come from official company email addresses (@jumpcloud.com) and will never ask for any payment, fee, or purchases to be made by the job seeker. If you are contacted by anyone claiming to represent JumpCloud and you are unsure of their authenticity, please do not provide any personal/financial information and contact us immediately at recruiting@jumpcloud.com with the subject line “Scam Notice”. #BI-Remote
Posted 1 day ago
10.0 years
0 Lacs
Mumbai Metropolitan Region
Remote
All roles at JumpCloud are Remote unless otherwise specified in the Job Description.

About JumpCloud: JumpCloud® delivers a unified open directory platform that makes it easy to securely manage identities, devices, and access across your organization. With JumpCloud®, IT teams and MSPs enable users to work securely from anywhere and manage their Windows, Apple, Linux, and Android devices from a single platform. JumpCloud® is IT Simplified.

About the Role: We are seeking a Staff Product Manager with deep expertise in AI, Data Science, and Cybersecurity to lead the development of a transformative Security Data Fabric and Exposure Management Platform (ISPM, ITDR, etc.). In a world of siloed security tools and scattered data, your mission is to turn data chaos into clarity, helping organizations see, understand, and act on their cyber risk with precision and speed. The JumpCloud access and authentication team is changing the way IT admins and users authenticate to their JumpCloud-managed IT resources, delivering a frictionless experience for getting work done. The days of the traditional corporate security perimeter are over. Remote work, and the domainless enterprise, are here to stay. As such, we believe securing all endpoints is at the crux of establishing trust, granting resource access, and otherwise managing a modern workforce. Our Cloud Directory Platform supports diverse IT endpoints, from devices and SSO applications to infrastructure servers, RADIUS, and LDAP, making it easy for IT admins to manage authentication, from MFA to zero trust, using conditional access based on Identity Trust, Network Trust, Geolocation Trust, and Device Trust built on X.509 certificates. If you want to build on this success and drive the future of authentication at JumpCloud, come join us. 
You’ll be at the forefront of designing a next-generation data platform that:
- Creates a Security Data Fabric to unify signals from across the attack surface
- Uses AI to resolve entities and uncover hidden relationships
- Drives real-time Exposure Management to reduce risk faster than adversaries can act

You will be responsible for:
- Defining and driving the product strategy for the Security Data Fabric and Exposure Management platform (ISPM, ITDR, etc.), aligned with customer needs and business goals
- Engaging with CISOs, security analysts, and risk leaders to deeply understand pain points in exposure management and cyber risk visibility
- Translating strategic objectives into clear, actionable product requirements that leverage AI/ML and data science to unify and contextualize security signals
- Collaborating closely with engineering, data science, UX, sales, and security research to deliver scalable and performant solutions
- Championing a data-centric mindset, shaping features like entity resolution, risk scoring, and automated remediation workflows powered by advanced analytics

You Have:
- 10+ years of experience in product management, with at least 5 years in cybersecurity or enterprise AI/data products
- Deep understanding of AI/ML, data science, entity resolution, and knowledge graphs in practical applications
- Experience building or integrating security analytics, threat detection, vulnerability management, or SIEM/XDR solutions
- Ability to untangle complex, interconnected authentication problems and simplify them so that cross-functional teams pull in the same direction
- Proven ability to define and deliver complex B2B platforms, especially in data-heavy, high-stakes environments
- Excellent communication and storytelling skills to align cross-functional teams and influence stakeholders

Nice to have: Experience with graph 
databases, ontologies, or large-scale entity disambiguation
- Familiarity with security standards (MITRE ATT&CK, CVSS, etc.) and frameworks (NIST CSF, ISO 27001, etc.)
- Prior experience launching products in cloud-native or hybrid enterprise environments
- Degree in Computer Science, Information Systems, or Engineering; an MBA is a plus

Where you’ll be working/Location: JumpCloud is committed to being Remote First, meaning that you are able to work remotely within the country noted in the Job Description. This role is remote in the country of India. You must be located in and authorized to work in India to be considered for this role.

Language: JumpCloud® has teams in 15+ countries around the world and conducts our internal business in English. The interview and any additional screening process will take place primarily in English. To be considered for a role at JumpCloud®, you will be required to speak and write in English fluently. Any additional language requirements will be included in the details of the job description.

Why JumpCloud? If you thrive working in a fast, SaaS-based environment and you are passionate about solving challenging technical problems, we look forward to hearing from you! JumpCloud® is an incredible place to share and grow your expertise! You’ll work with amazing talent across each department who are passionate about our mission. We’re out-of-the-box thinkers, so your unique ideas and approaches for conceiving a product and/or feature will be welcome. You’ll have a voice in the organization as you work with a seasoned executive team, a supportive board, and in a proven market that our customers are excited about. One of JumpCloud®'s three core values is to “Build Connections.” To us that means creating "human connection with each other regardless of our backgrounds, orientations, geographies, religions, languages, gender, race, etc. We care deeply about the people that we work with and want to see everyone succeed." 
- Rajat Bhargava, CEO

Please submit your résumé and a brief explanation about yourself and why you would be a good fit for JumpCloud®. Please note JumpCloud® is not accepting third-party résumés at this time. JumpCloud® is an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.

Scam Notice: Please be aware that there are individuals and organizations that may attempt to scam job seekers by offering fraudulent employment opportunities in the name of JumpCloud. These scams may involve fake job postings, unsolicited emails, or messages claiming to be from our recruiters or hiring managers. Please note that JumpCloud will never ask for any personal account information, such as credit card details or bank account numbers, during the recruitment process. Additionally, JumpCloud will never send you a check for any equipment prior to employment. All communication related to interviews and offers from our recruiters and hiring managers will come from official company email addresses (@jumpcloud.com) and will never include a request for any payment, fee, or purchase by the job seeker. If you are contacted by anyone claiming to represent JumpCloud and you are unsure of their authenticity, please do not provide any personal/financial information and contact us immediately at recruiting@jumpcloud.com with the subject line "Scam Notice" #BI-Remote
Posted 1 day ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Join us as a Senior DevOps Engineer at Barclays, where you'll spearhead the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionise our digital offerings, ensuring unparalleled customer experiences. As part of a team of developers, you will deliver the technology stack, using strong analytical and problem-solving skills to understand business requirements and deliver quality solutions.

To be successful as a Senior DevOps Engineer you should have experience with:

Basic/Essential Qualifications
- Experience working with containers, Kubernetes, and related technologies on cloud platform(s), preferably AWS
- Experience setting up cloud infrastructure using CloudFormation
- Experience with DevOps tooling such as Jenkins, Bitbucket, Nexus, GitLab, Jira, etc.
- Experience working with and configuring a wide range of AWS services such as API Gateway, Lambda, ECS, SageMaker, Bedrock, EC2, RDS, etc.
- Experience with virtual server hosting (EC2) and container management (Kubernetes, ECS, or EKS), as well as Windows and Linux operating systems
- Networking experience, with awareness of cloud network patterns such as VPCs, network interconnects, subnets, peering, firewalls, etc.

Some Other Highly Valued Skills Include
- Strong programming experience in Python
- Experience working with ML libraries, e.g., scikit-learn, TensorFlow, PyTorch
- Proficiency with Jenkins, Bitbucket/GitLab, and Git workflows
- Exposure to working within a controlled environment such as banking and financial services
- Experience with Docker and at least one container orchestration platform: Amazon ECS/EKS or Kubernetes
- Relevant AWS certification(s)

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based out of Pune. 
Purpose of the role: To build and maintain the systems that collect, store, process, and analyse data, such as data pipelines, data warehouses, and data lakes, to ensure that all data is accurate, accessible, and secure.

Accountabilities
- Building and maintaining data architecture pipelines that enable the transfer and processing of durable, complete, and consistent data.
- Designing and implementing data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures.
- Developing processing and analysis algorithms fit for the intended data complexity and volumes.
- Collaborating with data scientists to build and deploy machine learning models.

Assistant Vice President Expectations: To advise and influence decision making, contribute to policy development, and take responsibility for operational effectiveness. Collaborate closely with other functions/business divisions. Lead a team performing complex tasks, using well-developed professional knowledge and skills to deliver work that impacts the whole business function. Set objectives and coach employees in pursuit of those objectives, appraise performance relative to objectives, and determine reward outcomes. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. OR for an individual contributor, they will lead collaborative assignments and guide team members through structured assignments, identifying the need to include other areas of specialisation to complete assignments. 
They will identify new directions for assignments and/or projects, identifying a combination of cross-functional methodologies or practices to meet required outcomes. Consult on complex issues, providing advice to People Leaders to support the resolution of escalated issues. Identify ways to mitigate risk and develop new policies/procedures in support of the control and governance agenda. Take ownership for managing risk and strengthening controls in relation to the work done. Perform work that is closely related to that of other areas, which requires understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function. Collaborate with other areas of work, for business-aligned support areas, to keep up to speed with business activity and the business strategy. Engage in complex analysis of data from multiple internal and external sources of information, such as procedures and practices (in other areas, teams, companies, etc.), to solve problems creatively and effectively. Communicate complex information. 'Complex' information could include sensitive information or information that is difficult to communicate because of its content or its audience. Influence or convince stakeholders to achieve outcomes. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
Posted 1 day ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Description: We invite you to be part of a high-impact team that is shaping the future of pricing at Amazon. The Digital Pricing team is at the forefront of leveraging AI and machine learning to compute millions of prices across our digital products. We are looking for a Software Development Engineer to help design and build the next generation of our AI-powered pricing systems. As an SDE, you will develop next-generation pricing systems that process millions of prices daily. Our platform combines pricing strategies with advanced AI models, including Large Language Models (LLMs), GenAI, and custom neural networks, to make real-time pricing decisions that directly impact our business and customers. Joining the Digital Pricing team means taking on a role where technical expertise meets real-world impact. From the outset, you’ll work with advanced AI technologies to develop solutions that power pricing across Amazon’s extensive digital products. This team is not only building innovative pricing systems but also redefining how millions of customers experience Amazon’s digital products. The technical challenges you'll tackle are unique. You'll work with advanced AWS infrastructure, design distributed systems at massive scale, and implement AI solutions that directly impact Amazon's digital business. Whether you're optimising model performance or developing new pricing systems, you'll have the resources and support to succeed. By joining Digital Pricing, you're not just changing teams – you're accelerating your career while solving complex challenges that matter. We're committed to helping you become the best engineer you can be, all while working on systems that shape Amazon's future.

A day in the life: We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve. 
Your growth matters to us. You will have a structured path to achieve your career aspirations, backed by dedicated mentorship from senior technical leaders. You'll have access to specialised AI/ML training programs, conference opportunities, and innovation time to explore new technologies. Our engineers regularly present their innovations and work to senior leadership. Our team's culture emphasises both individual growth and collective success.

Basic Qualifications
- 3+ years of non-internship professional software development experience
- 2+ years of non-internship experience in design or architecture (design patterns, reliability, and scaling) of new and existing systems
- Experience programming in at least one software programming language
- Experience with cloud computing platforms (preferably AWS)
- Solid understanding of data structures, algorithms, and software design principles

Preferred Qualifications
- Understanding of AI model optimisation techniques
- Experience with large language models (LLMs)
- Familiarity with machine learning frameworks (PyTorch, TensorFlow)
- Experience with AI/ML deployment platforms (Amazon Q)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI MAA 15 SEZ Job ID: A3020355
Posted 1 day ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Description We invite you to be part of a high-impact team that is shaping the future of pricing at Amazon. The Digital Pricing team is at the forefront of leveraging AI and machine learning to compute millions of prices across our digital products. We are looking for a Software Development Engineer to help design and build the next generation of our AI-powered pricing systems. As a SDE, You'll will develop next generation pricing systems that process millions of prices daily. Our platform combines pricing strategies with advanced AI models, including Large Language Models (LLMs), GenAI and custom neural networks, to make real-time pricing decisions that directly impact our business and customers. Joining the Digital Pricing team means taking on a role where technical expertise meets real-world impact. From the outset, you’ll work with advanced AI technologies to develop solutions that power pricing across Amazon’s extensive digital products. This team is not only building innovative pricing systems—but also redefining how millions of customers experience Amazon’s digital products. The technical challenges you'll tackle are unique. You'll work with advanced AWS infrastructure, design distributed systems at massive scale, and implement AI solutions that directly impact Amazon's digital business. Whether you're optimising model performance, or developing new pricing systems, you'll have the resources and support to succeed. By joining Digital Pricing, you're not just changing teams – you're accelerating your career while solving complex challenges that matter. We're committed to helping you become the best engineer you can be, all while working on systems that shape Amazon's future. A day in the life We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve. 
Your growth matters to us. You will have a structured path to achieve your career aspirations, backed by dedicated mentorship from senior technical leaders. You'll have access to specialised AI/ML training programs, conference opportunities, and innovation time to explore new technologies. Our engineers regularly present their innovations and work to senior leadership. Our team's culture emphasises both individual growth and collective success. Basic Qualifications 3+ years of non-internship professional software development experience 2+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience Experience programming with at least one software programming language Experience with cloud computing platforms (preferably AWS) Solid understanding of data structures, algorithms, and software design principles Preferred Qualifications Understanding of AI model optimisation techniques Experience with large language models (LLMs) Familiarity with machine learning frameworks (PyTorch, TensorFlow) Experience with AI/ML deployment platforms (Amazon Q) Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI MAA 15 SEZ Job ID: A3020333
Posted 1 day ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Join us as Assistant Vice President – Data Analyst for the Financial Crime Operations Data Domain to implement data quality processes and procedures, ensuring that data is reliable and trustworthy, and then extract actionable insights from it to help the organisation improve its operations and optimise resources. Accountabilities Investigation and analysis of data issues related to quality, lineage, controls, and authoritative source identification. Execution of data cleansing and transformation tasks to prepare data for analysis. Designing and building data pipelines to automate data movement and processing. Development and application of advanced analytical techniques, including machine learning and AI, to solve complex business problems. Documentation of data quality findings and recommendations for improvement. To Be Successful In This Role, You Should Have: Experience in Data Management and Data Governance, including records management. Ability to review business processes through a data lens and identify critical upstream and downstream components, especially in a financial services organisation – understanding of models, EUDAs, etc. Strong understanding of Data Governance, Data Quality & Controls, Data Lineage and Reference Data/Metadata Management, including relevant policies and frameworks. A clear understanding of the elements of an effective control environment, enterprise risk management framework, operational risk or other principal risk frameworks. Experience of managing stakeholders directly & indirectly and across geographies & cultures. Strong understanding and practical exposure to the application of BCBS 239 principles and related frameworks. Commercially astute; demonstrates a consultative, yet pragmatic approach with integrity to solving issues, focusing on areas of significance and value to the business. 
A strong understanding of the Risk and Control environment/control frameworks/op risk, including understanding of second and third line functions and impact across people, process and technology. Strategic Leadership: Provide strategic direction and leadership for data analysis initiatives, ensuring alignment with organizational and program goals. Functional understanding of financial crime and fraud data domains would be preferred. Data Governance: Oversee data governance policies and procedures to ensure data integrity, security, and compliance with regulatory requirements. Stakeholder Collaboration: Collaborate with cross-functional teams to identify data needs and deliver actionable insights. Advanced Analytics: Utilize advanced analytical techniques and tools to extract meaningful insights from complex data sets and drive data-driven decision-making. Deliver best-in-class insights to enable stakeholders to make informed business decisions and support data quality issue remediation. Perform robust review and QA of key deliverables being sent out by the team to stakeholders. Demonstrate a collaborative communication style, promoting trust and respect with a range of stakeholders including Operational Risk/Chief Controls Office/Chief Data Office/Financial Crime Operations subject matter experts (SMEs), Risk Information Services, and Technology. Some Other Desired Skills Include: Graduate in any discipline Effective communication and presentation skills. Experience in Data Management/Data Governance/Data Quality Controls, Governance, Reporting and Risk Management, preferably in a financial services organisation Experience in Data Analytics and Insights (using latest tools and techniques e.g. 
Python, Tableau, Tableau Prep, Power Apps, Alteryx), analytics on structured and unstructured data Experience with databases and data science/analytics tools and techniques like SQL, AI and ML (on live projects and not just academic projects) Proficient in MS Office – PPT, Excel, Word & Visio Comprehensive understanding of Risk, Governance and Control Frameworks and Processes Location - Noida Purpose of the role To implement data quality processes and procedures, ensuring that data is reliable and trustworthy, and then extract actionable insights from it to help the organisation improve its operations and optimise resources. Accountabilities Investigation and analysis of data issues related to quality, lineage, controls, and authoritative source identification. Execution of data cleansing and transformation tasks to prepare data for analysis. Designing and building data pipelines to automate data movement and processing. Development and application of advanced analytical techniques, including machine learning and AI, to solve complex business problems. Documentation of data quality findings and recommendations for improvement. Assistant Vice President Expectations To advise and influence decision making, contribute to policy development and take responsibility for operational effectiveness. Collaborate closely with other functions/business divisions. Lead a team performing complex tasks, using well-developed professional knowledge and skills to deliver work that impacts the whole business function. Set objectives and coach employees in pursuit of those objectives, appraise performance relative to objectives and determine reward outcomes. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. 
The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. OR, for an individual contributor, they will lead collaborative assignments and guide team members through structured assignments, identifying the need for the inclusion of other areas of specialisation to complete assignments. They will identify new directions for assignments and/or projects, identifying a combination of cross-functional methodologies or practices to meet required outcomes. Consult on complex issues, providing advice to People Leaders to support the resolution of escalated issues. Identify ways to mitigate risk and develop new policies/procedures in support of the control and governance agenda. Take ownership for managing risk and strengthening controls in relation to the work done. Perform work that is closely related to that of other areas, which requires understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function. Collaborate with other areas of work, for business-aligned support areas, to keep up to speed with business activity and the business strategy. Engage in complex analysis of data from multiple internal and external sources of information, such as procedures and practices (in other areas, teams, companies, etc.) to solve problems creatively and effectively. Communicate complex information. 'Complex' information could include sensitive information or information that is difficult to communicate because of its content or its audience. Influence or convince stakeholders to achieve outcomes. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave. 
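The data-quality accountabilities described in this listing (profiling, cleansing, and documenting findings) can be sketched with a minimal, purely illustrative check. The table, column names, and rules below are invented for illustration and are not part of the role or any Barclays system:

```python
import pandas as pd

# Hypothetical customer records with deliberate quality issues.
df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4, 5],
    "country": ["GB", "IN", "IN", None, "US"],
    "balance": [100.0, -50.0, -50.0, 200.0, None],
})

# Basic data-quality findings: completeness, uniqueness, validity.
findings = {
    "null_country": int(df["country"].isna().sum()),
    "null_balance": int(df["balance"].isna().sum()),
    "duplicate_ids": int(df["customer_id"].duplicated().sum()),
    "negative_balance": int((df["balance"] < 0).sum()),
}
print(findings)
```

In practice, checks like these would be codified against agreed data-quality dimensions and the findings documented with remediation recommendations, as the accountabilities above describe.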
Posted 1 day ago
5.0 years
0 Lacs
Kanchipuram, Tamil Nadu, India
On-site
Rockwell Automation is a global technology leader focused on helping the world’s manufacturers be more productive, sustainable, and agile. With more than 28,000 employees who make the world better every day, we know we have something special. Behind our customers - amazing companies that help feed the world, provide life-saving medicine on a global scale, and focus on clean water and green mobility - our people are energized problem solvers that take pride in how the work we do changes the world for the better. We welcome all makers, forward thinkers, and problem solvers who are looking for a place to do their best work. And if that’s you, we would love to have you join us! Job Description Job Summary As a Business Systems Project Lead in the Asia Pacific region, based in Chennai, India, you will join the Global Digital Manufacturing team responsible for digital transformation across our locations worldwide. Using the latest technological solutions such as Artificial Intelligence (AI), Machine Learning (ML), Augmented Reality (AR), Manufacturing Execution Systems (MES), Industrial Internet of Things (IIoT), and Cyber-Physical Systems (CPS), you will lead digital transformation plans for our locations in India. You will report to Software Engineering Systems Management. You will work on-site at our Chennai plant in Kancheepuram, India. Your Responsibilities Development and Implementation: Lead the development and deployment of industrial technology projects, advanced production techniques, and Industry 4.0 solutions in line with the global roadmap. Use expertise in modern technologies to guide these initiatives. Solution Evaluation: Evaluate multiple technological solutions, assessing both technical capabilities and business viability. Deliver recommendations based on comprehensive analyses. Implementation Roadmaps: Develop and execute detailed implementation roadmaps, coordinating with company partners to align deliverables. 
Technical Documentation: Author comprehensive technical documentation, including specifications, process flows, and validation protocols. Monitor project progress through tracking and reporting. System Design and Maintenance: Architect, document, and maintain essential systems for automated manufacturing equipment, ensuring design and operational reliability. MES Integration: Lead the integration of automated manufacturing equipment with Manufacturing Execution Systems (MES). Provide expert guidance on MES system architecture and implementation. Engineering Expertise: Deliver advanced engineering expertise to ensure the operational reliability of Industry 4.0 systems, including IoT, Advanced Analytics, AR, and systems integration, and drive continuous improvement programmes. Knowledge Management: Capture and communicate lessons learned to provide technical support for project rollouts in other plants. Maintain a repository of best practices. Customer Feedback Analysis: Solicit and document Voice of Customer feedback from manufacturing sites to inform continuous improvement efforts. Global Collaboration: Collaborate with software engineers and business systems specialists from other regions (NA/LA/AP) to develop and standardise scalable system functionalities, ensuring global consistency and best practice sharing. Education The Essentials - You Will Have: Bachelor's degree in Computer Engineering, Computer Science, Engineering/Mechatronics, or equivalent. Postgraduate qualification or certification in project management (e.g., PMP, PRINCE2). Professional Experience Minimum of 5 years of experience leading innovation projects, including Industry 4.0 related projects, and 5+ years of experience in industrial manufacturing operations. Experience with MES deployment. Experience with integrating automated manufacturing equipment with MES systems. Experience in project management (minimum 3 years). Experience with data structures, data models, or relational database design (SQL databases). 
Technical Skills Knowledge of big data and data analytics tools. Experience with Industry 4.0 concepts. Familiarity with an Agile environment (e.g., Scrum, Kanban). Knowledge of Lean Manufacturing principles (e.g., 5S, TPM, Autonomous Maintenance, SMED). Other Requirements Ability to travel globally (up to 25%). The Preferred - You Might Also Have Design and maintain systems for automated manufacturing equipment. Integrate automated manufacturing equipment with MES. Provide technical advice on MES and manufacturing data systems. What We Offer Our benefits package includes … Comprehensive mindfulness programmes with a premium membership to Calm Company volunteer and donation matching programme – Your volunteer hours or personal cash donations to an eligible charity can be matched with a charitable donation. Employee Assistance Program Personalized wellbeing programs through our OnTrack program On-demand digital course library for professional development and other local benefits! At Rockwell Automation we are dedicated to building a diverse, inclusive and authentic workplace, so if you're excited about this role but your experience doesn't align perfectly with every qualification in the job description, we encourage you to apply anyway. You may be just the right person for this or other roles.
Posted 1 day ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: Senior Data Scientist Responsibilities: Partner with internal and external stakeholders and contribute to building formal models with pipelines Maintain a bird's-eye view of the whole system and provide innovative solutions Work with UI/UX teams to provide an end-to-end solution Qualification: BE, BTech, MCA, MSc, MTech, MBA An AI or ML certificate from institutes such as IISc, IIT, etc. is a plus Technical Skills: Languages – Python Database – MSSQL, Oracle Cloud – Azure Version control – GitHub Experience Required: 5+ years in the industry Experience in building AI predictive and prescriptive analytics Experience with transformer models such as BERT and RoBERTa, as well as LSTM/RNN architectures, is mandatory Familiarity with various generative AI models, such as generative adversarial networks (GANs) and LLaMA, is a plus Experience in integrating models into .NET or Java applications is a plus
Posted 1 day ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description Do you want to be a part of changing healthcare? Oracle is excited to be using our resources, knowledge, and expertise—as well as our successes in other industries—and applying them to healthcare to make a meaningful impact. As people, we all participate in healthcare, it’s deeply personal, and we put the human at the center of each of our decisions. Improving healthcare for all requires bringing unique perspectives and expertise together to holistically tackle the biggest problems in global health including physician burnout, patient access to data, and barriers to quality care. Oracle Health Applications & Infrastructure (OHAI) is developing patient- and provider-centric solutions rapidly and securely. We leverage the power of Oracle Cloud Infrastructure (OCI) to deliver robust, scalable solutions across patient, provider, payer, public health, and life sciences sectors. At OHAI, you’ll work with experts across industries and have access to cutting-edge technologies. We apply artificial intelligence, machine learning, large language models, learning networks, and data intelligence in an applied, scalable, and embedded way. Join us in creating people-centric healthcare experiences. About the Team: As part of the Oracle Health Foundations Organization, you’ll join a high-impact team focused on using machine learning and intelligent automation to improve the performance and reliability of Oracle Health's cloud platforms. We're building systems that detect anomalies, predict incidents, and enable proactive intervention at scale. As a Machine Learning Engineer, you’ll contribute to the design and delivery of production-grade ML models and services that enhance system observability, incident prediction, and product resilience. You’ll work closely with software engineers, site reliability engineers, and product teams to develop, deploy, and improve ML-based solutions in a cloud-native environment. 
This role offers a strong growth path toward deeper technical ownership and leadership. Responsibilities: Build and deploy machine learning models that detect anomalies, predict incidents, and support automated reliability features. Develop production-ready software and services to integrate ML models with observability and operational pipelines. Contribute to the full ML lifecycle—from data ingestion and model training to validation, deployment, and monitoring—using modern MLOps tools and practices. Collaborate with other engineering and product teams to align ML solutions with business and system requirements. Analyze large-scale telemetry data (logs, metrics, traces) to identify patterns, root causes, and opportunities for improvement. Help maintain and evolve our ML infrastructure, data pipelines, and observability integrations. Stay current with advancements in applied machine learning, particularly time series modeling, anomaly detection, and reliability-focused ML. Requirements: 5+ years of industry experience in software engineering or applied machine learning, with experience deploying ML models in production. Proficiency in Python and experience with ML frameworks such as TensorFlow, PyTorch, or scikit-learn. Solid understanding of the end-to-end ML lifecycle and experience with MLOps practices (model packaging, deployment, monitoring, etc.). Experience working with observability data (e.g., logs, metrics, traces) and time series data. Experience developing APIs, backend services, and working in cloud-native environments (OCI, AWS, GCP, or Azure). Strong knowledge of SQL and exposure to distributed data processing tools (e.g., Spark, BigQuery, Kafka, Flink). Comfortable working on cross-functional teams and contributing to technical discussions and code reviews. Bachelor’s, Master’s, or PhD in Computer Science, Data Science, or a related technical field is preferred. 
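The anomaly-detection responsibilities described above can be illustrated with a small, hedged sketch. This is not the team's actual stack; the synthetic latency series, the feature choices, and the IsolationForest model are assumptions made purely for illustration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic telemetry: per-minute latency samples with a few injected spikes.
rng = np.random.default_rng(42)
latency_ms = rng.normal(loc=120.0, scale=10.0, size=500)
latency_ms[[100, 250, 400]] = [900.0, 850.0, 1000.0]  # simulated incidents

# Simple feature matrix: raw value plus deviation from a rolling local mean.
window = 10
rolling_mean = np.convolve(latency_ms, np.ones(window) / window, mode="same")
features = np.column_stack([latency_ms, latency_ms - rolling_mean])

# IsolationForest labels easy-to-isolate points as anomalies (-1).
model = IsolationForest(contamination=0.02, random_state=0)
labels = model.fit_predict(features)
anomalies = np.flatnonzero(labels == -1)
print("anomalous indices:", anomalies)
```

A production version of this idea would run over streamed metrics, feed flagged indices into incident-prediction pipelines, and be monitored as part of the MLOps lifecycle the listing describes.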
Qualifications Career Level - IC3 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 1 day ago
10.0 - 15.0 years
0 Lacs
India
On-site
Technical Sales – Protection Engineers, full-time position | India We are a rapidly growing international software company headquartered in Munich, Germany. We provide advanced, specialized, valuable solutions and support to customer organizations worldwide to transform data into real-world intelligence for critically important business and technical decisions. We specialize in Enterprise Asset Management (EAM), Asset Performance Management (APM), Protection & IED database management systems, Network Model Management (NMM), Outage Management System (OMS) software, and analytics. IPS®SYSTEMS offers outstanding and innovative solutions for the Global Energy Industry and other sectors. We work with many other companies and corporations, such as Megger, CGI, Atos, Accenture, Wipro, etc. Do you want to contribute to the digital future? The IPS Global Group is looking for a Technical Sales Expert – Protection Engineers: Your Responsibilities Provide technical sales support of IPS software solutions to the Sales Team in the domain of “Protection solutions” in Power Utilities Act as a trusted consultant to the customer in understanding their needs and proposing a solution along with the sales team Maintain a relationship with the customer post-sales for customer satisfaction Enter & maintain accurate issue descriptions and detailed updates within our CRM system Support solution and product configuration, implementation, testing, and documentation Assist and support in the planning and implementation of our solution at the customer site Work closely with the product development teams, outside sales, and product management Team with Sales to ensure growth in the Indian market, including participation in building a strategy for growth Your Energy Degree in Electrical 
engineering or any related field Experience of about 10-15 years working in a Power Transmission/Distribution utility or in a company that provides software and services in the domain of “Protection/Protection studies” Passion for Digitalization, Energy Transition, and trends that drive the rapidly changing power utility industry Knowledge of electrical power assets (transformers, motors, generators, switchgear, cables, etc.) Knowledge of Grid/Grid technologies; Power System studies; protocols (IEC, etc.); software such as PSS/E, CAPE, ETAP, DIgSILENT, etc. Good communication and active listening skills, with the ability to communicate at all levels within an organization, and demonstrated analytical and problem-solving skills High attention to detail and accuracy with a strong focus on the quality of work What we offer An interesting and varied position in a world-leading, internationally active company where you will have the opportunity to be involved in projects worldwide Working with the latest technology like ML & AI Culture of trust, teamwork, empowerment, and constructive feedback Possibility for further career advancement in a successful growing company Your development – our commitment: career opportunities based on each profile and supporting each employee to achieve their professional and career goals
Posted 1 day ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description Do you want to be a part of changing healthcare? Oracle is excited to be using our resources, knowledge, and expertise—as well as our successes in other industries—and applying them to healthcare to make a meaningful impact. As people, we all participate in healthcare, it’s deeply personal, and we put the human at the center of each of our decisions. Improving healthcare for all requires bringing unique perspectives and expertise together to holistically tackle the biggest problems in global health including physician burnout, patient access to data, and barriers to quality care. Oracle Health Applications & Infrastructure (OHAI) is developing patient- and provider-centric solutions rapidly and securely. We leverage the power of Oracle Cloud Infrastructure (OCI) to deliver robust, scalable solutions across patient, provider, payer, public health, and life sciences sectors. At OHAI, you’ll work with experts across industries and have access to cutting-edge technologies. We apply artificial intelligence, machine learning, large language models, learning networks, and data intelligence in an applied, scalable, and embedded way. Join us in creating people-centric healthcare experiences. About the Team: As part of the Oracle Health Foundations Organization, you’ll join a high-impact team focused on using machine learning and intelligent automation to improve the performance and reliability of Oracle Health's cloud platforms. We're building systems that detect anomalies, predict incidents, and enable proactive intervention at scale. As a Machine Learning Engineer, you’ll contribute to the design and delivery of production-grade ML models and services that enhance system observability, incident prediction, and product resilience. You’ll work closely with software engineers, site reliability engineers, and product teams to develop, deploy, and improve ML-based solutions in a cloud-native environment. 
This role offers a strong growth path toward deeper technical ownership and leadership. Responsibilities: Build and deploy machine learning models that detect anomalies, predict incidents, and support automated reliability features. Develop production-ready software and services to integrate ML models with observability and operational pipelines. Contribute to the full ML lifecycle—from data ingestion and model training to validation, deployment, and monitoring—using modern MLOps tools and practices. Collaborate with other engineering and product teams to align ML solutions with business and system requirements. Analyze large-scale telemetry data (logs, metrics, traces) to identify patterns, root causes, and opportunities for improvement. Help maintain and evolve our ML infrastructure, data pipelines, and observability integrations. Stay current with advancements in applied machine learning, particularly time series modeling, anomaly detection, and reliability-focused ML. Requirements: 5+ years of industry experience in software engineering or applied machine learning, with experience deploying ML models in production. Proficiency in Python and experience with ML frameworks such as TensorFlow, PyTorch, or scikit-learn. Solid understanding of the end-to-end ML lifecycle and experience with MLOps practices (model packaging, deployment, monitoring, etc.). Experience working with observability data (e.g., logs, metrics, traces) and time series data. Experience developing APIs, backend services, and working in cloud-native environments (OCI, AWS, GCP, or Azure). Strong knowledge of SQL and exposure to distributed data processing tools (e.g., Spark, BigQuery, Kafka, Flink). Comfortable working on cross-functional teams and contributing to technical discussions and code reviews. Bachelor’s, Master’s, or PhD in Computer Science, Data Science, or a related technical field is preferred. 
Qualifications Career Level - IC3 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 1 day ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Job Description Do you want to be a part of changing healthcare? Oracle is excited to be using our resources, knowledge, and expertise—as well as our successes in other industries—and applying them to healthcare to make a meaningful impact. As people, we all participate in healthcare, it’s deeply personal, and we put the human at the center of each of our decisions. Improving healthcare for all requires bringing unique perspectives and expertise together to holistically tackle the biggest problems in global health including physician burnout, patient access to data, and barriers to quality care. Oracle Health Applications & Infrastructure (OHAI) is developing patient- and provider-centric solutions rapidly and securely. We leverage the power of Oracle Cloud Infrastructure (OCI) to deliver robust, scalable solutions across patient, provider, payer, public health, and life sciences sectors. At OHAI, you’ll work with experts across industries and have access to cutting-edge technologies. We apply artificial intelligence, machine learning, large language models, learning networks, and data intelligence in an applied, scalable, and embedded way. Join us in creating people-centric healthcare experiences. About the Team: As part of the Oracle Health Foundations Organization, you’ll join a high-impact team focused on using machine learning and intelligent automation to improve the performance and reliability of Oracle Health's cloud platforms. We're building systems that detect anomalies, predict incidents, and enable proactive intervention at scale. As a Machine Learning Engineer, you’ll contribute to the design and delivery of production-grade ML models and services that enhance system observability, incident prediction, and product resilience. You’ll work closely with software engineers, site reliability engineers, and product teams to develop, deploy, and improve ML-based solutions in a cloud-native environment. 
This role offers a strong growth path toward deeper technical ownership and leadership. Responsibilities: Build and deploy machine learning models that detect anomalies, predict incidents, and support automated reliability features. Develop production-ready software and services to integrate ML models with observability and operational pipelines. Contribute to the full ML lifecycle—from data ingestion and model training to validation, deployment, and monitoring—using modern MLOps tools and practices. Collaborate with other engineering and product teams to align ML solutions with business and system requirements. Analyze large-scale telemetry data (logs, metrics, traces) to identify patterns, root causes, and opportunities for improvement. Help maintain and evolve our ML infrastructure, data pipelines, and observability integrations. Stay current with advancements in applied machine learning, particularly time series modeling, anomaly detection, and reliability-focused ML. Requirements: 5+ years of industry experience in software engineering or applied machine learning, with experience deploying ML models in production. Proficiency in Python and experience with ML frameworks such as TensorFlow, PyTorch, or scikit-learn. Solid understanding of the end-to-end ML lifecycle and experience with MLOps practices (model packaging, deployment, monitoring, etc.). Experience working with observability data (e.g., logs, metrics, traces) and time series data. Experience developing APIs, backend services, and working in cloud-native environments (OCI, AWS, GCP, or Azure). Strong knowledge of SQL and exposure to distributed data processing tools (e.g., Spark, BigQuery, Kafka, Flink). Comfortable working on cross-functional teams and contributing to technical discussions and code reviews. Bachelor’s, Master’s, or PhD in Computer Science, Data Science, or a related technical field is preferred. 
Qualifications Career Level - IC3 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 1 day ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Description (Identical to the Oracle Health Machine Learning Engineer posting above: same description, responsibilities, requirements, and About Us text.)
Posted 1 day ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description (Identical to the Oracle Health Machine Learning Engineer posting above: same description, responsibilities, requirements, and About Us text.)
Posted 1 day ago
5.0 - 8.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
Remote
What makes Techjays an inspiring place to work At Techjays, we are driving the future of artificial intelligence with a bold mission to empower businesses worldwide by helping them build AI solutions that transform industries. As an established leader in the AI space, we combine deep expertise with a collaborative, agile approach to deliver impactful technology that drives meaningful change. Our global team consists of professionals who have honed their skills at leading companies such as Google, Akamai, NetApp, ADP, Cognizant Consulting, and Capgemini. With engineering teams across the globe, we deliver tailored AI software and services to clients ranging from startups to large-scale enterprises. Be part of a company that’s pushing the boundaries of digital transformation. At Techjays, you’ll work on exciting projects that redefine industries, innovate with the latest technologies, and contribute to solutions that make a real-world impact. Join us on our journey to shape the future with AI. We are looking for a Lead Python Developer with expertise in designing, developing, and implementing secure backend services using Python and Django, and the ability to lead our dynamic, fast-paced team. You will work closely with cross-functional teams to deliver scalable and reliable solutions. Years of Experience : 5 - 8 years Location : Remote Key Skills: Backend Development (Expert): Python, Django/Flask, RESTful APIs, Websockets Cloud Technologies (Proficient): AWS (EC2, S3, Lambda), GCP (Compute Engine, Cloud Storage, Cloud Functions), CI/CD pipelines with Jenkins/GitLab CI or Github Actions Databases (Advanced): PostgreSQL, MySQL, MongoDB AI/ML (Familiar): Basic understanding of Machine Learning concepts, experience with RAG, Vector Databases (Pinecone or ChromaDB or others) Tools (Expert): Git, Docker, Linux Roles and Responsibilities: Design, develop, and implement highly scalable and secure backend services using Python and Django. 
Architect and develop complex features for our AI-powered platforms Write clean, maintainable, and well-tested code, adhering to best practices and coding standards. Collaborate with cross-functional teams, including front-end developers, data scientists, and product managers, to deliver high-quality software. Mentor junior developers and provide technical guidance. What we offer: Best in class packages Paid holidays and flexible paid time away Casual dress code & flexible working environment Work in an engaging, fast paced environment with ample opportunities for professional development. Medical Insurance covering self & family up to 4 lakhs per person. Diverse and multicultural work environment Be part of an innovation-driven culture that provides the support and resources needed to succeed.
Posted 1 day ago
3.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
About the job: What makes Techjays an inspiring place to work At Techjays, we are driving the future of artificial intelligence with a bold mission to empower businesses worldwide by helping them build AI solutions that transform industries. As an established leader in the AI space, we combine deep expertise with a collaborative, agile approach to deliver impactful technology that drives meaningful change. Our global team consists of professionals who have honed their skills at leading companies such as Google, Akamai, NetApp, ADP, Cognizant Consulting, and Capgemini. With engineering teams across the globe, we deliver tailored AI software and services to clients ranging from startups to large-scale enterprises. Be part of a company that’s pushing the boundaries of digital transformation. At Techjays, you’ll work on exciting projects that redefine industries, innovate with the latest technologies, and contribute to solutions that make a real-world impact. Join us on our journey to shape the future with AI. We’re looking for a skilled Data Analytics Engineer to help us build scalable, data-driven solutions that support real-time decision-making and deep business insights. You’ll play a key role in designing and delivering analytics systems that leverage Power BI , Snowflake , and SQL , helping teams across the organization make data-informed decisions with confidence. Experience : 3 to 8 Years Primary Skills: Power BI / Tableau, SQL, Data Modeling, Data Warehousing, ETL/ELT Pipelines, AWS Glue, AWS Redshift, GCP BigQuery, Azure Data Factory, Cloud Data Pipelines, DAX, Data Visualization, Dashboard Development Secondary Skills: Python, dbt, Apache Airflow, Git, CI/CD, DevOps for Data, Snowflake, Azure Synapse, Data Governance, Data Lineage, Apache Beam, Data Catalogs, Basic Machine Learning Concepts Job Location: Coimbatore Key Responsibilities : Develop and maintain scalable, robust ETL/ELT data pipelines across structured and semi-structured data sources. 
Collaborate with data scientists, analysts, and business stakeholders to identify data requirements and transform them into efficient data models. Design and deliver interactive dashboards and reports using Power BI and Tableau. Implement data quality checks, lineage tracking, and monitoring solutions to ensure high reliability of data pipelines. Optimize SQL queries and BI reports for performance and scalability. Work with cloud-native tools in AWS (e.g., Glue, Redshift, S3), or GCP (e.g., BigQuery, Dataflow), or Azure (e.g., Data Factory, Synapse). Automate data integration and visualization workflows. Required Qualifications: Bachelor's or Master’s degree in Computer Science, Information Systems, Data Science, or a related field. 3+ years of experience in data engineering or data analytics roles. Proven experience with Power BI or Tableau – including dashboard design, DAX, calculated fields, and data blending. Proficiency in SQL and experience in data modeling and relational database design. Hands-on experience with data pipelines and orchestration using tools like Airflow, dbt, Apache Beam, or native cloud tools. Experience working with one or more cloud platforms – AWS, GCP, or Azure. Strong understanding of data warehousing concepts and tools such as Snowflake, BigQuery, Redshift, or Synapse. Preferred Skills: Experience with scripting in Python or Java for data processing. Familiarity with Git, CI/CD, and DevOps for data pipelines. Exposure to data governance, lineage, and catalog tools. Basic understanding of ML pipelines or advanced analytics is a plus. Soft Skills: Strong analytical and problem-solving skills. Excellent communication and collaboration abilities. Detail-oriented with a proactive approach to troubleshooting and optimization. What we offer: Best-in-class packages Paid holidays and flexible paid time away Casual dress code & flexible working environment Medical Insurance covering self & family up to 4 lakhs per person. 
Work in an engaging, fast-paced environment with ample opportunities for professional development. Diverse and multicultural work environment Be part of an innovation-driven culture that provides the support and resources needed to succeed.
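The ETL/ELT-with-data-quality-checks responsibility above can be sketched in miniature. This assumes stdlib sqlite3 standing in for a warehouse such as Snowflake or Redshift; table and column names are illustrative, not from any real pipeline:

```python
# Toy ETL step with a data-quality check, using sqlite3 as a stand-in
# warehouse. The raw_orders/clean_orders schema is an illustrative
# assumption; real pipelines would target Redshift/BigQuery/Snowflake.
import sqlite3

conn = sqlite3.connect(":memory:")

# Extract: land raw source rows, including a negative amount and a duplicate.
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?)",
    [(1, 120.0), (2, -5.0), (3, 80.0), (3, 80.0)],
)

# Transform + Load: drop invalid amounts and deduplicate into a clean table.
conn.execute("CREATE TABLE clean_orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.execute(
    "INSERT INTO clean_orders "
    "SELECT DISTINCT id, amount FROM raw_orders WHERE amount >= 0"
)

# Data-quality check: loaded rows must equal the distinct valid source rows.
loaded = conn.execute("SELECT COUNT(*) FROM clean_orders").fetchone()[0]
assert loaded == 2, "data-quality check failed: unexpected row count"
print(f"loaded {loaded} clean rows")
```

In practice an orchestrator like Airflow or dbt would run the transform and the quality assertion as separate, monitored tasks, failing the pipeline when a check trips.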
Posted 1 day ago
20.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Sunnyvale, CA, USA; Atlanta, GA, USA; Bengaluru, Karnataka, India . Minimum qualifications: Bachelor's degree in Computer Science, Mathematics, Statistics, Engineering, or equivalent practical experience. 20 years of experience in technical program management or software engineering, including leading transformational initiatives at a senior level. 15 years of experience leading 2 of the following teams: business intelligence, data science, data engineering and platform teams. Preferred qualifications: Master's degree in Computer Science, Mathematics, Statistics, or Engineering. Experience in multiple roles at scale such as data engineering, solutions engineer, systems, or software architect in data center or cloud computing environments, applying deep systems architecture expertise. About The Job Behind everything our users see online is the architecture built by the Technical Infrastructure team to keep it running. From developing and maintaining our data centers to building the next generation of Google platforms, we make Google's product portfolio possible. We're proud to be our engineers' engineers and love voiding warranties by taking things apart so we can rebuild them. We're always on call to keep our networks up and running, ensuring our users have the best and fastest experience possible. Google's projects, like our users, span the globe and require leaders to cultivate a strategic vision with impact beyond the software function. As the Senior Director Software Systems, End-to-End Systems & Business Systems and Analytics (E2ESBA), you will lead and build global teams of TPMs, Solutions Engineers, BSAs, AEs, TSEs, and SWEs. 
You will collaborate closely with ML Systems and Cloud AI (MSCA) Software Engineering partner teams and Cloud Supply Chain Operations (CSCO) users to develop system software, product strategy, and systems architecture. Your leadership will drive transformational innovation for platforms at scale, incorporating AI, reducing variability, and improving cycle time across the data center fleet by challenging conventional wisdom and influencing technical postures. This role is pivotal in establishing customer-focused, single-threaded ownership for data systems within CSCO and systems for the data centers, balancing initiatives while evolving execution strategy, organization, and talent to adapt to external forces. Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems. The US base salary range for this full-time position is $272,000-$383,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process. Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google . Responsibilities Lead and develop high-performing global multi-disciplinary teams (TPMs, Solutions Engineers, BSAs, AEs, TSEs, SWEs). 
Collaborate with engineering leaders to define and articulate strategic plans for server operations systems and enterprise data/analytics, influencing best practices and product excellence. Lead strategic planning and execution forums to prioritize systems products and engineering features and to integrate disparate strategies. Develop and maintain end-to-end systems architecture, optimizing testability, performance, reliability, and cost. Oversee TPMs, BSAs, AEs, and A51 TSEs/SWEs for CSCO systems delivery. Lead Delivery (TPMs), Data Engineering, ODW Applications, Data Science and Product, and PLX platform software (SWEs). Advocate E2ESBA's vision for Google data centers and supply chain as a global standard, driving 10x thinking and AI innovation for touchless, measured user experiences, acting as a recognized leader. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form .
Posted 1 day ago