5.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Senior AI Cloud Operations Engineer
Seniority: 4-5, Offshore

Profile Summary: We're looking for a Senior AI Cloud Operations Engineer to build a new AI Cloud Operations team, starting with this strategic position. We are searching for an experienced Senior AI Cloud Operations Engineer with deep expertise in AI technologies to lead our cloud-based AI infrastructure management. This role is integral to ensuring our AI systems' scalability, reliability, and performance, enabling us to deliver cutting-edge solutions. The ideal candidate will have a robust understanding of machine learning frameworks, cloud services architecture, and operations management.

Key Responsibilities:
- Cloud Architecture Design: Design, architect, and manage scalable cloud infrastructure tailored for AI workloads, leveraging platforms like AWS, Azure, or Google Cloud.
- System Monitoring and Optimization: Implement comprehensive monitoring solutions to ensure high availability and swift performance, utilizing tools like Prometheus, Grafana, or CloudWatch.
- Collaboration and Model Deployment: Work closely with data scientists to operationalize AI models, ensuring seamless integration with existing systems and workflows. Familiarity with tools such as MLflow or TensorFlow Serving is beneficial.
- Automation and Orchestration: Develop automated deployment pipelines using orchestration tools like Kubernetes and Terraform to streamline operations and reduce manual intervention.
- Security and Compliance: Ensure that all cloud operations adhere to security best practices and compliance standards, including data privacy regulations like GDPR and HIPAA.
- Documentation and Reporting: Create and maintain detailed documentation of cloud configurations, procedures, and operational metrics to foster transparency and continuous improvement.
- Performance Tuning: Conduct regular performance assessments and implement strategies to optimize cloud resource utilization and reduce costs without compromising system effectiveness.
- Issue Resolution: Rapidly identify, diagnose, and resolve technical issues, minimizing downtime and ensuring maximum uptime.

Qualifications:
- Educational Background: Bachelor's degree in Computer Science, Engineering, or a related field; Master's degree preferred.
- Professional Experience: 5+ years of experience in cloud operations, particularly within AI environments, with demonstrated expertise in deploying and managing complex AI systems in cloud settings.
- Technical Expertise: Deep knowledge of cloud platforms (AWS, Azure, Google Cloud), including their AI-specific services such as AWS SageMaker or Google AI Platform.
- AI/ML Proficiency: In-depth understanding of AI/ML frameworks and libraries such as TensorFlow, PyTorch, and scikit-learn, along with experience in ML model lifecycle management.
- Infrastructure as Code: Proficiency in infrastructure-as-code tools such as Terraform and AWS CloudFormation to automate and manage cloud deployment processes.
- Containerization and Microservices: Expertise in managing containerized applications using Docker and orchestrating services with Kubernetes.
- Soft Skills: Strong analytical, problem-solving, and communication skills, with the ability to work effectively both independently and in collaboration with cross-functional teams.

Preferred Qualifications:
- Advanced certifications in cloud services, such as AWS Certified Solutions Architect or Google Cloud Professional Data Engineer.
- Experience in advanced AI techniques such as deep learning or reinforcement learning.
- Knowledge of emerging AI technologies and trends to drive innovation within existing infrastructure.

List of Used Tools:
- Cloud provider: Azure, AWS, or Google Cloud.
- Performance and monitoring: Prometheus, Grafana, or CloudWatch.
- Collaboration and model deployment: MLflow or TensorFlow Serving.
- Automation and orchestration: Kubernetes and Terraform.
- Security and compliance: data privacy regulations like GDPR and HIPAA.
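For candidates wondering what the monitoring side of such a role can look like in practice, here is a minimal, illustrative sketch of exposing AI-serving metrics to Prometheus via the Python prometheus_client library; the metric names, port, and the simulated inference call are all invented for the example:

```python
# A minimal sketch of exposing AI-serving metrics for Prometheus to scrape.
# Metric names, the port, and the fake inference call are illustrative.
import random
import time

from prometheus_client import Gauge, Histogram, start_http_server

INFERENCE_LATENCY = Histogram(
    "model_inference_latency_seconds", "Latency of model inference calls"
)
GPU_UTILIZATION = Gauge(
    "gpu_utilization_ratio", "Fraction of GPU capacity in use"
)

def run_inference(payload):
    """Placeholder for a real model call; sleeps to simulate work."""
    time.sleep(random.uniform(0.01, 0.1))
    return {"result": "ok"}

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        with INFERENCE_LATENCY.time():  # records call duration in the histogram
            run_inference({"input": "example"})
        GPU_UTILIZATION.set(random.random())  # stand-in for a real GPU probe
```

A Grafana or CloudWatch dashboard would then be pointed at the scraped metrics to provide the high-availability view the role describes.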
Posted 4 days ago
12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description: We are hiring a Senior Data Engineer with deep expertise in Azure Databricks, Azure Data Lake, and Azure Synapse Analytics to join our high-performing team. The ideal candidate will have a proven track record in designing, building, and optimizing big data pipelines and architectures while leveraging their technical proficiency in cloud-based data engineering. This role requires a strategic thinker who can bridge the gap between raw data and actionable insights, enabling data-driven decision-making for large-scale enterprise initiatives. A strong foundation in distributed computing, ETL frameworks, and advanced data modeling is crucial. The individual will work closely with data architects, analysts, and business teams to deliver scalable and efficient data solutions.

Key Responsibilities:
- Data Engineering & Architecture: Design, develop, and maintain high-performance data pipelines for structured and unstructured data using Azure Databricks and Apache Spark. Build and manage scalable data ingestion frameworks for batch and real-time data processing. Implement and optimize data lake architecture in Azure Data Lake to support analytics and reporting workloads. Develop and optimize data models and queries in Azure Synapse Analytics to power BI and analytics use cases.
- Cloud-Based Data Solutions: Architect and implement modern data lakehouses combining the best of data lakes and data warehouses. Leverage Azure services like Data Factory, Event Hub, and Blob Storage for end-to-end data workflows. Ensure security, compliance, and governance of data through Azure role-based access control (RBAC) and Data Lake ACLs.
- ETL/ELT Development: Develop robust ETL/ELT pipelines using Azure Data Factory, Databricks notebooks, and PySpark. Perform data transformations, cleansing, and validation to prepare datasets for analysis. Manage and monitor job orchestration, ensuring pipelines run efficiently and reliably.
- Performance Optimization: Optimize Spark jobs and SQL queries for large-scale data processing. Implement partitioning, caching, and indexing strategies to improve the performance and scalability of big data workloads. Conduct capacity planning and recommend infrastructure optimizations for cost-effectiveness.
- Collaboration & Stakeholder Management: Work closely with business analysts, data scientists, and product teams to understand data requirements and deliver solutions. Participate in cross-functional design sessions to translate business needs into technical specifications. Provide thought leadership on best practices in data engineering and cloud computing.
- Documentation & Knowledge Sharing: Create detailed documentation for data workflows, pipelines, and architectural decisions. Mentor junior team members and promote a culture of learning and innovation.

Required Qualifications:
- Experience: 12+ years of experience in data engineering, big data, or cloud-based data solutions, with proven expertise in Azure Databricks, Azure Data Lake, and Azure Synapse Analytics.
- Technical Skills: Strong hands-on experience with Apache Spark and distributed data processing frameworks. Advanced proficiency in Python and SQL for data manipulation and pipeline development. Deep understanding of data modeling for OLAP, OLTP, and dimensional data models. Experience with ETL/ELT tools like Azure Data Factory or Informatica. Familiarity with Azure DevOps for CI/CD pipelines and version control.
- Big Data Ecosystem: Familiarity with Delta Lake for managing big data in Azure.
- Experience with streaming data frameworks like Kafka, Event Hub, or Spark Streaming.
- Cloud Expertise: Strong understanding of Azure cloud architecture, including storage, compute, and networking. Knowledge of Azure security best practices, such as encryption and key management.

Preferred Skills (Nice to Have):
- Experience with machine learning pipelines and frameworks like MLflow or Azure Machine Learning.
- Knowledge of data visualization tools such as Power BI for creating dashboards and reports.
- Familiarity with Terraform or ARM templates for infrastructure as code (IaC).
- Exposure to NoSQL databases like Cosmos DB or MongoDB.
- Experience with data governance tools like Azure Purview.

Weekly Hours: 40
Time Type: Regular
Location: IND:AP:Hyderabad / Argus Bldg 4f & 5f, Sattva, Knowledge City - Adm: Argus Building, Sattva, Knowledge City

It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.

Job Category: Big Data
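As an illustration of the kind of batch pipeline this role describes, here is a minimal PySpark sketch of an ingestion step that writes a Delta table, assuming a Databricks-style environment where Delta Lake is available; the storage paths, column names, and validation rule are invented for the example:

```python
# A minimal sketch of a batch ingestion step on Databricks; paths and
# column names are illustrative, and Delta Lake is assumed available.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

# Read raw landing-zone JSON (hypothetical ADLS path).
raw = spark.read.json("abfss://landing@account.dfs.core.windows.net/orders/")

cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)  # basic validation rule
)

# Write as a partitioned Delta table for downstream Synapse/BI queries.
(cleaned.write.format("delta")
        .mode("overwrite")
        .partitionBy("order_date")
        .save("abfss://curated@account.dfs.core.windows.net/orders_delta"))
```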
Posted 4 days ago
7.0 years
0 Lacs
India
On-site
Welcome to Radin Health, a premier Healthcare IT Software as a Service (SaaS) provider specializing in revolutionizing radiology workflow processes. Our cloud-based solutions encompass Radiology Information Systems (RIS), Picture Archiving and Communication Systems (PACS), Voice Dictation (Dictation AI), and Radiologist Workflow Management (RADIN Select), all powered by Artificial Intelligence. We are an innovative, forward-thinking company with AI-powered solutions.

Join Our Team! We are seeking a highly skilled AI Engineer with proven experience in healthcare document intelligence. You will lead the development and optimization of machine learning models for document classification and OCR-based data extraction, helping us extract structured data from prescriptions, insurance cards, consent forms, orders, and other medical records. You will be part of a fast-paced, cross-functional team working to integrate AI seamlessly into healthcare operations while maintaining the highest standards of accuracy, security, and compliance.

Key Responsibilities:
- Model Development: Design, train, and deploy ML/DL models for classifying healthcare documents and extracting structured data (e.g., patient info, insurance details, physician names, procedures).
- OCR Integration & Tuning: Work with OCR engines like Tesseract, AWS Textract, or Google Vision to extract text from scanned images and PDFs, enhancing accuracy via pre- and post-processing techniques.
- Document Classification: Build and refine document classification models using supervised learning and NLP techniques, with real-world noisy healthcare data.
- Data Labeling & Annotation: Create tools and workflows for large-scale labeling; collaborate with clinical experts and data annotators to improve model precision.
- Model Evaluation & Improvement: Measure model performance using precision, recall, and F1 scores, and deploy improvements based on real-world production feedback.
- Pipeline Development: Build scalable ML pipelines for training, validation, inference, and monitoring using frameworks like PyTorch, TensorFlow, and MLflow.
- Collaboration: Work closely with backend engineers, product managers, and QA teams to integrate models into healthcare products and workflows.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, AI, Data Science, or a related field.
- 7+ years of experience in machine learning, with at least 3 years in healthcare AI applications.
- Strong experience with OCR technologies (Tesseract, AWS Textract, Azure Form Recognizer, Google Vision API).
- Proven track record in training and deploying classification models for healthcare documents.
- Experience with Python (NumPy, Pandas, scikit-learn), deep learning frameworks (PyTorch, TensorFlow), and NLP libraries (spaCy, Hugging Face, etc.).
- Understanding of HIPAA-compliant data handling and healthcare terminology.
- Familiarity with real-world document types such as referrals, AOBs, insurance cards, and physician notes.

Preferred Qualifications:
- Experience working with noisy scanned documents and handwritten text.
- Exposure to EHR/EMR systems and HL7/FHIR integration.
- Knowledge of labeling tools like Label Studio or Prodigy.
- Experience with active learning or human-in-the-loop systems.
- Contributions to healthcare AI research or open-source projects.
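To make the OCR-plus-classification flow concrete, here is a minimal sketch using pytesseract and scikit-learn; the file names, labels, and two-document training set are invented, and a real system would train on a large labeled corpus:

```python
# A minimal sketch of OCR followed by document classification.
# File names and labels are invented; the training set is deliberately tiny.
import pytesseract
from PIL import Image
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def ocr_text(path: str) -> str:
    """Extract raw text from a scanned page with Tesseract."""
    return pytesseract.image_to_string(Image.open(path))

# Tiny illustrative training set; production systems use labeled corpora.
texts = [ocr_text(p) for p in ["referral_1.png", "insurance_card_1.png"]]
labels = ["referral", "insurance_card"]

# TF-IDF features feeding a linear classifier: simple and interpretable.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict([ocr_text("unknown_scan.png")]))
```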
Posted 4 days ago
8.0 - 12.0 years
35 - 50 Lacs
Bengaluru
Work from Office
My profile - linkedin.com/in/yashsharma1608

Position: AI Architect (Gen AI)
Experience: 8-10 years
Notice Period: Immediate to 15 days
Budget: up to 45-50 LPA
Location: Bangalore
Note: any developer with a minimum of 3 to 4 years in AI; SaaS company experience mandatory; product-based company experience mandatory.

Responsibilities:
- Discuss the feasibility of AI/ML use cases along with architectural design with business teams, and translate the vision of business leaders into realistic technical implementation.
- Play a key role in defining the AI architecture and selecting appropriate technologies from a pool of open-source and commercial offerings.
- Design and implement robust ML infrastructure and deployment pipelines.
- Establish comprehensive MLOps practices for model training, versioning, and deployment.
- Lead the development of HR-specialized language models (SLMs).
- Implement model monitoring, observability, and performance optimization frameworks.
- Develop and execute fine-tuning strategies for large language models.
- Create and maintain data quality assessment and validation processes.
- Design model versioning systems and A/B testing frameworks.
- Define technical standards and best practices for AI development.
- Optimize infrastructure for cost, performance, and scalability.

Required Qualifications:
- 7+ years of experience in ML/AI engineering or related technical roles.
- 3+ years of hands-on experience with MLOps and production ML systems.
- Demonstrated expertise in fine-tuning and adapting foundation models.
- Strong knowledge of model serving infrastructure and orchestration.
- Proficiency with MLOps tools (MLflow, Kubeflow, Weights & Biases, etc.).
- Experience implementing model versioning and A/B testing frameworks.
- Strong background in data quality methodologies for ML training.
- Proficiency in Python and ML frameworks (PyTorch, TensorFlow, Hugging Face).
- Experience with cloud-based ML platforms (AWS, Azure, Google Cloud).
- Proven track record of deploying ML models at scale.

Preferred Qualifications:
- Experience developing AI applications for enterprise software domains.
- Knowledge of distributed training techniques and infrastructure.
- Experience with retrieval-augmented generation (RAG) systems.
- Familiarity with vector databases (Pinecone, Weaviate, Milvus).
- Understanding of responsible AI practices and bias mitigation.
- Bachelor's or Master's degree in Computer Science, Machine Learning, or a related field.

What We Offer:
- Opportunity to shape AI strategy for a fast-growing HR technology leader.
- Collaborative environment focused on innovation and impact.
- Competitive compensation package.
- Professional development opportunities.
- Flexible work arrangements.

Qualified candidates who are passionate about applying cutting-edge AI to transform HR technology are encouraged to apply.
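For a concrete picture of the retrieval step in the RAG systems this role mentions, here is a minimal sketch using sentence-transformers; the model name, documents, and query are illustrative:

```python
# A minimal sketch of RAG retrieval: embed documents once, then rank
# them against a query by cosine similarity. Contents are invented.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Leave policy: employees accrue 1.5 days per month.",
    "Payroll runs on the last business day of each month.",
    "Referral bonuses are paid after 90 days of employment.",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # dot product of unit vectors = cosine similarity
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

# The retrieved passages would then be placed in the LLM prompt as context.
print(retrieve("When do I get paid?"))
```

A production system would swap the in-memory array for one of the vector databases named above (Pinecone, Weaviate, Milvus).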
Posted 4 days ago
0.0 - 2.0 years
5 - 8 Lacs
Gurugram
Work from Office
Build the future of AI video with Django, PostgreSQL, Redis, DSPy, MLflow & GCP. Debug tough issues, ship fast, and own features end-to-end. Must love AI tools, learn fast, and thrive under pressure. Hiring process: coding task, then final interview. Share your GitHub!
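For context on the MLflow piece of this stack, here is a minimal sketch of logging a training run; the experiment name, parameters, and metric values are invented:

```python
# A minimal sketch of experiment tracking with MLflow; all names and
# values are illustrative placeholders for a real training loop.
import mlflow

mlflow.set_experiment("video-model-experiments")

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("learning_rate", 1e-3)
    mlflow.log_param("batch_size", 32)
    # ... train the model here ...
    mlflow.log_metric("val_loss", 0.42)
    mlflow.log_metric("val_accuracy", 0.87)
```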
Posted 4 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Summary: We are seeking a highly motivated AI/ML Engineer to design, develop, and deploy machine learning solutions that solve real-world problems. The ideal candidate should have strong foundations in machine learning algorithms and Python programming, and experience with model development, data pipelines, and production deployment in cloud or on-prem environments.

Key Responsibilities:
- Design and implement machine learning models and AI solutions for business use cases.
- Build and optimize data preprocessing pipelines for training and inference.
- Train, evaluate, and fine-tune supervised, unsupervised, and deep learning models.
- Collaborate with data engineers, product teams, and software developers.
- Deploy ML models into production using APIs, Docker, or cloud-native tools.
- Monitor model performance and retrain/update models as needed.
- Document model architectures, experiments, and performance metrics.
- Research and stay updated on new AI/ML trends and tools.

Required Skills and Experience:
- Strong programming skills in Python (NumPy, Pandas, scikit-learn, etc.).
- Experience with deep learning frameworks like TensorFlow, Keras, or PyTorch.
- Solid understanding of machine learning algorithms, data structures, and statistics.
- Experience with NLP, computer vision, or time series analysis is a plus.
- Familiarity with tools like Jupyter, MLflow, or Weights & Biases.
- Understanding of Docker, Git, and RESTful APIs.
- Experience with cloud platforms such as AWS, GCP, or Azure.
- Strong problem-solving and communication skills.

Nice to Have:
- Experience with MLOps tools and concepts (CI/CD for ML, model monitoring).
- Familiarity with big data tools (Spark, Hadoop).
- Knowledge of FastAPI, Flask, or Streamlit for ML API development.
- Understanding of transformer models (e.g., BERT, GPT) or LLM integration.

Education:
- Bachelor's or Master's degree in Computer Science, Data Science, AI, or a related field.
- Certifications in Machine Learning/AI (e.g., Google ML Engineer, AWS ML Specialty) are a plus.
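As an illustration of deploying a model behind an API, as the responsibilities above describe, here is a minimal FastAPI sketch; the model file name, module name, and feature schema are invented, and the model is assumed to be a pickled scikit-learn estimator:

```python
# A minimal sketch of serving a trained model behind a REST API.
# "model.joblib" and the feature schema are hypothetical.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # loaded once at startup

class Features(BaseModel):
    values: list[float]

@app.post("/predict")
def predict(features: Features):
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}

# Run with: uvicorn serve:app --host 0.0.0.0 --port 8080
# (assuming this file is saved as serve.py)
```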
Posted 4 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
This requirement is for a client.

Job Overview: We are seeking a skilled Machine Learning Engineer with a strong grasp of ML algorithms, techniques, and best practices. This role offers the opportunity to design, build, and deploy scalable machine learning solutions in a dynamic environment.

Responsibilities:
- Apply a strong understanding of ML algorithms, techniques, and best practices.
- Work with Databricks, Azure AI services, and other ML platforms, cloud computing platforms (e.g., AWS, Azure, GCP), and frameworks (e.g., TensorFlow, PyTorch, scikit-learn).
- Use MLflow or Kubeflow frameworks for the ML lifecycle.
- Apply strong Python programming skills and data analytics expertise.
- Build Gen AI based solutions, such as chatbots using RAG approaches.
- Work with Gen AI frameworks such as LangChain/LangGraph, AutoGen, CrewAI, etc.

Requirements:
- Proven experience as a Machine Learning Engineer, Data Scientist, or similar role, with a focus on product matching, image matching, and LLMs.
- Solid understanding of machine learning algorithms and frameworks (e.g., TensorFlow, PyTorch, scikit-learn).
- Hands-on experience with product matching algorithms and image recognition techniques.
- Experience with natural language processing and large language models (LLMs) such as GPT, BERT, or similar architectures.
- Optimize and fine-tune models for performance and scalability.
- Collaborate with cross-functional teams to integrate ML solutions into products.
- Stay updated with the latest advancements in AI and machine learning.

Please feel free to drop your resume to pushpa.belliappa@tekworks.in
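To make the product-matching focus concrete, here is a minimal sketch of text-based product matching with TF-IDF character n-grams in scikit-learn; the catalog entries and query are invented:

```python
# A minimal sketch of matching a noisy product title to a catalog entry.
# Character n-grams tolerate typos and formatting differences.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalog = [
    "Apple iPhone 15 Pro 256GB Black Titanium",
    "Samsung Galaxy S24 Ultra 512GB Gray",
    "Apple iPhone 15 128GB Blue",
]

vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
catalog_vecs = vectorizer.fit_transform(catalog)

def match(query: str) -> str:
    """Return the catalog entry most similar to a noisy product title."""
    scores = cosine_similarity(vectorizer.transform([query]), catalog_vecs)[0]
    return catalog[scores.argmax()]

print(match("iphone 15 pro 256 gb black"))
```

Image matching and LLM-based matching would layer on top of the same retrieve-and-rank pattern with embedding models instead of TF-IDF.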
Posted 4 days ago
3.0 - 5.0 years
0 Lacs
Hyderābād
Remote
Company Description: It all started in sunny San Diego, California in 2004 when a visionary engineer, Fred Luddy, saw the potential to transform how we work. Fast forward to today — ServiceNow stands as a global market leader, bringing innovative AI-enhanced technology to over 8,100 customers, including 85% of the Fortune 500®. Our intelligent cloud-based platform seamlessly connects people, systems, and processes to empower organizations to find smarter, faster, and better ways to work. But this is just the beginning of our journey. Join us as we pursue our purpose to make the world work better for everyone.

Job Description

About the Team: The Finance Analytics & Insights (FA&I) team is transforming how Finance operates by embedding AI/ML into the core of our decision-making processes. We are building intelligent, scalable data products that power use cases across forecasting, anomaly detection, case summarization, and agentic automation. Our global team includes data product managers, analysts, and engineers who are passionate about delivering measurable business value.

Role Overview: We are seeking a highly motivated and analytically strong ML Engineer to join our India-based team. This role will support the development and scaling of AI/ML-powered data products that drive strategic insights across Finance. As an IC3-level individual contributor, you will work closely with the Data Product Manager and Insights Analyst to build AI/ML solutions that deliver measurable business value.

Key Responsibilities:
- Design, build, and deploy machine learning models that support use cases such as forecasting, anomaly detection, case summarization, and agentic AI assistants.
- Partner with the Insights Analyst to perform feature engineering, exploratory data analysis, and hypothesis testing.
- Build and iterate on proof-of-concepts (POCs) to validate model design and demonstrate business value.
- Collaborate with the Data Product Manager to align model development with product strategy and business outcomes.
- Own and manage the Databricks instance for the FA&I team, partnering with the DT Data & Analytics team to define a roadmap of capabilities, test and validate new features, and ensure the platform supports scalable ML development and deployment.
- Ensure models are production-ready, scalable, and maintainable, working closely with DT and D&A teams to integrate into enterprise platforms.
- Monitor model performance, implement feedback loops, and retrain models as needed.
- Contribute to agile product development processes including sprint planning, backlog grooming, and user story creation.

Qualifications

Required Skills & Experience:
- 3-5 years of experience in machine learning engineering, data science, or applied AI roles.
- Strong proficiency in Python and ML libraries (e.g., scikit-learn, XGBoost, TensorFlow, PyTorch).
- Solid understanding of feature engineering, model evaluation, and MLOps practices.
- Experience working with large datasets using SQL and Snowflake.
- Familiarity with Databricks for model development and orchestration.
- Experience with CI/CD pipelines, version control (Git), and ML workflow tools.
- Ability to translate business problems into ML solutions and communicate technical concepts to non-technical stakeholders.
- Experience working in agile teams and collaborating with product managers, analysts, and engineers.

Preferred Qualifications:
- Experience working in or supporting Finance or Accounting teams.
- Prior experience deploying models in production environments and integrating with enterprise systems.
- Familiarity with GenAI, prompt engineering, or LLM-based applications is a plus.
- Experience with MLflow, Azure ML, or similar platforms.
- Comfort with async collaboration tools and practices, including Teams, recorded video demos, and documentation-first communication.
- Experience working in a global, cross-functional environment with stakeholders across time zones.

Key Behaviors & Mindsets:
- Builder's Mentality: You love turning ideas into working models and iterating quickly to improve them.
- Collaborative Engineer: You work closely with analysts and product managers to co-create solutions that solve real business problems.
- Customer-Centric: You care deeply about the end user and build models that are interpretable, actionable, and aligned with business needs.
- Bias for Action: You move fast, test often, and focus on delivering value, not just code.
- Global Mindset: You thrive in a distributed team and proactively align to US morning hours (PST overlap) to keep momentum across geographies.
- Async-First Communicator: You're comfortable working in a hybrid async environment, leveraging Teams, recorded demos, and documentation to keep work moving forward.
- Growth-Oriented: You're always learning, whether it's a new algorithm, tool, or business domain, and you help others grow too.

Additional Information

Work Personas: We approach our distributed world of work with flexibility and trust. Work personas (flexible, remote, or required in office) are categories that are assigned to ServiceNow employees depending on the nature of their work and their assigned work location. To determine eligibility for a work persona, ServiceNow may confirm the distance between your primary residence and the closest ServiceNow office using a third-party service.

Equal Opportunity Employer: ServiceNow is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, creed, religion, sex, sexual orientation, national origin or nationality, ancestry, age, disability, gender identity or expression, marital status, veteran status, or any other category protected by law. In addition, all qualified applicants with arrest or conviction records will be considered for employment in accordance with legal requirements.

Accommodations: We strive to create an accessible and inclusive experience for all candidates. If you require a reasonable accommodation to complete any part of the application process, or are unable to use this online application and need an alternative method to apply, please contact globaltalentss@servicenow.com for assistance.

Export Control Regulations: For positions requiring access to controlled technology subject to export control regulations, including the U.S. Export Administration Regulations (EAR), ServiceNow may be required to obtain export control approval from government authorities for certain individuals. All employment is contingent upon ServiceNow obtaining any export license or other approval that may be required by relevant export control authorities.

From Fortune. ©2025 Fortune Media IP Limited. All rights reserved. Used under license.
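For a concrete view of the anomaly-detection use case named above, here is a minimal scikit-learn sketch with IsolationForest; the expense figures are invented, and the last value is a deliberate outlier:

```python
# A minimal sketch of anomaly detection on invented finance figures.
import numpy as np
from sklearn.ensemble import IsolationForest

# Monthly expense amounts; the last value is a deliberate outlier.
amounts = np.array([[1020], [980], [1005], [995], [1010], [9800]])

detector = IsolationForest(contamination=0.1, random_state=0).fit(amounts)
flags = detector.predict(amounts)  # -1 marks an anomaly, 1 marks normal

for value, flag in zip(amounts.ravel(), flags):
    print(value, "ANOMALY" if flag == -1 else "ok")
```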
Posted 4 days ago
0.0 - 3.0 years
9 - 11 Lacs
Hyderābād
Remote
Working with Us
Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible. Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more: careers.bms.com/working-with-us.

Summary: The Data Scientist I will play a crucial role in supporting operational analytics across Global Product & Supply (GPS) to ensure products continue to serve the most pressing GPS analytics needs, with potential opportunities to build new advanced analytics capabilities such as predictive modelling, simulation, and optimization. The Data Scientist I should have a strong interest in solving business problems, and an eagerness to work on all parts of the analytics value chain, from partnering with IT on data pipelines to operationalizing predictive models in the service of our patients around the world.

Roles & Responsibilities:
- Conduct analysis and interpretation of complex data sets to derive meaningful insights and recommendations based on an understanding of GPS priorities, critical issues, and value levers.
- Collaborate with stakeholders to identify business problems, goals, and KPIs to design, establish, and maintain data pipelines, models, and business-facing reports and dashboards.
- Collaborate proactively with IT teams to develop and enhance data infrastructure, data pipelines, and analytical tools for efficient data collection, processing, and analysis.
- Prepare reports, dashboards, and presentations to communicate analyses to stakeholders at various levels of the organization.
- Follow technical best practices in building, maintaining, and enhancing analytics output with scalable solutions, including code version control, pipeline management, deployment, and documentation.
- Work hours that provide sufficient overlap with standard east coast US working hours.

Skills and Competencies:
- Experience with developing predictive and prescriptive machine learning / artificial intelligence models for classification, regression, and time-series problems.
- Experience with MLOps principles and tools (MLflow, Kubeflow, or similar MLOps platforms) is a plus.
- Solid understanding of digital analytics tools and platforms and version control.
- Strong communication skills with the ability to present complex information to non-technical stakeholders in a clear manner.
- Strong business acumen and strategic thinking, with the ability to translate analytical findings into actionable insights and recommendations.

Experience:
- Bachelor's or Master's degree in an analytical, engineering, operations research, or scientific discipline.
- Proven experience (typically 0-3 years) in a data and analytics role, including direct development experience.
- Experience working with large datasets, data visualization tools, and statistical software packages and platforms (specifically Python, advanced SQL, AWS, GitHub, Tableau).
- Experience in the GPS/biopharma industry a plus.

If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career.

Uniquely Interesting Work, Life-changing Careers
With a single vision as inspiring as Transforming patients' lives through science™, every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues.

On-site Protocol
BMS has an occupancy structure that determines where an employee is required to conduct their work. This structure includes site-essential, site-by-design, field-based and remote-by-design jobs. The occupancy type that you are assigned is determined by the nature and responsibilities of your role: Site-essential roles require 100% of shifts onsite at your assigned facility. Site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility. For these roles, onsite presence is considered an essential job function and is critical to collaboration, innovation, productivity, and a positive Company culture. For field-based and remote-by-design roles the ability to physically travel to visit customers, patients or business partners and to attend meetings on behalf of BMS as directed is an essential job function.

BMS is dedicated to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments and ongoing support in their roles. Applicants can request a reasonable workplace accommodation/adjustment prior to accepting a job offer. If you require reasonable accommodations/adjustments in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement.

BMS cares about your well-being and the well-being of our staff, customers, patients, and communities. As a result, the Company strongly recommends that all employees be fully vaccinated for Covid-19 and keep up to date with Covid-19 boosters. BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area. If you live in or expect to work from Los Angeles County if hired for this position, please visit this page for important additional information: https://careers.bms.com/california-residents/

Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations.
Posted 4 days ago
10.0 years
4 - 8 Lacs
Calcutta
On-site
About Lexmark: Founded in 1991 and headquartered in Lexington, Kentucky, Lexmark is recognized as a global leader in print hardware, service, software solutions and security by many of the technology industry's leading market analyst firms. Lexmark creates cloud-enabled imaging and IoT solutions that help customers in more than 170 countries worldwide quickly realize business outcomes. Lexmark's digital transformation objectives accelerate business transformation, turning information into insights, data into decisions, and analytics into action. Lexmark India, located in Kolkata, is one of the research and development centers of Lexmark International Inc. The India team works on cutting-edge technologies and domains like cloud, AI/ML, data science, IoT, and cyber security, creating innovative solutions for our customers and helping them minimize the cost and IT burden of providing a secure, reliable, and productive print and imaging environment. At our core, we are a technology company, deeply committed to building our own R&D capabilities, leveraging emerging technologies and partnerships to bring together a library of intellectual property that can add value to our customer's business. Caring for our communities and creating growth opportunities by investing in talent are woven into our culture. It's how we care, grow, and win together.

Job Description/Responsibilities: We are looking for a highly skilled and strategic Data Architect with deep expertise in the Azure Data ecosystem. This role requires a strong command of Azure Databricks, Azure Data Lake, Azure Data Factory, data warehouse design, SQL optimization, and AI/ML integration. The Data Architect will design and oversee robust, scalable, and secure data architectures to support advanced analytics and machine learning workloads.

Qualification: BE/ME/MCA with 10+ years of IT experience.

Must-Have Skills:
- Define and drive the overall Azure-based data architecture strategy aligned with enterprise goals.
- Architect and implement scalable data pipelines, data lakes, and data warehouses using Azure Data Lake, ADF, and Azure SQL/Synapse.
- Provide technical leadership on Azure Databricks (Spark, Delta Lake, Notebooks, MLflow, etc.) for large-scale data processing and advanced analytics use cases.
- Integrate AI/ML models into data pipelines and support the end-to-end ML lifecycle (training, deployment, monitoring).
- Collaborate with cross-functional teams including data scientists, DevOps engineers, and business analysts.
- Evaluate and recommend tools, platforms, and design patterns for data and ML infrastructure.
- Mentor data engineers and junior architects on best practices and architectural standards.
- Strong experience with data modeling, ETL/ELT frameworks, and data warehousing concepts.
- Proficient in SQL, Python, and PySpark.
- Solid understanding of AI/ML workflows and tools.
- Exposure to Azure DevOps.
- Excellent communication and stakeholder management skills.

How to Apply: Are you an innovator? Here is your chance to make your mark with a global technology leader. Apply now!
Posted 4 days ago
0 years
6 - 8 Lacs
Calcutta
On-site
Join our Team

About this opportunity: We are seeking a highly motivated and skilled Data Engineer to join our cross-functional team of Data Architects and Data Scientists. This role offers an exciting opportunity to work on large-scale data infrastructure and AI/ML pipelines, driving intelligent insights and scalable solutions across the organization.

What you will do:
- Build, optimize, and maintain robust ETL/ELT pipelines to support AI/ML and analytics workloads.
- Collaborate closely with Data Scientists to productionize ML models, ensuring scalable deployment and monitoring.
- Design and implement cloud-based data lake and data warehouse architectures.
- Ensure high data quality, governance, security, and observability across data platforms.
- Develop and manage real-time and batch data workflows using tools like Apache Spark, Airflow, and Kafka.
- Support CI/CD and MLOps workflows using tools like GitHub Actions, Docker, Kubernetes, and MLflow.

The skills you bring:
- Languages: Python, SQL, Bash
- Data tools: Apache Spark, Airflow, Kafka, dbt, Pandas
- Cloud platforms: AWS (preferred), Azure, or GCP
- Databases: Snowflake, Redshift, BigQuery, PostgreSQL, NoSQL (MongoDB/DynamoDB)
- DevOps/MLOps: Docker, Kubernetes, MLflow, CI/CD (e.g., GitHub Actions, Jenkins)
- Data modeling: OLAP/OLTP, star/snowflake schema, Data Vault

Why join Ericsson? At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply? You can find all you need to know about our typical hiring process on our careers site. Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Kolkata
Req ID: 768921
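As an illustration of the Airflow workflows mentioned above, here is a minimal daily ETL DAG sketch; the DAG id and task bodies are placeholders, and the `schedule` argument assumes Airflow 2.4 or later:

```python
# A minimal sketch of a daily extract-transform-load DAG in Airflow 2.x.
# Task bodies are placeholders standing in for real pipeline steps.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw events from source")

def transform():
    print("clean and aggregate events")

def load():
    print("write results to the warehouse")

with DAG(
    dag_id="daily_events_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # linear dependency: extract, then transform, then load
```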
Posted 4 days ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
We are looking for an ML Ops Engineer to join our Technology team at Clarivate. You will get the opportunity to work in a cross-cultural work environment while working on the latest web technologies with an emphasis on user-centered design.

About You (Skills & Experience Required):
- Bachelor's or master's degree in computer science, engineering, or a related field.
- 5+ years of experience in machine learning, data engineering, or software development.
- Good experience in building data pipelines, data cleaning, and feature engineering is essential for preparing data for model training.
- Knowledge of programming languages (Python, R) and version control systems (Git) is necessary for building and maintaining MLOps pipelines.
- Experience with MLOps-specific tools and platforms (e.g., Kubeflow, MLflow, Airflow) can streamline MLOps workflows.
- Knowledge of DevOps principles, including CI/CD pipelines, infrastructure as code (IaC), and monitoring, is helpful for automating ML workflows.
- Experience with at least one of the cloud platforms (AWS, GCP, Azure) and their associated services (e.g., compute, storage, ML platforms) is essential for deploying and scaling ML models.
- Familiarity with container orchestration tools like Kubernetes can help manage and scale ML workloads efficiently.

It would be great if you also had:
- Experience with big data technologies (Hadoop, Spark).
- Knowledge of data governance and security practices.
- Familiarity with DevOps practices and tools.

What will you be doing in this role?
- Model Deployment & Monitoring: Oversee the deployment of machine learning models into production environments. Ensure continuous monitoring and performance tuning of deployed models. Implement robust CI/CD pipelines for model updates and rollbacks. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions. Communicate project status, risks, and opportunities to stakeholders. Provide technical guidance and support to team members.
- Infrastructure & Automation: Design and manage scalable infrastructure for model training and deployment. Automate repetitive tasks to improve efficiency and reduce errors. Ensure the infrastructure meets security and compliance standards.
- Innovation & Improvement: Stay updated with the latest trends and technologies in MLOps. Identify opportunities for process improvements and implement them. Drive innovation within the team to enhance MLOps capabilities.

About the Team: You would be part of our incredible data science team in the Intellectual Property (IP) group and work closely with product and technology teams spread across various locations worldwide. You would be working on interesting IP data and interesting challenges to create insights and drive business acumen, adding value to our world-class products and services.

Hours of Work: This is a permanent position with Clarivate: 9 hours per day including a lunch break. You should be flexible with working hours to align with globally distributed teams and stakeholders.

At Clarivate, we are committed to providing equal employment opportunities for all qualified persons with respect to hiring, compensation, promotion, training, and other terms, conditions, and privileges of employment. We comply with applicable laws and regulations governing non-discrimination in all locations.
Posted 4 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Description: Jarvis Business Solutions is a leading eCommerce and CRM company specializing in implementing and delivering solutions for small to large enterprises. With expertise in SAP Hybris Commerce and Salesforce CRM, Jarvis serves clients globally by providing innovative eCommerce and CRM solutions.

Role Description: This is a full-time role for an AI/ML Lead Engineer at Jarvis Business Solutions.

Education & Experience:
- Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field. PhD is a plus.
- 5+ years of industry experience in AI/ML, with at least 1+ year in a technical leadership or managerial role.

Technical Skills:
- Proficiency in Python and ML libraries such as TensorFlow, PyTorch, and scikit-learn.
- Strong background in machine learning, deep learning, and statistical modeling.
- Experience with cloud platforms (AWS, GCP, or Azure) and MLOps tools (e.g., MLflow, Kubeflow, Airflow).
- Familiarity with data engineering tools like Spark, Kafka, or Databricks is a plus.
- Hands-on experience with CI/CD for ML models and model monitoring in production.

If interested, you can apply directly or share your updated CV with sowjanyap@jarvisbusiness.io
Posted 4 days ago
15.0 years
0 Lacs
Nagpur, Maharashtra, India
On-site
Job Title: Tech Lead (AI/ML) – Machine Learning & Generative AI
Location: Nagpur (Hybrid / On-site)
Experience: 8–15 years
Employment Type: Full-time

Job Summary: We are seeking a highly experienced Python Developer with a strong background in traditional Machine Learning and growing proficiency in Generative AI to join our AI Engineering team. This role is ideal for professionals who have delivered scalable ML solutions and are now expanding into LLM-based architectures, prompt engineering, and GenAI productization. You'll be working at the forefront of applied AI, driving both model performance and business impact across diverse use cases.

Key Responsibilities:
- Design and develop ML-powered solutions for use cases in classification, regression, recommendation, and NLP.
- Build and operationalize GenAI solutions, including fine-tuning, prompt design, and RAG implementations using models such as GPT, LLaMA, Claude, or Gemini.
- Develop and maintain FastAPI-based services that expose AI models through secure, scalable APIs.
- Lead data modeling, transformation, and end-to-end ML pipelines, from feature engineering to deployment.
- Integrate with relational (MySQL) and vector databases (e.g., ChromaDB, FAISS, Weaviate) to support semantic search, embedding stores, and LLM contexts.
- Mentor junior team members and review code, models, and system designs for robustness and maintainability.
- Collaborate with product, data science, and infrastructure teams to translate business needs into AI capabilities.
- Optimize model and API performance, ensuring high availability, security, and scalability in production environments.

Core Skills & Experience:
- Strong Python programming skills with 5+ years of applied ML/AI experience.
- Demonstrated experience building and deploying models using TensorFlow, PyTorch, scikit-learn, or similar libraries.
- Practical knowledge of LLMs and GenAI frameworks, including Hugging Face, OpenAI, or custom transformer stacks.
- Proficiency in REST API design using FastAPI and in securing APIs in production environments.
- Deep understanding of MySQL (query performance, schema design, transactions).
- Hands-on experience with vector databases and embeddings for search, retrieval, and recommendation systems.
- Strong foundation in software engineering practices: version control (Git), testing, CI/CD.

Preferred/Bonus Experience:
- Deployment of AI solutions on cloud platforms (AWS, GCP, Azure).
- Familiarity with MLOps tools (MLflow, Airflow, DVC, SageMaker, Vertex AI).
- Experience with Docker, Kubernetes, and container orchestration.
- Understanding of prompt engineering, tokenization, LangChain, or multi-agent orchestration frameworks.
- Exposure to enterprise-grade AI applications in BFSI, healthcare, or regulated industries is a plus.

What We Offer:
- Opportunity to work on a cutting-edge AI stack integrating both classical ML and advanced GenAI.
- High autonomy and influence in architecting real-world AI solutions.
- A dynamic and collaborative environment focused on continuous learning and innovation.
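To make the embedding-store responsibility concrete, here is a minimal FAISS sketch; the vectors are random stand-ins for real sentence embeddings, and the dimensionality is just a typical value:

```python
# A minimal sketch of an embedding store with FAISS. Random vectors stand
# in for real sentence embeddings; normalize + inner product = cosine.
import faiss
import numpy as np

dim = 384  # typical sentence-embedding dimensionality
index = faiss.IndexFlatIP(dim)  # inner product on normalized vectors

embeddings = np.random.rand(100, dim).astype("float32")
faiss.normalize_L2(embeddings)  # in-place L2 normalization
index.add(embeddings)

query = np.random.rand(1, dim).astype("float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)  # top-5 nearest documents
print(ids[0], scores[0])
```

The retrieved ids would map back to document chunks that are then fed to the LLM as context, the same pattern ChromaDB or Weaviate implement as a service.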
Posted 4 days ago
5.0 years
0 Lacs
New Delhi, Delhi, India
On-site
Job Description – Data Scientist – Credit Risk Modelling
Domain: Retail Banking / Credit Cards
Location: Mumbai
Experience: 3–5 years
Industry: Banking / Financial Services

Why would you like to join us? TransOrg Analytics specializes in Data Science, Data Engineering and Generative AI, providing advanced analytics solutions to industry leaders and Fortune 500 companies across India, the US, APAC and the Middle East. We leverage data science to streamline, optimize, and accelerate our clients' businesses. Visit www.transorg.com to learn more about us.

What do we expect from you?
- Build and validate credit risk models, including application scorecards and behavior scorecards (B-score), aligned with business and regulatory requirements.
- Use advanced machine learning algorithms such as logistic regression, XGBoost, and clustering to develop interpretable and high-performance models.
- Translate business problems into data-driven solutions using robust statistical and analytical methods.
- Collaborate with cross-functional teams, including credit policy, risk strategy, and data engineering, to ensure effective model implementation and monitoring.
- Maintain clear, audit-ready documentation for all models and comply with internal model governance standards.
- Track and monitor model performance, proactively suggesting recalibrations or enhancements as needed.

What do you need to excel at?
- Writing efficient and scalable code in Python, SQL, and PySpark for data processing, feature engineering, and model training.
- Working with large-scale structured and unstructured data in a fast-paced banking or fintech environment.
- Deploying and managing models using MLflow, with a strong understanding of version control and model lifecycle management.
- Understanding retail banking products, especially credit card portfolios, customer behavior, and risk segmentation.
- Communicating complex technical outcomes clearly to non-technical stakeholders and senior management.
- Applying a structured problem-solving approach and delivering insights that drive business value.

What are we looking for?
- Bachelor's or master's degree in Statistics, Mathematics, Computer Science, or a related quantitative field.
- 3–5 years of experience in credit risk modelling, preferably in retail banking or credit cards.
- Hands-on expertise in Python, SQL, and PySpark, and experience with MLflow or equivalent MLOps tools.
- Deep understanding of machine learning techniques, including logistic regression, XGBoost, and clustering.
- Proven experience in developing application scorecards and behavior scorecards using real-world banking data.
- Strong documentation and compliance orientation, with an ability to work within regulatory frameworks.
- Curiosity, accountability, and a passion for solving real-world problems using data.
- Cloud knowledge, JIRA, GitHub (good to have).
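For a concrete picture of the scorecard modelling described above, here is a minimal logistic-regression sketch on synthetic data; the feature names in the comment and the AUC/Gini figures it prints are purely illustrative:

```python
# A minimal sketch of fitting an interpretable application-score model.
# Data is synthetic; features stand in for, e.g., utilization, DPD, tenure.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

# Gini = 2*AUC - 1 is the usual discrimination metric for scorecards.
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUC={auc:.3f}  Gini={2 * auc - 1:.3f}")
```

In practice the coefficients would be converted to a points scale and the model tracked through governance and recalibration cycles, as the responsibilities above describe.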
Posted 4 days ago
5.0 - 10.0 years
22 - 25 Lacs
Hyderabad
Work from Office
Entity: Accenture Strategy & Consulting
Team: Global Network Data & AI
Practice: AI Managed Services
Title: AI LLM Technology Architecture Consultant
Role: ML Engineering Consultant
Job Location: Hyderabad

Accenture Global Network's Data & AI practice helps our clients grow their business in entirely new ways. Analytics enables our clients to achieve high performance through insights from data: insights that inform better decisions and strengthen customer relationships. From strategy to execution, Accenture works with organizations to develop analytics capabilities, from accessing and reporting on data to predictive modelling, to outperform the competition.

As part of our Data & AI practice, you will join a worldwide network of smart and driven colleagues experienced in leading AI/ML/statistical tools, methods and applications. From data to analytics and insights to actions, our forward-thinking consultants provide analytically informed, issue-based insights at scale to help our clients improve outcomes and achieve high performance.

What's in it for you?
An opportunity to work on high-visibility projects with top clients around the globe.
Potential to co-create with leaders in strategy, industry experts, enterprise function practitioners, and business intelligence professionals to shape and recommend innovative solutions that leverage emerging technologies.
Ability to embed responsible business into everything, from how you service your clients to how you operate as a responsible professional.
Personalized training modules to develop your strategy and consulting acumen and grow your skills, industry knowledge, and capabilities.
Opportunity to thrive in a culture that is committed to accelerating equality for all.
Engage in boundaryless collaboration across the entire organization.

What you would do in this role:
Work hands-on with the ML services of AWS, Azure, or Google Cloud.
Develop, train, and deploy machine learning models for real-world applications.
Design and implement data pipelines for model training and inference.
Conduct model evaluation, testing, and validation to ensure robustness and accuracy.
Work with large-scale datasets to build and improve predictive models.
Optimize and fine-tune ML algorithms for performance, scalability, and efficiency.
Collaborate with data scientists, software engineers, and product teams to integrate ML solutions into production systems.
Stay up to date with the latest trends and advancements in machine learning and AI.
Write clean, maintainable, and well-documented code.

Accenture is an equal opportunities employer and welcomes applications from all sections of society and does not discriminate on grounds of race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, or any other basis as protected by applicable law.

Qualifications: who are we looking for?
Bachelor's or Master's degree in any engineering stream, or MCA.
Experience or education in Statistics, Data Science, Applied Mathematics, Business Analytics, Computer Science, or Information Systems is preferable.
Proven experience (5+ years) working as per the above job description is required.
Exposure to retail, banking, or healthcare projects is an added advantage.
Proficiency in Python and ML frameworks such as TensorFlow, PyTorch, or scikit-learn.
Strong understanding of algorithms, statistical modeling, and data structures.
Experience with cloud platforms (AWS, GCP, Azure) and ML deployment tools (Docker, Kubernetes, MLflow, etc.).
Knowledge of SQL and NoSQL databases for data handling.
Familiarity with MLOps best practices for continuous integration and deployment of ML models.
Excellent problem-solving skills and ability to work in a collaborative team environment.
Ability to solve complex business problems and deliver client delight.
Strong writing skills to build points of view on current industry trends.
Good client handling skills; able to demonstrate thought leadership and problem-solving skills.
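As a flavor of the model-development loop described above, a minimal sketch of tracking a training run with MLflow (the experiment name is illustrative; a tracking server or local ./mlruns store is assumed):

    # Sketch: logging params, metrics, and the model artifact for one run.
    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split

    X, y = load_diabetes(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    mlflow.set_experiment("demand-forecast-poc")  # hypothetical experiment name
    with mlflow.start_run():
        model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        mlflow.log_param("n_estimators", 200)
        mlflow.log_metric("r2", r2_score(y_te, model.predict(X_te)))
        mlflow.sklearn.log_model(model, "model")  # artifact for later deployment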
Posted 4 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Company Description
It all started in sunny San Diego, California in 2004 when a visionary engineer, Fred Luddy, saw the potential to transform how we work. Fast forward to today: ServiceNow stands as a global market leader, bringing innovative AI-enhanced technology to over 8,100 customers, including 85% of the Fortune 500®. Our intelligent cloud-based platform seamlessly connects people, systems, and processes to empower organizations to find smarter, faster, and better ways to work. But this is just the beginning of our journey. Join us as we pursue our purpose to make the world work better for everyone.

Job Description

About the Team
The Finance Analytics & Insights (FA&I) team is transforming how Finance operates by embedding AI/ML into the core of our decision-making processes. We are building intelligent, scalable data products that power use cases across forecasting, anomaly detection, case summarization, and agentic automation. Our global team includes data product managers, analysts, and engineers who are passionate about delivering measurable business value.

Role Overview
We are seeking a highly motivated and analytically strong ML Engineer to join our India-based team. This role will support the development and scaling of AI/ML-powered data products that drive strategic insights across Finance. As an IC3-level individual contributor, you will work closely with the Data Product Manager and Insights Analyst to build AI/ML solutions that deliver measurable business value.

🔍 Key Responsibilities
Design, build, and deploy machine learning models that support use cases such as forecasting, anomaly detection, case summarization, and agentic AI assistants.
Partner with the Insights Analyst to perform feature engineering, exploratory data analysis, and hypothesis testing.
Build and iterate on proof-of-concepts (POCs) to validate model design and demonstrate business value.
Collaborate with the Data Product Manager to align model development with product strategy and business outcomes.
Own and manage the Databricks instance for the FA&I team, partnering with the DT Data & Analytics team to define a roadmap of capabilities, test and validate new features, and ensure the platform supports scalable ML development and deployment.
Ensure models are production-ready, scalable, and maintainable, working closely with DT and D&A teams to integrate them into enterprise platforms.
Monitor model performance, implement feedback loops, and retrain models as needed.
Contribute to agile product development processes, including sprint planning, backlog grooming, and user story creation.

Qualifications

🧠 Required Skills & Experience
3–5 years of experience in machine learning engineering, data science, or applied AI roles.
Strong proficiency in Python and ML libraries (e.g., scikit-learn, XGBoost, TensorFlow, PyTorch).
Solid understanding of feature engineering, model evaluation, and MLOps practices.
Experience working with large datasets using SQL and Snowflake.
Familiarity with Databricks for model development and orchestration.
Experience with CI/CD pipelines, version control (Git), and ML workflow tools.
Ability to translate business problems into ML solutions and communicate technical concepts to non-technical stakeholders.
Experience working in agile teams and collaborating with product managers, analysts, and engineers.

⭐ Preferred Qualifications
Experience working in or supporting Finance or Accounting teams.
Prior experience deploying models in production environments and integrating with enterprise systems.
Familiarity with GenAI, prompt engineering, or LLM-based applications is a plus.
Experience with MLflow, Azure ML, or similar platforms.
Comfort with async collaboration tools and practices, including Teams, recorded video demos, and documentation-first communication.
Experience working in a global, cross-functional environment with stakeholders across time zones.

💡 Key Behaviors & Mindsets
Builder's Mentality: You love turning ideas into working models and iterating quickly to improve them.
Collaborative Engineer: You work closely with analysts and product managers to co-create solutions that solve real business problems.
Customer-Centric: You care deeply about the end user and build models that are interpretable, actionable, and aligned with business needs.
Bias for Action: You move fast, test often, and focus on delivering value, not just code.
Global Mindset: You thrive in a distributed team and proactively align to US morning hours (PST overlap) to keep momentum across geographies.
Async-First Communicator: You're comfortable working in a hybrid async environment, leveraging Teams, recorded demos, and documentation to keep work moving forward.
Growth-Oriented: You're always learning, whether it's a new algorithm, tool, or business domain, and you help others grow too.

Additional Information

Work Personas
We approach our distributed world of work with flexibility and trust. Work personas (flexible, remote, or required in office) are categories assigned to ServiceNow employees depending on the nature of their work and their assigned work location. Learn more here. To determine eligibility for a work persona, ServiceNow may confirm the distance between your primary residence and the closest ServiceNow office using a third-party service.

Equal Opportunity Employer
ServiceNow is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, creed, religion, sex, sexual orientation, national origin or nationality, ancestry, age, disability, gender identity or expression, marital status, veteran status, or any other category protected by law. In addition, all qualified applicants with arrest or conviction records will be considered for employment in accordance with legal requirements.

Accommodations
We strive to create an accessible and inclusive experience for all candidates. If you require a reasonable accommodation to complete any part of the application process, or are unable to use this online application and need an alternative method to apply, please contact globaltalentss@servicenow.com for assistance.

Export Control Regulations
For positions requiring access to controlled technology subject to export control regulations, including the U.S. Export Administration Regulations (EAR), ServiceNow may be required to obtain export control approval from government authorities for certain individuals. All employment is contingent upon ServiceNow obtaining any export license or other approval that may be required by relevant export control authorities.

From Fortune. ©2025 Fortune Media IP Limited. All rights reserved. Used under license.
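One of the use cases above, anomaly detection, often starts from something as small as the sketch below: an IsolationForest flagging unusual finance records (the columns and values are invented for illustration):

    # Sketch: flagging outlier transactions with an IsolationForest.
    import pandas as pd
    from sklearn.ensemble import IsolationForest

    df = pd.DataFrame({
        "amount":        [120.0, 95.5, 130.2, 110.7, 9800.0, 101.3],
        "days_to_close": [3, 2, 4, 3, 30, 2],
    })
    iso = IsolationForest(contamination=0.1, random_state=0).fit(df)
    df["anomaly"] = iso.predict(df) == -1   # True = flagged for review
    print(df[df["anomaly"]])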
Posted 4 days ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Position:
We are conducting an in-person hiring drive for Azure Data Engineers in Hyderabad on 28th June 2025.
In-Person Drive Location: Persistent Systems (6th Floor), Gate 11, Salarpuria Sattva Argus, Salarpuria Sattva Knowledge City, beside T-Hub, Shilpa Gram Craft Village, Madhapur, Rai Durg, Hyderabad, Telangana 500081
We are hiring Azure Data Engineers with skills in Azure Databricks, Azure Data Factory, PySpark, and SQL.

Role: Azure Data Engineer
Location: Hyderabad
Experience: 3–8 Years
Job Type: Full-Time Employment

What You'll Do:
Design and implement robust ETL/ELT pipelines using PySpark on Databricks.
Collaborate with data scientists, analysts, and business stakeholders to understand data requirements.
Optimize data workflows for performance and scalability.
Manage and monitor data pipelines in production environments.
Ensure data quality, integrity, and security across all stages of data processing.
Integrate data from various sources, including APIs, databases, and cloud storage.
Develop reusable components and frameworks for data processing.
Document technical solutions and maintain code repositories.

Expertise You'll Bring:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
2+ years of experience in data engineering or software development.
Strong proficiency in PySpark and Apache Spark.
Hands-on experience with the Databricks platform.
Proficiency in SQL and working with relational databases.
Experience with cloud platforms (Azure, AWS, or GCP).
Familiarity with Delta Lake, MLflow, and other Databricks ecosystem tools.
Strong problem-solving and communication skills.

Benefits:
Competitive salary and benefits package.
Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications.
Opportunity to work with cutting-edge technologies.
Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards.
Annual health check-ups.
Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents.

Inclusive Environment:
Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds.
We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive.

Our company fosters a values-driven and people-centric work environment that enables our employees to:
Accelerate growth, both professionally and personally.
Impact the world in powerful, positive ways, using the latest technologies.
Enjoy collaborative innovation, with diversity and work-life wellbeing at the core.
Unlock global opportunities to work and learn with the industry's best.

Let's unleash your full potential at Persistent.
"Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind."
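A minimal sketch of the PySpark ETL pattern this role centers on, assuming Databricks-style mount paths and Delta Lake availability (paths and column names are hypothetical):

    # Sketch: batch ETL with PySpark -> Delta. Paths and columns are illustrative.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders-etl").getOrCreate()

    raw = spark.read.option("header", True).csv("/mnt/raw/orders/")
    clean = (
        raw.withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("amount", F.col("amount").cast("double"))
           .dropDuplicates(["order_id"])
           .filter(F.col("amount") > 0)
    )
    clean.write.format("delta").mode("overwrite").save("/mnt/curated/orders/")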
Posted 4 days ago
8.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
🧠 We're Hiring: Data Architect (8+ Years Experience)
📍 Location: Jaipur
🏢 Hiring Company: ThoughtsWin Systems
📩 Send your CV to: shobha.jain@appzime.com or pragya.pandey@appzime.com

💼 Job Summary:
We're seeking a skilled Data Architect to lead the design and delivery of innovative, scalable data solutions that power our business. With deep expertise in Databricks and cloud platforms like Azure and AWS, you'll architect high-performance data systems and drive impactful analytics. If you thrive on solving complex data challenges, this is your chance to shine.

🔧 Key Responsibilities:
🧱 Design scalable data lakes, warehouses, and real-time streaming architectures
⚙️ Build and optimize data pipelines and Delta Lake solutions using Databricks (Spark, Workflows, SQL Analytics)
☁️ Develop cloud-native data platforms on Azure (Synapse, Data Factory, Data Lake) and AWS (Redshift, Glue, S3)
🔄 Create and automate ETL/ELT workflows with Apache Spark, PySpark, and cloud tools
📊 Design robust data models (dimensional, normalized, star schemas) for analytics and reporting
🚀 Leverage big data technologies like Hadoop, Kafka, and Scala for large-scale processing
🔐 Ensure data governance, security, and compliance (GDPR, HIPAA)
⚡ Optimize Spark workloads and storage for performance
🤝 Collaborate with engineering, analytics, and business teams to align data solutions with goals

✅ Required Skills & Qualifications:
🧠 8+ years in data architecture, engineering, or analytics roles
🔥 Hands-on with Databricks (Delta Lake, Spark, MLflow, pipelines)
☁️ Expertise in Azure (Synapse, Data Lake, Data Factory) and AWS (Redshift, S3, Glue)
🐍 Strong coding in SQL, Python, or Scala
🗃️ Experience with NoSQL (e.g., MongoDB) and streaming tools (e.g., Kafka)
📋 Knowledge of data governance and compliance practices
✨ Excellent problem-solving and communication skills
👥 Ability to work cross-functionally with multiple teams

🚀 Ready to architect the future of data? Send your CV to shobha.jain@appzime.com and be part of a visionary team.
#DataArchitect #BigData #Azure #AWS #Databricks #ETL #DataEngineering #Spark #Hiring #TechJobs #JaipurJobs #ThoughtsWinSystem #AppZimeHiring
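For the real-time streaming responsibility above, a minimal Kafka-to-Delta sketch with Spark Structured Streaming (broker, topic, schema, and paths are all illustrative; the Kafka connector and Delta Lake are assumed to be available on the cluster):

    # Sketch: stream JSON events from Kafka into a Delta bronze table.
    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.types import StructType, StringType, DoubleType

    spark = SparkSession.builder.appName("payments-stream").getOrCreate()

    schema = StructType().add("event_id", StringType()).add("amount", DoubleType())

    events = (spark.readStream.format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
              .option("subscribe", "payments")                    # hypothetical topic
              .load()
              .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
              .select("e.*"))

    query = (events.writeStream.format("delta")
             .option("checkpointLocation", "/mnt/chk/payments/")
             .start("/mnt/bronze/payments/"))
    query.awaitTermination()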
Posted 4 days ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
About Motadata
Motadata is a renowned IT monitoring and management software company that has been transforming how businesses manage their ITOps since its inception. Our vision is to revolutionize the way organizations extract valuable insights from their IT networks. Bootstrapped since inception, Motadata has built a formidable product suite of cutting-edge solutions, empowering enterprises to make informed decisions and optimize their IT infrastructure. As a market leader, we take pride in our ability to collect and analyze data from various sources, in any format, providing a unified view of IT monitoring data.

Position Overview
We are seeking a Senior Machine Learning Engineer to join our team, focused on enhancing our AIOps and IT Service Management (ITSM) product through the integration of cutting-edge AI/ML features and functionality. As part of our innovative approach to revolutionizing the IT industry, you will play a pivotal role in leveraging data analysis techniques and advanced machine learning algorithms to drive meaningful insights and optimize our product's performance. With a particular emphasis on end-to-end machine learning lifecycle management and MLOps, you will collaborate with cross-functional teams to develop, deploy, and continuously improve AI-driven solutions tailored to our customers' needs. From semantic search and AI chatbots to root cause analysis based on metrics, logs, and traces, you will have the opportunity to tackle diverse challenges and shape the future of intelligent IT operations.

Role & Responsibilities
Lead the end-to-end machine learning lifecycle: understand the business problem, convert it into an ML problem statement, and own data acquisition, exploration, feature engineering, model selection, training, evaluation, deployment, and monitoring (MLOps).
Lead a team of ML engineers to solve business problems, get solutions implemented in the product and validated by QA, and improve them based on customer feedback.
Collaborate with product managers to understand business needs and translate them into technical requirements for AI/ML solutions.
Design, develop, and implement machine learning algorithms and models, including but not limited to statistics, regression, classification, clustering, and transformer-based architectures.
Preprocess and analyze large datasets to extract meaningful insights and prepare data for model training.
Build and optimize machine learning pipelines for model training and inference using relevant frameworks.
Fine-tune existing models and/or train custom models to address specific use cases.
Enhance the accuracy and performance of existing AI/ML models through monitoring, iterative refinement, and optimization techniques.
Collaborate closely with cross-functional teams to integrate AI/ML features seamlessly into our product, ensuring scalability, reliability, and maintainability.
Document your work clearly and concisely for future reference and knowledge sharing within the team.
Stay ahead of the latest developments in machine learning research and technology, and evaluate their potential applicability to our product roadmap.

Skills and Qualifications
Bachelor's or higher degree in Computer Science, Engineering, Mathematics, or a related field.
Minimum 5+ years of experience as a Machine Learning Engineer or in a similar role.
Proficiency in data analysis techniques and tools to derive actionable insights from complex datasets.
Solid understanding and practical experience with machine learning algorithms and techniques, including statistics, regression, classification, clustering, and transformer-based models.
Hands-on experience with end-to-end machine learning lifecycle management and MLOps practices.
Proficiency in Python and familiarity with at least one of the following: Java, Golang, .NET, Rust.
Experience with machine learning frameworks/libraries (e.g., TensorFlow, PyTorch, scikit-learn) and MLOps tools (e.g., MLflow, Kubeflow).
Experience with ML.NET and other machine learning frameworks.
Familiarity with natural language processing (NLP) techniques and tools.
Excellent communication and teamwork skills, with the ability to effectively convey complex technical concepts to diverse audiences.
Proven track record of delivering high-quality, scalable machine learning solutions in a production environment.
(ref:hirist.tech)
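The semantic-search use case mentioned above usually reduces to embedding text and ranking by cosine similarity. A minimal sketch, assuming the sentence-transformers library and an off-the-shelf model (the model name and sample tickets are illustrative):

    # Sketch: embed IT tickets, retrieve the closest match to a query.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed public model
    tickets = [
        "Disk usage above 90% on db-node-3",
        "User cannot log in to VPN",
        "High latency on payment API",
    ]
    emb = model.encode(tickets, normalize_embeddings=True)
    q = model.encode(["storage almost full on database server"], normalize_embeddings=True)
    scores = emb @ q.T                                # cosine similarity (unit vectors)
    print(tickets[int(np.argmax(scores))])            # -> the disk-usage ticket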
Posted 5 days ago
5.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Job Summary
We're seeking a hands-on GenAI & Computer Vision Engineer with 3–5 years of experience delivering production-grade AI solutions. You must be fluent in the core libraries, tools, and cloud services listed below, and able to own end-to-end model development, from research and fine-tuning through deployment, monitoring, and iteration. In this role, you'll tackle domain-specific challenges like LLM hallucinations, vector search scalability, real-time inference constraints, and concept drift in vision models.

Key Responsibilities

Generative AI & LLM Engineering
Fine-tune and evaluate LLMs (Hugging Face Transformers, Ollama, LLaMA) for specialized tasks.
Deploy high-throughput inference pipelines using vLLM or Triton Inference Server.
Design agent-based workflows with LangChain or LangGraph, integrating vector databases (Pinecone, Weaviate) for retrieval-augmented generation.
Build scalable inference APIs with FastAPI or Flask, managing batching, concurrency, and rate limiting.

Computer Vision Development
Develop and optimize CV models (YOLOv8, Mask R-CNN, ResNet, EfficientNet, ByteTrack) for detection, segmentation, classification, and tracking.
Implement real-time pipelines using NVIDIA DeepStream or OpenCV (cv2); optimize with TensorRT or ONNX Runtime for edge and cloud deployments.
Handle data challenges (augmentation, domain adaptation, semi-supervised learning) and mitigate model drift in production.

MLOps & Deployment
Containerize models and services with Docker; orchestrate with Kubernetes (KServe) or AWS SageMaker Pipelines.
Implement CI/CD for model and version management (MLflow, DVC), automated testing, and performance monitoring (Prometheus + Grafana).
Manage scalability and cost by leveraging cloud autoscaling on AWS (EC2/EKS), GCP (Vertex AI), or Azure ML (AKS).

Cross-Functional Collaboration
Define SLAs for latency, accuracy, and throughput alongside product and DevOps teams.
Evangelize best practices in prompt engineering, model governance, data privacy, and interpretability.
Mentor junior engineers on reproducible research, code reviews, and end-to-end AI delivery.

Required Qualifications
You must be proficient in at least one tool from each category below:
LLM Frameworks & Tooling: Hugging Face Transformers, Ollama, vLLM, or LLaMA
Agent & Retrieval Tools: LangChain or LangGraph; RAG with Pinecone, Weaviate, or Milvus
Inference Serving: Triton Inference Server; FastAPI or Flask
Computer Vision Frameworks & Libraries: PyTorch or TensorFlow; OpenCV (cv2) or NVIDIA DeepStream
Model Optimization: TensorRT; ONNX Runtime; Torch-TensorRT
MLOps & Versioning: Docker and Kubernetes (KServe, SageMaker); MLflow or DVC
Monitoring & Observability: Prometheus; Grafana
Cloud Platforms: AWS (SageMaker, EC2/EKS), GCP (Vertex AI, AI Platform), or Azure ML (AKS, ML Studio)
Programming Languages: Python (required); C++ or Go (preferred)

Additionally:
Bachelor's or Master's in Computer Science, Electrical Engineering, AI/ML, or a related field.
3–5 years of professional experience shipping both generative and vision-based AI models in production.
Strong problem-solving mindset; ability to debug issues like LLM drift, vector index staleness, and model degradation.
Excellent verbal and written communication skills.

Typical Domain Challenges You'll Solve
LLM Hallucination & Safety: Implement grounding, filtering, and classifier layers to reduce false or unsafe outputs.
Vector DB Scaling: Maintain low-latency, high-throughput similarity search as embeddings grow into the millions.
Inference Latency: Balance batch sizing and concurrency to meet real-time SLAs on cloud and edge hardware.
Concept & Data Drift: Automate drift detection and retraining triggers in vision and language pipelines.
Multi-Modal Coordination: Seamlessly orchestrate data flow between vision models and LLM agents in complex workflows.

About Company
Hi there! We are Auriga IT. We power businesses across the globe through digital experiences, data, and insights. From the apps we design to the platforms we engineer, we're driven by an ambition to create world-class digital solutions and make an impact. Our team has helped build solutions for the likes of Zomato, Yes Bank, Tata Motors, Amazon, Snapdeal, Ola, Practo, Vodafone, Meesho, Volkswagen, Droom and many more. We are a group of people who just could not leave our college life behind; the inception of Auriga was based solely on a desire to keep working together with friends and enjoying an extended college life. Who hasn't dreamt of working with friends for a lifetime? Come join in!
Our website: https://aurigait.com/
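A minimal sketch of the ONNX Runtime inference path named in the qualifications, with a random tensor standing in for a preprocessed frame ("detector.onnx" and its 1x3x640x640 input layout are assumptions):

    # Sketch: run a vision model exported to ONNX on CPU.
    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession("detector.onnx", providers=["CPUExecutionProvider"])
    inp = sess.get_inputs()[0]

    frame = np.random.rand(1, 3, 640, 640).astype(np.float32)  # stand-in NCHW frame
    outputs = sess.run(None, {inp.name: frame})
    print([o.shape for o in outputs])   # raw heads; decoding/NMS would follow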
Posted 5 days ago
0 years
0 Lacs
Greater Kolkata Area
On-site
Join our Team

About this opportunity:
We are seeking a highly motivated and skilled Data Engineer to join our cross-functional team of Data Architects and Data Scientists. This role offers an exciting opportunity to work on large-scale data infrastructure and AI/ML pipelines, driving intelligent insights and scalable solutions across the organization.

What you will do:
Build, optimize, and maintain robust ETL/ELT pipelines to support AI/ML and analytics workloads.
Collaborate closely with Data Scientists to productionize ML models, ensuring scalable deployment and monitoring.
Design and implement cloud-based data lake and data warehouse architectures.
Ensure high data quality, governance, security, and observability across data platforms.
Develop and manage real-time and batch data workflows using tools like Apache Spark, Airflow, and Kafka.
Support CI/CD and MLOps workflows using tools like GitHub Actions, Docker, Kubernetes, and MLflow.

The skills you bring:
Languages: Python, SQL, Bash
Data Tools: Apache Spark, Airflow, Kafka, dbt, Pandas
Cloud Platforms: AWS (preferred), Azure, or GCP
Databases: Snowflake, Redshift, BigQuery, PostgreSQL, NoSQL (MongoDB/DynamoDB)
DevOps/MLOps: Docker, Kubernetes, MLflow, CI/CD (e.g., GitHub Actions, Jenkins)
Data Modeling: OLAP/OLTP, star/snowflake schemas, Data Vault

Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity: the chance to use your skills and imagination to push the boundaries of what's possible, and to build never-before-seen solutions to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
Click here to find all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer. Learn more.

Primary country and city: India (IN) || Kolkata
Req ID: 768921
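A minimal sketch of the batch-workflow responsibility above as an Airflow DAG (parameter names follow recent Airflow 2.x releases; the DAG id and task bodies are placeholders):

    # Sketch: a two-step daily ETL DAG.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pulling raw events")     # placeholder extract step

    def transform():
        print("cleaning and joining")   # placeholder transform step

    with DAG(dag_id="events_daily", start_date=datetime(2025, 1, 1),
             schedule="@daily", catchup=False) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_extract >> t_transform        # transform runs only after extract succeeds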
Posted 5 days ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Data Scientist
Experience range: 3+ years
Location: CloudLex Pune Office (in person, Monday to Friday, 9:30 AM – 6:30 PM)

Responsibilities
Design and implement AI agent workflows. Develop end-to-end intelligent pipelines and multi-agent systems (e.g., LangGraph/LangChain workflows) that coordinate multiple LLM-powered agents to solve complex tasks. Create graph-based or state-machine architectures for AI agents, chaining prompts and tools as needed.
Build and fine-tune generative models. Develop, train, and fine-tune advanced generative models (transformers, diffusion models, VAEs, GANs, etc.) on domain-specific data. Deploy and optimize foundation models (such as GPT, LLaMA, Mistral) in production, adapting them to our use cases through prompt engineering and supervised fine-tuning.
Develop data pipelines. Build robust data collection, preprocessing, and synthetic data generation pipelines to feed training and inference workflows. Implement data cleansing, annotation, and augmentation processes to ensure high-quality inputs for model training and evaluation.
Implement LLM-based agents and automation. Integrate generative AI agents (e.g., chatbots, AI copilots, content generators) into business processes to automate data processing and decision-making tasks. Use retrieval-augmented generation (RAG) pipelines and external knowledge sources to enhance agent capabilities. Leverage multimodal inputs where applicable.
Optimize performance and safety. Continuously evaluate and improve model and system performance. Use GenAI-specific benchmarks and metrics (e.g., BLEU, ROUGE, TruthfulQA) to assess results, and iterate to optimize accuracy, latency, and resource efficiency. Implement safeguards and monitoring to mitigate issues like bias, hallucination, or inappropriate outputs.
Collaborate and document. Work closely with product managers, engineers, and other stakeholders to gather requirements and integrate AI solutions into production systems. Document data workflows, model architectures, and experimentation results. Maintain code and tooling (prompt libraries, model registries) to ensure reproducibility and knowledge sharing.

Required Skills & Qualifications
Education: Bachelor's or Master's degree in Computer Science, Data Science, Artificial Intelligence, or a related quantitative field (or equivalent practical experience). A strong foundation in algorithms, statistics, and software engineering is expected.
Programming proficiency: Expert-level skills in Python, with hands-on experience in machine learning and deep learning frameworks (PyTorch, TensorFlow). Comfortable writing production-quality code and using version control, testing, and code review workflows.
Generative model expertise: Demonstrated ability to build, fine-tune, and deploy large-scale generative models. Familiarity with transformer architectures and generative techniques (LLMs, diffusion models, GANs). Experience working with model repositories and fine-tuning frameworks (Hugging Face, etc.).
LLM and agent frameworks: Strong understanding of LLM-based systems and agent-oriented AI patterns. Experience with frameworks like LangGraph/LangChain or similar multi-agent platforms. Knowledge of agent communication standards (e.g., MCP/Agent Protocol) to enable interoperability between AI agents.
AI integration and MLOps: Experience integrating AI components with existing systems via APIs and services. Proficiency in retrieval-augmented generation (RAG) setups, vector databases, and prompt engineering. Familiarity with machine learning deployment and MLOps tools (Docker, Kubernetes, MLflow, KServe, etc.) for managing end-to-end automation and scalable workflows.
Familiarity with GenAI tools: Hands-on experience with state-of-the-art GenAI models and APIs (OpenAI GPT, Anthropic Claude, etc.) and with popular libraries (Hugging Face Transformers, LangChain, etc.). Awareness of the current GenAI tooling ecosystem and best practices.
Soft skills: Excellent problem-solving and analytical abilities. Strong communication and teamwork skills to collaborate across data, engineering, and business teams. Attention to detail and a quality-oriented mindset. (See Ideal Candidate below for more on personal attributes.)

Ideal Candidate
Innovative problem-solver: You are a creative thinker who enjoys tackling open-ended challenges. You have a solutions-oriented mindset and proactively experiment with new ideas and techniques.
Systems thinker: You understand how different components (data, models, services) fit together in a large system. You can architect end-to-end AI solutions with attention to reliability, scalability, and integration points.
Collaborative communicator: You work effectively in multidisciplinary teams. You are able to explain complex technical concepts to non-technical stakeholders and incorporate feedback. You value knowledge sharing and mentorship.
Adaptable learner: The generative AI landscape evolves rapidly. You are passionate about staying current with the latest research and tools. You embrace continuous learning and are eager to upskill and try new libraries or platforms.
Ethical and conscientious: You care about the real-world impact of AI systems. You take responsibility for the quality and fairness of models, and proactively address concerns like data privacy, bias, and security.
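The graph/state-machine agent architecture described above can be illustrated without committing to a particular framework: nodes mutate shared state and return the name of the next node. A minimal plain-Python sketch (call_llm is a hypothetical stand-in for a real model client):

    # Sketch: a two-node plan/act agent loop over a shared state dict.
    from typing import Callable, Dict

    def call_llm(prompt: str) -> str:
        return "FINAL: stub answer to: " + prompt     # placeholder model call

    def plan(state: dict) -> str:
        state["plan"] = call_llm("Plan steps for: " + state["task"])
        return "act"

    def act(state: dict) -> str:
        state["answer"] = call_llm("Execute plan: " + state["plan"])
        return "done" if state["answer"].startswith("FINAL:") else "plan"

    NODES: Dict[str, Callable[[dict], str]] = {"plan": plan, "act": act}

    state, node = {"task": "summarize intake documents"}, "plan"
    while node != "done":                 # walk the graph until the terminal node
        node = NODES[node](state)
    print(state["answer"])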
Posted 5 days ago
3.0 years
0 Lacs
India
Remote
Job Title: AI Image Processing Specialist
Location: Remote / Jaipur
Job Type: Full-time / Contract
Experience: 3+ years in computer vision; medical imaging a plus

Job Summary
We are seeking a highly skilled and detail-oriented AI Image Processing Specialist to join our team, with a strong focus on medical imaging, computer vision, and deep learning. In this role, you will be responsible for developing and optimizing scalable image processing pipelines tailored for diagnostic, radiological, and clinical applications. Your work will directly contribute to advancing AI capabilities in healthcare by enabling accurate, efficient, and compliant medical data analysis. You will collaborate with data scientists, software engineers, and healthcare professionals to build cutting-edge AI solutions with real-world impact.

Key Responsibilities
Design, develop, and maintain robust image preprocessing pipelines that handle common medical imaging formats such as DICOM, NIfTI, and JPEG2000.
Build automated, containerized, and scalable computer vision workflows suitable for high-throughput medical imaging analysis.
Implement and fine-tune models for core vision tasks, including image segmentation, classification, object detection, and landmark detection, using deep learning techniques.
Ensure that all data handling, processing, and model training pipelines adhere to regulatory guidelines such as HIPAA, GDPR, and FDA/CE requirements.
Optimize performance across pipeline stages, including data augmentation, normalization, contrast adjustment, and image registration, to ensure consistent model accuracy.
Integrate annotation workflows using tools such as CVAT, Labelbox, or SuperAnnotate, and implement strategies for active learning and semi-supervised annotation.
Manage reproducibility and version control across datasets and model artifacts using tools like DVC, MLflow, and Airflow.

Required Skills
Strong experience with Python and image processing libraries such as OpenCV, scikit-image, and SimpleITK.
Proficiency in deep learning frameworks like TensorFlow or PyTorch, including experience with model architectures like U-Net, ResNet, or YOLO adapted for medical applications.
Deep understanding of medical imaging formats, preprocessing techniques (e.g., windowing, denoising, bias field correction), and challenges specific to healthcare datasets.
Experience with computer vision tasks such as semantic segmentation, instance segmentation, object localization, and detection.
Familiarity with annotation platforms, data curation workflows, and techniques for managing large annotated datasets.
Experience with pipeline orchestration, containerization (Docker), and reproducibility tools such as Airflow, DVC, or MLflow.

Preferred Qualifications
Experience with domain-specific imaging datasets in radiology, pathology, dermatology, or ophthalmology.
Understanding of clinical compliance frameworks such as FDA clearance for software as a medical device (SaMD) or CE marking in the EU.
Exposure to multi-modal data fusion, combining imaging with EHR, genomics, or lab data for holistic model development.

Why Join Us
Be part of a forward-thinking team shaping the future of AI in healthcare. You'll work on impactful projects that improve patient outcomes, streamline diagnostics, and enhance clinical decision-making. We offer a collaborative environment, opportunities for innovation, and a chance to work at the cutting edge of AI-driven healthcare.

Skills: Docker, U-Net, MLflow, containerization, image segmentation, SimpleITK, YOLO, image processing, computer vision, medical imaging, object detection, TensorFlow, OpenCV, PyTorch, image preprocessing, ResNet, Python, DVC, Airflow, scikit-image, annotation workflows
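A minimal sketch of the DICOM windowing step mentioned under preprocessing, using pydicom (the file path and the soft-tissue window of center 40 / width 400 HU are illustrative):

    # Sketch: rescale a CT slice to Hounsfield units, window, and normalize.
    import numpy as np
    import pydicom

    ds = pydicom.dcmread("scan_0001.dcm")             # hypothetical CT slice
    hu = (ds.pixel_array * float(getattr(ds, "RescaleSlope", 1))
          + float(getattr(ds, "RescaleIntercept", 0)))

    center, width = 40.0, 400.0                       # assumed soft-tissue window
    lo, hi = center - width / 2, center + width / 2
    x = np.clip(hu, lo, hi)
    x = ((x - lo) / (hi - lo)).astype(np.float32)     # 0..1 input for the model
    print(x.shape, float(x.min()), float(x.max()))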
Posted 5 days ago