6.0 - 8.0 years
25 - 40 Lacs
Gurugram
Work from Office
About this role: Lead Software Engineer (AI) position with experience in classic and generative AI techniques, responsible for the design, implementation, and support of Python-based applications that help fulfill our Research & Consulting Delivery strategy.

What you'll do:
- Deliver client engagements that use AI rapidly, on the order of a few weeks
- Stay on top of current tools, techniques, and frameworks to be able to use them and advise clients on them
- Build proofs of concept rapidly, to learn and adapt to changing market needs
- Support building internal applications for use by associates to improve productivity

What you'll need:
- 6-8 years of experience in classic AI techniques and at least 1.5 years in generative AI techniques
- Demonstrated ability to run short development cycles and a solid grasp of building software in a collaborative team setting

Must have:
- Experience building applications for knowledge search and summarization, frameworks to evaluate and compare the performance of different GenAI techniques, measuring and improving the accuracy and helpfulness of generative responses, and implementing observability
- Experience with agentic AI frameworks, RAG, embedding models, and vector databases
- Experience with Python libraries such as Pandas, Scikit-Learn, NumPy, and SciPy
- Experience deploying applications to cloud platforms such as Azure and AWS
- Solid grasp of building software in a collaborative team setting, using agile Scrum and tools like Jira and GitHub

Nice to have:
- Experience fine-tuning language models
- Familiarity with AWS Bedrock, Azure AI, or Databricks services
- Experience with machine learning models and techniques such as NLP, BERT, Transformers, and deep learning
- Experience with MLOps frameworks such as Kubeflow, MLflow, DataRobot, and Airflow
- Experience building scalable data models and performing complex relational database queries using SQL (Oracle, MySQL, PostgreSQL)
Who you are:
- Excellent written, verbal, and interpersonal communication skills, with the ability to present technical information clearly and concisely to IT leaders and business stakeholders
- Effective time management skills and the ability to meet deadlines
- Excellent organization, multitasking, and prioritization skills
- Willingness and aptitude to embrace new technologies and ideas and to master concepts rapidly
- Intellectual curiosity and a passion for technology and keeping up with new trends
- Track record of delivering project work on time, within budget, and with high quality
- Demonstrated ability to run short development cycles
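The RAG and vector-database requirement in this posting boils down to one core step: embed a query, score it against stored document embeddings, and return the closest matches. Below is a minimal pure-Python sketch of that retrieval step, using a toy bag-of-words embedding in place of a real embedding model and a plain list in place of a vector database (all data here is invented for illustration):

```python
# Minimal retrieval step of a RAG pipeline (illustrative sketch).
# A toy bag-of-words embedding stands in for a real embedding model,
# and a plain list stands in for a vector database.
import math
from collections import Counter

def embed(text):
    """Toy embedding: lowercase bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Quarterly revenue grew due to cloud adoption",
    "The office cafeteria menu changes weekly",
    "Cloud revenue growth was driven by enterprise clients",
]
top = retrieve("what drove cloud revenue growth", docs, k=2)
```

A production system would swap in a real embedding model and an approximate-nearest-neighbor index from a vector database; the ranking logic stays the same.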
Posted 14 hours ago
4.0 - 6.0 years
7 - 11 Lacs
Hyderabad, Chennai
Work from Office
Job Title: Data Scientist
Location State: Tamil Nadu, Telangana
Location City: Hyderabad, Chennai
Experience Required: 4 to 6 years
CTC Range: 7 to 11 LPA
Shift: Day Shift
Work Mode: Onsite
Position Type: C2H
Openings: 2
Company Name: VARITE INDIA PRIVATE LIMITED

About The Client: The client is an Indian multinational technology company specializing in information technology services and consulting. Headquartered in Mumbai, it is part of the Tata Group and operates in 150 locations across 46 countries.

About The Job:
Requirements:
- 5+ years in predictive analytics, with expertise in regression, classification, and time-series modeling
- Hands-on experience with Databricks Runtime for ML, Spark SQL, and PySpark
- Familiarity with MLflow, Feature Store, and Unity Catalog for governance
- Industry experience in Life Insurance or P&C

Skills:
- Python, PySpark, MLflow, Databricks AutoML
- Predictive modeling (classification, clustering, regression, time series, and NLP)
- Cloud platforms (Azure/AWS), Delta Lake, Unity Catalog
- Certifications: Databricks Certified ML Practitioner (optional)

Essential Job Functions:
- Design and deploy predictive models (e.g., forecasting, churn analysis, fraud detection) using Python/SQL, Spark MLlib, and Databricks ML
- Build end-to-end ML pipelines (data ingestion, feature engineering, model training, deployment) on the Databricks Lakehouse
- Optimize model performance via hyperparameter tuning, AutoML, and MLflow tracking
- Collaborate with engineering teams to operationalize models (batch/real-time) using Databricks Jobs or REST APIs
- Implement Delta Lake for scalable, ACID-compliant data workflows
- Enable CI/CD for ML pipelines using Databricks Repos and GitHub Actions
- Troubleshoot issues in Spark jobs and the Databricks environment
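The hyperparameter-tuning function above can be pictured with a deliberately tiny sketch: try each candidate setting, score it, and keep the best. The toy churn data, scoring rule, and threshold grid below are all invented for the example; in the role described, MLflow or Databricks AutoML would track these runs instead of a plain dict:

```python
# Illustrative sketch of a hyperparameter-tuning loop.
# Data, the toy model, and the parameter grid are invented for the example.

# toy churn data: (months_inactive, support_calls) -> churned (0/1)
data = [
    ((1, 0), 0), ((2, 1), 0), ((6, 4), 1), ((8, 5), 1),
    ((3, 1), 0), ((7, 3), 1), ((2, 0), 0), ((9, 6), 1),
]

def predict(features, threshold):
    """Toy model: flag churn when a weighted score passes a threshold."""
    months, calls = features
    return 1 if months + 2 * calls >= threshold else 0

def accuracy(threshold):
    """Fraction of rows the toy model classifies correctly."""
    correct = sum(predict(x, threshold) == y for x, y in data)
    return correct / len(data)

# grid search: evaluate each candidate, record the run, keep the best
runs = {t: accuracy(t) for t in (4, 8, 12, 16)}
best_threshold = max(runs, key=runs.get)
```

An experiment tracker adds persistence, metadata, and comparison UI on top of exactly this loop: one recorded run per parameter combination.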
Qualifications:
Skill Required: Data Science, Python for Data Science
Experience Range in Required Skills: 4-6 years

How to Apply: Interested candidates are invited to submit their resume using the apply online button on this job post.

About VARITE: VARITE is a global staffing and IT consulting company providing technical consulting and team augmentation services to Fortune 500 companies in the USA, UK, Canada, and India. VARITE is currently a primary and direct vendor to leading corporations in the verticals of Networking, Cloud Infrastructure, Hardware and Software, Digital Marketing and Media Solutions, Clinical Diagnostics, Utilities, Gaming and Entertainment, and Financial Services.

Equal Opportunity Employer: VARITE is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We do not discriminate on the basis of race, color, religion, sex, sexual orientation, gender identity or expression, national origin, age, marital status, veteran status, or disability status.

Unlock Rewards: Refer Candidates and Earn. If you're not available or interested in this opportunity, please pass it along to anyone in your network who might be a good fit and interested in our open positions. VARITE offers a Candidate Referral program, where you'll receive a one-time referral bonus on the following scale if the referred candidate completes a three-month assignment with VARITE:
- 0-2 years' experience: INR 5,000
- 2-6 years' experience: INR 7,500
- 6+ years' experience: INR 10,000
Posted 14 hours ago
5.0 years
0 Lacs
India
Remote
Job Title: MLOps Engineer – AWS SageMaker & MLflow (Remote)
Location: Fully Remote (India Preferred)
Experience Level: 5+ Years
Employment Type: Full-Time
Company: KrtrimaIQ Cognitive Solutions
📧 Apply at: nikhil.kumar@krtrimaiq.ai

Job Summary: KrtrimaIQ Cognitive Solutions is looking for a Senior MLOps Engineer with deep expertise in AWS SageMaker, MLflow, and end-to-end machine learning lifecycle management. The ideal candidate will have hands-on experience deploying and managing scalable ML solutions in production environments and a passion for building reliable MLOps systems. This is a fully remote opportunity offering flexibility and the chance to work on innovative AI/ML products for global enterprises.

Key Responsibilities:
- Design, build, and maintain scalable and secure MLOps pipelines using AWS SageMaker and MLflow
- Automate the ML model lifecycle: training, testing, tracking, versioning, and deployment to production
- Implement CI/CD pipelines and DevOps practices for ML infrastructure
- Ensure reproducibility, monitoring, and performance optimization of deployed models
- Collaborate closely with data scientists, data engineers, and DevOps teams to streamline workflows
- Contribute to MLOps research and explore new tools and frameworks in the production ML space
- Ensure compliance with best practices in cloud computing, software development, and data security

Must-Have Qualifications:
- 5+ years of professional experience in software engineering or MLOps
- Expertise in AWS, specifically AWS SageMaker, for building and deploying ML models
- Hands-on experience with MLflow for model tracking, versioning, and deployment
- Strong programming skills in Python (R, Scala, or Spark is a plus)
- Solid experience in production-grade development and infrastructure automation
- Strong problem-solving, analytical, and research skills

Preferred Qualifications:
- Experience with AWS DataZone
- Familiarity with Docker, Kubernetes, Airflow, Terraform, or other orchestration tools
- Understanding of data versioning, feature stores, and ML monitoring tools
- Exposure to MLOps research and experimentation in startup/innovation environments

What We Offer:
💻 100% remote work flexibility
🌐 Exposure to enterprise-grade AI and MLOps use cases
🤝 Collaborative work culture focused on innovation and learning
🚀 Fast-paced, startup-like environment with global project opportunities

How to Apply: Send your updated CV to nikhil.kumar@krtrimaiq.ai
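The model lifecycle this posting describes (tracking, versioning, staged deployment) can be pictured with a toy registry. This is a conceptual sketch only, not MLflow's actual API: each registered version starts in a Staging stage, and promoting a version to Production archives the previous Production version:

```python
# Conceptual sketch of model versioning and stage promotion, the workflow
# a model registry such as MLflow's provides; this toy class is invented
# for illustration and is not any real library's API.

class ModelRegistry:
    def __init__(self):
        self._versions = {}  # name -> list of version dicts (index = version - 1)

    def register(self, name, metrics):
        """Register a new model version; it starts in the Staging stage."""
        self._versions.setdefault(name, []).append(
            {"metrics": metrics, "stage": "Staging"}
        )
        return len(self._versions[name])  # version number

    def promote(self, name, version):
        """Move one version to Production, archiving the previous one."""
        for v in self._versions[name]:
            if v["stage"] == "Production":
                v["stage"] = "Archived"
        self._versions[name][version - 1]["stage"] = "Production"

    def production_version(self, name):
        """Return the version number currently serving in Production."""
        for i, v in enumerate(self._versions[name], start=1):
            if v["stage"] == "Production":
                return i
        return None

registry = ModelRegistry()
v1 = registry.register("churn-model", {"auc": 0.81})
v2 = registry.register("churn-model", {"auc": 0.86})
registry.promote("churn-model", v2)
```

The one-Production-at-a-time invariant is what makes rollback safe: demoting never deletes, so an earlier version can be promoted back at any time.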
Posted 14 hours ago
7.0 years
8 - 10 Lacs
Hyderābād
On-site
Job Description:
Key Responsibilities:

Data Engineering & Architecture:
- Design, develop, and maintain high-performance data pipelines for structured and unstructured data using Azure Databricks and Apache Spark
- Build and manage scalable data ingestion frameworks for batch and real-time data processing
- Implement and optimize data lake architecture in Azure Data Lake to support analytics and reporting workloads
- Develop and optimize data models and queries in Azure Synapse Analytics to power BI and analytics use cases

Cloud-Based Data Solutions:
- Architect and implement modern data lakehouses, combining the best of data lakes and data warehouses
- Leverage Azure services like Data Factory, Event Hub, and Blob Storage for end-to-end data workflows
- Ensure security, compliance, and governance of data through Azure role-based access control (RBAC) and Data Lake ACLs

ETL/ELT Development:
- Develop robust ETL/ELT pipelines using Azure Data Factory, Databricks notebooks, and PySpark
- Perform data transformations, cleansing, and validation to prepare datasets for analysis
- Manage and monitor job orchestration, ensuring pipelines run efficiently and reliably

Performance Optimization:
- Optimize Spark jobs and SQL queries for large-scale data processing
- Implement partitioning, caching, and indexing strategies to improve the performance and scalability of big data workloads
- Conduct capacity planning and recommend infrastructure optimizations for cost-effectiveness

Collaboration & Stakeholder Management:
- Work closely with business analysts, data scientists, and product teams to understand data requirements and deliver solutions
- Participate in cross-functional design sessions to translate business needs into technical specifications
- Provide thought leadership on best practices in data engineering and cloud computing

Documentation & Knowledge Sharing:
- Create detailed documentation for data workflows, pipelines, and architectural decisions
- Mentor junior team members and promote a culture of learning and innovation

Required Qualifications:

Experience:
- 7+ years of experience in data engineering, big data, or cloud-based data solutions
- Proven expertise with Azure Databricks, Azure Data Lake, and Azure Synapse Analytics

Technical Skills:
- Strong hands-on experience with Apache Spark and distributed data processing frameworks
- Advanced proficiency in Python and SQL for data manipulation and pipeline development
- Deep understanding of data modeling for OLAP, OLTP, and dimensional data models
- Experience with ETL/ELT tools like Azure Data Factory or Informatica
- Familiarity with Azure DevOps for CI/CD pipelines and version control

Big Data Ecosystem:
- Familiarity with Delta Lake for managing big data in Azure
- Experience with streaming data frameworks like Kafka, Event Hub, or Spark Streaming

Cloud Expertise:
- Strong understanding of Azure cloud architecture, including storage, compute, and networking
- Knowledge of Azure security best practices, such as encryption and key management

Preferred Skills (Nice to Have):
- Experience with machine learning pipelines and frameworks like MLflow or Azure Machine Learning
- Knowledge of data visualization tools such as Power BI for creating dashboards and reports
- Familiarity with Terraform or ARM templates for infrastructure as code (IaC)
- Exposure to NoSQL databases like Cosmos DB or MongoDB
- Experience with data governance

Weekly Hours: 40
Time Type: Regular
Location: Hyderabad, Andhra Pradesh, India

It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state, or local law.
In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.
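To make the partitioning strategy in this posting concrete: laying data out by a key means a query filtered on that key scans one partition rather than the full dataset. A toy in-memory sketch follows (the dates and amounts are invented; Spark's partitionBy and Delta Lake apply the same idea to files at scale):

```python
# Sketch of partition pruning: data written out grouped by a key lets a
# filtered query touch one partition instead of every row. The rows are
# invented for illustration.
from collections import defaultdict

rows = [
    {"date": "2024-01-01", "amount": 10},
    {"date": "2024-01-01", "amount": 25},
    {"date": "2024-01-02", "amount": 40},
    {"date": "2024-01-03", "amount": 5},
]

# "write" partitioned by date (analogous to partitionBy("date") in Spark)
partitions = defaultdict(list)
for row in rows:
    partitions[row["date"]].append(row)

# a query filtered on the partition key reads one partition only
scanned = partitions["2024-01-01"]
total = sum(r["amount"] for r in scanned)  # scans 2 rows, not 4
```

Caching plays the complementary role: once a partition (or intermediate result) is hot, repeated queries skip the read entirely, which is what Spark's cache() and Delta's data skipping build on.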
Posted 14 hours ago
5.0 - 8.0 years
0 Lacs
Hyderābād
On-site
At CGI, we're a team of builders. We call our employees members because all who join CGI are building their own company - one that has grown to 72,000 professionals located in 40 countries. Founded in 1976, CGI is a leading IT and business process services firm committed to helping clients succeed. We have the global resources, expertise, stability, and dedicated professionals needed to achieve results for our clients - and for our members. Come grow with us. Learn more at www.cgi.com.

This is a great opportunity to join a winning team. CGI offers a competitive compensation package with opportunities for growth and professional development. Benefits for full-time, permanent members start on the first day of employment and include a paid time-off program and profit participation and stock purchase plans. We wish to thank all applicants for their interest and effort in applying for this position; however, only candidates selected for interviews will be contacted. No unsolicited agency referrals, please.

Job Title: MLOps Data Engineer
Position: Senior Software Engineer
Experience: 5 - 8 Years
Category: Software Development/Engineering
Main location: Bangalore/Hyderabad/Chennai
Position ID: J0325-0709
Employment Type: Full Time

Works independently under limited supervision and applies knowledge of the subject matter in applications development. Possesses sufficient knowledge and skills to effectively deal with issues and challenges within the field of specialization and to develop simple application solutions. Second-level professional with direct impact on results and outcomes.
Qualification: Bachelor's degree in Computer Science or a related field, or higher, with a minimum of 4 years of relevant experience.

Your future duties and responsibilities:
- Development experience in data technologies
- Experience with data wrangling and preparation for use within data science, business intelligence, or similar analytical functions
- Experience in data technologies: Hadoop, and PySpark or Scala (any one)
- Experience in machine learning
- Experience with databases: at least two of RDBMS (Oracle/Teradata), Hive/Impala, and MongoDB
- Experience in analytics and reporting (Tableau) is good to have
- Coordinate with the team on project deliverables; lead and document project status meetings
- Highly organized, with good time management skills and a customer service orientation

Required qualifications to be successful in this role:
Position: Senior Software Engineer
Experience: 5 - 8 Years
Main location: Hyderabad/Bangalore/Chennai

Must-Have Skills:
- Advanced Python programming skills
- Proven experience implementing machine learning data workflows
- Proficiency with data technologies including Hadoop, Spark, and Kafka
- Strong programming background in Java, Scala, and SQL
- Experience implementing Jenkins CI/CD pipelines
- Hands-on experience with data engineering in a Cloudera Data Platform (CDP) environment
- Experience with Spark Structured Streaming for near-real-time data processing
- Docker and Kubernetes containerization expertise
- Experience with Hopsworks Feature Store implementation
- Hands-on experience with cloud platforms (AWS, GCP, or Azure)
- Experience with MLOps tools and frameworks (e.g., Kubeflow, MLflow)
- Familiarity with model versioning, continuous integration, and deployment in machine learning pipelines
- Knowledge of data model management and automated testing in machine learning environments
- Bachelor's degree in computer science or a related field
- Excellent communication skills and a collaborative mindset

Good-to-Have Skills:
- Knowledge of IBM mainframe: monitoring jobs in the CA7 scheduler and creating CA7 jobs to automate data loads
- Knowledge of DOC05, DOC40, and AGENT is beneficial

Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect, and belonging. Here, you'll reach your full potential because you are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction. Your work creates value. You'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You'll shape your career by joining a company built to grow and last. You'll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team - one of the largest IT and business consulting services firms in the world.
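The near-real-time processing skill above centers on windowed aggregation: assigning each event to a fixed time window and aggregating per window. Here is a toy sketch of a tumbling window (the timestamps, values, and 10-second window size are invented; Spark Structured Streaming performs the same computation incrementally over unbounded streams):

```python
# Toy sketch of the tumbling-window aggregation that streaming engines
# such as Spark Structured Streaming perform; events and the window size
# are invented for illustration.
from collections import defaultdict

events = [  # (epoch_seconds, value)
    (100, 4), (103, 6), (109, 1),   # fall in window [100, 110)
    (112, 7), (118, 3),             # fall in window [110, 120)
    (121, 5),                       # falls in window [120, 130)
]

WINDOW = 10  # seconds per tumbling window

windows = defaultdict(int)
for ts, value in events:
    window_start = ts - (ts % WINDOW)  # assign the event to its window
    windows[window_start] += value
```

A real streaming job adds the hard parts this sketch omits: out-of-order events, watermarks for closing windows, and fault-tolerant state.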
Posted 14 hours ago
7.0 years
2 - 9 Lacs
Hyderābād
On-site
Job Description:
Key Responsibilities:

Data Engineering & Architecture:
- Design, develop, and maintain high-performance data pipelines for structured and unstructured data using Azure Databricks and Apache Spark
- Build and manage scalable data ingestion frameworks for batch and real-time data processing
- Implement and optimize data lake architecture in Azure Data Lake to support analytics and reporting workloads
- Develop and optimize data models and queries in Azure Synapse Analytics to power BI and analytics use cases

Cloud-Based Data Solutions:
- Architect and implement modern data lakehouses, combining the best of data lakes and data warehouses
- Leverage Azure services like Data Factory, Event Hub, and Blob Storage for end-to-end data workflows
- Ensure security, compliance, and governance of data through Azure role-based access control (RBAC) and Data Lake ACLs

ETL/ELT Development:
- Develop robust ETL/ELT pipelines using Azure Data Factory, Databricks notebooks, and PySpark
- Perform data transformations, cleansing, and validation to prepare datasets for analysis
- Manage and monitor job orchestration, ensuring pipelines run efficiently and reliably

Performance Optimization:
- Optimize Spark jobs and SQL queries for large-scale data processing
- Implement partitioning, caching, and indexing strategies to improve the performance and scalability of big data workloads
- Conduct capacity planning and recommend infrastructure optimizations for cost-effectiveness

Collaboration & Stakeholder Management:
- Work closely with business analysts, data scientists, and product teams to understand data requirements and deliver solutions
- Participate in cross-functional design sessions to translate business needs into technical specifications
- Provide thought leadership on best practices in data engineering and cloud computing

Documentation & Knowledge Sharing:
- Create detailed documentation for data workflows, pipelines, and architectural decisions
- Mentor junior team members and promote a culture of learning and innovation

Required Qualifications:

Experience:
- 7+ years of experience in data engineering, big data, or cloud-based data solutions
- Proven expertise with Azure Databricks, Azure Data Lake, and Azure Synapse Analytics

Technical Skills:
- Strong hands-on experience with Apache Spark and distributed data processing frameworks
- Advanced proficiency in Python and SQL for data manipulation and pipeline development
- Deep understanding of data modeling for OLAP, OLTP, and dimensional data models
- Experience with ETL/ELT tools like Azure Data Factory or Informatica
- Familiarity with Azure DevOps for CI/CD pipelines and version control

Big Data Ecosystem:
- Familiarity with Delta Lake for managing big data in Azure
- Experience with streaming data frameworks like Kafka, Event Hub, or Spark Streaming

Cloud Expertise:
- Strong understanding of Azure cloud architecture, including storage, compute, and networking
- Knowledge of Azure security best practices, such as encryption and key management

Preferred Skills (Nice to Have):
- Experience with machine learning pipelines and frameworks like MLflow or Azure Machine Learning
- Knowledge of data visualization tools such as Power BI for creating dashboards and reports
- Familiarity with Terraform or ARM templates for infrastructure as code (IaC)
- Exposure to NoSQL databases like Cosmos DB or MongoDB
- Experience with data governance

Weekly Hours: 40
Time Type: Regular
Location: Hyderabad, Andhra Pradesh, India

It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state, or local law.
In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.
Posted 14 hours ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Company Description
NielsenIQ is a consumer intelligence company that delivers the Full View™, the world's most complete and clear understanding of consumer buying behavior that reveals new pathways to growth. Since 1923, NIQ has moved measurement forward for industries and economies across the globe. We are putting the brightest and most dedicated minds together to accelerate progress. Our diversity brings out the best in each other so we can leave a lasting legacy on the work that we do and the people that we do it with. NielsenIQ offers a range of products and services that leverage machine learning and artificial intelligence to provide insights into consumer behavior and market trends. This position opens the opportunity to apply the latest state of the art in AI/ML and data science to global and key strategic projects.

Job Description
We are looking for a Research Scientist with a data-centric mindset to join our applied research and innovation team. The ideal candidate will have a strong background in machine learning, deep learning, operationalization of AI/ML, and process automation. You will be responsible for analyzing data, researching the most appropriate techniques, and the development, testing, support, and delivery of proofs of concept to solve challenging real-world, large-scale problems.

Job Responsibilities
- Develop and apply machine learning innovations with minimal technical supervision
- Understand the requirements from stakeholders and communicate results and conclusions in a way that is accurate, clear, and winsome
- Perform feasibility studies and analyse data to determine the most appropriate solution
- Work on many different data challenges, always ensuring a combination of simplicity, scalability, reproducibility, and maintainability within the ML solutions and source code; both data and software must be developed and maintained with high-quality standards and minimal defects
- Collaborate with other technical colleagues on the integration and deployment of ML solutions
- Work as a member of a team, encouraging team building, motivation, and effective team relations

Qualifications

Essential Requirements
- Bachelor's degree in Computer Science or an equivalent numerate discipline
- Demonstrated senior experience in machine learning, deep learning, and other AI fields
- Experience working with large datasets, production-grade code, and operationalization of ML solutions
- EDA analysis and practical hands-on experience with datasets, ML models (PyTorch or TensorFlow), and evaluations
- Able to understand scientific papers and develop the ideas into executable code
- Analytical mindset, problem-solving, and logical-thinking capabilities
- Proactive attitude, constructiveness, intellectual curiosity, and persistence in finding answers to questions
- A high level of interpersonal and communication skills in English and a strong ability to meet deadlines
- Python, PyTorch, Git, pandas, dask, polars, sklearn, huggingface, docker, databricks

Desired Skills
- Master's degree and/or specialization courses in AI/ML; a PhD in science is an added value
- Experience in MLOps (MLflow, Prefect) and deployment of AI/ML solutions to the cloud (Azure preferred)
- Understanding and practice of LLMs and generative AI (prompt engineering, RAG)
- Experience with robotic process automation, time-series forecasting, and predictive modeling
- A practical grasp of databases (SQL, ElasticSearch, Pinecone, Faiss)
- Previous experience with retail, consumer, ecommerce, business, or FMCG products (NielsenIQ portfolio)

Additional Information
At NielsenIQ, we're now an even more diverse team of 40,000 people, each with their own stories. Our increasingly diverse workforce empowers us to better reflect the diversity of the markets we measure.
Our Benefits
- Flexible working environment
- Volunteer time off
- LinkedIn Learning
- Employee Assistance Program (EAP)

About NIQ
NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights - delivered with advanced analytics through state-of-the-art platforms - NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com.

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status, or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
Posted 15 hours ago
2.0 years
15 - 25 Lacs
Pune/Pimpri-Chinchwad Area
On-site
Experience: 2.00+ years
Salary: INR 1,500,000 - 2,500,000 / year (based on experience)
Expected Notice Period: 30 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Office (Pune)
Placement Type: Full-Time Permanent position (payroll and compliance to be managed by Anervea.AI)
(Note: This is a requirement for one of Uplers' clients - Anervea.AI)

What do you need for this opportunity?
Must-have skills required: Airflow, LLMs, NLP, Statistical Modeling, Predictive Analysis, Forecasting, Python, SQL, MLflow, pandas, Scikit-learn, XGBoost

Anervea.AI is looking for: As an ML / Data Science Engineer at Anervea, you'll work on designing, training, deploying, and maintaining machine learning models across multiple products. You'll build models that predict clinical trial outcomes, extract insights from structured and unstructured healthcare data, and support real-time scoring for sales or market access use cases. You'll collaborate closely with AI engineers, backend developers, and product owners to translate data into product features that are explainable, reliable, and impactful.
Key Responsibilities:
- Develop and optimize predictive models using algorithms such as XGBoost, Random Forest, Logistic Regression, and ensemble methods
- Engineer features from real-world healthcare data (clinical trials, treatment adoption, medical events, digital behavior)
- Analyze datasets from sources like ClinicalTrials.gov, PubMed, Komodo, Apollo.io, and internal survey pipelines
- Build end-to-end ML pipelines for inference and batch scoring
- Collaborate with AI engineers to integrate LLM-generated features with traditional models
- Ensure explainability and robustness of models using SHAP, LIME, or custom logic
- Validate models against real-world outcomes and client feedback
- Prepare clean, structured datasets using SQL and Pandas
- Communicate insights clearly to product, business, and domain teams
- Document all processes, assumptions, and model outputs thoroughly

Technical Skills Required:
- Strong programming skills in Python (NumPy, Pandas, scikit-learn, XGBoost, LightGBM)
- Experience with statistical modeling and classification algorithms
- Solid understanding of feature engineering, model evaluation, and validation techniques
- Exposure to real-world healthcare, trial, or patient data (strong bonus)
- Comfortable working with unstructured data and data cleaning techniques
- Knowledge of SQL and NoSQL databases
- Familiarity with ML lifecycle tools (MLflow, Airflow, or similar)
- Bonus: experience working alongside LLMs or incorporating generative features into ML
- Bonus: knowledge of NLP preprocessing, embeddings, or vector similarity methods

Personal Attributes:
- Strong analytical and problem-solving mindset
- Ability to convert abstract questions into measurable models
- Attention to detail and high standards for model quality
- Willingness to learn life sciences concepts relevant to each use case
- Clear communicator who can simplify complexity for product and business teams
- Independent learner who actively follows new trends in ML and data science
- Reliable, accountable, and driven by outcomes, not just code

Bonus Qualities:
- Experience building models for healthcare, pharma, or biotech
- Published work or open-source contributions in data science
- Strong business intuition on how to turn models into product decisions

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
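The explainability responsibility above names SHAP and LIME; a simpler technique in the same family is permutation importance: shuffle one feature's values and measure how much model accuracy drops. The toy model and data below are invented for illustration (a real workflow would use sklearn.inspection.permutation_importance, or SHAP, on the trained model):

```python
# Illustrative sketch of permutation importance, a lightweight relative
# of SHAP/LIME. The dataset and "model" are toy inventions: only
# feature_a carries signal, so shuffling it should hurt accuracy while
# shuffling feature_b should not.
import random

random.seed(0)

# rows: ((feature_a, feature_b), label); only feature_a matters here
data = [((i % 2, random.random()), i % 2) for i in range(200)]

def model(features):
    a, _ = features
    return a  # toy model that uses feature_a only

def accuracy(rows):
    return sum(model(x) == y for x, y in rows) / len(rows)

def permutation_importance(rows, index):
    """Accuracy drop after shuffling feature `index` across rows."""
    shuffled_col = [x[index] for x, _ in rows]
    random.shuffle(shuffled_col)
    permuted = [
        ((shuffled_col[i] if index == 0 else x[0],
          x[1] if index == 0 else shuffled_col[i]), y)
        for i, (x, y) in enumerate(rows)
    ]
    return accuracy(rows) - accuracy(permuted)

drop_a = permutation_importance(data, 0)  # large drop: model depends on it
drop_b = permutation_importance(data, 1)  # no drop: feature_b is unused
```

SHAP goes further by attributing each individual prediction to features, but the intuition is the same: a feature matters to the extent that disturbing it degrades the model.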
Posted 15 hours ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are seeking a skilled Lead Software Engineer to join our team and lead a project focused on developing GenAI applications using Large Language Models (LLMs) and Python programming. In this role, you will be responsible for designing and optimizing AI-generated text prompts to maximize effectiveness for various applications. You will also collaborate with cross-functional teams to ensure seamless integration of optimized prompts into the overall product or system. Your expertise in prompt engineering principles and techniques will allow you to guide models to desired outcomes and evaluate prompt performance to identify areas for optimization and iteration.
Responsibilities
- Design, develop, test and refine AI-generated text prompts to maximize effectiveness for various applications
- Ensure seamless integration of optimized prompts into the overall product or system
- Rigorously evaluate prompt performance using metrics and user feedback
- Collaborate with cross-functional teams to understand requirements and ensure prompts align with business goals and user needs
- Document prompt engineering processes and outcomes, educate teams on prompt best practices, and keep updated on the latest AI advancements to bring innovative solutions to the project
Requirements
- 7 to 12 years of relevant professional experience
- Expertise in Python programming, including experience with AI/machine learning frameworks like TensorFlow, PyTorch, Keras, LangChain, MLflow, Promptflow
- 2-5 years of working knowledge of NLP and LLMs like BERT, GPT-3/4, T5, etc.; knowledge of how these models work and how to fine-tune them
- Expertise in prompt engineering principles and techniques like chain of thought, in-context learning, tree of thought, etc.
- Knowledge of retrieval augmented generation (RAG)
- Strong analytical and problem-solving skills with the ability to think critically and troubleshoot issues
- Excellent communication skills, both verbal and written in English at a B2+ level, for collaborating across teams, explaining technical concepts, and documenting work outcomes
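The retrieval-augmented generation (RAG) knowledge asked for above centers on one loop: embed a query, retrieve the most similar passages, and build a grounded prompt. A toy sketch with hand-made embedding vectors (all names are illustrative; a real system would use an embedding model and a vector database such as those named elsewhere in these postings):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, corpus, top_k=2):
    """corpus: list of (doc_id, embedding) pairs; return top_k ids by similarity."""
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:top_k]]

def build_prompt(question, passages):
    """Ground the LLM prompt in the retrieved passages only."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only the context below.\nContext:\n{context}\nQuestion: {question}"
```

Prompt evaluation, also mentioned above, then compares model answers produced from `build_prompt` outputs against reference answers or user feedback.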
Posted 15 hours ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
As a trusted global transformation partner, Welocalize accelerates the global business journey by enabling brands and companies to reach, engage, and grow international audiences. Welocalize delivers multilingual content transformation services in translation, localization, and adaptation for over 250 languages with a growing network of over 400,000 in-country linguistic resources. Driving innovation in language services, Welocalize delivers high-quality training data transformation solutions for NLP-enabled machine learning by blending technology and human intelligence to collect, annotate, and evaluate all content types. Our team works across locations in North America, Europe, and Asia serving our global clients in the markets that matter to them. www.welocalize.com To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. The requirements listed below are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. Job Reference: Role Summary: The AIML Platform Engineering Lead is a pivotal leadership role responsible for managing the day-to-day operations and development of the AI/ML platform team. In this role, you will guide the team in designing, building, and maintaining scalable platforms, while collaborating with other engineering and data science teams to ensure successful model deployment and lifecycle management. 
Key Responsibilities: Lead and manage a team of platform engineers in developing and maintaining robust AI/ML platforms Define and implement best practices for machine learning infrastructure, ensuring scalability, performance, and security Collaborate closely with data scientists and DevOps teams to optimize the ML lifecycle from model training to deployment Establish and enforce standards for platform automation, monitoring, and operational efficiency Serve as the primary liaison between engineering teams, product teams, and leadership Mentor and develop junior engineers, providing technical guidance and performance feedback Stay abreast of the latest advancements in AI/ML infrastructure and integrate new technologies where applicable Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field 8+ years of experience in Python & Node.js development and infrastructure Proven experience in leading engineering teams and driving large-scale projects Extensive expertise in cloud infrastructure (AWS, GCP, Azure), MLOps tools (e.g., Kubeflow, MLflow), and infrastructure as code (Terraform) Strong programming skills in Python and Node.js, with a proven track record of building scalable and maintainable systems that support AI/ML workflows Hands-on experience with monitoring and observability tools, such as Datadog, to ensure platform reliability and performance Strong leadership and communication skills with the ability to influence cross-functional teams Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment
Posted 15 hours ago
0 years
0 Lacs
India
On-site
About Netskope Today, there's more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter@Netskope. About The Role Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing and storage solutions. 
You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time and fault tolerant solutions to their ever growing data needs. This is a hands-on, impactful role that will help lead development, validation, publishing and maintenance of logical and physical data models that support various OLTP and analytics environments. What's In It For You You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics Your contributions will have a major impact on our global customer-base and across the industry through our market-leading products You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills. What You Will Be Doing Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security. Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments. Apply MLOps best practices to deploy and monitor machine learning models in production. Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP. Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications. Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats. Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards. Drive innovation by integrating the latest AI/ML techniques into security products and services. Mentor junior engineers and provide technical leadership across projects. 
Required Skills And Experience AI/ML Expertise Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection. Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn. Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow). Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases. Data Engineering Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing. Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink). Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery. Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems. Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake. Cloud and Security Knowledge Strong understanding of cloud platforms (AWS, Azure, GCP) and their services. Experience with network security concepts, extended detection and response, and threat modeling. Software Engineering Proficiency in Python, Java, or Scala for data and ML solution development. Expertise in scalable system design and performance optimization for high-throughput applications. Leadership and Collaboration Proven ability to lead cross-functional teams and mentor engineers. Strong communication skills to present complex technical concepts to stakeholders. Education BSCS or equivalent required, MSCS or equivalent strongly preferred Netskope is committed to implementing equal employment opportunities for all employees and applicants for employment. 
Netskope does not discriminate in employment opportunities or practices based on religion, race, color, sex, marital or veteran statues, age, national origin, ancestry, physical or mental disability, medical condition, sexual orientation, gender identity/expression, genetic information, pregnancy (including childbirth, lactation and related medical conditions), or any other characteristic protected by the laws or regulations of any jurisdiction in which we operate. Netskope respects your privacy and is committed to protecting the personal information you share with us, please refer to Netskope's Privacy Policy for more details.
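The anomaly-detection requirement in the role above is usually baselined with simple statistics before neural detectors are introduced. A minimal z-score detector over a telemetry series (illustrative only; production systems would work on streaming, multi-dimensional features):

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Return indices of points whose |z-score| exceeds the threshold.

    A stand-in for the statistical baseline that more complex anomaly
    detectors (autoencoders, isolation forests) are compared against.
    """
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # constant series: nothing can be anomalous
    return [i for i, v in enumerate(values) if abs((v - mean) / stdev) > threshold]
```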
Posted 16 hours ago
2.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Job Title: Computer Vision Engineer
Location: Coimbatore (work-from-office role)
Experience Required: 2+ years
Employment Type: Full-Time, Permanent
Company: Katomaran Technologies
About Us
Katomaran Technologies is a cutting-edge technology company building real-world AI applications that span computer vision, large language models (LLMs), and AI agentic systems. We are looking for a highly motivated Senior AI Developer who thrives at the intersection of technology, leadership, and innovation. You will play a core role in architecting AI products, leading engineering teams, and collaborating directly with the founding team and customers to turn vision into scalable, production-ready solutions.
Key Responsibilities
- Architect and develop scalable AI solutions using computer vision, LLMs, and agent-based AI architectures.
- Collaborate with the founding team to define product roadmaps and AI strategy.
- Lead and mentor a team of AI and software engineers, ensuring high code quality and project delivery timelines.
- Develop robust, efficient pipelines for model training, validation, deployment, and real-time inference.
- Work closely with customers and internal stakeholders to translate requirements into AI-powered applications.
- Stay up to date with state-of-the-art research in vision models (YOLO, SAM, CLIP, etc.), transformers, and agentic systems (AutoGPT-style orchestration).
- Optimize AI models for deployment on cloud and edge environments.
Required Skills and Qualifications
- Bachelor's or Master's in Computer Science, AI, Machine Learning, or related fields.
- 2+ years of hands-on experience building AI applications in computer vision and/or NLP.
- Strong knowledge of deep learning frameworks (PyTorch, TensorFlow, OpenCV, HuggingFace, etc.).
- Proven experience with LLM fine-tuning, prompt engineering, and embedding-based retrieval (RAG).
- Solid understanding of agentic systems such as LangGraph, CrewAI, AutoGen, or custom orchestrators.
- Ability to design and manage production-grade AI systems (Docker, REST APIs, GPU optimization, etc.).
- Strong communication and leadership skills, with experience managing small to mid-size teams.
- Startup mindset: self-driven, ownership-oriented, and comfortable in ambiguity.
Nice to Have
- Experience with video analytics platforms or edge deployment (Jetson, Coral, etc.).
- Programming skills in C++ will be an added advantage.
- Knowledge of MLOps practices and tools (MLflow, Weights & Biases, ClearML, etc.).
- Exposure to reinforcement learning or multi-agent collaboration models.
- Customer-facing experience or involvement in AI product strategy.
What We Offer
- Medical insurance
- Paid sick time
- Paid time off
- PF
To Apply: Send your resume, GitHub/portfolio, and a brief note about your most exciting AI project to hr@katomaran.com.
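Several of the vision roles above name detection models like YOLO; a building block common to all of them is intersection-over-union (IoU), used to match predicted boxes to ground truth when evaluating a detector. A self-contained sketch with boxes as `(x1, y1, x2, y2)` tuples (the convention here is an assumption):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle; empty if the boxes do not overlap.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A detection counts as a true positive only when its IoU with a ground-truth box clears a threshold, commonly 0.5.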
Posted 18 hours ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Summary We are seeking a highly skilled and self-motivated AI Engineer to join us as we establish our AI Center of Excellence (CoE). As an early team member, you will play a critical role in shaping the foundation, strategy, and implementation of AI and ML solutions across the organization. This role offers a unique opportunity to work at the forefront of AI innovation, contribute to impactful use cases, and collaborate with cross-functional teams to build intelligent agentic systems. You will work with both Microsoft technologies (Copilot Studio, AI Foundry, Azure OpenAI) and open-source frameworks to design, deploy, and manage enterprise-ready AI solutions. Key Responsibilities a. Design, develop, and deploy AI agents using Microsoft Copilot Studio and AI Foundry. b. Build and fine-tune machine learning models for NLP, prediction, classification, and recommendation tasks. c. Conduct exploratory data analysis (EDA) to extract insights and support model development. d. Implement and manage LLM workflows, including prompt engineering, fine-tuning, evaluation, deployment, and monitoring. e. Utilize open-source frameworks such as LangChain, Hugging Face, MLflow, and RAG pipelines to build scalable, modular AI solutions. f. Integrate AI solutions with business workflows using APIs and cloud-native deployment methods. g. Use Azure AI services, including AI Foundry and Azure OpenAI, for secure and scalable model operations. h. Contribute to the creation of an AI governance framework, including Responsible AI principles, model explainability, fairness, and accountability. i. Support the creation of standards, reusable assets, and documentation as the CoE grows. j. Collaborate with engineering, data, and business teams to define problems, build solutions, and demonstrate value. k. 
Stay up to date with emerging AI capabilities such as Model Context Protocol (MCP), Agent-to-Agent (A2A) frameworks, and Agent Communication Protocols (ACP), and proactively evaluate opportunities to integrate them into enterprise solutions. Required Qualifications · Bachelor's or master's degree in computer science, Data Science, Engineering, or a related field. · 5+ years of experience in AI, machine learning, or data science with production-level deployments. · Strong foundation in statistics, ML algorithms, and data analysis techniques. · Hands-on experience building with LLMs, GenAI platforms, and AI copilots. · Proficient in Python, with experience using libraries such as Pandas, Scikit-learn, PyTorch, TensorFlow, and Transformers. · Experience with Microsoft Copilot Studio, AI Foundry, and Azure OpenAI. · Working knowledge of open-source GenAI tools (LangChain, Haystack, MLflow). · Understanding of cloud deployment, API integration, and version control (Git).
Posted 18 hours ago
2.0 - 4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
AI/ML ENGINEER Who We Are? Cleantech Industry Resources accelerates United States solar, battery storage and EV projects by providing turnkey development as a service including 100% internal systems engineering. The company deploys a leading team that spun out of the largest solar power producer in the world. This team operates within a sophisticated suite of software to support projects from land origination, through to commercial operation. Location Chennai What We Offer Opportunity to join a top-notch, collaborative team of professionals Fantastic team environment and collaborative culture Professional development opportunities to grow into an industry leader Medical Insurance for the employee and family Spot Recognition bonus for exceptional performance Long Term Incentive policy Regular team outings, events, and activities to foster a positive work environment Our Commitment to Diversity At CIR, we are dedicated to nurturing a diverse and equitable workforce that truly reflects our community. We deeply value each person’s unique perspective, skills, and experiences. CIR embraces all individuals, regardless of race, religion, sexual orientation, gender identity, age, or nationality. We are steadfast in our commitment to fostering a just and inclusive world through intentional policies and actions. Your individuality enriches our collective strength, and we strive to ensure everyone feels respected, valued, and empowered. Position Summary We are looking for an AI/ML Engineer to build and optimize machine learning models for GIS-based spatial analysis and data-driven decision-making. This role involves working on geospatial AI models, data pipelines, and Retrieval-Augmented Generation (RAG)-based applications for zoning, county sentiment analysis, and regulatory insights. 
The engineer will also work closely with the data team, leading efforts in data curation and building robust data pipelines to collect, preprocess, and analyse extensive datasets from various geospatial and regulatory sources to generate automated reports and insights. Core Responsibilities Machine Learning for GIS & Spatial Analysis: Develop and deploy ML models for geospatial data processing, forecasting, and automated GIS insights. Work with large-scale geospatial datasets (e.g., satellite imagery, shapefiles, raster/vector data). Create AI models for land classification, feature detection, and geospatial pattern analysis. Optimize spatial data pipelines and build predictive models for environmental and energy sector applications. Retrieval-Augmented Generation (RAG) & NLP Development: Develop RAG-based AI applications to extract insights from zoning, permitting, and regulatory documents. Build LLM-based applications for zoning law interpretation, county sentiment analysis, and compliance predictions. Implement document retrieval and summarization techniques for legal, policy, and energy development reports. Data Engineering & Pipeline Development: Lead the creation of ETL pipelines to collect and preprocess geospatial data for ML model training. Work with PostGIS, PostgreSQL, and cloud storage to manage structured and unstructured data. Collaborate with the data team to design and implement efficient data processing and storage solutions. AI Model Optimization & Deployment: Fine-tune LLMs for domain-specific applications in renewable energy and urban planning. Deploy AI models using cloud-based MLOps frameworks (AWS, GCP, Azure). Optimize ML model inference for real-time GIS applications and geospatial data analysis. Collaboration & Continuous Improvement: Work with cross-functional teams to ensure seamless AI integration with existing business processes. Engage in knowledge sharing and mentoring within the company. 
Stay updated with latest advancements in AI, GIS, and NLP to improve existing models and solutions. Education Requirements Master’s in Computer Science, Data Science, Machine Learning, Geostatistics, or related fields. Technical Skills and Experience Software Proficiency: Programming: Python (TensorFlow, PyTorch, scikit-learn, pandas, NumPy), SQL. Machine Learning & AI: Deep learning, NLP, retrieval-based AI, geospatial AI, predictive modeling. GIS & Spatial Data Processing: Experience with PostGIS, GDAL, GeoPandas, QGIS, Google Earth Engine. LLM & RAG Development: Experience in fine-tuning LLMs, retrieval models, vector databases (FAISS, Weaviate). Cloud & MLOps: AWS/GCP/Azure, Docker, Kubernetes, MLflow, FastAPI. Big Data Processing: Experience with large-scale data mining, data annotation, and knowledge graph techniques. Database & Storage: PostgreSQL, NoSQL, vector databases, cloud storage solutions. Communication: Strong ability to explain complex AI/ML concepts to non-technical stakeholders. Project Management: Design experience in projects from conception to implementation. Ability to coordinate with other engineers and stakeholders. Renewable Energy Systems: Understanding of solar energy systems and their integration into existing infrastructure Experience 2-4 years of experience Experience in developing AI for energy sector, urban planning, or environmental analysis. Strong understanding of potential prediction, zoning laws, and regulatory compliance AI applications. Familiarity with spatiotemporal ML models and satellite-based geospatial analytics. Psychosocial Skills /Human Skills/Behavioural Skills Strong analytical, organizational, and problem-solving skills. Management experience a plus. Must be a go-getter with an enterprising attitude A self-starter, able to demonstrate high levels of initiative and motivation Entrepreneurial mindset with the ability to take ideas and run with them from concept to conclusion. 
Technical understanding of clean energy business processes Exceptional verbal and writing communication skills with superiors, peers, partners, and other stakeholders. Excellent interpersonal skills while managing multiple priorities in a fast-paced and ever-changing environment. Physical Demands The physical demands described here are representative of those that must be met by an employee to successfully perform the essential functions of this job. The physical demands of this job require an individual to be able to work at a computer for most of the day, be able to participate in conference calls and travel to team retreats on a time-to-time basis. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. Work Conditions The work environment is usually quiet (normal city traffic noises are common), a blend of artificial and natural light, temperate and generally supports a collaborative work environment. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. Equal Opportunity Employer At Cleantech Industry Resources, we embrace diversity and uphold a strong dedication to establishing an all-encompassing atmosphere for both our staff and associates. Our choices in employment are free from any bias related to race, creed, nationality, ethnicity, gender, sexual orientation, gender identity, gender expression, age, physical limitations, veteran status, or any other legally safeguarded attributes. Being an integral part of Cleantech Industry Resources means you can expect to be immersed in a realm of professional possibilities within a culture that nurtures teamwork, adaptability, and the embracing of all.
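The geospatial pipelines described in this role routinely need great-circle distances between (latitude, longitude) points, for example when relating sites to zoning boundaries; the haversine formula is the standard tool. A stdlib-only sketch (the mean Earth radius of 6371 km is an assumed constant; libraries like GeoPandas handle projections properly):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance in kilometres between two (lat, lon) points in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    # Haversine of the central angle between the two points.
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))
```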
Posted 19 hours ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Exp: 15yrs to 23yrs
Primary skills: Vision AI Solutions, NVIDIA, Computer Vision, Media, OpenStack.
Key Responsibilities
- Define and lead the end-to-end technical architecture for vision-based AI systems across edge and cloud.
- Design and optimize large-scale video analytics pipelines using NVIDIA DeepStream, TensorRT, and Triton Inference Server.
- Architect distributed AI systems, including model training, deployment, inferencing, monitoring, and continuous learning.
- Collaborate with product, research, and engineering teams to translate business requirements into scalable AI solutions.
- Lead efforts in model optimization (quantization, pruning, distillation) for real-time performance on devices like Jetson Orin/Xavier.
- Drive the integration of multi-modal AI (vision + language, 3D, audio) where applicable.
- Guide platform choices (e.g., edge AI vs. cloud AI trade-offs), ensuring cost-performance balance.
- Mentor senior engineers and promote best practices in MLOps, system reliability, and AI observability.
- Stay current with emerging technologies (e.g., NeRF, Diffusion Models, Vision Transformers, synthetic data).
- Contribute to internal innovation strategy, including IP generation, publications, and external presentations.
🛠️ Required Technical Skills
- Deep expertise in computer vision, deep learning, and multi-modal AI.
- Proven hands-on experience with NVIDIA Jetson, DeepStream SDK, TensorRT, Triton Inference Server, TAO Toolkit, Isaac SDK, CUDA, cuDNN.
- Strong in PyTorch, TensorFlow, OpenCV, GStreamer, and GPU-accelerated pipelines.
- Experience deploying vision AI models at large scale (e.g., 1000+ cameras/devices or multi-GPU clusters).
- Skilled in cloud-native ML infrastructure: Docker, Kubernetes, CI/CD, MLflow, Seldon, Airflow.
- Proficiency in Python, C++, CUDA (or PyCUDA), and scripting.
- Familiar with 3D vision, synthetic data pipelines, and generative models (e.g., SAM, NeRF, Diffusion).
- Experience in multi-modal models (LVM/VLM), SLMs, small LVM/VLM, time-series GenAI models, agentic AI, LLMOps/Edge LLMOps, guardrails, security in GenAI, YOLO/Vision Transformers.
🤝 Soft Skills & Leadership
- 10+ years in AI/ML/computer vision, with 8+ years in technical leadership or architect roles.
- Strong leadership skills with experience mentoring technical teams and driving innovation.
- Excellent communicator with the ability to engage stakeholders across engineering, product, and business.
- Strategic thinker with a practical mindset, able to balance innovation with production-readiness.
- Experience interfacing with enterprise customers, researchers, and hardware partners.
🧩 Preferred Qualifications
- MS or PhD in Computer Vision, Machine Learning, Robotics, or a related technical field (added advantage).
- Experience with NVIDIA Omniverse, Clara, or MONAI for healthcare or simulation environments.
- Experience in domains like smart cities, robotics, retail analytics, or medical imaging.
- Contributions to open-source projects or technical publications.
- Certifications: NVIDIA Jetson Developer, AWS/GCP AI/ML Certifications.
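Model optimization via quantization, named in the responsibilities above, maps float weights to low-precision integers plus a scale factor so inference fits edge hardware. A toy symmetric int8 sketch (illustrative only; real toolkits such as TensorRT handle calibration, per-channel scales, and activation quantization):

```python
def quantize_int8(weights):
    """Symmetric linear quantization of float weights to int8 plus a scale.

    Scale maps the largest-magnitude weight onto 127, so dequantization
    error is bounded by roughly half a quantization step.
    """
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [v * scale for v in q]
```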
Posted 19 hours ago
4.0 years
0 Lacs
India
Remote
Job Title: AI/ML Engineer Experience: 4+ Years Location: Remote Job Type: Full-Time Job Summary: We are looking for a passionate and results-driven AI/ML Engineer with 4 years of experience in designing, building, and deploying machine learning models and intelligent systems. The ideal candidate should have solid programming skills, a strong grasp of data preprocessing, model evaluation, and MLOps practices. You will collaborate with cross-functional teams including data scientists, software engineers, and product managers to integrate intelligent features into applications and systems. Key Responsibilities: Design, develop, train, and optimize machine learning and deep learning models for real-world applications. Preprocess, clean, and transform structured and unstructured data for model training and evaluation. Implement, test, and deploy models using APIs or microservices (Flask, FastAPI, etc.) in production environments. Use ML libraries and frameworks like Scikit-learn, TensorFlow, PyTorch, Hugging Face, XGBoost, etc. Monitor and retrain models as needed for performance, accuracy, and drift mitigation. Collaborate with software and data engineering teams to operationalize ML solutions using MLOps tools. Stay updated with emerging trends in AI/ML and suggest enhancements to existing systems. Required Skills and Qualifications: Bachelor’s or Master’s in Computer Science, Engineering, AI/ML, Data Science, or related field. 4+ years of hands-on experience in machine learning model development and deployment. Strong experience in Python and libraries like Pandas, NumPy, Scikit-learn, Matplotlib/Seaborn. Experience with deep learning frameworks such as TensorFlow, PyTorch, or Keras. Proficiency in model deployment using Flask, FastAPI, Docker, and REST APIs. Experience with version control (Git), model versioning, and experiment tracking (MLflow, Weights & Biases). Familiarity with cloud platforms like AWS (SageMaker), Azure ML, or GCP AI Platform. 
Knowledge of databases (SQL/NoSQL) and data pipelines (Airflow, Spark, etc.). Strong problem-solving and debugging skills, with an analytical mindset.
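Experiment tracking with tools like MLflow or Weights & Biases, listed above, reduces to recording parameters and metrics under a reproducible run ID. A toy stand-in (names are illustrative and this is not the MLflow API; it only shows the bookkeeping those tools automate):

```python
import hashlib
import json

def run_fingerprint(params, metrics):
    """Deterministic short ID for a run, derived from its params and metrics."""
    payload = json.dumps({"params": params, "metrics": metrics}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()[:12]

class RunLog:
    """Minimal in-memory experiment log: record runs, query the best one."""

    def __init__(self):
        self.runs = {}

    def log(self, params, metrics):
        rid = run_fingerprint(params, metrics)
        self.runs[rid] = {"params": params, "metrics": metrics}
        return rid

    def best(self, metric):
        return max(self.runs.values(), key=lambda r: r["metrics"][metric])
```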
Posted 20 hours ago
3.0 years
0 Lacs
New Delhi, Delhi, India
On-site
Are you passionate about building scalable AI/ML systems and ensuring their success in production? We're looking for an MLOps Engineer to join our growing AI/ML team. In this role, you'll support the end-to-end lifecycle of AI/ML and GenAI use cases, from development to deployment and performance monitoring.
Work Profile:
- Support AI/ML/GenAI use case development, deployment, and monitoring
- Implement AI observability tools to track model performance, drift, and health
- Collaborate with cross-functional teams and manage stakeholder communications
- Create technical documentation, onboarding guides, and monitoring dashboards
- Work with cloud platforms (AWS, Azure, GCP) and scripting tools for automation
Skills Required:
🔹 3+ years of experience in AI/ML, GenAI, or MLOps roles
🔹 Solid knowledge of AI observability tools (e.g., MLflow, Evidently, Arize)
🔹 Hands-on experience with cloud tools and platforms
🔹 Proficiency in Python, Bash, or similar scripting languages
🔹 Strong communication and stakeholder management skills
Bonus If You Have:
- Experience working with LLM/GenAI models
- Familiarity with orchestration tools (Airflow, Kubeflow)
- Exposure to responsible AI or compliance monitoring frameworks
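Drift monitoring, central to the observability work above, is often measured with the Population Stability Index (PSI) between a reference score sample and a recent one; a common heuristic flags PSI above roughly 0.2 as meaningful drift. A stdlib-only sketch (the equal-width binning and the small floor constant are assumptions; tools like Evidently implement richer variants):

```python
import math

def psi(expected, actual, bins=4):
    """Population Stability Index between two numeric samples.

    Bins both samples over their combined range, then sums
    (p_actual - p_expected) * ln(p_actual / p_expected) across bins.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def frac(sample, b):
        count = sum(
            1 for v in sample
            if lo + b * width <= v < lo + (b + 1) * width
            or (b == bins - 1 and v == hi)  # close the last bin on the right
        )
        return max(count / len(sample), 1e-6)  # floor avoids log(0)

    return sum(
        (frac(actual, b) - frac(expected, b))
        * math.log(frac(actual, b) / frac(expected, b))
        for b in range(bins)
    )
```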
Posted 21 hours ago
14.0 - 16.0 years
0 Lacs
Greater Chennai Area
On-site
Principal Data Scientist Experience : 14-16 years Job Summary We are seeking a highly experienced AI Lead - Principal Data Scientist to spearhead the delivery of multiple AI and machine learning projects across industries such as supply chain, logistics, pricing, manufacturing, and workforce planning. This role combines deep hands-on expertise in AI/ML/Gen AI (50%) with strategic leadership and cross-functional stakeholder management. You will lead enterprise AI solutions from concept to production, architect scalable cloud-native platforms, and collaborate with business and technology teams to deliver measurable business outcomes. Key Responsibilities Technical Leadership & Solutioning : Design, build, and deploy advanced AI, machine learning, deep learning, and Gen AI solutions using Python, Scikit-learn, TensorFlow/PyTorch, and LangChain/OpenAI APIs. Architect and implement end-to-end AI systems including data ingestion, preprocessing, model training, validation, and deployment. Develop modular, reusable components and APIs (FastAPI/Flask) for inference and integration with digital applications. Lead cloud-native development on AWS, Azure, or GCP for scalable deployment of AI models and pipelines. Project & Delivery Ownership Manage the delivery of multiple concurrent AI/ML/Gen AI initiatives, ensuring quality, timeliness, and business alignment. Define technical roadmap, sprint plans, and milestone goals; track delivery KPIs and model performance in production. Guide agile teams through best practices in model lifecycle management, DevOps/MLOps, and reusable IP development. Business Engagement & Techno-Functional Consulting Act as the techno-functional bridge between business and engineering teams to translate high-level problems into AI/ML use cases. Conduct business value assessments, requirement workshops, and stakeholder reviews. Drive adoption by presenting explainable AI results using visual storytelling and decision support tools. 
Team Enablement & Innovation Mentor and upskill junior data scientists and engineers in best practices, new AI trends, and real-world problem-solving. Stay current with the latest trends in Generative AI, LLMs, Vision AI, and responsible AI practices. Contribute to internal frameworks, accelerators, and reusable artifacts for faster go-to-market. Required Skills & Qualifications Bachelor's or Master's in Computer Science, AI/ML, Data Science, or related quantitative field. 10-13 years of experience in delivering AI/ML solutions at scale with at least 5 years in a lead or principal role. Hands-on expertise in Python, ML/DL frameworks (TensorFlow, PyTorch, Scikit-learn) and Generative AI (OpenAI, Llama, LangChain). Strong cloud development experience with AWS, GCP, or Azure, including AI/ML services and containerized deployments. Experience deploying models in production via APIs and integrating with enterprise applications. Excellent communication, stakeholder management, and problem-solving skills. Preferred Qualifications Experience in Generative AI (LLMs, prompt engineering, RAG pipelines). Familiarity with MLOps tools (MLflow, Airflow, DVC, Kubeflow). Working knowledge of data engineering workflows, feature stores, and streaming/batch data pipelines. Exposure to data visualization tools like Streamlit, Dash, or Power BI for presenting insights. Certifications in cloud (AWS/GCP/Azure), AI/ML, or data science. (ref:hirist.tech)
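The "model training, validation, and deployment" workflow this posting describes can be sketched in its simplest form with a stdlib-only holdout split and a trivial baseline model. All data, names, and numbers below are illustrative, not taken from the posting:

```python
import random

def train_test_split(rows, labels, test_frac=0.25, seed=42):
    """Shuffle indices and carve off a holdout set -- the same idea
    scikit-learn's train_test_split implements at scale."""
    idx = list(range(len(rows)))
    random.Random(seed).shuffle(idx)
    cut = int(len(idx) * (1 - test_frac))
    train, test = idx[:cut], idx[cut:]
    return ([rows[i] for i in train], [labels[i] for i in train],
            [rows[i] for i in test], [labels[i] for i in test])

def majority_baseline(train_labels):
    """A trivial 'model': always predict the most common training label.
    Real models must beat this floor to justify deployment."""
    return max(set(train_labels), key=train_labels.count)

def accuracy(preds, labels):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

rows = [[i] for i in range(100)]
labels = [1] * 70 + [0] * 30          # imbalanced toy dataset
Xtr, ytr, Xte, yte = train_test_split(rows, labels)
pred = majority_baseline(ytr)
acc = accuracy([pred] * len(yte), yte)
```

In practice the baseline is replaced by a trained estimator and the evaluation by task-appropriate metrics, but the split-train-score skeleton stays the same.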
Posted 1 day ago
10.0 years
0 Lacs
Greater Kolkata Area
On-site
Responsibilities: About Lexmark: Founded in 1991 and headquartered in Lexington, Kentucky, Lexmark is recognized as a global leader in print hardware, service, software solutions and security by many of the technology industry’s leading market analyst firms. Lexmark creates cloud-enabled imaging and IoT solutions that help customers in more than 170 countries worldwide quickly realize business outcomes. Lexmark’s digital transformation objectives accelerate business transformation, turning information into insights, data into decisions, and analytics into action. Lexmark India, located in Kolkata, is one of the research and development centers of Lexmark International Inc. The India team works on cutting-edge technologies and domains like cloud, AI/ML, data science, IoT, and cybersecurity, creating innovative solutions for our customers and helping them minimize the cost and IT burden of providing a secure, reliable, and productive print and imaging environment. At our core, we are a technology company – deeply committed to building our own R&D capabilities, leveraging emerging technologies and partnerships to bring together a library of intellectual property that can add value to our customers' business. Caring for our communities and creating growth opportunities by investing in talent are woven into our culture. It’s how we care, grow, and win together. Job Description/Responsibilities: We are looking for a highly skilled and strategic Data Architect with deep expertise in the Azure Data ecosystem. This role requires a strong command of Azure Databricks, Azure Data Lake, Azure Data Factory, data warehouse design, SQL optimization, and AI/ML integration. The Data Architect will design and oversee robust, scalable, and secure data architectures to support advanced analytics and machine learning workloads. Qualification: BE/ME/MCA with 10+ years of IT experience. 
Must Have Skills/Skill Requirement: Define and drive the overall Azure-based data architecture strategy aligned with enterprise goals. Architect and implement scalable data pipelines, data lakes, and data warehouses using Azure Data Lake, ADF, and Azure SQL/Synapse. Provide technical leadership on Azure Databricks (Spark, Delta Lake, Notebooks, MLflow, etc.) for large-scale data processing and advanced analytics use cases. Integrate AI/ML models into data pipelines and support the end-to-end ML lifecycle (training, deployment, monitoring). Collaborate with cross-functional teams including data scientists, DevOps engineers, and business analysts. Evaluate and recommend tools, platforms, and design patterns for data and ML infrastructure. Mentor data engineers and junior architects on best practices and architectural standards. Strong experience with data modeling, ETL/ELT frameworks, and data warehousing concepts. Proficient in SQL, Python, PySpark. Solid understanding of AI/ML workflows and tools. Exposure to Azure DevOps. Excellent communication and stakeholder management skills. How to Apply? Are you an innovator? Here is your chance to make your mark with a global technology leader. Apply now! Global Privacy Notice Lexmark is committed to appropriately protecting and managing any personal information you share with us. Click here to view Lexmark's Privacy Notice.
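The ETL/ELT pattern at the core of this role can be illustrated with a tiny stdlib-only Python sketch: extract rows from CSV, cleanse and cast them, then load an aggregate. The data and field names are invented for illustration; production pipelines would run the same three stages in ADF or Databricks:

```python
import csv
import io

RAW = """order_id,amount,region
1,120.50,APAC
2,,EMEA
3,87.00,APAC
"""

def extract(text):
    # Extract: parse the raw source into dict rows.
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Transform: drop rows with a missing amount, cast types.
    return [{"order_id": int(r["order_id"]),
             "amount": float(r["amount"]),
             "region": r["region"]}
            for r in rows if r["amount"]]

def load(rows):
    # Load: write into an in-memory aggregate keyed by region
    # (standing in for a warehouse table).
    agg = {}
    for r in rows:
        agg[r["region"]] = agg.get(r["region"], 0.0) + r["amount"]
    return agg

warehouse = load(transform(extract(RAW)))
```

The cleansing rule here (drop empty amounts) is exactly the kind of validation step the posting's "data transformations, cleansing, and validation" responsibility refers to.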
Posted 1 day ago
5.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Senior AI Cloud Operations Engineer Seniority: 4-5 OffShore Profile Summary: We're looking for a Senior AI Cloud Operations Engineer to start building a new AI Cloud Operations team, beginning with this strategic position. We are searching for an experienced engineer with deep expertise in AI technologies to lead our cloud-based AI infrastructure management. This role is integral to ensuring our AI systems' scalability, reliability, and performance, enabling us to deliver cutting-edge solutions. The ideal candidate will have a robust understanding of machine learning frameworks, cloud services architecture, and operations management. Key Responsibilities: Cloud Architecture Design: Design, architect, and manage scalable cloud infrastructure tailored for AI workloads, leveraging platforms like AWS, Azure, or Google Cloud. System Monitoring and Optimization: Implement comprehensive monitoring solutions to ensure high availability and swift performance, utilizing tools like Prometheus, Grafana, or CloudWatch. Collaboration and Model Deployment: Work closely with data scientists to operationalize AI models, ensuring seamless integration with existing systems and workflows. Familiarity with tools such as MLflow or TensorFlow Serving can be beneficial. Automation and Orchestration: Develop automated deployment pipelines using orchestration tools like Kubernetes and Terraform to streamline operations and reduce manual interventions. Security and Compliance: Ensure that all cloud operations adhere to security best practices and compliance standards, including data privacy regulations like GDPR or HIPAA. Documentation and Reporting: Create and maintain detailed documentation of cloud configurations, procedures, and operational metrics to foster transparency and continuous improvement. 
Performance Tuning: Conduct regular performance assessments and implement strategies to optimize cloud resource utilization and reduce costs without compromising system effectiveness. Issue Resolution: Rapidly identify, diagnose, and resolve technical issues, minimizing downtime and ensuring maximum uptime. Qualifications: Educational Background: Bachelor's degree in Computer Science, Engineering, or a related field. Master's degree preferred. Professional Experience: 5+ years of extensive experience in cloud operations, particularly within AI environments. Demonstrated expertise in deploying and managing complex AI systems in cloud settings. Technical Expertise: Deep knowledge of cloud platforms (AWS, Azure, Google Cloud) including their AI-specific services such as AWS SageMaker or Google AI Platform. AI/ML Proficiency: In-depth understanding of AI/ML frameworks and libraries such as TensorFlow, PyTorch, Scikit-learn, along with experience in ML model lifecycle management. Infrastructure as Code: Proficiency in infrastructure-as-code tools such as Terraform and AWS CloudFormation to automate and manage cloud deployment processes. Containerization and Microservices: Expertise in managing containerized applications using Docker and orchestrating services with Kubernetes. Soft Skills: Strong analytical, problem-solving, and communication skills, with the ability to work effectively both independently and in collaboration with cross-functional teams. Preferred Qualifications Advanced certifications in cloud services, such as AWS Certified Solutions Architect or Google Cloud Professional Data Engineer. Experience in advanced AI techniques such as deep learning or reinforcement learning. Knowledge of emerging AI technologies and trends to drive innovation within existing infrastructure. List of Used Tools: Cloud Provider: Azure, AWS or Google. Performance & monitoring: Prometheus, Grafana, or CloudWatch. 
Collaboration and Model Deployment: MLflow or TensorFlow Serving. Automation and Orchestration: Kubernetes and Terraform. Security and Compliance: Data privacy regulations like GDPR or HIPAA.
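The monitoring and issue-resolution duties above usually lean on a standard retry pattern. Here is a stdlib-only sketch of exponential backoff around a health probe; the probe and delays are simulated, not a real endpoint or a specific tool's API:

```python
import itertools

def check_with_backoff(probe, max_attempts=5, base_delay=0.5):
    """Call `probe` until it succeeds, doubling the (simulated) delay
    after each failure -- the exponential-backoff pattern that retry
    libraries and liveness probes are built around."""
    delays = []
    for attempt in range(max_attempts):
        if probe():
            return True, delays
        delays.append(base_delay * (2 ** attempt))  # 0.5, 1.0, 2.0, ...
    return False, delays

# Simulated endpoint that comes up on the third probe.
responses = itertools.chain([False, False, True], itertools.repeat(True))
ok, delays = check_with_backoff(lambda: next(responses))
```

A real implementation would `time.sleep` the delay, add jitter to avoid thundering herds, and emit metrics to Prometheus or CloudWatch instead of collecting them in a list.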
Posted 1 day ago
12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description: We are hiring a Senior Data Engineer with deep expertise in Azure Databricks, Azure Data Lake, and Azure Synapse Analytics to join our high-performing team. The ideal candidate will have a proven track record in designing, building, and optimizing big data pipelines and architectures while leveraging their technical proficiency in cloud-based data engineering. This role requires a strategic thinker who can bridge the gap between raw data and actionable insights, enabling data-driven decision-making for large-scale enterprise initiatives. A strong foundation in distributed computing, ETL frameworks, and advanced data modeling is crucial. The individual will work closely with data architects, analysts, and business teams to deliver scalable and efficient data solutions. Key Responsibilities: Data Engineering & Architecture: Design, develop, and maintain high-performance data pipelines for structured and unstructured data using Azure Databricks and Apache Spark. Build and manage scalable data ingestion frameworks for batch and real-time data processing. Implement and optimize data lake architecture in Azure Data Lake to support analytics and reporting workloads. Develop and optimize data models and queries in Azure Synapse Analytics to power BI and analytics use cases. Cloud-Based Data Solutions: Architect and implement modern data lakehouses combining the best of data lakes and data warehouses. Leverage Azure services like Data Factory, Event Hub, and Blob Storage for end-to-end data workflows. Ensure security, compliance, and governance of data through Azure Role-Based Access Control (RBAC) and Data Lake ACLs. ETL/ELT Development: Develop robust ETL/ELT pipelines using Azure Data Factory, Databricks notebooks, and PySpark. Perform data transformations, cleansing, and validation to prepare datasets for analysis. Manage and monitor job orchestration, ensuring pipelines run efficiently and reliably. 
Performance Optimization: Optimize Spark jobs and SQL queries for large-scale data processing. Implement partitioning, caching, and indexing strategies to improve performance and scalability of big data workloads. Conduct capacity planning and recommend infrastructure optimizations for cost-effectiveness. Collaboration & Stakeholder Management: Work closely with business analysts, data scientists, and product teams to understand data requirements and deliver solutions. Participate in cross-functional design sessions to translate business needs into technical specifications. Provide thought leadership on best practices in data engineering and cloud computing. Documentation & Knowledge Sharing: Create detailed documentation for data workflows, pipelines, and architectural decisions. Mentor junior team members and promote a culture of learning and innovation. Required Qualifications: Experience: 12+ years of experience in data engineering, big data, or cloud-based data solutions. Proven expertise with Azure Databricks, Azure Data Lake, and Azure Synapse Analytics. Technical Skills: Strong hands-on experience with Apache Spark and distributed data processing frameworks. Advanced proficiency in Python and SQL for data manipulation and pipeline development. Deep understanding of data modeling for OLAP, OLTP, and dimensional data models. Experience with ETL/ELT tools like Azure Data Factory or Informatica. Familiarity with Azure DevOps for CI/CD pipelines and version control. Big Data Ecosystem: Familiarity with Delta Lake for managing big data in Azure. Experience with streaming data frameworks like Kafka, Event Hub, or Spark Streaming. Cloud Expertise: Strong understanding of Azure cloud architecture, including storage, compute, and networking. Knowledge of Azure security best practices, such as encryption and key management. Preferred Skills (Nice to Have): Experience with machine learning pipelines and frameworks like MLflow or Azure Machine Learning. 
Knowledge of data visualization tools such as Power BI for creating dashboards and reports. Familiarity with Terraform or ARM templates for infrastructure as code (IaC). Exposure to NoSQL databases like Cosmos DB or MongoDB. Experience with data governance tools like Azure Purview. Weekly Hours: 40 Time Type: Regular Location: IND:AP:Hyderabad / Argus Bldg 4f & 5f, Sattva, Knowledge City- Adm: Argus Building, Sattva, Knowledge City It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made. JobCategory:BigData
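The partitioning strategies this posting highlights for big data workloads come down to stable hash bucketing: the same key must always land in the same bucket so joins and aggregations can be co-located. A stdlib-only sketch (Spark uses its own partitioner internally; md5 here just keeps the illustration deterministic, and the record names are invented):

```python
import hashlib

def partition_for(key, num_partitions=8):
    """Stable hash partitioning: hashing the key (rather than, say,
    using arrival order) guarantees the same key maps to the same
    bucket on every run and every node."""
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return int(digest, 16) % num_partitions

# Toy dataset: 100 customer records spread across 8 partitions.
records = [("cust-%d" % i, i * 10) for i in range(100)]
buckets = {}
for key, value in records:
    buckets.setdefault(partition_for(key), []).append((key, value))
```

Skewed keys (one customer with most of the rows) defeat this scheme, which is why salting hot keys is a common companion technique in Spark tuning.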
Posted 1 day ago
7.0 years
0 Lacs
India
On-site
Welcome to Radin Health A premier Healthcare IT Software as a Service (SaaS) provider specializing in revolutionizing radiology workflow processes. Our cloud-based solutions encompass Radiology Information Systems (RIS), Picture Archiving and Communication Systems (PACS), Voice Dictation (Dictation AI) and Radiologist Workflow Management (RADIN Select), all powered by Artificial Intelligence. We are an innovative, forward-thinking Company with AI-Powered Solutions. Join Our Team! We Are Looking for Talent We are seeking a highly skilled AI Engineer with proven experience in healthcare document intelligence. You will lead the development and optimization of machine learning models for document classification and OCR-based data extraction , helping us extract structured data from prescriptions, insurance cards, consent forms, orders, and other medical records. You will be part of a fast-paced, cross-functional team working to integrate AI seamlessly into healthcare operations while maintaining the highest standards of accuracy, security, and compliance. Key Responsibilities Model Development: Design, train, and deploy ML/DL models for classifying healthcare documents and extracting structured data (e.g., patient info, insurance details, physician names, procedures). OCR Integration & Tuning: Work with OCR engines like Tesseract, AWS Textract, or Google Vision to extract text from scanned images and PDFs, enhancing accuracy via post-processing and pre-processing techniques. Document Classification: Build and refine document classification models using supervised learning and NLP techniques, with real-world noisy healthcare data. Data Labeling & Annotation: Create tools and workflows for large-scale labeling; collaborate with clinical experts and data annotators to improve model precision. Model Evaluation & Improvement: Measure model performance using precision, recall, F1 scores, and deploy improvements based on real-world production feedback. 
Pipeline Development: Build scalable ML pipelines for training, validation, inference, and monitoring using frameworks like PyTorch, TensorFlow, and MLflow. Collaboration: Work closely with backend engineers, product managers, and QA teams to integrate models into healthcare products and workflows. Required Skills & Qualifications Bachelor's or Master's in Computer Science, AI, Data Science, or related field. 7+ years of experience in machine learning, with at least 3 years in healthcare AI applications. Strong experience with OCR technologies (Tesseract, AWS Textract, Azure Form Recognizer, Google Vision API). Proven track record in training and deploying classification models for healthcare documents. Experience with Python (NumPy, Pandas, Scikit-learn), deep learning frameworks (PyTorch, TensorFlow), and NLP libraries (spaCy, Hugging Face, etc.). Understanding of HIPAA-compliant data handling and healthcare terminology. Familiarity with real-world document types such as referrals, AOBs, insurance cards, and physician notes. Preferred Qualifications Experience working with noisy scanned documents and handwritten text. Exposure to EHR/EMR systems and HL7/FHIR integration. Knowledge of labeling tools like Label Studio or Prodigy. Experience with active learning or human-in-the-loop systems. Contributions to healthcare AI research or open-source projects.
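The precision, recall, and F1 evaluation the posting calls for is simple to compute from scratch. A stdlib-only sketch with toy labels (1 = "insurance card", 0 = "other document" — the label meanings are invented for illustration):

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Compute the three standard classification metrics for one class.
    Precision: of everything we flagged, how much was right.
    Recall: of everything that was really positive, how much we found."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

y_true = [1, 1, 1, 0, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1]
p, r, f1 = precision_recall_f1(y_true, y_pred)
```

For noisy OCR-derived healthcare documents, the posting's emphasis on these metrics over plain accuracy matters: with imbalanced document types, accuracy alone hides systematic misses of the rare classes.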
Posted 1 day ago
8.0 - 12.0 years
35 - 50 Lacs
Bengaluru
Work from Office
My profile - linkedin.com/in/yashsharma1608 Position: AI Architect (Gen AI) Experience: 8-10 years Notice Period: Immediate to 15 days. Budget: up to 45-50 LPA Location: Bangalore. Note: candidates need a minimum of 3-4 years of hands-on AI development experience; SaaS company experience is mandatory; product-based company experience is mandatory. Discuss the feasibility of AI/ML use cases along with architectural design with business teams and translate the vision of business leaders into realistic technical implementation. Play a key role in defining the AI architecture and selecting appropriate technologies from a pool of open-source and commercial offerings Design and implement robust ML infrastructure and deployment pipelines Establish comprehensive MLOps practices for model training, versioning, and deployment Lead the development of HR-specialized language models (SLMs) Implement model monitoring, observability, and performance optimization frameworks Develop and execute fine-tuning strategies for large language models Create and maintain data quality assessment and validation processes Design model versioning systems and A/B testing frameworks Define technical standards and best practices for AI development Optimize infrastructure for cost, performance, and scalability Required Qualifications 7+ years of experience in ML/AI engineering or related technical roles 3+ years of hands-on experience with MLOps and production ML systems Demonstrated expertise in fine-tuning and adapting foundation models Strong knowledge of model serving infrastructure and orchestration Proficiency with MLOps tools (MLflow, Kubeflow, Weights & Biases, etc.) 
Experience implementing model versioning and A/B testing frameworks Strong background in data quality methodologies for ML training Proficiency in Python and ML frameworks (PyTorch, TensorFlow, Hugging Face) Experience with cloud-based ML platforms (AWS, Azure, Google Cloud) Proven track record of deploying ML models at scale Preferred Qualifications Experience developing AI applications for enterprise software domains Knowledge of distributed training techniques and infrastructure Experience with retrieval-augmented generation (RAG) systems Familiarity with vector databases (Pinecone, Weaviate, Milvus) Understanding of responsible AI practices and bias mitigation Bachelor's or Master's degree in Computer Science, Machine Learning, or related field What We Offer Opportunity to shape AI strategy for a fast-growing HR technology leader Collaborative environment focused on innovation and impact Competitive compensation package Professional development opportunities Flexible work arrangements Qualified candidates who are passionate about applying cutting-edge AI to transform HR technology are encouraged to apply
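The model-versioning and A/B testing frameworks this role asks for typically rest on deterministic bucketing: hashing a user and experiment name gives the same user the same model version on every request, with no shared state between serving replicas. A stdlib-only sketch (the experiment and variant names are invented for illustration):

```python
import hashlib

def assign_variant(user_id, experiment,
                   variants=("model-v1", "model-v2"),
                   weights=(0.5, 0.5)):
    """Map (experiment, user) to a point in [0, 1] via a hash, then
    walk the cumulative weights to pick a variant deterministically."""
    h = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    point = int(h[:8], 16) / 0xFFFFFFFF
    cumulative = 0.0
    for variant, w in zip(variants, weights):
        cumulative += w
        if point < cumulative:
            return variant
    return variants[-1]

# Over many users the split approaches the configured weights.
split = {"model-v1": 0, "model-v2": 0}
for uid in range(1000):
    split[assign_variant(uid, "slm-finetune-exp")] += 1
```

Keying the hash on the experiment name as well as the user means a new experiment reshuffles users independently, avoiding correlated exposure across tests.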
Posted 1 day ago
0.0 - 2.0 years
5 - 8 Lacs
Gurugram
Work from Office
Build the future of AI video with Django, PostgreSQL, Redis, DSPy, MLflow & GCP. Debug tough issues, ship fast, and own features end-to-end. Must love AI tools, learn fast, and thrive under pressure. Process: coding task, then final interview. Share your GitHub!
Posted 1 day ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Summary We are seeking a highly motivated AI/ML Engineer to design, develop, and deploy machine learning solutions that solve real-world problems. The ideal candidate should have strong foundations in machine learning algorithms , Python programming , and experience with model development , data pipelines , and production deployment in cloud or on-prem environments. Key Responsibilities Design and implement machine learning models and AI solutions for business use cases Build and optimize data preprocessing pipelines for training and inference Train, evaluate, and fine-tune supervised, unsupervised, and deep learning models Collaborate with data engineers, product teams, and software developers Deploy ML models into production using APIs, Docker, or cloud-native tools Monitor model performance and retrain/update models as needed Document model architectures, experiments, and performance metrics Research and stay updated on new AI/ML trends and tools Required Skills And Experience Strong programming skills in Python (NumPy, Pandas, Scikit-learn, etc.) Experience with deep learning frameworks like TensorFlow, Keras, or PyTorch Solid understanding of machine learning algorithms, data structures, and statistics Experience with NLP, computer vision, or time series analysis is a plus Familiarity with tools like Jupyter, MLflow, or Weights & Biases Understanding of Docker, Git, and RESTful APIs Experience with cloud platforms such as AWS, GCP, or Azure Strong problem-solving and communication skills Nice To Have Experience with MLOps tools and concepts (CI/CD for ML, model monitoring) Familiarity with big data tools (Spark, Hadoop) Knowledge of FastAPI, Flask, or Streamlit for ML API development Understanding of transformer models (e.g., BERT, GPT) or LLM integration Education Bachelor’s or Master’s degree in Computer Science, Data Science, AI, or a related field Certifications in Machine Learning/AI (e.g., Google ML Engineer, AWS ML Specialty) are a plus
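The "train, evaluate, and fine-tune" loop at the heart of this role can be reduced to its simplest form: plain gradient descent on mean squared error, stdlib only. This is a teaching sketch on noise-free toy data, not a substitute for the frameworks the posting names (TensorFlow, PyTorch abstract exactly this loop):

```python
def fit_linear(xs, ys, lr=0.05, epochs=500):
    """Fit y = w*x + b by gradient descent on mean squared error.
    Each epoch nudges (w, b) against the gradient of the loss."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]   # exactly y = 2x + 1
w, b = fit_linear(xs, ys)
```

Swapping the loss, adding regularization, or batching the gradient computation recovers most of the variations a practicing ML engineer tunes daily; deep learning frameworks differ mainly in computing `grad_w` and `grad_b` automatically.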
Posted 1 day ago
The MLflow job market in India is growing rapidly as companies across industries adopt machine learning and data science technologies. MLflow, an open-source platform for managing the machine learning lifecycle, is in high demand in the Indian job market. Job seekers with MLflow expertise have a wealth of opportunities to explore and can build a rewarding career in this field.
India's major tech hubs are known for their thriving tech industries and have a high demand for MLflow professionals.
The average salary range for MLflow professionals in India varies based on experience:
- Entry-level: INR 6-8 lakhs per annum
- Mid-level: INR 10-15 lakhs per annum
- Experienced: INR 18-25 lakhs per annum
Salaries may vary based on factors such as location, company size, and specific job requirements.
A typical MLflow career path may include roles such as:
1. Junior Machine Learning Engineer
2. Machine Learning Engineer
3. Senior Machine Learning Engineer
4. Tech Lead
5. Machine Learning Manager
With experience and expertise, professionals can progress to higher roles and take on more challenging projects in the field of machine learning.
In addition to MLflow, professionals in this field are often expected to have skills in:
- Python programming
- Data visualization
- Statistical modeling
- Deep learning frameworks (e.g., TensorFlow, PyTorch)
- Cloud computing platforms (e.g., AWS, Azure)
Having a strong foundation in these related skills can further enhance a candidate's profile and career prospects.
As you explore opportunities in the MLflow job market in India, remember to continuously upskill, stay updated with the latest trends in machine learning, and showcase your expertise confidently during interviews. With dedication and perseverance, you can build a successful career in this dynamic and rapidly evolving field. Good luck!