Get alerts for new jobs matching your selected skills, preferred locations, and experience range. Manage Job Alerts
2.0 years
4 - 8 Lacs
Gurgaon
On-site
Sprinklr is a leading enterprise software company for all customer-facing functions. With advanced AI, Sprinklr's unified customer experience management (Unified-CXM) platform helps companies deliver human experiences to every customer, every time, across any modern channel. Headquartered in New York City with employees around the world, Sprinklr works with more than 1,000 of the world’s most valuable enterprises — global brands like Microsoft, P&G, Samsung and more than 50% of the Fortune 100. Learn more about our culture and how we make our employees happier through The Sprinklr Way.

Job Description

What will you do:
Work in a collaborative team of machine learning engineers and data scientists.
Work across a variety of machine learning disciplines, including but not limited to deep learning, reinforcement learning, computer vision, language, and speech processing.
Work closely with the machine learning leadership team to define and implement the technology and architectural strategy.
Take partial ownership of the project technical roadmap, including planning and publishing schedules, milestones, technical solution engineering, risks and mitigations, course corrections, trade-offs, and delivery.
Deliver and maintain high-quality, scalable systems in a timely and cost-effective manner.
Recognise potential use cases of cutting-edge research in Sprinklr products and implement your own solutions for them.
Stay updated on industry trends, emerging technologies, and advancements in data science, incorporating relevant innovations into the team's workflow.

What makes you qualified:
Degree in Computer Science or a related quantitative field from a Tier 1 college, with relevant experience.
At least 2 years of deep learning experience with a distinguished track record on technically fast-paced projects.
Familiarity with cloud deployment technologies, such as Kubernetes or Docker containers.
Experience with large language models (GPT-4, Pathways, BERT, and other Transformer architectures) and deep learning tools (TensorFlow, PyTorch).
Working experience of software engineering best practices, including coding standards, code reviews, SCM, CI, build processes, testing, and operations.
Experience communicating with users, other technical teams, and product management to understand requirements and describe software product features and technical designs.

Nice to have:
Experience with multi-modal ML, including generative AI.
Interested in and thoughtful about the impacts of AI technology.
A real passion for AI!

Why You'll Love Sprinklr:
We're committed to creating a culture where you feel like you belong, are happier today than you were yesterday, and your contributions matter. At Sprinklr, we passionately, genuinely care. For full-time employees, we provide a range of comprehensive health plans, leading well-being programs, and financial protection for you and your family through global and localized plans throughout the world. For more information on Sprinklr benefits around the world, head to https://sprinklrbenefits.com/ to browse our country-specific benefits guides.

We focus on our mission: We founded Sprinklr with one mission: to enable every organization on the planet to make their customers happier. Our vision is to be the world’s most loved enterprise software company, ever.

We believe in our product: Sprinklr was built from the ground up to enable a brand’s digital transformation. Its platform provides every customer-facing team with the ability to reach, engage, and listen to customers around the world. At Sprinklr, we have many of the world's largest brands as our clients, and our employees have the opportunity to work closely alongside them.

We invest in our people: At Sprinklr, we believe every human has the potential to be amazing. We empower each Sprinklrite in the journey toward achieving their personal and professional best.
For wellbeing, this includes daily meditation breaks, virtual fitness, and access to Headspace. We have continuous learning opportunities available with LinkedIn Learning and more.

EEO - Our philosophy: Our goal is to ensure every employee feels like they belong and is operating in a judgment-free zone regardless of gender, race, ethnicity, age, lifestyle preference, and other factors. We value and celebrate diversity and fervently believe every employee matters and should be respected and heard. We believe we are stronger when we belong because collectively, we’re more innovative, creative, and successful.

Sprinklr is proud to be an equal-opportunity workplace and is an affirmative-action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. See also Sprinklr’s EEO Policy and EEO is the Law.
Posted 1 week ago
3.0 years
5 - 6 Lacs
Gurgaon
On-site
Gurgaon | 3+ Years | Full Time

We are looking for a technically adept and instructionally strong AI Developer with core expertise in Python, Large Language Models (LLMs), prompt engineering, and vector search frameworks such as FAISS, LlamaIndex, or RAG-based architectures. The ideal candidate combines solid foundations in data science, statistics, and machine learning development with a hands-on understanding of ML DevOps, model selection, and deployment pipelines. 3–4 years of experience in applied machine learning or AI development, including at least 1–2 years working with LLMs, prompt engineering, or vector search systems.

Core Skills Required:
Python: Advanced-level expertise in scripting, data manipulation, and model development
LLMs (Large Language Models): Practical experience with GPT, LLaMA, Mistral, or open-source transformer models
Prompt Engineering: Ability to design, optimize, and instruct on prompt patterns for various use cases
Vector Search & RAG: Understanding of feature vectors, nearest-neighbor search, and retrieval-augmented generation (RAG) using tools like FAISS, Pinecone, Chroma, or Weaviate
LlamaIndex: Experience building AI applications using LlamaIndex, including indexing documents and building query pipelines
Rack Knowledge: Familiarity with rack architecture, model placement, and scaling on distributed hardware
ML / ML DevOps: Knowledge of the full ML lifecycle, including feature engineering, model selection, training, and deployment
Data Science & Statistics: Solid grounding in statistical modeling, hypothesis testing, probability, and computing concepts

Responsibilities:
Design and develop AI pipelines using LLMs and traditional ML models
Build, fine-tune, and evaluate large language models for various NLP tasks
Design prompts and RAG-based systems to optimize output relevance and factual grounding
Implement and deploy vector search systems integrated with document knowledge bases
Select appropriate models based on data and business requirements
Perform data wrangling, feature extraction, and model training
Develop training material, internal documentation, and course content (especially around Python and AI development using LlamaIndex)
Work with DevOps to deploy AI solutions efficiently using containers, CI/CD, and cloud infrastructure
Collaborate with data scientists and stakeholders to build scalable, interpretable solutions
Maintain awareness of emerging tools and practices in AI and ML ecosystems

Preferred Tools & Stack:
Languages: Python, SQL
ML Frameworks: Scikit-learn, PyTorch, TensorFlow, Hugging Face Transformers
Vector DBs: FAISS, Pinecone, Chroma, Weaviate
RAG Tools: LlamaIndex, LangChain
ML Ops: MLflow, DVC, Docker, Kubernetes, GitHub Actions
Data Tools: Pandas, NumPy, Jupyter
Visualization: Matplotlib, Seaborn, Streamlit
Cloud: AWS/GCP/Azure (S3, Lambda, Vertex AI, SageMaker)

Ideal Candidate:
Background in Data Science, Statistics, or Computing
Passionate about emerging AI tech, LLMs, and real-world applications
Demonstrates both hands-on coding skills and teaching/instructional abilities
Capable of building reusable, explainable AI solutions

Location: Gurgaon, Sector 49
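The vector-search and RAG skills this posting asks for come down to one core operation: embed a query, rank stored documents by similarity, and return the top hits. Below is a minimal sketch of that retrieval step, using a toy deterministic hash-based "embedding" and plain cosine similarity as stand-ins for a real embedding model and a FAISS/Pinecone index (the documents are invented for illustration):

```python
import math

def embed(text, dims=8):
    # Toy deterministic "embedding": bucket tokens by character sum.
    # A real pipeline would call an embedding model here instead.
    vec = [0.0] * dims
    for token in text.lower().split():
        vec[sum(ord(c) for c in token) % dims] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=2):
    # Exact nearest-neighbor ranking; FAISS/Pinecone/Chroma do the same
    # thing approximately, at millions-of-vectors scale.
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "refund policy for damaged items",
    "how to reset your account password",
    "shipping times for international orders",
]
print(retrieve("password reset help", docs, k=1))
# → ['how to reset your account password']
```

In a full RAG system, the retrieved passages would then be inserted into the LLM prompt as grounding context; frameworks like LlamaIndex and LangChain wrap exactly this loop.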
Posted 1 week ago
0 years
10 - 30 Lacs
Sonipat
Remote
Newton School of Technology is on a mission to transform technology education and bridge the employability gap. As India’s first impact university, we are committed to revolutionizing learning, empowering students, and shaping the future of the tech industry. Backed by renowned professionals and industry leaders, we aim to solve the employability challenge and create a lasting impact on society.

We are currently looking for a Data Mining Engineer to join our Computer Science Department. This is a full-time academic role focused on data mining, analytics, and teaching/mentoring students in core data science and engineering topics.

Key Responsibilities:
● Develop and deliver comprehensive and engaging lectures for the undergraduate "Data Mining", “Big Data”, and “Data Analytics” courses, covering the full syllabus from foundational concepts to advanced techniques.
● Instruct students on the complete data lifecycle, including data preprocessing, cleaning, transformation, and feature engineering.
● Teach the theory, implementation, and evaluation of a wide range of algorithms for Classification, Association Rule Mining, Clustering, and Anomaly Detection.
● Design and facilitate practical lab sessions and assignments that provide students with hands-on experience using modern data tools and software.
● Develop and grade assessments, including assignments, projects, and examinations, that effectively measure the Course Learning Objectives (CLOs).
● Mentor and guide students on projects, encouraging them to work with real-world or benchmark datasets (e.g., from Kaggle).
● Stay current with the latest advancements, research, and industry trends in data engineering and machine learning to ensure the curriculum remains relevant and cutting-edge.
● Contribute to the academic and research environment of the department and the university.

Required Qualifications:
● A Ph.D. (or a Master's degree with significant, relevant industry experience) in Computer Science, Data Science, Artificial Intelligence, or a closely related field.
● Demonstrable expertise in the core concepts of data engineering and machine learning as outlined in the syllabus.
● Strong practical proficiency in Python and its data science ecosystem, specifically Scikit-learn, Pandas, NumPy, and visualization libraries (e.g., Matplotlib, Seaborn).
● Proven experience in teaching, preferably at the undergraduate level, with an ability to make complex topics accessible and engaging.
● Excellent communication and interpersonal skills.

Preferred Qualifications:
● A strong record of academic publications in reputable data mining, machine learning, or AI conferences/journals.
● Prior industry experience as a Data Scientist, Big Data Engineer, Machine Learning Engineer, or in a similar role.
● Experience with big data technologies (e.g., Spark, Hadoop) and/or deep learning frameworks (e.g., TensorFlow, PyTorch).
● Experience in mentoring student teams for data science competitions or hackathons.

Perks & Benefits:
● Competitive salary packages aligned with industry standards.
● Access to state-of-the-art labs and classroom facilities.
● To know more about us, feel free to explore our website: Newton School of Technology.

We look forward to the possibility of having you join our academic team and help shape the future of tech education!

Job Type: Full-time
Pay: ₹1,000,000.00 - ₹3,000,000.00 per year
Benefits: Food provided, Health insurance, Leave encashment, Paid sick time, Paid time off, Provident Fund, Work from home
Schedule: Day shift, Monday to Friday
Supplemental Pay: Performance bonus, Quarterly bonus, Yearly bonus
Application Question(s): Are you interested in a full-time onsite Instructor role? Are you ready to relocate to Sonipat - NCR Delhi? Are you ready to relocate to Pune?
Work Location: In person
Expected Start Date: 15/07/2025
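For the clustering portion of a syllabus like this, a from-scratch k-means on toy data is a classic lab exercise. A compact sketch (the points are made up, and initialization deliberately uses the first k points for reproducibility; a real lesson would also cover k-means++ and random restarts):

```python
def kmeans(points, k, iters=10):
    # Initialize centroids with the first k points (deterministic for a demo).
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, cluster in enumerate(clusters):
            if cluster:
                centroids[i] = [sum(dim) / len(cluster) for dim in zip(*cluster)]
    return centroids, clusters

points = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1), (8.0, 8.0), (8.2, 7.9), (7.8, 8.1)]
centroids, clusters = kmeans(points, k=2)
print(sorted(len(c) for c in clusters))  # → [3, 3]
```

Even on this tiny dataset, the algorithm recovers the two obvious groups (around (1, 1) and (8, 8)) within a couple of iterations, which makes it a good whiteboard-to-code demonstration.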
Posted 1 week ago
3.0 years
0 Lacs
Bhilai, Chhattisgarh, India
On-site
INNODEED SYSTEMS PRIVATE LIMITED is a cutting-edge AI-driven software solutions company, revolutionizing web and mobile application development with advanced artificial intelligence. From creating intelligent native apps and AI-powered bots that automate and optimize business processes to launching and marketing your digital solutions, we provide comprehensive, end-to-end services. Our expert team, with over 50 years of combined experience, harnesses the power of AI to enhance efficiency, user engagement, and overall digital transformation. By integrating state-of-the-art AI technologies, we drive innovation, streamline operations, and deliver unparalleled digital experiences. Explore the future of AI-driven solutions with us at www.innodeed.com

We are looking for an experienced J2EE Developer to build and enhance AI-powered enterprise applications. The ideal candidate should have a strong background in Java, J2EE frameworks, and AI/ML integrations. You will collaborate with AI engineers, data scientists, and front-end developers to develop scalable, high-performance applications.

Key Responsibilities:
· Develop, optimize, and maintain AI-powered applications using Java and J2EE technologies.
· Integrate AI/ML models using TensorFlow, OpenAI APIs, or other ML frameworks.
· Design and implement RESTful APIs and Web Services.
· Work with databases like Oracle, MySQL, or PostgreSQL for data management.
· Collaborate with front-end developers to ensure seamless application performance.
· Implement authentication, authorization, and data security best practices.
· Optimize application performance, scalability, and reliability.
· Debug, test, and resolve issues to maintain high-quality application standards.
· Stay updated with the latest advancements in J2EE, AI, and cloud computing.
· Participate in Agile development processes and contribute to sprint planning.
Required Skills & Qualifications:
· Bachelor's or Master’s degree in Computer Science, IT, or a related field.
· 3+ years of experience in Java/J2EE development.
· Strong knowledge of Spring Framework, Spring Boot, Hibernate, and JPA.
· Proficiency in RDBMS (Oracle, MySQL, or PostgreSQL) and a strong understanding of OOP concepts are essential.
· Experience integrating AI/ML models using TensorFlow, OpenAI API, or similar technologies.
· Hands-on experience with RESTful APIs, SOAP, and Microservices architecture.
· Familiarity with cloud-based AI services (AWS, Azure, Google AI APIs).
· Proficiency in version control systems like Git.
· Strong problem-solving skills and a passion for AI-driven innovation.
· Experience working with AI-powered applications or machine learning models.
· Knowledge of Big Data processing frameworks (Apache Kafka, Spark, Hadoop) is an added advantage.
· Knowledge of CI/CD pipelines is a plus.
Posted 1 week ago
3.0 years
10 - 12 Lacs
Delhi
On-site
Senior Fullstack AI/ML Engineer
Location: Delhi
Experience: 3-5 years
Mode: On-site

About the Role
We are seeking a highly skilled Senior AI/ML Engineer to join our dynamic team. The ideal candidate will have extensive experience in designing, building, and deploying machine learning models and AI solutions to solve real-world business challenges. You will collaborate with cross-functional teams to create and integrate AI/ML models into end-to-end applications, ensuring models are accessible through APIs or product interfaces for real-time usage.

Responsibilities
Lead the design, development, and deployment of machine learning models for various use cases such as recommendation systems, computer vision, natural language processing (NLP), and predictive analytics.
Work with large datasets to build, train, and optimize models using techniques such as classification, regression, clustering, and neural networks.
Fine-tune pre-trained models and develop custom models based on specific business needs.
Collaborate with data engineers to build scalable data pipelines and ensure the smooth integration of models into production.
Collaborate with frontend/backend engineers to build AI-driven features into products or platforms.
Build proof-of-concept or production-grade AI applications and tools with intuitive UIs or workflows.
Ensure scalability and performance of deployed AI solutions within the full application stack.
Implement model monitoring and maintenance strategies to ensure performance, accuracy, and continuous improvement of deployed models.
Design and implement APIs or services that expose machine learning models to frontend or other systems.
Utilize cloud platforms (AWS, GCP, Azure) to deploy, manage, and scale AI/ML solutions.
Stay up-to-date with the latest advancements in AI/ML research, and apply innovative techniques to improve existing systems.
Communicate effectively with stakeholders to understand business requirements and translate them into AI/ML-driven solutions.
Document processes, methodologies, and results for future reference and reproducibility.

Required Skills & Qualifications
Experience: 5+ years of experience in AI/ML engineering roles, with a proven track record of successfully delivering machine learning projects.
AI/ML Expertise: Strong knowledge of machine learning algorithms (supervised, unsupervised, reinforcement learning) and AI techniques, including NLP, computer vision, and recommendation systems.
Programming Languages: Proficient in Python and relevant ML libraries such as TensorFlow, PyTorch, Scikit-learn, and Keras.
Data Manipulation: Experience with data manipulation libraries such as Pandas, NumPy, and SQL for managing and processing large datasets.
Model Development: Expertise in building, training, deploying, and fine-tuning machine learning models in production environments.
Cloud Platforms: Experience with cloud platforms such as AWS, GCP, or Azure for the deployment and scaling of AI/ML models.
MLOps: Knowledge of MLOps practices for model versioning, automation, and monitoring.
Data Preprocessing: Proficient in data cleaning, feature engineering, and preparing datasets for model training.
Strong experience building and deploying end-to-end AI-powered applications — not just models but full system integration.
Hands-on experience with Flask, FastAPI, Django, or similar for building REST APIs for model serving.
Understanding of system design and software architecture for integrating AI into production environments.
Experience with frontend/backend integration (basic React/Next.js knowledge is a plus).
Demonstrated projects where AI models were part of deployed user-facing applications.
NLP & Computer Vision: Hands-on experience with natural language processing or computer vision projects.
Big Data: Familiarity with big data tools and frameworks (e.g., Apache Spark, Hadoop) is an advantage.
Problem-Solving Skills: Strong analytical and problem-solving abilities, with a focus on delivering practical AI/ML solutions.

Nice to Have
Experience with deep learning architectures (CNNs, RNNs, GANs, etc.) and techniques.
Knowledge of deployment strategies for AI models using APIs, Docker, or Kubernetes.
Experience building full-stack applications powered by AI (e.g., chatbots, recommendation dashboards, AI assistants, etc.).
Experience deploying AI/ML models in real-time environments using API gateways, microservices, or orchestration tools like Docker and Kubernetes.
Solid understanding of statistics and probability.
Experience working in Agile development environments.

What You'll Gain
Be part of a forward-thinking team working on cutting-edge AI/ML technologies.
Collaborate with a diverse, highly skilled team in a fast-paced environment.
Opportunity to work on impactful projects with real-world applications.
Competitive salary and career growth opportunities.

Job Type: Full-time
Pay: ₹1,000,000.00 - ₹1,200,000.00 per year
Schedule: Day shift, Fixed shift
Work Location: In person
Posted 1 week ago
3.0 - 4.0 years
0 - 3 Lacs
Mohali
On-site
Job description

Experience: The ideal candidate should have a minimum of 3-4 years of professional experience working with Python, developing web applications and APIs.

Proficiency in Python Frameworks: Strong knowledge and hands-on experience in Python frameworks such as Django, Flask, and FastAPI are essential. The candidate should be adept at building robust and scalable web applications.

Basic AI Knowledge: Familiarity with the fundamentals of Artificial Intelligence and its application in Python would be highly beneficial. Experience with machine learning libraries like TensorFlow or scikit-learn is a plus.

Database Skills: Basic understanding and experience working with databases (SQL and/or NoSQL) are required. Knowledge of ORMs (Object-Relational Mapping) like SQLAlchemy would be advantageous.

Responsibilities for the Python Developer role:
- Collaborate with the development team to design, develop, and deploy high-quality Python-based applications.
- Build and maintain efficient and reusable Python code.
- Implement best practices for software development, including code reviews, automated testing, and documentation.
- Work with databases and integrate them into applications, ensuring optimal performance and data integrity.
- Research and apply AI concepts and techniques to enhance our products and services.

Job Type: Full-time
Benefits: Health insurance
Education: Bachelor's (Preferred)
Experience: Python: 1 year (Preferred); total work: 1 year (Preferred)
Pay: ₹8,086.00 - ₹25,000.00 per month
Location Type: In-person
Schedule: Day shift
Work Location: In person
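The database-skills requirement above (SQL plus an ORM like SQLAlchemy) rests on one pattern worth showing concretely: parameterized queries against a relational store. A minimal stdlib sketch with sqlite3, where the table and data are made up for illustration; SQLAlchemy would layer mapped classes and sessions over these same operations:

```python
import sqlite3

# In-memory database standing in for a production RDBMS.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
conn.execute("INSERT INTO users (name, email) VALUES (?, ?)",
             ("Asha", "asha@example.com"))
conn.commit()

def get_user(conn, user_id):
    # Parameterized query: the driver escapes values, preventing SQL injection,
    # one of the development best practices the posting calls for.
    row = conn.execute("SELECT name, email FROM users WHERE id = ?",
                       (user_id,)).fetchone()
    return {"name": row[0], "email": row[1]} if row else None

print(get_user(conn, 1))  # → {'name': 'Asha', 'email': 'asha@example.com'}
```

The same query shape carries over to Django's ORM or SQLAlchemy Core; only the surface syntax changes, not the data-integrity concerns.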
Posted 1 week ago
0 years
0 Lacs
Gāndhīnagar
On-site
Designation: AI/ML Intern
Duration: 6-Month Full-Time Internship
Location: Kudasan, Gandhinagar (On-site)
Type: Internship (Full-time, 6 months)

About Us:
We’re a fast-growing startup on a mission to build innovative products that solve real-world problems. AI-Driven IT Solutions for a Smarter Future is more than a tagline — it’s how we operate. We believe in moving fast, learning constantly, and building with purpose. As part of our small and passionate team, you’ll work directly on meaningful projects that shape the future of tech.

What You’ll Do:
Work on machine learning and AI models for real-world use cases (NLP, classification, recommendations, etc.).
Assist in data collection, preprocessing, model training, and evaluation.
Collaborate on APIs and model deployment using tools like Flask or FastAPI.
Experiment with frameworks such as scikit-learn, TensorFlow, or PyTorch.
Conduct exploratory data analysis and participate in feature engineering.
Stay up-to-date with the latest in AI/ML research and tools.

What We’re Looking For:
Good knowledge of Python and basic ML libraries (NumPy, pandas, scikit-learn).
Understanding of core ML concepts like supervised/unsupervised learning, overfitting, etc.
Enthusiasm for AI/ML problem-solving and learning on the job.
Bonus: Experience with TensorFlow, PyTorch, NLP, or side projects.

What You’ll Gain:
Hands-on experience building AI solutions for production use.
Mentorship from experienced engineers and startup founders.
Real exposure to startup pace, decision-making, and ownership.
Certificate, Letter of Recommendation, and potential PPO (Pre-Placement Offer).

How to Apply:
Send your resume, LinkedIn profile, and a short note on why you’d be a great fit to swaninvestment1910@gmail.com

Job Type: Full-time
Schedule: Day shift
Work Location: In person
Posted 1 week ago
0 years
0 Lacs
India
Remote
Data Science Intern (Paid)
Company: WebBoost Solutions by UM
Location: Remote
Duration: 3 months
Opportunity: Full-time based on performance, with a Certificate of Internship

About WebBoost Solutions by UM
WebBoost Solutions by UM provides aspiring professionals with hands-on experience in data science, offering real-world projects to develop and refine their analytical and machine learning skills for a successful career.

Responsibilities
✅ Collect, preprocess, and analyze large datasets.
✅ Develop predictive models and machine learning algorithms.
✅ Perform exploratory data analysis (EDA) to extract meaningful insights.
✅ Create data visualizations and dashboards for effective communication of findings.
✅ Collaborate with cross-functional teams to deliver data-driven solutions.

Requirements
🎓 Enrolled in or graduate of a program in Data Science, Computer Science, Statistics, or a related field.
🐍 Proficiency in Python for data analysis and modeling.
🧠 Knowledge of machine learning libraries such as scikit-learn, TensorFlow, or PyTorch (preferred).
📊 Familiarity with data visualization tools (Tableau, Power BI, or Matplotlib).
🧐 Strong analytical and problem-solving skills.
🗣 Excellent communication and teamwork abilities.

Stipend & Benefits
💰 Stipend: ₹7,500 - ₹15,000 (Performance-Based).
✔ Hands-on experience in data science projects.
✔ Certificate of Internship & Letter of Recommendation.
✔ Opportunity to build a strong portfolio of data science models and applications.
✔ Potential for full-time employment based on performance.

How to Apply
📩 Submit your resume and a cover letter with the subject line "Data Science Intern Application."
📅 Deadline: 22nd June 2025

Equal Opportunity
WebBoost Solutions by UM is committed to fostering an inclusive and diverse environment and encourages applications from all backgrounds.
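The EDA responsibility listed above starts with simple summary statistics. A toy illustration with Python's stdlib statistics module (the order-count numbers are invented), showing the kind of mean/median gap that flags an outlier before any modeling:

```python
import statistics

# Invented dataset: daily order counts. Real EDA would use pandas,
# but the summary statistics are computed the same way.
orders = [12, 15, 11, 14, 90, 13, 12, 16, 14, 13]

summary = {
    "mean": statistics.mean(orders),
    "median": statistics.median(orders),
    "stdev": round(statistics.stdev(orders), 2),
}

# A mean (21) far above the median (13.5) signals a skewing outlier
# (the 90 here) -- exactly the kind of insight EDA should surface.
print(summary)
```

Spotting and handling such outliers (capping, removal, or robust models) is typically the next preprocessing decision in the pipeline.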
Posted 1 week ago
0 years
0 Lacs
India
Remote
Machine Learning Intern (Paid)
Company: WebBoost Solutions by UM
Location: Remote
Duration: 3 months
Opportunity: Full-time based on performance, with a Certificate of Internship
Application Deadline: 21st June 2025

About WebBoost Solutions by UM
WebBoost Solutions by UM provides students and graduates with hands-on learning and career growth opportunities in machine learning and data science.

Role Overview
As a Machine Learning Intern, you’ll work on real-world projects, gaining practical experience in machine learning and data analysis.

Responsibilities
✅ Design, test, and optimize machine learning models.
✅ Analyze and preprocess datasets.
✅ Develop algorithms and predictive models for various applications.
✅ Use tools like TensorFlow, PyTorch, and Scikit-learn.
✅ Document findings and create reports to present insights.

Requirements
🎓 Enrolled in or graduate of a relevant program (AI, ML, Data Science, Computer Science, or related field).
📊 Knowledge of machine learning concepts and algorithms.
🐍 Proficiency in Python or R (preferred).
🤝 Strong analytical and teamwork skills.

Benefits
💰 Stipend: ₹7,500 - ₹15,000 (Performance-Based) (Paid)
✔ Practical machine learning experience.
✔ Internship Certificate & Letter of Recommendation.
✔ Build your portfolio with real-world projects.

How to Apply
📩 Submit your application by 21st June 2025 with the subject: "Machine Learning Intern Application".

Equal Opportunity
WebBoost Solutions by UM is an equal opportunity employer, welcoming candidates from all backgrounds.
Posted 1 week ago
0 years
0 Lacs
India
Remote
AI and Machine Learning Intern
Company: INLIGHN TECH
Location: Remote (100% Virtual)
Duration: 3 Months
Stipend for Top Interns: ₹15,000
Certificate Provided | Letter of Recommendation | Full-Time Offer Based on Performance

About the Company:
INLIGHN TECH empowers students and fresh graduates with real-world experience through hands-on, project-driven internships. The AI and Machine Learning Internship is crafted to provide practical exposure to building intelligent systems, enabling interns to bridge theoretical knowledge with real-world applications.

Role Overview:
As an AI and Machine Learning Intern, you will work on projects involving data preprocessing, model development, and performance evaluation. This internship will strengthen your skills in algorithm design, model optimization, and deploying AI solutions to solve real-world problems.

Key Responsibilities:
Collect, clean, and preprocess datasets for training machine learning models
Implement machine learning algorithms for classification, regression, and clustering
Develop deep learning models using frameworks like TensorFlow or PyTorch
Evaluate model performance using metrics such as accuracy, precision, and recall
Collaborate on AI-driven projects, such as chatbots, recommendation engines, or prediction systems
Document code, methodologies, and results for reproducibility and knowledge sharing

Qualifications:
Pursuing or recently completed a degree in Computer Science, Data Science, Artificial Intelligence, or a related field
Strong foundation in Python and understanding of libraries such as Scikit-learn, NumPy, Pandas, and Matplotlib
Familiarity with machine learning concepts like supervised and unsupervised learning
Experience or interest in deep learning frameworks (TensorFlow, Keras, PyTorch)
Good problem-solving skills and a passion for AI innovation
Eagerness to learn and contribute to real-world ML applications

Internship Benefits:
Hands-on experience with real-world AI and ML projects
Certificate of Internship upon successful completion
Letter of Recommendation for top performers
Build a strong portfolio of AI models and machine learning solutions
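The evaluation responsibility in this posting (accuracy, precision, and recall) is easy to make concrete. A from-scratch version for binary labels, computing the same quantities as scikit-learn's accuracy_score, precision_score, and recall_score (the label arrays are toy data):

```python
def classification_metrics(y_true, y_pred):
    # Confusion-matrix counts for a binary task (1 = positive class).
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return {
        # Accuracy: fraction of all predictions that were right.
        "accuracy": correct / len(y_true),
        # Precision: of the predicted positives, how many were real.
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        # Recall: of the real positives, how many were found.
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(classification_metrics(y_true, y_pred))
# → {'accuracy': 0.75, 'precision': 0.75, 'recall': 0.75}
```

On imbalanced datasets the three diverge sharply, which is why interview and internship projects are usually asked to report all of them rather than accuracy alone.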
Posted 1 week ago
0 years
0 Lacs
India
Remote
Machine Learning Intern (Paid)
Company: WebBoost Solutions by UM
Location: Remote
Duration: 3 months
Opportunity: Full-time based on performance, with a Certificate of Internship
Application Deadline: 22nd June 2025

About WebBoost Solutions by UM
WebBoost Solutions by UM provides students and graduates with hands-on learning and career growth opportunities in machine learning and data science.

Role Overview
As a Machine Learning Intern, you’ll work on real-world projects, gaining practical experience in machine learning and data analysis.

Responsibilities
✅ Design, test, and optimize machine learning models.
✅ Analyze and preprocess datasets.
✅ Develop algorithms and predictive models for various applications.
✅ Use tools like TensorFlow, PyTorch, and Scikit-learn.
✅ Document findings and create reports to present insights.

Requirements
🎓 Enrolled in or graduate of a relevant program (AI, ML, Data Science, Computer Science, or related field).
📊 Knowledge of machine learning concepts and algorithms.
🐍 Proficiency in Python or R (preferred).
🤝 Strong analytical and teamwork skills.

Benefits
💰 Stipend: ₹7,500 - ₹15,000 (Performance-Based) (Paid)
✔ Practical machine learning experience.
✔ Internship Certificate & Letter of Recommendation.
✔ Build your portfolio with real-world projects.

How to Apply
📩 Submit your application by 22nd June 2025 with the subject: "Machine Learning Intern Application".

Equal Opportunity
WebBoost Solutions by UM is an equal opportunity employer, welcoming candidates from all backgrounds.
Posted 1 week ago
0 years
0 Lacs
India
Remote
Data Science Intern (Paid)
Company: WebBoost Solutions by UM
Location: Remote
Duration: 3 months
Opportunity: Full-time based on performance, with a Certificate of Internship

About WebBoost Solutions by UM
WebBoost Solutions by UM provides aspiring professionals with hands-on experience in data science, offering real-world projects to develop and refine their analytical and machine learning skills for a successful career.

Responsibilities
✅ Collect, preprocess, and analyze large datasets.
✅ Develop predictive models and machine learning algorithms.
✅ Perform exploratory data analysis (EDA) to extract meaningful insights.
✅ Create data visualizations and dashboards for effective communication of findings.
✅ Collaborate with cross-functional teams to deliver data-driven solutions.

Requirements
🎓 Enrolled in or graduate of a program in Data Science, Computer Science, Statistics, or a related field.
🐍 Proficiency in Python or R for data analysis and modeling.
🧠 Knowledge of machine learning libraries such as scikit-learn, TensorFlow, or PyTorch (preferred).
📊 Familiarity with data visualization tools (Tableau, Power BI, or Matplotlib).
🧐 Strong analytical and problem-solving skills.
🗣 Excellent communication and teamwork abilities.

Stipend & Benefits
💰 Stipend: ₹7,500 - ₹15,000 (Performance-Based).
✔ Hands-on experience in data science projects.
✔ Certificate of Internship & Letter of Recommendation.
✔ Opportunity to build a strong portfolio of data science models and applications.
✔ Potential for full-time employment based on performance.

How to Apply
📩 Submit your resume and a cover letter with the subject line "Data Science Intern Application."
📅 Deadline: 22nd June 2025

Equal Opportunity
WebBoost Solutions by UM is committed to fostering an inclusive and diverse environment and encourages applications from all backgrounds.
Posted 1 week ago
0.0 - 5.0 years
0 - 3 Lacs
Chennai
Work from Office
This is an urgent and fast-filling position - we need immediate joiners or candidates on a 1-month notice period. We are looking for: 1) Junior AI/ML Engineer - 2 positions open 2) Mid-level AI/ML Engineer - 1 position open 3) Lead AI/ML Engineer - 1 position open Location: Ambattur, Chennai Fulltime position Job Summary: We are looking for an AI/ML Engineer to develop, optimize, and deploy machine learning models for real-world applications. You will work on end-to-end ML pipelines, collaborate with cross-functional teams, and apply AI techniques such as NLP, Computer Vision, and Time-Series Forecasting. This role offers opportunities to work on cutting-edge AI solutions while growing your expertise in model deployment and optimization. Key Responsibilities: Design, build, and optimize machine learning models for various business applications. Develop and maintain ML pipelines, including data preprocessing, feature engineering, and model training. Work with TensorFlow, PyTorch, Scikit-learn, and Keras for model development. Deploy ML models in cloud environments (AWS, Azure, GCP) and work with Docker/Kubernetes for containerization. Perform model evaluation, hyperparameter tuning, and performance optimization. Collaborate with data scientists, engineers, and product teams to deliver AI-driven solutions. Stay up to date with the latest advancements in AI/ML and implement best practices. Write clean, scalable, and well-documented code in Python or R. Technical Skills: Programming Languages: Proficiency in languages like Python. Python is particularly popular for developing ML models and AI algorithms due to its simplicity and extensive libraries like NumPy, Pandas, and Scikit-learn. Machine Learning Algorithms: Should have a deep understanding of supervised learning (linear regression, decision trees, SVM), unsupervised learning, and reinforcement learning. Data Management and Analysis: Skills in data cleaning, feature engineering, and data transformation are crucial.
Deep Learning: Familiarity with neural networks, CNNs, RNNs, and other architectures is important. Machine Learning Frameworks and Libraries: Experience with TensorFlow, PyTorch, Keras, or Scikit-learn is valuable. Natural Language Processing (NLP): Familiarity with NLP techniques like word2vec, sentiment analysis, and summarization can be beneficial. Cloud Computing: Experience with cloud-based services like AWS SageMaker, Google Cloud AI Platform, or Microsoft Azure Machine Learning. Data Preprocessing: Skills in handling missing data, data normalization, feature scaling, and data transformation. Feature Engineering: Ability to create new features from existing data to improve model performance. Data Visualization: Familiarity with visualization tools like Matplotlib, Seaborn, Plotly, or Tableau. Containerization: Knowledge of containerization tools like Docker and Kubernetes. Databases : Understanding of relational databases (e.g., MySQL) and NoSQL databases (e.g., MongoDB). Data Warehousing: Familiarity with data warehousing concepts and tools like Amazon Redshift or Google BigQuery. Computer Vision: Understanding of computer vision concepts and techniques like object detection, segmentation, and image classification. Reinforcement Learning: Knowledge of reinforcement learning concepts and techniques like Q-learning and policy gradients.
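The "ML pipeline" responsibility above (preprocessing, feature engineering, model training behind one interface) can be sketched with stdlib-only stand-ins. This is an illustrative sketch, not the employer's codebase: the Pipeline/StandardScaler/NearestCentroid names mirror scikit-learn's API but are toy reimplementations.

```python
# Illustrative sketch of a chained preprocessing + model pipeline,
# mirroring scikit-learn's Pipeline pattern with stdlib only.
import math

class StandardScaler:
    """Center and scale each feature to zero mean, unit variance."""
    def fit(self, X):
        n, dims = len(X), len(X[0])
        self.mean = [sum(row[d] for row in X) / n for d in range(dims)]
        self.std = [
            math.sqrt(sum((row[d] - self.mean[d]) ** 2 for row in X) / n) or 1.0
            for d in range(dims)
        ]
        return self
    def transform(self, X):
        return [[(x - m) / s for x, m, s in zip(row, self.mean, self.std)]
                for row in X]

class NearestCentroid:
    """Toy classifier: predict the class whose centroid is closest."""
    def fit(self, X, y):
        self.centroids = {}
        for label in set(y):
            rows = [row for row, lab in zip(X, y) if lab == label]
            self.centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
        return self
    def predict(self, X):
        def dist(a, b):
            return sum((p - q) ** 2 for p, q in zip(a, b))
        return [min(self.centroids, key=lambda lab: dist(row, self.centroids[lab]))
                for row in X]

class Pipeline:
    """Run transformers in order, then delegate to the final estimator."""
    def __init__(self, steps):
        self.steps = steps
    def fit(self, X, y):
        for step in self.steps[:-1]:
            X = step.fit(X).transform(X)
        self.steps[-1].fit(X, y)
        return self
    def predict(self, X):
        for step in self.steps[:-1]:
            X = step.transform(X)
        return self.steps[-1].predict(X)

if __name__ == "__main__":
    X = [[1.0, 2.0], [1.2, 1.8], [8.0, 9.0], [8.2, 9.1]]
    y = ["low", "low", "high", "high"]
    pipe = Pipeline([StandardScaler(), NearestCentroid()]).fit(X, y)
    print(pipe.predict([[1.1, 2.1], [8.1, 9.0]]))  # -> ['low', 'high']
```

In production the same shape is what `sklearn.pipeline.Pipeline` provides, so preprocessing learned on the training set is applied identically at inference time.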
Posted 1 week ago
3.0 - 7.0 years
1 - 6 Lacs
Guwahati
Hybrid
• Python/Node.js ML pipelines • Recommender systems, clustering, classification • Notebook to API to production • GCP/AWS, TensorFlow/PyTorch/ONNX • Bonus: OCR, Indian maps, logistics
Posted 1 week ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About the Role We’re seeking a mid-level AI/ML engineer who has 3–4 years of end-to-end model-development experience and is fluent with AWS services. You’ll join a fast-growing team that ships production-grade ML and GenAI features for global clients, and you’ll help guide your team. What You’ll Do Data & Features: ingest, clean, and engineer structured/unstructured data from S3, RDS, Redshift, DynamoDB, Kinesis, etc. Modeling: build, train, and evaluate classical ML, deep-learning, and GenAI models with Amazon SageMaker, Bedrock, and related SDKs. MLOps: use standard CI/CD and infrastructure-as-code practices to move models smoothly from development to production. Monitoring & Iteration: track drift/accuracy, run A/B tests, retrain, and tune for cost/performance. Collaboration: translate business problems into ML solutions alongside data engineers, front-end devs, and product managers. Must-have qualifications 3–4 years building and deploying AI/ML solutions in production. Hands-on expertise with the AWS AI/ML stack (SageMaker, Bedrock, Comprehend, Rekognition, Glue, Athena…). Strong Python plus ML libraries (scikit-learn, PyTorch or TensorFlow, Hugging Face). Solid grasp of data-engineering concepts (ETL pipelines, data lakes/warehouses). Ability to explain trade-offs to both technical and non-technical stakeholders. How to apply Email careers@jitglobalinfosystems.com with subject “AI/ML Engineer –
Posted 1 week ago
2.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
About Our Company: Aerocraft Engineering India PVT Ltd based in Ahmedabad, provides services to US based Architecture, Engineering and Construction group of companies: Russell and Dawson – An Architecture/Engineering/Construction firm (www.rdaep.com) United-BIM – BIM Modeling Services Firm (www.united-bim.com) AORBIS – Procurement as a Service Provider (www.aorbis.com) For AORBIS business, We are seeking a passionate and skilled AI/ML Engineer with hands-on experience in Computer Vision, Document Processing Automation (PDFs), and LLMs . The ideal candidate will contribute to designing and deploying scalable AI solutions that extract, interpret, and act on unstructured and semi-structured data from documents using cutting-edge ML models. Familiarity with development tools like Python, GitHub, Jira, and Azure cloud is essential. Position: Senior Python & AI/ML Developer Timings: 12pm to 9pm - Monday to Friday Experience: Minimum 2-5 years Job Location: Ahmedabad (Siddhivinayak Towers, Makarba) Key Responsibilities: 1. Computer Vision & Image Processing Develop and optimize computer vision algorithms for document image processing (e.g., skew correction, OCR, layout detection). Implement models for object detection, segmentation, and keypoint detection in scanned or photographed documents. Apply pre-trained models and fine-tune them for use cases like table extraction or form understanding. 2. Machine Learning for PDF Automation Design and train models to extract structured data from unstructured PDFs (invoices, contracts, etc.). Use techniques like NLP, layout analysis, and supervised learning for content classification and entity extraction. Integrate tools such as Tesseract, PaddleOCR, Amazon Textract, or Azure Form Recognizer . 3. LLM (Large Language Models) Integration Fine-tune or prompt-engineer LLMs (e.g., OpenAI GPT, LLaMA, Mistral, Claude) for document Q&A, summarization, or data enrichment. 
Build pipelines that blend OCR + LLM to automate document understanding and decision-making workflows. Evaluate and deploy open-source or commercial LLMs in a secure, scalable manner. 4. DevOps & Tooling Use GitHub for code version control, CI/CD pipelines, and collaborative development. Track and manage tasks and sprints using Jira in an Agile development setup. Use Docker and Kubernetes for smooth orchestration between server and client. Deploy and monitor ML models and APIs in the Azure cloud environment, leveraging Azure ML, Functions, or Containers. Required Skills and Experience: Bachelor's or Master’s degree in Computer Science, AI/ML, Data Science, or a related field. 2+ years of hands-on experience with Python, OpenCV, TensorFlow/PyTorch, and ML frameworks. Strong grasp of NLP, OCR, and computer vision workflows. Experience working with PDF processing libraries (PDFMiner, PyMuPDF, PDFPlumber, etc.). Proficient in using and deploying models on Azure, with knowledge of Azure AI services. Understanding of version control with GitHub and task management via Jira. Exposure to prompt engineering and fine-tuning LLMs for domain-specific applications. Preferred Skills: Experience with open-source LLMs like LLaMA, Mistral, Falcon, or commercial APIs like OpenAI GPT-4. Familiarity with vector databases (e.g., Qdrant, FAISS, Weaviate, ChromaDB) and RAG-based systems. Knowledge of document standards like PDF/A, XFA, etc. Comfortable working in a fast-paced, research-oriented environment. 🎯 Key Responsibilities Build computer vision models for document layout analysis, object detection, and OCR enhancement. Automate data extraction from complex PDF documents (invoices, contracts, forms). Work with LLMs (e.g., GPT, LLaMA) for summarization, Q&A, and document intelligence tasks. Integrate AI pipelines with tools like Tesseract, Azure Form Recognizer, PDFMiner, PaddleOCR, etc. Deploy and manage ML solutions using Azure cloud services.
Collaborate using GitHub (version control), Jira (Agile task tracking), and CI/CD workflows. ✅ Requirements 2–5 years of hands-on experience in AI/ML, preferably in document intelligence or vision-based systems. Strong Python skills with frameworks like TensorFlow, PyTorch, OpenCV. Proven experience in OCR, NLP, and automating document workflows. Experience with LLMs and prompt engineering or fine-tuning. Comfortable with DevOps tools: Azure, GitHub, Docker, Kubernetes and Jira. Benefits: Exposure to US Projects/Design/Standards Company provides Dinner/Snacks/Tea/Coffee Reimbursable Health Insurance 15 paid leave annually & 10 Public Holidays
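The "OCR + LLM" pipeline this posting describes can be sketched end to end. This is an assumed skeleton, not the company's implementation: the OCR call and the LLM are stubbed (a real system would call e.g. Tesseract/PaddleOCR and a hosted model), and the prompt template and field names are illustrative.

```python
# Sketch of an OCR + LLM document-extraction pipeline; OCR and LLM
# calls are stand-ins so the control flow can be shown in isolation.
import json
import re

def ocr_stub(image_path):
    """Stand-in for Tesseract/PaddleOCR: raw text from a scanned page."""
    return "Invoice No: INV-1042\nTotal Due: 1,250.00 USD\nVendor: Acme Corp"

def build_extraction_prompt(ocr_text, fields):
    """Ask the LLM to return only a JSON object with the requested fields."""
    return ("Extract the following fields from the document text and reply "
            f"with JSON only: {', '.join(fields)}.\n---\n{ocr_text}")

def parse_llm_reply(reply):
    """LLM replies often wrap JSON in prose or code fences; pull out the object."""
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    if not match:
        raise ValueError("no JSON object in LLM reply")
    return json.loads(match.group(0))

def run_pipeline(image_path, llm):
    text = ocr_stub(image_path)
    prompt = build_extraction_prompt(text, ["invoice_no", "total_due", "vendor"])
    return parse_llm_reply(llm(prompt))

# Fake LLM for demonstration; a real call would hit OpenAI/Azure/Bedrock.
fake_llm = lambda prompt: (
    'Sure! ```json\n{"invoice_no": "INV-1042", '
    '"total_due": "1,250.00 USD", "vendor": "Acme Corp"}\n```')
print(run_pipeline("scan.png", fake_llm)["invoice_no"])  # INV-1042
```

The defensive `parse_llm_reply` step matters in practice: models frequently wrap the requested JSON in explanation text, so the pipeline should never `json.loads` the raw reply directly.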
Posted 1 week ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
We have an opening for Frontend Skills + Generative AI with a leading US-based (product) IT company. Work Mode: Permanent Remote Mandate Experience: Generative AI, Vue 3, Python, LLMs, AWS, AI Tools, HTML, CSS, & JavaScript. Experience: 5+ Years Responsibilities: Experience developing commercial application software with an emphasis on artificial intelligence and machine learning. Strong Python skills and familiarity with pertinent AI/ML frameworks and libraries (e.g., TensorFlow, PyTorch, Keras). Familiarity with Large Language Models (LLMs), such as Claude from Anthropic, Google's Gemini, or OpenAI's GPT models. Familiarity with prompt engineering methods to maximise LLM performance. Familiarity with web application development and API integration. Strong knowledge of cloud computing systems, such as Google Cloud, AWS, and Azure. Familiarity with orchestration tools like Kubernetes and containerisation technologies like Docker. Enthusiasm about generative AI and how it may revolutionise corporate procedures. Familiarity with AI-powered coding assistants such as GitHub Copilot, Replit, Cursor, or Windsurf by Codeium. Familiarity with AI-powered testing and debugging tools. Contributions to open-source AI projects. Strong analytical and problem-solving abilities. Frontend Development Skills: Component development framework: Web Components & Vue 2.x. Application development principles: wiki, WYSIWYG. Markup language: Markdown. Template language: Handlebars. Configuration language: YAML. Programming language: JavaScript. Backend communication protocols: WebSocket, REST, and GraphQL. Qualifications: Bachelor's or master's degree in computer science, Artificial Intelligence, or a related field.
Interested Candidates, Kindly Share CVs on Chandni@thepremierconsultants.com
Posted 1 week ago
13.0 - 19.0 years
0 - 0 Lacs
Hyderabad, Chennai
Work from Office
AIML Delivery Manager About Thryve Digital Health LLP: Thryve Digital Health LLP is an emerging global healthcare partner delivering strategic innovation, expertise, and flexibility to its healthcare partners. As a US healthcare conglomerate captive, we have direct access to deep insights that accelerate learning and keep us ahead of the curve. Thryve delivers next-generation solutions that enable our healthcare partners to provide positive experiences to their consumers. Our global collaborative of healthcare, operations, and IT experts creates innovative and sustainable processes for our clients, engaging ever-evolving consumers and managing the future of healthcare. We value our people and their diverse talents. Thryve is an equal opportunity employer, valuing integrity, diversity, and inclusion. We do not discriminate based on any protected attribute. For more information, please visit www.thryvedigital.com Role Summary: The AI Manager leads the AI team, contributing to technical strategy and decision-making. This role involves managing the day-to-day delivery of the AI/ML team, responsible for building, deploying, and maintaining robust, scalable, and efficient ML models. Essential Responsibilities: Provide technical leadership and strategic direction for the AI team, aligning with overall company objectives. Lead and mentor a team of data engineers, MLOps engineers, and machine learning engineers to achieve project objectives and deliverables (development and deployment of ML models for AI use cases across the platform). Contribute to AI/ML architecture design and implementation. Collaborate with business, engineering, infrastructure, and data science teams to translate their needs or challenges into production-grade Artificial Intelligence and Machine Learning models for batch and real-time requirements. Serve as a primary point of contact for US & Thryve leaders, ensuring clear communication and aligning with project goals and objectives. 
Guide the AI/ML engineers with respect to business objectives, connecting the dots, and helping the team come up with optimal ML models based on use cases. Manage day-to-day operations, ensuring adherence to process, scope, quality, and timelines. Assume the role of an engagement coordinator between the Thryve India team and US counterparts (techno-functional). Highlight risks and concerns to the leadership team and work with multiple stakeholders to establish alignment, contingency, and/or mitigation plans as required. Proactively track delivery statuses and project progress, taking ownership of all deliverables from Thryve. Communicate effectively with team members, US stakeholders, and management. Stay up-to-date with the latest trends and advancements in AI and ML, and identify opportunities for the team to implement new models and technologies. Propose and implement best engineering and research practices for scaling ML-powered features, enabling fast iteration and efficient experimentation with novel features. Required Skills and Qualifications: Bachelor's degree in engineering, computer science, or a related field. 12 to 15 years of total experience, with at least 2 years of experience leading/managing AI/ML projects, achieving clear & measurable business objectives. Strong understanding of US Health insurance with at least 3+ years of experience in the same. Experience and understanding of the entire MLOps pipeline, from data ingestion to production. Strong understanding of AI/ML concepts & algorithms, including supervised, unsupervised, and reinforcement learning, as well as MLOps. Proven ability to take successful, complex ideas from experimentation to production. Hands-on experience with Google Cloud Platform (GCP) or any public cloud platform. Excellent communication, presentation, and interpersonal skills. Proven experience in leading, motivating & inspiring teams to foster collaboration. 
Proficient in identifying, assessing, and calling out risks and mitigation plans to stakeholders. Experience in implementing process improvements and learning from past projects to enhance future performance. Ability to analyze external and internal processes and create strategies for service delivery optimization. Experience in working with diverse teams and stakeholders, understanding their needs and managing expectations.
Posted 1 week ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are seeking a passionate AI/ML Engineer to join our team in building the core AI-driven functionality an intelligent visual data encryption system. The role involves designing, training, and deploying AI models (e.g., CLIP, DCGANs, Decision Trees), integrating them into a secure backend, and operationalizing the solution via AWS cloud services and Python-based APIs. Key Responsibilities: AI/ML Development Design and train deep learning models for image classification and sensitivity tagging using CLIP, DCGANs, and Decision Trees. Build synthetic datasets using DCGANs for balancing. Fine-tune pre-trained models for customized encryption logic. Implement explainable classification logic for model outputs. Validate model performance using custom metrics and datasets. API Development Design and develop Python RESTful APIs using FastAPI or Flask for: Image upload and classification Model inference endpoints Encryption trigger calls Integrate APIs with AWS Lambda and Amazon API Gateway. AWS Integration Deploy and manage AI models on Amazon SageMaker for training and real-time inference. Use AWS Lambda for serverless backend compute. Store encrypted image data on Amazon S3 and metadata on Amazon RDS (PostgreSQL). Use AWS Cognito for secure user authentication and KMS for key management. Monitor job status via CloudWatch and enable secure, scalable API access. Required Skills & Experience: Must-Have 3–5 years of experience in AI/ML (especially vision-based systems). Strong experience with PyTorch or TensorFlow for model development. Proficient in Python with experience building RESTful APIs. Hands-on experience with Amazon SageMaker, Lambda, API Gateway, and S3. Knowledge of OpenSSL/PyCryptodome or basic cryptographic concepts. Understanding of model deployment, serialization, and performance tuning. Nice-to-Have Experience with CLIP model fine-tuning. Familiarity with Docker, GitHub Actions, or CI/CD pipelines. 
Experience in data classification under compliance regimes (e.g., GDPR, HIPAA). Familiarity with multi-tenant SaaS design patterns. Tools & Technologies: Python, PyTorch, TensorFlow FastAPI, Flask AWS: SageMaker, Lambda, S3, RDS, Cognito, API Gateway, KMS Git, Docker, Postgres, OpenCV, OpenSSL
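The classify-then-encrypt flow this posting describes (model tags an image's sensitivity, which triggers encryption) can be sketched as follows. Everything here is an assumption for illustration: the classifier is a rule-based stand-in for CLIP, the "encryption" is a deliberately trivial reversible placeholder for the real AES/KMS step, and the tag names are invented.

```python
# Illustrative classify-then-encrypt ingestion flow; the classifier and
# cipher are placeholders, not the posting's actual model or crypto.
import hashlib
import secrets

SENSITIVE_TAGS = {"id_card", "medical", "financial"}  # hypothetical taxonomy

def classify_stub(image_meta):
    """Stand-in for CLIP inference: tag an image from (toy) metadata."""
    return "medical" if "patient" in image_meta.get("caption", "") else "generic"

def xor_placeholder_encrypt(data: bytes, key: bytes) -> bytes:
    """NOT real encryption: a reversible XOR placeholder for the AES/KMS step."""
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

def ingest(image_meta, payload: bytes, key: bytes):
    """Route sensitive images through the encryption trigger, others through plain storage."""
    tag = classify_stub(image_meta)
    if tag in SENSITIVE_TAGS:
        return {"tag": tag, "encrypted": True,
                "blob": xor_placeholder_encrypt(payload, key)}
    return {"tag": tag, "encrypted": False, "blob": payload}

key = secrets.token_bytes(32)
record = ingest({"caption": "patient x-ray"}, b"raw-bytes", key)
print(record["tag"], record["encrypted"])  # medical True
# XOR is its own inverse, so the round-trip recovers the payload:
assert xor_placeholder_encrypt(record["blob"], key) == b"raw-bytes"
```

In the stack the posting names, the placeholder cipher would be replaced by AES with keys managed in AWS KMS, and `classify_stub` by a SageMaker inference endpoint.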
Posted 1 week ago
3.0 - 5.0 years
6 - 11 Lacs
Thiruvananthapuram
Work from Office
Experience Required: 3-5 years of hands-on experience in full-stack development, system design, and supporting AI/ML data-driven solutions in a production environment. Key Responsibilities Implementing Technical Designs: Collaborate with architects and senior stakeholders to understand high-level designs and break them down into detailed engineering tasks. Implement system modules and ensure alignment with architectural direction. Cross-Functional Collaboration: Work closely with software developers, data scientists, and UI/UX teams to translate system requirements into working code. Clearly communicate technical concepts and implementation plans to internal teams. Stakeholder Support: Participate in discussions with product and client teams to gather requirements. Provide regular updates on development progress and raise flags early to manage expectations. System Development & Integration: Develop, integrate, and maintain components of AI/ML platforms and data-driven applications. Contribute to scalable, secure, and efficient system components based on guidance from architectural leads. Issue Resolution: Identify and debug system-level issues, including deployment and performance challenges. Proactively collaborate with DevOps and QA to ensure resolution. Quality Assurance & Security Compliance: Ensure that implementations meet coding standards, performance benchmarks, and security requirements. Perform unit and integration testing to uphold quality standards. Agile Execution: Break features into technical tasks, estimate efforts, and deliver components in sprints. Participate in sprint planning, reviews, and retrospectives with a focus on delivering value. Tool & Framework Proficiency: Use modern tools and frameworks in your daily workflow, including AI/ML libraries, backend APIs, front-end frameworks, databases, and cloud services, contributing to robust, maintainable, and scalable systems. 
Continuous Learning & Contribution: Keep up with evolving tech stacks and suggest optimizations or refactoring opportunities. Bring learnings from the industry into internal knowledge-sharing sessions. Proficiency in using AI-copilots for Coding: Adaptation to emerging tools and knowledge of prompt engineering to effectively use AI for day-to-day coding needs. Technical Skills Hands-on experience with Python-based AI/ML development using libraries such as TensorFlow , PyTorch , scikit-learn , or Keras . Hands-on exposure to self-hosted or managed LLMs , supporting integration and fine-tuning workflows as per system needs while following architectural blueprints. Practical implementation of NLP/CV modules using tools like SpaCy , NLTK , Hugging Face Transformers , and OpenCV , contributing to feature extraction, preprocessing, and inference pipelines. Strong backend experience using Django , Flask , or Node.js , and API development (REST or GraphQL). Front-end development experience with React , Angular , or Vue.js , with a working understanding of responsive design and state management. Development and optimization of data storage solutions , using SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra), with hands-on experience configuring indexes, optimizing queries, and using caching tools like Redis and Memcached . Working knowledge of microservices and serverless patterns , participating in building modular services, integrating event-driven systems, and following best practices shared by architectural leads. Application of design patterns (e.g., Factory, Singleton, Observer) during implementation to ensure code reusability, scalability, and alignment with architectural standards. Exposure to big data tools like Apache Spark , and Kafka for processing datasets. Familiarity with ETL workflows and cloud data warehouse , using tools such as Airflow , dbt , BigQuery , or Snowflake . 
Understanding of CI/CD , containerization (Docker), IaC (Terraform), and cloud platforms (AWS, GCP, or Azure). Implementation of cloud security guidelines , including setting up IAM roles , configuring TLS/SSL , and working within secure VPC setups, with support from cloud architects. Exposure to MLOps practices , model versioning, and deployment pipelines using MLflow , FastAPI , or AWS SageMaker . Configuration and management of cloud services such as AWS EC2 , RDS , S3 , Load Balancers , and WAF , supporting scalable infrastructure deployment and reliability engineering efforts. Personal Attributes Proactive Execution and Communication: Able to take architectural direction and implement it independently with minimal rework with regular communication with stakeholders Collaboration: Comfortable working across disciplines with designers, data engineers, and QA teams. Responsibility: Owns code quality and reliability, especially in production systems. Problem Solver: Demonstrated ability to debug complex systems and contribute to solutioning. Preferred Skills: Key : Python, Django, Django ORM, HTML, CSS, Bootstrap, JavaScript, jQuery, Multi-threading, Multi-processing, Database Design, Database Administration, Cloud Infrastructure, Data Science, self-hosted LLMs Qualifications Bachelors or Master’s degree in Computer Science, Information Technology, Data Science, or a related field. Relevant certifications in cloud or machine learning are a plus. Package: 6-11 LPA
Posted 1 week ago
4.0 - 7.0 years
16 - 21 Lacs
Pune
Work from Office
Develop and deploy advanced AI solutions using the latest Generative AI techniques including: - Retrieval-Augmented Generation (RAG) pipelines. - Designing and building Agentic AI systems to automate complex decision-making workflows. - Fine-tuning Foundation and Small Language Models (SLMs) tailored for domain specific content
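The RAG pipeline named above reduces to two steps: retrieve the documents most relevant to the query, then stuff them into the prompt before generation. A minimal sketch under stated assumptions (token overlap stands in for embedding search over a vector store, and the corpus is invented):

```python
# Minimal RAG sketch: keyword-overlap retrieval + prompt augmentation.
def tokenize(text):
    return set(text.lower().split())

def retrieve(query, corpus, k=1):
    """Rank documents by token overlap with the query (embedding stand-in)."""
    return sorted(corpus,
                  key=lambda doc: len(tokenize(query) & tokenize(doc)),
                  reverse=True)[:k]

def build_rag_prompt(query, corpus):
    """Augment the user question with retrieved context before generation."""
    context = "\n".join(retrieve(query, corpus, k=2))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Refunds are processed within 5 business days.",
    "Shipping is free on orders above 50 dollars.",
    "Support is available by email around the clock.",
]
prompt = build_rag_prompt("how long do refunds take", corpus)
print(prompt.splitlines()[1])  # the top-ranked (refund) document leads the context
```

A production version would swap `retrieve` for a similarity search against embeddings (FAISS, pgvector, etc.) and send the assembled prompt to the foundation model; the grounding-in-context structure stays the same.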
Posted 1 week ago
7.0 - 10.0 years
30 - 45 Lacs
Pune
Work from Office
Requirements: Our client is seeking a skilled Engineering Manager/Engineering Lead with a proven growth journey from developer to lead roles in fast-paced IT product companies. The ideal candidate should possess a robust blend of advanced technical skills and effective people management capabilities. Desired Technical Skillset 1. Full Stack Development Expertise - Frontend: Strong command over React.js and responsive UI/UX design principles. - Backend: - Expertise in Java (Spring Boot) for building scalable, secure microservices. - Proficient in Python and Scala for backend processing, scripting, and data handling. - API Design: Experience designing and consuming RESTful and GraphQL APIs; familiarity with OpenAPI/Swagger. 2. Database & Storage Systems - Deep experience with SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra, DynamoDB) databases. - Sound data modeling skills and understanding of database normalization and performance optimization. 3. Data Engineering - Designing and building robust ETL/ELT data pipelines using tools like Python, PySpark, Apache Spark, Airflow, Kafka. 4. AI/ML Application Experience - Understanding of applied machine learning concepts (classification, prediction, clustering, NLP). - Experience working with ML pipelines using scikit-learn, TensorFlow, PyTorch, or Spark MLlib. - Working knowledge of model deployment, monitoring, and integration into production systems. 5. Cloud & DevOps Proficiency - Experience with cloud platforms: AWS (preferred), Azure, or GCP. - Familiarity with CI/CD pipelines, Docker, Kubernetes, Terraform, and monitoring tools (e.g., Prometheus, Grafana). - Secure cloud-native architecture and compliance with HIPAA and HITRUST standards. Desired Leadership & Management Skills 1. Team Leadership & Growth - Proven experience in managing cross-functional engineering teams (5–15 people). - Capable of hiring, mentoring, and performance-managing engineers across levels.
- Strong belief in fostering a culture of inclusion, innovation, and accountability. 2. Strategic Execution - Ability to translate business goals into engineering strategy, roadmap, and deliverables. - Excellent at prioritization, resource planning, and agile execution (Scrum/Kanban). 3. Stakeholder Management - Comfortable collaborating with product managers, architects, data scientists, and business leaders. - Able to clearly communicate technical decisions to non-technical stakeholders. 4. Healthcare Industry Knowledge - Familiarity with payer-side healthcare systems, including: - Claims adjudication, provider networks, member eligibility, EHR integration. - Standards such as HL7, FHIR, X12/EDI (837, 835, 834). - Knowledge of regulatory frameworks: CMS mandates, ACA, Medicaid/Medicare reporting, risk adjustment. 5. Security, Privacy & Compliance - Deep understanding of data privacy regulations (HIPAA, SOC II, HiTrust, etc.). - Experience conducting security reviews, access control, and data governance in healthcare settings.
Posted 1 week ago
8.0 years
0 Lacs
New Delhi, Delhi, India
On-site
🚀 We're Hiring: Data Scientist | 8+ Years Experience | On-Site (India) 🔍 Role Title: Data Scientist 📍 Location: India (Onsite) 🕒 Engagement Type: Full-Time Employee (FTE) 📅 Start Date: Immediate Joiners 💼 Experience: 8+ Years 💰 Salary: Flexible / As per market standards 🖥️ Work Type: On-Site (India-based candidates only) About the Role: We are on the lookout for a seasoned Data Scientist to join our dynamic team and contribute to the development of AI-driven data conversion tools. If you're passionate about working on cutting-edge AI/ML technologies and love solving real-world data transformation challenges, we'd love to hear from you! Key Responsibilities: Design and develop AI-powered data conversion applications Apply advanced AI/ML techniques for data mapping and validation Automate code generation using AI models Ensure data quality and integrity through validation techniques Collaborate closely with engineers, developers, and business analysts Maintain clear and comprehensive technical documentation Stay updated with the latest AI/Data Science trends and tools Troubleshoot and optimize AI-based systems Required Skills & Qualifications: Bachelor's/Master's in Data Science, Computer Science, AI, or related fields Strong programming skills in Python and SQL Hands-on with ML frameworks like TensorFlow or PyTorch Solid grasp of AI-driven data mapping, code generation, and validation Familiarity with SQL Server, MongoDB, and other databases Excellent problem-solving, collaboration, and communication skills Proven track record with 8+ years in Data Science Open, growth-oriented mindset with the ability to learn from failures Preferred Skills (Nice to Have): Experience in Financial Services domain Certifications in AI/ML or Data Science Background in ETL, data wrangling, or master data management Exposure to DevOps tools (Jira, BitBucket, Confluence) Familiarity with cloud and ML tools such as Azure ML, Databricks, Cognitive Services, Azure Synapse
Interview Process: Technical Round 1, Technical Round 2
Posted 1 week ago
Upload Resume
Drag or click to upload
Your data is secure with us, protected by advanced encryption.
Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
We have sent an OTP to your contact. Please enter it below to verify.
Accenture
20312 Jobs | Dublin
Wipro
11977 Jobs | Bengaluru
EY
8165 Jobs | London
Accenture in India
6667 Jobs | Dublin 2
Uplers
6464 Jobs | Ahmedabad
Amazon
6352 Jobs | Seattle,WA
Oracle
5993 Jobs | Redwood City
IBM
5803 Jobs | Armonk
Capgemini
3897 Jobs | Paris,France
Tata Consultancy Services
3776 Jobs | Thane