
826 Preprocess Jobs - Page 7

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

0 years

0 Lacs

India

Remote

Job Title: Machine Learning Intern Company: Optimspace.in Location: Remote Duration: 3 months Opportunity: Full-time based on performance, with Certificate of Internship. About Optimspace.in Optimspace.in provides students and graduates with hands-on learning opportunities and career growth in Machine Learning and Data Science. Role Overview As a Machine Learning Intern, you will work on real-world projects, enhancing your practical skills in data analysis and model development. Responsibilities Design, test, and optimize machine learning models. Analyze and preprocess datasets. Develop algorithms and predictive models. Use tools like TensorFlow, PyTorch, and Scikit-learn. Document findings and create reports. Requirements Enrolled in or a graduate of a relevant program (Computer Science, AI, Data Science, or related field). Knowledge of machine learning concepts and algorithms. Proficiency in Python or R (preferred). Strong analytical and teamwork skills. Benefits Stipend: ₹7,500 - ₹15,000 (Performance-Based) (Paid). Hands-on machine learning experience. Internship Certificate & Letter of Recommendation. Real-world project contributions for your portfolio. How to Apply 📩 Submit your application with "Machine Learning Intern Application" as the subject. 📅 Deadline: 11th July 2025 Note: Optimspace.in is an equal opportunity employer, welcoming diverse applicants.

Posted 1 week ago

Apply

0 years

0 Lacs

India

Remote

Data Science Intern (Paid) Company: Unified Mentor Location: Remote Duration: 3 months Application Deadline: 11th July 2025 Opportunity: Full-time role based on performance + Internship Certificate About Unified Mentor Unified Mentor provides aspiring professionals with hands-on experience in data science through industry-relevant projects, helping them build successful careers. Responsibilities Collect, preprocess, and analyze large datasets Develop predictive models and machine learning algorithms Perform exploratory data analysis (EDA) to extract insights Create data visualizations and dashboards for effective communication Collaborate with cross-functional teams to deliver data-driven solutions Requirements Enrolled in or a graduate of Data Science, Computer Science, Statistics, or a related field Proficiency in Python or R for data analysis and modeling Knowledge of machine learning libraries such as scikit-learn, TensorFlow, or PyTorch (preferred) Familiarity with data visualization tools like Tableau, Power BI, or Matplotlib Strong analytical and problem-solving skills Excellent communication and teamwork abilities Stipend & Benefits Stipend: ₹7,500 - ₹15,000 (Performance-Based) (Paid) Hands-on experience in data science projects Certificate of Internship & Letter of Recommendation Opportunity to build a strong portfolio of data science models and applications Potential for full-time employment based on performance How to Apply Submit your resume and a cover letter with the subject line "Data Science Intern Application." Equal Opportunity: Unified Mentor welcomes applicants from all backgrounds.

Posted 1 week ago

Apply

0 years

0 Lacs

India

Remote

Data Science Intern (Paid) Company: WebBoost Solutions by UM Location: Remote Duration: 3 months Opportunity: Full-time based on performance, with a Certificate of Internship About WebBoost Solutions by UM WebBoost Solutions by UM provides aspiring professionals with hands-on experience in data science, offering real-world projects to develop and refine their analytical and machine learning skills for a successful career. Responsibilities ✅ Collect, preprocess, and analyze large datasets. ✅ Develop predictive models and machine learning algorithms. ✅ Perform exploratory data analysis (EDA) to extract meaningful insights. ✅ Create data visualizations and dashboards for effective communication of findings. ✅ Collaborate with cross-functional teams to deliver data-driven solutions. Requirements 🎓 Enrolled in or graduate of a program in Data Science, Computer Science, Statistics, or a related field. 🐍 Proficiency in Python or R for data analysis and modeling. 🧠 Knowledge of machine learning libraries such as scikit-learn, TensorFlow, or PyTorch (preferred). 📊 Familiarity with data visualization tools (Tableau, Power BI, or Matplotlib). 🧐 Strong analytical and problem-solving skills. 🗣 Excellent communication and teamwork abilities. Stipend & Benefits 💰 Stipend: ₹7,500 - ₹15,000 (Performance-Based). ✔ Hands-on experience in data science projects. ✔ Certificate of Internship & Letter of Recommendation. ✔ Opportunity to build a strong portfolio of data science models and applications. ✔ Potential for full-time employment based on performance. How to Apply 📩 Submit your resume and a cover letter with the subject line "Data Science Intern Application." 📅 Deadline: 11th July 2025 Equal Opportunity WebBoost Solutions by UM is committed to fostering an inclusive and diverse environment and encourages applications from all backgrounds.

Posted 1 week ago

Apply

0 years

0 Lacs

India

Remote

Data Science Intern (Paid) Company: WebBoost Solutions by UM Location: Remote Duration: 3 months Opportunity: Full-time based on performance, with a Certificate of Internship About WebBoost Solutions by UM WebBoost Solutions by UM provides aspiring professionals with hands-on experience in data science, offering real-world projects to develop and refine their analytical and machine learning skills for a successful career. Responsibilities ✅ Collect, preprocess, and analyze large datasets. ✅ Develop predictive models and machine learning algorithms. ✅ Perform exploratory data analysis (EDA) to extract meaningful insights. ✅ Create data visualizations and dashboards for effective communication of findings. ✅ Collaborate with cross-functional teams to deliver data-driven solutions. Requirements 🎓 Enrolled in or graduate of a program in Data Science, Computer Science, Statistics, or a related field. 🐍 Proficiency in Python for data analysis and modeling. 🧠 Knowledge of machine learning libraries such as scikit-learn, TensorFlow, or PyTorch (preferred). 📊 Familiarity with data visualization tools (Tableau, Power BI, or Matplotlib). 🧐 Strong analytical and problem-solving skills. 🗣 Excellent communication and teamwork abilities. Stipend & Benefits 💰 Stipend: ₹7,500 - ₹15,000 (Performance-Based). ✔ Hands-on experience in data science projects. ✔ Certificate of Internship & Letter of Recommendation. ✔ Opportunity to build a strong portfolio of data science models and applications. ✔ Potential for full-time employment based on performance. How to Apply 📩 Submit your resume and a cover letter with the subject line "Data Science Intern Application." 📅 Deadline: 11th July 2025 Equal Opportunity WebBoost Solutions by UM is committed to fostering an inclusive and diverse environment and encourages applications from all backgrounds.

Posted 1 week ago

Apply

2.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Summary We are looking for a skilled and passionate AI/ML Engineer to join our team and contribute to designing, developing, and deploying scalable machine learning models and AI solutions. The ideal candidate will have hands-on experience with data preprocessing, model building, evaluation, and deployment, with a strong foundation in mathematics, statistics, and software development. Key Responsibilities Design and implement machine learning models to solve business problems. Collect, preprocess, and analyze large datasets from various sources. Build, test, and optimize models using frameworks like TensorFlow, PyTorch, or Scikit-learn. Deploy ML models using cloud services (AWS, Azure, GCP) or edge platforms. Collaborate with data engineers, data scientists, and product teams. Monitor model performance and retrain models as necessary. Stay up to date with the latest research and advancements in AI/ML. Create documentation and reports to communicate findings and model results. Skills & Qualifications: Bachelor's/Master's degree in Computer Science, Data Science, AI/ML, or related field. 2+ years of hands-on experience in building and deploying ML models. Proficiency in Python (preferred), R, or similar languages. Experience with ML/DL frameworks such as TensorFlow, PyTorch, Scikit-learn, XGBoost. Strong grasp of statistics, probability, and algorithms. Familiarity with data engineering tools (e.g., Pandas, Spark, SQL). Experience in model deployment (Docker, Flask, FastAPI, MLflow, etc.). Knowledge of cloud-based ML services (AWS SageMaker, Azure ML, GCP AI Platform). (ref:hirist.tech)

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: Healthcare Analytics Specialist Experience Required: 3 to 5 Years Location: Gurugram (Hybrid) Position Summary The Analytics Specialist is responsible for driving insights & supporting decision-making by analyzing healthcare payer data, creating data pipelines, and managing complex analytics projects. This role involves collaborating with cross-functional teams (Operations, Product, IT, and external partners) to ensure robust data integration, reporting, and advanced analytics capabilities. The ideal candidate will have strong technical skills, payer domain expertise, and the ability to manage 3rd-party data sources effectively. Key Responsibilities Data Integration and ETL Pipelines Develop, maintain, and optimize end-to-end data pipelines, including ingestion, transformation, and loading of internal and external data sources. Collaborate with Operations to design scalable, secure, and high-performing data workflows. Implement best practices in data governance, version control, data security, and documentation. Analytics and Reporting Data Analysis: Analyze CPT-level data to identify trends, patterns, and insights relevant to healthcare services and payer rates. Build and maintain analytical models for cost, quality, and utilization metrics, leveraging tools such as Python, R, or SQL-based BI tools. Develop reports to communicate findings to stakeholders across the organization. 3rd-Party Data Management Ingest and preprocess 3rd-party data from multiple sources and transform it into unified structures for analytics and reporting. Ensure compliance with transparency requirements and enable downstream analytics. Design automated workflows to update and validate data, working closely with external vendors and technical teams. Establish best practices for data quality checks (e.g., encounter completeness, claim-level validations) and troubleshooting. Quality Assurance and Compliance Ensure data quality by implementing validation checks, audits, and anomaly detection frameworks. Maintain compliance with HIPAA, HITECH, and other relevant healthcare regulations and data privacy requirements. Participate in internal and external audits of data processes. Continuous Improvement & Thought Leadership Stay current with industry trends, analytics tools, and regulatory changes affecting payer analytics. Identify opportunities to enhance existing data processes, adopt new technologies, and promote a data-driven culture within the organization. Mentor junior analysts and share best practices in data analytics, reporting, and pipeline development. Required Qualifications Education & Experience Bachelor's degree in Health Informatics, Data Science, Computer Science, Statistics, or a related field (master's degree a plus). 3-5+ years of experience in healthcare analytics, payer operations, or related fields. Technical Skills Data Integration & ETL: Proficiency in building data pipelines using tools like SQL, Python, R, or ETL platforms (e.g., Talend, Airflow, or Data Factory). Databases & Cloud: Experience working with relational databases (SQL Server, PostgreSQL) and cloud environments (AWS, Azure, GCP). BI & Visualization: Familiarity with BI tools (Tableau, Power BI, Looker) for dashboard creation and data storytelling. MRF, All Claims, & Definitive Healthcare Data: Hands-on experience (or strong familiarity) with healthcare transparency data sets, claims data ingestion strategies, and provider/facility-level data from 3rd-party sources like Definitive Healthcare.
Healthcare Domain Expertise Strong understanding of claims data structures (UB-04, CMS-1500), coding systems (ICD, CPT, HCPCS), and payer processes. Knowledge of healthcare regulations (HIPAA, HITECH, transparency rules) and how they impact data sharing and management. Analytical & Problem-Solving Skills Proven ability to synthesize large datasets, pinpoint issues, and recommend data-driven solutions. Comfort with statistical analysis and predictive modeling using Python or R. Soft Skills Excellent communication and presentation skills, with the ability to convey technical concepts to non-technical stakeholders. Strong project management and organizational skills, with the ability to handle multiple tasks and meet deadlines. Collaborative mindset and willingness to work cross-functionally to achieve shared objectives. Preferred/Additional Qualifications Advanced degrees (MBA, MPH, MS in Analytics, or similar). Experience with healthcare cost transparency regulations and handling MRF data specifically for compliance. Familiarity with DataOps or DevOps practices to automate and streamline data pipelines. Certification in BI or data engineering (e.g., Microsoft Certified: Azure Data Engineer). Experience establishing data stewardship programs & leading data governance initiatives. Why Join Us? Impactful Work – Play a key role in leveraging payer data to reduce costs, improve quality, and shape population health strategies. Innovation – Collaborate on advanced analytics projects using state-of-the-art tools and platforms. Growth Opportunity – Be part of an expanding analytics team where you can lead initiatives, mentor others, and deepen your healthcare data expertise. Supportive Culture – Work in an environment that values open communication, knowledge sharing, and continuous learning.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Kolkata metropolitan area, West Bengal, India

Remote

Job Title: AI Engineer (Contract to Hire) Location: Hybrid (Remote + Onsite in Kolkata Office) Duration: 3-Month Full-Time Contract (Potential to convert to Full-Time Employee) Start Date: 1/08/2025 About the Role: · We are seeking a skilled and motivated AI Engineer for a 3-month full-time contract position, with the opportunity to convert into a permanent role based on performance. · The ideal candidate will have a strong background in machine learning, AI model development, and data engineering, particularly with experience in Google BigQuery and other cloud or traditional databases. · You will be responsible for end-to-end model development — from sourcing and preprocessing data to training, evaluating, and deploying models for real-world applications. Key Responsibilities: · Access and query data from BigQuery and other structured/unstructured databases. · Clean, preprocess, and transform large datasets for modeling and analysis. · Design, develop, and implement AI/ML models to solve business problems. · Collaborate with data engineers, analysts, and business teams to understand data needs and project objectives. · Evaluate model performance using standard metrics and iterate for improvement. · Deploy models into production environments and monitor their effectiveness. · Document workflows, decisions, and maintain code quality with version control tools (e.g., Git). Required Skills & Qualifications: a. Education & Experience: · Bachelor's or Master's degree in Computer Science & Engineering, Mathematics, Data Science, or a related technical field, OR equivalent hands-on industry experience in AI/ML and data engineering. · Candidates without formal degrees but with a strong project portfolio, certifications (e.g., Google Professional ML Engineer, Coursera, edX), or relevant open-source contributions are encouraged to apply. b. Technical Experience: · Proficient in Google BigQuery and other databases (SQL, NoSQL). · Strong programming skills in Python and experience with ML libraries/frameworks like Scikit-learn, TensorFlow, PyTorch, Pandas, and NumPy. · Proven experience in developing, training, and deploying AI/ML models. · Familiarity with cloud platforms (Google Cloud Platform, AWS, etc.) and MLOps workflows. · Experience with version control systems like Git. c. Soft Skills: · Strong analytical thinking and problem-solving mindset. · Excellent communication and collaboration skills. · Self-driven and capable of working independently in a hybrid work setup (remote + Kolkata office). Nice to have Skills: · Familiarity with data visualization tools such as Looker, Tableau, or Power BI. · Experience working in hybrid or distributed teams. · Basic understanding of Japanese is a plus. Work Arrangement: Hybrid: Primarily remote, with occasional onsite meetings at our Kolkata office. Must be available to work from the Kolkata office when required. Contract & Future Opportunity: · Initial Engagement: 3-month contract with full-time (100%) commitment. · Future Opportunity: High potential for conversion to a full-time permanent role, depending on performance and business needs.

Posted 2 weeks ago

Apply

6.0 years

30 - 40 Lacs

Gurugram, Haryana, India

On-site

Job Title: Data Scientist Location: Gurgaon Level/Band: Band C (Manager / Senior Manager) Department: Flight Safety About The Role We are seeking a motivated and highly skilled Data Scientist to join our Flight Safety team. This role is critical in leveraging advanced analytics and machine learning to drive safety-related decisions and improve operational processes. The successful candidate will work extensively with time-series and multivariate data to detect anomalies, uncover patterns, and develop predictive models that enhance flight safety and risk mitigation. You will play a strategic role in applying AI/ML techniques, building custom algorithms, and conducting exploratory data analysis, with a strong focus on aviation safety and operational excellence. Key Responsibilities Perform exploratory data analysis (EDA) to uncover patterns, trends, and insights from complex datasets. Apply advanced anomaly detection techniques on time-series and multivariate data to identify irregularities and potential safety risks. Develop and implement machine learning and deep learning models (e.g., ANN, RNN, CNN) for predictive safety analytics. Utilize big data analytics tools and AI/ML frameworks to support data-driven decision-making in flight safety. Design and develop custom algorithms to meet domain-specific requirements with high accuracy and reliability. Write efficient, maintainable code using Python and MATLAB for data processing, modeling, and algorithm deployment. Collect, clean, and preprocess data from multiple sources to ensure high-quality inputs for model development. Collaborate with cross-functional teams to translate analytical findings into actionable safety initiatives. Required Qualifications & Experience Educational Background: Bachelor's degree in Engineering with a specialization in Data Science or a related field. Certification in Data Science (such as a completed Data Scientist course or equivalent). Professional Experience Minimum 6 years of hands-on experience in data science, machine learning, or analytics roles. Proven experience in developing AI/ML models, particularly in anomaly detection and predictive analytics. Key Skills Data Analysis & Exploratory Data Analysis (EDA) Anomaly Detection (Time-Series & Multivariate Data) Machine Learning & Deep Learning (ANN, RNN, CNN) Big Data Analytics & AI-driven Risk Forecasting Strong Programming Skills: Python and MATLAB Custom Algorithm Design & Development Data Collection, Preprocessing, and Integration Why Join Us? Contribute to cutting-edge data science initiatives in the aviation safety domain. Work in a collaborative and innovation-driven environment. Be part of a mission-critical team focused on predictive safety and operational excellence. Skills: analytics, learning, custom algorithm design & development, anomaly detection (time-series & multivariate data), machine learning & deep learning (ann, rnn, cnn), matlab, data analysis, data analysis & exploratory data analysis (eda), big data analytics & ai-driven risk forecasting, flight safety, machine learning, python, data science, data collection, preprocessing, and integration

Posted 2 weeks ago

Apply

0.0 - 2.0 years

2 - 5 Lacs

Pune, Maharashtra

Remote

AI/ML Engineer – Junior Location: Pune, Maharashtra Experience: 1–2 years Employment: Full-time Role Overview: Join our AI/ML team to build and deploy generative and traditional ML models—from ideation and data preparation to production pipelines and performance optimization. You'll solve real problems, handle data end-to-end, navigate the AI development lifecycle, and contribute to both model innovation and operational excellence. Key Responsibilities: ● Full AI/ML Lifecycle: Engage from problem scoping through data collection, modeling, deployment, monitoring, and iteration. ● Generative & ML Models: Build and fine-tune transformer-based LLMs (like GPT, BERT), both commercial and local, GANs, and diffusion models; also develop traditional ML models for classification, regression, etc. Experience with DL models for computer vision like CNN, R-CNN, etc. is a plus. ● Data Engineering: Clean, label, preprocess, augment, and version datasets. Build ETL pipelines and features for model training. Experience with libraries like pandas, numpy, nltk, etc. ● Model Deployment & MLOps: Containerize models (Docker), deploy APIs/microservices, implement CI/CD for ML, monitor performance and drift. ● Troubleshooting & Optimization: Analyze errors; handle overfitting/underfitting, hallucinations, class imbalance, and latency concerns; tune model performance. ● Collaboration: Partner with project managers, DevOps, backend engineers, and senior ML staff to integrate AI features. ● Innovation & Research: Stay current with GenAI (prompt techniques, RAG, LangChain, LLM models), test new architectures, contribute insights. ● Documentation: Maintain reproducible experiments, write clear docs, follow best practices. Required Skills: ● Bachelor's in CS, AI, Data Science, or related field. ● 1–2 years in ML/AI roles; hands-on with both generative and traditional models. ● Proficient in Python and ML frameworks (PyTorch, TensorFlow, Hugging Face, scikit-learn). ● Strong understanding of the AI project lifecycle and MLOps principles. ● Experience in data workflows: preprocessing, feature engineering, dataset management. ● Familiarity with Docker, REST APIs, Git, and cloud platforms (AWS/GCP/Azure). ● Sharp analytical and problem-solving skills, with ability to debug and iterate models. ● Excellent communication and teamwork abilities. Preferred Skills: ● Projects involving ChatGPT, LLaMA, Stable Diffusion, or similar models. ● Experience with prompt engineering, RAG pipelines, vector DBs (FAISS, Pinecone, Weaviate). ● Exposure to CI/CD pipelines and ML metadata/versioning. ● GitHub portfolio or publications in generative AI. ● Awareness of ethics, bias mitigation, privacy, and compliance in AI. Job Type: Full-time Pay: ₹200,000.00 - ₹500,000.00 per year Benefits: Provident Fund Work Location: Hybrid remote in Pune, Maharashtra

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

Remote

Data Science Intern (Paid) Company: WebBoost Solutions by UM Location: Remote Duration: 3 months Opportunity: Full-time based on performance, with a Certificate of Internship About WebBoost Solutions by UM WebBoost Solutions by UM provides aspiring professionals with hands-on experience in data science, offering real-world projects to develop and refine their analytical and machine learning skills for a successful career. Responsibilities ✅ Collect, preprocess, and analyze large datasets. ✅ Develop predictive models and machine learning algorithms. ✅ Perform exploratory data analysis (EDA) to extract meaningful insights. ✅ Create data visualizations and dashboards for effective communication of findings. ✅ Collaborate with cross-functional teams to deliver data-driven solutions. Requirements 🎓 Enrolled in or graduate of a program in Data Science, Computer Science, Statistics, or a related field. 🐍 Proficiency in Python for data analysis and modeling. 🧠 Knowledge of machine learning libraries such as scikit-learn, TensorFlow, or PyTorch (preferred). 📊 Familiarity with data visualization tools (Tableau, Power BI, or Matplotlib). 🧐 Strong analytical and problem-solving skills. 🗣 Excellent communication and teamwork abilities. Stipend & Benefits 💰 Stipend: ₹7,500 - ₹15,000 (Performance-Based). ✔ Hands-on experience in data science projects. ✔ Certificate of Internship & Letter of Recommendation. ✔ Opportunity to build a strong portfolio of data science models and applications. ✔ Potential for full-time employment based on performance. How to Apply 📩 Submit your resume and a cover letter with the subject line "Data Science Intern Application." 📅 Deadline: 10th July 2025 Equal Opportunity WebBoost Solutions by UM is committed to fostering an inclusive and diverse environment and encourages applications from all backgrounds.

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

India

On-site

Role: Machine Learning Engineer – Large Language Models Roles And Responsibilities: Design, develop, and deploy large-scale language models for a range of NLP tasks such as text generation, summarization, question answering, and sentiment analysis. Fine-tune pre-trained models (e.g., GPT, BERT, T5) on domain-specific data to optimize performance and accuracy. Collaborate with data engineering teams to collect, preprocess, and curate large datasets for training and evaluation. Experiment with model architectures, hyperparameters, and training techniques to improve model efficiency and performance. Develop and maintain pipelines for model training, evaluation, and deployment in a scalable and reproducible manner. Implement and optimize inference solutions to ensure models are performant in production environments. Monitor and evaluate model performance in production, making improvements as needed. Document methodologies, experiments, and findings to share with stakeholders and other team members. Stay current with advancements in LLMs, NLP, and machine learning, and apply new techniques to existing projects. Collaborate with product managers to understand project requirements and translate them into technical solutions. Requirements: 3+ years of experience in machine learning and natural language processing. Proven experience working with LLMs (such as GPT, BERT, T5, etc.) in production environments. Demonstrated experience fine-tuning and deploying large-scale language models. Technical Skills: Proficiency in Python and experience with ML libraries and frameworks such as PyTorch, TensorFlow, Hugging Face Transformers, etc. Strong understanding of deep learning architectures (RNNs, CNNs, Transformers) and hands-on experience with Transformer-based architectures. Familiarity with cloud platforms (AWS, GCP, Azure) and experience with containerization tools like Docker and orchestration with Kubernetes. Experience with data preprocessing, feature engineering, and data pipeline development. Knowledge of distributed training techniques and optimization methods for handling large datasets. Soft Skills: Excellent communication and collaboration skills, with an ability to work effectively across interdisciplinary teams. Strong analytical and problem-solving skills, with attention to detail and a passion for continuous learning. Ability to work independently and manage multiple projects in a fast-paced, dynamic environment. Preferred Qualifications: Experience with prompt engineering and techniques to maximize the effectiveness of LLMs in various applications. Knowledge of ethical considerations and bias mitigation techniques in language models. Familiarity with reinforcement learning, especially RLHF (Reinforcement Learning from Human Feedback). Experience with model compression and deployment techniques for resource-constrained environments. Contributions to open-source projects or publications in reputable machine learning journals. Professional development opportunities, including access to conferences, workshops, and training programs. A collaborative, inclusive work culture that values innovation and teamwork. Qualifications: Bachelor's or Master's degree in Computer Science, Machine Learning, Data Science, or a related field. Primary skills (Must have): Python, PyTorch, TensorFlow, Hugging Face Transformers. Familiarity with cloud platforms (AWS, GCP, Azure), Docker, Kubernetes. Interview Details: Video screening with HR L1 - Technical Interview L2 - Technical and HR Round Note: Candidate must have own laptop.
Must follow Kuwait calendar. Working Hours: 11:30 AM to 7:30 PM. Working Days: Sunday to Thursday.

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

Remote

Artificial Intelligence & Machine Learning Intern (Remote | 3 Months) Company: INLIGHN TECH Location: Remote Duration: 3 Months Stipend (Top Performers): ₹15,000 Perks: Certificate | Letter of Recommendation | Hands-on Training About INLIGHN TECH INLIGHN TECH empowers students and recent graduates through hands-on, project-based internships. Our AI & ML Internship is tailored to develop your expertise in building intelligent systems using real-world datasets and machine learning algorithms. Role Overview As an AI & ML Intern , you’ll work on projects involving model development, data preprocessing, and algorithm implementation. You'll gain practical experience applying artificial intelligence concepts to solve real business problems. Key Responsibilities Collect and preprocess structured and unstructured data Implement supervised and unsupervised machine learning algorithms Work on deep learning models using frameworks like TensorFlow or PyTorch Evaluate model performance and tune hyperparameters Develop intelligent solutions and predictive systems Collaborate with peers on AI-driven projects Requirements Pursuing or recently completed a degree in Computer Science, AI, Data Science, or a related field Proficient in Python and libraries such as Pandas, NumPy, Scikit-learn Familiar with machine learning and deep learning concepts Knowledge of TensorFlow, Keras, or PyTorch is a plus Strong mathematical and analytical thinking Enthusiastic about AI/ML innovations and eager to learn What You’ll Gain Real-world experience developing AI and ML models Internship Completion Certificate Letter of Recommendation for high-performing interns Portfolio of AI/ML projects for career building Potential full-time offer based on performance

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

Remote

Data Science Intern (Remote | 3 Months) Company: INLIGHN TECH Location: Remote Duration: 3 Months Stipend (Top Performers): ₹15,000 Perks: Certificate | Letter of Recommendation | Hands-on Training About INLIGHN TECH INLIGHN TECH empowers students and recent graduates through hands-on, project-based internships. Our Data Science Internship is designed to enhance your technical skills while solving real-world data challenges, equipping you for the industry. Role Overview As a Data Science Intern , you’ll dive deep into real datasets, apply machine learning techniques, and generate insights that support informed decision-making. This internship provides the perfect launchpad for aspiring data professionals. Key Responsibilities Collect, clean, and preprocess data for analysis Apply statistical and machine learning techniques Build models for classification, regression, and clustering tasks Develop dashboards and visualizations using Python or Power BI Present actionable insights to internal stakeholders Collaborate with a team of peers on live data projects Requirements Currently pursuing or recently completed a degree in Computer Science, Data Science, Mathematics, or a related field Solid understanding of Python and key libraries: Pandas, NumPy, Scikit-learn Familiarity with machine learning algorithms and SQL Strong analytical and problem-solving abilities Eagerness to learn and grow in a fast-paced environment What You’ll Gain Real-world experience with industry-standard tools and datasets Internship Completion Certificate Letter of Recommendation for outstanding contributors Potential for full-time opportunities A portfolio of completed data science projects

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Delhi, India

On-site

Designation: ML / MLOps Engineer Location: Noida (Sector-132) Key Responsibilities: • Model Development & Algorithm Optimization: Design, implement, and optimize ML models and algorithms using libraries and frameworks such as TensorFlow, PyTorch, and scikit-learn to solve complex business problems. • Training & Evaluation: Train and evaluate models using historical data, ensuring accuracy, scalability, and efficiency while fine-tuning hyperparameters. • Data Preprocessing & Cleaning: Clean, preprocess, and transform raw data into a suitable format for model training and evaluation, applying industry best practices to ensure data quality. • Feature Engineering: Conduct feature engineering to extract meaningful features from data that enhance model performance and improve predictive capabilities. • Model Deployment & Pipelines: Build end-to-end pipelines and workflows for deploying machine learning models into production environments, leveraging Azure Machine Learning and containerization technologies like Docker and Kubernetes. • Production Deployment: Develop and deploy machine learning models to production environments, ensuring scalability and reliability using tools such as Azure Kubernetes Service (AKS). • End-to-End ML Lifecycle Automation: Automate the end-to-end machine learning lifecycle, including data ingestion, model training, deployment, and monitoring, ensuring seamless operations and faster model iteration. • Performance Optimization: Monitor and improve inference speed and latency to meet real-time processing requirements, ensuring efficient and scalable solutions. • NLP, CV, GenAI Programming: Work on machine learning projects involving Natural Language Processing (NLP), Computer Vision (CV), and Generative AI (GenAI), applying state-of-the-art techniques and frameworks to improve model performance. • Collaboration & CI/CD Integration: Collaborate with data scientists and engineers to integrate ML models into production workflows, building and maintaining continuous integration/continuous deployment (CI/CD) pipelines using tools like Azure DevOps, Git, and Jenkins. • Monitoring & Optimization: Continuously monitor the performance of deployed models, adjusting parameters and optimizing algorithms to improve accuracy and efficiency. • Security & Compliance: Ensure all machine learning models and processes adhere to industry security standards and compliance protocols, such as GDPR and HIPAA. • Documentation & Reporting: Document machine learning processes, models, and results to ensure reproducibility and effective communication with stakeholders. Required Qualifications: • Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field. • 3+ years of experience in machine learning operations (MLOps), cloud engineering, or similar roles. • Proficiency in Python, with hands-on experience using libraries such as TensorFlow, PyTorch, scikit-learn, Pandas, and NumPy. • Strong experience with Azure Machine Learning services, including Azure ML Studio, Azure Databricks, and Azure Kubernetes Service (AKS). • Knowledge and experience in building end-to-end ML pipelines, deploying models, and automating the machine learning lifecycle. • Expertise in Docker, Kubernetes, and container orchestration for deploying machine learning models at scale. • Experience in data engineering practices and familiarity with cloud storage solutions like Azure Blob Storage and Azure Data Lake. • Strong understanding of NLP, CV, or GenAI programming, along with the ability to apply these techniques to real-world business problems. • Experience with Git, Azure DevOps, or similar tools to manage version control and CI/CD pipelines. • Solid experience in machine learning algorithms, model training, evaluation, and hyperparameter tuning.

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

Remote

Data Science Intern (Paid) Company: WebBoost Solutions by UM Location: Remote Duration: 3 months Opportunity: Full-time based on performance, with a Certificate of Internship About WebBoost Solutions by UM WebBoost Solutions by UM provides aspiring professionals with hands-on experience in data science, offering real-world projects to develop and refine their analytical and machine learning skills for a successful career. Responsibilities ✅ Collect, preprocess, and analyze large datasets. ✅ Develop predictive models and machine learning algorithms. ✅ Perform exploratory data analysis (EDA) to extract meaningful insights. ✅ Create data visualizations and dashboards for effective communication of findings. ✅ Collaborate with cross-functional teams to deliver data-driven solutions. Requirements 🎓 Enrolled in or graduate of a program in Data Science, Computer Science, Statistics, or a related field. 🐍 Proficiency in Python or R for data analysis and modeling. 🧠 Knowledge of machine learning libraries such as scikit-learn, TensorFlow, or PyTorch (preferred). 📊 Familiarity with data visualization tools (Tableau, Power BI, or Matplotlib). 🧐 Strong analytical and problem-solving skills. 🗣 Excellent communication and teamwork abilities. Stipend & Benefits 💰 Stipend: ₹7,500 - ₹15,000 (Performance-Based). ✔ Hands-on experience in data science projects. ✔ Certificate of Internship & Letter of Recommendation. ✔ Opportunity to build a strong portfolio of data science models and applications. ✔ Potential for full-time employment based on performance. How to Apply 📩 Submit your resume and a cover letter with the subject line "Data Science Intern Application." 📅 Deadline: 10th July 2025 Equal Opportunity WebBoost Solutions by UM is committed to fostering an inclusive and diverse environment and encourages applications from all backgrounds.

Posted 2 weeks ago

Apply

0.0 - 3.0 years

0 Lacs

Hyderabad, Telangana

On-site

General information Country India State Telangana City Hyderabad Job ID 44314 Department Development Description & Requirements Summary: As an AI/ML Developer, you’ll play a pivotal role in creating and delivering cutting-edge enterprise applications and automations using Infor’s AI, RPA, and OS platform technology. Your mission will be to identify innovative use cases, develop proof of concepts (PoCs), and deliver enterprise automation solutions that elevate workforce productivity and improve business performance for our customers. Key Responsibilities: Use Case Identification: Dive deep into customer requirements and business challenges. Identify innovative use cases that can be addressed through AI/ML solutions. Data Insights: Perform exploratory data analysis on large and complex datasets. Assess data quality, extract insights, and share findings. Data Preparation: Gather relevant datasets for training and testing. Clean, preprocess, and augment data to ensure suitability for AI tasks. Model Development: Train and fine-tune AI/ML models. Evaluate performance using appropriate metrics and benchmarks, optimizing for efficiency. Integration and Deployment: Collaborate with software engineers and developers to seamlessly integrate AI/ML models into enterprise systems and applications. Handle production deployment challenges. Continuous Improvement: Evaluate and enhance the performance and capabilities of deployed AI products. Monitor user feedback and iterate on models and algorithms to address limitations and enhance user experience. Proof of Concepts (PoCs): Develop PoCs to validate the feasibility and effectiveness of proposed solutions. Showcase the art of the possible to our clients. Collaboration with Development Teams: Work closely with development teams on new use cases. Best Practices and Requirements: Collaborate with team members to determine best practices and requirements. Innovation: Contribute to our efforts in enterprise automation and cloud innovation. Key Requirements: Experience: A minimum 3 years of hands-on experience in implementing AI/ML models in enterprise systems. AI/ML Concepts: In-depth understanding of supervised and unsupervised learning, reinforcement learning, deep learning, and probabilistic models. Programming Languages: Proficiency in Python or R, along with querying languages like SQL. Data Handling: Ability to work with large datasets, perform data preprocessing, and wrangle data effectively. Cloud Infrastructure: Experience with AWS Sagemaker or Azure ML for implementing ML solutions is highly preferred. Frameworks and Libraries: Familiarity with scikit-learn, Keras, TensorFlow, PyTorch, or NLTK is a plus. Analytical Skills: Strong critical thinking abilities to identify problems, formulate hypotheses, and design experiments. Business Process Understanding: Good understanding of business processes and how they can be automated. Domain Expertise: Familiarity with Demand Forecasting, Anomaly Detection, Pricing, Recommendation, or Analytics solutions. Global Project Experience: Proven track record of working with global customers on multiple projects. Customer Interaction: Experience facing customers and understanding their needs. Communication Skills: Excellent verbal and written communication skills. Analytical Mindset: Strong analytical and problem-solving skills. Collaboration: Ability to work independently and collaboratively. Educational Background: Bachelor’s or Master’s degree in Computer Science, Mathematics, Statistics, or a related field. 
Specialization: Coursework or specialization in AI, ML, Statistics & Probability, Deep Learning, Computer Vision, or NLP/NLU is advantageous. About Infor Infor is a global leader in business cloud software products for companies in industry specific markets. Infor builds complete industry suites in the cloud and efficiently deploys technology that puts the user experience first, leverages data science, and integrates easily into existing systems. Over 60,000 organizations worldwide rely on Infor to help overcome market disruptions and achieve business-wide digital transformation. For more information visit www.infor.com Our Values At Infor, we strive for an environment that is founded on a business philosophy called Principle Based Management™ (PBM™) and eight Guiding Principles: integrity, stewardship & compliance, transformation, principled entrepreneurship, knowledge, humility, respect, self-actualization. Increasing diversity is important to reflect our markets, customers, partners, and communities we serve in now and in the future. We have a relentless commitment to a culture based on PBM. Informed by the principles that allow a free and open society to flourish, PBM™ prepares individuals to innovate, improve, and transform while fostering a healthy, growing organization that creates long-term value for its clients and supporters and fulfillment for its employees. Infor is an Equal Opportunity Employer. We are committed to creating a diverse and inclusive work environment. Infor does not discriminate against candidates or employees because of their sex, race, gender identity, disability, age, sexual orientation, religion, national origin, veteran status, or any other protected status under the law. If you require accommodation or assistance at any time during the application or selection processes, please submit a request by following the directions located in the FAQ section at the bottom of the infor.com/about/careers webpage.

Posted 2 weeks ago

Apply

1.0 - 3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description We are looking for a Data Scientist who is passionate about working in the healthcare AI domain. This role will involve collaborating with cross-functional teams, including senior data scientists, application developers, and radiology experts, to develop and refine AI solutions for medical imaging. You will gain hands-on experience working on advanced projects and research opportunities in Computer Vision and Deep Learning. Key Responsibilities Develop and optimize deep learning models for medical image analysis, including segmentation, classification and object detection. Preprocess and clean medical imaging datasets to enhance AI model performance and reliability. Conduct model evaluation, error analysis, and performance benchmarking to improve accuracy and generalization. Collaborate with radiologists and domain experts to refine AI model outputs and ensure clinical relevance. Experience : 1-3 years Data Science/Machine Learning experience Required Skills Strong fundamental knowledge of machine learning, computer vision and image processing Demonstrable experience training convolutional neural networks for segmentation and object detection Strong programming skills in Python and familiarity with data science and image processing libraries (e.g., NumPy, pandas, scikit-learn, opencv, PIL). Hands-on experience with deep learning frameworks like Keras or PyTorch. Experience with model evaluation and error analysis. Desired Skills Familiarity with healthcare or radiology datasets. Familiarity with ONNX format Qualification Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Statistics, or a related field. Company Location : Baner, Pune, India (ref:hirist.tech)

Posted 2 weeks ago

Apply

9.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: AI/ML Engineer Location: Gurgaon (Hybrid) Experience: 4–9 Years Job Type: Full-time Job Summary: We are seeking a highly skilled and motivated AI/ML Engineer with strong AWS Cloud experience to join our data science and engineering team. The ideal candidate will design, build, and deploy scalable machine learning models and solutions while leveraging AWS services to manage infrastructure and model workflows. Key Responsibilities: Design, develop, and deploy machine learning models for predictive analytics, classification, NLP, or computer vision tasks. Use AWS services like SageMaker, S3, Lambda, Glue, EC2, EKS, and Step Functions for ML workflows and deployments. Preprocess and analyze large datasets for training and inference. Build scalable data pipelines and automate model training and evaluation. Collaborate with data engineers, scientists, and DevOps teams to productionize models. Optimize models for performance, interpretability, and scalability. Implement MLOps best practices (versioning, monitoring, model retraining pipelines). Conduct A/B testing and model performance tuning. Key Skills and Qualifications: Technical Skills: Strong proficiency in Python (including pandas, NumPy, Scikit-learn, etc.) Solid understanding of ML algorithms (regression, classification, clustering, deep learning) Experience with TensorFlow, PyTorch, or Keras Hands-on experience with AWS cloud services: SageMaker, S3, Glue, Lambda, Step Functions, CloudWatch, IAM, etc. Experience with MLOps tools: MLflow, Docker, Git, CI/CD pipelines Knowledge of data pipeline frameworks (Airflow, AWS Glue, etc.) Familiarity with SQL and data wrangling in distributed environments (e.g., Spark) Nice to Have: Experience with NLP, LLMs, or Computer Vision Exposure to Big Data technologies (Kafka, Hadoop, etc.) Familiarity with API development using Flask or FastAPI Knowledge of Kubernetes for model container orchestration Educational Qualifications: Bachelor's or Master's in Computer Science, Data Science, Statistics, or related field.

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

Remote

Data Science Intern (Paid) Company: Unified Mentor Location: Remote Duration: 3 months Application Deadline: 10th July 2025 Opportunity: Full-time role based on performance + Internship Certificate About Unified Mentor Unified Mentor provides aspiring professionals with hands-on experience in data science through industry-relevant projects, helping them build successful careers. Responsibilities Collect, preprocess, and analyze large datasets Develop predictive models and machine learning algorithms Perform exploratory data analysis (EDA) to extract insights Create data visualizations and dashboards for effective communication Collaborate with cross-functional teams to deliver data-driven solutions Requirements Enrolled in or a graduate of Data Science, Computer Science, Statistics, or a related field Proficiency in Python or R for data analysis and modeling Knowledge of machine learning libraries such as scikit-learn, TensorFlow, or PyTorch (preferred) Familiarity with data visualization tools like Tableau, Power BI, or Matplotlib Strong analytical and problem-solving skills Excellent communication and teamwork abilities Stipend & Benefits Stipend: ₹7,500 - ₹15,000 (Performance-Based) (Paid) Hands-on experience in data science projects Certificate of Internship & Letter of Recommendation Opportunity to build a strong portfolio of data science models and applications Potential for full-time employment based on performance How to Apply Submit your resume and a cover letter with the subject line "Data Science Intern Application." Equal Opportunity: Unified Mentor welcomes applicants from all backgrounds.

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

Remote

Machine Learning Intern (Paid) Company: Unified Mentor Location: Remote Duration: 3 months Opportunity: Full-time based on performance, with Certificate of Internship Application Deadline : 10th July 2025 About Unified Mentor Unified Mentor provides students and graduates with hands-on learning opportunities and career growth in Machine Learning and Data Science. Role Overview As a Machine Learning Intern, you will work on real-world projects, enhancing your practical skills in data analysis and model development. Responsibilities ✅ Design, test, and optimize machine learning models ✅ Analyze and preprocess datasets ✅ Develop algorithms and predictive models ✅ Use tools like TensorFlow, PyTorch, and Scikit-learn ✅ Document findings and create reports Requirements 🎓 Enrolled in or a graduate of a relevant program (Computer Science, AI, Data Science, or related field) 🧠 Knowledge of machine learning concepts and algorithms 💻 Proficiency in Python or R (preferred) 🤝 Strong analytical and teamwork skills Benefits 💰 Stipend: ₹7,500 - ₹15,000 (Performance-Based) (Paid) ✔ Hands-on machine learning experience ✔ Internship Certificate & Letter of Recommendation ✔ Real-world project contributions for your portfolio Equal Opportunity Unified Mentor is an equal-opportunity employer, welcoming candidates from all backgrounds.

Posted 2 weeks ago

Apply

0 years

5 Lacs

Gurgaon

On-site

Technology Gurgaon, India Publicis Re:Sources India Intermediate On-Site 7/7/2025 115065 Company description Re:Sources is the backbone of Publicis Groupe, the world's third-largest communications group. Formed in 1998 as a small team to service a few Publicis Groupe firms, Re:Sources has grown to 5,000+ people servicing a global network of prestigious advertising, public relations, media, healthcare and digital marketing agencies. We provide technology solutions and business services including finance, accounting, legal, benefits, procurement, tax, real estate, treasury and risk management to help Publicis Groupe agencies do what they do best: create and innovate for their clients. In addition to providing essential, everyday services to our agencies, Re:Sources develops and implements platforms, applications and tools to enhance productivity, encourage collaboration and enable professional and personal development. We continually transform to keep pace with our ever-changing communications industry and thrive on a spirit of innovation felt around the globe. With our support, Publicis Groupe agencies continue to create and deliver award-winning campaigns for their clients. Overview Title: Gen AI and Machine Learning Engineer Skills (must have): Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field. Experience with Agentic AI frameworks. Strong programming skills in languages such as Python, SQL, etc. Ability to build an analytical approach based on business requirements, then develop, train, and deploy machine learning models and AI algorithms. Exposure to Gen AI models such as OpenAI, Google Gemini, Runway ML, etc. Experience in developing and deploying AI/ML and deep learning solutions with libraries and frameworks such as TensorFlow, PyTorch, Scikit-learn, OpenCV and/or Keras. Knowledge of math, probability, and statistics. Familiarity with a variety of Machine Learning, NLP, and deep learning algorithms. Exposure to developing APIs using Flask/Django. Good experience in cloud infrastructure such as AWS, Azure or GCP. Exposure to Gen AI, Vector DB/Embeddings, LLMs (Large Language Models). Skills (good to have): Experience with MLOps: MLflow, Kubeflow, CI/CD pipelines, etc. Experience with Docker, Kubernetes, etc. Exposure to HTML, CSS, JavaScript/jQuery, Node.js, Angular/React. Experience in Flask/Django is a bonus. Responsibilities: Collaborate with software engineers, business stakeholders and/or domain experts to translate business requirements into product features, tools, projects, AI/ML, NLP/NLU and deep learning solutions. Develop, implement, and deploy AI/ML solutions. Preprocess and analyze large datasets to identify patterns, trends, and insights. Evaluate, validate, and optimize AI/ML models to ensure their accuracy, efficiency, and generalizability. Deploy applications and AI/ML models into cloud environments such as AWS, Azure or GCP. Monitor and maintain the performance of AI/ML models in production environments, identifying opportunities for improvement and updating models as needed. Document AI/ML model development processes, results, and lessons learned to facilitate knowledge sharing and continuous improvement.

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

India

Remote

Job Summary: We are looking for a passionate and skilled AI/ML Developer with a minimum of 2 years of experience to join our team. The ideal candidate will be responsible for developing machine learning models, implementing AI solutions, and collaborating with cross-functional teams to integrate these systems into production environments. Key Responsibilities: Design, develop, and deploy machine learning models and AI systems. Collect, preprocess, and analyze large datasets from diverse sources. Optimize and fine-tune models for performance and scalability. Work with engineering teams to integrate ML models into production-ready applications. Monitor and maintain deployed models, and retrain as needed. Stay current with the latest research and trends in AI and machine learning. Document processes, experiments, and outcomes for transparency and reproducibility. Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Science, Artificial Intelligence, or a related field. Minimum 2 years of hands-on experience in AI/ML development. Proficient in Python and libraries such as Scikit-learn, TensorFlow, PyTorch, or Keras. Strong understanding of machine learning algorithms, neural networks, and data structures. Experience with data processing tools (Pandas, NumPy), and model deployment frameworks (e.g., Flask, FastAPI, Docker). Familiarity with cloud platforms like AWS, GCP, or Azure. Knowledge of software development best practices (version control, testing, CI/CD). Preferred Qualifications: Experience with NLP, computer vision, or reinforcement learning. Knowledge of MLOps and model monitoring tools (MLflow, Kubeflow). Contribution to open-source projects or published ML research. Exposure to large-scale distributed systems or big data tools (Spark, Hadoop). Soft Skills: Strong problem-solving skills and analytical thinking. Excellent communication and team collaboration abilities. Self-motivated and eager to learn and implement new technologies. What We Offer: Competitive salary and performance-based bonuses. Flexible working hours and remote work options. Opportunity to work on cutting-edge AI/ML projects. Career development support and training opportunities. Collaborative, inclusive, and innovation-driven work culture. Job Types: Full-time, Permanent Pay: ₹13,671.07 - ₹85,229.87 per month Benefits: Paid sick time Paid time off Location Type: In-person Schedule: Day shift Fixed shift Monday to Friday Work Location: In person Speak with the employer +91 9016790313

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

Remote

Job Title: Machine Learning Intern
Company: Optimspace.in
Location: Remote
Duration: 3 months
Opportunity: Full-time based on performance, with Certificate of Internship

About Optimspace.in
Optimspace.in provides students and graduates with hands-on learning opportunities and career growth in Machine Learning and Data Science.

Role Overview
As a Machine Learning Intern, you will work on real-world projects, enhancing your practical skills in data analysis and model development.

Responsibilities
Design, test, and optimize machine learning models.
Analyze and preprocess datasets.
Develop algorithms and predictive models.
Use tools like TensorFlow, PyTorch, and Scikit-learn.
Document findings and create reports.

Requirements
Enrolled in or a graduate of a relevant program (Computer Science, AI, Data Science, or related field).
Knowledge of machine learning concepts and algorithms.
Proficiency in Python or R (preferred).
Strong analytical and teamwork skills.

Benefits
Stipend: ₹7,500 - ₹15,000 (Performance-Based) (Paid).
Hands-on machine learning experience.
Internship Certificate & Letter of Recommendation.
Real-world project contributions for your portfolio.

How to Apply
📩 Submit your application with "Machine Learning Intern Application" as the subject.
📅 Deadline: 9th July 2025
Note: Optimspace.in is an equal opportunity employer, welcoming diverse applicants.
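For candidates gauging what "design, test, and optimize machine learning models" looks like in practice, here is a minimal scikit-learn example of the basic workflow the internship describes: split the data, preprocess it, train a model, and report a test metric. The built-in breast-cancer dataset is used purely as a placeholder.

```python
# Minimal end-to-end workflow: load data, preprocess, train, and evaluate.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Scaling and the classifier are chained so preprocessing is applied consistently.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"Test accuracy: {accuracy:.3f}")
```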

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Designation: ML / MLOps Engineer
Location: Noida (Sector-132)

Key Responsibilities:
• Model Development & Algorithm Optimization: Design, implement, and optimize ML models and algorithms using libraries and frameworks such as TensorFlow, PyTorch, and scikit-learn to solve complex business problems.
• Training & Evaluation: Train and evaluate models using historical data, ensuring accuracy, scalability, and efficiency while fine-tuning hyperparameters.
• Data Preprocessing & Cleaning: Clean, preprocess, and transform raw data into a suitable format for model training and evaluation, applying industry best practices to ensure data quality.
• Feature Engineering: Conduct feature engineering to extract meaningful features from data that enhance model performance and improve predictive capabilities.
• Model Deployment & Pipelines: Build end-to-end pipelines and workflows for deploying machine learning models into production environments, leveraging Azure Machine Learning and containerization technologies like Docker and Kubernetes.
• Production Deployment: Develop and deploy machine learning models to production environments, ensuring scalability and reliability using tools such as Azure Kubernetes Service (AKS).
• End-to-End ML Lifecycle Automation: Automate the end-to-end machine learning lifecycle, including data ingestion, model training, deployment, and monitoring, ensuring seamless operations and faster model iteration.
• Performance Optimization: Monitor and improve inference speed and latency to meet real-time processing requirements, ensuring efficient and scalable solutions.
• NLP, CV, GenAI Programming: Work on machine learning projects involving Natural Language Processing (NLP), Computer Vision (CV), and Generative AI (GenAI), applying state-of-the-art techniques and frameworks to improve model performance.
• Collaboration & CI/CD Integration: Collaborate with data scientists and engineers to integrate ML models into production workflows, building and maintaining continuous integration/continuous deployment (CI/CD) pipelines using tools like Azure DevOps, Git, and Jenkins.
• Monitoring & Optimization: Continuously monitor the performance of deployed models, adjusting parameters and optimizing algorithms to improve accuracy and efficiency.
• Security & Compliance: Ensure all machine learning models and processes adhere to industry security standards and compliance protocols, such as GDPR and HIPAA.
• Documentation & Reporting: Document machine learning processes, models, and results to ensure reproducibility and effective communication with stakeholders.

Required Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
• 3+ years of experience in machine learning operations (MLOps), cloud engineering, or similar roles.
• Proficiency in Python, with hands-on experience using libraries such as TensorFlow, PyTorch, scikit-learn, Pandas, and NumPy.
• Strong experience with Azure Machine Learning services, including Azure ML Studio, Azure Databricks, and Azure Kubernetes Service (AKS).
• Knowledge and experience in building end-to-end ML pipelines, deploying models, and automating the machine learning lifecycle.
• Expertise in Docker, Kubernetes, and container orchestration for deploying machine learning models at scale.
• Experience in data engineering practices and familiarity with cloud storage solutions like Azure Blob Storage and Azure Data Lake.
• Strong understanding of NLP, CV, or GenAI programming, along with the ability to apply these techniques to real-world business problems.
• Experience with Git, Azure DevOps, or similar tools to manage version control and CI/CD pipelines.
• Solid experience in machine learning algorithms, model training, evaluation, and hyperparameter tuning.
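The role above centers on automating the ML lifecycle, and MLflow-style tracking is a common building block for that. The following is a small, hedged sketch of experiment tracking with MLflow: it trains a toy classifier, then logs a hyperparameter, a metric, and the model artifact. The dataset, parameter, and metric names are illustrative, not taken from the role.

```python
# Hedged sketch of experiment tracking with MLflow; the toy dataset and the
# logged names are placeholders chosen for illustration.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run():
    n_estimators = 100
    model = RandomForestClassifier(n_estimators=n_estimators, random_state=0)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_param("n_estimators", n_estimators)   # record the hyperparameter
    mlflow.log_metric("accuracy", acc)               # record the evaluation result
    mlflow.sklearn.log_model(model, "model")         # store the trained artifact
```

Each run's parameters, metrics, and artifacts land in the local MLflow tracking store by default and can be browsed with "mlflow ui"; in a CI/CD setup the same logging calls would typically point at a shared tracking server.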

Posted 2 weeks ago

Apply

0 years

3 - 5 Lacs

Bengaluru, Karnataka, India

On-site

Passionbits is a synthetic video ad engine that allows fashion and lifestyle brands to choose creators, scripts, wardrobes, and more to get custom videos made for their socials and ads. Community of 3,000+ creators and studios, powering on-demand video content for enterprise marketers. Helping creators monetise their on/behind-camera skills, instead of their audience, to add passive earnings. Pre-seed funded. Early revenue traction. Founding roles.

Role Description
We are seeking a passionate and talented Deep Learning Engineer to join our dynamic team. This role offers a unique opportunity to work on cutting-edge AI models that drive our video ad creation and optimization platform.

Responsibilities
Model Development: Assist in designing, developing, and improving deep learning models for video generation and dynamic visual content creation.
Research & Innovation: Conduct research on the latest advancements in deep learning and apply relevant techniques to enhance our platform.
Data Processing: Analyze and preprocess large datasets to train and validate models, ensuring high-quality outputs.
Integration: Collaborate with the engineering team to integrate AI models seamlessly into our existing systems.
Optimization: Optimize model performance and scalability to ensure efficient deployment and real-time processing.
Collaboration: Participate in brainstorming sessions to develop new features and enhancements for the platform.
Documentation: Document your work and present findings to the team, ensuring clear communication of technical concepts.

Technical Skills
Strong understanding of deep learning concepts and frameworks (e.g., TensorFlow, PyTorch).
Proficiency in programming languages such as Python.
Familiarity with computer vision techniques and video processing.

Soft Skills
Excellent problem-solving skills and attention to detail.
Ability to work independently and as part of a collaborative team.
Strong communication skills, both written and verbal.

Preferred Experience
Experience with generative models (e.g., GANs, VAEs).
Knowledge of video editing and processing tools.
Previous internship or project experience in deep learning or AI development.

Skills: deep learning, optimization, PyTorch, GANs, models, research, Python, computer vision, video processing, TensorFlow
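As a generic reference point for the deep-learning fundamentals this role expects, the skeleton below runs a basic PyTorch training loop on synthetic tensors. The tiny feed-forward network and random data are placeholders only; they are not meant to resemble the company's actual video-generation models.

```python
# Generic PyTorch training-loop skeleton on synthetic data; the small network
# and random tensors are placeholders, not a real video-generation model.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(256, 16)           # synthetic features
y = torch.randint(0, 2, (256,))    # synthetic binary labels

for epoch in range(5):
    optimizer.zero_grad()          # clear gradients from the previous step
    logits = model(X)              # forward pass
    loss = loss_fn(logits, y)      # compute classification loss
    loss.backward()                # backpropagate
    optimizer.step()               # update parameters
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```

The same loop structure (forward pass, loss, backward pass, optimizer step) carries over to larger generative models such as GANs or VAEs, which mainly add more elaborate architectures and loss terms.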

Posted 2 weeks ago

Apply