Jobs
Interviews

9276 PyTorch Jobs - Page 14

Set up a job alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Roles and Responsibilities:
Work along with the Business Analyst and testing teams, coordinating for smooth delivery aligned with business requirements.
Design, develop, and deploy advanced AI/ML models across NLP, Computer Vision, and Generative AI projects.
Research and implement GenAI applications using Azure OpenAI, MS Copilot Studio, Azure AI Foundry, and Azure Cognitive Services.
Build agentic AI systems using frameworks like MS Autogen, LangChain, and Semantic Kernel for autonomous workflows and task orchestration.
Perform code optimization and identify potential issues and defects.
Adhere to industry-standard best practices for AI/ML development, evaluation and deployment, ensuring maintainable, scalable, and secure solutions.
Analyze the technical and functional feasibility of new use cases that need to be automated.
Work on test procedures and scenarios for the pre-UAT phases.
Implement change requirements supporting a structured change control process.
Monitor and resolve day-to-day issues in the solutions developed and deployed.
Work with Business Analysts and SMEs for feasibility analysis.
Attend meetings on solution development and on problems identified with the solution.
Take up complex challenges in the AI/ML/GenAI domain.
Mentor and train fellow COE members to cross-skill on AI/ML technologies.

Knowledge and Skills Requirements:
Agile mindset and team player.
7+ years of hands-on experience designing, developing, deploying, and maintaining AI/ML and Data Science solutions.
Good to have: experience in designing, developing, deploying and supporting UI/UX/front ends for AI/ML solutions.
Solid experience in Python with libraries such as TensorFlow, Keras, PyTorch, Scikit-learn, Hugging Face Transformers, SpaCy, and OpenCV.
Proven expertise in GenAI and agentic AI using frameworks like MS Autogen, LangChain, Semantic Kernel, and prompt engineering techniques.
Strong experience with RAG pipelines, vector stores (e.g., Azure Cognitive Search), and enterprise knowledge base integration.
Proficient in the Azure AI ecosystem: Azure Cognitive Services (Language, Vision, Speech), Azure Machine Learning, Azure AI Foundry, and Azure DevOps.
Familiar with Git for version control and collaborative development.
Strong knowledge of and experience using LLMs.
Hands-on experience designing, developing, and consuming SOAP/REST APIs.
Experience with Lean/Agile development methodologies.
Capable of delivering on multiple competing priorities with little supervision.
Good verbal and written communication skills.
Willing to work in 24/7 shifts.
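
The listing above asks for RAG pipelines backed by vector stores such as Azure Cognitive Search. As a hedged, minimal illustration of the retrieval step only, the sketch below uses scikit-learn TF-IDF and cosine similarity in place of an enterprise vector store; the documents and query are made up.

```python
# Minimal sketch of the retrieval half of a RAG pipeline.
# TF-IDF + cosine similarity stands in for an enterprise vector store
# such as Azure Cognitive Search (an assumption, not the listed stack).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Reset your VPN password from the self-service portal.",
    "Expense reports must be filed within 30 days of travel.",
    "GPU quotas for ML training are requested via the cloud team.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)

def retrieve(query: str, k: int = 2):
    """Return the k documents most similar to the query."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix)[0]
    ranked = scores.argsort()[::-1][:k]
    return [(documents[i], float(scores[i])) for i in ranked]

# The retrieved passages would then be placed in an LLM prompt
# (the "generation" half of RAG, omitted here).
print(retrieve("How do I request GPU quota?"))
```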

Posted 5 days ago

Apply

7.0 years

0 Lacs

India

Remote

AI Learning Experience Designer As an AI Learning Experience Designer (ID/LED), you will be responsible for designing and contextualizing cutting-edge, instructor-led AI training programs for both technical and non-technical audiences. You will collaborate with subject matter experts (SMEs), instructional designers, and product managers to create engaging, outcome-driven learning experiences that help learners grasp, apply, and innovate with AI technologies. Terms and Details: ● This is a flexible, contractor position on a per-project basis ● We would expect someone to provide at least 15-20 hours per week when on a project ● This role will be 100% remote Requirements: ● Has created content for an instructor-led program that focuses on AI concepts. ● Background in a client-facing role, either supporting or managing, is highly desirable ● Experience collaborating with AI subject matter experts to translate complex concepts into structured, engaging learning experiences. ● 7+ years of instructional design or learning experience design with a focus on AI, data science, or emerging technologies. ● Knowledge of instructional design methodologies, including Bloom’s Taxonomy, ADDIE, and experiential learning models. ● Strong knowledge of AI concepts such as machine learning, large language models (LLMs), neural networks, and ethical AI considerations. ● Thrives working in fast-paced, dynamic environments. ● Familiarity with Google Suite, and LMS content management and authoring tools. ● Willingness to flex your schedule to join meetings in different time zones, particularly in the United States and Europe, Middle East, and Asia. Nice to Have: ● Technical proficiency in AI and coding (Python, TensorFlow, PyTorch, or similar AI frameworks). Including knowledge of GitHub. ● Familiarity with Virtual Machines (VM) ● Hands-on experience developing AI training labs, coding exercises, or real-world AI case studies. Key Responsibilities: Discovery ● Work with leading LED or project managers to define key skills, competencies, learning objectives and pathways within the discipline. ● Support sales teams or product teams in identifying client and learner needs and goals and proposing learning solutions. Design & Development ● Follow our client's design and development standards, best practices, and processes for core and customized curriculum, materials, and assets. ● Design and develop AI-focused learning experiences that cater to learners at varying levels of expertise and technical proficiency. ● Create instructor-led training materials (slides, hands-on labs, case studies, assessments) aligned with best practices in AI education. ● Partner with AI subject matter experts (SMEs) to design practical, interactive exercises, such as coding challenges, AI model-building workshops, and ethical AI decision-making scenarios. ● Curate and contextualize existing AI content to align with emerging industry trends, ensuring materials are always relevant and up to date. ● Design structured learning pathways for AI skills development, from foundational AI literacy to advanced deep learning applications. ● Work with vendors, contractors and freelancers in content, assessment, and support material production. Delivery ● Develop instructor enablement materials to support live classroom training delivery (e.g., facilitator guides, learner workbooks, real-world case studies). 
● Provide guidance to instructors on best practices for AI education, including effective strategies for live coding demonstrations, interactive discussions, and problem-solving activities. ● Work closely with instructors, learning facilitators, and client teams to tailor learning programs for diverse audiences, including enterprise clients, startups, and government organizations. ● Collect, synthesize and analyze data on learning impact for continuous improvement, refinement, and content updating. ● Where applicable, work with translation vendors in the localization of content. Productization ● Work with product management and learning teams to standardize patterns and components for base curricular products. ● Work with product management and learning teams to build out the library of product components, exemplars, and documentation.

Posted 5 days ago

Apply

0 years

0 Lacs

India

Remote

Company: Lead India Location: Remote (India) Employment Type: Internship (Full-Time) Duration: 3 Months Stipend: ₹16,000–₹17,000/month Lead India is seeking a curious and driven Machine Learning Intern to join our data science and AI team. This internship is ideal for individuals who are passionate about building intelligent systems and want hands-on experience in applying machine learning techniques to real-world problems. Key Responsibilities Assist in designing and developing machine learning models and pipelines Preprocess and clean data for training and evaluation Implement and test ML algorithms using Python and relevant libraries (e.g., scikit-learn, TensorFlow, PyTorch) Analyze model performance and suggest improvements Collaborate with data scientists and engineers to deploy models in production Document experiments, findings, and workflows clearly Required Skills Strong understanding of machine learning fundamentals and algorithms Proficiency in Python and libraries such as NumPy, pandas, scikit-learn Familiarity with deep learning frameworks like TensorFlow or PyTorch (preferred) Basic knowledge of statistics and data analysis Experience with Jupyter notebooks and version control (Git) Good problem-solving and communication skills Eligibility Currently pursuing or recently completed a degree in Computer Science, Data Science, Mathematics, or a related field Academic or project experience in machine learning or AI (preferred) Portfolio or GitHub showcasing ML projects (preferred) What We Offer Hands-on experience in machine learning and AI development Mentorship from experienced data scientists and engineers Internship Certificate upon successful completion Potential for full-time employment based on performance
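
The internship above revolves around the preprocess, train, and evaluate loop with scikit-learn. The sketch below is a hedged illustration of that loop on a bundled toy dataset; the dataset, model and split are assumptions, not Lead India's actual workflow.

```python
# Minimal preprocess -> train -> evaluate workflow of the kind the
# internship describes; dataset and model choice are assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Scaling + model in one pipeline keeps preprocessing reproducible.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```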

Posted 5 days ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: Computer Vision Engineer - Manufacturing Defects Analysis
Location: Gurugram
Employment Type: Full-time
Department: Engineering / R&D / AI/ML Team

Key Responsibilities:
Design, implement, and optimize computer vision and deep learning algorithms for automated detection of manufacturing defects (e.g., scratches, dents, holes, surface finishing, cosmetic stains, dimensions, cracks, edge chip-off, black & white spots, misalignments, labels & signs, puncher dots, foreign particles).
Collect, annotate, analyze, and preprocess structured and unstructured image/video datasets, including images, sensor streams, and production logs from manufacturing lines.
Develop, validate, and deploy large-scale machine learning and statistical models to detect, classify, and predict manufacturing defects.
Collaborate with manufacturing, QA, and IT teams to integrate vision systems into production lines for comprehensive defect analysis.
Conduct exploratory data analysis (EDA) to identify patterns, root causes, and risk factors of quality issues.
Develop proof-of-concept demos and production-grade models and pipelines for real-time defect detection.
Fine-tune models for accuracy, speed, and robustness in variable manufacturing environments.
Present analytical findings to stakeholders and recommend actionable process improvements.
Monitor and maintain deployed models, retraining and refining as new data becomes available.
Document models, design choices, workflows, best practices and insights in a reproducible and scalable manner.

Qualifications:
Bachelor's or Master's degree in Computer Science, Electrical Engineering, Robotics, or a related field.
2+ years of hands-on experience in computer vision, image processing, multi-modal sensor fusion and deep learning, preferably in a manufacturing or industrial context.
Proficiency in Python, with strong knowledge of libraries such as OpenCV, PyTorch, TensorFlow, or Keras.
Experience building and evaluating machine learning models (classification, anomaly detection, time series, etc.).
Experience designing and training CNNs and other architectures for visual inspection tasks.
Familiarity with deployment on edge devices (e.g., NVIDIA Jetson, Intel Movidius) is a plus.
Good understanding of manufacturing processes and common defects (preferred).
Experience with data labelling, MLOps practices, database systems, data pipeline tools, and cloud platforms for data science workflows.
Strong problem-solving, team collaboration, and communication skills, with an ability to translate complex insights into actionable outcomes.

Application Process: Please submit your resume, cover letter, and any relevant portfolio or GitHub links to hr@weare86.com
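
The defect-detection work above would typically start from a small convolutional classifier. Below is a hedged PyTorch sketch; the input size, layer widths and the binary defect/no-defect framing are illustrative assumptions, not the employer's actual model.

```python
# Hedged sketch of a small CNN for binary defect classification
# (defect vs. no-defect); sizes and widths are assumptions.
import torch
import torch.nn as nn

class DefectNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Assumes 224x224 RGB input: 224 -> 112 -> 56 after two pools.
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = DefectNet()
dummy = torch.randn(4, 3, 224, 224)   # batch of 4 fake RGB images
print(model(dummy).shape)             # -> torch.Size([4, 2])
```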

Posted 5 days ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description
We're looking for a proactive and hands-on MLOps Engineer to join our R&D team and drive internal innovation. In this role, you will work closely with software engineers to research and operationalize cutting-edge models and build robust infrastructure that accelerates experimentation and enables scalable deployment of ML solutions. Your primary mission will be to build, operationalize, and maintain machine learning models that help our teams extract insights from data faster and develop proof-of-concepts more efficiently across departments. This is a unique opportunity to be at the center of internal experimentation: developing reusable ML components, automating analysis workflows, and turning raw data into powerful internal tools that accelerate decision-making.

Collaborate with analysts, product managers, and engineers to understand internal data needs and POC requirements.
Design, build, and maintain scalable MLOps infrastructure (CI/CD, model versioning, orchestration, monitoring).
Create reusable pipelines and tools for faster model development and experimentation.
Streamline and automate the process of generating visualizations, dashboards, and reports using ML-based techniques.
Set up infrastructure and practices for model tracking, versioning, and reproducibility (e.g., MLflow, DVC).
Ensure reproducibility and traceability of experiments and models across environments.
Monitor and troubleshoot model performance in staging and production environments.
Manage and optimize GPU/compute environments, data storage solutions, and ML model serving layers.
Implement testing, validation, and rollback mechanisms for ML models.
Stay current with MLOps trends and best practices; introduce tools and practices that improve team productivity.
Support PoC projects with rapid prototyping and deployment of smart data solutions.

Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
3+ years of experience in DevOps, MLOps, or infrastructure engineering in a production setting.
Proficiency with cloud platforms (AWS, GCP, or Azure) and containerization (Docker, Kubernetes).
Strong experience with Python and ML frameworks (e.g., TensorFlow, PyTorch, Scikit-learn).
Familiarity with ML lifecycle tools such as MLflow, DVC, Weights & Biases, etc.
Solid understanding of data pipelines, version control, and CI/CD systems (e.g., GitHub Actions, Jenkins).

Nice-to-Have
Experience working in an R&D or experimental environment.
Exposure to distributed training, model compression, or edge deployment.
Experience building internal tools, dashboards, or data products used by non-technical teams.
Ability to communicate ML concepts clearly to non-technical stakeholders.
Comfortable in fast-paced, iterative environments with shifting priorities.

Additional Information
Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.
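
MLflow is one of the tracking tools named above. Here is a hedged sketch of logging an experiment run with it; the experiment name, parameters and metric are made up.

```python
# Hedged sketch of experiment tracking with MLflow.
import mlflow

mlflow.set_experiment("internal-poc-demo")   # assumed experiment name

with mlflow.start_run():
    params = {"model": "gradient_boosting", "max_depth": 4}
    mlflow.log_params(params)

    # ... train the model here ...
    validation_auc = 0.91                    # placeholder result
    mlflow.log_metric("val_auc", validation_auc)

    # Plots or model files could be logged with mlflow.log_artifact(...)
    # once they exist on disk.
```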

Posted 5 days ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Who We Are
At Dolat Capital, we're a team of traders, tech enthusiasts, and problem solvers driven by a passion for markets and innovation. We specialize in ultra-low latency market-making and active trading across Equities, Futures, and Options. With top-tier fill rates and cutting-edge technology, we thrive where finance meets engineering.

Responsibilities
Quant Trading & Strategy: Design and optimize high-performance strategies for derivatives, options, and ETFs. Monitor PnL and Sharpe ratios (6+), while actively identifying market inefficiencies in real time.
Trading Systems Development: Build low-latency, high-throughput systems and simulation tools. Focus on fast execution, real-time risk, and performance tracking.
Algo & Quant Analysis: Use statistical methods and large-scale HFT data (Python/C++) to validate and refine strategies in live environments.
AI/ML Integration: Develop AI/ML models for signal detection, alpha generation, and market prediction using large datasets.
System & Network Optimization: Enhance system performance with deep expertise in concurrent programming, TCP/UDP, and network protocols.
Collaboration & Mentorship: Work closely with traders, engineers, and data scientists. Represent Dolat in tech mentorship and campus events.

Qualifications
Education: B.Tech in Computer Science/IT and a Master's degree in Computer Science or Computational Finance from a U.S. university (mandatory).
Strong grasp of data structures, algorithms, and object-oriented programming (C++).
Proficient in Python and C++, with hands-on experience in Linux systems programming.
Experience with AI/ML frameworks such as TensorFlow, PyTorch, or Scikit-learn.
Familiarity with distributed systems and high-concurrency architectures.
Solid understanding of network programming, including TCP/UDP protocols.
Excellent analytical and problem-solving abilities, with a passion for technology-driven innovation in financial markets.
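
Since the role monitors PnL and Sharpe ratios, here is a minimal, hedged sketch of an annualised Sharpe computation from daily returns; the synthetic return series and the risk-free rate are assumptions.

```python
# Minimal sketch of the annualised Sharpe ratio the listing refers to,
# computed from a synthetic daily-return series.
import numpy as np

rng = np.random.default_rng(0)
daily_returns = rng.normal(loc=0.0008, scale=0.004, size=252)  # fake P&L series
risk_free_daily = 0.05 / 252                                   # assumed risk-free rate

excess = daily_returns - risk_free_daily
sharpe = np.sqrt(252) * excess.mean() / excess.std(ddof=1)
print(f"annualised Sharpe: {sharpe:.2f}")
```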

Posted 5 days ago

Apply

1.0 years

3 - 4 Lacs

Velachery, Chennai, Tamil Nadu

On-site

Job description
Job Role: ML Engineer
Experience: 1+ years of experience
Location: Chennai, Tamil Nadu

About the Company: Kaizen Secure Voiz Private Limited (KSV) is a reputed, six-year-young company specializing in unified voice interactions management using the latest technologies, with a global presence in the supply, implementation, testing and commissioning of voice biometrics solutions. KSV has good experience in the design, development and implementation of voice biometrics-based technologies and has enabled multi-channel applications such as mobile applications, voice bots, interactive voice response and call center-based services for target groups. KSV proposes options to customers in keeping with world trends in the latest technologies, has deployed similar solutions for various companies, and understands the domain very well. The proposed voice biometrics-based interactions management platform is suitable for further scaling and improvement of customer experience using voice analytics solutions later. The introduction of such automation will not only save the company significant money but also exponentially improve the customer experience.

Responsibilities:
Develop and implement voice biometric algorithms and models.
Evaluate and analyze the performance of voice biometric systems.
Implement advanced security measures to protect biometric data and ensure compliance with privacy regulations.
Optimize the performance of biometric systems to ensure fast and accurate identification and authentication.
Fine-tune algorithms and parameters to improve system efficiency and reliability.
Collaborate with cross-functional teams to integrate voice biometric solutions into various applications and systems.
Stay updated with the latest advancements in voice biometric technology and contribute to research activities.
Impeccable analytical and problem-solving skills.
Extensive math and computer skills, with a deep understanding of probability, statistics, and algorithms.
In-depth knowledge of machine learning frameworks, like Keras, PyTorch, etc.
Familiarity with data structures, data modelling, and software architecture.
Excellent time management and organizational skills.

Requirements:
Bachelor's degree in computer science, data science, mathematics, or a related field; a Master's degree in computational linguistics, data analytics, or similar will be advantageous.
At least two years' experience as a machine learning engineer.
Advanced proficiency in writing Python, Java, and C code.
In-depth knowledge of deep learning and machine learning algorithms.
Strong knowledge of Kaldi, SpeechBrain, WeSpeaker, 3D-Speaker, UniSpeech, NeMo, etc.
Proficiency in speaker verification, speaker identification, speaker diarization, and anti-spoofing.
Strong programming and algorithm development skills.
Ability to work independently and as part of a team.
Basic knowledge of signal processing.

Job Type: Full-time, Permanent
Pay: ₹370,000.00 - ₹450,000.00 per year
Schedule: Day shift
Work Location: In person
Application Deadline: 18/08/2025
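
As a hedged illustration of the speaker-verification scoring this role involves, the sketch below compares an enrolled speaker embedding with a test embedding by cosine similarity; real embeddings would come from toolkits such as SpeechBrain or Kaldi, and the embedding dimension and threshold here are assumptions.

```python
# Hedged sketch of the scoring step in speaker verification.
# The embeddings are random stand-ins for real x-vector-style vectors.
import numpy as np

def cosine_score(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
enrolled = rng.normal(size=192)                     # assumed 192-dim embedding
test_same = enrolled + rng.normal(scale=0.1, size=192)
test_other = rng.normal(size=192)

THRESHOLD = 0.7                                     # assumed decision threshold
for name, emb in [("same speaker", test_same), ("different speaker", test_other)]:
    score = cosine_score(enrolled, emb)
    print(name, round(score, 3), "accept" if score >= THRESHOLD else "reject")
```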

Posted 5 days ago

Apply

8.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Role: VP of Engineering
Experience: 8-10 years
Location: Bangalore
Notice Period: up to 30 days

What You'll Own:
● Hands-On Technical Leadership & Core Tech Stack Development
○ Architect and code the first scalable version of our booking portal, routing engine, mobile-first CRM, and operational dashboard.
○ Contribute directly to the codebase, setting the standard for engineering excellence and coding culture.
○ Build systems that handle India-scale logistics, real-time demo scheduling, and payment flows, with high reliability and low latency.
○ Lead backend architecture and microservices strategy using tools like Go, Node.js, Kafka, Postgres, Redis, Kubernetes, and Terraform.
○ Coordinate API strategy across frontend (React), mobile (React Native), and edge interfaces, using GraphQL and gRPC contracts.
● Full-Stack & Platform Ownership
○ Collaborate with frontend engineers on React-based interfaces; enforce design system and performance best practices.
○ Work closely with mobile engineers on React Native, helping optimize cold-start time, offline sync, and background processing.
○ Oversee API versioning, mobile/web contract integrity, and cross-platform interface stability.
○ Enable observability, tracing, and proactive alerting across the platform (Grafana, Prometheus, Sentry, etc.).
● Systems Thinking, Automation & DevOps
○ Design and implement scalable, resilient, and modular backend architectures (evolving from monolith to microservices).
○ Integrate and automate CRM, logistics, inventory, payments, and customer apps into a cohesive real-time ERP-lite system.
○ Champion CI/CD pipelines, zero-downtime deploys, infrastructure as code (Terraform), and rollback safety protocols.
○ Set and uphold engineering SLAs and SLOs (e.g., 99.9% uptime, sub-1s booking latency).
● AI-Enabled Systems & Innovation
○ Drive the integration of AI/ML into operational workflows: predictive routing, lead scoring, demand forecasting, personalized journeys.
○ Collaborate with data and product teams to deploy models using frameworks like TensorFlow, PyTorch, or OpenAI APIs.
○ Ensure infrastructure supports scalable ML workflows and retraining cycles.
● Security, Compliance & Performance
○ Implement secure coding practices and enforce API security (OAuth2, RBAC, audit logging).
○ Lead efforts around payment data protection, customer data privacy, and infra-level security (SOC 2 readiness).
○ Champion system performance tuning, cost optimization, and scalability testing (load testing, caching, indexing).
● Leadership & Cross-Functional Collaboration
○ Hire, mentor, and grow engineers across specializations: backend, frontend, mobile, data, and DevOps.
○ Foster a culture of autonomy, excellence, ownership, and rapid iteration.
○ Collaborate with Product, Design, Ops, and CX to shape the roadmap, triage bugs, and ship high-impact features.

Qualifications:
● Technical Depth: Proven track record of designing, building, and scaling complex software systems from scratch. Strong proficiency in at least one modern backend language (e.g., Go, Python, Node.js, Java) and experience with relevant frameworks and databases.
● Architectural Acumen: Demonstrated ability to architect scalable, fault-tolerant, and secure systems. Experience with distributed systems, microservices, message queues (Kafka, RabbitMQ), and cloud-native architectures (Kubernetes, Docker).
● Hands-on Experience: A genuine passion for coding and a willingness to be hands-on with technical challenges, debugging, and code reviews.
● AI/ML Exposure: Experience with integrating AI/ML models into production systems, understanding of data pipelines for AI, and familiarity with relevant tools/frameworks (e.g., TensorFlow, PyTorch, scikit-learn) is highly desirable.
● Leadership & Mentorship: Experience leading and mentoring engineering teams, fostering a collaborative and high-performance environment. Ability to attract, hire, and retain top engineering talent.
● Problem-Solving: Exceptional analytical and problem-solving skills, with a pragmatic approach to delivering solutions in a fast-paced, ambiguous environment.
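
For one piece of the stack named above, here is a minimal, hedged sketch of publishing a booking event to Kafka using the kafka-python client; the listing's services are built in Go/Node.js, so Python, the broker address and the topic name here are all assumptions.

```python
# Hedged illustration of emitting a booking event to Kafka.
import json
from kafka import KafkaProducer  # kafka-python package

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",            # assumed broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

event = {"booking_id": "B-1001", "status": "confirmed", "city": "Bengaluru"}
producer.send("booking-events", event)             # assumed topic name
producer.flush()
```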

Posted 5 days ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Greetings!

One of our clients, a top MNC, is looking for GenAI and Machine Learning Engineers.

Important Notes: Please share only those profiles who can join immediately or within 7 days.
Base Locations: Gurgaon and Bengaluru (hybrid setup, 3 days work from office).
Role: Associate and Sr Associate L1/L2 (Multiple Positions)

SKILLS:
Bachelor's or master's degree in Computer Science, Data Science, Engineering, or a related field.
Experience with agentic AI/frameworks.
Strong programming skills in languages such as Python, SQL/NoSQL, etc.
Build an analytical approach based on business requirements, then develop, train, and deploy machine learning models and AI algorithms.
Exposure to GenAI models such as OpenAI, Google Gemini, Runway ML, etc.
Experience in developing and deploying AI/ML and deep learning solutions with libraries and frameworks such as TensorFlow, PyTorch, Scikit-learn, OpenCV and/or Keras.
Knowledge of math, probability, and statistics.
Familiarity with a variety of machine learning, NLP, and deep learning algorithms.
Exposure to developing APIs using Flask/Django.
Good experience in cloud infrastructure such as AWS, Azure or GCP.
Exposure to GenAI, vector DBs/embeddings, and LLMs (large language models).

GOOD TO HAVE:
Experience with MLOps: MLflow, Kubeflow, CI/CD pipelines, etc.
Experience in Docker, Kubernetes, etc.
Exposure to HTML, CSS, JavaScript/jQuery, Node.js, Angular/React.
Experience in Flask/Django is a bonus.

RESPONSIBILITIES:
Collaborate with software engineers, business stakeholders and/or domain experts to translate business requirements into product features, tools, projects, and AI/ML, NLP/NLU and deep learning solutions.
Develop, implement, and deploy AI/ML solutions.
Preprocess and analyze large datasets to identify patterns, trends, and insights.
Evaluate, validate, and optimize AI/ML models to ensure their accuracy, efficiency, and generalizability.
Deploy applications and AI/ML models into cloud environments such as AWS/Azure/GCP.
Monitor and maintain the performance of AI/ML models in production environments, identifying opportunities for improvement and updating models as needed.
Document AI/ML model development processes, results, and lessons learned to facilitate knowledge sharing and continuous improvement.

INTERESTED CANDIDATES WHO CLOSELY MATCH THE JD AND CAN JOIN ASAP, PLEASE APPLY ALONG WITH THE DETAILS BELOW:
Total exp:
Relevant exp in AI/ML:
Applying for Gurgaon or Bengaluru:
Open for hybrid:
Current CTC:
Expected CTC:
Can join ASAP:

We will call you once we receive your updated profile along with the above details.

Thanks,
Venkat Solti
solti.v@anlage.co.in
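
The skills list above mentions exposing models through Flask/Django APIs. Below is a hedged Flask sketch with a stub in place of a real model; the route and payload shape are assumptions.

```python
# Hedged sketch of exposing a trained model through a small Flask API.
from flask import Flask, jsonify, request

app = Flask(__name__)

def stub_model_predict(features):
    # Stand-in for model.predict(); a real service would load a trained
    # model (e.g. with joblib) at startup.
    return int(sum(features) > 1.0)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json() or {}
    features = payload.get("features", [])
    return jsonify({"prediction": stub_model_predict(features)})

if __name__ == "__main__":
    app.run(port=5000, debug=True)
```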

Posted 5 days ago

Apply

8.0 - 10.0 years

0 Lacs

Maharashtra, India

On-site

GEN AI Lead Description We are seeking a seasoned GEN AI Lead to spearhead our generative AI initiatives in India. The ideal candidate will have extensive experience in AI and a passion for driving innovation through advanced technologies. Responsibilities Lead the design and implementation of generative AI projects within the organization. Collaborate with cross-functional teams to identify opportunities for AI integration and enhancement. Develop and maintain AI models, algorithms, and frameworks to drive innovation. Ensure the scalability and reliability of AI solutions deployed in production environments. Stay updated with the latest advancements in generative AI technologies and methodologies. Provide mentorship and guidance to junior team members in AI best practices. Skills and Qualifications 8-10 years of experience in AI/ML with a focus on generative models. Strong proficiency in programming languages such as Python, R, or Java. Experience with deep learning frameworks such as TensorFlow, PyTorch, or Keras. Solid understanding of natural language processing (NLP) and computer vision techniques. Proven track record of deploying AI solutions in a production environment. Excellent problem-solving and analytical skills. Strong communication skills to convey complex AI concepts to non-technical stakeholders.
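
As a small, hedged illustration of working with a generative model of the kind this role covers, here is a text-generation call via the Hugging Face pipeline API; the model choice and prompt are assumptions, not the client's stack.

```python
# Hedged sketch of calling a small open generative model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small demo model
out = generator("Generative AI can help enterprises by", max_new_tokens=30)
print(out[0]["generated_text"])
```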

Posted 5 days ago

Apply

10.0 - 15.0 years

0 Lacs

Pune, Maharashtra, India

On-site

AI Solutions Architect with around 10-15 years of total experience and at least 5-8 years of relevant data science, analytics and AI/ML experience.
• Develop strategies/solutions to solve problems in logical yet creative ways, leveraging state-of-the-art machine learning, deep learning and GenAI techniques.
• Technically lead a team of data scientists to produce project deliverables on time and with high quality.
• Identify and address client needs in different domains by analyzing large and complex data sets, processing, cleansing, and verifying the integrity of data, and performing exploratory data analysis (EDA) using state-of-the-art methods.
• Select features, build and optimize classifiers/regressors, etc. using machine learning and deep learning techniques.
• Enhance data collection procedures to include information that is relevant for building analytical systems, and ensure data quality and accuracy.
• Perform ad-hoc analysis and present results in a clear manner to both technical and non-technical stakeholders.
• Create custom reports and presentations with strong data visualization and storytelling skills to effectively communicate analytical conclusions to senior company officials and other stakeholders.
• Expertise in data mining, EDA, feature selection, model building, and optimization using machine learning and deep learning techniques.
• Strong programming skills in Python.
• Excellent communication and interpersonal skills, with the ability to present complex analytical concepts to both technical and non-technical stakeholders.

Primary Skills:
- Excellent understanding of and hands-on experience with data science and machine learning techniques and algorithms for supervised and unsupervised problems, NLP, computer vision and GenAI. Good applied statistics skills, such as distributions, statistical inference and testing, etc.
- Excellent understanding of and hands-on experience building deep learning models for text and image analytics (such as ANNs, CNNs, LSTMs, transfer learning, encoder-decoder architectures, etc.).
- Proficient in coding in common data science languages and tools such as R and Python.
- Experience with common data science toolkits, such as NumPy, Pandas, Matplotlib, StatsModels, Scikit-learn, SciPy, NLTK, spaCy, OpenCV, etc.
- Experience with common data science frameworks such as TensorFlow, Keras, PyTorch, XGBoost, etc.
- Exposure to or knowledge of cloud (Azure/AWS).
- Experience with deployment of models in production.

Standard Skills: In-depth understanding of manufacturing workflows, production planning, and quality control. Familiarity with ISA-95 and ISA-88 standards for manufacturing systems. Experience working with shop floor automation and IoT devices.

Good-to-have skills: MES certifications, AI/ML, regulatory experience, and emerging technologies like IoT or edge computing.
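
As a hedged sketch of the "select features, build and optimize classifiers" workflow described above, here is a scikit-learn example on a bundled toy dataset; the dataset, the number of selected features and the parameter grid are illustrative assumptions.

```python
# Hedged sketch: feature selection + classifier tuning with scikit-learn.
from sklearn.datasets import load_wine
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    ("select", SelectKBest(score_func=f_classif, k=8)),   # keep 8 of 13 features
    ("clf", RandomForestClassifier(random_state=0)),
])
grid = GridSearchCV(pipe, {"clf__n_estimators": [100, 300]}, cv=3)
grid.fit(X_train, y_train)
print("best params:", grid.best_params_, "test score:", grid.score(X_test, y_test))
```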

Posted 5 days ago

Apply

3.0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote

We are seeking a talented Data Science Engineer with strong expertise in satellite data analytics, thermal imaging, and coding to drive innovation in the geospatial domain. This role involves analyzing thermal and optical imagery from both satellites and drones to uncover impactful applications across the agriculture, environment, and urban sectors.

Key Responsibilities:
Analyze and process thermal and optical imagery from satellite and drone platforms.
Develop data-driven geospatial solutions for applications including:
- Crop health monitoring, water stress detection, and irrigation planning
- Yield prediction using fused thermal and optical data
- Soil moisture estimation and early detection of drought conditions
- Forest fire detection, hotspot mapping, and risk assessment
- Greenhouse gas (GHG) detection and other emission monitoring
- Urban heat island mapping and infrastructure-level thermal assessments
Design and implement machine learning and deep learning models to extract actionable insights from geospatial datasets.
Work closely with product and engineering teams to build scalable, real-time solutions that can be deployed in operational environments.

Required Qualifications:
Master's degree in Data Science, Remote Sensing, Geospatial Data Science, or a related discipline.
Minimum 3 years of professional or research experience.
Strong coding skills in Python, with proficiency in libraries such as NumPy, Pandas, Scikit-learn, TensorFlow/PyTorch, and GDAL.
Solid foundation in remote sensing techniques, image preprocessing, and satellite data analytics.
Experience applying ML/DL/computer vision techniques to satellite or drone imagery for real-world geospatial applications.
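
As a hedged sketch of the kind of band-ratio analysis used for crop-health and water-stress work (NDVI from red and near-infrared reflectance), the arrays below are synthetic stand-ins for satellite or drone bands, and the stress threshold is an assumption.

```python
# Hedged sketch of computing NDVI from two reflectance bands.
import numpy as np

rng = np.random.default_rng(7)
red = rng.uniform(0.05, 0.3, size=(100, 100))   # fake red-band reflectance
nir = rng.uniform(0.2, 0.6, size=(100, 100))    # fake near-infrared band

ndvi = (nir - red) / (nir + red + 1e-9)          # epsilon avoids divide-by-zero
print("mean NDVI:", float(ndvi.mean()))

# Thresholding NDVI (e.g. < 0.2) is one simple way to flag possibly stressed pixels.
stressed_fraction = float((ndvi < 0.2).mean())
print("fraction of potentially stressed pixels:", stressed_fraction)
```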

Posted 5 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are seeking a talented MERN Stack Intern to join our dynamic team of innovators and help drive the development of our AI products from conception to deployment. As a MERN Stack Intern, you will collaborate closely with our multidisciplinary team of AI engineers and domain experts to build robust and scalable software solutions that integrate AI technologies into real-world applications. The ideal candidate is passionate about leveraging technology to tackle complex challenges and has a strong background in both frontend and backend development.

Title: MERN Stack Intern
Internship Duration: 6 Months - Followed by PPO
Location: Hyderabad (Onsite)
No. of Openings: 3
Work Timings: 2:00 PM to 11:00 PM IST
Strictly 2024/2025 pass-outs only

Roles & Responsibilities:
Collaborate with cross-functional teams to understand project requirements and translate them into technical specifications.
Develop front-end components using React.js and integrate them with back-end services built with Node.js and Express.js.
Design and implement RESTful APIs to facilitate communication between the front-end and back-end systems.
Utilize MongoDB or other databases to store and retrieve data efficiently.
Implement AI and machine learning algorithms to enhance product functionality and user experience.
Write clean, maintainable, and well-documented code following best practices and coding standards.
Perform code reviews, debugging, and troubleshooting to ensure the quality and reliability of the software.

Required Skills:
Bachelor's or Master's degree in Computer Science, Engineering or a related field.
Proficiency in JavaScript, HTML, CSS, and related web technologies.
Strong understanding of the MERN (MongoDB, Express.js, React.js, Node.js) stack.
Familiarity with AI and machine learning concepts and frameworks (e.g., TensorFlow, PyTorch, scikit-learn) will be a plus.
Excellent problem-solving skills and attention to detail.
Ability to work effectively in a collaborative team environment.
Strong communication skills and a passion for learning new technologies.

About Techolution:
Techolution is a consulting firm on track to become one of the most admired brands in the world for "innovation done right". Our purpose is to harness our expertise in novel technologies to deliver more profits for our enterprise clients while helping them deliver a better human experience for the communities they serve. With that, we are now fully committed to helping our clients build the enterprise of tomorrow by making the leap from Lab Grade AI to Real World AI. We are honored to have recently received the prestigious Inc 500 Best In Business award, a testament to our commitment to excellence. While we are big enough to be trusted by some of the greatest brands in the world, we are small enough to care about delivering meaningful ROI-generating innovation at a guaranteed price for each client that we serve.

Let's give you more insights! One of our amazing products with Artificial Intelligence and Internet of Things:
1. https://faceopen.com/ : Our proprietary and powerful AI-powered user identification system, built on artificial intelligence technologies such as image recognition, deep neural networks, and robotic process automation. (No more touching keys, badges or fingerprint scanners ever again!)

Some videos you may want to watch:
Life at Techolution
GoogleNext 2023
Ai4 - Artificial Intelligence Conferences 2023
WaWa - Solving Food Wastage
Saving lives - Brooklyn Hospital
Innovation Done Right on Google Cloud
Techolution featured on Worldwide Business with Kathy Ireland
Techolution presented by ION World's Greatest

Visit us @ www.techolution.com to learn more about our revolutionary core practices and how we enrich the human experience with technology.

Posted 5 days ago

Apply

6.0 years

0 Lacs

India

Remote

Job Description
As a Senior Databricks Data Engineer, your responsibilities include:
Technical requirements gathering and development of functional specifications.
Design, develop, and maintain scalable data pipelines and ETL processes using Azure Databricks, Data Factory, and other Azure services.
Implement and optimize Spark jobs, data transformations, and data processing workflows in Databricks.
Develop and integrate custom machine learning models using Azure Machine Learning, MLflow, and other relevant libraries.
Leverage Azure DevOps and CI/CD best practices to automate the deployment and management of data pipelines and infrastructure.
Conduct troubleshooting on data models.
Work with Agile multicultural teams in Asia, the EU, Canada, and the USA.

Profile Requirements
For this position of Azure Databricks Data Engineer, we are looking for someone with:
(Required) At least 6 years of experience in developing and maintaining data pipelines using Azure Databricks, Azure Data Factory, and Spark.
(Required) Hands-on experience with Unity Catalog.
(Required) Fluent English communication and soft skills.
(Required) Knowledge of and experience in CI/CD, such as Terraform, ARM, and Bicep scripts.
(Required) Solid technical skills in Python and SQL.
(Required) Familiarity with machine learning concepts, tools, and libraries (e.g., TensorFlow, PyTorch, Scikit-learn, MLflow).
(Required) Strong problem-solving, communication, and analytical skills.
Willingness to learn and expand technical skills in other areas.

Adastra Culture Manifesto

Servant Leadership
Managers are servants to employees. Managers are elected to make sure that employees have all the processes, resources, and information they need to provide services to clients in an efficient manner. Any manager up to the CEO is visible and reachable for a chat regardless of their title. Decisions are taken with consent in an agile manner and executed efficiently in no overdue time. We accept that wrong decisions happen, and we appreciate the learning before we adjust the process for continuous improvement. Employees serve clients. Employees listen attentively to client needs and collaborate internally as a team to cater to them. Managers and employees work together to get things done and are accountable to each other. Corporate KPIs are transparently reviewed at monthly company events with all employees.

Performance-Driven Compensation
We recognize and accept that some of us are more ambitious, more gifted, or more hard-working. We also recognize that some of us look for a stable income and less hassle at a different stage of their careers. There is a place for everyone; we embrace and need this diversity. Grades in our company are not based on the number of years of experience; they are value-driven, based on everyone's ability to deliver their work to clients independently and/or lead others. There is no "anniversary/annual" bonus; we distribute bonuses on a monthly recurring basis as instant gratification for performance, and this bonus is practically unlimited. There is no "annual indexation" of salaries; you may be upgraded several times within the year, or not at all, based on your own pace of progress, ambitions, relevant skillset and recognition by clients.

Work-Life Integration
We challenge the notion of work-life balance; we embrace the notion of work-life integration instead. This philosophy looks at our lives as a single whole where we serve ourselves, our families and our clients in an integrated manner.
We encourage 100% flexible working hours where you arrange your own day. This means you are free when you have little work, but it also means extra effort if you are behind schedule. Working on a Western project also means nobody bothers you during the day, but you may have to jump on a scrum call in the evening to talk to your team overseas. We value time and minimize time spent on Adastra meetings. We are also a remote-first company. While we have our collaboration offices and social events, we encourage people to work 100% remote from home whenever possible. This means saving time and money on the commute, staying home with elderly relatives and little ones, and not missing the special moments in life. It also means you can work from any of our other offices in Europe, North America or Australia, or move to a place with a lower cost of living without impacting your income. We trust you by default until you fail our trust.

Global Diversity
Adastra Thailand is an international organization. We hire globally and our biggest partners and clients are in Europe, North America and Australia. We work on teams with individuals from different cultures, ethnicities, sexual preferences, political views or religions. We have zero tolerance for anyone who doesn't pay respect to others or is abusive in any way. We speak different languages to one another, but we speak English when we are together or with clients. Our company is a safe space where communication is encouraged but boundaries regarding sensitive topics are respected. We accept and converge together to serve our teams and clients and ultimately have a good time at work.

Lifelong Learning
On annual average we invest 25% of our working hours in personal development and upskilling outside project work, regardless of seniority or role. We feature more than 400 courses in our Training Repo and we continue to actively purchase or tailor hands-on content. We certify people at our expense. We like to say we are technology agnostic; we learn the principles of data management and apply them to different use cases and different technology stacks. We believe that the juniors today are the seniors tomorrow; we treat everyone with respect and mentor them into the roles they deserve. We encourage seniors to give back to the IT community through leadership and mentorship. On your last day with us we may give you an open-dated job offer so that you feel welcome to return home as others did before you.

More About Adastra: Visit Adastra (adastracorp.com) and/or contact us at HRIN@adastragrp.com
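
As a hedged illustration of the Spark transformations this Databricks role centres on, here is a minimal PySpark aggregation; on Databricks the SparkSession already exists, and the column names and sample rows are assumptions.

```python
# Hedged sketch of a small PySpark transformation.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("demo").getOrCreate()

orders = spark.createDataFrame(
    [("2024-01-01", "IN", 120.0), ("2024-01-01", "DE", 80.0), ("2024-01-02", "IN", 60.0)],
    ["order_date", "country", "amount"],
)

# Parse the date and aggregate revenue per day and country.
daily = (
    orders.withColumn("order_date", F.to_date("order_date"))
          .groupBy("order_date", "country")
          .agg(F.sum("amount").alias("revenue"))
)
daily.show()
```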

Posted 5 days ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: DevOps Engineer
Location: Gurugram (On-Site)
Employment Type: Full-Time
Experience: 6+ years
Qualification: B.Tech CSE

About the Role
We are seeking a highly skilled DevOps/MLOps Expert to join our rapidly growing AI-based startup building and deploying cutting-edge enterprise AI/ML solutions. This is a critical role that will shape our infrastructure and deployment pipelines and scale our ML operations to serve large-scale enterprise clients. As our DevOps/MLOps Expert, you will be responsible for bridging the gap between our AI/ML development teams and production systems, ensuring seamless deployment, monitoring, and scaling of our ML-powered enterprise applications. You'll work at the intersection of DevOps, Machine Learning, and Data Engineering in a fast-paced startup environment with enterprise-grade requirements.

Key Responsibilities

MLOps & Model Deployment
• Design, implement, and maintain end-to-end ML pipelines from model development to production deployment
• Build automated CI/CD pipelines specifically for ML models using tools like MLflow, Kubeflow, and custom solutions
• Implement model versioning, experiment tracking, and model registry systems
• Monitor model performance, detect drift, and implement automated retraining pipelines
• Manage feature stores and data pipelines for real-time and batch inference
• Build scalable ML infrastructure for high-volume data processing and analytics

Enterprise Cloud Infrastructure & DevOps
• Architect and manage cloud-native infrastructure with a focus on scalability, security, and compliance
• Implement Infrastructure as Code (IaC) using Terraform, CloudFormation, or Pulumi
• Design and maintain Kubernetes clusters for containerized ML workloads
• Build and optimize Docker containers for ML applications and microservices
• Implement comprehensive monitoring, logging, and alerting systems
• Manage secrets, security, and enterprise compliance requirements

Data Engineering & Real-time Processing
• Build and maintain large-scale data pipelines using Apache Airflow, Prefect, or similar tools
• Implement real-time data processing and streaming architectures
• Design data storage solutions for structured and unstructured data at scale
• Implement data validation, quality checks, and lineage tracking
• Manage data security, privacy, and enterprise compliance requirements
• Optimize data processing for performance and cost efficiency

Enterprise Platform Operations
• Ensure high availability (99.9%+) and performance of enterprise-grade platforms
• Implement auto-scaling solutions for variable ML workloads
• Manage multi-tenant architecture and data isolation
• Optimize resource utilization and cost management across environments
• Implement disaster recovery and backup strategies
• Build 24x7 monitoring and alerting systems for mission-critical applications

Required Qualifications

Experience & Education
• 4-8 years of experience in DevOps/MLOps with at least 2+ years focused on enterprise ML systems
• Bachelor's/Master's degree in Computer Science, Engineering, or a related technical field
• Proven experience with enterprise-grade platforms or large-scale SaaS applications
• Experience with high-compliance environments and enterprise security requirements
• Strong background in data-intensive applications and real-time processing systems

Technical Skills

Core MLOps Technologies
• ML Frameworks: TensorFlow, PyTorch, Scikit-learn, Keras, XGBoost
• MLOps Tools: MLflow, Kubeflow, Metaflow, DVC, Weights & Biases
• Model Serving: TensorFlow Serving, PyTorch TorchServe, Seldon Core, KFServing
• Experiment Tracking: MLflow, Neptune.ai, Weights & Biases, Comet

DevOps & Cloud Technologies
• Cloud Platforms: AWS, Azure, or GCP with relevant certifications
• Containerization: Docker, Kubernetes (CKA/CKAD preferred)
• CI/CD: Jenkins, GitLab CI, GitHub Actions, CircleCI
• IaC: Terraform, CloudFormation, Pulumi, Ansible
• Monitoring: Prometheus, Grafana, ELK Stack, Datadog, New Relic

Programming & Scripting
• Python (advanced) - primary language for ML operations and automation
• Bash/shell scripting for automation and system administration
• YAML/JSON for configuration management and APIs
• SQL for data operations and analytics
• Basic understanding of Go or Java (advantage)

Data Technologies
• Data Pipeline Tools: Apache Airflow, Prefect, Dagster, Apache NiFi
• Streaming & Real-time: Apache Kafka, Apache Spark, Apache Flink, Redis
• Databases: PostgreSQL, MongoDB, Elasticsearch, ClickHouse
• Data Warehousing: Snowflake, BigQuery, Redshift, Databricks
• Data Versioning: DVC, LakeFS, Pachyderm

Preferred Qualifications

Advanced Technical Skills
• Enterprise Security: Experience with enterprise security frameworks and compliance (SOC 2, ISO 27001)
• High-scale Processing: Experience with petabyte-scale data processing and real-time analytics
• Performance Optimization: Advanced system optimization, distributed computing, caching strategies
• API Development: REST/GraphQL APIs, microservices architecture, API gateways

Enterprise & Domain Experience
• Previous experience with enterprise clients or B2B SaaS platforms
• Experience with compliance-heavy industries (finance, healthcare, government)
• Understanding of data privacy regulations (GDPR, SOX, HIPAA)
• Experience with multi-tenant enterprise architectures

Leadership & Collaboration
• Experience mentoring junior engineers and technical team leadership
• Strong collaboration with data science teams, product managers, and enterprise clients
• Experience with agile methodologies and enterprise project management
• Understanding of business metrics, SLAs, and enterprise ROI

Growth Opportunities
• Career Path: Clear progression to Lead DevOps Engineer or Head of Infrastructure
• Technical Growth: Work with cutting-edge enterprise AI/ML technologies
• Leadership: Opportunity to build and lead the DevOps/Infrastructure team
• Industry Exposure: Work with government and MNC enterprise clients and cutting-edge technology stacks

Success Metrics & KPIs

Technical KPIs
• System Uptime: Maintain 99.9%+ availability for enterprise clients
• Deployment Frequency: Enable daily deployments with zero downtime
• Performance: Ensure optimal response times and system performance
• Cost Optimization: Achieve 20-30% annual infrastructure cost reduction
• Security: Zero security incidents and full compliance adherence

Business Impact
• Time to Market: Reduce deployment cycles and improve development velocity
• Client Satisfaction: Maintain 95%+ enterprise client satisfaction scores
• Team Productivity: Improve engineering team efficiency by 40%+
• Scalability: Support rapid client base growth without infrastructure constraints

Why Join Us
Be part of a forward-thinking, innovation-driven company with a strong engineering culture. Influence high-impact architectural decisions that shape mission-critical systems. Work with cutting-edge technologies and a passionate team of professionals. Competitive compensation, flexible working environment, and continuous learning opportunities.
How to Apply
Please submit your resume and a cover letter outlining your relevant experience and how you can contribute to Aaizel Tech Labs' success. Send your application to hr@aaizeltech.com, bhavik@aaizeltech.com or anju@aaizeltech.com.
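
One of the duties above is detecting model drift. Below is a hedged sketch of a simple distribution check with a Kolmogorov-Smirnov test; the data and alert threshold are assumptions, not the company's actual pipeline.

```python
# Hedged sketch of flagging feature drift between training data and live traffic.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
live_feature = rng.normal(loc=0.4, scale=1.0, size=1_000)   # shifted distribution

stat, p_value = ks_2samp(train_feature, live_feature)
ALERT_P = 0.01                                              # assumed alert threshold
if p_value < ALERT_P:
    print(f"drift suspected (KS={stat:.3f}, p={p_value:.1e}) -> trigger retraining review")
else:
    print("no significant drift detected")
```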

Posted 5 days ago

Apply

0.0 - 3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Category: Software Development/Engineering
Main location: India, Karnataka, Bangalore
Position ID: J0725-0232
Employment Type: Full Time

Position Description:
Job Title: Data Analyst (Artificial Intelligence Solutions Analyst)
Position: Senior Software Engineer / AI/ML Engineer
Experience: 3+ years
Shift: General Shift
Education Qualification: Any graduate or higher in a related field, with a minimum of 3 years of relevant experience.

We are seeking a skilled and experienced AI/ML Engineer to join our Bangalore-based team. The ideal candidate will have a strong background in machine learning, data preprocessing, and deploying AI solutions in cloud environments. This role requires a hands-on professional with a consulting mindset who can translate business needs into actionable AI/ML solutions. The candidate will lead and support the design, implementation, and optimization of AI-powered tools across various business or public sector environments. The role also includes client interaction, mentoring junior staff, and contributing to end-to-end project lifecycles, from data ingestion to model deployment and monitoring.

Your future duties and responsibilities:
Lead the design and implementation of AI/ML solutions for clients.
Translate business problems into technical requirements and model-driven solutions.
Perform data preparation tasks, including cleansing, preprocessing, and handling missing or inconsistent data.
Work with AI frameworks like TensorFlow, PyTorch, and Scikit-learn.
Leverage cloud-based AI platforms such as AWS SageMaker, Azure ML, or GCP Vertex AI.
Provide functional support and guidance during implementation and post-deployment phases.
Analyze and redesign business processes as required by AI implementations.
Mentor junior team members and support business stakeholders in AI adoption.
Engage directly with clients and senior stakeholders to drive solution outcomes.
Collaborate across global virtual teams and contribute to best practices in AI solution delivery.

Required qualifications to be successful in this role:
Master's degree in Computer Science, Data Science, AI/ML, or a related technical discipline.
Certifications in AI/ML or cloud platforms (e.g., AWS Certified Machine Learning, Azure AI Engineer Associate, TensorFlow Developer Certificate).
Experience with MLOps tools and practices (e.g., MLflow, Kubeflow, CI/CD pipelines for ML models).
Exposure to Natural Language Processing (NLP), Computer Vision, or Deep Learning projects.
Understanding of data privacy, model interpretability, and responsible AI principles.
Experience working in cross-functional teams and global delivery models.
Prior experience in client-facing roles within consulting or IT services companies.
Strong business acumen and the ability to communicate AI/ML concepts to non-technical stakeholders.

Must-Have Skills:
3+ years of hands-on experience in AI/ML development or consulting.
Proficiency in data preparation, including cleansing, feature engineering, and data validation.
Strong knowledge of AI/ML frameworks (TensorFlow, PyTorch, Scikit-learn).
Experience with cloud-based ML platforms (AWS SageMaker, Azure ML, or GCP Vertex AI).
Excellent problem-solving and communication skills.
Ability to work in dynamic and globally distributed teams.

Good-to-Have Skills:
Prior consulting experience with public or private sector clients.
Familiarity with DevOps and MLOps practices for model deployment and monitoring.
Exposure to business process mapping and improvement initiatives.
Experience leading or supporting client-facing workshops or strategy sessions.

Skills: Artificial Intelligence, Data Analysis, English

What you can expect from us:
Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you'll reach your full potential because you are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction. Your work creates value. You'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You'll shape your career by joining a company built to grow and last. You'll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team, one of the largest IT and business consulting services firms in the world.
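
As a hedged sketch of the data-preparation skills this role asks for (cleansing, missing values, simple feature engineering), here is a small pandas example; the columns and rules are illustrative assumptions.

```python
# Hedged sketch of basic data cleansing and feature engineering with pandas.
import pandas as pd

raw = pd.DataFrame({
    "age": [34, None, 29, 51],
    "income": [52000, 61000, None, 87000],
    "signup_date": ["2024-01-05", "2024-02-10", None, "2024-03-22"],
})

clean = raw.copy()
clean["age"] = clean["age"].fillna(clean["age"].median())          # impute missing ages
clean["income"] = clean["income"].fillna(clean["income"].median()) # impute missing income
clean["signup_date"] = pd.to_datetime(clean["signup_date"])        # missing date becomes NaT
clean["signup_month"] = clean["signup_date"].dt.month              # simple engineered feature
clean["high_income"] = (clean["income"] > 60000).astype(int)       # simple binary feature
print(clean)
```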

Posted 5 days ago

Apply

2.0 years

0 Lacs

Bapatla, Andhra Pradesh

On-site

Experience: 0 - 1 years
Salary: 2.2 - 2.4 LPA
Location: Bapatla, Andhra Pradesh

Job Description: Python with AI/ML Developer
We are looking for a resilient, forward-thinking Python Developer with expertise in AI and Machine Learning deployment to join our growing technology team. As part of this role, you will be responsible for building, deploying, and maintaining scalable ML models and backend systems that support AI-driven products. This role requires a passion for innovation, an eye for detail, and the ability to translate complex problems into effective technical solutions. A successful candidate will be deeply familiar with Python, AI/ML frameworks, deployment strategies, and production-level best practices. You will work closely with cross-functional teams to ensure that our AI solutions are efficient, scalable, and aligned with user needs and business goals.

Responsibilities:
Develop, test, and deploy machine learning models using Python and AI frameworks (e.g., TensorFlow, PyTorch, scikit-learn).
Collaborate with data scientists, product teams, and engineers to translate prototypes into production-ready systems.
Build and manage APIs and backend services using Flask, FastAPI, or similar frameworks.
Deploy ML models into cloud or on-prem environments using Docker, Kubernetes, and CI/CD pipelines.
Monitor and optimize model performance, latency, and resource usage post-deployment.
Automate data preprocessing, feature engineering, and model training workflows.
Write clean, maintainable, and scalable code with a strong focus on quality and performance.
Stay current with the latest research and best practices in AI, ML, and MLOps.
Troubleshoot and resolve issues related to ML model performance, deployment, and data pipelines.
Document technical architecture, processes, and best practices for internal teams.

Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
2+ years of experience in Python development with a focus on AI/ML projects.
Strong knowledge of ML/AI tools and frameworks (e.g., scikit-learn, TensorFlow, PyTorch).
Experience deploying and maintaining ML models in production environments.
Proficiency with containerization (Docker) and familiarity with orchestration tools (Kubernetes is a plus).
Knowledge of REST APIs and microservice architecture.
Experience working with cloud platforms (AWS, Azure, or GCP).
Strong problem-solving skills and a proactive, team-oriented mindset.
Version control with Git and collaboration using GitHub or GitLab.
Understanding of MLOps workflows and model lifecycle management.

Preferred Skills:
Experience with data streaming platforms (Kafka, Apache Spark).
Familiarity with MLflow, Airflow, DVC, or similar MLOps tools.
Exposure to computer vision, NLP, or deep learning projects.
Knowledge of database systems, including vector databases.
Knowledge of LLMs.
Experience building interactive dashboards using Streamlit, Dash, or Plotly.
Contributions to open-source AI/ML projects or participation in Kaggle competitions.
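
A hedged sketch of wrapping a model behind a small FastAPI service, matching the Flask/FastAPI and Docker deployment duties above; the stub model and the endpoint shape are assumptions.

```python
# Hedged sketch of serving a model prediction over a FastAPI endpoint.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictRequest(BaseModel):
    features: list[float]

def stub_predict(features: list[float]) -> float:
    # Placeholder for a real model loaded at startup (e.g. via joblib).
    return sum(features) / max(len(features), 1)

@app.post("/predict")
def predict(req: PredictRequest) -> dict:
    return {"score": stub_predict(req.features)}

# Run locally with:  uvicorn main:app --reload
# and containerize with a standard Python Dockerfile for deployment.
```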

Posted 5 days ago

Apply

0 years

0 Lacs

India

Remote

You can also send your applications here: team.vigyaana@gmail.com
Pay: 150-200 INR/hr
Vigyaana is an innovative edtech platform committed to advancing knowledge in Artificial Intelligence (AI). We aim to provide high-quality AI education to students, professionals, and enthusiasts looking to upskill in the ever-evolving AI landscape.
Job Description
We are seeking passionate and knowledgeable AI teaching faculty members to join our team. The ideal candidates will have expertise in AI and related fields, a passion for teaching, and the ability to create engaging learning experiences for students.
Responsibilities
Design and deliver AI-related courses, including but not limited to Machine Learning, Deep Learning, Natural Language Processing, and Generative AI.
Develop high-quality course materials, including presentations, assignments, and hands-on projects.
Conduct live sessions, online lectures, recorded videos, and interactive discussions.
Mentor and guide students through their AI learning journey.
Evaluate student progress through assessments, quizzes, and projects.
Stay updated with the latest advancements in AI and integrate them into course content.
Qualifications & Skills
Bachelor's, Master’s or PhD in Computer Science, Artificial Intelligence, Data Science, or a related field.
Strong expertise in Python, TensorFlow, PyTorch, and other AI frameworks.
Practical industry experience in AI applications is a plus.
Excellent communication and presentation skills.
Ability to explain complex AI concepts in an easy-to-understand manner.
Experience with online teaching platforms and e-learning tools.
Why Join Us?
Opportunity to shape the future of AI education.
Flexible work environment (remote options available).
Pay: 150-200 INR/hr.
Access to a dynamic community of AI experts and learners.
A chance to contribute to cutting-edge AI education initiatives.
If you are passionate about AI and education, we invite you to be a part of our mission to make AI learning accessible and impactful. Apply now and help us build the next generation of AI talent!

Posted 5 days ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

Remote

Role Description
We are hiring a Senior Environmental Data Scientist to lead the technical development of nature and biodiversity data solutions. This is a high-impact individual contributor role for an environmental scientist first and foremost who is additionally an accomplished data scientist and programmer. You’ll be responsible for transforming scientific research into scalable analytics, building robust environmental data products, and supporting product development through direct technical contribution. You will also play a soft leadership role — supporting and mentoring junior data scientists, guiding generalists on environmental matters, and helping shape the long-term data science capacity of the Nature & Biodiversity team. This is a senior-level hire with a clear path to team leadership as our company grows. You will report to the Head of Nature & Biodiversity Products.
Job Duties and Responsibilities
Lead Development of Data Solutions: Design and implement advanced data pipelines, metrics, and models that assess how businesses interface with nature.
Apply Environmental Science at Scale: Translate robust environmental science into analytical workflows that can support business decisions and regulatory needs.
Drive Methodological Rigor: Incorporate peer-reviewed methodologies and scientific best practices into product development; stay ahead of innovations in the field.
Architect Scalable Data Solutions: Develop performant, production-ready code and collaborate with engineers to build tools for spatial, temporal, and exploratory analysis.
Mentor and Guide: Support junior data scientists, serve as the go-to environmental expert across functions, and help build the team’s overall environmental data science capacity.
Engage with Frameworks: Apply knowledge of sustainability disclosure and risk frameworks (e.g. TNFD, ESRS, SBTN, SFDR) to develop solutions that meet evolving stakeholder needs.
Collaborate and Communicate: Work cross-functionally with product, research, and engineering teams to translate scientific insight into real-world impact. Represent your work with external stakeholders as needed.
Requirements: Experience, Qualifications and Skills
Environmental Expertise: PhD (preferred) or Master’s in environmental science, ecology, conservation, geosciences, or a closely related field.
Experience: 5+ years applying data science to environmental or sustainability contexts; experience in a product-oriented or startup environment is a must.
Programming & Engineering: Expert Python developer with strong engineering discipline (e.g., Git, unit testing, CI/CD); experience building high-quality analytical code.
Geospatial & Remote Sensing: Advanced skills in spatial analysis, GIS tools, and remote sensing data workflows (e.g., raster/vector processing, spatial joins, indexing).
Data Science & Machine Learning: Proficiency in statistical modelling, spatial ML, and fundamental AI/ML methods (e.g., scikit-learn, PyTorch, foundation models).
Data Systems: Hands-on experience with relational and spatial databases (e.g., PostGIS), cloud data tools (e.g., Snowflake), and handling unstructured and structured data.
Framework Fluency: Demonstrated ability to interpret and implement solutions aligned with environmental frameworks such as TNFD, ESRS, SFDR, and SBTN.
Communication: Ability to explain complex ideas clearly to both technical and non-technical audiences; experience with data storytelling and visualization is a plus.
Team Fit: Collaborative, proactive, impact-driven, and adaptable — comfortable with the fast pace and opportunities of a growing startup.
Preferred Qualifications
Deep experience with one or more particular nature-related domains, such as biodiversity impact modeling, physical risk analysis, or nature risk valuation.
Experience contributing to or leading cross-disciplinary scientific or open-source projects.
Work experience at corporate sustainability offices, financial institutions, regulatory bodies, or nature data providers.
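To make the geospatial requirement concrete, here is a minimal sketch of a vector spatial-join workflow in GeoPandas; the file paths and column names are hypothetical placeholders rather than anything specified by the role.

```python
# Minimal sketch of a vector spatial-join workflow of the kind this role describes.
# File paths and column names ("sites.geojson", "protected_areas.geojson",
# "area_name") are hypothetical placeholders, not details from the posting.
import geopandas as gpd

# Load company asset locations and protected-area polygons.
sites = gpd.read_file("sites.geojson")
protected = gpd.read_file("protected_areas.geojson")

# Reproject to a common CRS before any spatial operation.
protected = protected.to_crs(sites.crs)

# Spatial join: tag each site with the protected area it falls inside, if any.
joined = gpd.sjoin(sites, protected[["area_name", "geometry"]],
                   how="left", predicate="within")

# Simple exposure summary: how many sites fall within protected areas.
exposed = joined["area_name"].notna().sum()
print(f"{exposed} of {len(sites)} sites fall within a protected area")
```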

Posted 5 days ago

Apply

8.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate
Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.
Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Responsibilities
AI/ML Engineer | Experience: 4 - 8 years | Location: Mumbai
Key Responsibilities:
Develop and implement machine learning models and algorithms tailored to solve specific business challenges.
Collaborate with cross-functional teams to understand business requirements and translate them into data-driven solutions.
Conduct data preprocessing and feature engineering to optimize model performance.
Evaluate model accuracy and performance, and apply techniques to improve and refine models.
Deploy machine learning solutions in production environments and monitor their performance.
Stay updated on the latest AI/ML trends, technologies, and best practices.
Qualifications:
Experience: 2-6 years of hands-on experience in AI/ML development.
Education: Bachelor's or Master’s degree in Computer Science, Data Science, Engineering, or a related field.
Technical Skills:
Proficiency in programming languages such as Python, R, or Java.
Experience with machine learning frameworks and libraries, such as TensorFlow, PyTorch, or scikit-learn.
Strong understanding of data structures, algorithms, and statistical methods.
Experience with cloud platforms (AWS, Google Cloud, Azure) is a plus.
Analytical Skills: Demonstrated ability to analyze complex data sets and derive actionable insights.
Understanding of GenAI/Agentic AI is a plus.
Problem-Solving: Creative thinker with strong problem-solving skills and attention to detail.
Communication: Excellent communication skills, with the ability to explain complex concepts to non-technical stakeholders.
Teamwork: Ability to work collaboratively in a fast-paced environment.
Mandatory Skill Sets: AI/ML, Python, PyTorch
Preferred Skill Sets: AI/ML, Python, PyTorch
Years of Experience Required: 4 - 8 years
Education Qualification: B.E. (B.Tech) / M.E. / M.Tech
Degrees/Field of Study Required: Master of Engineering, Bachelor of Engineering
Required Skills: Python (Programming Language)
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 28 more}
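As a concrete illustration of the preprocessing, feature-engineering, and evaluation duties listed above, here is a minimal generic scikit-learn pipeline sketch; the dataset, column names, and model choice are placeholders, not anything specified in the posting.

```python
# Minimal sketch of a preprocessing + model + evaluation pipeline.
# The CSV path, column names, and target name are illustrative placeholders.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.read_csv("training_data.csv")          # hypothetical dataset
X, y = df.drop(columns=["target"]), df["target"]

numeric_cols = X.select_dtypes(include="number").columns
categorical_cols = X.select_dtypes(exclude="number").columns

# Scale numeric features and one-hot encode categorical ones.
preprocess = ColumnTransformer([
    ("num", StandardScaler(), numeric_cols),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical_cols),
])

pipeline = Pipeline([
    ("preprocess", preprocess),
    ("model", GradientBoostingClassifier()),
])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
pipeline.fit(X_train, y_train)
print(classification_report(y_test, pipeline.predict(X_test)))
```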

Posted 5 days ago

Apply

4.0 - 5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Job Description:
Develop machine learning models for the cyber security and networking domain.
Work with stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions.
Should be a self-starter who produces impactful outcomes in a start-up environment, open to technical challenges and hungry for growth.
Mine and analyze data from company databases to drive optimization and improvement of product development.
Develop custom data models and algorithms to apply to data sets.
Use predictive modelling to increase and optimize customer experience and revenue generation.
Coordinate with different functional teams to implement models and monitor outcomes.
Develop processes and tools to monitor and analyze model performance and data accuracy.
Key Responsibilities:
Lead and manage a cross-functional team of engineers and data scientists in the re-engineering of our network security and management products using AI-driven solutions.
Collaborate with product managers and stakeholders to define project goals, objectives, and deliverables.
Architect, design, and implement cutting-edge AI solutions to enhance network security, optimize management processes, and improve product performance.
Oversee the development and deployment of AI models, algorithms, and data pipelines while ensuring scalability, reliability, and security.
Communicate progress, challenges, and solutions effectively to senior management and stakeholders.
Requirements and Qualifications:
Bachelor’s degree in Engineering with 4 to 5 years of experience in AI/ML.
Experience working with machine learning models for threat intelligence and anomaly detection.
Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks.
Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage, etc.) and experience with applications.
Proficiency in TensorFlow, PyTorch, and Keras, along with Natural Language Processing (NLP), is essential.
Excellent written and verbal communication skills for coordinating across teams.
A drive to learn and master new technologies and techniques.
Benefits
Our benefits include: health insurance; performance-based bonuses (for specific roles); wellness programs with access to mental health support, meditation sessions, and ergonomic workspaces to ensure a healthy and productive work environment; and team lunches, parties, and win celebrations to foster a collaborative and fun work culture.
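For illustration of the anomaly-detection work this role mentions, here is a minimal sketch using an Isolation Forest over network-flow features; the feature names and contamination rate are assumptions chosen for the example, not details from the posting.

```python
# Minimal sketch of network anomaly detection with an Isolation Forest.
# The feature names (bytes_sent, duration, etc.), data source, and assumed
# anomaly rate are hypothetical placeholders chosen for illustration.
import pandas as pd
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

flows = pd.read_csv("network_flows.csv")       # hypothetical flow records
features = flows[["bytes_sent", "bytes_received", "duration", "packet_count"]]

X = StandardScaler().fit_transform(features)

# Train an unsupervised detector; contamination is the assumed anomaly rate.
detector = IsolationForest(contamination=0.01, random_state=42)
flows["anomaly"] = detector.fit_predict(X)     # -1 flags suspected anomalies

print(flows[flows["anomaly"] == -1].head())
```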

Posted 5 days ago

Apply

0 years

0 Lacs

India

Remote

Machine Learning Intern (Paid)
Company: Unified Mentor
Location: Remote
Duration: 3 months
Opportunity: Full-time based on performance, with Certificate of Internship
Application Deadline: 05th August 2025
About Unified Mentor
Unified Mentor provides students and graduates with hands-on learning opportunities and career growth in Machine Learning and Data Science.
Role Overview
As a Machine Learning Intern, you will work on real-world projects, enhancing your practical skills in data analysis and model development.
Responsibilities
✅ Design, test, and optimize machine learning models
✅ Analyze and preprocess datasets
✅ Develop algorithms and predictive models
✅ Use tools like TensorFlow, PyTorch, and Scikit-learn
✅ Document findings and create reports
Requirements
🎓 Enrolled in or a graduate of a relevant program (Computer Science, AI, Data Science, or related field)
🧠 Knowledge of machine learning concepts and algorithms
💻 Proficiency in Python or R (preferred)
🤝 Strong analytical and teamwork skills
Benefits
💰 Stipend: ₹7,500 - ₹15,000 (performance-based)
✔ Hands-on machine learning experience
✔ Internship Certificate & Letter of Recommendation
✔ Real-world project contributions for your portfolio
Equal Opportunity
Unified Mentor is an equal-opportunity employer, welcoming candidates from all backgrounds.
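As a small example of the model design and optimization work an intern here might practice, here is a minimal PyTorch training loop on synthetic data; every detail (data, architecture, hyperparameters) is arbitrary and chosen for illustration only.

```python
# Minimal sketch of a PyTorch training loop on synthetic data, illustrating
# the design/test/optimize cycle the internship describes. Everything here
# (data, architecture, hyperparameters) is an arbitrary example.
import torch
from torch import nn

# Synthetic binary-classification data: 2 features, roughly linearly separable.
X = torch.randn(512, 2)
y = (X[:, 0] + X[:, 1] > 0).float().unsqueeze(1)

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    accuracy = ((model(X) > 0).float() == y).float().mean()
print(f"final loss {loss.item():.3f}, train accuracy {accuracy:.2%}")
```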

Posted 6 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Machine Learning Engineer
Experience: 5+ years
Location: Pune/Bangalore/Hyderabad/Chennai
JD:
Expert-level proficiency in Google Cloud Platform (GCP), demonstrating deep practical experience with Vertex AI, BigQuery, Apache Beam, Cloud Storage, Pub/Sub, Cloud Composer (Apache Airflow), Cloud Run, Kubernetes Engine (GKE) concepts (for custom model serving), and Docker.
Strong experience leveraging GPUs/TPUs for accelerated ML training.
Mastery of Python, TensorFlow and/or PyTorch, and NLP libraries (e.g. spaCy, NLTK).
Large-scale model training techniques, including distributed training, transfer learning, fine-tuning pre-trained models, and efficient data loading strategies.
Develop, fine-tune, and deploy LLMs using Vertex AI and GCP-native tools.
Build and maintain NLP pipelines for tasks such as text classification, NER, question answering, summarization, and translation.
Implement prompt engineering and retrieval-augmented generation (RAG) for enterprise applications.
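Since the posting calls out retrieval-augmented generation (RAG), here is a minimal sketch of the retrieve-then-generate structure; TF-IDF stands in for the embedding model and vector store a production Vertex AI pipeline would normally use, and the documents and send_to_llm() call are hypothetical placeholders.

```python
# Minimal retrieval-augmented generation (RAG) sketch: retrieve the most
# relevant documents, then assemble them into a grounded prompt. TF-IDF is
# used as a stand-in retriever; a production pipeline would typically use an
# embedding model plus a managed vector store instead. The documents and the
# send_to_llm() call are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "VPN access requires multi-factor authentication for all employees.",
    "Expense reports must be submitted within 30 days of purchase.",
    "Production deployments are frozen during the last week of each quarter.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)

def retrieve(query: str, k: int = 2) -> list:
    # Rank documents by cosine similarity to the query and keep the top k.
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

query = "When do I need to file my expenses?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
# send_to_llm(prompt)  # hypothetical call to the deployed LLM endpoint
print(prompt)
```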

Posted 6 days ago

Apply

2.0 years

0 Lacs

Thane, Maharashtra, India

On-site

Job Title: Software Engineer - Machine Learning Engineer (LLM/GenAI Focus)
Location: Airoli, Navi Mumbai
Experience Required: Minimum 2 years’ relevant experience in Python
Key Responsibilities
Design, develop, and implement machine learning (ML) and natural language processing (NLP) models and solutions.
Build, fine-tune, and deploy Large Language Models (LLMs) and Generative AI (GenAI) applications.
Write, maintain, and optimize Python code for ML and NLP projects.
Work with databases to collect, clean, and manage datasets for model training and evaluation.
Deploy ML models and Python applications on public cloud platforms (Azure/AWS).
Collaborate with cross-functional teams to define business requirements and deliver solutions.
(Good to have) Support and implement best practices for LLMOps, including monitoring and lifecycle management of LLM/GenAI models.
Requirements
Relevant experience: Minimum 2 years in machine learning, NLP, or AI solution development.
Python programming: 1-2 years of hands-on experience.
ML/NLP: 2+ years of practical experience building and deploying ML/NLP solutions.
LLM and GenAI: At least 1 year working with Large Language Models (LLMs) and Generative AI technologies.
Databases: 1+ years of experience working with any modern database (SQL/NoSQL).
Cloud Deployment: 1+ years of experience deploying ML models or Python applications on Azure or AWS.
Good to have: Familiarity with LLMOps tools or frameworks for managing LLM/GenAI models.
Preferred Skills
Familiarity with major ML/NLP libraries (e.g., scikit-learn, TensorFlow, PyTorch, spaCy, Hugging Face Transformers).
Understanding of software engineering best practices, version control, and model lifecycle management.
Strong analytical and problem-solving skills.
Effective communication and teamwork abilities.
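As a small example of the NLP building blocks this role works with, here is a minimal sketch that loads a pre-trained transformer for text classification via Hugging Face Transformers; the checkpoint is a common public default, not one named in the posting.

```python
# Minimal sketch: loading a pre-trained transformer for text classification
# with Hugging Face Transformers. The checkpoint is a public default model;
# nothing here is specified by the posting itself.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

tickets = [
    "The upgrade went smoothly and the new dashboard is great.",
    "The API has been timing out all morning and nobody responds.",
]
for ticket, result in zip(tickets, classifier(tickets)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {ticket}")
```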

Posted 6 days ago

Apply
Featured Companies