3.0 - 4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description
Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space, with a footprint across 31 countries and territories. The Circle K India Data & Analytics team is an integral part of ACT’s Global Data & Analytics Team, and the Data Scientist/Senior Data Scientist will be a key player on this team, helping grow analytics globally at ACT. The hired candidate will partner with multiple departments, including Global Marketing, Merchandising, Global Technology, and Business Units.

Department: Data & Analytics
Location: Cyber Hub, Gurugram, Haryana (5 days in office)
Job Type: Permanent, Full-Time (40 Hours)
Reports To: Senior Manager, Data Science & Analytics

About The Role
The incumbent will be responsible for delivering advanced analytics projects that drive business results, including interpreting business requirements, selecting the appropriate methodology, data cleaning, exploratory data analysis, model building, and creation of polished deliverables.

Roles & Responsibilities
Analytics & Strategy
- Analyse large-scale structured and unstructured data; develop deep-dive analyses and machine learning models in retail, marketing, merchandising, and other areas of the business
- Utilize data mining, statistical and machine learning techniques to derive business value from store, product, operations, financial, and customer transactional data
- Apply multiple algorithms or architectures and recommend the best model with an in-depth description to evangelize data-driven business decisions
- Utilize the cloud setup to extract processed data for statistical modelling and big data analysis, and visualization tools to represent large sets of time series/cross-sectional data

Operational Excellence
- Follow industry standards in coding solutions and follow the programming life cycle to ensure standard practices across the project
- Structure hypotheses, build thoughtful analyses, develop underlying data models and bring clarity to previously undefined problems
- Partner with Data Engineering to build, design and maintain core data infrastructure, pipelines and data workflows to automate dashboards and analyses

Stakeholder Engagement
- Work collaboratively across multiple sets of stakeholders – business functions, Data Engineers, and Data Visualization experts – to deliver on project deliverables
- Articulate complex data science models to business teams and present the insights in easily understandable and innovative formats

Job Requirements
Education
- Bachelor’s degree required, preferably with a quantitative focus (Statistics, Business Analytics, Data Science, Math, Economics, etc.)
- Master’s degree preferred (MBA/MS Computer Science/M.Tech Computer Science, etc.)

Relevant Experience
- 3 - 4 years for Data Scientist
- Relevant working experience in a data science/advanced analytics role

Behavioural Skills
- Delivery excellence
- Business disposition
- Social intelligence
- Innovation and agility

Knowledge
- Functional analytics (supply chain analytics, marketing analytics, customer analytics, etc.)
- Statistical modelling using analytical tools (R, Python, KNIME, etc.)
- Knowledge of statistics and experimental design (A/B testing, hypothesis testing, causal inference)
- Practical experience building scalable ML models, feature engineering, model evaluation metrics, and statistical inference
- Practical experience deploying models using MLOps tools and practices (e.g., MLflow, DVC, Docker, etc.)
- Strong coding proficiency in Python (Pandas, Scikit-learn, PyTorch/TensorFlow, etc.)
- Big data technologies & frameworks (AWS, Azure, GCP, Hadoop, Spark, etc.)
- Enterprise reporting systems, relational (MySQL, Microsoft SQL Server, etc.) and non-relational (MongoDB, DynamoDB) database management systems, and data engineering tools
- Business intelligence & reporting (Power BI, Tableau, Alteryx, etc.)
- Microsoft Office applications (MS Excel, etc.)
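For context, a minimal sketch of the kind of A/B test the "experimental design (A/B testing, hypothesis testing)" requirement above refers to: a two-proportion z-test comparing conversion rates between a control and a test group. The group names, counts, and 5% significance threshold are illustrative assumptions, not details from the listing.

    # Hypothetical example: compare conversion rates of a control and a test store group.
    from statsmodels.stats.proportion import proportions_ztest

    conversions = [530, 611]    # converted transactions in control vs. test (made-up numbers)
    samples = [10_000, 10_000]  # transactions observed in each group

    z_stat, p_value = proportions_ztest(count=conversions, nobs=samples, alternative="two-sided")
    print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

    if p_value < 0.05:
        print("Reject H0: the groups convert at different rates.")
    else:
        print("Fail to reject H0: no significant difference detected.")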
Posted 1 day ago
0 years
0 Lacs
Goregaon, Maharashtra, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Manager

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: A career within Data and Analytics services will provide you with the opportunity to help organizations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organizational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organizations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities: Key Responsibilities
· Python, TensorFlow or PyTorch, Scikit-learn, and XGBoost; computer vision and NLP; MLOps.
· Recommendation algorithms (collaborative filtering, content-based filtering).
· Experienced with MLOps tools and cloud platforms - any of GCP, Azure, Databricks, or AWS.
· Vertex AI experience, including model deployment and model training pipelines.
· Experience with real-world ML applications in retail, such as recommendation systems, demand forecasting, inventory optimization, or customer segmentation.
· Experience in retail.

Mandatory skill sets:
· Python, TensorFlow or PyTorch, Scikit-learn, and XGBoost; computer vision and NLP; MLOps.
· Experienced with MLOps tools and cloud platforms - any of GCP, Azure, Databricks, or AWS.
Preferred skill sets:
· Experienced with MLOps tools and cloud platforms - any of GCP, Azure, Databricks, or AWS.

Years of experience required: 7-11
Education qualification: B.Tech / M.Tech / MBA / MCA
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Engineering, Master of Business Administration, Master of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Artificial Intelligence Markup Language
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 32 more}
Desired Languages (If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
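As an illustration of the "recommendation algorithms (collaborative filtering, content-based filtering)" item above, here is a minimal item-based collaborative filtering sketch. The user-item matrix, user and item names, and the scoring rule are assumptions made for the example, not anything specified by the posting.

    # Hypothetical example: item-based collaborative filtering on a tiny user-item matrix.
    import numpy as np
    import pandas as pd
    from sklearn.metrics.pairwise import cosine_similarity

    # Rows = users, columns = items; values are ratings/interactions (0 = unseen).
    ratings = pd.DataFrame(
        [[5, 3, 0, 1],
         [4, 0, 0, 1],
         [1, 1, 0, 5],
         [0, 0, 5, 4]],
        index=["u1", "u2", "u3", "u4"],
        columns=["item_a", "item_b", "item_c", "item_d"],
    )

    # Item-item similarity computed from the columns of the rating matrix.
    item_sim = pd.DataFrame(
        cosine_similarity(ratings.T), index=ratings.columns, columns=ratings.columns
    )

    def recommend(user, k=2):
        """Score unseen items by similarity-weighted ratings of the user's seen items."""
        seen = ratings.loc[user]
        scores = item_sim.mul(seen, axis=0).sum()  # weighted sum over the user's seen items
        scores = scores[seen == 0]                 # only rank items the user has not seen
        return scores.sort_values(ascending=False).head(k)

    print(recommend("u2"))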
Posted 1 day ago
4.0 - 6.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY-Consulting – AI Enabled Automation Developer– Senior We are looking to hire people with strong AI Enabled Automation skills and who are interested in applying AI in the process automation space – Azure, AI, ML, Deep Learning, NLP, GenAI , large Lang Models(LLM), RAG ,Vector DB , Graph DB, Python. At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Responsibilities: Development and implementation of AI enabled automation solutions, ensuring alignment with business objectives. Design and deploy Proof of Concepts (POCs) and Points of View (POVs) across various industry verticals, demonstrating the potential of AI enabled automation applications. Ensure seamless integration of optimized solutions into the overall product or system Collaborate with cross-functional teams to understand requirements, to integrate solutions into cloud environments (Azure, GCP, AWS, etc.) and ensure it aligns with business goals and user needs Educate team on best practices and keep updated on the latest tech advancements to bring innovative solutions to the project Technical Skills Requirements 4 to 6 years of relevant professional experience Proficiency in Python and frameworks like PyTorch, TensorFlow, Hugging Face Transformers. Strong foundation in ML algorithms, feature engineering, and model evaluation. Strong foundation in Deep Learning, Neural Networks, RNNs, CNNs, LSTMs, Transformers (BERT, GPT), and NLP. Experience in GenAI technologies — LLMs (GPT, Claude, LLaMA), prompting, fine-tuning. Experience with LangChain, LlamaIndex, LangGraph, AutoGen, or CrewAI. Knowledge of retrieval augmented generation (RAG) Knowledge of Knowledge Graph RAG Experience with multi-agent orchestration, memory, and tool integrations Experience/Implement MLOps practices and tools (CI/CD for ML, containerization, orchestration, model versioning and reproducibility). Experience with cloud platforms (AWS, Azure, GCP) for scalable ML model deployment. Good understanding of data pipelines, APIs, and distributed systems. Build observability into AI systems — latency, drift, performance metrics. Strong written and verbal communication, presentation, client service and technical writing skills in English for both technical and business audiences. Strong analytical, problem solving and critical thinking skills. Ability to work under tight timelines for multiple project deliveries. What we offer : At EY GDS, we support you in achieving your unique potential both personally and professionally. We give you stretching and rewarding experiences that keep you motivated, working in an atmosphere of integrity and teaming with some of the world's most successful companies. And while we encourage you to take personal responsibility for your career, we support you in your professional development in every way we can. 
You enjoy the flexibility to devote time to what matters to you, in your business and personal lives. At EY you can be who you are and express your point of view, energy and enthusiasm, wherever you are in the world. It's how you make a difference. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
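For context, a minimal, framework-agnostic sketch of the retrieval-augmented generation (RAG) pattern listed in the requirements above: retrieve the most similar passages, then ground the LLM prompt in them. embed() and generate() are placeholders for whichever embedding model and LLM a team actually uses (for example via LangChain or LlamaIndex); the documents and question are made up.

    # Minimal RAG sketch: retrieve relevant passages, then ground the LLM prompt in them.
    # embed() and generate() are placeholders for a real embedding model and LLM client.
    import numpy as np

    def embed(text: str) -> np.ndarray:
        """Placeholder: return a vector for the text (in practice, an embedding model)."""
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        return rng.normal(size=384)

    def generate(prompt: str) -> str:
        """Placeholder: call an LLM with the grounded prompt."""
        return f"[LLM answer based on a prompt of {len(prompt)} chars]"

    documents = [
        "Invoices above 10,000 require two approvals.",
        "Expense reports are processed within five business days.",
        "Travel bookings must go through the internal portal.",
    ]
    doc_vectors = np.stack([embed(d) for d in documents])

    def answer(question: str, top_k: int = 2) -> str:
        q = embed(question)
        sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
        context = "\n".join(documents[i] for i in np.argsort(sims)[::-1][:top_k])
        prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
        return generate(prompt)

    print(answer("How long do expense reports take?"))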
Posted 1 day ago
4.0 - 6.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY-Consulting – AI Enabled Automation Developer– Senior We are looking to hire people with strong AI Enabled Automation skills and who are interested in applying AI in the process automation space – Azure, AI, ML, Deep Learning, NLP, GenAI , large Lang Models(LLM), RAG ,Vector DB , Graph DB, Python. At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Responsibilities: Development and implementation of AI enabled automation solutions, ensuring alignment with business objectives. Design and deploy Proof of Concepts (POCs) and Points of View (POVs) across various industry verticals, demonstrating the potential of AI enabled automation applications. Ensure seamless integration of optimized solutions into the overall product or system Collaborate with cross-functional teams to understand requirements, to integrate solutions into cloud environments (Azure, GCP, AWS, etc.) and ensure it aligns with business goals and user needs Educate team on best practices and keep updated on the latest tech advancements to bring innovative solutions to the project Technical Skills Requirements 4 to 6 years of relevant professional experience Proficiency in Python and frameworks like PyTorch, TensorFlow, Hugging Face Transformers. Strong foundation in ML algorithms, feature engineering, and model evaluation. Strong foundation in Deep Learning, Neural Networks, RNNs, CNNs, LSTMs, Transformers (BERT, GPT), and NLP. Experience in GenAI technologies — LLMs (GPT, Claude, LLaMA), prompting, fine-tuning. Experience with LangChain, LlamaIndex, LangGraph, AutoGen, or CrewAI. Knowledge of retrieval augmented generation (RAG) Knowledge of Knowledge Graph RAG Experience with multi-agent orchestration, memory, and tool integrations Experience/Implement MLOps practices and tools (CI/CD for ML, containerization, orchestration, model versioning and reproducibility). Experience with cloud platforms (AWS, Azure, GCP) for scalable ML model deployment. Good understanding of data pipelines, APIs, and distributed systems. Build observability into AI systems — latency, drift, performance metrics. Strong written and verbal communication, presentation, client service and technical writing skills in English for both technical and business audiences. Strong analytical, problem solving and critical thinking skills. Ability to work under tight timelines for multiple project deliveries. What we offer : At EY GDS, we support you in achieving your unique potential both personally and professionally. We give you stretching and rewarding experiences that keep you motivated, working in an atmosphere of integrity and teaming with some of the world's most successful companies. And while we encourage you to take personal responsibility for your career, we support you in your professional development in every way we can. 
You enjoy the flexibility to devote time to what matters to you, in your business and personal lives. At EY you can be who you are and express your point of view, energy and enthusiasm, wherever you are in the world. It's how you make a difference. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 day ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Immediate joiner or 15 days Notice Gen AI Lead/Developer/Engineer Pan India or Chennai Required Experience: 10+ years of experience in data science, machine learning, or AI fields 5+ years in leadership roles related to AI governance, strategy, or innovation Proven track record of establishing AI governance frameworks and standards Experience collaborating with and influencing distributed AI teams Strong technical background in modern AI/ML technologies and methodologies Experience with large language models (LLMs) and generative AI applications Technical Expertise Requirements: Deep expertise in modern AI frameworks (PyTorch, TensorFlow, JAX) and their enterprise applications Hands-on experience with large language models (e.g., GPT, Claude, Llama) and prompt engineering Proficiency in implementing Retrieval-Augmented Generation (RAG) architectures Experience with vector databases (e.g., Pinecone, Weaviate, Milvus) and embeddings models Knowledge of MLOps practices and tools for model governance, monitoring, and lifecycle management Expertise in evaluating and governing foundation models for specific business domains Experience implementing AI governance tools and frameworks for responsible AI Understanding of AI infrastructure requirements, including GPU/TPU optimization Experience with AWS AI services (Bedrock, SageMaker, etc.) or other cloud AI platforms Knowledge of AI evaluation metrics and techniques for measuring model performance Experience with multimodal AI systems (text, image, audio) and their applications Proficiency in Python and related data science libraries (pandas, scikit-learn, etc.) Understanding of data engineering principles for AI workloads Experience with containerization and orchestration (Docker, Kubernetes) for AI deployments
Posted 1 day ago
6.0 years
22 - 34 Lacs
Noida, Uttar Pradesh, India
On-site
About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a Data Science and Product Engineering organization building Products and Solutions for Technology and Enterprise industries. We firmly believe in the power of data to transform businesses and make better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.

Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.

Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/.

Role Overview
As a Senior Data Scientist / AI Engineer, you will be a key player in our technical leadership. You will be responsible for designing, developing, and deploying sophisticated AI and Machine Learning solutions, with a strong emphasis on Generative AI and Large Language Models (LLMs). You will architect and manage scalable AI microservices, drive research into state-of-the-art techniques, and translate complex business requirements into tangible, high-impact products. This role requires a blend of deep technical expertise, strategic thinking, and leadership.

Key Responsibilities
- Architect & Develop AI Solutions: Design, build, and deploy robust and scalable machine learning models, with a primary focus on Natural Language Processing (NLP), Generative AI, and LLM-based Agents.
- Build AI Infrastructure: Create and manage AI-driven microservices using frameworks like Python FastAPI, ensuring high performance and reliability.
- Lead AI Research & Innovation: Stay abreast of the latest advancements in AI/ML. Lead research initiatives to evaluate and implement state-of-the-art models and techniques for performance and cost optimization.
- Solve Business Problems: Collaborate with product and business teams to understand challenges and develop data-driven solutions that create significant business value, such as building business rule engines or predictive classification systems.
- End-to-End Project Ownership: Take ownership of the entire lifecycle of AI projects, from ideation, data processing, and model development to deployment, monitoring, and iteration on cloud platforms.
- Team Leadership & Mentorship: Lead learning initiatives within the engineering team, mentor junior data scientists and engineers, and establish best practices for AI development.
- Cross-Functional Collaboration: Work closely with software engineers to integrate AI models into production systems and contribute to the overall system architecture.

Required Skills And Qualifications
- Master’s (M.Tech.) or Bachelor's (B.Tech.) degree in Computer Science, Artificial Intelligence, Information Technology, or a related field.
- 6+ years of professional experience in a Data Scientist, AI Engineer, or related role.
- Expert-level proficiency in Python and its core data science libraries (e.g., PyTorch, Huggingface Transformers, Pandas, Scikit-learn).
- Demonstrable, hands-on experience building and fine-tuning Large Language Models (LLMs) and implementing Generative AI solutions.
- Proven experience in developing and deploying scalable systems on cloud platforms, particularly AWS. Experience with GCS is a plus.
- Strong background in Natural Language Processing (NLP), including experience with multilingual models and transcription.
- Experience with containerization technologies, specifically Docker.
- Solid understanding of software engineering principles and experience building APIs and microservices.

Preferred Qualifications
- A strong portfolio of projects.
- A track record of publications in reputable AI/ML conferences is a plus.
- Experience with full-stack development (Node.js, Next.js) and various database technologies (SQL, MongoDB, Elasticsearch).
- Familiarity with setting up and managing CI/CD pipelines (e.g., Jenkins).
- Proven ability to lead technical teams and mentor other engineers.
- Experience developing custom tools or packages for data science workflows.

Skills: Natural Language Processing (NLP), Large Language Models (LLM) tuning, Generative AI, Python, CI/CD, NodeJS (Node.js), Docker, Google Cloud Platform (GCP) and Amazon Web Services (AWS)
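For context, a tiny sketch of working with the Huggingface Transformers library named in the skills above, using the high-level pipeline API for a sentiment task. The model (a library default that is downloaded on first use), the reviews, and the task choice are assumptions made for illustration only.

    # Hypothetical example: a sentiment pipeline with Hugging Face Transformers.
    # Uses the library's default English sentiment model; a real project would pin a model.
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")  # downloads a default model on first run
    reviews = [
        "The onboarding flow was smooth and fast.",
        "Support never answered my ticket.",
    ]
    for review, result in zip(reviews, classifier(reviews)):
        print(f"{result['label']:>8}  {result['score']:.3f}  {review}")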
Posted 1 day ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Company Description
We suggest you enter details here.

Role Description
This is a full-time, on-site role for an AI MLOps FastAPI Engineer located in Gurugram. The AI MLOps FastAPI Engineer will be responsible for developing and maintaining API services using FastAPI, implementing MLOps practices, managing machine learning models in production, and ensuring the scalability and stability of the AI solutions. The Engineer will also collaborate with data scientists, software engineers, and stakeholders to streamline the deployment process and optimize model performance. Key tasks include deploying models, monitoring, logging, and automating workflows to improve the efficiency and reliability of AI systems.

Qualifications
- Proficiency in FastAPI and RESTful API development
- Experience with MLOps practices, including CI/CD pipelines, containerization, and orchestration tools like Docker and Kubernetes
- Strong understanding of machine learning frameworks and libraries such as TensorFlow, PyTorch, or scikit-learn
- Familiarity with cloud platforms like AWS, Azure, or GCP
- Excellent problem-solving skills and the ability to troubleshoot issues in production environments
- Strong communication skills and the ability to work collaboratively in a team environment
- Bachelor's or Master’s degree in Computer Science, Engineering, or a related field
- Prior experience in AI or education in AI is a plus
- Willingness to learn and adapt to new technologies and methodologies
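As an illustration of the FastAPI-based model serving this role describes, here is a minimal inference-service sketch. The model file name, feature schema, and endpoints are hypothetical; a real service would match whatever model and contract the team actually ships.

    # Minimal FastAPI inference service sketch; model path and feature layout are hypothetical.
    import joblib
    import numpy as np
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI(title="model-service")
    model = joblib.load("model.joblib")  # e.g. a pre-trained scikit-learn estimator

    class Features(BaseModel):
        values: list[float]  # flat feature vector expected by the model

    @app.post("/predict")
    def predict(features: Features) -> dict:
        x = np.asarray(features.values).reshape(1, -1)
        return {"prediction": float(model.predict(x)[0])}

    @app.get("/health")
    def health() -> dict:
        return {"status": "ok"}

    # Run locally with:  uvicorn main:app --reload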
Posted 1 day ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Location: In-Person (sftwtrs.ai Lab) Experience Level: Early Career / 1–3 years About sftwtrs.ai sftwtrs.ai is a leading AI lab focused on security automation, adversarial machine learning, and scalable AI-driven solutions for enterprise clients. Under the guidance of our Principal Scientist, we combine cutting-edge research with production-grade development to deliver next-generation AI products in cybersecurity and related domains. Role Overview As a Research Engineer I , you will work closely with our Principal Scientist and Senior Research Engineers to ideate, prototype, and implement AI/ML models and pipelines. This role bridges research and software development: you’ll both explore novel algorithms (especially in adversarial ML and security automation) and translate successful prototypes into robust, maintainable code. This position is ideal for someone who is passionate about pushing the boundaries of AI research while also possessing strong software engineering skills. Key Responsibilities Research & Prototyping Dive into state-of-the-art AI/ML literature (particularly adversarial methods, anomaly detection, and automation in security contexts). Rapidly prototype novel model architectures, training schemes, and evaluation pipelines. Design experiments, run benchmarks, and analyze results to validate research hypotheses. Software Development & Integration Collaborate with DevOps and MLOps teams to containerize research prototypes (e.g., Docker, Kubernetes). Develop and maintain production-quality codebases in Python (TensorFlow, PyTorch, scikit-learn, etc.). Implement data pipelines for training and inference: data ingestion, preprocessing, feature extraction, and serving. Collaboration & Documentation Work closely with Principal Scientist and cross-functional stakeholders (DevOps, Security Analysts, QA) to align on research objectives and engineering requirements. Author clear, concise documentation: experiment summaries, model design notes, code review comments, and API specifications. Participate in regular code reviews, design discussions, and sprint planning sessions. Model Deployment & Monitoring Assist in deploying models to staging or production environments; integrate with internal tooling (e.g., MLflow, Kubeflow, or custom MLOps stack). Implement automated model-monitoring scripts to track performance drift, data quality, and security compliance metrics. Troubleshoot deployment issues, optimize inference pipelines for latency and throughput. Continuous Learning & Contribution Stay current with AI/ML trends—present findings to the team and propose opportunities for new research directions. Contribute to open-source libraries or internal frameworks as needed (e.g., adding new modules to our adversarial-ML toolkit). Mentor interns or junior engineers on machine learning best practices and coding standards. Qualifications Education: Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, Data Science, or a closely related field. Research Experience: 1–3 years of hands-on experience in AI/ML research or equivalent internships. Familiarity with adversarial machine learning concepts (evasion attacks, poisoning attacks, adversarial training). Exposure to security-related ML tasks (e.g., anomaly detection in logs, malware classification using neural networks) is a strong plus. Development Skills: Proficient in Python, with solid experience using at least one major deep-learning framework (TensorFlow 2.x, PyTorch). 
Demonstrated ability to write clean, modular, and well-documented code (PEP 8 compliant). Experience building data pipelines (using pandas, Apache Beam, or equivalent) and integrating with RESTful APIs. Software Engineering Practices: Familiarity with version control (Git), CI/CD pipelines, and containerization (Docker). Comfortable writing unit tests (pytest or unittest) and conducting code reviews. Understanding of cloud services (AWS, GCP, or Azure) for training and serving models. Analytical & Collaborative Skills: Strong problem-solving mindset, attention to detail, and ability to work under tight deadlines. Excellent written and verbal communication skills; able to present technical concepts clearly to both research and engineering audiences. Demonstrated ability to collaborate effectively in a small, agile team. Preferred Skills (Not Mandatory) Experience with MLOps tools (MLflow, Kubeflow, or TensorFlow Extended). Hands-on knowledge of graph databases (e.g., JanusGraph, Neo4j) or NLP techniques (transformer models, embeddings). Familiarity with security compliance standards (HIPAA, GDPR) and secure software development practices. Exposure to Rust or Go for high-performance inference code. Contributions to open-source AI or security automation projects. Why Join Us? Cutting-Edge Research & Production Impact: Work on adversarial ML and security–automation projects that go from concept to real-world deployment. Hands-On Mentorship: Collaborate directly with our Principal Scientist and Senior Engineers, learning best practices in both research methodology and production engineering. Innovative Environment: Join a lean, highly specialized team where your contributions are immediately visible and valued. Professional Growth: Access to conferences, lab resources, and continuous learning opportunities in AI, cybersecurity, and software development. Competitive Compensation & Benefits: Attractive salary, health insurance, and opportunities for performance-based bonuses. How to Apply Please send a résumé/CV, a brief cover letter outlining relevant AI/ML projects, and any GitHub or portfolio links to careers@sftwtrs.ai with the subject line “RE: Research Engineer I Application.” sftwtrs.ai is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
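As an illustration of the automated drift monitoring mentioned under model deployment above, a small sketch that compares a feature's recent production distribution against its training baseline with a two-sample Kolmogorov-Smirnov test. The synthetic data, single-feature scope, and alert threshold are assumptions made for the example.

    # Hypothetical drift check: compare a live feature distribution to its training baseline.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # feature values seen at training time
    live = rng.normal(loc=0.3, scale=1.1, size=1_000)      # recent production values (shifted)

    stat, p_value = ks_2samp(baseline, live)
    print(f"KS statistic = {stat:.3f}, p = {p_value:.4f}")

    if p_value < 0.01:
        print("Drift alert: live distribution differs from the training baseline.")
    else:
        print("No significant drift detected.")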
Posted 1 day ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential. Title And Summary Senior Data Scientist AI Garage is responsible for establishing Mastercard as an AI powerhouse. AI will be leveraged and implemented at scale within Mastercard providing a foundational, competitive advantage for the future. All internal processes, all products and services will be enabled by AI continuously advancing our value proposition, consumer experience, and efficiency. Opportunity Join Mastercard's AI Garage @ Gurgaon, a newly created strategic business unit executing on identified use cases for product optimization and operational efficiency securing Mastercard's competitive advantage through all things AI. The AI professional will be responsible for the creative application and execution of AI use cases, working collaboratively with other AI professionals and business stakeholders to effectively drive the AI mandate. Role Ensure all AI solution development is in line with industry standards for data management and privacy compliance including the collection, use, storage, access, retention, output, reporting, and quality of data at Mastercard Adopt a pragmatic approach to AI, capable of articulating complex technical requirements in a manner this is simple and relevant to stakeholder use cases Gather relevant information to define the business problem interfacing with global stakeholders Creative thinker capable of linking AI methodologies to identified business challenges Identify commonalities amongst use cases enabling a microservice approach to scaling AI at Mastercard, building reusable, multi-purpose models Develop AI/ML solutions/applications leveraging the latest industry and academic advancements Leverage open and closed source technologies to solve business problems Ability to work cross-functionally, and across borders drawing on a broader team of colleagues to effectively execute the AI agenda Partner with technical teams to implement developed solutions/applications in production environment Support a learning culture continuously advancing AI capabilities Experience All About You 3+ years of experience in the Data Sciences field with a focus on AI strategy and execution and developing solutions from scratch Demonstrated passion for AI competing in sponsored challenges such as Kaggle Previous experience with or exposure to: Deep Learning algorithm techniques, open source tools and technologies, statistical tools, and programming environments such as Python, R, and SQL Big Data platforms such as Hadoop, Hive, Spark, GPU Clusters for deep learning Classical Machine Learning Algorithms like Logistic Regression, Decision trees, Clustering (K-means, Hierarchical and Self-organizing Maps), TSNE, PCA, Bayesian models, Time Series ARIMA/ARMA, Recommender Systems - Collaborative Filtering, FPMC, FISM, Fossil Deep Learning algorithm techniques like Random Forest, GBM, KNN, SVM, Bayesian, Text Mining techniques, Multilayer Perceptron, Neural Networks – Feedforward, CNN, LSTM’s GRU’s is a plus. 
Optimization techniques – activity regularization (L1 and L2), Adam, Adagrad, Adadelta concepts; cost functions in neural nets – contrastive loss, hinge loss, binary cross-entropy, categorical cross-entropy; developed applications in KRR, NLP, speech and image processing
Deep Learning frameworks for production systems like TensorFlow, Keras (for RPD and neural net architecture evaluation), PyTorch, XGBoost, Caffe, and Theano is a plus
Exposure or experience using collaboration tools such as: Confluence (documentation), Bitbucket/Stash (code sharing), shared folders (file sharing), ALM (project management)
Knowledge of the payments industry is a plus
Experience with SAFe (Scaled Agile Framework) process is a plus

Effectiveness
Effective at managing and validating assumptions with key stakeholders in compressed timeframes, without hampering development momentum
Capable of navigating a complex organization in a relentless pursuit of answers and clarity
Enthusiasm for Data Sciences, embracing the creative application of AI techniques to improve an organization's effectiveness
Ability to understand technical system architecture and overarching function along with interdependency elements, as well as anticipate challenges for immediate remediation
Ability to unpack complex problems into addressable segments and evaluate the AI methods most applicable to addressing each segment
Incredible attention to detail and focus, instilling confidence without qualification in developed solutions

Core Capabilities
Strong written and oral communication skills
Strong project management skills
Concentration in Computer Science
Some international travel required

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard’s security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.
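For illustration, a small sketch of the K-means clustering named among the classical machine learning algorithms in this posting, applied to synthetic customer behaviour features. The features, data, and choice of three clusters are assumptions made for the example, not anything from the listing.

    # Hypothetical example: K-means customer segmentation on two made-up behavioural features.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(42)
    # Columns: monthly spend, transactions per month (synthetic data).
    X = np.column_stack([
        rng.gamma(shape=2.0, scale=150.0, size=500),
        rng.poisson(lam=8, size=500).astype(float),
    ])

    X_scaled = StandardScaler().fit_transform(X)          # scale so both features count equally
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_scaled)

    for label in range(3):
        segment = X[kmeans.labels_ == label]
        print(f"segment {label}: n={len(segment)}, "
              f"avg spend={segment[:, 0].mean():.0f}, avg txns={segment[:, 1].mean():.1f}")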
Posted 1 day ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description Who We Are At Goldman Sachs, we connect people, capital and ideas to help solve problems for our clients. We are a leading global financial services firm providing investment banking, securities and investment management services to a substantial and diversified client base that includes corporations, financial institutions, governments and individuals. Job Description We are seeking a highly skilled GenAI Developer to join our dynamic, global team. The ideal candidate will have a strong background in applied generative AI. This role will involve developing and implementing AI solutions, working with various technologies, and collaborating with cross-functional teams to drive innovation. The GenAI Developer will play a crucial role in advancing our GenAI capabilities and contributing to the success of our Wealth Management division. Key Responsibilities Work with stakeholders to understand requirements and deliver AI solutions across several domains in Wealth Management. Stay updated with the latest advancements in AI and machine learning technologies. Conduct research and experiments to improve AI capabilities within the division. Required Competencies Retrieval-Augmented Generation (RAG): Experience in developing and implementing RAG models to enhance information retrieval and generation tasks. Vector Stores: Knowledge of Vector Stores for efficient data storage and retrieval. Prompt Engineering: Skills in designing and optimizing prompts for AI models to improve accuracy and relevance. Large Language Model APIs (LLM APIs): Understanding of different LLMs, both commercial and open source, and their capabilities (e.g., OpenAI, Gemini, Llama, Claude). Programming Languages: Proficiency in Python, Java, or other relevant programming languages. Data Analysis: Strong analytical skills and experience with data analysis tools. Problem-Solving: Excellent problem-solving abilities and attention to detail. Communication: Strong verbal and written communication skills. Preferred Competencies Graph RAG: Proficiency in using Graph RAG for complex data relationships and insights. Knowledge Graphs: Expertise in building and managing Knowledge Graphs to represent and query complex data structures. Machine Learning Frameworks: Experience with TensorFlow, PyTorch, or similar frameworks. Experience with cloud platforms such as AWS, Google Cloud, or Azure. Familiarity with natural language processing (NLP) and computer vision technologies. Previous experience in a similar role or industry. Master’s or Ph.D. in Computer Science, Data Science, or a related field. Goldman Sachs Engineering Culture At Goldman Sachs, our Engineers don’t just make things – we make things possible. Change the world by connecting people and capital with ideas. Solve the most challenging and pressing engineering problems for our clients. Join our engineering teams that build massively scalable software and systems, architect low latency infrastructure solutions, proactively guard against cyber threats, and leverage machine learning alongside financial engineering to continuously turn data into action. Create new businesses, transform finance, and explore a world of opportunity at the speed of markets. Engineering is at the critical center of our business, and our dynamic environment requires innovative strategic thinking and immediate, real solutions. Want to push the limit of digital possibilities? Start here! © The Goldman Sachs Group, Inc., 2025. All rights reserved. 
Goldman Sachs is an equal employment/affirmative action employer Female/Minority/Disability/Veteran/Sexual Orientation/Gender Identity.
Posted 1 day ago
0.0 - 1.0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
An extraordinarily talented group of individuals work together every day to drive TNS' success, from both professional and personal perspectives. Come join the excellence! Overview Transaction Network Services (TNS), a Koch Industries company is seeking a talented and motivated data scientist to work within our AI Labs, India. As a Data Scientist, you will play a crucial role in analyzing complex datasets, building statistical and deep learning models, and implementing machine learning solutions. You will work closely with cross-functional teams to extract insights from data and contribute to data-driven decision-making processes. Responsibilities Primary Responsibilities: Utilize expertise in statistical analysis, machine learning, and deep learning techniques to solve complex international business problems. Develop, train, and deploy predictive models using machine learning frameworks and tools. Perform data preprocessing, feature engineering, and exploratory data analysis to identify patterns and trends. Collaborate with domain experts and stakeholders to understand business requirements and translate them into analytical solutions. Apply cloud engineering principles to design and deploy scalable and efficient AI solutions in the cloud environment. Collaborate with software engineers to integrate machine learning models into production systems. Implement MLOps practices to automate model training, deployment, and monitoring processes. Communicate complex findings and insights to both technical and non-technical stakeholders through clear and concise reports and visualizations. Qualifications Education/Experience: Advanced degree in computer science, machine learning, statistical methods, or related field. 0-1 years of industry experience in a data science role. Proficiency in Python programming language and experience with popular data science and machine learning frameworks (e.g., TensorFlow, PyTorch, scikit-learn). Knowledge of MLOps practices and experience with tools such as Docker, AWS EMR, or AWS Sagemaker. Understanding of data preprocessing techniques, feature engineering, and exploratory data analysis. Demonstrated ability to work with large data sources with a focus on data privacy and security. Solid foundation in software engineering principles. Background in software development process and tools with a focus on Jira. Experienced in working in a geographically distributed team environment. Experience working for leadership located in other countries and cultures as well as large time zone shifts. Communications: Excellent communication & presentation skills. Strong teamwork, communication skills, passion, creativity, productivity & learning agility. Strong written and verbal communications skills working with internationally based colleagues. Ability to articulate and interpret analytical results from developed programs. Qualifications 0 - 1 years of experience If you are passionate about technology, love personal growth and opportunity, come see what TNS is all about! TNS is an equal opportunity employer. TNS evaluates qualified applicants without regard to race, color, religion, gender, national origin, age, sexual orientation, gender identity or expression, protected veteran status, disability/handicap status or any other legally protected characteristic.
Posted 1 day ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Title: AI Engineer · Level: Mid-Level · Responsibilities: o Design, develop, and implement AI/ML models and algorithms. o Focus on building Proof of Concept (POC) applications to demonstrate the feasibility and value of AI solutions. o Write clean, efficient, and well-documented code. o Collaborate with data engineers to ensure data quality and availability for model training and evaluation. o Work closely with senior team members to understand project requirements and contribute to technical solutions. o Troubleshoot and debug AI/ML models and applications. o Stay up-to-date with the latest advancements in AI/ML. o Utilize machine learning frameworks (e.g., TensorFlow, PyTorch, Scikit-learn) to develop and deploy models. o Develop and deploy AI solutions on Google Cloud Platform (GCP). o Implement data preprocessing and feature engineering techniques using libraries like Pandas and NumPy. o Utilize Vertex AI for model training, deployment, and management. o Integrate and leverage Google Gemini for specific AI functionalities. · Qualifications: o Bachelor's degree in Computer Science, Artificial Intelligence, or a related field. o 3+ years of experience in developing and implementing AI/ML models. o Strong programming skills in Python. o Experience with machine learning frameworks such as TensorFlow, PyTorch, or Scikit-learn. o Good understanding of machine learning concepts and techniques. o Ability to work independently and as part of a team. o Strong problem-solving skills. o Good communication skills. o Experience with Google Cloud Platform (GCP) is preferred. o Familiarity with Vertex AI is a plus.
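For context, a minimal sketch of the data preprocessing and feature engineering with Pandas and NumPy that this posting lists. The column names, imputation rule, and derived features are assumptions made purely for illustration.

    # Hypothetical preprocessing and feature engineering with Pandas/NumPy.
    import numpy as np
    import pandas as pd

    raw = pd.DataFrame({
        "order_ts": ["2024-05-01 10:15", "2024-05-02 18:40", None],
        "amount": [120.0, np.nan, 87.5],
        "channel": ["web", "store", "web"],
    })

    df = raw.copy()
    df["order_ts"] = pd.to_datetime(df["order_ts"])
    df["amount"] = df["amount"].fillna(df["amount"].median())  # simple median imputation
    df["order_hour"] = df["order_ts"].dt.hour                  # datetime-derived feature
    df["log_amount"] = np.log1p(df["amount"])                  # tame a skewed distribution
    df = pd.get_dummies(df, columns=["channel"], prefix="ch")  # one-hot encode the categorical
    print(df.drop(columns=["order_ts"]))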
Posted 1 day ago
4.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Company Description At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future. Job Description About the Role We're seeking mid and senior level DevOps Engineers to join the Nielsen Enterprise IT team to help develop and support our Generative AI solutions. This is an exciting opportunity for anyone interested in joining a highly skilled and dynamic infrastructure development team with a mission to develop, deploy, scale and optimize cloud systems focused around AI for the thousands of software engineers who work on Nielsen's exciting array of media products and services. The ideal candidate is a Tech Generalist who is excited about emerging technologies, like AI, and who is always eager to learn new things. You'll work in our modern, newly designed office spaces in either Bangalore or Mumbai, collaborating with cross-functional teams on internal Nielsen projects that are transforming how we operate. Qualifications Responsibilities Development of Generative AI Solutions and Automation for Nielsen Enterprise IT Hosting of Open Source Software Solutions using AWS and the LGTM stack Design and implement CI/CD pipelines for internal AI/ML models and applications Develop Python code to integrate AI libraries into Nielsen's systems Build and maintain infrastructure as code using Terraform for AI workloads Create monitoring, logging, and alerting systems for Nielsen's AI applications Optimize infrastructure for handling large-scale data processing and model processing Implement security best practices for Nielsen's internal AI systems and data Provide periodic L1 Support for Cloud Operations and Architecture guidance Participate in periodic On-Call shifts during working hours Required Bachelor's in Computer Sciences Engineering or similar discipline Great communication skills in English 4+ years of professional experience across development and operations AWS expertise (2+ years) Strong understanding of networking fundamentals Experience in cloud solutions design and development Strong Python programming skills (2+ years) Experience with Infrastructure as Code tools like Terraform or CloudFormation (1+ years) Experience with Git and CI/CD solutions Enthusiasm for emerging technologies, particularly Generative AI Preferred Master's in Cloud Technologies or similar field Previous experience working with Generative AI projects Knowledge of MLOps principles and tools Experience with AI frameworks (PyTorch, TensorFlow, Hugging Face) Multi-cloud experience Experience with API development and deployment Knowledge of database technologies (SQL, NoSQL) Nielsen Internal Projects You May Work On Internal chatbots and knowledge bases leveraging Generative AI GenAI for Enterprise, Finance and HR solutions Cloud infrastructure to support Nielsen's large-scale data processing needs DevOps automation across Nielsen's global development teams Optimization of deployment processes for Nielsen's media measurement products Implementation 
of AI capabilities within Nielsen's existing technology stack Additional Information Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.
Posted 1 day ago
0.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Associate Job Description & Summary At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Job Description & Summary: A career within…. A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Responsibilities 0 -2 years of experience as AI/ML engineer or similar role. Strong knowledge of machine learning frameworks (e.g., TensorFlow, PyTorch, Scikit-learn). Hands-on experience with model development and deployment processes. Proficiency in programming languages such as Python. Experience with data preprocessing, feature engineering, and model evaluation techniques. Familiarity with cloud platforms (e.g., AWS) and containerization (e.g., Docker, Kubernetes). Familiarity with version control systems (e.g., GitHub). Proficiency in data manipulation and analysis using libraries such as NumPy and Pandas. Good to have knowledge of deep learning, ML Ops: Kubeflow, MLFlow, Nextflow. 
Knowledge of text analytics, NLP, and GenAI

Mandatory Skill Sets: MLOps, AI/ML
Preferred Skill Sets: MLOps, AI/ML
Years of Experience Required: 0 - 2
Education Qualification: B.Tech / M.Tech / MBA / MCA
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Master of Engineering, Bachelor of Engineering, Master of Business Administration
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Data Science
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, Data Quality, Data Strategy {+ 22 more}
Desired Languages (If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
Posted 1 day ago
5.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Machine Learning Engineer
Experience Level: 5 - 8 Years
Location: Chennai

About the Role:
We are seeking a highly skilled and experienced Machine Learning Engineer to join our innovative team within the U.S. healthcare payer sector. You will be instrumental in designing, developing, and deploying cutting-edge machine learning models that address critical business challenges, improve operational efficiency, and enhance member outcomes. This role demands a strong understanding of ML principles, robust software engineering practices, and familiarity with the unique complexities of healthcare data.

Responsibilities:
- Design, develop, and implement scalable machine learning models and algorithms to solve complex problems related to claims processing, fraud detection, risk stratification, member engagement, and predictive analytics within the payer landscape.
- Collaborate closely with data scientists, product managers, and other engineering teams to translate business requirements into technical specifications and deliver end-to-end ML solutions.
- Develop and optimize ML model training pipelines, ensuring data quality, feature engineering, and efficient model iteration.
- Conduct rigorous model evaluation, hyperparameter tuning, and performance optimization using statistical analysis and best practices.
- Integrate ML models into existing applications and systems, ensuring seamless deployment and operation.
- Write clean, well-documented, and production-ready code, adhering to high software engineering standards.
- Participate in code reviews, contribute to architectural discussions, and mentor junior engineers.
- Stay abreast of the latest advancements in machine learning, healthcare technology, and industry best practices, actively proposing innovative solutions.
- Ensure all ML solutions comply with relevant healthcare regulations and data privacy standards (e.g., HIPAA).

Required Technical Skills:
- Programming Language: Expert proficiency in Python.
- Machine Learning Libraries: Strong experience with PyTorch and scikit-learn.
- Version Control: Proficient with Git and GitHub.
- Testing: Solid understanding and experience with the Python unittest framework and Pytest for unit, integration, and API testing.
- Deployment: Hands-on experience with Dockerized deployment on AWS or Azure cloud platforms.
- CI/CD: Experience with CI/CD pipelines using AWS CodePipeline or similar alternatives (e.g., Jenkins, GitLab CI).
- Cloud Platforms: Experience with AWS or Azure services relevant to ML workloads (e.g., SageMaker, EC2, S3, Azure ML, Azure Functions).

If interested, please send your resume to vidya@neoware.ai
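As an illustration of the Pytest-based testing this posting requires, here is a small sketch of unit tests around an inference helper. risk_score() is a hypothetical stand-in for whatever model wrapper would actually be under test; the scoring logic and test cases are invented for the example.

    # Hypothetical pytest example for an ML inference helper; risk_score() is a stand-in.
    import pytest

    def risk_score(features: dict) -> float:
        """Toy scoring function standing in for a real model wrapper."""
        if features.get("age", 0) < 0:
            raise ValueError("age must be non-negative")
        score = 0.02 * features.get("age", 0) + 0.5 * features.get("chronic_conditions", 0)
        return min(score, 1.0)

    def test_score_is_bounded():
        assert 0.0 <= risk_score({"age": 95, "chronic_conditions": 4}) <= 1.0

    def test_invalid_age_raises():
        with pytest.raises(ValueError):
            risk_score({"age": -1})

    # Run with:  pytest test_risk_score.py -q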
Posted 1 day ago
5.0 years
0 Lacs
Chandigarh
On-site
bebo Technologies is a leading, complete software solution provider. bebo stands for 'be extension be offshore'. We are a business partner of QASource, Inc., USA (www.QASource.com). We offer outstanding services in the areas of software development, sustenance engineering, quality assurance, and product support. bebo is dedicated to providing high-caliber offshore software services and solutions. Our goal is to 'Deliver in time-every time'. For more details visit our website: www.bebotechnologies.com. Take a 360 tour of our bebo premises via the link below: https://www.youtube.com/watch?v=S1Bgm07dPmM

Key Skill Set Required:
5–7 years of software development experience, with 3+ years focused on building ML systems.
Advanced programming skills in Python; working knowledge of Java, Scala, or C++ for backend services.
Proficiency with ML frameworks: TensorFlow, PyTorch, Scikit-learn.
Experience deploying ML solutions in cloud environments (AWS, GCP, Azure) using tools like SageMaker, Vertex AI, or Databricks.
Strong grasp of distributed systems, CI/CD for ML, containerization (Docker/K8s), and serving frameworks.
Deep understanding of algorithms, system design, and data pipelines.
Experience with MLOps platforms (MLflow, Kubeflow, TFX) and feature stores.
Familiarity with LLMs, RAG architectures, or multimodal AI.
Experience with real-time data and streaming systems (Kafka, Flink, Spark Streaming).
Exposure to governance/compliance in regulated industries (e.g., healthcare, finance).
Published research, patents, or contributions to open-source ML tools are a strong plus.
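For the MLOps platforms mentioned above, a minimal MLflow experiment-tracking sketch might look like the following; the experiment name, parameters, and toy dataset are illustrative assumptions.

```python
# Minimal MLflow tracking sketch: logs parameters, a metric, and the fitted model
# to the local ./mlruns store (experiment and parameter names are illustrative).
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("demo-regression")
with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 6}
    model = RandomForestRegressor(**params, random_state=0).fit(X_train, y_train)
    mae = mean_absolute_error(y_test, model.predict(X_test))
    mlflow.log_params(params)
    mlflow.log_metric("mae", mae)
    mlflow.sklearn.log_model(model, "model")
```

Runs can then be compared in the MLflow UI (`mlflow ui`) when iterating on features or hyperparameters.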
Posted 1 day ago
5.0 years
3 - 8 Lacs
Thiruvananthapuram
On-site
Job Requirements
Design and develop AI/ML-based applications with a focus on deployment on embedded hardware platforms (e.g., Renesas RZ/V2H, NVIDIA Jetson, STM32, etc.)
Port and optimize AI models for real-time performance on resource-constrained embedded systems
Perform model quantization, pruning, and conversion (e.g., ONNX, TensorRT, TVM, TFLite, DRP-AI) for deployment
End-to-end AI model lifecycle development including data preparation, training, validation, and inference optimization
Customize and adapt AI network architectures for specific edge AI use cases (e.g., object detection, classification, audio detection)
Data Preparation & Preprocessing: Collect, organize, and preprocess audio/image datasets.

Work Experience
Minimum 5 years of experience in AI/ML application development.
Strong Python programming skills, including AI frameworks such as PyTorch, TensorFlow, and Keras.
Solid experience in developing deep learning-based solutions for computer vision, imaging, and audio.
Deep understanding of DL architectures such as CNNs and FCNs and their application to visual tasks.
Experience in model optimization techniques such as quantization, pruning, layer fusion, and INT8 calibration for edge inference.
Hands-on experience in deploying AI models on embedded platforms.
Proficiency in tools such as OpenCV, ONNX, TVM, TFLite, or custom inference engines.
Understanding of system constraints like memory, compute, and power on edge devices.
Exposure to real-time audio processing, video processing, and robotics.
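As one concrete example of the quantization and conversion step mentioned in the requirements, post-training INT8 conversion with the TFLite converter could be sketched as below; the saved-model path and the random calibration data are placeholders, and TensorRT, TVM, or DRP-AI flows would use their own tooling.

```python
# Post-training INT8 quantization sketch with the TFLite converter.
# "saved_model_dir" and the synthetic calibration generator are placeholders.
import numpy as np
import tensorflow as tf


def representative_data_gen():
    # Calibration samples shaped like the model input (here 224x224 RGB).
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]


converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

In practice the calibration generator would draw from a held-out slice of real sensor data so that activation ranges match deployment conditions.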
Posted 1 day ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Company
Founded in 2011, ReNew is one of the largest renewable energy companies globally, with a leadership position in India. Listed on Nasdaq under the ticker RNW, ReNew develops, builds, owns, and operates utility-scale wind energy projects, utility-scale solar energy projects, utility-scale firm power projects, and distributed solar energy projects. In addition to being a major independent power producer in India, ReNew is evolving to become an end-to-end decarbonization partner providing solutions in a just and inclusive manner in the areas of clean energy, green hydrogen, value-added energy offerings through digitalisation, storage, and carbon markets that are increasingly integral to addressing climate change. With a total capacity of more than 13.4 GW (including projects in the pipeline), ReNew's solar and wind energy projects are spread across 150+ sites, with a presence spanning 18 states in India, contributing 1.9% of India's power capacity. Consequently, this has helped avoid 0.5% of India's total carbon emissions and 1.1% of India's total power sector emissions. In more than 10 years of operation, ReNew has generated almost 1.3 lakh jobs, directly and indirectly. ReNew has achieved market leadership in the Indian renewable energy industry against the backdrop of the Government of India's policies to promote growth of this sector. ReNew stands committed to providing clean, safe, affordable, and sustainable energy for all and has been at the forefront of leading climate action in India.

Job Description
As a Data Scientist, you will play a key role in designing, building, and deploying scalable machine learning solutions, with a focus on real-world applications including Generative AI, optimization, forecasting, and operational analytics. You will work closely with data scientists, engineers, and business stakeholders to take AI models from ideation to production, ensuring high-quality delivery and integration within ReNew's technology ecosystem.

Roles and Responsibilities
Build and deploy production-grade ML pipelines for varied use cases across operations, manufacturing, supply chain, and more
Work hands-on in designing, training, and fine-tuning models across traditional ML, deep learning, and GenAI (LLMs, diffusion models, etc.)
Collaborate with data scientists to transform exploratory notebooks into scalable, maintainable, and monitored deployments
Implement CI/CD pipelines, version control, and experiment tracking using tools like MLflow, DVC, or similar
Perform shadow deployment and A/B testing of production models
Partner with data engineers to build data pipelines that support real-time or batch model inference
Ensure high availability, performance, and observability of deployed ML solutions using MLOps best practices
Conduct code reviews, performance tuning, and contribute to ML infrastructure improvements
Support the end-to-end lifecycle of ML products
Contribute to knowledge sharing, reusable component development, and internal upskilling initiatives

Eligibility Criteria
Bachelor's in Computer Science, Engineering, Data Science, or related field.
Master's degree preferred
4–6 years of experience in developing and deploying machine learning models, with significant exposure to MLOps practices
Experience in implementing and productionizing Generative AI applications using LLMs (e.g., OpenAI, HuggingFace, LangChain, RAG architectures)
Strong programming skills in Python; familiarity with ML libraries such as scikit-learn, TensorFlow, PyTorch
Hands-on experience with tools like MLflow, Docker, Kubernetes, FastAPI/Flask, Airflow, Git, and cloud platforms (Azure/AWS)
Solid understanding of software engineering fundamentals and DevOps/MLOps workflows
Exposure to at least 2-3 industry domains (energy, manufacturing, finance, etc.) preferred
Excellent problem-solving skills, ownership mindset, and ability to work in agile cross-functional teams

Main Interfaces
The role will involve close collaboration with data scientists, data engineers, business stakeholders, platform teams, and solution architects.
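To illustrate the shadow-deployment responsibility above, here is a hedged FastAPI sketch that serves a primary model while logging a candidate model's predictions; the model paths, the Features schema, and the logging format are assumptions for the example, not ReNew's actual service.

```python
# Sketch: serve the primary model, run a "shadow" candidate on the same request,
# and log the difference without affecting the response. Paths are placeholders.
import logging

import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

logger = logging.getLogger("shadow")
app = FastAPI()

primary = joblib.load("models/primary.joblib")   # serving model
shadow = joblib.load("models/candidate.joblib")  # candidate under evaluation


class Features(BaseModel):
    values: list[float]


@app.post("/predict")
def predict(payload: Features):
    x = np.array(payload.values).reshape(1, -1)
    primary_pred = float(primary.predict(x)[0])
    try:
        shadow_pred = float(shadow.predict(x)[0])
        logger.info("shadow_diff=%.4f", shadow_pred - primary_pred)
    except Exception:  # never let the shadow path break the response
        logger.exception("shadow model failed")
    return {"prediction": primary_pred}
```

Aggregating the logged differences over time is one simple way to decide whether the candidate is safe to promote.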
Posted 1 day ago
5.0 years
9 - 13 Lacs
Gurgaon
On-site
Job Title: Senior Software Engineer – AI/ML (Tech Lead)
Experience: 5+ Years
Location: Gurugram
Notice Period: Immediate Joiners Only

Roles & Responsibilities
Design, develop, and deploy robust, scalable AI/ML-driven products and features across diverse business verticals.
Provide technical leadership and mentorship to a team of engineers, ensuring delivery excellence and skill development.
Drive end-to-end execution of projects, from architecture and coding to testing, deployment, and post-release support.
Collaborate cross-functionally with Product, Data, and Design teams to align technology efforts with product strategy.
Build and maintain ML infrastructure and model pipelines, ensuring performance, versioning, and reproducibility.
Lead and manage engineering operations, including monitoring, incident response, logging, performance tuning, and uptime SLAs.
Take ownership of CI/CD pipelines, DevOps processes, and release cycles to support rapid, reliable deployments.
Conduct code reviews, enforce engineering best practices, and manage team deliverables and timelines.
Proactively identify bottlenecks or gaps in engineering or operations and implement process improvements.
Stay current with trends in AI/ML, cloud technologies, and MLOps to continuously elevate team capabilities and product quality.

Tools & Platforms
Languages & Frameworks: Python, FastAPI, PyTorch, TensorFlow, Hugging Face Transformers
MLOps & Infrastructure: MLflow, DVC, Airflow, Docker, Kubernetes, Terraform, AWS/GCP
CI/CD & DevOps: GitHub, GitLab CI/CD, Jenkins
Monitoring & Logging: Prometheus, Grafana, ELK Stack (Elasticsearch, Logstash, Kibana), Sentry
Project & Team Management: Jira, Notion, Confluence
Analytics: Mixpanel, Google Analytics
Collaboration & Prototyping: Slack, Figma, Miro

Job Type: Full-time
Pay: ₹900,000.00 - ₹1,300,000.00 per year

Application Question(s):
How many years of experience do you have in developing AI/ML-based tools?
How many years of experience do you have in developing AI/ML projects?
How many years of experience do you have in handling a team?
Current CTC?
Expected CTC?
In how many days can you join us if shortlisted?
Current location?
Are you OK to work from the office (Gurugram, Sector 54)?
Rate your English communication skills out of 10 (1 is lowest and 10 is highest).
Please mention all the tech skills that make you a fit for this role.
Have you gone through the JD, and are you OK to perform all roles and responsibilities?

Work Location: In person
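As a small illustration of the Prometheus/Grafana monitoring stack listed above, the sketch below exposes prediction-count and latency metrics with prometheus_client; the metric names and the stub predict function are assumptions for the example.

```python
# Sketch: expose basic ML-serving metrics for Prometheus to scrape.
# The predict() stub stands in for a real model call.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

PREDICTIONS = Counter("predictions_total", "Number of predictions served")
LATENCY = Histogram("prediction_latency_seconds", "Prediction latency in seconds")


def predict(features):
    # Stand-in for a real model call.
    time.sleep(random.uniform(0.01, 0.05))
    return sum(features)


if __name__ == "__main__":
    start_http_server(8000)  # metrics available at http://localhost:8000/metrics
    while True:
        with LATENCY.time():
            predict([random.random() for _ in range(10)])
        PREDICTIONS.inc()
```

A Grafana dashboard can then plot request rate and latency quantiles straight from these two series.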
Posted 1 day ago
4.0 years
0 Lacs
South Delhi, Delhi, India
On-site
One of our clients is seeking a dynamic AI Developer who will be responsible for designing, developing, and deploying AI models and systems. This is a critical role that demands creativity, technical excellence, and a passion for real-world impact.

Key Responsibilities:
• Develop, train, and deploy AI/ML models for user profiling, career path prediction, skill gap analysis, and recommendations
• Work on natural language processing (NLP) modules for personalized report generation
• Integrate AI algorithms with different data sets
• Collaborate with frontend and backend teams to ensure seamless model integration
• Optimize models for performance, scalability, and security
• Continuously improve the models based on user feedback and evolving data
• Research and implement cutting-edge AI techniques relevant to personal growth, career development, and behavioral analytics
• Maintain clear documentation for models, datasets, and APIs

Qualification & Experience:
• 4+ years of experience in AI/ML development (startup experience is a big plus)
• Strong proficiency in Python and ML libraries like TensorFlow, PyTorch, Scikit-learn, Hugging Face, etc.
• Experience with NLP (spaCy, NLTK, Transformers)
• Knowledge of recommender systems, predictive analytics, and classification models
• Good understanding of database systems and APIs (REST, GraphQL)
• Ability to work with unstructured data (text, survey inputs, astrological charts)
• Familiarity with MLOps, model deployment, and cloud platforms (AWS/GCP/Azure)

Personal Attributes:
• Highly self-driven, proactive, and solution-oriented
• Passionate about personal development, career transformation, and making a meaningful impact
• Comfortable working in an early-stage startup, where responsibilities are fluid and opportunities are immense

What We Offer:
• A chance to be an early team member in a disruptive, high-impact startup
• Equity options and performance-linked bonuses
• Flexible work environment and hours
• Direct mentorship and opportunities to lead AI initiatives
• A platform to innovate, experiment, and leave a significant legacy

Important: The candidate has to be from Delhi/NCR
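A toy sketch of the recommender theme in this role, using TF-IDF and cosine similarity to rank career paths against a user profile; the career-path descriptions and profile text are made-up examples, not the client's actual data or method.

```python
# Content-based matching sketch: rank a few career paths against a user profile.
# All text below is fabricated for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

career_paths = {
    "Data Analyst": "sql excel dashboards statistics reporting",
    "ML Engineer": "python machine learning deployment docker apis",
    "Product Manager": "roadmap stakeholder communication analytics strategy",
}
user_profile = "python statistics sql machine learning"

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(career_paths.values()) + [user_profile])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

for path, score in sorted(zip(career_paths, scores), key=lambda p: -p[1]):
    print(f"{path}: {score:.2f}")
```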
Posted 1 day ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
YOUR ROLE
Data is the foundation of our innovation. We are seeking a Manager, Data Science with expertise in NLP and Generative AI to lead the development of cutting-edge AI-driven solutions in healthcare. This role requires a deep understanding of healthcare data and the ability to design and implement advanced language models that extract insights, automate workflows, and enhance clinical decision-making. We're looking for a visionary leader who can define and build the next generation of AI-driven tools, leveraging LLMs, deep learning, and predictive analytics to personalize care based on patients' clinical and behavioral history. If you're passionate about pushing the boundaries of AI in healthcare, we'd love to hear from you!

THE THINGS YOU'LL BE DOING
▶ Team Leadership & Development: Build, mentor, and manage a team of data scientists and machine learning engineers. Foster a culture of collaboration, innovation, and technical excellence.
▶ Roadmap Execution: Define and execute on the quarterly AI/ML roadmap, setting clear goals, priorities, and deliverables for the team.
▶ Work with business leaders and customers to understand their pain points and build large-scale solutions for them.
▶ Define the technical architecture to productize Innovaccer's machine-learning algorithms and take them to market through partnerships with different organizations.
▶ Work with our data platform and applications teams to help them successfully integrate data science capabilities or algorithms into their products/workflows.
▶ Project & Stakeholder Management: Work closely with cross-functional teams, including product managers, engineers, and business leaders, to align AI/ML initiatives with company objectives.

WHAT LANDS YOU THIS ROLE
▶ Master's in Computer Science, Computer Engineering or other relevant fields (PhD preferred)
▶ 8+ years of experience in Data Science (healthcare experience will be a plus)
▶ Strong experience with deep learning techniques to build NLP/computer vision models as well as state-of-the-art GenAI pipelines. Demonstrable experience deploying deep learning models in production at scale with iterative improvements; this requires hands-on expertise with at least one deep learning framework such as PyTorch or TensorFlow.
▶ Strong hands-on experience in building GenAI applications and LLM-based workflows along with optimization techniques; knowledge of implementing agentic workflows is a plus.
▶ A keen interest in research and staying updated with key advancements in AI and ML in the industry. Patents/publications in any area of AI/ML are a great add-on.
▶ Hands-on experience with at least one ML platform among Databricks, Azure ML, or SageMaker
▶ Strong written and spoken communication skills

WHAT DO WE OFFER?
▶ An ocean full of opportunities; learn, experiment and implement
▶ We foster people who are willing to try new things every day and are never frightened to fail
▶ Exceptional growth opportunities
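As a small illustration of NLP-driven information extraction in this space, the sketch below runs a Hugging Face NER pipeline over a free-text note; it uses the library's default general-domain model purely for demonstration, whereas a real healthcare workflow would substitute a clinically trained model and de-identified data.

```python
# Entity-extraction sketch with the transformers pipeline API.
# The default general-domain NER model is a stand-in; the note text is made up.
from transformers import pipeline

ner = pipeline("ner", aggregation_strategy="simple")
note = "Patient reports chest pain; follow-up scheduled at the Chicago clinic."
for entity in ner(note):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```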
Posted 1 day ago
2.0 years
5 - 10 Lacs
Mohali
On-site
PROFILE: Python Django Developer
EXPERIENCE: 2-5 Years
LOCATION: MOHALI (WFO only)
LOCAL candidates are preferred | Immediate joiners

ChicMic Studios is looking for a highly skilled and experienced Python Django Developer to join our dynamic team. The ideal candidate will have a robust background in developing web applications using Django and Flask, with expertise in deploying and managing applications on AWS. Proficiency in Django Rest Framework (DRF), a solid understanding of machine learning concepts, and hands-on experience with tools like PyTorch, TensorFlow, and transformer architectures are essential.

Key Responsibilities
● Develop and maintain web applications using Django and Flask frameworks.
● Design and implement RESTful APIs using Django Rest Framework (DRF).
● Deploy, manage, and optimize applications on AWS services, including EC2, S3, RDS, Lambda, and CloudFormation.
● Build and integrate APIs for AI/ML models into existing systems.
● Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn.
● Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases.
● Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization.
● Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker.
● Ensure the scalability, performance, and reliability of applications and deployed models.
● Collaborate with cross-functional teams to analyze requirements and deliver effective technical solutions.
● Write clean, maintainable, and efficient code following best practices.
● Conduct code reviews and provide constructive feedback to peers.
● Stay up-to-date with the latest industry trends and technologies, particularly in AI/ML.

Required Skills and Qualifications
● B.Tech/MCA
● 3+ years of professional experience as a Python Developer.
● Proficient in Python with a strong understanding of its ecosystem.
● Extensive experience with Django and Flask frameworks.
● Hands-on experience with AWS services for application deployment and management.
● Strong knowledge of Django Rest Framework (DRF) for building APIs.
● Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn.
● Experience with transformer architectures for NLP and advanced AI solutions.
● Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
● Familiarity with MLOps practices for managing the machine learning lifecycle.
● Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus.
● Excellent problem-solving skills and the ability to work independently and as part of a team.
● Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders.

For more information, reach us at https://www.chicmicstudios.in/

Job Type: Full-time
Pay: ₹500,000.00 - ₹1,000,000.00 per year
Benefits:
Flexible schedule
Leave encashment
Provident Fund

Application Question(s):
Are you an immediate joiner?
Could you please confirm your highest qualification—whether it is B.Tech or MCA?

Experience:
Django: 3 years (Required)

Work Location: In person
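For the DRF responsibilities above, a minimal prediction endpoint might be sketched as follows; the serializer fields, the joblib artifact path, and the URL route are illustrative assumptions rather than the studio's actual API.

```python
# Sketch: a DRF view that wraps a pre-trained model behind a POST endpoint.
# The artifact path and serializer fields are placeholders.
import joblib
from rest_framework import serializers, status
from rest_framework.response import Response
from rest_framework.views import APIView

model = joblib.load("artifacts/sentiment.joblib")  # placeholder model artifact


class PredictionRequestSerializer(serializers.Serializer):
    text = serializers.CharField(max_length=2000)


class PredictionView(APIView):
    def post(self, request):
        serializer = PredictionRequestSerializer(data=request.data)
        if not serializer.is_valid():
            return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
        label = model.predict([serializer.validated_data["text"]])[0]
        return Response({"label": str(label)})


# urls.py (illustrative):
# path("api/predict/", PredictionView.as_view())
```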
Posted 1 day ago
0 years
4 - 4 Lacs
Lucknow
On-site
Speech Recognition Engineer (1–2 yrs exp)
Job Title: Speech Recognition Engineer
Location: Lucknow
Salary: ₹35,000 – ₹40,000 per month
Job Type: Full-Time

Qualifications:
- B.Tech/M.Tech in Signal Processing, AI, or MSc in Speech & Language Processing

Skills:
- Experience with Whisper, DeepSpeech, or Kaldi
- Knowledge of ASR/TTS models and acoustic modeling
- Familiarity with speech corpora and phonetics
- Python, PyTorch/TensorFlow

Responsibilities:
- Build and train custom speech recognition models
- Analyze acoustic data and improve accuracy
- Integrate ASR modules into real-time systems
- Collaborate with ML and product teams

Job Type: Full-time
Pay: ₹35,000.00 - ₹40,000.00 per month
Work Location: In person
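As a quick illustration of the Whisper experience asked for above, a minimal transcription sketch with the open-source openai-whisper package could look like this; the audio file and model size are placeholders.

```python
# Minimal Whisper transcription sketch (pip install openai-whisper).
# The audio path and the "base" model size are placeholders.
import whisper

model = whisper.load_model("base")
result = model.transcribe("call_recording.wav", language="en")

print(result["text"])
for segment in result["segments"]:
    # Word-level timing would need a forced-alignment step; segments are enough here.
    print(f'{segment["start"]:.1f}s - {segment["end"]:.1f}s: {segment["text"]}')
```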
Posted 1 day ago
0 years
9 - 10 Lacs
Ahmedabad
On-site
Lead AI/ML projects involving LLMs, fine-tuning, retrieval-augmented generation (RAG), agentic systems, and embedding models.
Guide the design and development of scalable Python-based AI systems using frameworks like LangChain, LangGraph, and Hugging Face.
Develop and integrate advanced chatbot systems (text and voice) using tools such as OpenAI, Claude, Gemini, and vLLM.
Architect and implement solutions using Model Context Protocol (MCP) to manage structured prompt flows, context switching, and modular multi-step agent reasoning.
Collaborate closely with cross-functional teams including product, data, IT, and support teams to translate business needs into AI solutions.
Ensure proper orchestration and error handling in agentic workflows and decision automation systems.
Contribute to R&D by evaluating emerging AI frameworks and building production-grade reusable components.
Communicate effectively with stakeholders, manage expectations, and provide technical direction throughout the project lifecycle.
Oversee code quality, model performance, and deployment pipelines across cloud platforms like AWS.
Manage vector search pipelines, data storage solutions (e.g., Neo4j, Postgres, Pinecone), and model inference optimization.
Mentor and guide junior team members in development, best practices, and research.

Strong expertise in:
o Python, PyTorch, TensorFlow, LangChain, LangGraph
o LLMs (OpenAI, Claude, Llama 3, Gemini), RAG, PEFT, LoRA
o Agentic workflows, prompt engineering, distributed model training
o Model Context Protocol (MCP) – designing modular, reusable prompt frameworks and managing dynamic reasoning contexts
o Vector databases (Pinecone, Redis), graph databases (Neo4j)
o MLOps tools like MLflow, HuggingFace, gRPC, Kafka

Proven experience deploying models and applications in cloud environments (AWS).
Exceptional communication and stakeholder management skills.
Demonstrated ability to lead small teams, manage priorities, and deliver under tight timelines.

Job Type: Full-time
Pay: ₹80,000.00 - ₹90,000.00 per month
Benefits: Cell phone reimbursement
Work Location: In person
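To illustrate the PEFT/LoRA item above, here is a hedged sketch that attaches a LoRA adapter to a small causal LM with the peft library; the GPT-2 base model, target modules, and hyperparameters are stand-in assumptions, and a real fine-tune would add data loading and a training loop.

```python
# LoRA adapter sketch with peft + transformers. GPT-2 is a small stand-in for a
# production LLM; hyperparameters are illustrative.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(base_name)
base_model = AutoModelForCausalLM.from_pretrained(base_name)

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection in GPT-2
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

Because only the low-rank adapter weights train, the same base model can serve several domain-specific adapters cheaply.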
Posted 1 day ago
0 years
16 Lacs
Ahmedabad
On-site
Opening for Team Lead - Generative AI / AI-ML Specialist

Role Overview:
We're seeking an experienced Data Scientist / Team Lead with deep expertise in Generative AI (GenAI) to design and implement cutting-edge AI models that solve real-world business problems. You'll work with LLMs, GANs, RAG frameworks, and transformer-based architectures to create production-ready solutions across domains.

Key Responsibilities:
Design, develop, and fine-tune Generative AI models (LLMs, GANs, diffusion models, etc.)
Work on RAG (Retrieval-Augmented Generation) and transformer-based architectures for contextual responses and document intelligence
Customize and fine-tune Large Language Models (LLMs) for domain-specific applications
Build and maintain robust ML pipelines and infrastructure for training, evaluation, and deployment
Collaborate with engineering teams to integrate models into end-user applications
Stay current with the latest GenAI research, open-source tools, and frameworks
Analyze model outputs, evaluate performance, and ensure ethical AI practices

Required Skills:
Strong proficiency in Python and ML/DL libraries: TensorFlow, PyTorch, HuggingFace Transformers
Deep understanding of LLMs, RAG, GANs, autoencoders, and other GenAI architectures
Experience with fine-tuning models using LoRA, PEFT, or similar techniques
Familiarity with vector databases (e.g., FAISS, Pinecone) and embedding generation
Experience working with datasets, data preprocessing, and synthetic data generation
Good knowledge of NLP, prompt engineering, and language model safety
Experience with APIs, model deployment, and cloud platforms (AWS/GCP/Azure)

Nice to Have:
Prior work with chatbots, conversational AI, or AI assistants
Familiarity with LangChain, LLMOps, or serverless model deployment
Background in MLOps, containerization (Docker/Kubernetes), and CI/CD pipelines
Knowledge of OpenAI, Anthropic, Google Gemini, or Meta LLaMA models

What We Offer:
An opportunity to work on real-world GenAI products and POCs
Collaborative environment with constant learning and innovation
Competitive salary and growth opportunities
5-day work week with a focus on work-life balance
Work from office

Job Types: Full-time, Permanent
Pay: Up to ₹1,600,000.00 per year
Benefits:
Health insurance
Life insurance
Paid sick time
Paid time off
Provident Fund
Ability to commute/relocate: Ahmedabad, Gujarat: Reliably commute or planning to relocate before starting work (Required)
Work Location: In person
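As a minimal sketch of the vector-database and embedding-generation skills above, the example below embeds a few documents with sentence-transformers and retrieves the best match with FAISS; the documents and query are made up, and a production RAG system would add chunking, metadata, and an LLM generation step on top of the retrieved context.

```python
# Retrieval sketch for a RAG pipeline: embed documents, index them with FAISS,
# and fetch the closest matches for a query. All text below is fabricated.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Invoices are processed within five business days.",
    "Refund requests require a signed approval form.",
    "Warehouse inventory is reconciled every Friday.",
]
encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = encoder.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vectors.shape[1])  # inner product == cosine here
index.add(np.asarray(doc_vectors, dtype="float32"))

query = encoder.encode(["How long does invoice processing take?"],
                       normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), k=2)
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.2f}  {docs[i]}")
```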
Posted 1 day ago