1.0 - 3.0 years
35 Lacs
Mumbai
Work from Office
Job Insights:
1. Develop and maintain AI models on time series & financial data for predictive modelling, including data collection, analysis, feature engineering, model development, evaluation, backtesting and monitoring.
2. Identify areas for model improvement through independent research and analysis, and develop recommendations for updates and enhancements.
3. Work with expert colleagues, quants, and business representatives to examine results and keep models grounded in reality.
4. Document each step of development and inform decision makers by presenting them options and results.
5. Ensure the integrity and security of data.
6. Provide support for production models delivered by the Mumbai team, and potentially for other models, across Asian/EU/US time zones.

Qualifications:
1. Bachelor's or Master's degree in a quantitative subject with an understanding of economics and markets (e.g., Economics with a speciality in Econometrics, Finance, Computer Science, Applied Maths, Engineering, Physics).
2. Knowledge of key concepts in statistics and mathematics, such as statistical methods for machine learning, probability theory and linear algebra.
3. Knowledge of Monte Carlo simulations, Bayesian modelling & causal inference.
4. Experience with machine learning & deep learning concepts, including data representations, neural network architectures and custom loss functions.
5. Proven track record of building AI models on time-series & financial data.
6. Programming skills in Python and knowledge of common numerical and machine-learning packages (e.g., NumPy, scikit-learn, pandas, PyTorch, PyMC, statsmodels).
7. Ability to write clear and concise Python code.
8. Intellectual curiosity and willingness to learn challenging concepts daily.
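The backtesting step in the first responsibility can be illustrated with a minimal walk-forward (expanding-window) evaluation. The drift forecaster and toy price series below are hypothetical stand-ins for illustration only, not the team's actual models or data:

```python
# Walk-forward backtest sketch: refit on an expanding window,
# score each one-step-ahead forecast, report MAE.

def drift_forecast(history):
    """One-step-ahead forecast: last value plus the average historical step."""
    if len(history) < 2:
        return history[-1]
    avg_step = (history[-1] - history[0]) / (len(history) - 1)
    return history[-1] + avg_step

def walk_forward_mae(series, min_train=5):
    """Expanding-window backtest over the series; returns mean absolute error."""
    errors = []
    for t in range(min_train, len(series)):
        pred = drift_forecast(series[:t])
        errors.append(abs(pred - series[t]))
    return sum(errors) / len(errors)

prices = [100, 101, 103, 102, 104, 106, 105, 107, 109, 108]  # toy data
print(round(walk_forward_mae(prices), 3))  # → 1.498
```

In practice the same loop structure applies whether the forecaster is a drift baseline or a fitted statistical model; only `drift_forecast` is swapped out.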
Posted 3 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Title: Associate Data Scientist
Location: Mumbai
Job Type: Full-time
Experience: 0-6 months

About The Role
We are seeking a highly motivated Associate Data Scientist with a strong passion for energy, technology, and data-driven decision-making. In this role, you will be responsible for developing and refining energy load forecasting models, analyzing customer demand patterns, and improving forecasting accuracy using advanced time series analysis and machine learning techniques. Your insights will directly support risk management, operational planning, and strategic decision-making across the company. If you thrive in a fast-paced, dynamic environment and enjoy solving complex data science challenges, we'd love to hear from you!

Key Responsibilities
- Develop and enhance energy load forecasting models using time series forecasting, statistical modeling, and machine learning techniques.
- Analyze historical and real-time energy consumption data to identify trends and improve forecasting accuracy.
- Investigate discrepancies between forecasted and actual energy usage, providing actionable insights.
- Automate data pipelines and forecasting workflows to streamline processes across departments.
- Monitor day-over-day forecast variations and communicate key insights to stakeholders.
- Work closely with internal teams and external vendors to refine forecasting methodologies.
- Perform scenario analysis to assess seasonal patterns, anomalies, and market trends.
- Continuously optimize forecasting models, leveraging techniques like ARIMA, Prophet, LSTMs, and regression-based models.

Qualifications & Skills
- 0-6 months of experience in data science, preferably in energy load forecasting, demand prediction, or a related field.
- Strong expertise in time series analysis, forecasting algorithms, and statistical modeling.
- Proficiency in Python, with experience using libraries such as pandas, NumPy, scikit-learn, statsmodels, and TensorFlow/PyTorch.
- Experience working with SQL and handling large datasets.
- Hands-on experience with forecasting models like ARIMA, SARIMA, Prophet, LSTMs, XGBoost, and random forests.
- Familiarity with feature engineering, anomaly detection, and seasonality analysis.
- Strong analytical and problem-solving skills with a data-driven mindset.
- Excellent communication skills, with the ability to translate technical findings into business insights.
- Ability to work independently and collaboratively in a fast-paced, dynamic environment.
- Strong attention to detail, time management, and organizational skills.

Preferred Qualifications (Nice To Have)
- Experience working with energy market data, smart meter analytics, or grid forecasting.
- Knowledge of cloud platforms (AWS) for deploying forecasting models.
- Experience with big data technologies such as Spark or Hadoop.
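The forecasting work described above is typically judged against a simple baseline. As an illustration, a seasonal-naive model and a MAPE score fit in a few lines of plain Python; the toy load series and its 4-point "daily" cycle are invented for the example:

```python
# Seasonal-naive baseline: forecast by repeating the last full seasonal cycle.
# Any ARIMA/Prophet/LSTM model would be expected to beat this benchmark.

def seasonal_naive(history, period, horizon):
    """Repeat the most recent full cycle of length `period`."""
    last_cycle = history[-period:]
    return [last_cycle[h % period] for h in range(horizon)]

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    terms = [abs(a - f) / abs(a) for a, f in zip(actual, forecast)]
    return 100 * sum(terms) / len(terms)

# Two toy "days" of load readings (MW) at 4 points per day.
load = [50, 80, 90, 60, 52, 82, 88, 61]
forecast = seasonal_naive(load, period=4, horizon=4)
print(forecast)                                   # → [52, 82, 88, 61]
print(round(mape([51, 79, 91, 62], forecast), 2)) # → 2.67
```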
Posted 3 weeks ago
4.0 years
0 Lacs
Greater Bengaluru Area
On-site
Job Title: Senior Data Scientist (SDS 2)
Experience: 4+ years
Location: Bengaluru (Hybrid)

Company Overview:
Akaike Technologies is a dynamic and innovative AI-driven company dedicated to building impactful solutions across various domains. Our mission is to empower businesses by harnessing the power of data and AI to drive growth, efficiency, and value. We foster a culture of collaboration, creativity, and continuous learning, where every team member is encouraged to take initiative and contribute to groundbreaking projects. We value diversity, integrity, and a strong commitment to excellence in all our endeavors.

Job Description:
We are seeking an experienced and highly skilled Senior Data Scientist to join our team in Bengaluru. This role focuses on driving innovative solutions using cutting-edge classical machine learning, deep learning, and generative AI. The ideal candidate will possess a blend of deep technical expertise, strong business acumen, effective communication skills, and a sense of ownership. During the interview, we look for a proven track record in designing, developing, and deploying scalable ML/DL solutions in a fast-paced, collaborative environment.

Key Responsibilities:

ML/DL Solution Development & Deployment:
- Design, implement, and deploy end-to-end ML/DL and GenAI solutions, writing modular, scalable, and production-ready code.
- Develop and implement scalable deployment pipelines using Docker and AWS services (ECR, Lambda, Step Functions).
- Design and implement custom models and loss functions to address data nuances and specific labeling challenges.
- Model different marketing scenarios of a product life cycle (targeting, segmenting, messaging, content recommendation, budget optimisation, customer scoring, risk and churn) and data limitations (sparse or incomplete labels, single-class learning).

Large-Scale Data Handling & Processing:
- Efficiently handle and model billions of data points using multi-cluster data processing frameworks (e.g., Spark SQL, PySpark).

Generative AI & Large Language Models (LLMs):
- Leverage an in-depth understanding of transformer architectures and the principles of large and small language models.
- Practical experience in building LLM-ready data management layers for large-scale structured and unstructured data.
- Apply a foundational understanding of LLM agents, multi-agent systems (e.g., Agent-Critique, ReACT, agent collaboration), advanced prompting techniques, LLM evaluation methodologies, confidence grading, and human-in-the-loop systems.

Experimentation, Analysis & System Design:
- Design and conduct experiments to test hypotheses and perform exploratory data analysis (EDA) aligned with business requirements.
- Apply system design concepts and engineering principles to create low-latency solutions capable of serving simultaneous users in real time.

Collaboration, Communication & Mentorship:
- Create clear solution outlines and effectively communicate complex technical concepts to stakeholders and team members.
- Mentor junior team members, providing guidance and bridging the gap between business problems and data science solutions.
- Work closely with cross-functional teams and clients to deliver impactful solutions.

Prototyping & Impact Measurement:
- Comfortable with rapid prototyping and meeting high productivity expectations in a fast-paced development environment.
- Set up measurement pipelines to study the impact of solutions in different market scenarios.
Must-Have Skills:

Core Machine Learning & Deep Learning:
- In-depth knowledge of artificial neural networks (ANNs); 1D, 2D, and 3D convolutional neural networks (ConvNets); LSTMs; and Transformer models.
- Expertise in modeling techniques such as promo mix modeling (MMM), PU learning, customer lifetime value (CLV), multi-dimensional time series modeling, and demand forecasting in supply chain and simulation.
- Strong proficiency in PU learning, single-class learning, and representation learning, alongside traditional machine learning approaches.
- Advanced understanding and application of model explainability techniques.

Data Analysis & Processing:
- Proficiency in Python and its data science ecosystem, including libraries like NumPy, Pandas, Dask, and PySpark for large-scale data processing and analysis.
- Ability to perform effective feature engineering by understanding business objectives.

ML/DL Frameworks & Tools:
- Hands-on experience with ML/DL libraries such as scikit-learn, TensorFlow/Keras, and PyTorch for developing and deploying models.

Natural Language Processing (NLP):
- Expertise in traditional and advanced NLP techniques, including Transformers (BERT, T5, GPT), Word2Vec, named entity recognition (NER), topic modeling, and contrastive learning.

Cloud & MLOps:
- Experience with the AWS ML stack or equivalent cloud platforms.
- Proficiency in developing scalable deployment pipelines using Docker and AWS services (ECR, Lambda, Step Functions).

Problem Solving & Research:
- Strong logical and reasoning skills.
- Good understanding of the Python ecosystem and experience implementing research papers.

Collaboration & Prototyping:
- Ability to thrive in a fast-paced development and rapid prototyping environment.

Nice to Have:
- Expertise in claims data and a background in the pharmaceutical industry.
- Awareness of best software design practices.
- Understanding of backend frameworks like Flask.
- Knowledge of recommender systems, representation learning, and PU learning.
Benefits and Perks:
- Competitive ESOP grants.
- Opportunity to work with Fortune 500 companies and world-class teams.
- Support for publishing papers and attending academic/industry conferences.
- Access to networking events, conferences, and seminars.
- Visibility across all functions at Akaike, including sales, pre-sales, lead generation, marketing, and hiring.

Appendix: Technical Skills (Must-Haves)

Data Processing:
- Wrangling: Some understanding of querying databases (MySQL, PostgreSQL, etc.); fluency with Pandas, NumPy, Statsmodels, etc.
- Visualization: Exposure to Matplotlib, Plotly, Altair, etc.

Machine Learning Exposure:
- Machine learning fundamentals, e.g., PCA, correlations, statistical tests.
- Time series models, e.g., ARIMA, Prophet.
- Tree-based models, e.g., Random Forest, XGBoost.
- Deep learning models: understanding and experience with ConvNets, ResNets, UNets, etc.
- GenAI-based models: experience using large-scale language models such as GPT-4 or alternatives (such as Mistral, Llama, Claude) through prompt engineering and custom finetuning.

Code Versioning Systems: Git, GitHub

If you're interested in the job opening, please apply through the Keka link provided here: https://akaike.keka.com/careers/jobdetails/26215
Posted 3 weeks ago
5.0 - 7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – AI and Data – Statistical Modeler – Senior

As part of our EY GDS AI and Data team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Healthcare, Retail, Manufacturing and Auto, Supply Chain, and Finance.

Technical Skills:
- Statistical programming languages: Python, R
- Libraries & frameworks: Pandas, NumPy, Scikit-learn, StatsModels, Tidyverse, caret
- Data manipulation tools: SQL, Excel
- Data visualization tools: Matplotlib, Seaborn, ggplot2
- Machine learning techniques: supervised and unsupervised learning, model evaluation (cross-validation, ROC curves)
- 5-7 years of experience building statistical forecast models for the pharma industry
- Deep understanding of patient flows and treatment journeys across both oncology and non-oncology therapeutic areas (TAs)

What We Look For
A team of people with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment.

What Working At EY Offers
At EY, we're dedicated to helping our clients, from startups to Fortune 500 companies, and the work we do with them is as varied as they are.
You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange.

Plus, we offer:
- Support, coaching and feedback from some of the most engaging colleagues around
- Opportunities to develop new skills and progress your career
- The freedom and flexibility to handle your role in a way that's right for you

About EY
As a global leader in assurance, tax, transaction and advisory services, we're using the finance products, expertise and systems we've developed to build a better working world. That starts with a culture that believes in giving you the training, opportunities and creative freedom to make things better. Whenever you join, however long you stay, the exceptional EY experience lasts a lifetime.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 3 weeks ago
0 years
0 Lacs
Greater Bengaluru Area
On-site
Job Description:
We are looking for a Data Scientist with expertise in Python, Azure Cloud, NLP, forecasting, and large-scale data processing. The role involves enhancing existing ML models; optimising embeddings, LDA models, RAG architectures, and forecasting models; and migrating data pipelines to Azure Databricks for scalability and efficiency.

Key Responsibilities:

Model Development & Optimisation
- Train and optimise models for new data providers, ensuring seamless integration.
- Enhance models for dynamic input handling.
- Improve LDA model performance to handle a higher number of clusters efficiently.
- Optimise RAG (Retrieval-Augmented Generation) architecture to enhance recommendation accuracy for large datasets.
- Upgrade the Retrieval QA architecture for improved chatbot performance on large datasets.

Forecasting & Time Series Modelling
- Develop and optimise forecasting models for marketing, demand prediction, and trend analysis.
- Implement time series models (e.g., ARIMA, Prophet, LSTMs) to improve business decision-making.
- Integrate NLP-based forecasting, leveraging customer sentiment and external data sources (e.g., news, social media).

Data Pipeline & Cloud Migration
- Migrate the existing pipeline from Azure Synapse to Azure Databricks and retrain models accordingly (note: required only for the AUB role(s)).
- Address space and time complexity issues in embedding storage and retrieval on Azure Blob Storage.
- Optimise embedding storage and retrieval in Azure Blob Storage for better efficiency.

MLOps & Deployment
- Implement MLOps best practices for model deployment on Azure ML, Azure Kubernetes Service (AKS), and Azure Functions.
- Automate model training, inference pipelines, and API deployments using Azure services.

Experience:
- Experience in data science, machine learning, deep learning and GenAI.
- Design, architect and execute end-to-end data science pipelines, including data extraction, data preprocessing, feature engineering, model building, tuning and deployment.
- Experience leading a team and taking responsibility for project delivery.
- Experience building end-to-end machine learning pipelines, with expertise in developing CI/CD pipelines using Azure Synapse pipelines, Databricks, Google Vertex AI and AWS.
- Experience developing advanced natural language processing (NLP) systems, specializing in building RAG (Retrieval-Augmented Generation) models using LangChain, and deploying RAG models to production.
- Expertise in building machine learning pipelines and deploying models such as forecasting models, anomaly detection models, market mix models, classification models, regression models and clustering techniques.
- Maintaining GitHub repositories and cloud computing resources for effective and efficient version control, development, testing and production.
- Developing proof-of-concept solutions and assisting in rolling these out to our clients.

Required Skills & Qualifications:
- Hands-on experience with Azure Databricks, Azure ML, Azure Synapse, Azure Blob Storage, and Azure Kubernetes Service (AKS).
- Experience with forecasting models, time series analysis, and predictive analytics.
- Proficiency in Python (NumPy, Pandas, TensorFlow, PyTorch, Statsmodels, Scikit-learn, Hugging Face, FAISS).
- Experience with model deployment, API optimisation, and serverless architectures.
- Hands-on experience with Docker, Kubernetes, and MLflow for tracking and scaling ML models.
- Expertise in optimising the time complexity, memory efficiency, and scalability of ML models in a cloud environment.
- Experience with LangChain (or equivalent), RAG, and multi-agent generation.

Location: DGS India - Bengaluru - Manyata N1 Block
Brand: Merkle
Time Type: Full time
Contract Type: Permanent
Posted 4 weeks ago
5.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
About Hakkoda
Hakkoda, an IBM Company, is a modern data consultancy that empowers data-driven organizations to realize the full value of the Snowflake Data Cloud. We provide consulting and managed services in data architecture, data engineering, analytics and data science. We are renowned for bringing our clients deep expertise, being easy to work with, and being an amazing place to work! We are looking for curious and creative individuals who want to be part of a fast-paced, dynamic environment, where everyone's input and efforts are valued. We hire outstanding individuals and give them the opportunity to thrive in a collaborative atmosphere that values learning, growth, and hard work. Our team is distributed across North America, Latin America, India and Europe. If you have the desire to be a part of an exciting, challenging, and rapidly-growing Snowflake consulting services company, and if you are passionate about making a difference in this world, we would love to talk to you!

We are seeking an exceptional and highly motivated Lead Data Scientist with a PhD in Data Science, Computer Science, Applied Mathematics, Statistics, or a closely related quantitative field to spearhead the design, development, and deployment of an automotive OEM's next-generation Intelligent Forecast Application. This pivotal role will leverage cutting-edge machine learning, deep learning, and statistical modeling techniques to build a robust, scalable, and accurate forecasting system crucial for strategic decision-making across the automotive value chain, including demand planning, production scheduling, inventory optimization, predictive maintenance, and new product introduction.
The successful candidate will be a recognized expert in advanced forecasting methodologies, possess a strong foundation in data engineering and MLOps principles, and demonstrate a proven ability to translate complex research into tangible, production-ready applications within a dynamic industrial environment. This role demands not only deep technical expertise but also a visionary approach to leveraging data and AI to drive significant business impact for a leading automotive OEM.

Role Description

Strategic Leadership & Application Design:
- Lead the end-to-end design and architecture of the Intelligent Forecast Application, defining its capabilities, modularity, and integration points with existing enterprise systems (e.g., ERP, SCM, CRM).
- Develop a strategic roadmap for forecasting capabilities, identifying opportunities for innovation and the adoption of emerging AI/ML techniques (e.g., generative AI for scenario planning, reinforcement learning for dynamic optimization).
- Translate complex business requirements and automotive industry challenges into well-defined data science problems and technical specifications.

Advanced Model Development & Research:
- Design, develop, and validate highly accurate and robust forecasting models using a variety of advanced techniques, including:
  - Time series analysis: ARIMA, SARIMA, Prophet, exponential smoothing, state-space models.
  - Machine learning: gradient boosting (XGBoost, LightGBM), random forests, support vector machines.
  - Deep learning: LSTMs, GRUs, Transformers, and other neural network architectures for complex sequential data.
  - Probabilistic forecasting: quantile regression and Bayesian methods to capture uncertainty.
  - Hierarchical & grouped forecasting: managing forecasts across multiple product hierarchies, regions, and dealerships.
- Incorporate diverse data sources, including historical sales, market trends, economic indicators, competitor data, internal operational data (e.g., production schedules, supply chain disruptions), external events, and unstructured data.
- Conduct extensive exploratory data analysis (EDA) to identify patterns, anomalies, and key features influencing automotive forecasts.
- Stay abreast of the latest academic research and industry advancements in forecasting, machine learning, and AI, actively evaluating and advocating for their practical application within the OEM.

Application Development & Deployment (MLOps):
- Architect and implement scalable data pipelines for ingestion, cleaning, transformation, and feature engineering of large, complex automotive datasets.
- Develop robust and efficient code for model training, inference, and deployment within a production environment.
- Implement MLOps best practices for model versioning, monitoring, retraining, and performance management to ensure the continuous accuracy and reliability of the forecasting application.
- Collaborate closely with Data Engineering, Software Development, and IT Operations teams to ensure seamless integration, deployment, and maintenance of the application.

Performance Evaluation & Optimization:
- Define and implement rigorous evaluation metrics for forecasting accuracy (e.g., MAE, RMSE, MAPE, sMAPE, wMAPE, pinball loss) and business impact.
- Perform A/B testing and comparative analyses of different models and approaches to continuously improve forecasting performance.
- Identify and mitigate sources of bias and uncertainty in forecasting models.

Collaboration & Mentorship:
- Work cross-functionally with various business units (e.g., Sales, Marketing, Supply Chain, Manufacturing, Finance, Product Development) to understand their forecasting needs and integrate solutions.
- Communicate complex technical concepts and model insights clearly and concisely to both technical and non-technical stakeholders.
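For reference, two of the accuracy metrics listed above, sMAPE and pinball (quantile) loss, in their common textbook forms; the OEM's exact variants may differ, and the numbers below are toy data:

```python
# Textbook implementations of sMAPE and pinball loss for forecast evaluation.

def smape(actual, forecast):
    """Symmetric MAPE in percent: mean of 200 * |a - f| / (|a| + |f|)."""
    terms = [200 * abs(a - f) / (abs(a) + abs(f))
             for a, f in zip(actual, forecast)]
    return sum(terms) / len(terms)

def pinball_loss(actual, forecast, q):
    """Quantile loss: under-forecasts cost q, over-forecasts cost 1 - q."""
    losses = []
    for a, f in zip(actual, forecast):
        losses.append(q * (a - f) if a >= f else (1 - q) * (f - a))
    return sum(losses) / len(losses)

actual = [100, 120, 90]
forecast = [110, 115, 95]
print(round(smape(actual, forecast), 3))             # → 6.395
print(round(pinball_loss(actual, forecast, 0.9), 3)) # → 2.0
```

The asymmetry of the pinball loss is what makes it suitable for the probabilistic (quantile) forecasts mentioned in the role description: a 0.9-quantile forecast is penalized 9x more for under-forecasting than over-forecasting.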
- Provide technical leadership and mentorship to junior data scientists and engineers, fostering a culture of innovation and continuous learning.
- Potentially contribute to intellectual property (patents) and present findings at internal and external conferences.

Qualifications

Education: PhD in Data Science, Computer Science, Statistics, Applied Mathematics, Operations Research, or a closely related quantitative field.

Experience:
- 5+ years of hands-on experience in a Data Scientist or Machine Learning Engineer role, with a significant focus on developing and deploying advanced forecasting solutions in a production environment.
- Demonstrated experience designing and developing intelligent applications, not just isolated models.
- Experience in the automotive industry or a similar complex manufacturing/supply chain environment is highly desirable.

Technical Skills:
- Expert proficiency in Python (NumPy, Pandas, Scikit-learn, Statsmodels) and/or R; strong proficiency in SQL.
- Machine learning/deep learning frameworks: extensive experience with TensorFlow, PyTorch, Keras, or similar deep learning libraries.
- Forecasting-specific libraries: proficiency with forecasting libraries like Prophet, Statsmodels, or specialized time series packages.
- Data warehousing & big data technologies: experience with distributed computing frameworks (e.g., Apache Spark, Hadoop) and data storage solutions (e.g., Snowflake, Databricks, S3, ADLS).
- Cloud platforms: hands-on experience with at least one major cloud provider (Azure, AWS, GCP) for data science and ML deployments.
- MLOps: understanding and practical experience with MLOps tools and practices (e.g., MLflow, Kubeflow, Docker, Kubernetes, CI/CD pipelines).
- Data visualization: proficiency with tools like Tableau, Power BI, or similar for creating compelling data stories and dashboards.
- Analytical prowess: deep understanding of statistical inference, experimental design, causal inference, and the mathematical foundations of machine learning algorithms.
- Problem solving: proven ability to analyze complex, ambiguous problems, break them down into manageable components, and devise innovative solutions.

Preferred Qualifications
- Publications in top-tier conferences or journals related to forecasting, time series analysis, or applied machine learning.
- Experience with real-time forecasting systems or streaming data analytics.
- Familiarity with specific automotive data types (e.g., telematics, vehicle sensor data, dealership data, market sentiment).
- Experience with distributed version control systems (e.g., Git).
- Knowledge of agile development methodologies.

Soft Skills
- Exceptional communication: ability to articulate complex technical concepts and insights to a diverse audience, including senior management and non-technical stakeholders.
- Collaboration: strong interpersonal skills and a proven ability to work effectively within cross-functional teams.
- Intellectual curiosity & proactiveness: a passion for continuous learning, staying ahead of industry trends, and proactively identifying opportunities for improvement.
- Strategic thinking: ability to see the big picture and align technical solutions with overall business objectives.
- Mentorship: desire and ability to guide and develop less experienced team members.
- Resilience & adaptability: thrive in a fast-paced, evolving environment with complex challenges.

Benefits
- Health insurance
- Paid leave
- Technical training and certifications
- Robust learning and development opportunities
- Incentives
- Toastmasters
- Food program
- Fitness program
- Referral bonus program

Hakkoda is committed to fostering diversity, equity, and inclusion within our teams. A diverse workforce enhances our ability to serve clients and enriches our culture.
We encourage candidates of all races, genders, sexual orientations, abilities, and experiences to apply, creating a workplace where everyone can succeed and thrive.

Ready to take your career to the next level? 🚀 💻 Apply today 👇 and join a team that's shaping the future!

Hakkoda has been acquired by IBM and will be integrated into the IBM organization; Hakkoda will be the hiring entity. By proceeding with this application, you understand that Hakkoda will share your personal information with other IBM subsidiaries involved in your recruitment process, wherever these are located. More information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, is available here.
Posted 1 month ago
5.0 - 9.0 years
7 - 11 Lacs
Bengaluru
Work from Office
Dreaming big is in our DNA. It's who we are as a company. It's our culture. It's our heritage. And more than ever, it's our future. A future where we're always looking forward, always serving up new ways to meet life's moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources, and opportunities to unleash their full potential. The power we create together when we combine your strengths with ours is unstoppable. Are you ready to join a team that dreams as big as you do?

AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics. Do You Dream Big? We Need You.

Job Description
Job Title: Senior ML Engineer
Location: Bangalore
Reporting to: Director, Data Analytics

Purpose of the role
Anheuser-Busch InBev (AB InBev)'s Supply Analytics is responsible for building competitive, differentiated solutions that enhance brewery efficiency through data-driven insights. We optimize processes, reduce waste, and improve productivity by leveraging advanced analytics and AI-driven solutions. The Senior MLE will be responsible for the end-to-end deployment of machine learning models on edge devices. You will take ownership of all aspects of edge deployment, including model optimization, scaling complexities, containerization, and infrastructure management, ensuring high availability and performance.

Key tasks & accountabilities
1. Lead the entire edge deployment lifecycle, from model training to deployment and monitoring on edge devices.
2. Develop and maintain a scalable Edge ML pipeline that enables real-time analytics at brewery sites.
3. Optimize and containerize models using Portainer, Docker, and Azure Container Registry (ACR) to ensure efficient execution in constrained edge environments.
4. Own and manage the GitHub repository, ensuring structured, well-documented, and modularized code for seamless deployments.
5. Establish robust CI/CD pipelines for continuous integration and deployment of models and services.
6. Implement logging, monitoring, and alerting for deployed models to ensure reliability and quick failure recovery.
7. Ensure compliance with security and governance best practices for data and model deployment in edge environments.
8. Document the thought process and create artifacts on the team repo/wiki that can be shared with business and engineering for sign-off.
9. Review code quality and design developed by peers.
10. Significantly improve the performance and reliability of our code so that it produces high-quality, reproducible results.
11. Develop internal tools and utilities that improve the productivity of the entire team.
12. Collaborate with other team members to advance the team's ability to ship high-quality code fast!
13. Mentor and coach junior team members to continuously upskill them.
14. Maintain basic developer hygiene, including but not limited to writing tests, using loggers, and keeping READMEs current.

Qualifications, Experience, Skills
Level of educational attainment required (1 or more of the following): Bachelor's or Master's in Computer Applications, Computer Science, or any engineering discipline.

Previous work experience
5+ years of real-world experience developing scalable, high-quality ML models. Strong problem-solving skills with an owner's mindset, proactively identifying and resolving bottlenecks.

Technical skills required
1. Proficiency with pandas, NumPy, SciPy, scikit-learn, statsmodels, and TensorFlow.
2. Good understanding of statistical computing and parallel processing; experience with distributed TensorFlow, NumPy, and joblib.
3. Good understanding of memory management and parallel processing in Python; profiling and optimization of production code.
4. Strong Python coding skills; exposure to working in IDEs such as VS Code or PyCharm.
5. Experience in code versioning using Git and maintaining a modularized code base across multiple deployments.
6. Experience working in an Agile environment.
7. In-depth understanding of Databricks (workflows, cluster creation, repo management).
8. In-depth understanding of machine learning solutions on Azure cloud.
9. Best practices in coding standards, unit testing, and automation.
10. Proficiency in Docker, Kubernetes, Portainer, and container orchestration for edge computing.

Other skills required
Experience in real-time analytics and edge AI deployments. Exposure to DevOps practices, including infrastructure automation and monitoring tools. Contributions to OSS or Stack Overflow.

And above all of this, an undying love for beer! We dream big to create a future with more cheers.
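As a concrete illustration of the "parallel processing in Python" and joblib skills the posting asks for, here is a minimal sketch of parallel batch scoring. The `score_batch` function and the linear "model" are hypothetical stand-ins for real inference code, not anything from the posting itself.

```python
# Sketch: parallel batch scoring with joblib (illustrative stand-in model).
import numpy as np
from joblib import Parallel, delayed


def score_batch(weights, batch):
    """Toy linear 'model': one dot product per row (stand-in for real inference)."""
    return batch @ weights


def parallel_score(weights, data, n_jobs=2, batch_size=1000):
    """Split data into batches and score them across worker processes."""
    batches = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
    results = Parallel(n_jobs=n_jobs)(
        delayed(score_batch)(weights, b) for b in batches
    )
    return np.concatenate(results)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 8))
    w = rng.normal(size=8)
    scores = parallel_score(w, X)
    assert scores.shape == (5000,)
    assert np.allclose(scores, X @ w)  # parallel result matches serial result
```

Batching keeps per-worker memory bounded, which matters on constrained edge hardware; the same pattern applies when each batch call invokes a real model.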
Posted 1 month ago
0.0 - 2.0 years
0 Lacs
Gurugram, Haryana
On-site
Position: AI / ML Engineer
Job Type: Full-Time
Location: Gurgaon, Haryana, India
Experience: 2 Years
Industry: Information Technology
Domain: Demand Forecasting in Retail/Manufacturing

Job Summary
We are seeking a skilled Time Series Forecasting Engineer to enhance existing Python microservices into a modular, scalable forecasting engine. The ideal candidate will have a strong statistical background, expertise in handling multi-seasonal and intermittent data, and a passion for model interpretability and real-time insights.

Key Responsibilities
1. Develop and integrate advanced time-series models: MSTL, Croston, TSB, Box-Cox.
2. Implement rolling-origin cross-validation and hyperparameter tuning.
3. Blend models such as ARIMA, Prophet, and XGBoost for improved accuracy.
4. Generate SHAP-based driver insights and deliver them to a React dashboard via GraphQL.
5. Monitor forecast performance with Prometheus and Grafana; trigger alerts based on degradation.

Core Technical Skills
Languages: Python (pandas, statsmodels, scikit-learn)
Time Series: ARIMA, MSTL, Croston, Prophet, TSB
Tools: Docker, REST API, GraphQL, Git-flow, Unit Testing
Database: PostgreSQL
Monitoring: Prometheus, Grafana
Nice-to-Have: MLflow, ONNX, TensorFlow Probability

Soft Skills
Strong communication and collaboration skills. Ability to explain statistical models in layman's terms. Proactive problem-solving attitude. Comfort working cross-functionally in iterative development environments.

Job Type: Full-time
Pay: ₹400,000.00 - ₹800,000.00 per year

Application Question(s):
1. Do you have at least 2 years of hands-on experience in Python-based time series forecasting?
2. Have you worked in retail or manufacturing domains where demand forecasting was a core responsibility?
3. Are you currently authorized to work in India without sponsorship?
4. Have you implemented or used ARIMA, Prophet, or MSTL in any of your projects?
5. Have you used Croston or TSB models for forecasting intermittent demand?
6. Are you familiar with SHAP for model interpretability?
7. Have you containerized a forecasting pipeline using Docker and exposed it through a REST or GraphQL API?
8. Have you used Prometheus and Grafana to monitor model performance in production?

Work Location: In person
Application Deadline: 05/06/2025
Expected Start Date: 05/06/2025
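The rolling-origin cross-validation named in the responsibilities above can be sketched in a few lines. This is a minimal illustration only: the seasonal-naive forecaster stands in for the ARIMA/Prophet/MSTL models the role actually uses, and all function names here are hypothetical.

```python
# Sketch: rolling-origin (expanding-window) cross-validation for a forecaster.
import numpy as np


def seasonal_naive(history, horizon, season=7):
    """Forecast by repeating the last observed season (stand-in for a real model)."""
    last_season = history[-season:]
    reps = int(np.ceil(horizon / season))
    return np.tile(last_season, reps)[:horizon]


def rolling_origin_mae(series, initial, horizon, step=1, season=7):
    """Average MAE over successive forecast origins; training window expands each step."""
    errors = []
    for origin in range(initial, len(series) - horizon + 1, step):
        train = series[:origin]
        test = series[origin:origin + horizon]
        forecast = seasonal_naive(train, horizon, season)
        errors.append(np.mean(np.abs(forecast - test)))
    return float(np.mean(errors))


if __name__ == "__main__":
    t = np.arange(100)
    series = 10 + 5 * np.sin(2 * np.pi * t / 7)  # purely weekly-seasonal signal
    mae = rolling_origin_mae(series, initial=28, horizon=7)
    # Near zero here, since seasonal-naive reproduces a pure seasonal signal exactly.
    print(mae)
```

Unlike a single train/test split, each origin simulates forecasting from a different point in time, which is how degradation-based alerting thresholds are usually calibrated.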
Posted 1 month ago