0.0 - 4.0 years
0 Lacs
Mumbai, Maharashtra
Remote
We are seeking a motivated Data Scientist with 2–4 years of experience to join our data team. You will work with large datasets, build predictive models, and deliver actionable insights that improve project performance and reduce risk. Key Responsibilities: Analyze large datasets to extract insights and trends Build ML models (prediction, classification, clustering) using Python Work with AEC domain teams to define data solutions Clean, transform, and validate data (SQL, Python) Automate ML workflows and reporting Create visualizations using Power BI, Tableau, or Plotly Document processes for reproducibility Qualifications: BE/BTech/MTech in Data Science, CS, Statistics, or related fields Strong Python & SQL skills Proficiency in libraries: scikit-learn, pandas, TensorFlow, etc. Knowledge of OOPs and ML concepts Experience with data visualization tools AEC industry knowledge (Revit, Navisworks, BIM 360) is a plus Strong communication & presentation skills Location: Andheri | Industry: AEC (Architecture, Engineering, Construction) Job Types: Full-time, Permanent Pay: ₹300,000.00 - ₹1,000,000.00 per year Benefits: Flexible schedule Health insurance Leave encashment Life insurance Paid sick time Paid time off Provident Fund Work from home Schedule: Day shift Fixed shift Monday to Friday Morning shift Supplemental Pay: Performance bonus Yearly bonus Application Question(s): What is your current CTC? What is your expected CTC? What is your notice period? Location: Mumbai, Maharashtra (Required) Work Location: In person
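To illustrate the kind of predictive-modelling work described above (classification models built in Python with scikit-learn), here is a minimal sketch of a tabular classification pipeline. The dataset, feature names, and the "delayed" target are hypothetical stand-ins invented for the example, not part of the actual role.

```python
# Minimal sketch of a tabular classification workflow on synthetic data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Hypothetical project dataset: two numeric features and a binary "delayed" label.
df = pd.DataFrame({
    "planned_hours": rng.normal(100, 20, 500),
    "change_orders": rng.poisson(3, 500),
})
df["delayed"] = (
    (df["planned_hours"] + 10 * df["change_orders"] + rng.normal(0, 10, 500)) > 140
).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df[["planned_hours", "change_orders"]], df["delayed"], test_size=0.2, random_state=0
)

model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

In practice the features would come from real project data rather than a random generator, and model choice and evaluation metrics would be driven by the business question.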
Posted 1 week ago
3.0 - 7.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Experienced in React.js, Plotly Dash, HTML5, AngularJS, Power BI, and Grafana. Should have worked on frontend technologies. Responsible for creating and executing user interface components using React.js concepts and workflows such as Redux, Flux, and Webpack.
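As a rough illustration of the Plotly Dash work this listing names, the sketch below wires a dropdown to a line chart using Plotly's bundled sample data. The layout, IDs, and dataset are illustrative assumptions only, not the employer's actual application.

```python
# Minimal Plotly Dash sketch: a dropdown-driven chart built from sample data.
import plotly.express as px
from dash import Dash, Input, Output, dcc, html

df = px.data.gapminder()  # sample dataset bundled with Plotly

app = Dash(__name__)
app.layout = html.Div([
    html.H3("GDP per capita over time"),
    dcc.Dropdown(id="country", options=sorted(df["country"].unique()), value="India"),
    dcc.Graph(id="trend"),
])

@app.callback(Output("trend", "figure"), Input("country", "value"))
def update_chart(country):
    # Re-filter the data and redraw the line chart whenever the dropdown changes.
    subset = df[df["country"] == country]
    return px.line(subset, x="year", y="gdpPercap", title=country)

if __name__ == "__main__":
    app.run(debug=True)  # use app.run_server(debug=True) on older Dash versions
```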
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Coupa makes margins multiply through its community-generated AI and industry-leading total spend management platform for businesses large and small. Coupa AI is informed by trillions of dollars of direct and indirect spend data across a global network of 10M+ buyers and suppliers. We empower you with the ability to predict, prescribe, and automate smarter, more profitable business decisions to improve operating margins. Why join Coupa? 🔹 Pioneering Technology: At Coupa, we're at the forefront of innovation, leveraging the latest technology to empower our customers with greater efficiency and visibility in their spend. 🔹 Collaborative Culture: We value collaboration and teamwork, and our culture is driven by transparency, openness, and a shared commitment to excellence. 🔹 Global Impact: Join a company where your work has a global, measurable impact on our clients, the business, and each other. Learn more on Life at Coupa blog and hear from our employees about their experiences working at Coupa. The Impact of Lead AI Engineer to Coupa: XXXX What you will do: Collaborate closely with product managers, engineers, and cross-functional stakeholders to define metrics and outcomes for AI-driven product features and capabilities Design, develop, and deploy sophisticated AI models, agents, and intelligent assistants leveraging Large Language Models (LLMs), including both language processing and advanced reasoning tasks Engineer robust, scalable, and reliable prompt-based solutions to effectively integrate LLMs into production-grade SaaS applications Implement experimentation frameworks, conduct rigorous evaluations, and iterate rapidly based on user feedback and data-driven insights Establish automated monitoring and quality assurance mechanisms for continuous evaluation and enhancement of model performance Provide strategic vision and technical leadership to navigate evolving trends and emerging methodologies within AI, particularly focusing on leveraging generative AI capabilities What you will bring to Coupa: Strong analytical and problem-solving skills focused on creating impactful AI-driven product experiences Proven expertise in Python for developing, deploying, and optimizing complex AI models and pipelines Extensive experience with Large Language Models (e.g., GPT-4o, Claude, LLaMA), including prompt engineering, fine-tuning, and evaluating performance Solid understanding of various machine learning paradigms (supervised, unsupervised, reinforcement learning) and practical experience deploying these methods at scale Deep knowledge of reasoning frameworks, agent architectures, retrieval-augmented generation (RAG), and related AI techniques Excellent proficiency in modern data architectures and experience with distributed data frameworks (e.g., Spark, Hadoop) Familiarity with cloud infrastructure (AWS, Azure, GCP) for deploying scalable AI/ML services Ability to clearly communicate complex technical concepts to both technical and non-technical stakeholders A strong drive to stay abreast of cutting-edge AI research and proactively integrate relevant innovations into production systems 5+ years of experience building, deploying, and managing AI/ML models in production environments Educational background in Computer Science, Data Science, Statistics, Mathematics, or related fields, or equivalent professional experience Experience with modern ML and AI development tools (e.g., Hugging Face, LangChain, PyTorch, TensorFlow) and visualization platforms (e.g., Streamlit, D3, Plotly, Matplotlib) Coupa 
complies with relevant laws and regulations regarding equal opportunity and offers a welcoming and inclusive work environment. Decisions related to hiring, compensation, training, or evaluating performance are made fairly, and we provide equal employment opportunities to all qualified candidates and employees. Please be advised that inquiries or resumes from recruiters will not be accepted. By submitting your application, you acknowledge that you have read Coupa’s Privacy Policy and understand that Coupa receives/collects your application, including your personal data, for the purposes of managing Coupa's ongoing recruitment and placement activities, including for employment purposes in the event of a successful application and for notification of future job opportunities if you did not succeed the first time. You will find more details about how your application is processed, the purposes of processing, and how long we retain your application in our Privacy Policy.
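For context on the retrieval-augmented generation (RAG) and prompt-based solutions this listing describes, here is a deliberately simplified skeleton. Retrieval uses TF-IDF purely for illustration, the document snippets are invented, and call_llm is a hypothetical placeholder for whatever LLM client a real production stack would use.

```python
# Illustrative RAG skeleton: retrieve relevant snippets, then assemble a prompt.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Invoices over $10,000 require two approvals.",
    "Purchase orders must reference a valid supplier record.",
    "Expense reports are reimbursed within 14 days.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (TF-IDF cosine similarity)."""
    sims = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    return [documents[i] for i in sims.argsort()[::-1][:k]]

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real LLM API call (OpenAI, Anthropic, etc.)."""
    return f"[LLM answer based on a prompt of {len(prompt)} characters]"

def answer(question: str) -> str:
    # Ground the model by injecting retrieved context into the prompt.
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

print(answer("How fast are expense reports paid back?"))
```

A production system would swap the TF-IDF step for an embedding model and a vector store, and add evaluation and monitoring around the generated answers.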
Posted 1 week ago
0.0 - 4.0 years
0 Lacs
Mumbai, Maharashtra
On-site
Job Information Date Opened 06/17/2025 Industry AEC Job Type Permanent Work Experience 1 - 3 Years City Mumbai State/Province Maharashtra Country India Zip/Postal Code 400093 About Us Axium Global (formerly XS CAD), established in 2002, is a UK-based MEP (M&E) and architectural design and BIM Information Technology Enabled Services (ITES) provider with an ISO 9001:2015 and ISO 27001:2022 certified Global Delivery Centre in Mumbai, India. With additional presence in the USA, Australia and UAE, our global reach allows us to provide services to customers with the added benefit of local knowledge and expertise. Axium Global is established as one of the leading pre-construction planning services companies in the UK and India, serving the building services (MEP), retail, homebuilder, architectural and construction sectors with high-quality MEP engineering design and BIM solutions. Job Description We are looking for a motivated and analytical Data Scientist with 2–4 years of experience to join our growing data team. The ideal candidate should be comfortable working with large datasets, building predictive models and generating actionable insights. Experience or familiarity with the AEC industry is a strong plus, as the role involves working on data generated from engineering, construction and design workflows. As a Data Scientist, you will play a key role in turning raw data into insights that support strategic decision-making. You will collaborate with cross-functional teams including software developers and domain experts. Your analytical models and tools will help enhance project performance, reduce risk and drive operational efficiency. Key Roles and Responsibilities: Analyze large datasets to identify trends, patterns and actionable insights Design and implement machine learning and neural network models for predictions, classifications and clustering Collaborate with AEC domain teams to understand data requirements and propose technical solutions Clean, transform, and validate data using SQL and Python Support automation of machine learning (ML) workflows and model reporting pipelines Document data science processes and results for transparency and reproducibility Create dashboards and visualizations using tools like Power BI, Tableau or Plotly Qualifications and Experience Required: BE/BTech/MTech degree in Computer Science, Data Science, Engineering, Statistics or a related field 2–4 years of hands-on experience in a Data Science or related role Strong programming skills in Python, and proficiency with SQL Experience with data science libraries (e.g., PyTorch, scikit-learn, pandas, NumPy, TensorFlow, XGBoost) Clarity with the concepts of Object-Oriented Programming (OOP) Good understanding of statistics, machine learning techniques and data modeling Experience with data visualization tools such as Power BI, Tableau or Matplotlib Familiarity with the AEC industry and tools like Revit, Navisworks, BIM 360 or project scheduling data is a plus Strong communication skills and ability to present data findings to non-technical stakeholders Compensation: The selected candidate will receive competitive compensation and remuneration policies in line with qualifications and experience. Compensation will not be a constraint for the right candidate.
What We Offer: A fulfilling working environment that is respectful and ethical A stable and progressive career opportunity State-of-the-art office infrastructure with the latest hardware and software for professional growth In-house, internationally certified training division and innovation team focusing on training and learning the latest tools and trends. Culture of discussing and implementing a planned career growth path with team leaders Transparent fixed and variable compensation policies based on team and individual performances, ensuring a productive association.
Posted 1 week ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
As a member of the Data Science team, this is your opportunity to be part of a next-gen data science group. As a Junior Data Scientist, you'll work across business analytics, data engineering, and advanced modelling projects. You will gain hands-on experience in building clean data pipelines, generating user-level insights, and contributing to models that decode complex consumer behaviour. If you're eager to grow at the intersection of data engineering, analytics, and AI, this role is your launchpad. You will influence strategic decisions, pioneer groundbreaking technologies, and deliver insights that empower businesses across industries. If you're a visionary who thrives on solving complex challenges and driving transformative change, this role is your canvas to create, inspire, and lead. What You'll Do – Learn, Build, and Apply: Support the development of VTION's behavioural data models through cleaning, transformation, and feature engineering. Assist in the construction and automation of data pipelines with engineering teams. Engage in business analytics projects, translating behavioural data into marketing and user insights. Bridge Data and Business: Deliver exploratory analyses, cohort insights, and trend reports to support strategic decisions. Contribute to real-time dashboards and visualizations for internal and client use (Power BI, Tableau, Plotly). Support Innovation: Work on classification, clustering, and segmentation models using Python (scikit-learn, pandas, NumPy). Collaborate & Evolve: Partner with data scientists, engineers, and product teams to deliver scalable solutions. Participate in sprint reviews, brainstorms, and post-analysis reviews to drive learning and iteration. Continuously explore new tools, methods, and technologies to elevate your own practice. What We're Looking For – Proven Leadership: 5+ years of experience in data science, with at least 3 years leading high-performing teams. Technical Mastery: Python (pandas, NumPy, scikit-learn, matplotlib); SQL for data querying; exposure to cloud platforms (AWS, GCP preferred). Bonus: Familiarity with Git, Airflow, dbt, or vector databases. Analytics Mindset: Ability to interpret business problems as data questions; understanding of basic statistics, experiment design, and metrics interpretation. Curiosity and Drive: Eagerness to learn in a dynamic, fast-paced environment; strong communication skills and a collaborative spirit. Why Join Us: Contribute to real-time, high-impact data applications. Be mentored by senior leaders in AI, LLMs, and semantic intelligence. Build a career path that spans analytics, engineering, and research. Thrive in a culture that values creativity, continuous learning, and ownership. (ref:hirist.tech)
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Summary: We are seeking an experienced and innovative Data Scientist to join our team. The ideal candidate will leverage data-driven insights to solve complex problems, optimize business processes, and contribute to strategic decision-making. This role requires expertise in statistical analysis, machine learning, and data visualization to extract valuable insights from large datasets. Key Responsibilities: Collect, clean, and preprocess structured and unstructured data from various sources. Apply statistical methods and machine learning algorithms to analyze data and identify patterns. Develop predictive and prescriptive models to support business goals. Collaborate with stakeholders to define data-driven solutions for business challenges. Visualize data insights using tools like Power BI, Tableau, or Matplotlib. Perform A/B testing and evaluate model accuracy using appropriate metrics. Optimize machine learning models for scalability and performance. Document processes and communicate findings to non-technical stakeholders. Stay updated with advancements in data science techniques and tools. Required Skills and Qualifications: Proficiency in programming languages like Python, R, or Scala. Strong knowledge of machine learning frameworks such as TensorFlow, PyTorch, or scikit-learn. Experience with SQL and NoSQL databases for data querying and manipulation. Understanding of big data technologies like Hadoop, Spark, or Kafka. Ability to perform statistical analysis and interpret results. Experience with data visualization libraries like Seaborn, Plotly, or D3.js. Excellent problem-solving and analytical skills. Strong communication skills to present findings to technical and non-technical audiences. Preferred Qualifications: Master's or PhD in Data Science, Statistics, Computer Science, or a related field. Experience with cloud platforms (e.g., AWS, Azure, GCP) for data processing and model deployment. Knowledge of NLP (Natural Language Processing) and computer vision. Familiarity with DevOps practices and containerization tools like Docker and Kubernetes. Exposure to time-series analysis and forecasting techniques. Certification in data science or machine learning tools is a plus. About Us: Bristlecone is the leading provider of AI-powered application transformation services for the connected supply chain. We empower our customers with speed, visibility, automation, and resiliency – to thrive on change. Our transformative solutions in Digital Logistics, Cognitive Manufacturing, Autonomous Planning, Smart Procurement and Digitalization are positioned around key industry pillars and delivered through a comprehensive portfolio of services spanning digital strategy, design and build, and implementation across a range of technology platforms. Bristlecone is ranked among the top ten leaders in supply chain services by Gartner. We are headquartered in San Jose, California, with locations across North America, Europe and Asia, and over 2,500 consultants. Bristlecone is part of the $19.4 billion Mahindra Group. Equal Opportunity Employer: Bristlecone is an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.
Information Security Responsibilities: Understand and adhere to Information Security policies, guidelines and procedures, and practice them for the protection of organizational data and information systems. Take part in information security training and act accordingly while handling information. Report all suspected security and policy breaches to the InfoSec team or the appropriate authority (CISO). Understand and adhere to the additional information security responsibilities that are part of the assigned job role.
Posted 1 week ago
0 years
0 Lacs
Chennai
On-site
We are seeking a data scientist with strong technical expertise in machine learning and statistical analysis to develop and deploy impactful data solutions. The role involves designing and implementing predictive models (e.g., regression, classification, NLP, time series), collaborating with stakeholders to solve business problems, and driving strategic decisions through experimentation and data-driven insights. Job Description: Key responsibilities: 1. Modeling & Machine Learning: Design and implement robust machine learning solutions (e.g., regression, classification, NLP, time series, recommendation systems). Evaluate and tune models using appropriate metrics (AUC, RMSE, precision/recall, etc.). Work on feature engineering, model interpretability, and performance optimization. 2. Data-Driven Business Strategy: Partner with stakeholders to identify key opportunities where data science can drive business value. Translate business problems into data science projects with clearly defined deliverables and success metrics. Provide actionable recommendations based on data analysis and model outputs. 3. Analytics and Experimentation: Conduct deep-dive exploratory analysis to uncover trends and insights. Apply statistical methods to test hypotheses, forecast trends, and measure campaign effectiveness. Design and analyze A/B tests and other experiments to support product and marketing decisions. Automate data pipelines and dashboards for ongoing monitoring of model and business performance. Technical Skills: Languages: Proficient in Python, MySQL. Libraries/Frameworks: scikit-learn, pandas, NumPy, time series (ARIMA), Bayesian models, market mix models, regression, XGBoost, LightGBM, TensorFlow, PyTorch. Statistical Methods: Proficient in statistical techniques such as hypothesis testing, regression analysis, and time-series analysis. Databases: PostgreSQL, BigQuery, MySQL. Visualization: Plotly, Seaborn, Matplotlib. Deployment: Experience with GitLab, CI/CD pipelines (good to have). Cloud Platforms: Familiarity with AWS, GCP, or Azure services (good to have). Location: Chennai Brand: Paragon Time Type: Full time Contract Type: Permanent
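As a hedged illustration of the time-series work referenced above (ARIMA and related models), the sketch below fits a small ARIMA model on a simulated monthly series using statsmodels. The series and the model order are arbitrary examples, not a recommendation for real data.

```python
# Minimal time-series forecasting sketch on a simulated monthly series.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
idx = pd.date_range("2020-01-01", periods=48, freq="MS")
# Synthetic series: a linear trend plus noise, standing in for e.g. monthly sales.
sales = pd.Series(100 + np.arange(48) * 2 + rng.normal(0, 5, 48), index=idx)

model = ARIMA(sales, order=(1, 1, 1)).fit()
forecast = model.forecast(steps=6)  # forecast the next six months
print(forecast.round(1))
```

On real data, model order would be chosen via diagnostics (ACF/PACF, information criteria) and the forecast validated against a holdout period.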
Posted 1 week ago
0.0 - 3.0 years
0 Lacs
Delhi, Delhi
On-site
Job Title: Data Scientist – Financial Analytics Location: Onsite – Okhla Vihar, New Delhi Experience: 2-3 Years About the Role: We are seeking a passionate and results-driven Data Scientist to join our analytics team at our Okhla Vihar office. This role is ideal for someone with 2–3 years of hands-on experience in applying data science techniques to financial datasets, with a strong foundation in Python, SQL, and time series forecasting. You will play a key role in analyzing complex financial data, building predictive models, and delivering actionable insights to support critical business decisions. Key Responsibilities: Analyze large volumes of structured and unstructured financial data to extract meaningful insights. Build and evaluate predictive models using machine learning techniques (e.g., scikit-learn). Perform time series analysis and forecasting for financial indicators (e.g., market trends, portfolio performance, cash flows). Design and implement robust feature engineering pipelines to improve model accuracy. Develop risk modeling frameworks to assess financial risk (e.g., credit risk, market risk). Write complex and optimized SQL queries for data extraction and transformation. Leverage Python libraries like Pandas, NumPy, and SciPy for data preprocessing and manipulation. Create clear and insightful data visualizations using Matplotlib, Seaborn, or Plotly to communicate findings. Work closely with finance and strategy teams to translate business needs into data-driven solutions. Monitor and fine-tune models in production to ensure continued relevance and accuracy. Document models, assumptions, and methodologies for auditability and reproducibility. Required Skills & Experience: 2–3 years of experience as a Data Scientist, preferably in a financial services or fintech environment. Proficient in Python (Pandas, scikit-learn, NumPy, SciPy, etc.). Strong experience in SQL for querying large datasets. Deep understanding of time series modeling (ARIMA, SARIMA, Prophet, etc.). Experience with feature selection, feature transformation, and data imputation techniques. Solid understanding of financial concepts such as ROI, risk/return, volatility, portfolio analysis, and pricing models. Exposure to risk modeling (credit scoring, stress testing, scenario analysis). Strong analytical and problem-solving skills with attention to detail. Experience with data visualization tools – Matplotlib, Seaborn, or Plotly. Ability to interpret model outputs and convey findings to both technical and non-technical stakeholders. Excellent communication and collaboration skills. Preferred Qualifications: Bachelor’s or Master’s degree in Data Science, Statistics, Finance, Economics, Computer Science, or a related field. Experience working with financial time series datasets (e.g., stock prices, balance sheets, trading data). Understanding of regulatory frameworks and compliance in financial analytics. Familiarity with cloud platforms (AWS, GCP) is a plus. Experience working in agile teams. What We Offer: Competitive salary and performance incentives Onsite role at a modern office located in Okhla Vihar, New Delhi A collaborative and high-growth work environment Opportunities to work on real-world financial data challenges Exposure to cross-functional teams in finance, technology, and business strategy How to Apply: If you’re excited to work at the intersection of data science and finance, and want to be part of a dynamic team solving real-world financial challenges, we’d love to hear from you.
Please send your resume, portfolio (if applicable), and a brief note about why you're a great fit to [your email/HR contact here]. Job Type: Full-time Pay: ₹600,000.00 - ₹1,200,000.00 per year Schedule: Morning shift Work Location: In person
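As a small, illustrative companion to the financial-analytics concepts listed above (returns, volatility, risk/return), here is a pandas sketch computing annualised return and rolling volatility on a simulated daily price series. All numbers and the price process are synthetic assumptions made for the example.

```python
# Sketch of basic risk/return metrics on a hypothetical daily price series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
dates = pd.bdate_range("2024-01-01", periods=252)
# Simulated price path (geometric-Brownian-style random walk).
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, 252))), index=dates)

returns = prices.pct_change().dropna()
annualised_return = (1 + returns.mean()) ** 252 - 1
annualised_vol = returns.std() * np.sqrt(252)
rolling_vol = returns.rolling(21).std() * np.sqrt(252)  # ~1-month rolling window

print(f"Annualised return: {annualised_return:.2%}")
print(f"Annualised volatility: {annualised_vol:.2%}")
print(rolling_vol.tail())
```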
Posted 1 week ago
7.0 years
0 Lacs
Delhi, India
On-site
Role Expectations: Data Collection and Cleaning: Collect, organize, and clean large datasets from various sources (internal databases, external APIs, spreadsheets, etc.). Ensure data accuracy, completeness, and consistency by cleaning and transforming raw data into usable formats. Data Analysis: Perform exploratory data analysis (EDA) to identify trends, patterns, and anomalies. Conduct statistical analysis to support decision-making and uncover insights. Use analytical methods to identify opportunities for process improvements, cost reductions, and efficiency enhancements. Reporting and Visualization: Create and maintain clear, actionable, and accurate reports and dashboards for both technical and non-technical stakeholders. Design data visualizations (charts, graphs, and tables) that communicate findings effectively to decision-makers. Experience with Power BI, Tableau, and Python libraries for data visualization such as Matplotlib, Seaborn, Plotly, pyplot, and pandas. Experience in generating descriptive, predictive, and prescriptive insights with Gen AI using MS Copilot in Power BI. Experience in prompt engineering and RAG architectures. Prepare reports for upper management and other departments, presenting key findings and recommendations. Collaboration: Work closely with cross-functional teams (marketing, finance, operations, etc.) to understand their data needs and provide actionable insights. Collaborate with IT and database administrators to ensure data is accessible and well-structured. Provide support and guidance to other teams regarding data-related questions or issues. Data Integrity and Security: Ensure compliance with data privacy and security policies and practices. Maintain data integrity and assist with implementing best practices for data storage and access. Continuous Improvement: Stay current with emerging data analysis techniques, tools, and industry trends. Recommend improvements to data collection, processing, and analysis procedures to enhance operational efficiency. Qualifications: Education: Bachelor's degree in Data Science, Statistics, Computer Science, Mathematics, or a related field. A Master's degree or relevant certifications (e.g., in data analysis, business intelligence) is a plus. Experience: Proven experience as a Data Analyst or in a similar analytical role (typically 7+ years). Experience with data visualization tools (e.g., Tableau, Power BI, Looker). Strong knowledge of SQL and experience with relational databases. Familiarity with data manipulation and analysis tools (e.g., Python, R, Excel, SPSS). Experience with Power BI, Tableau, and Python visualization libraries such as Matplotlib, Seaborn, Plotly, pyplot, and pandas. Experience with big data technologies (e.g., Hadoop, Spark) is a plus. Technical Skills: Proficiency in SQL and data query languages. Knowledge of statistical analysis and methodologies. Experience with data visualization and reporting tools. Knowledge of data cleaning and transformation techniques. Familiarity with machine learning and AI concepts is an advantage (for more advanced roles). Soft Skills: Strong analytical and problem-solving abilities. Excellent attention to detail and ability to identify trends in complex data sets. Good communication skills to present data insights clearly to both technical and non-technical audiences. Ability to work independently and as part of a team. Strong time management and organizational skills, with the ability to prioritize tasks effectively.
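To illustrate the Python visualization stack this listing names (pandas, Matplotlib, Seaborn), here is a minimal exploratory-analysis sketch on synthetic data. The column names, values, and grouping are invented for the demonstration.

```python
# Small exploratory-analysis sketch: summary stats plus two charts on synthetic data.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "region": rng.choice(["North", "South", "East", "West"], 400),
    "revenue": rng.gamma(2.0, 5000, 400),
})

# Per-region summary table for a quick sanity check.
print(df.groupby("region")["revenue"].describe().round(0))

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
sns.histplot(df["revenue"], bins=30, ax=axes[0])          # overall distribution
sns.boxplot(data=df, x="region", y="revenue", ax=axes[1])  # spread by region
plt.tight_layout()
plt.show()
```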
Posted 1 week ago
0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
You will lead the development of predictive machine learning models for Revenue Cycle Management analytics, along the lines of: 1 Payer Propensity Modeling - predicting payer behavior and reimbursement likelihood 2 Claim Denials Prediction - identifying high-risk claims before submission 3 Payment Amount Prediction - forecasting expected reimbursement amounts 4 Cash Flow Forecasting - predicting revenue timing and patterns 5 Patient-Related Models - enhancing patient financial experience and outcomes 6 Claim Processing Time Prediction - optimizing workflow and resource allocation Additionally, we will work on emerging areas and integration opportunities—for example, denial prediction + appeal success probability or prior authorization prediction + approval likelihood models. You will reimagine how providers, patients, and payors interact within the healthcare ecosystem through intelligent automation and predictive insights, ensuring that providers can focus on delivering the highest quality patient care. VHT Technical Environment 1 Cloud Platform: AWS (SageMaker, S3, Redshift, EC2) 2 Development Tools: Jupyter Notebooks, Git, Docker 3 Programming: Python, SQL, R (optional) 4 ML/AI Stack: Scikit-learn, TensorFlow/PyTorch, MLflow, Airflow 5 Data Processing: Spark, Pandas, NumPy 6 Visualization: Matplotlib, Seaborn, Plotly, Tableau
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are seeking a data scientist with strong technical expertise in machine learning and statistical analysis to develop and deploy impactful data solutions. The role involves designing and implementing predictive models (e.g., regression, classification, NLP, time series), collaborating with stakeholders to solve business problems, and driving strategic decisions through experimentation and data-driven insights. Job Description: Key responsibilities: Modeling & Machine Learning: Design and implement robust machine learning solutions (e.g., regression, classification, NLP, time series, recommendation systems). Evaluate and tune models using appropriate metrics (AUC, RMSE, precision/recall, etc.). Work on feature engineering, model interpretability, and performance optimization. Data-Driven Business Strategy: Partner with stakeholders to identify key opportunities where data science can drive business value. Translate business problems into data science projects with clearly defined deliverables and success metrics. Provide actionable recommendations based on data analysis and model outputs. Analytics and Experimentation: Conduct deep-dive exploratory analysis to uncover trends and insights. Apply statistical methods to test hypotheses, forecast trends, and measure campaign effectiveness. Design and analyze A/B tests and other experiments to support product and marketing decisions. Automate data pipelines and dashboards for ongoing monitoring of model and business performance. Technical Skills: Languages: Proficient in Python, MySQL. Libraries/Frameworks: scikit-learn, pandas, NumPy, time series (ARIMA), Bayesian models, market mix models, regression, XGBoost, LightGBM, TensorFlow, PyTorch. Statistical Methods: Proficient in statistical techniques such as hypothesis testing, regression analysis, and time-series analysis. Databases: PostgreSQL, BigQuery, MySQL. Visualization: Plotly, Seaborn, Matplotlib. Deployment: Experience with GitLab, CI/CD pipelines (good to have). Cloud Platforms: Familiarity with AWS, GCP, or Azure services (good to have). Location: Chennai Brand: Paragon Time Type: Full time Contract Type: Permanent
Posted 1 week ago
2.0 - 4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Note: Gaming, E-Commerce, Fintech, EdTech or any B2C experience is a must. Responsibilities: • Develop an in-depth understanding of user journeys and generate data-driven insights and recommendations to help product and customer success teams in meticulous decision-making. • Define and analyze key product data sets to understand customer and product behavior. • Work with stakeholders throughout the organization to identify opportunities for leveraging data to identify areas of growth and build strong data-backed business cases around the same. • Perform statistical analysis/modelling on data and uncover hidden data patterns and correlations. • Perform feature engineering and develop and deploy predictive models/algorithms. • Coordinate with different teams to implement and deploy AI/ML-driven models. • Conduct ad-hoc analysis around product areas for growth hacking and produce consumable reports for multiple business stakeholders. • Develop processes and tools to monitor and analyze model performance and data accuracy. Technical Skills: • At least 2-4 years of experience working with real-world data and building statistical models. • Hands-on experience programming with Python, including working knowledge of packages like Pandas, NumPy, SciPy, Scikit-Learn, Seaborn, Plotly, etc. • Hands-on knowledge of SQL and Excel. • Deep understanding of key supervised and unsupervised ML algorithms – should be able to explain what is happening under the hood and their real-world advantages/drawbacks. • Strong foundation in statistics and probability theory. • Knowledge of advanced statistical techniques and concepts (properties of distributions, statistical tests, simulations, Markov chains, etc.) and experience with applications. Other Skills: • Preferred Domain Experience: Gaming, E-Commerce, or any B2C experience. • Ability to break a problem into smaller chunks and design solutions accordingly. • Ability to dive deeper into data, ask the right questions, analyze with statistical methods and generate insights. • Ability to write modular, clean and well-documented code along with crisp design documents. • Strong communication and presentation skills.
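As a minimal example of the statistical-testing skills mentioned above, the sketch below runs a two-sample Welch t-test with SciPy on synthetic engagement data for two user cohorts. The metric, cohort sizes, and effect size are made-up assumptions for illustration.

```python
# Sketch of a simple two-sample test comparing a metric between two user cohorts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
control = rng.normal(12.0, 4.0, 1000)   # e.g. minutes of engagement per day, cohort A
variant = rng.normal(12.6, 4.0, 1000)   # cohort B after a hypothetical product change

t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("No significant difference detected.")
```

In a real experiment, the test choice, sample size, and significance threshold would be fixed before looking at the data.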
Posted 1 week ago
0.0 - 1.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
Red & White Education Pvt Ltd, founded in 2008, is Gujarat's leading educational institute. Accredited by NSDC and ISO, we focus on Integrity, Student-Centricity, Innovation, and Unity. Our goal is to equip students with industry-relevant skills and ensure they are employable globally. Join us for a successful career path. Job Description: Faculty members guide students, deliver course materials, conduct lectures, assess performance, and provide mentorship. Strong communication skills and a commitment to supporting students are essential. Key Responsibilities: Deliver high-quality lectures on AI, Machine Learning, and Data Science. Design and update course materials, assignments, and projects. Guide students on hands-on projects, real-world applications, and research work. Provide mentorship and support for student learning and career development. Stay updated with the latest trends and advancements in AI/ML and Data Science. Conduct assessments, evaluate student progress, and provide feedback. Participate in curriculum development and improvements. Skills & Tools: Core Skills: ML, Deep Learning, NLP, Computer Vision, Business Intelligence, AI Model Development, Business Analysis. Programming: Python, SQL (Must), Pandas, NumPy, Excel. ML & AI Tools: Scikit-learn (Must), XGBoost, LightGBM, TensorFlow, PyTorch (Must), Keras, Hugging Face. Data Visualization: Tableau, Power BI (Must), Matplotlib, Seaborn, Plotly. NLP & CV: Transformers, BERT, GPT, OpenCV, YOLO, Detectron2. Advanced AI: Transfer Learning, Generative AI, Business Case Studies. Education & Experience Requirements: Bachelor's/Master’s/Ph.D. in Computer Science, AI, Data Science, or a related field. Minimum 1+ years of teaching or industry experience in AI/ML and Data Science. Hands-on experience with Python, SQL, TensorFlow, PyTorch, and other AI/ML tools. Practical exposure to real-world AI applications, model deployment, and business analytics. Additional Skills: Confident body language and clear communication. Strong classroom management and discipline skills. Punctual, prepared, and passionate about teaching. Open to learning and professional development. Proficient in verbal and written communication. Strong problem-solving, leadership, and decision-making abilities. Positive attitude and ability to work independently. For further information, please feel free to contact us at 7862813693 or via email at career@rnwmultimedia.edu.in. Job Types: Full-time, Permanent Pay: ₹30,000.00 - ₹35,000.00 per month Benefits: Cell phone reimbursement Flexible schedule Leave encashment Paid sick time Paid time off Schedule: Day shift Morning shift Supplemental Pay: Performance bonus Yearly bonus Application Question(s): Current Salary? Experience: Teaching / Mentoring: 1 year (Required) AI: 1 year (Required) ML: 1 year (Required) Data science: 1 year (Required) Location: Ahmedabad, Gujarat (Required) Work Location: In person
Posted 1 week ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Note: Please apply only if you have 6 years or more of relevant experience in Data Science (excluding internships), are comfortable working 5 days a week from Gurugram, Haryana, and are an immediate joiner or currently serving your notice period. About Eucloid: At Eucloid, innovation meets impact. As a leader in AI and Data Science, we create solutions that redefine industries—from Hi-tech and D2C to Healthcare and SaaS. With partnerships with giants like Databricks, Google Cloud, and Adobe, we’re pushing boundaries and building next-gen technology. Join our talented team of engineers, scientists, and visionaries from top institutes like IITs, IIMs, and NITs. At Eucloid, growth is a promise, and your work will drive transformative results for Fortune 100 clients. What You’ll Do: As a GenAI Engineer, you will play a pivotal role in designing and deploying data-driven and GenAI-powered solutions. Your responsibilities will include: Analyzing large sets of structured and unstructured data to extract meaningful insights and drive business impact. Designing and developing Machine Learning models, including regression, time series forecasting, clustering, classification, and NLP. Building, fine-tuning, and deploying Large Language Models (LLMs) such as GPT, BERT, or LLaMA for tasks like text summarization, generation, and classification. Working with Hugging Face Transformers, LangChain, and vector databases (e.g., FAISS, Pinecone) to develop scalable GenAI pipelines. Applying prompt engineering techniques and Reinforcement Learning with Human Feedback (RLHF) to optimize GenAI applications. Building and deploying models using Python, R, TensorFlow, PyTorch, and Scikit-learn within production-ready environments like Flask, Azure Functions, and AWS Lambda. Developing and maintaining scalable data pipelines in collaboration with data engineers. Implementing solutions on cloud platforms like AWS, Azure, or GCP for scalable and high-performance AI/ML applications. Enhancing BI and visualization tools such as Tableau, Power BI, Qlik, and Plotly to communicate data insights effectively. Collaborating with stakeholders to translate business challenges into GenAI/data science problems and actionable solutions. Staying updated on emerging GenAI and AI/ML technologies and incorporating best practices into projects. What Makes You a Fit: Academic Background: Bachelor’s or Master’s degree in Data Science, Computer Science, Mathematics, Statistics, or a related field. Technical Expertise: 6+ years of hands-on experience in applying Machine Learning techniques (clustering, classification, regression, NLP). Strong proficiency in Python and SQL, with experience in frameworks like Flask or Django. Expertise in Big Data environments using PySpark. Deep understanding of ML frameworks such as TensorFlow, PyTorch, and Scikit-learn. Hands-on experience with Hugging Face Transformers, OpenAI API, or similar GenAI libraries. Knowledge of vector databases and retrieval-augmented generation (RAG) techniques. Proficiency in cloud-based AI/ML deployment on AWS, Azure, or GCP. Experience in Docker and containerization for ML model deployment. Knowledge of code management methodologies and best practices for implementing scalable ML/GenAI solutions. Extra Skills: Experience in Deep Learning and Reinforcement Learning. Hands-on experience with NLP, Text Mining, and LLM architectures. Experience in business intelligence and data visualization tools (Tableau, Power BI, Qlik).
Experience with prompt engineering and fine-tuning LLMs for production use cases. Ability to effectively communicate insights and translate technical work into business value. Why You’ll Love It Here: Innovate with the Best Tech: Work on groundbreaking projects using AI, GenAI, LLMs, and massive-scale data platforms. Tackle challenges that push the boundaries of innovation. Impact Industry Giants: Deliver business-critical solutions for Fortune 100 clients across Hi-tech, D2C, Healthcare, SaaS, and Retail. Partner with platforms like Databricks, Google Cloud, and Adobe to create high-impact products. Collaborate with a World-Class Team: Join exceptional professionals from IITs, IIMs, NITs, and global leaders like Walmart, Amazon, Accenture, and ZS. Learn, grow, and lead in a team that values expertise and collaboration. Accelerate Your Growth: Access our Centres of Excellence to upskill and work on industry-leading innovations. Your professional development is a top priority. Work in a Culture of Excellence: Be part of a dynamic workplace that fosters creativity, teamwork, and a passion for building transformative solutions. Your contributions will be recognized and celebrated. About Our Leadership: Anuj Gupta – Former Amazon leader with over 22 years of experience in building and managing large engineering teams (B.Tech, IIT Delhi; MBA, ISB Hyderabad). Raghvendra Kushwah – Business consulting expert with 21+ years at Accenture and Cognizant (B.Tech, IIT Delhi; MBA, IIM Lucknow). Key Benefits: Competitive salary and performance-based bonus. Comprehensive benefits package, including health insurance and flexible work hours. Opportunities for professional development and career growth. Location: Gurugram. Submit your resume to saurabh.bhaumik@eucloid.com with the subject line “Application: GenAI Engineer.” Eucloid is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment.
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Are you looking for a career move that will put you at the heart of a global financial institution? Then bring your skills in data-driven modelling and data engineering to Citi’s Global FX Team. By joining Citi, you will become part of a global organization whose mission is to serve as a trusted partner to our clients by responsibly providing financial services that enable growth and economic progress. Team/Role Overview: The FX Data Analytics & AI Technology team, within Citi's FX Technology organization, seeks a highly motivated Full Stack Data Scientist / Data Engineer. The FX Data Analytics & Gen AI Technology team provides data, analytics, and tools to Citi FX sales and trading globally and is responsible for defining and executing the overall data strategy for FX. The successful candidate will be responsible for developing and implementing data-driven models, and engineering robust data and analytics pipelines, to unlock actionable insights from our vast amount of global FX data. The role will be instrumental in executing the overall data strategy for FX and will benefit from close interaction with a wide range of stakeholders across sales, trading, and technology. We are looking for a proactive individual with a practical and pragmatic attitude, the ability to build consensus, and the ability to work both collaboratively and independently in a dynamic environment. What You’ll Do: Design, develop and implement quantitative models to derive insights from large and complex FX datasets, with a focus on understanding market trends and client behavior, identifying revenue opportunities, and optimizing the FX business. Engineer data and analytics pipelines using modern, cloud-native technologies and CI/CD workflows, focusing on consolidation, automation, and scalability. Collaborate with stakeholders across sales and trading to understand data needs, translate them into impactful data-driven solutions, and deliver these in partnership with technology. Develop and integrate functionality to ensure adherence with best practices in terms of data management, need-to-know (NTK), and data governance. Contribute to shaping and executing the overall data strategy for FX in collaboration with the existing team and senior stakeholders. What We’ll Need From You: 8 to 12 years of experience. Master’s degree or above (or equivalent education) in a quantitative discipline. Proven experience in software engineering and development, and a strong understanding of computer systems and how they operate. Excellent Python programming skills, including experience with relevant analytical and machine learning libraries (e.g., pandas, polars, numpy, sklearn, TensorFlow/Keras, PyTorch, etc.), in addition to visualization and API libraries (Matplotlib, Plotly, Streamlit, Flask, etc.). Experience developing and implementing Gen AI applications from data in a financial context. Proficiency working with version control systems such as Git, and familiarity with Linux computing environments. Experience working with different database and messaging technologies such as SQL, KDB, MongoDB, Kafka, etc. Familiarity with data visualization and ideally development of analytical dashboards using Python and BI tools. Excellent communication skills, both written and verbal, with the ability to convey complex information clearly and concisely to technical and non-technical audiences. Ideally, some experience working with CI/CD pipelines and containerization technologies like Docker and Kubernetes.
Ideally, some familiarity with data workflow management tools such as Airflow, as well as big data technologies such as Apache Spark/Ignite or other caching and analytics technologies. A working knowledge of FX markets and financial instruments would be beneficial. Job Family Group: Technology | Job Family: Applications Development | Time Type: Full time. Most Relevant Skills: Please see the requirements listed above. Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter. Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
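As a rough sketch of the Python API work this listing mentions (Flask is among the named libraries), here is a minimal Flask endpoint that serves a prediction. The predict_spread function, route, and payload fields are hypothetical stand-ins for a real model and are not specific to Citi's systems.

```python
# Minimal Flask sketch for serving a model prediction behind a REST endpoint.
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict_spread(notional: float, tenor_days: int) -> float:
    """Hypothetical placeholder for a real pricing/analytics model."""
    return round(0.0001 * notional / max(tenor_days, 1), 6)

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"notional": 1_000_000, "tenor_days": 30}.
    payload = request.get_json(force=True)
    result = predict_spread(payload["notional"], payload["tenor_days"])
    return jsonify({"predicted_spread": result})

if __name__ == "__main__":
    app.run(port=8000, debug=True)
```

A production deployment would load a trained model artifact, validate inputs, and sit behind proper authentication and CI/CD, as described in the listing.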
Posted 1 week ago
1.0 - 2.0 years
6 - 7 Lacs
Bengaluru
On-site
Overview: We are an integral part of Annalect Global and Omnicom Group, one of the largest media and advertising agency holding companies in the world. Omnicom’s branded networks and numerous specialty firms provide advertising, strategic media planning and buying, digital and interactive marketing, direct and promotional marketing, public relations, and other specialty communications services. Our agency brands are consistently recognized as being among the world’s creative best. Annalect India plays a key role for our group companies and global agencies by providing stellar products and services in areas of Creative Services, Technology, Marketing Science (data & analytics), Market Research, Business Support Services, Media Services, Consulting & Advisory Services. We are growing rapidly and looking for talented professionals like you to be part of this journey. Let us build this, together. Responsibilities: Translating data into clear, compelling, and actionable insights by leveraging advanced analytics tactics conducted by the central resource. Developing and executing attribution and measurement projects. Ensuring timely follow-through on all scheduled and ad hoc deliverables. With the leaders of the functional specialty teams, keeping track of projects being run by the Functional Specialists to ensure they are done on time and to the right level of quality. Development of presentations to clients, including the results of attribution and modelling projects in a clear and insightful narrative, digestible by a lay person. Understanding of consumer and marketplace behaviors, particularly those that most impact business and marketing goals. Qualifications: Bachelor’s degree in statistics, mathematics, economics, engineering, information management, social sciences or business/marketing-related fields; Master's preferred. 1 to 2 years of experience in a quantitative data-driven field, media, or equivalent coursework or academic projects. Good working knowledge and understanding of statistics and advanced analytics; marketing mix modeling / econometric analysis / multivariate regression is a must-have skill. Experience with coding languages; must have working knowledge of Python and/or R. Understanding of fundamental database concepts and SQL is beneficial. Good to have: marketing analytics and data science techniques. Strong Excel skills (VLOOKUPs, Pivot Tables, Macros and other advanced functions). Excellent communication skills. Good to have: experience with data visualization platforms; Power BI/Tableau, QlikView, Plotly, SAS, etc. are good to have. Good to have: prior agency experience. Good to have: familiarity with digital marketing and media concepts & tools and web analytics (Google DCM, Adobe Analytics, Google Analytics, etc.)
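To illustrate the marketing mix modelling / multivariate regression skill named above, here is a small statsmodels OLS sketch on synthetic channel-spend data. The variables and coefficients are invented for demonstration, and a real marketing mix model would also handle adstock, saturation, and seasonality, which this sketch omits.

```python
# Sketch of a simple multivariate regression: sales explained by channel spend.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
df = pd.DataFrame({
    "tv_spend": rng.uniform(0, 100, 200),
    "digital_spend": rng.uniform(0, 100, 200),
})
# Synthetic outcome with known coefficients plus noise.
df["sales"] = 50 + 0.8 * df["tv_spend"] + 1.2 * df["digital_spend"] + rng.normal(0, 10, 200)

X = sm.add_constant(df[["tv_spend", "digital_spend"]])
model = sm.OLS(df["sales"], X).fit()
print(model.summary())
```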
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Project Role: AI/ML Engineer. Project Role Description: Develops applications and systems that utilize AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. Must be able to apply GenAI models as part of the solution. Could also include, but is not limited to, deep learning, neural networks, chatbots, and image processing. Must-have skills: Machine Learning. Good-to-have skills: NA. Minimum 3 year(s) of experience is required. Educational Qualification: 15 years of full-time education. Summary: These roles have many overlapping skills with GenAI Engineers and architects. The description may scale up/scale down based on expected seniority. Roles & Responsibilities: -Implement generative AI models and identify insights that can be used to drive business decisions. Work closely with multi-functional teams to understand business problems, develop hypotheses, and test those hypotheses with data, collaborating with cross-functional teams to define AI project requirements and objectives, ensuring alignment with overall business goals. -Conducting research to stay up-to-date with the latest advancements in generative AI, machine learning, and deep learning techniques and identify opportunities to integrate them into our products and services. -Optimizing existing generative AI models for improved performance, scalability, and efficiency. -Ensuring data quality and accuracy. -Leading the design and development of prompt engineering strategies and techniques to optimize the performance and output of our GenAI models. -Implementing cutting-edge NLP techniques and prompt engineering methodologies to enhance the capabilities and efficiency of our GenAI models. -Determining the most effective prompt generation processes and approaches to drive innovation and excellence in the field of AI technology, collaborating with AI researchers and developers. -Experience working with cloud-based platforms (e.g., AWS, Azure or related). -Strong problem-solving and analytical skills. -Proficiency in handling various data formats and sources through omni-channel for speech and voice applications, as part of conversational AI. -Prior statistical modelling experience. -Demonstrable experience with deep learning algorithms and neural networks. -Developing clear and concise documentation, including technical specifications, user guides, and presentations, to communicate complex AI concepts to both technical and non-technical stakeholders. -Contributing to the establishment of best practices and standards for generative AI development within the organization. Professional & Technical Skills: -Must have solid experience developing and implementing generative AI models, with a strong understanding of deep learning techniques such as GPT, VAE, and GANs. -Must be proficient in Python and have experience with machine learning libraries and frameworks such as TensorFlow, PyTorch, or Keras. -Must have strong knowledge of data structures, algorithms, and software engineering principles. -Must be familiar with cloud-based platforms and services, such as AWS, GCP, or Azure. -Need to have experience with natural language processing (NLP) techniques and tools, such as SpaCy, NLTK, or Hugging Face. -Must be familiar with data visualization tools and libraries, such as Matplotlib, Seaborn, or Plotly. -Need to have knowledge of software development methodologies, such as Agile or Scrum.
-Possess excellent problem-solving skills, with the ability to think critically and creatively to develop innovative AI solutions. Additional Information: -Must have a degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field. A Ph.D. is highly desirable. -Strong communication skills, with the ability to effectively convey complex technical concepts to a diverse audience. -A proactive mindset, with the ability to work independently and collaboratively in a fast-paced, dynamic environment.
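As a minimal illustration of the NLP tooling this role lists (Hugging Face), the sketch below runs an off-the-shelf transformers pipeline. The input sentences are invented, and in practice a specific model checkpoint would be pinned rather than relying on the library default that is downloaded on first use.

```python
# Minimal Hugging Face pipeline sketch for a text classification task.
from transformers import pipeline

# With no model specified, the library falls back to a default sentiment checkpoint;
# a production system would pin an explicit model name and version.
classifier = pipeline("sentiment-analysis")

texts = [
    "The new claims workflow cut our processing time in half.",
    "The chatbot keeps misunderstanding simple requests.",
]
for text, result in zip(texts, classifier(texts)):
    print(f"{result['label']:>8}  {result['score']:.3f}  {text}")
```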
Posted 1 week ago
0 years
0 Lacs
India
On-site
Job Role: Financial Analysts and Advisors for Workflow Annotation Specialist. Project Type: Contract-based / Freelance / Part-time – 1 month. Job Overview: We are seeking domain experts to participate in a Workflow Annotation Project. The role involves documenting and annotating the step-by-step workflows of key tasks within the candidate’s area of expertise. The goal is to capture real-world processes in a structured format for AI training and process optimization purposes. Domain Expertise Required: Collect market and company data, build/maintain financial models, craft decks, track portfolios, run risk and scenario analyses, develop client recommendations, and manage CRM workflows. Tools & Technologies You May Have Worked With: Commercial Software – Bloomberg Terminal, Refinitiv Eikon, FactSet, Excel, PowerPoint, Salesforce FSC, Redtail, Wealthbox, Orion Advisor Tech, Morningstar Office, BlackRock Aladdin, Riskalyze, Tolerisk, eMoney Advisor, MoneyGuidePro, Tableau, Power BI. Open/Free Software – LibreOffice Calc, Google Sheets, Python (Pandas, yfinance, NumPy, SciPy, Matplotlib), R (QuantLib, tidyverse), SuiteCRM, EspoCRM, Plotly Dash, Streamlit, Portfolio Performance, Ghostfolio, Yahoo Finance API, Alpha Vantage, IEX Cloud (free tier).
Posted 1 week ago
0.0 - 1.0 years
0 - 0 Lacs
Tirupati
Remote
About the Role: We are looking for a passionate and knowledgeable Python & Data Science Instructor to teach and mentor students in our [online / in-person] data science program. You’ll play a key role in delivering engaging lessons, guiding hands-on projects, and supporting learners as they build real-world skills in Python programming and data science. This is a great opportunity for professionals who love teaching and want to empower the next generation of data scientists. 📚 Responsibilities: Teach core topics including Python fundamentals; data manipulation with pandas and NumPy; data visualization using Matplotlib/Seaborn/Plotly; machine learning with scikit-learn; and Jupyter Notebooks, data cleaning, and exploratory data analysis. Deliver live or recorded lectures, tutorials, and interactive sessions. Review and provide feedback on student projects and assignments. Support students via Q&A sessions, forums, or 1-on-1 mentoring. Collaborate with curriculum designers to refine and improve content. Stay updated with the latest industry trends and tools. ✅ Requirements: Strong proficiency in Python and the data science stack (NumPy, pandas, Matplotlib, scikit-learn, etc.). Hands-on experience with real-world data projects or industry experience in data science. Prior teaching, mentoring, or public speaking experience (formal or informal). Clear communication and the ability to explain complex topics to beginners. [Bachelor’s/Master’s/PhD] in Computer Science, Data Science, Statistics, or a related field (preferred but not required). ⭐ Bonus Points: Experience with deep learning frameworks (TensorFlow, PyTorch). Familiarity with cloud platforms (AWS, GCP, Azure). Experience teaching online using tools like Zoom, Slack, or LMS platforms. Contribution to open source, a GitHub portfolio, or Kaggle participation. 🚀 What We Offer: Flexible working hours and a remote-friendly environment. Opportunity to impact learners around the world. Supportive and innovative team culture. Competitive pay and performance incentives.
Posted 1 week ago
0 years
0 Lacs
Chennai
On-site
Expertise in handling large-scale structured and unstructured data. Has efficiently handled large-scale generative AI datasets and outputs. Familiarity with Docker tools and pipenv/conda/poetry environments. Comfort following Python project management best practices (use of setup.py, logging, pytest, relative module imports, Sphinx docs, etc.). Familiarity with GitHub (clone, fetch, pull/push, raising issues and PRs, etc.). High familiarity with DL theory and practice in NLP applications. Comfort coding with Hugging Face, LangChain, Chainlit, TensorFlow and/or PyTorch, scikit-learn, NumPy and pandas. Comfort using two or more open-source NLP modules such as spaCy, TorchText, fastai.text, farm-haystack, and others. Knowledge of fundamental text data processing (use of regex, token/word analysis, spelling correction/noise reduction in text, segmenting noisy unfamiliar sentences/phrases at the right places, deriving insights from clustering, etc.). Has implemented real-world BERT or other transformer fine-tuned models (sequence classification, NER or QA), from data preparation and model creation through inference and deployment. Use of GCP services like BigQuery, Cloud Functions, Cloud Run, Cloud Build, Vertex AI. Good working knowledge of other open-source packages for benchmarking and deriving summaries. Experience in using GPU/CPU on cloud and on-prem infrastructure. Education: Bachelor’s in Engineering or Master’s degree in Computer Science, Engineering, Maths or Science. Completion of modern NLP/LLM courses or participation in open competitions is also welcomed. Design NLP/LLM/GenAI applications/products by following robust coding practices. Explore SoTA models/techniques so that they can be applied to automotive industry use cases. Conduct ML experiments to train/infer models; if need be, build models that abide by memory and latency restrictions. Deploy REST APIs or a minimalistic UI for NLP applications using Docker and Kubernetes tools. Showcase NLP/LLM/GenAI applications in the best way possible to users through web frameworks (Dash, Plotly, Streamlit, etc.). Converge multiple bots into super apps using LLMs with multimodality. Develop agentic workflows using Autogen, Agentbuilder, LangGraph. Build modular AI/ML products that can be consumed at scale.
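To illustrate the "showcase applications through web frameworks" point above, here is a minimal Streamlit sketch that wraps a scoring function behind a simple UI. The score_text function is a hypothetical, keyword-based placeholder standing in for a real fine-tuned model.

```python
# Minimal Streamlit sketch for showcasing an NLP model behind a simple UI.
# Save as app.py and launch with: streamlit run app.py
import streamlit as st

def score_text(text: str) -> dict:
    """Hypothetical stand-in for a real model call (e.g. a fine-tuned transformer)."""
    positive_words = {"good", "great", "excellent", "reliable"}
    hits = sum(word in positive_words for word in text.lower().split())
    return {"label": "POSITIVE" if hits else "NEUTRAL", "score": min(1.0, 0.5 + 0.1 * hits)}

st.title("Text scoring demo")
user_text = st.text_area("Enter text to score")
if st.button("Score") and user_text:
    st.json(score_text(user_text))
```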
Posted 1 week ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
JD for Data Science:
We are seeking an experienced Data Scientist to join our growing analytics and AI team. This role will involve working closely with cross-functional teams to deliver actionable insights, build predictive models, and drive data-driven decision-making across the organization. The ideal candidate is someone who combines strong analytical skills with hands-on experience in statistical modeling, machine learning, and data engineering best practices.
Key Responsibilities:
Understand business problems and translate them into data science solutions.
Build, validate, and deploy machine learning models for prediction, classification, clustering, etc.
Perform deep-dive exploratory data analysis and uncover hidden insights.
Work with large, complex datasets from multiple sources; perform data cleaning and preprocessing.
Design and run A/B tests and experiments to validate hypotheses.
Collaborate with data engineers, business analysts, and product managers to drive initiatives from ideation to production.
Present results and insights to non-technical stakeholders in a clear, concise manner.
Contribute to the development of reusable code libraries, templates, and documentation.
Required Skills & Qualifications:
Bachelor’s or Master’s degree in Computer Science, Statistics, Mathematics, Engineering, or a related field.
3–7 years of hands-on experience in data science, machine learning, or applied statistics.
Proficiency in Python or R, and hands-on experience with libraries such as scikit-learn, pandas, NumPy, XGBoost, TensorFlow/PyTorch.
Solid understanding of machine learning algorithms, statistical inference, and data mining techniques.
Strong SQL skills; experience working with large-scale databases (e.g., Snowflake, BigQuery, Redshift).
Experience with data visualization tools like Power BI, Tableau, or Plotly.
Working knowledge of cloud platforms like AWS, Azure, or GCP is preferred.
Familiarity with MLOps tools and model deployment best practices is a plus.
Preferred Qualifications:
Exposure to time series analysis, NLP, or deep learning techniques.
Experience working in domains like healthcare, fintech, retail, or supply chain.
Understanding of version control (Git) and Agile development methodologies.
Why Join Us:
Opportunity to work on impactful, real-world problems.
Be part of a high-performing and collaborative team.
Exposure to cutting-edge technologies in data and AI.
Career growth and continuous learning environment.
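Since the role involves designing and running A/B tests, a minimal sketch of a two-proportion significance test is shown below; the conversion counts are invented for illustration, and statsmodels is used only as one common choice of library, not one prescribed by the posting.

```python
# Minimal A/B test sketch: two-proportion z-test on conversion rates.
# All numbers below are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

conversions = [320, 385]        # converted users in control / variant
visitors = [10000, 10000]       # users exposed in each arm

stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("Reject the null: the variant's conversion rate differs significantly.")
else:
    print("No significant difference at the 5% level.")
```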
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Data Analyst - Python
Job Classification: Full-Time
Work Location: Work-From-Office (Hyderabad)
Education: BE-BCS, B-Tech-IT, MCA or Equivalent
Experience Level: 3 Years (2+ years’ Data Analysis experience)
Company Description
Team Geek Solutions (TGS) is a global technology partner based in Texas, specializing in AI and Generative AI solutions, custom software development, and talent optimization. TGS offers a range of services tailored to industries like BFSI, Telecom, FinTech, Healthcare, and Manufacturing. With expertise in AI/ML development, cloud migration, software development, and more, TGS helps businesses achieve operational efficiency and drive innovation.
Position Description
We are looking for a Data Analyst to analyze large amounts of raw information to find patterns that will help improve our products. We will rely on you to build data models to extract valuable business insights. In this role, you should be highly analytical with a knack for analysis, math and statistics. Your task is to gather and prepare data from multiple sources, run statistical analyses, and communicate your findings in a clear and objective way. Your goal will be to help our company analyze trends to make better decisions.
Qualifications/Skills Required
2+ years’ experience in Python, with knowledge of packages such as pandas, NumPy, SciPy, scikit-learn, Flask
Proficiency in at least one data visualization tool, such as Matplotlib, Seaborn or Plotly
Experience with popular statistical and machine learning techniques, such as clustering, SVM, KNN, decision trees, etc.
Experience with databases, such as SQL and MongoDB
Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
Knowledge of the Python libraries OpenCV and TensorFlow is a plus
Job Responsibilities / Essential Functions
Identify, analyze, and interpret trends or patterns in complex data sets
Explore and visualize data
Use machine learning tools to select features, create and optimize classifiers
Clearly communicate the findings from the analysis to turn information into something actionable through reports, dashboards, and/or presentations.
Skills: pandas, business insights, NumPy, Plotly, SciPy, Python, MongoDB, analytical skills, TensorFlow, statistics, machine learning, SQL, data, OpenCV, Seaborn, data visualization, Matplotlib, scikit-learn, Flask
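To make the clustering and visualization workflow described above concrete, here is a minimal scikit-learn and Matplotlib sketch; the file name and feature columns are hypothetical placeholders rather than anything specified by the posting.

```python
# Minimal sketch: cluster records and plot the resulting segments.
# "transactions.csv" and its columns are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

df = pd.read_csv("transactions.csv")                # hypothetical data
features = df[["amount", "frequency"]].dropna()     # hypothetical columns

scaled = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)

plt.scatter(features["amount"], features["frequency"], c=labels)
plt.xlabel("amount")
plt.ylabel("frequency")
plt.title("Customer segments (illustrative)")
plt.show()
```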
Posted 1 week ago
10.0 - 13.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Title: Principal Data Scientist
Location: Chennai or Bangalore
Experience: 10-13 years
Job Summary
We are seeking a highly skilled and techno-functional Optimization Specialist with 10–13 years of experience in developing enterprise-grade optimization solutions and platforms. The ideal candidate will possess deep expertise in mathematical optimization, strong hands-on Python programming skills, and the ability to bridge the gap between technical and business teams. You will lead the design and deployment of scalable optimization engines to solve complex business problems across supply chain, manufacturing, pricing, logistics, and workforce planning.
Key Responsibilities
Design & Development: Architect and implement optimization models (LP, MILP, CP, metaheuristics) using solvers like Gurobi, CPLEX, or open-source equivalents.
Platform Building: Lead the design and development of optimization-as-a-service platforms with modular, reusable architecture.
Techno-Functional Role: Translate business requirements into formal optimization problems and provide functional consulting support across domains.
End-to-End Ownership: Manage the full lifecycle from problem formulation, model design, and data pipeline integration to production deployment.
Python Expertise: Build robust, production-grade code with modular design using Python, pandas, NumPy, Pyomo/PuLP, and APIs (FastAPI/Flask).
Collaboration: Work with business stakeholders, data scientists, and software engineers to ensure solutions are accurate, scalable, and aligned with objectives.
Performance Tuning: Continuously improve model runtime and performance; conduct sensitivity analysis and scenario modeling.
Innovation: Stay abreast of the latest in optimization techniques, frameworks, and tools; proactively suggest enhancements.
Required Skills & Qualifications
Bachelor’s or Master’s in Operations Research, Industrial Engineering, Computer Science, or related fields.
10–12 years of experience in solving real-world optimization problems.
Deep understanding of mathematical programming (LP/MILP/CP), heuristics/metaheuristics, and stochastic modeling.
Proficiency in Python and experience with relevant libraries (Pyomo, PuLP, OR-Tools, SciPy).
Strong experience building end-to-end platforms or optimization engines deployed in production.
Functional understanding of at least one domain: supply chain, logistics, manufacturing, pricing, scheduling, or workforce planning.
Excellent communication skills – able to interact with technical and business teams effectively.
Experience integrating optimization models into enterprise systems (APIs, cloud deployment, etc.).
Preferred Qualifications
Exposure to cloud platforms (AWS, GCP, Azure) and MLOps pipelines.
Familiarity with data visualization (Dash, Plotly, Streamlit) to present optimization results.
Certification or training in operations research or mathematical optimization tools.
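As a small illustration of the LP modelling this role centres on, here is a toy production-mix problem in PuLP, one of the open-source libraries named above; the products, coefficients and constraints are invented for the sketch.

```python
# Minimal linear program in PuLP: choose production quantities to maximise
# profit subject to resource limits. All numbers are illustrative.
from pulp import LpProblem, LpMaximize, LpVariable, value

prob = LpProblem("production_mix", LpMaximize)
x = LpVariable("units_A", lowBound=0)
y = LpVariable("units_B", lowBound=0)

prob += 30 * x + 40 * y          # objective: total profit
prob += 2 * x + 3 * y <= 120     # machine-hours constraint
prob += x + 2 * y <= 70          # labour-hours constraint

prob.solve()
print("A:", value(x), "B:", value(y), "profit:", value(prob.objective))
```

The same structure extends to MILP by declaring integer variables (for example `LpVariable("units_A", lowBound=0, cat="Integer")`) and to larger engines by generating variables and constraints from data rather than writing them by hand.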
Posted 1 week ago
4.0 - 6.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Location: Ahmedabad
Experience: 4-6 years
Title: Data Scientist
Job Description
Primary Responsibilities:
Analyze large and complex datasets to identify trends, patterns, and insights.
Develop and implement machine learning models for prediction, classification, and clustering tasks using Python libraries like scikit-learn, TensorFlow, or PyTorch.
Perform statistical analysis and hypothesis testing to validate findings and draw meaningful conclusions.
Design and implement data visualization dashboards and reports using Python libraries like Matplotlib, Seaborn, or Plotly to communicate insights effectively.
Collaborate with cross-functional teams to understand business requirements and translate them into data science solutions.
Build and deploy scalable data pipelines using Python and related tools.
Stay up-to-date with the latest advancements in data science, machine learning, and Python libraries.
Communicate complex data insights and findings to both technical and non-technical audiences.
What You'll Bring
Proven experience as a Data Scientist or similar role.
Strong programming skills in Python, with experience in relevant data science libraries (e.g., Pandas, NumPy, Scikit-learn).
Solid understanding of statistical concepts, machine learning algorithms, and data modeling techniques.
Experience with data visualization using Python libraries (e.g., Matplotlib, Seaborn, Plotly).
Ability to work with large datasets and perform data cleaning, preprocessing, and feature engineering.
Strong problem-solving and analytical skills.
Excellent communication and presentation abilities.
Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and their data science services.
Experience with deep learning frameworks (e.g., TensorFlow, PyTorch).
Knowledge of database technologies (SQL and NoSQL).
Experience with deploying machine learning models into production.
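A minimal sketch of the kind of prediction pipeline described above (preprocessing, modelling and validation in one object) using scikit-learn; the dataset, feature columns and target are hypothetical.

```python
# Minimal prediction-pipeline sketch: scaling + model, evaluated by cross-validation.
# "projects.csv" and its columns are hypothetical placeholders.
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("projects.csv")                       # hypothetical dataset
X = df[["budget", "duration_days", "team_size"]]       # hypothetical features
y = df["cost_overrun"]                                 # hypothetical target

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("model", RandomForestRegressor(n_estimators=200, random_state=0)),
])

scores = cross_val_score(pipe, X, y, cv=5, scoring="r2")
print("mean R^2:", scores.mean())
```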
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Expertise in handling large-scale structured and unstructured data; has efficiently handled large-scale generative AI datasets and outputs
Familiarity with Docker tools and pipenv/conda/poetry environments
Comfort following Python project management best practices (use of setup.py, logging, pytest, relative module imports, Sphinx docs, etc.)
Familiarity with GitHub (clone, fetch, pull/push, raising issues and PRs, etc.)
High familiarity with DL theory/practices in NLP applications
Comfortable coding with Hugging Face, LangChain, Chainlit, TensorFlow and/or PyTorch, scikit-learn, NumPy and pandas
Comfortable using two or more open-source NLP modules such as SpaCy, TorchText, fastai.text, farm-haystack, and others
Knowledge of fundamental text data processing (use of regex, token/word analysis, spelling correction/noise reduction in text, segmenting noisy unfamiliar sentences/phrases at the right places, deriving insights from clustering, etc.)
Has implemented real-world BERT or other transformer fine-tuned models (sequence classification, NER or QA) from data preparation, model creation and inference through deployment
Use of GCP services like BigQuery, Cloud Functions, Cloud Run, Cloud Build, Vertex AI
Good working knowledge of other open-source packages to benchmark and derive summaries
Experience using GPU/CPU on cloud and on-prem infrastructures
Responsibilities
Design NLP/LLM/GenAI applications/products following robust coding practices
Explore SoTA models/techniques so that they can be applied to automotive industry use cases
Conduct ML experiments to train/infer models; if need be, build models that abide by memory and latency restrictions
Deploy REST APIs or a minimalistic UI for NLP applications using Docker and Kubernetes tools
Showcase NLP/LLM/GenAI applications in the best way possible to users through web frameworks (Dash, Plotly, Streamlit, etc.)
Converge multiple bots into super apps using LLMs with multimodality
Develop agentic workflows using AutoGen, Agent Builder and LangGraph
Build modular AI/ML products that can be consumed at scale
Qualifications
Education: Bachelor’s in Engineering or Master’s Degree in Computer Science, Engineering, Maths or Science. Completion of modern NLP/LLM courses or participation in open competitions is also welcomed.
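For the "showcase applications through web frameworks" responsibility, a minimal Streamlit front end over a Hugging Face pipeline might look like the following; the sentiment task and default model are illustrative assumptions, and the app would be launched with `streamlit run app.py`.

```python
# Minimal Streamlit UI over a Hugging Face pipeline; task and model are
# illustrative defaults, not a prescribed choice.
import streamlit as st
from transformers import pipeline

@st.cache_resource
def load_model():
    # Downloads a small default sentiment model; a real app would pin a model.
    return pipeline("sentiment-analysis")

st.title("NLP demo (illustrative)")
text = st.text_area("Enter text to analyse")

if st.button("Analyse") and text:
    result = load_model()(text)[0]
    st.write(f"Label: {result['label']} | Score: {result['score']:.3f}")
```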
Posted 1 week ago