2.0 - 4.0 years
2 - 8 Lacs
Gurgaon
On-site
Machine Learning Engineer (L1)
Experience Required: 2-4 years

As a Machine Learning Engineer at Spring, you'll help bring data-driven intelligence into our products and operations. You'll support the development and deployment of models and pipelines that power smarter decisions, more personalized experiences, and scalable automation. This is an opportunity to build hands-on experience in real-world ML and AI systems while collaborating with experienced engineers and data scientists.

You'll work on data processing, model training, and integration tasks, gaining exposure to the entire ML lifecycle, from experimentation to production deployment. You'll learn how to balance model performance with system requirements, and how to structure your code for reliability, observability, and maintainability. You'll use modern ML/AI tools such as scikit-learn, Hugging Face, and LLM APIs, and be encouraged to explore AI techniques that improve our workflows or unlock new product value. You'll also be expected to help build and support automated data pipelines, inference services, and validation tools as part of your contributions.

You'll work closely with engineering, product, and business stakeholders to understand how models drive value. Over time, you'll build the skills and judgment needed to identify impactful use cases, communicate technical trade-offs, and contribute to the broader evolution of ML at Spring.

What You'll Do
• Support model development and deployment across structured and unstructured data and AI use cases.
• Build and maintain automated pipelines for data processing, training, and inference.
• Use ML and AI tools (e.g., scikit-learn, LLM APIs) in day-to-day development.
• Collaborate with engineers, data scientists, and product teams to scope and deliver features.
• Participate in code reviews, testing, and monitoring practices.
• Integrate ML systems into customer-facing applications and internal tools.
• Identify differences in data distribution that could affect model performance in real-world applications.
• Stay up to date with developments in the machine learning industry.

Tech Expectations
Core Skills: Curiosity, attention to detail, strong debugging skills, and an eagerness to learn through feedback; a solid foundation in statistics and data interpretation; a strong understanding of data structures, algorithms, and software development best practices; exposure to data pipelines, model training and evaluation, or training workflows.
Languages (Must Have): Python, SQL
ML Algorithms (Must Have): Traditional modeling techniques (e.g., tree models, Naive Bayes, logistic regression); ensemble methods (e.g., XGBoost, Random Forest, CatBoost, LightGBM)
ML Libraries / Frameworks (Must Have): scikit-learn, Hugging Face, statsmodels, Optuna; (Good to Have): SHAP, pytest
Data Processing / Manipulation (Must Have): pandas, NumPy
Data Visualization (Must Have): Plotly, Matplotlib
Version Control (Must Have): Git
Others (Good to Have): AWS (e.g., EC2, SageMaker, Lambda), Docker, Airflow, MLflow, GitHub Actions
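As a minimal sketch of the must-have stack above: training and evaluating a tree-ensemble baseline with scikit-learn. The synthetic dataset and model settings are illustrative placeholders, not Spring's actual pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic tabular data standing in for a real feature table
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Tree-ensemble baseline; hyperparameters here are defaults, not tuned values
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print(f"held-out accuracy: {acc:.3f}")
```

In a setup like the one described, a tool such as Optuna would then search the hyperparameter space, with the held-out score as the objective.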
Posted 11 hours ago
0.0 - 3.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Location: Gurugram, India

Position Summary
We are seeking a highly motivated and analytical Quant Analyst to join Futures First. The role involves supporting the development and execution of quantitative strategies across financial markets.

Job Profile
Statistical Arbitrage & Strategy Development: Design and implement pairs, mean-reversion, and relative value strategies in fixed income (govvies, corporate bonds, IRS). Apply cointegration tests (Engle-Granger, Johansen), Kalman filters, and machine learning techniques for signal generation. Optimize execution using transaction cost analysis (TCA).
Correlation & Volatility Analysis: Model dynamic correlations between bonds, rates, and macro variables using PCA, copulas, and rolling regressions. Forecast yield curve volatility using GARCH, stochastic volatility models, and implied-vol surfaces for swaptions. Identify regime shifts (e.g., monetary policy impacts) and adjust strategies accordingly.
Seasonality & Pattern Recognition: Analyse calendar effects (quarter-end rebalancing, liquidity patterns) in sovereign bond futures and repo markets. Develop time-series models (SARIMA, Fourier transforms) to detect cyclical trends.
Back Testing & Automation: Build Python-based back testing frameworks (Backtrader, Qlib) to validate strategies. Automate Excel-based reporting (VBA, xlwings) for P&L attribution and risk dashboards. Integrate Bloomberg/Refinitiv APIs for real-time data feeds.

Requirements
Education Qualifications: B.Tech
Work Experience: 0-3 years
Skill Set
Must have: Strong grasp of probability theory, stochastic calculus (Ito's Lemma, SDEs), and time-series econometrics (ARIMA, VAR, GARCH).
Must have: Expertise in linear algebra (PCA, eigenvalue decomposition), numerical methods (Monte Carlo, PDE solvers), and optimization techniques.
Preferred: Knowledge of Bayesian statistics, Markov Chain Monte Carlo (MCMC), and machine learning (supervised/unsupervised learning).
Libraries: NumPy, Pandas, statsmodels, scikit-learn, arch (GARCH models).
Back testing: Backtrader, Zipline, or custom event-driven frameworks.
Data handling: SQL, Dask (for large datasets); Power Query, pivot tables, Bloomberg Excel functions (BDP, BDH); VBA scripting for tooling and automation.
Experience with C++/Java (low-latency systems), QuantLib (fixed income pricing), or R (statistical computing).
Yield curve modelling (Nelson-Siegel, Svensson), duration/convexity, OIS pricing. Credit spreads, CDS pricing, and bond-CDS basis arbitrage. Familiarity with VaR, CVaR, stress testing, and liquidity risk metrics. Understanding of CCIL, NDS-OM (Indian market infrastructure).
Ability to translate intuition and patterns into quant models. Strong problem-solving and communication skills (must be able to explain complex models to non-quants). Comfortable working in a fast-paced environment.
Work hours will be aligned to APAC markets.
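The Engle-Granger workflow mentioned above starts by regressing one leg on the other to obtain a hedge ratio, then trading the z-score of the resulting spread. A toy sketch on synthetic cointegrated series follows; the 60-day window and 2-sigma bands are illustrative choices, not a recommended strategy.

```python
import numpy as np
import pandas as pd

# Synthetic cointegrated pair: B tracks A plus stationary noise
rng = np.random.default_rng(42)
a = np.cumsum(rng.normal(0, 1, 500)) + 100   # price leg A (random walk)
b = 0.8 * a + rng.normal(0, 1, 500)          # price leg B, cointegrated with A
prices = pd.DataFrame({"A": a, "B": b})

# Step 1 of Engle-Granger: OLS of B on A gives the hedge ratio
beta = np.polyfit(prices["A"], prices["B"], 1)[0]
spread = prices["B"] - beta * prices["A"]

# Rolling z-score of the spread; +/-2 sigma as mean-reversion entry bands
z = (spread - spread.rolling(60).mean()) / spread.rolling(60).std()
signal = np.where(z > 2, -1, np.where(z < -2, 1, 0))  # -1: short spread, +1: long
print(f"hedge ratio: {beta:.3f}")
```

A full Engle-Granger test would additionally run a unit-root test on `spread` (e.g., `statsmodels.tsa.stattools.coint`) before trusting the relationship.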
Posted 2 days ago
0.0 - 5.0 years
5 - 20 Lacs
Gurgaon
On-site
Assistant Manager EXL/AM/1349734 – Services, Gurgaon
Posted On 30 May 2025 | End Date 14 Jul 2025 | Required Experience 0-5 Years
Basic Section: Number Of Positions 1 | Band B1 | Band Name Assistant Manager | Cost Code D003152 | Campus/Non Campus NON CAMPUS | Employment Type Permanent | Requisition Type Backfill | Max CTC 500000.0000 - 2000000.0000 | Complexity Level Not Applicable | Work Type Hybrid – Working Partly From Home And Partly From Office
Organisational: Group Analytics | Sub Group Banking & Financial Services | Organization Services | LOB Services | SBU Analytics | Country India | City Gurgaon | Center Gurgaon-SEZ BPO Solutions
Skills: Python, SQL
Minimum Qualification: B.Tech/B.E
Job Description
We are seeking a skilled Data Engineer to join our dynamic team. The ideal candidate will have expertise in Python, SQL, Tableau, and PySpark, with additional exposure to SAS, banking domain knowledge, and version control tools such as Git and Bitbucket. The candidate will be responsible for developing and optimizing data pipelines, ensuring efficient data processing, and supporting business intelligence initiatives.
Key Responsibilities:
• Design, build, and maintain data pipelines using Python and PySpark
• Develop and optimize SQL queries for data extraction and transformation
• Create interactive dashboards and visualizations using Tableau
• Implement data models to support analytics and business needs
• Collaborate with cross-functional teams to understand data requirements
• Ensure data integrity, security, and governance across platforms
• Use version control tools such as Git and Bitbucket for code management
• Leverage SAS and banking domain knowledge to improve data insights
Required Skills:
• Strong proficiency in Python and PySpark for data processing
• Advanced SQL skills for data manipulation and querying
• Experience with Tableau for data visualization and reporting
• Familiarity with database systems and data warehousing concepts
Preferred Skills:
• Knowledge of SAS and its applications in data analysis
• Experience working in the banking domain
• Understanding of version control systems, specifically Git and Bitbucket
• Knowledge of pandas, NumPy, statsmodels, scikit-learn, matplotlib, PySpark, SASPy
Qualifications:
• Bachelor's/Master's degree in Computer Science, Data Science, or a related field
• Excellent problem-solving and analytical skills
• Ability to work collaboratively in a fast-paced environment
Workflow Type: L&S-DA-Consulting
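The pipeline responsibilities above follow an extract-transform-load shape. The posting names PySpark, but the same shape is sketched here with pandas and SQLite so it runs anywhere; the table, accounts, and amounts are invented for illustration.

```python
import sqlite3
import pandas as pd

# Toy source database standing in for a real banking data store
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE txns (account TEXT, amount REAL, booked DATE);
    INSERT INTO txns VALUES
        ('A001', 120.0, '2024-01-05'),
        ('A001', -30.0, '2024-01-09'),
        ('A002', 500.0, '2024-01-07');
""")

# Extract: push the filter down into SQL
df = pd.read_sql_query("SELECT account, amount FROM txns WHERE amount > 0", conn)

# Transform: aggregate per account
summary = df.groupby("account", as_index=False)["amount"].sum()

# Load: write the derived table back for downstream dashboards
summary.to_sql("account_totals", conn, index=False, if_exists="replace")
print(summary.to_dict("records"))
```

In PySpark the same steps would be `spark.read`, a `groupBy(...).sum()`, and a `write` call; the structure, not the library, is the point.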
Posted 2 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Roles & Responsibilities:
• Be responsible for the development of conceptual, logical, and physical data models
• Work with application/solution teams to implement data strategies, build data flows, and develop/execute logical and physical data models
• Implement and maintain data analysis scripts using SQL and Python
• Develop and support reports and dashboards using Google PLX/Data Studio/Looker
• Monitor performance and implement necessary infrastructure optimizations
• Demonstrate the ability and willingness to learn quickly and complete large volumes of work with high quality
• Demonstrate excellent collaboration, interpersonal communication, and written skills, with the ability to work in a team environment
Minimum Qualifications
• Hands-on experience with the design, development, and support of data pipelines
• Strong SQL programming skills (joins, subqueries, queries with analytical functions, stored procedures, functions, etc.)
• Hands-on experience using statistical methods for data analysis
• Experience with data platform and visualization technologies such as Google PLX dashboards, Data Studio, Tableau, Pandas, Qlik Sense, Splunk, Humio, Grafana
• Experience in web development: HTML, CSS, jQuery, Bootstrap
• Experience with machine learning packages such as scikit-learn, NumPy, SciPy, Pandas, NLTK, BeautifulSoup, Matplotlib, statsmodels
• Strong design and development skills with meticulous attention to detail
• Familiarity with Agile software development practices and working in an agile environment
• Strong analytical, troubleshooting, and organizational skills
• Ability to analyse and troubleshoot complex issues, and proficiency in multitasking
• Ability to navigate ambiguity
• BS degree in Computer Science, Math, Statistics, or equivalent academic credentials
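The "queries with analytical functions" requirement above can be illustrated with a window function, runnable via Python's built-in sqlite3 module (SQLite 3.25+ supports window functions). The table and values are made up for the example.

```python
import sqlite3

# Tiny table standing in for a real metrics store
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE metrics (team TEXT, day INTEGER, value REAL);
    INSERT INTO metrics VALUES
        ('infra', 1, 10), ('infra', 2, 30), ('web', 1, 20), ('web', 2, 5);
""")

# RANK() partitioned per team: an analytical (window) function, computed
# per row without collapsing the result set the way GROUP BY would
rows = conn.execute("""
    SELECT team, day, value,
           RANK() OVER (PARTITION BY team ORDER BY value DESC) AS rnk
    FROM metrics
    ORDER BY team, rnk
""").fetchall()
for row in rows:
    print(row)
```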
Posted 2 days ago
5.0 - 9.0 years
9 - 13 Lacs
Hyderabad
Work from Office
Job Summary
ServCrust is a rapidly growing technology startup with the vision to revolutionize India's infrastructure by integrating digitization and technology throughout the lifecycle of infrastructure projects.

About The Role
As a Data Science Engineer, you will lead data-driven decision-making across the organization. Your responsibilities will include designing and implementing advanced machine learning models, analyzing complex datasets, and delivering actionable insights to various stakeholders. You will work closely with cross-functional teams to tackle challenging business problems and drive innovation using advanced analytics techniques.

Responsibilities
• Collaborate with strategy, data engineering, and marketing teams to understand and address business requirements through advanced machine learning and statistical models.
• Analyze large spatiotemporal datasets to identify patterns and trends, providing insights for business decision-making.
• Design and implement algorithms for predictive and causal modeling.
• Evaluate and fine-tune model performance.
• Communicate recommendations based on insights to both technical and non-technical stakeholders.

Requirements
• A Ph.D. in computer science, statistics, or a related field
• 5+ years of experience in data science
• Experience in geospatial data science is an added advantage
• Proficiency in Python (pandas, NumPy, scikit-learn, PyTorch, statsmodels, Matplotlib, and Seaborn); experience with GeoPandas and Shapely is an added advantage
• Strong communication and presentation skills
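A common first preprocessing step for the spatiotemporal analysis mentioned above is binning point events into coarse grid cells per week. The coordinates, counts, and column names below are all made-up placeholders.

```python
import numpy as np
import pandas as pd

# Synthetic point events over four weeks in a small lat/lon box
rng = np.random.default_rng(7)
events = pd.DataFrame({
    "ts": pd.to_datetime("2024-01-01")
          + pd.to_timedelta(rng.integers(0, 28, 200), unit="D"),
    "lat": rng.uniform(17.3, 17.5, 200),
    "lon": rng.uniform(78.3, 78.5, 200),
})

# ~0.1-degree grid cell id from rounded coordinates
events["cell"] = (events["lat"].round(1).astype(str)
                  + "," + events["lon"].round(1).astype(str))

# Count events per cell per calendar week
weekly = (events.set_index("ts")
                .groupby("cell")
                .resample("W")["lat"]
                .count()
                .rename("events"))
print(weekly.groupby("cell").sum().to_dict())
```

With GeoPandas, the rounding trick would typically be replaced by a spatial join against real grid polygons, but the group-then-resample shape stays the same.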
Posted 6 days ago
0.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Position Summary
We are seeking a highly motivated and analytical Quant Analyst to join Futures First. The role involves supporting the development and execution of quantitative strategies across financial markets.
Job Profile
Statistical Arbitrage & Strategy Development: Design and implement pairs, mean-reversion, and relative value strategies in fixed income (govvies, corporate bonds, IRS). Apply cointegration tests (Engle-Granger, Johansen), Kalman filters, and machine learning techniques for signal generation. Optimize execution using transaction cost analysis (TCA).
Correlation & Volatility Analysis: Model dynamic correlations between bonds, rates, and macro variables using PCA, copulas, and rolling regressions. Forecast yield curve volatility using GARCH, stochastic volatility models, and implied-vol surfaces for swaptions. Identify regime shifts (e.g., monetary policy impacts) and adjust strategies accordingly.
Seasonality & Pattern Recognition: Analyse calendar effects (quarter-end rebalancing, liquidity patterns) in sovereign bond futures and repo markets. Develop time-series models (SARIMA, Fourier transforms) to detect cyclical trends.
Back Testing & Automation: Build Python-based back testing frameworks (Backtrader, Qlib) to validate strategies. Automate Excel-based reporting (VBA, xlwings) for P&L attribution and risk dashboards. Integrate Bloomberg/Refinitiv APIs for real-time data feeds.
Requirements
Education Qualifications: B.Tech
Work Experience: 0-3 years
Skill Set
Must have: Strong grasp of probability theory, stochastic calculus (Ito's Lemma, SDEs), and time-series econometrics (ARIMA, VAR, GARCH).
Must have: Expertise in linear algebra (PCA, eigenvalue decomposition), numerical methods (Monte Carlo, PDE solvers), and optimization techniques.
Preferred: Knowledge of Bayesian statistics, Markov Chain Monte Carlo (MCMC), and machine learning (supervised/unsupervised learning).
Libraries: NumPy, Pandas, statsmodels, scikit-learn, arch (GARCH models).
Back testing: Backtrader, Zipline, or custom event-driven frameworks.
Data handling: SQL, Dask (for large datasets); Power Query, pivot tables, Bloomberg Excel functions (BDP, BDH); VBA scripting for tooling and automation.
Experience with C++/Java (low-latency systems), QuantLib (fixed income pricing), or R (statistical computing).
Yield curve modelling (Nelson-Siegel, Svensson), duration/convexity, OIS pricing. Credit spreads, CDS pricing, and bond-CDS basis arbitrage. Familiarity with VaR, CVaR, stress testing, and liquidity risk metrics. Understanding of CCIL, NDS-OM (Indian market infrastructure).
Ability to translate intuition and patterns into quant models. Strong problem-solving and communication skills (must be able to explain complex models to non-quants). Comfortable working in a fast-paced environment.
Location: Gurugram. Work hours will be aligned to APAC markets.
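The volatility forecasting mentioned above typically means fitting GARCH(1,1), e.g., with the `arch` package. As a dependency-free illustration, here is the simpler EWMA (RiskMetrics-style) recursion, often used as a baseline before GARCH; the returns are synthetic and the decay factor of 0.94 is the conventional RiskMetrics choice.

```python
import numpy as np

# Synthetic daily returns with ~1% daily volatility
rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, 1000)

# EWMA variance recursion: sigma_t^2 = lam * sigma_{t-1}^2 + (1 - lam) * r_t^2
lam = 0.94
var = returns[0] ** 2
for r in returns[1:]:
    var = lam * var + (1 - lam) * r ** 2

vol = np.sqrt(var * 252)  # annualised volatility forecast
print(f"annualised vol forecast: {vol:.2%}")
```

A GARCH(1,1) fit adds a long-run variance term and estimates the decay parameters from data instead of fixing them.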
Posted 6 days ago
0.0 years
4 - 5 Lacs
Gurgaon
On-site
Location: Gurugram, India
Position Summary
We are seeking a highly motivated and analytical Quant Analyst to join Futures First. The role involves supporting the development and execution of quantitative strategies across financial markets.
Job Profile
Statistical Arbitrage & Strategy Development: Design and implement pairs, mean-reversion, and relative value strategies in fixed income (govvies, corporate bonds, IRS). Apply cointegration tests (Engle-Granger, Johansen), Kalman filters, and machine learning techniques for signal generation. Optimize execution using transaction cost analysis (TCA).
Correlation & Volatility Analysis: Model dynamic correlations between bonds, rates, and macro variables using PCA, copulas, and rolling regressions. Forecast yield curve volatility using GARCH, stochastic volatility models, and implied-vol surfaces for swaptions. Identify regime shifts (e.g., monetary policy impacts) and adjust strategies accordingly.
Seasonality & Pattern Recognition: Analyse calendar effects (quarter-end rebalancing, liquidity patterns) in sovereign bond futures and repo markets. Develop time-series models (SARIMA, Fourier transforms) to detect cyclical trends.
Back Testing & Automation: Build Python-based back testing frameworks (Backtrader, Qlib) to validate strategies. Automate Excel-based reporting (VBA, xlwings) for P&L attribution and risk dashboards. Integrate Bloomberg/Refinitiv APIs for real-time data feeds.
Requirements
Education Qualifications: B.Tech
Work Experience: 0-3 years
Skill Set
Must have: Strong grasp of probability theory, stochastic calculus (Ito's Lemma, SDEs), and time-series econometrics (ARIMA, VAR, GARCH).
Must have: Expertise in linear algebra (PCA, eigenvalue decomposition), numerical methods (Monte Carlo, PDE solvers), and optimization techniques.
Preferred: Knowledge of Bayesian statistics, Markov Chain Monte Carlo (MCMC), and machine learning (supervised/unsupervised learning).
Libraries: NumPy, Pandas, statsmodels, scikit-learn, arch (GARCH models).
Back testing: Backtrader, Zipline, or custom event-driven frameworks.
Data handling: SQL, Dask (for large datasets); Power Query, pivot tables, Bloomberg Excel functions (BDP, BDH); VBA scripting for tooling and automation.
Experience with C++/Java (low-latency systems), QuantLib (fixed income pricing), or R (statistical computing).
Yield curve modelling (Nelson-Siegel, Svensson), duration/convexity, OIS pricing. Credit spreads, CDS pricing, and bond-CDS basis arbitrage. Familiarity with VaR, CVaR, stress testing, and liquidity risk metrics. Understanding of CCIL, NDS-OM (Indian market infrastructure).
Ability to translate intuition and patterns into quant models. Strong problem-solving and communication skills (must be able to explain complex models to non-quants). Comfortable working in a fast-paced environment.
Location: Gurugram. Work hours will be aligned to APAC markets.
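The VaR/CVaR familiarity asked for above can be illustrated with a minimal historical-simulation sketch on synthetic P&L; the figures and the 99% confidence level are placeholders.

```python
import numpy as np

# Synthetic daily P&L series (roughly 10 years of trading days)
rng = np.random.default_rng(1)
pnl = rng.normal(0, 1e5, 2500)

alpha = 0.99
var_99 = -np.quantile(pnl, 1 - alpha)      # loss exceeded on ~1% of days
cvar_99 = -pnl[pnl <= -var_99].mean()      # average loss beyond that threshold
print(f"99% VaR: {var_99:,.0f}  99% CVaR: {cvar_99:,.0f}")
```

CVaR (expected shortfall) is always at least as large as VaR at the same level, since it averages the tail beyond the VaR cutoff.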
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
By clicking the “Apply” button, I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda’s Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.
Job Description:
The Future Begins Here: At Takeda, we are leading the digital evolution and global transformation. By building innovative solutions and future-ready capabilities, we are meeting the needs of patients, our people, and the planet. Bengaluru, the city which is India’s epicenter of innovation, has been selected to be home to Takeda’s recently launched Innovation Capability Center. We invite you to join our digital transformation journey. In this role, you will have the opportunity to boost your skills and become the heart of an innovative engine that is contributing to global impact and improvement.
At Takeda’s ICC We Unite in Diversity: Takeda is committed to creating an inclusive and collaborative workplace, where individuals are recognized for their backgrounds and the abilities they bring to our company. We are continuously improving our collaborators’ journey in Takeda, and we welcome applications from all qualified candidates. Here, you will feel welcomed, respected, and valued as an important contributor to our diverse team.
The Opportunity: As a Data Scientist, you will have the opportunity to apply your analytical skills and expertise to extract meaningful insights from vast amounts of data. We are currently seeking a talented and experienced individual to join our team and contribute to our data-driven decision-making process.
Objectives: Collaborate with different business users, mainly Supply Chain/Manufacturing, to understand the current state and identify opportunities to transform the business into a data-driven organization.
Translate processes and requirements into analytics solutions and metrics with an effective data strategy, data quality, and data accessibility for decision making. Operationalize decision support solutions and drive user adoption, gathering feedback and metrics on Voice of Customer in order to improve analytics services. Understand the analytics drivers and data to be modeled, apply the appropriate quantitative techniques to provide the business with actionable insights, and ensure analytics models and data are accessible to end users to evaluate “what-if” scenarios and support decision making. Evaluate the data, analytical models, and experiments periodically to validate hypotheses, ensuring they continue to provide business value as requirements and objectives evolve.
Accountabilities:
• Collaborates with business partners in identifying analytical opportunities and developing BI-related goals and projects that will create strategically relevant insights.
• Works with internal and external partners to develop the analytics vision and programs to advance BI solutions and practices.
• Understands data and sources of data. Strategizes with the IT development team and develops a process to collect, ingest, and deliver data along with proper data models for analytical needs.
• Interacts with business users to define pain points, problem statements, scope, and the analytics business case.
• Develops solutions with recommended data models and business intelligence technologies, including data warehouses, data marts, OLAP modeling, dashboards/reporting, and data queries.
• Works with DevOps and database teams to ensure proper design of system databases and appropriate integration with other enterprise applications.
• Collaborates with the Enterprise Data and Analytics Team to design data model and visualization solutions that synthesize complex data for data mining and discovery.
• Assists in defining requirements and facilitates workshops and prototyping sessions.
Develops and applies technologies such as machine-learning and deep-learning algorithms to enable advanced analytics product functionality.
EDUCATION, BEHAVIOURAL COMPETENCIES AND SKILLS:
• Bachelor’s degree from an accredited institution in Data Science, Statistics, Computer Science, or a related field.
• 3+ years of experience with statistical modeling (such as clustering, segmentation, multivariate analysis, regression, etc.) and analytics tools such as R, Python, Databricks, etc. required.
• Experience in developing and applying predictive and prescriptive modeling, deep-learning, or other machine learning techniques a plus.
• Hands-on development of AI solutions that comply with industry standards and government regulations.
• Great numerical and analytical skills, as well as basic knowledge of Python analytics packages (Pandas, scikit-learn, statsmodels).
• Ability to build and maintain scalable and reliable data pipelines that collect, transform, manipulate, and load data from internal and external sources.
• Ability to use statistical tools to conduct data analysis and identify data quality issues throughout the data pipeline.
• Experience with BI and visualization tools (e.g., Qlik, Power BI), ETL, NoSQL, and proven design skills a plus.
• Excellent written and verbal communication skills, including the ability to interact effectively with multifunctional teams.
• Experience working with agile teams.
WHAT TAKEDA CAN OFFER YOU: Takeda is certified as a Top Employer, not only in India but also globally. No investment we make pays greater dividends than taking good care of our people. At Takeda, you take the lead on building and shaping your own career. Joining the ICC in Bengaluru will give you access to high-end technology, continuous training, and a diverse and inclusive network of colleagues who will support your career growth.
BENEFITS: It is our priority to provide competitive compensation and a benefits package that bridges your personal life with your professional career.
Amongst our benefits are:
• Competitive salary + performance annual bonus
• Flexible work environment, including hybrid working
• Comprehensive healthcare insurance plans for self, spouse, and children
• Group Term Life Insurance and Group Accident Insurance programs
• Health & wellness programs, including annual health screening and weekly health sessions for employees
• Employee Assistance Program
• 3 days of leave every year for voluntary service, in addition to humanitarian leaves
• Broad variety of learning platforms
• Diversity, equity, and inclusion programs
• Reimbursements: home internet & mobile phone
• Employee referral program
• Leaves: paternity leave (4 weeks), maternity leave (up to 26 weeks), bereavement leave (5 calendar days)
ABOUT ICC IN TAKEDA: Takeda is leading a digital revolution. We’re not just transforming our company; we’re improving the lives of millions of patients who rely on our medicines every day. As an organization, we are committed to our cloud-driven business transformation and believe the ICCs are the catalysts of change for our global organization.
Locations: IND - Bengaluru
Worker Type: Employee
Worker Sub-Type: Regular
Time Type: Full time
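The clustering and segmentation modeling this posting asks about can be pictured with a toy customer segmentation using scikit-learn's KMeans on two synthetic spend/frequency groups; all numbers and feature names are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

# Two synthetic customer groups: low-spend/low-frequency vs high/high
rng = np.random.default_rng(3)
spend = np.concatenate([rng.normal(200, 20, 50), rng.normal(800, 50, 50)])
freq = np.concatenate([rng.normal(2, 0.5, 50), rng.normal(10, 1, 50)])
X = np.column_stack([spend, freq])

# Fit k-means with k=2; in practice k is chosen via elbow/silhouette analysis
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(f"cluster sizes: {np.bincount(km.labels_).tolist()}")
```

Real segmentation work would standardize features first (spend dominates the distance here) and profile each cluster against business metrics.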
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Dreaming big is in our DNA. It’s who we are as a company. It’s our culture. It’s our heritage. And more than ever, it’s our future. A future where we’re always looking forward. Always serving up new ways to meet life’s moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together – when we combine your strengths with ours – is unstoppable. Are you ready to join a team that dreams as big as you do? AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics. Do You Dream Big? We Need You. Job Description Job Title: Manager- GBS Commercial Location: Bangalore Reporting to: Senior Manager - GBS Commercial Purpose of the role This role sits at the intersection of data science and revenue growth strategy, focused on developing advanced analytical solutions to optimize pricing, trade promotions, and product mix. The candidate will lead the end-to-end design, deployment, and automation of machine learning models and statistical frameworks that support commercial decision-making, predictive scenario planning, and real-time performance tracking. By leveraging internal and external data sources—including transactional, market, and customer-level data—this role will deliver insights into price elasticity, promotional lift, channel efficiency, and category dynamics. The goal is to drive measurable improvements in gross margin, ROI on trade spend, and volume growth through data-informed strategies. 
Key tasks & accountabilities
• Design and implement price elasticity models using linear regression, log-log models, and hierarchical Bayesian frameworks to understand consumer response to pricing changes across channels and segments.
• Build uplift models (e.g., Causal Forests, XGBoost for treatment effect) to evaluate promotional effectiveness and isolate true incremental sales vs. base volume.
• Develop demand forecasting models using ARIMA, SARIMAX, and Prophet, integrating external factors such as seasonality, promotions, and competitor activity.
• Apply time-series clustering and k-means segmentation to group SKUs, customers, and geographies for targeted pricing and promotion strategies.
• Construct assortment optimization models using conjoint analysis, choice modeling, and market basket analysis to support category planning and shelf optimization.
• Use Monte Carlo simulations and what-if scenario modeling to assess revenue impact under varying pricing, promo, and mix conditions.
• Conduct hypothesis testing (t-tests, ANOVA, chi-square) to evaluate the statistical significance of pricing and promotional changes.
• Create LTV (lifetime value) and customer churn models to prioritize trade investment decisions and drive customer retention strategies.
• Integrate Nielsen, IRI, and internal POS data to build unified datasets for modeling and advanced analytics in SQL, Python (pandas, statsmodels, scikit-learn), and Azure Databricks environments.
• Automate reporting processes and real-time dashboards for price pack architecture (PPA), promotion performance tracking, and margin simulation using advanced Excel and Python.
• Lead post-event analytics using pre/post experimental designs, including difference-in-differences (DiD) methods, to evaluate business interventions.
• Collaborate with Revenue Management, Finance, and Sales leaders to convert insights into pricing corridors, discount policies, and promotional guardrails.
• Translate complex statistical outputs into clear, executive-ready insights with actionable recommendations for business impact.
• Continuously refine model performance through feature engineering, model validation, and hyperparameter tuning to ensure accuracy and scalability.
• Provide mentorship to junior analysts, enhancing their skills in modeling, statistics, and commercial storytelling.
• Maintain documentation of model assumptions, business rules, and statistical parameters to ensure transparency and reproducibility.
Other Competencies Required
• Presentation Skills: Effectively presenting findings and insights to stakeholders and senior leadership to drive informed decision-making.
• Collaboration: Working closely with cross-functional teams, including marketing, sales, and product development, to implement insights-driven strategies.
• Continuous Improvement: Actively seeking opportunities to enhance reporting processes and insights generation to maintain relevance and impact in a dynamic market environment.
• Data Scope Management: Managing the scope of data analysis, ensuring it aligns with business objectives and insights goals.
• Act as a steadfast advisor to leadership, offering expert guidance on harnessing data to drive business outcomes and optimize customer experience initiatives.
• Serve as a catalyst for change by advocating for data-driven decision-making and cultivating a culture of continuous improvement rooted in insights gleaned from analysis.
• Continuously evaluate and refine reporting processes to ensure the delivery of timely, relevant, and impactful insights to leadership stakeholders while fostering an environment of ownership, collaboration, and mentorship within the team.
Technical Skills - Must Have
• Data Manipulation & Analysis: Advanced proficiency in SQL, Python (Pandas, NumPy), and Excel for structured data processing.
• Data Visualization: Expertise in Power BI and Tableau for building interactive dashboards and performance tracking tools.
• Modeling & Analytics: Hands-on experience with regression analysis, time series forecasting, and ML models using scikit-learn or XGBoost.
• Data Engineering Fundamentals: Knowledge of data pipelines, ETL processes, and integration of internal/external datasets for analytical readiness.
• Proficient in Power BI, advanced MS Excel (pivots, calculated fields, conditional formatting, charts, dropdown lists, etc.), MS PowerPoint, SQL, and Python.
Business Environment
• Work closely with Zone Revenue Management teams.
• Work in a fast-paced environment and provide proactive communication to stakeholders.
• This is an offshore role and requires comfort with working in a virtual environment; GCC is referred to as the offshore location. The role requires working in a collaborative manner with Zone/country business heads and GCC commercial teams.
• Summarize insights and recommendations to be presented back to the business. Continuously improve, automate, and optimize the process.
• Geographical Scope: Global
Qualifications, Experience, Skills
Level Of Educational Attainment Required: Bachelor's or post-graduate degree in Business & Marketing, Engineering/Solution, or another equivalent degree, or equivalent work experience.
Previous Work Experience: 5-8 years of experience in the Retail/CPG domain. Extensive experience solving business problems using quantitative approaches. Comfort with extracting, manipulating, and analyzing complex, high-volume, high-dimensionality data from varying sources. And above all of this, an undying love for beer! We dream big to create a future with more cheer.
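The log-log elasticity model named in the key tasks can be sketched in a few lines: after log-transforming both variables, the OLS slope of volume on price is the elasticity estimate. The synthetic data and the assumed true elasticity of -1.5 are placeholders for illustration.

```python
import numpy as np

# Synthetic price/volume observations with constant elasticity -1.5
rng = np.random.default_rng(5)
price = rng.uniform(50, 150, 300)
true_elasticity = -1.5
volume = 1e6 * price ** true_elasticity * rng.lognormal(0, 0.1, 300)

# In log-log space the model is linear: log(volume) = a + e * log(price),
# so the fitted slope e is the price elasticity
slope, intercept = np.polyfit(np.log(price), np.log(volume), 1)
print(f"estimated elasticity: {slope:.2f}")
```

An elasticity below -1 (as here) means revenue falls when price rises, which is exactly the kind of trade-off the pricing-corridor work above formalizes.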
Posted 1 week ago
2.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Roles and Responsibilities:
- Analyze category performance across sales channels (D2C, marketplaces, offline).
- Track KPIs like revenue, ASP, margin, sell-through, stock cover, and inventory turns.
- Conduct pricing, discount, and profitability analysis at SKU and category levels.
- Identify top-performing or underperforming products and uncover performance drivers.
- Build dashboards and automated reports for category health and inventory planning.
- Collaborate with marketing, SCM, and category teams to inform business decisions.
- Perform trend, seasonality, and cohort analysis to improve demand forecasting.
- Use customer behavior data (views, clicks, conversions) to support assortment planning.
- Automate reporting workflows and optimize SQL/Python pipelines.
- Support new product launches with benchmarks and success-prediction models.

Skills & Qualifications:
- 0-2 years of experience in a data analytics role, preferably in E-commerce or Retail.
- Proficiency in MySQL: writing complex queries, joins, window functions.
- Advanced Excel/Google Sheets: pivot tables, dynamic dashboards, conditional formatting.
- Experience in Python: Pandas, automation scripts, statsmodels/scikit-learn.
- Comfort with data visualization: Power BI / Tableau / Looker Studio.
- Understanding of product lifecycle, inventory metrics, pricing levers, and customer insights.
- Strong foundation in statistics: descriptive stats, A/B testing, forecasting models.
- Excellent problem-solving, data storytelling, and cross-functional collaboration skills.

Preferred / Bonus Skills:
- Experience with Shopify, Magento, or other e-commerce platforms.
- Familiarity with Google Analytics 4 (GA4).
- Knowledge of merchandising or visual analytics.
- Exposure to machine learning (e.g., clustering, success prediction).
- Experience with VBA or Google Apps Script for reporting automation.
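As a small illustration of the window-function SQL called out above, here is a query ranking SKUs by revenue within each category. sqlite3 stands in for MySQL (MySQL 8+ supports the same `RANK() OVER` shape); the table and data are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (category TEXT, sku TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("audio", "SKU-1", 120.0), ("audio", "SKU-2", 340.0),
     ("wearables", "SKU-3", 90.0), ("wearables", "SKU-4", 60.0)],
)

# RANK() OVER (PARTITION BY ...) ranks each SKU within its own category,
# without collapsing rows the way GROUP BY would.
rows = conn.execute("""
    SELECT category, sku, revenue,
           RANK() OVER (PARTITION BY category ORDER BY revenue DESC) AS rnk
    FROM sales
    ORDER BY category, rnk
""").fetchall()
for row in rows:
    print(row)
```

The same pattern (partitioned ranking) underlies "top N per category" reports mentioned in the responsibilities.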
Posted 1 week ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
We are seeking a skilled Data Engineer to join our dynamic team. The ideal candidate will have expertise in Python, SQL, Tableau, and PySpark, with additional exposure to SAS, banking domain knowledge, and version control tools like Git and Bitbucket. The candidate will be responsible for developing and optimizing data pipelines, ensuring efficient data processing, and supporting business intelligence initiatives.

Key Responsibilities
- Design, build, and maintain data pipelines using Python and PySpark
- Develop and optimize SQL queries for data extraction and transformation
- Create interactive dashboards and visualizations using Tableau
- Implement data models to support analytics and business needs
- Collaborate with cross-functional teams to understand data requirements
- Ensure data integrity, security, and governance across platforms
- Utilize version control tools like Git and Bitbucket for code management
- Leverage SAS and banking domain knowledge to improve data insights

Required Skills
- Strong proficiency in Python and PySpark for data processing
- Advanced SQL skills for data manipulation and querying
- Experience with Tableau for data visualization and reporting
- Familiarity with database systems and data warehousing concepts

Preferred Skills
- Knowledge of SAS and its applications in data analysis
- Experience working in the banking domain
- Understanding of version control systems, specifically Git and Bitbucket
- Knowledge of pandas, numpy, statsmodels, scikit-learn, matplotlib, PySpark, SASPy

Qualifications
- Bachelor's/Master's degree in Computer Science, Data Science, or a related field
- Excellent problem-solving and analytical skills
- Ability to work collaboratively in a fast-paced environment
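The pipeline work described above can be sketched as a minimal extract-transform-load pass using only the standard library. The source data, schema, and cleaning rule are hypothetical; a production pipeline would run PySpark against a real warehouse.

```python
import csv
import io
import sqlite3

# Extract: parse raw CSV (an in-memory string stands in for a real source system).
raw = "account_id,balance\nA1, 1200.50\nA2,\nA3, 880.00\n"
records = list(csv.DictReader(io.StringIO(raw)))

# Transform: drop rows with missing balances and normalise types.
clean = [(r["account_id"], float(r["balance"]))
         for r in records if r["balance"].strip()]

# Load: write into a target table, then verify row count and totals
# as a basic data-integrity check.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (account_id TEXT PRIMARY KEY, balance REAL)")
db.executemany("INSERT INTO accounts VALUES (?, ?)", clean)
summary = db.execute("SELECT COUNT(*), SUM(balance) FROM accounts").fetchone()
print(summary)
```

The verification query at the end is the piece that scales: comparing source and target counts/sums is a standard integrity check regardless of engine.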
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow: people with a unique combination of skill and passion. If you have the qualities, drive, autonomy, or leadership to lead teams, there are roles ready to cultivate your skills today and tomorrow.

Job Summary

UPS Enterprise Data Analytics team is looking for a talented and motivated Data Scientist to use statistical modelling and state-of-the-art AI tools and techniques to solve complex, large-scale business problems for UPS operations. This role also supports debugging and enhancing existing AI applications in close collaboration with the Machine Learning Operations team. This position will work with multiple stakeholders across different levels of the organization to understand the business problem, then develop and help implement robust and scalable solutions. You will be in a high-visibility position with the opportunity to interact with senior leadership to bring forth innovation within the operational space for UPS. Success in this role requires excellent communication to present your cutting-edge solutions to both technical and business leadership.

Responsibilities
- Become a subject matter expert on UPS business processes and data to help define and solve business needs using data, advanced statistical methods, and AI.
- Be actively involved in understanding and converting business use cases into technical requirements for modelling.
- Query, analyze, and extract insights from large-scale structured and unstructured data from different sources, using platforms and tools like BigQuery, Google Cloud Storage, etc.
- Understand and apply appropriate methods for cleaning and transforming data, engineering relevant features to be used for modelling.
- Actively drive modelling of business problems into ML/AI models; work closely with stakeholders on model evaluation and acceptance.
- Work closely with the MLOps team to productionize new models, support enhancements, and resolve any issues within existing production AI applications.
- Prepare extensive technical documentation, dashboards, and presentations for technical and business stakeholders, including leadership teams.

Qualifications
- Expertise in Python and SQL. Experienced in using data science packages like scikit-learn, numpy, pandas, tensorflow, keras, statsmodels, etc.
- Strong understanding of statistical concepts and methods (hypothesis testing, descriptive stats, etc.) and machine learning techniques for regression, classification, and clustering problems, including neural networks and deep learning.
- Proficient in using GCP tools like Vertex AI, BigQuery, GCS, etc. for model development and other activities in the ML lifecycle.
- Strong ownership and collaborative qualities in the relevant domain. Takes initiative to identify and drive opportunities for improvement and process streamlining.
- Solid oral and written communication skills, especially around analytical concepts and methods. Ability to communicate data through a story framework to convey data-driven results to technical and non-technical audiences.
- Master's degree in a quantitative field such as mathematics, computer science, physics, economics, engineering, or statistics (operations research, quantitative social science, etc.), an international equivalent, or equivalent job experience.

Bonus Qualifications
- NLP, Gen AI, LLM knowledge/experience
- Knowledge of Operations Research methodologies and experience with packages like CPLEX, PuLP, etc.
- Knowledge and experience in MLOps principles and tools in GCP.
- Experience working in an Agile environment; understanding of Lean Agile principles.

Contract type: Permanent. At UPS, equal opportunity, fair treatment, and an inclusive work environment are key values to which we are committed.
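As a toy stand-in for the clustering techniques listed above, here is a from-scratch one-dimensional k-means (Lloyd's algorithm). The shipment weights and starting centers are invented; real work would use scikit-learn on BigQuery extracts.

```python
def kmeans_1d(points, centers, iters=10):
    """Lloyd's algorithm in one dimension: assign each point to its
    nearest center, then move each center to the mean of its group."""
    for _ in range(iters):
        groups = {c: [] for c in centers}
        for p in points:
            nearest = min(centers, key=lambda c: abs(c - p))
            groups[nearest].append(p)
        # Keep a center in place if its group emptied out.
        centers = [sum(g) / len(g) if g else c for c, g in groups.items()]
    return sorted(centers)

# Hypothetical shipment weights forming two obvious clusters.
weights = [1.0, 1.2, 0.9, 8.0, 8.4, 7.9]
centers = kmeans_1d(weights, centers=[0.0, 5.0])
print(centers)
```

The assign/re-average loop is the whole algorithm; library versions add smarter initialization (k-means++) and convergence checks.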
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
By clicking the “Apply” button, I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda’s Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.

Job Description

The Future Begins Here: At Takeda, we are leading digital evolution and global transformation. By building innovative solutions and future-ready capabilities, we are meeting the needs of patients, our people, and the planet. Bengaluru, India’s epicenter of innovation, has been selected as home to Takeda’s recently launched Innovation Capability Center (ICC). We invite you to join our digital transformation journey. In this role, you will have the opportunity to sharpen your skills and become the heart of an innovative engine that is contributing to global impact and improvement.

At Takeda’s ICC we Unite in Diversity: Takeda is committed to creating an inclusive and collaborative workplace, where individuals are recognized for the backgrounds and abilities they bring to our company. We are continuously improving our collaborators’ journey at Takeda, and we welcome applications from all qualified candidates. Here, you will feel welcomed, respected, and valued as an important contributor to our diverse team.

The Opportunity: As a Data Scientist, you will apply your analytical skills and expertise to extract meaningful insights from vast amounts of data. We are seeking a talented and experienced individual to join our team and contribute to our data-driven decision-making process.

Objectives:
- Collaborate with different business users, mainly Supply Chain/Manufacturing, to understand the current state and identify opportunities to transform the business into a data-driven organization.
- Translate processes and requirements into analytics solutions and metrics, with an effective data strategy, data quality, and data accessibility for decision making.
- Operationalize decision-support solutions and drive user adoption, gathering feedback and Voice of Customer metrics to improve analytics services.
- Understand the analytics drivers and the data to be modeled, apply the appropriate quantitative techniques to provide the business with actionable insights, and ensure analytics models and data are accessible to end users for evaluating “what-if” scenarios and decision making.
- Evaluate the data, analytical models, and experiments periodically to validate hypotheses, ensuring they continue to provide business value as requirements and objectives evolve.

Accountabilities:
- Collaborates with business partners in identifying analytical opportunities and developing BI-related goals and projects that will create strategically relevant insights.
- Works with internal and external partners to develop an analytics vision and programs to advance BI solutions and practices.
- Understands data and sources of data; strategizes with the IT development team and develops a process to collect, ingest, and deliver data, along with proper data models, for analytical needs.
- Interacts with business users to define pain points, the problem statement, scope, and the analytics business case.
- Develops solutions with recommended data models and business intelligence technologies, including data warehouses, data marts, OLAP modeling, dashboards/reporting, and data queries.
- Works with DevOps and database teams to ensure proper design of system databases and appropriate integration with other enterprise applications.
- Collaborates with the Enterprise Data and Analytics Team to design data models and visualization solutions that synthesize complex data for data mining and discovery.
- Assists in defining requirements and facilitates workshops and prototyping sessions.
- Develops and applies technologies such as machine-learning and deep-learning algorithms to enable advanced analytics product functionality.

EDUCATION, BEHAVIOURAL COMPETENCIES AND SKILLS:
- Bachelor's degree from an accredited institution in Data Science, Statistics, Computer Science, or a related field.
- 3+ years of experience with statistical modeling (clustering, segmentation, multivariate analysis, regression, etc.) and analytics tools such as R, Python, and Databricks required.
- Experience in developing and applying predictive and prescriptive modeling, deep learning, or other machine learning techniques a plus.
- Hands-on development of AI solutions that comply with industry standards and government regulations.
- Strong numerical and analytical skills, as well as working knowledge of Python analytics packages (Pandas, scikit-learn, statsmodels).
- Ability to build and maintain scalable and reliable data pipelines that collect, transform, manipulate, and load data from internal and external sources.
- Ability to use statistical tools to conduct data analysis and identify data quality issues throughout the data pipeline.
- Experience with BI and visualization tools (e.g., Qlik, Power BI), ETL, NoSQL, and proven design skills a plus.
- Excellent written and verbal communication skills, including the ability to interact effectively with multifunctional teams.
- Experience working with agile teams.

WHAT TAKEDA CAN OFFER YOU: Takeda is certified as a Top Employer, not only in India but also globally. No investment we make pays greater dividends than taking good care of our people. At Takeda, you take the lead in building and shaping your own career. Joining the ICC in Bengaluru will give you access to high-end technology, continuous training, and a diverse and inclusive network of colleagues who will support your career growth.

BENEFITS: It is our priority to provide competitive compensation and a benefits package that bridges your personal life with your professional career.
Amongst our benefits are:
- Competitive salary + performance annual bonus
- Flexible work environment, including hybrid working
- Comprehensive healthcare insurance plans for self, spouse, and children
- Group term life insurance and group accident insurance programs
- Health & wellness programs, including annual health screening and weekly health sessions for employees
- Employee Assistance Program
- 3 days of leave every year for voluntary service, in addition to humanitarian leave
- Broad variety of learning platforms
- Diversity, equity, and inclusion programs
- Reimbursements: home internet & mobile phone
- Employee referral program
- Leaves: paternity leave (4 weeks), maternity leave (up to 26 weeks), bereavement leave (5 calendar days)

ABOUT ICC IN TAKEDA: Takeda is leading a digital revolution. We’re not just transforming our company; we’re improving the lives of millions of patients who rely on our medicines every day. As an organization, we are committed to our cloud-driven business transformation and believe the ICCs are the catalysts of change for our global organization.

Locations: IND - Bengaluru
Worker Type: Employee
Worker Sub-Type: Regular
Time Type: Full time
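The data-quality responsibility above (spotting nulls, out-of-range values, and duplicates in a pipeline) can be illustrated with a minimal record validator. The field names, rules, and records are all hypothetical.

```python
def validate(records):
    """Return (index, issue) pairs for duplicate IDs, missing values,
    and out-of-range measurements."""
    issues = []
    seen = set()
    for i, r in enumerate(records):
        if r.get("batch_id") in seen:
            issues.append((i, "duplicate batch_id"))
        seen.add(r.get("batch_id"))
        if r.get("yield_pct") is None:
            issues.append((i, "missing yield_pct"))
        elif not 0 <= r["yield_pct"] <= 100:
            issues.append((i, "yield_pct out of range"))
    return issues

records = [
    {"batch_id": "B1", "yield_pct": 92.5},
    {"batch_id": "B1", "yield_pct": 101.0},  # duplicate ID, impossible yield
    {"batch_id": "B2", "yield_pct": None},   # missing measurement
]
print(validate(records))
```

Running such checks at each pipeline stage localizes where bad data entered, which is the point of validating "throughout the data pipeline."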
Posted 2 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
It's about Being What's Next. What's in it for you?

A Data Scientist for AI Products (Global) will work in the Artificial Intelligence team, Linde's global corporate AI division, engaged with real business challenges and opportunities in multiple countries. The focus of this role is to support the AI team in extending existing and building new AI products for a vast number of use cases across Linde’s business and value chain. You'll collaborate across different business and corporate functions in an international team composed of Project Managers, Data Scientists, and Data and Software Engineers in the AI team, together with others in Linde's Global AI team.

At Linde, the sky is not the limit. If you’re looking to build a career where your work reaches beyond your job description and betters the people with whom you work, the communities we serve, and the world in which we all live, at Linde, your opportunities are limitless. Be Linde. Be Limitless.

Making an impact. What will you do?
- Work directly with a variety of different data sources, types, and structures to derive actionable insights.
- Develop, customize, and manage AI software products based on machine and deep learning backends.
- Provide strong support for the replication of existing products and pipelines to other systems and geographies.
- Support architectural design and define data requirements for new developments.
- Interact with business functions to identify opportunities with potential business impact, and support the development and deployment of models into production.

Winning in your role. Do you have what it takes?
- A Bachelor's or Master's degree in Data Science, Computational Statistics/Mathematics, Computer Science, Operations Research, or a related field.
- A strong understanding of and practical experience with multivariate statistics, machine learning, and probability concepts.
- Experience articulating business questions and using quantitative techniques to arrive at a solution using available data.
- Hands-on experience with preprocessing, feature engineering, feature selection, and data cleansing on real-world datasets.
- Preferably, work experience in an engineering or technology role.
- A strong background in Python and in handling large data sets using SQL in a business environment (pandas, numpy, matplotlib, seaborn, sklearn, keras, tensorflow, pytorch, statsmodels, etc.).
- Sound knowledge of data architectures and concepts, and practical experience visualizing large datasets, e.g. with Tableau or Power BI.
- A results-driven mindset and excellent communication skills with high social competence, giving you the ability to take a project from idea to experimentation to prototype to implementation.
- Very good English language skills.
- As a plus: hands-on experience with DevOps and MS Azure; experience with Azure ML, Kedro, or Airflow; experience with MLflow or similar.

Why you will love working for us! Linde is a leading global industrial gases and engineering company, operating in more than 100 countries worldwide. We live our mission of making our world more productive every day by providing high-quality solutions, technologies, and services which make our customers more successful and help to sustain and protect our planet. On the 1st of April 2020, Linde India Limited and Praxair India Private Limited successfully formed a joint venture, LSAS Services Private Limited.
This company will provide Operations and Management (O&M) services to both existing organizations, which will continue to operate separately. LSAS carries forward the commitment to sustainable development championed by both legacy organizations. It also takes forward the tradition of developing processes and technologies that have revolutionized the industrial gases industry, serving a variety of end markets including chemicals & refining, food & beverage, electronics, healthcare, manufacturing, and primary metals. Whatever you seek to accomplish, and wherever you want those accomplishments to take you, a career at Linde provides limitless ways to achieve your potential, while making a positive impact in the world. Be Linde. Be Limitless.

Have we inspired you? Let's talk about it! We look forward to receiving your complete application (motivation letter, CV, certificates) via our online job market. Any designations used of course apply to persons of all genders; the form of speech used here is for simplicity only. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, age, disability, protected veteran status, pregnancy, sexual orientation, gender identity or expression, or any other reason prohibited by applicable law. Praxair India Private Limited acts responsibly towards its shareholders, business partners, employees, society and the environment in every one of its business areas, regions and locations across the globe. The company is committed to technologies and products that unite the goals of customer value and sustainable development.
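The preprocessing step named in this posting (scaling features before modelling) can be sketched from scratch. The values are arbitrary; in practice something like sklearn's StandardScaler does this, fitted on training data only.

```python
def standardise(values):
    """Z-score standardisation: subtract the mean, divide by the
    population standard deviation, so features share a common scale."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / std for v in values]

scaled = standardise([10.0, 20.0, 30.0, 40.0])
print([round(s, 3) for s in scaled])
```

After scaling, the values are centered at zero with unit variance, which keeps distance-based and gradient-based models from being dominated by whichever raw feature happens to have the largest units.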
Posted 2 weeks ago
2.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About Neo Group: Neo is a new-age, focused wealth and asset management platform in India, catering to HNIs, UHNIs, and multi-family offices. Neo stands on its three pillars of unbiased advisory, transparency, and cost-efficiency to offer comprehensive, trustworthy solutions. Founded by Nitin Jain (ex-CEO of Edelweiss Wealth), Neo has amassed over USD 3 billion (₹25,000 Cr.) of Assets Under Advice within a short span of 2 years since inception, including USD 360 million (₹3,000 Cr.) of Assets Under Management. We recently partnered with Peak XV Partners via a USD 35 million growth round. To know more, please visit: www.neo-group.in

Position: Senior Data Scientist
Location: Mumbai
Experience: 4-8 years

Job Description: You are a data pro with deep statistical knowledge and analytical aptitude. You know how to make sense of massive amounts of data and gather deep insights. You will use statistics, data mining, machine learning, and deep learning techniques to deliver data-driven insights for clients. You will dig deep to understand their challenges and create innovative yet practical solutions.

Responsibilities:
• Meeting with the business team to discuss user interface ideas and applications
• Selecting features, building and optimizing classifiers using machine learning techniques
• Data mining using state-of-the-art methods
• Doing ad-hoc analysis and presenting results in a clear manner
• Optimizing applications for maximum speed and scalability
• Ensuring that all user input is validated before code is submitted
• Collaborating with other team members and stakeholders
• Taking ownership of features, with accountability

Requirements:
• 4+ years’ experience in developing data models
• Excellent understanding of machine learning techniques and algorithms, such as k-NN, Naive Bayes, SVM, decision forests, etc.
• Excellent understanding of NLP and language processing
• Proficient understanding of Python or PySpark
• Good experience with Python and databases such as MongoDB or MySQL
• Good applied statistics skills, such as distributions, statistical testing, regression, etc.
• Built acquisition scorecard models
• Built behaviour scorecard models
• Created threat detection models
• Created risk profiling or classification models
• Built threat/fraud triggers from various sources of data
• Experience with data analysis libraries: NumPy, Pandas, Statsmodels, Dask
• Good understanding of Word2vec, RNNs, Transformers, BERT, ResNet, MobileNet, U-Net, Mask R-CNN, Siamese networks, Grad-CAM, image augmentation techniques, GANs, TensorBoard
• Ability to provide accurate estimates for tasks and detailed breakdowns for planning and managing sprints
• Deployment experience (Flask, TensorFlow Serving, Lambda functions, Docker) is a plus
• Previous experience leading a DS team is a plus

Personal Qualities:
• An ability to perform well in a fast-paced environment
• Excellent analytical and multitasking skills
• Stays up-to-date on emerging technologies
• Data-oriented personality

Why join us? We will provide you with the opportunity to challenge yourself and learn new skills as you become an integral part of our growth story. We are a group of ambitious people who believe in building a business environment around new-age concepts, frameworks, and technologies, built on a strong foundation of industry expertise. We promise you the prospect of being surrounded by smart, ambitious, motivated people, day in and day out. That’s the kind of work you can expect to do at Neo.
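Of the algorithms this posting names, k-NN is compact enough to sketch from scratch. The feature vectors and risk labels below are made up; real scorecard work would use scikit-learn on actual client data.

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    def sq_dist(a, b):
        # Squared Euclidean distance; ordering is the same as true distance.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda item: sq_dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical 2-D feature vectors with risk labels.
train = [((1.0, 1.0), "low_risk"), ((1.2, 0.8), "low_risk"),
         ((5.0, 5.0), "high_risk"), ((5.2, 4.9), "high_risk"),
         ((4.8, 5.1), "high_risk")]
print(knn_predict(train, (5.0, 4.8)))
```

Because k-NN compares raw distances, features should be standardised first in real use, otherwise the largest-scaled feature dominates the vote.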
Posted 2 weeks ago
2.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About Neo Group: Neo is a new-age, focused wealth and asset management platform in India, catering to HNIs, UHNIs, and multi-family offices. Neo stands on its three pillars of unbiased advisory, transparency, and cost-efficiency to offer comprehensive, trustworthy solutions. Founded by Nitin Jain (ex-CEO of Edelweiss Wealth), Neo has amassed over USD 3 billion (₹25,000 Cr.) of Assets Under Advice within a short span of 2 years since inception, including USD 360 million (₹3,000 Cr.) of Assets Under Management. We recently partnered with Peak XV Partners via a USD 35 million growth round. To know more, please visit: www.neo-group.in

Position: Data Scientist
Location: Mumbai
Experience: 2-5 years

Job Description: You are a data pro with deep statistical knowledge and analytical aptitude. You know how to make sense of massive amounts of data and gather deep insights. You will use statistics, data mining, machine learning, and deep learning techniques to deliver data-driven insights for clients. You will dig deep to understand their challenges and create innovative yet practical solutions.

Responsibilities:
• Meeting with the business team to discuss user interface ideas and applications
• Selecting features, building and optimizing classifiers using machine learning techniques
• Data mining using state-of-the-art methods
• Doing ad-hoc analysis and presenting results in a clear manner
• Optimizing applications for maximum speed and scalability
• Ensuring that all user input is validated before code is submitted
• Collaborating with other team members and stakeholders
• Taking ownership of features, with accountability

Requirements:
• 2+ years’ experience in developing data models
• Excellent understanding of machine learning techniques and algorithms, such as k-NN, Naive Bayes, SVM, decision forests, etc.
• Excellent understanding of NLP and language processing
• Proficient understanding of Python or PySpark
• Basic understanding of Python and databases such as MongoDB or MySQL
• Good applied statistics skills, such as distributions, statistical testing, regression, etc.
• Built acquisition scorecard models
• Built behaviour scorecard models
• Created threat detection models
• Created risk profiling or classification models
• Built threat/fraud triggers from various sources of data
• Experience with data analysis libraries: NumPy, Pandas, Statsmodels, Dask
• Good understanding of Word2vec, RNNs, Transformers, BERT, ResNet, MobileNet, U-Net, Mask R-CNN, Siamese networks, Grad-CAM, image augmentation techniques, GANs, TensorBoard
• Ability to provide accurate estimates for tasks and detailed breakdowns for planning and managing sprints
• Deployment experience (Flask, TensorFlow Serving, Lambda functions, Docker) is a plus
• Previous experience leading a DS team is a plus

Personal Qualities:
• An ability to perform well in a fast-paced environment
• Excellent analytical and multitasking skills
• Stays up-to-date on emerging technologies
• Data-oriented personality

Why join us? We will provide you with the opportunity to challenge yourself and learn new skills as you become an integral part of our growth story. We are a group of ambitious people who believe in building a business environment around new-age concepts, frameworks, and technologies, built on a strong foundation of industry expertise. We promise you the prospect of being surrounded by smart, ambitious, motivated people, day in and day out. That’s the kind of work you can expect to do at Neo.
Posted 2 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Praxair India Private Limited | Business Area: Digitalisation Data Scientist for AI Products (Global) Bangalore, Karnataka, India | Working Scheme: On-Site | Job Type: Regular / Permanent / Unlimited / FTE | Reference Code: req23348 It's about Being What's next. What's in it for you? A Data Scientist for AI Products (Global) will be responsible for working in the Artificial Intelligence team, Linde's global corporate AI division engaged with real business challenges and opportunities in multiple countries. The focus of this role is to support the AI team with extending existing and building new AI products for a vast range of use cases across Linde’s business and value chain. You'll collaborate across different business and corporate functions in an international team composed of Project Managers, Data Scientists, and Data and Software Engineers in Linde's Global AI team. At Linde, the sky is not the limit. If you’re looking to build a career where your work reaches beyond your job description and betters the people with whom you work, the communities we serve, and the world in which we all live, at Linde, your opportunities are limitless. Be Linde. Be Limitless. Team Making an impact. What will you do?
You will work directly with a variety of different data sources, types and structures to derive actionable insights. Developing, customizing and managing AI software products based on Machine and Deep Learning backends will be among your tasks. Your role includes strong support on replicating existing products and pipelines to other systems and geographies. In addition, you will support architectural design and the definition of data requirements for new developments. It will be your responsibility to interact with business functions to identify opportunities with potential business impact, and to support the development and deployment of models into production. Winning in your role. Do you have what it takes? You have a Bachelor's or Master's degree in Data Science, Computational Statistics/Mathematics, Computer Science, Operations Research or a related field. You have a strong understanding of and practical experience with Multivariate Statistics, Machine Learning and Probability concepts. Further, you have gained experience in articulating business questions and using quantitative techniques to arrive at a solution using available data. You demonstrate hands-on experience with preprocessing, feature engineering, feature selection and data cleansing on real-world datasets. Preferably you have work experience in an engineering or technology role. You bring a strong background in Python and in handling large data sets using SQL in a business environment (pandas, numpy, matplotlib, seaborn, sklearn, keras, tensorflow, pytorch, statsmodels etc.) to the role. In addition, you have a sound knowledge of data architectures and concepts, and practical experience in the visualization of large datasets, e.g.
with Tableau or Power BI. A results-driven mindset and excellent communication skills with high social competence give you the ability to structure a project from idea to experimentation to prototype to implementation. Very good English language skills are required. As a plus, you have hands-on experience with DevOps and MS Azure, experience in Azure ML, Kedro or Airflow, and experience in MLflow or similar. Why you will love working for us! Linde is a leading global industrial gases and engineering company, operating in more than 100 countries worldwide. We live our mission of making our world more productive every day by providing high-quality solutions, technologies and services which are making our customers more successful and helping to sustain and protect our planet. On the 1st of April 2020, Linde India Limited and Praxair India Private Limited successfully formed a joint venture, LSAS Services Private Limited. This company will provide Operations and Management (O&M) services to both existing organizations, which will continue to operate separately. LSAS carries forward the commitment towards sustainable development, championed by both legacy organizations. It also takes ahead the tradition of the development of processes and technologies that have revolutionized the industrial gases industry, serving a variety of end markets including chemicals & refining, food & beverage, electronics, healthcare, manufacturing, and primary metals. Whatever you seek to accomplish, and wherever you want those accomplishments to take you, a career at Linde provides limitless ways to achieve your potential, while making a positive impact in the world. Be Linde. Be Limitless. Have we inspired you? Let's talk about it! We are looking forward to receiving your complete application (motivation letter, CV, certificates) via our online job market. Any designations used of course apply to persons of all genders; the form of speech used here is for simplicity only.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, age, disability, protected veteran status, pregnancy, sexual orientation, gender identity or expression, or any other reason prohibited by applicable law. Praxair India Private Limited acts responsibly towards its shareholders, business partners, employees, society and the environment in every one of its business areas, regions and locations across the globe. The company is committed to technologies and products that unite the goals of customer value and sustainable development.
Posted 2 weeks ago
5.0 - 7.0 years
3 - 6 Lacs
Pune
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY GDS – AI and DATA – Statistical Modeler-Senior As part of our EY-GDS AI and Data team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Manufacturing, Healthcare, Retail, Auto, Supply Chain, and Finance. Technical Skills: Statistical Programming Languages: Python, R Libraries & Frameworks: Pandas, NumPy, Scikit-learn, StatsModels, Tidyverse, caret Data Manipulation Tools: SQL, Excel Data Visualization Tools: Matplotlib, Seaborn, ggplot2 Machine Learning Techniques: Supervised and unsupervised learning, model evaluation (cross-validation, ROC curves) 5-7 years of experience in building statistical forecast models for the pharma industry Deep understanding of patient flows and treatment journeys across both Onc and Non-Onc TAs. What we look for A team of people with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment What working at EY offers At EY, we’re dedicated to helping our clients, from startups to Fortune 500 companies — and the work we do with them is as varied as they are.
You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around Opportunities to develop new skills and progress your career The freedom and flexibility to handle your role in a way that’s right for you About EY As a global leader in assurance, tax, transaction and advisory services, we’re using the finance products, expertise and systems we’ve developed to build a better working world. That starts with a culture that believes in giving you the training, opportunities and creative freedom to make things better. Whenever you join, however long you stay, the exceptional EY experience lasts a lifetime. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 2 weeks ago
3.0 - 5.0 years
5 - 8 Lacs
Bengaluru
On-site
Your Job As a Data Analyst in Molex's Copper Solutions Business Unit software solution group, you will be responsible for extracting actionable insights from large and complex manufacturing datasets, identifying trends, optimizing production processes, improving operational efficiency, minimizing downtime, and enhancing overall product quality. You will collaborate closely with cross-functional teams to ensure the effective use of data in driving continuous improvement and achieving business objectives within the manufacturing environment. Our Team Molex's Copper Solutions Business Unit (CSBU) is a global team that works together to deliver exceptional products to worldwide telecommunication and data center customers. SSG under CSBU is one of the most technically advanced software solution groups within Molex. Our group leverages software expertise to enhance the concept, design, manufacturing, and support of high-speed electrical interconnects. What You Will Do 1. Collect, clean, and transform data from various sources to support analysis and decision-making processes. 2. Conduct thorough data analysis using Python to uncover trends, patterns, and insights. 3. Create & maintain reports based on business needs. 4. Prepare comprehensive reports that detail analytical processes and outcomes. 5. Develop and maintain visualizations/dashboards. 6. Collaborate with cross-functional teams to understand data needs and deliver actionable insights. 7. Perform ad hoc analysis to support business decisions. 8. Write efficient and optimized SQL queries to extract, manipulate, and analyze data from various databases. 9. Identify gaps and inefficiencies in current reporting processes and implement improvements and new solutions. 10. Ensure data quality and integrity across all reports and tools. Who You Are (Basic Qualifications) B.E./B.Tech Degree in Computer Science Engineering, Information Science, Data Science or a related discipline.
3-5 years of progressive data analysis experience with Python (pandas, numpy, matplotlib, OpenPyXL, SciPy, Statsmodels, Seaborn). What Will Put You Ahead • Experience with Power BI, Tableau, or similar tools for creating interactive dashboards and reports tailored for manufacturing operations. • Experience with predictive analytics, e.g. machine learning models (using Scikit-learn) to predict failures, optimize production, or forecast demand. • Experience with big data tools like Hadoop, Apache Kafka, or cloud platforms (e.g., AWS, Azure) for managing and analyzing large-scale data. • Knowledge of A/B testing & forecasting. • Familiarity with typical manufacturing data (e.g., machine performance metrics, production line data, quality control metrics). At Koch companies, we are entrepreneurs. This means we openly challenge the status quo, find new ways to create value and get rewarded for our individual contributions. Any compensation range provided for a role is an estimate determined by available market data. The actual amount may be higher or lower than the range provided considering each candidate's knowledge, skills, abilities, and geographic location. If you have questions, please speak to your recruiter about the flexibility and detail of our compensation philosophy. Who We Are At Koch, employees are empowered to do what they do best to make life better. Learn how our business philosophy helps employees unleash their potential while creating value for themselves and the company. Additionally, everyone has individual work and personal needs. We seek to enable the best work environment that helps you and the business work together to produce superior results.
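To illustrate the kind of pandas-based cleaning and analysis described in the duties above, here is a small, hypothetical example of summarizing downtime per machine; the column names and data are invented for the sketch, not Molex's actual schema.

```python
# Hypothetical manufacturing dataset: clean missing downtime values and
# aggregate downtime and defect counts per machine with pandas.
import pandas as pd

raw = pd.DataFrame({
    "machine": ["M1", "M1", "M2", "M2", "M2"],
    "downtime_min": [12, None, 30, 18, None],
    "defects": [1, 0, 4, 2, 1],
})

clean = raw.fillna({"downtime_min": 0})          # treat missing downtime as zero
summary = clean.groupby("machine").agg(
    total_downtime=("downtime_min", "sum"),
    total_defects=("defects", "sum"),
)
print(summary)
```

The same aggregation could equally be pushed down into SQL (`GROUP BY machine`) when the data lives in a database, which is the trade-off the SQL duty in point 8 is about.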
Posted 2 weeks ago
2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Company Description As a leading global investment management firm, AB fosters diverse perspectives and embraces innovation to help our clients navigate the uncertainty of capital markets. Through high-quality research and diversified investment services, we serve institutions, individuals and private wealth clients in major markets worldwide. Our ambition is simple: to be our clients’ most valued asset-management partner. With over 4,400 employees across 51 locations in 25 countries, our people are our advantage. We foster a culture of intellectual curiosity and collaboration to create an environment where everyone can thrive and do their best work. Whether you're producing thought-provoking research, identifying compelling investment opportunities, infusing new technologies into our business or providing thoughtful advice to clients, we’re looking for unique voices to help lead us forward. If you’re ready to challenge your limits and build your future, join us. Describe The Role Day-to-day responsibilities will include: Conducting asset allocation and manager evaluation research and creating bespoke client portfolios; Undertaking bespoke requests for data analysis; Building dashboards for data visualization (Python Dash); Handling data collation, cleansing and analysis (SQL, Python); Creating new databases using data from different sources, and setting up infrastructure for their maintenance; Cleaning and manipulating data, building models and producing automated reports using Python; Using statistical modelling and Machine Learning to address quantitative problems (Python); Conducting and delivering top-notch research projects with quantitative applications to fundamental strategies. Preferred Skill Sets 2+ years of experience in RDBMS database design, preferably on MS SQL Server 2+ years of Python development experience.
Advanced programming skills with Python libraries (pandas, numpy, statsmodels, dash, pypfopt, cvxpy, keras, scikit-learn); pandas, numpy and statsmodels are must-haves The candidate should be capable of manipulating large quantities of data High level of attention to detail and accuracy Working experience in building quantitative models; experience with factor research, portfolio construction, systematic models Academic qualification in Mathematics/Physics/Statistics/Econometrics/Engineering or a related field Understanding of company financial statements, accounting and risk analysis would be an added advantage Strong (English) communication skills with proven ability to interact with global clients Pune, India
Posted 2 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
About VOIS VO IS (Vodafone Intelligent Solutions) is a strategic arm of Vodafone Group Plc, creating value and enhancing quality and efficiency across 28 countries, and operating from 7 locations: Albania, Egypt, Hungary, India, Romania, Spain and the UK. Over 29,000 highly skilled individuals are dedicated to being Vodafone Group’s partner of choice for talent, technology, and transformation. We deliver the best services across IT, Business Intelligence Services, Customer Operations, Business Operations, HR, Finance, Supply Chain, HR Operations, and many more. Established in 2006, VO IS has evolved into a global, multi-functional organisation, a Centre of Excellence for Intelligent Solutions focused on adding value and delivering business outcomes for Vodafone. About VOIS India In 2009, VO IS started operating in India and now has established global delivery centres in Pune, Bangalore and Ahmedabad. With more than 14,500 employees, VO IS India supports global markets and group functions of Vodafone, and delivers best-in-class customer experience through multi-functional services in the areas of Information Technology, Networks, Business Intelligence and Analytics, Digital Business Solutions (Robotics & AI), Commercial Operations (Consumer & Business), Intelligent Operations, Finance Operations, Supply Chain Operations and HR Operations and more. Job Description Big Data Handling: Passion and attitude to learn new data practices and tools (ingestion, transformation, governance, security & privacy), on both on-prem and Cloud (AWS preferable). Influences and contributes to innovative ways of unlocking value through companywide and external data Diagnostic Models Experience with diagnostic system using decision theory and causal models (including tools like probability, DAG, ADMG, Deterministic SME, etc) to predict the effects of an action to improve insight-led decisions. Able to productize the diagnostic systems built for reuse. 
Predictive & Prescriptive Analytics Models Expert in AI solutions - ML, DL, NLP, ES, RL etc. Should be able to build robust prescriptive learning systems that are scalable and real-time. Should be able to determine the "Next Best Action" following Prescriptive Analytics. Autonomous Cognitive Systems Drive Autonomous system utility and continuously improve precision by creating a stable learning environment. Should be able to build Intelligent Autonomous Systems to prescribe proactive actions based on ML predictions and solicit feedback from the support functions with minimal human involvement. Big Data Tech, Environments & Frameworks Advanced applications of CNNs, RNNs, MLPs, Deep Learning. Excellent command of machine learning and deep learning packages like tensorflow, pytorch, scikit-learn, numpy, pandas, statsmodels, theano, XGBoost etc. Demonstrated expertise in deep learning algorithms/frameworks. At least 1 certification in AWS is preferred. Programming: Python, R, SQL Frameworks: TensorFlow, Keras, Scikit-learn Visualization: Tableau, Power BI Cloud: AWS, Azure Statistical Modeling: Regression, classification, clustering, time series Soft Skills: Communication, stakeholder management, problem-solving VOIS Equal Opportunity Employer Commitment India VO IS is proud to be an Equal Employment Opportunity Employer. We celebrate differences and we welcome and value diverse people and insights. We believe that being authentically human and inclusive powers our employees’ growth and enables them to create a positive impact on themselves and society. We do not discriminate based on age, colour, gender (including pregnancy, childbirth, or related medical conditions), gender identity, gender expression, national origin, race, religion, sexual orientation, status as an individual with a disability, or other applicable legally protected characteristics.
As a result of living and breathing our commitment, our employees have helped us get certified as a Great Place to Work in India for four years running. We have also been highlighted among the Top 10 Best Workplaces for Millennials, Equity, and Inclusion, Top 50 Best Workplaces for Women, Top 25 Best Workplaces in IT & IT-BPM and 10th Overall Best Workplaces in India by the Great Place to Work Institute in 2024. These achievements position us among a select group of trustworthy and high-performing companies which put their employees at the heart of everything they do. By joining us, you are part of our commitment. We look forward to welcoming you into our family, which represents a variety of cultures, backgrounds, perspectives, and skills! Apply now, and we’ll be in touch!
Posted 3 weeks ago
0 years
0 Lacs
India
Remote
Data Science Intern (Remote) Company: Coreline Solutions Location: Remote / Pune, India Duration: 3 to 6 Months Stipend: Unpaid (Full-time offer based on performance) Work Mode: Remote About Coreline Solutions We’re a tech and consulting company focused on digital transformation, custom software development, and data-driven solutions. Role Overview We’re looking for a Data Science Intern to work on real-world data projects involving analytics, modeling, and business insights. Great opportunity for students or freshers to gain practical experience in the data science domain. Key Responsibilities Collect, clean, and analyze large datasets using Python, SQL, and Excel. Develop predictive and statistical models using libraries like scikit-learn or statsmodels. Visualize data and present insights using tools like Matplotlib, Seaborn, or Power BI. Support business teams with data-driven recommendations. Collaborate with data analysts, ML engineers, and developers. Requirements Pursuing or completed degree in Data Science, Statistics, CS, or related field. Proficient in Python and basic understanding of machine learning. Familiarity with data handling tools (Pandas, NumPy) and SQL. Good analytical and problem-solving skills. Perks Internship Certificate Letter of Recommendation (Top Performers) Mentorship & real-time project experience Potential full-time role To Apply Email your resume to 📧 hr@corelinesolutions.site Subject: “Application for Data Science Intern – [Your Full Name]”
Posted 3 weeks ago
4.0 years
6 - 9 Lacs
Hyderābād
On-site
About Citco Citco is a global leader in fund services, corporate governance and related asset services with staff across 80 offices worldwide. With more than $1.7 trillion in assets under administration, we deliver end-to-end solutions and exceptional service to meet our clients’ needs. For more information about Citco, please visit www.citco.com About the Team & Business Line: Citco Fund Services is a division of the Citco Group of Companies and is the largest independent administrator of Hedge Funds in the world. Our continuous investment in learning and technology solutions means our people are equipped to deliver a seamless client experience. This position reports into the Loan Services Business Line. As a core member of our Loan Services Data and Reporting team, you will be working with some of the industry’s most accomplished professionals to deliver award-winning services for complex fund structures that our clients can depend upon. Job Duties in Brief: Your Role: Develop and execute database queries and conduct data analyses Create scripts to analyze and modify data, import/export scripts and execute stored procedures Model data by writing SQL queries/Python code to support data integration and dashboard requirements Develop data pipelines that provide fast, optimized, and robust end-to-end solutions Leverage and contribute to designing/building relational database schemas for analytics Handle and manipulate data in various structures and repositories (data cube, data mart, data warehouse, data lake) Analyze, implement and contribute to building APIs to improve the data integration pipeline Perform data preparation tasks including data cleaning, normalization, deduplication, type conversion etc. Perform data integration through extracting, transforming and loading (ETL) data from various sources.
Identify opportunities to improve processes and strategies with technology solutions and identify development needs in order to improve and streamline operations Create tabular reports, matrix reports, parameterized reports, and visual reports/dashboards in a reporting application such as Power BI Desktop/Cloud or QLIK Integrating PBI/QLIK reports into other applications using embedded analytics like Power BI service (SaaS), or by API automation, is also an advantage Implementation of NLP techniques for text representation, semantic extraction techniques, data structures and modelling Contribute to deployment and maintenance of machine learning solutions in production environments Building and designing cloud applications using Microsoft Azure/AWS cloud technologies. About You: Background / Qualifications Bachelor’s Degree in technology/related field or equivalent work experience 4+ years of SQL and/or Python experience is a must Strong knowledge of data concepts and tools, and experience with RDBMS such as MS SQL Server, Oracle etc. Well-versed with concepts and techniques of Business Intelligence and Data Warehousing. Strong database design and SQL skills, including objects development, performance tuning and data analysis In-depth understanding of database management systems, OLAP & ETL frameworks Familiarity or hands-on experience working with REST or SOAP APIs Well versed with concepts of API Management and Integration with various data sources in cloud platforms, to help with connecting to traditional SQL and new-age data sources, such as Snowflake Familiarity with Machine Learning concepts like feature selection/deep learning/AI and ML/DL frameworks (like Tensorflow or PyTorch) and libraries (like scikit-learn, StatsModels) is an advantage Familiarity with BI technologies (e.g.
Microsoft Power BI, Oracle BI) is an advantage Hands-on experience in at least one ETL tool (SSIS, Informatica, Talend, Glue, Azure Data Factory) and associated data integration principles is an advantage Minimum 1+ year of experience with Cloud platform technologies (AWS/Azure), including Azure Machine Learning, is desirable. The following AWS experience is a plus: Implementing identity and access management (IAM) policies Managing user accounts with IAM Knowledge of writing infrastructure as code (IaC) using CloudFormation or Terraform Implementing cloud storage using Amazon Simple Storage Service (S3) Experience with serverless approaches using AWS Lambda, e.g. AWS (SAM) Configuring Amazon Elastic Compute Cloud (EC2) instances Previous Work Experience: Experience querying databases and strong programming skills: Python, SQL, PySpark etc. Prior experience supporting ETL production environments & web technologies such as XML is an advantage Previous working experience with Azure Data Services including ADF, ADLS, Blob, Databricks, Hive, Python, Spark and/or features of Azure ML Studio, ML Services and ML Ops is an advantage Experience with dashboard and reporting applications like Qlik, Tableau, Power BI Other: Well-rounded individual possessing a high degree of initiative Proactive person willing to accept responsibility with very little hand-holding A strong analytical and logical mindset Demonstrated proficiency in interpersonal and communication skills, including oral and written English.
Ability to work in fast-paced, complex Business & IT environments Knowledge of Loan Servicing and/or Loan Administration is an advantage Understanding of Agile/Scrum methodology as it relates to the software development lifecycle What We Offer: A rewarding and challenging environment that spans multiple geographies and multiple business lines Great working environment, competitive salary and benefits, and opportunities for educational support Be part of an industry-leading global organisation, renowned for excellence Opportunities for personal and professional career development Our Benefits Your well-being is of paramount importance to us, and central to our success. We provide a range of benefits, training and education support, and flexible working arrangements to help you achieve success in your career while balancing personal needs. Ask us about specific benefits in your location. We embrace diversity, prioritizing the hiring of people from diverse backgrounds. Our inclusive culture is a source of pride and strength, fostering innovation and mutual respect. Citco welcomes and encourages applications from people with disabilities. Accommodations are available upon request for candidates taking part in all aspects of the selection process.
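The SQL-and-Python ETL duties this role describes can be sketched in a few lines; the table, columns, and data below are hypothetical, using Python's built-in sqlite3 as a stand-in for a production RDBMS such as MS SQL Server or Oracle.

```python
# Minimal, self-contained extract-transform-load sketch:
# load raw rows, normalize currency codes in SQL, aggregate per currency.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE loans (id INTEGER, amount REAL, currency TEXT)")
conn.executemany(
    "INSERT INTO loans VALUES (?, ?, ?)",
    [(1, 100.0, "usd"), (2, 250.0, "USD"), (3, 75.0, "eur")],
)

# Transform + aggregate: normalize case and sum amounts per currency.
rows = conn.execute(
    "SELECT UPPER(currency) AS ccy, SUM(amount) FROM loans GROUP BY ccy ORDER BY ccy"
).fetchall()
print(rows)  # [('EUR', 75.0), ('USD', 350.0)]
```

In a real pipeline the "load" step would write the cleaned aggregates to a warehouse or dashboard data source rather than printing them, but the extract/transform/load shape is the same.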
Posted 3 weeks ago