0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Reference # 318879BR
Job Type: Full Time

Your role
Do you have sharp analytic skills? Do you know how to solve problems and develop innovative solutions? Do you enjoy responsibility and independent work?
We're looking for a Data Analyst/Team Lead with practical knowledge of Python who is:
- able to apply data science and statistical techniques to real problems
- well organized and dependable
- able to turn large datasets into an asset across the organization
- able to collect and combine data from multiple sources, analyze it for insights, and produce great visuals

Your team
UBS Evidence Lab is the most experienced global team of alternative-data experts. We're a collection of data and software engineers, quantitative market researchers, social media experts, data pricing whizzes, and more. This diversity allows us to look at problems from differing perspectives and turn data into evidence. You'll be working closely with various Data Analyst teams, which are part of a street-leading primary research platform called Evidence Lab. Your role will focus on systematically analyzing and building statistical models on the companies and markets that Evidence Lab researches. Your main goal is to provide insights for better business decisions. Our offices are located globally, and you will be working closely with analysts based in Poland, the US, the UK, and APAC. You would be required to work from UBS BSC Hyderabad (India).
Your expertise
- data processing and programming skills in Python, including knowledge of libraries used for data analysis / data science
- experience in data analysis and data science techniques
- knowledge of statistical and econometric techniques: time series (stationarity), clustering, regression, variable selection methods, out-of-sample testing
- good practical understanding of SQL
- understanding of NLP techniques would be a benefit
- the ability to deliver under time pressure and work independently
- excellent written and oral English
- high degree of proactivity and creativity
- willingness to collaborate and work in a team
- ability to take ownership of projects and processes for continuous improvement
- excellent attention to detail
- team management
- flexible approach to working hours and timings

About Us
UBS is the world's largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.

Join us
At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We're dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That's why collaboration is at the heart of everything we do. Because together, we're more than ourselves. We're committed to disability inclusion, and if you need reasonable accommodations/adjustments during our recruitment process, you can always contact us.

Disclaimer / Policy Statements
UBS is an Equal Opportunity Employer.
We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
Posted 2 weeks ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Summary

Position Summary: Data Scientist

Do you have a passion for artificial intelligence, machine learning, and data analysis? Do you yearn to have the impact of your work recognized and valued by more than just your development team? If yes, we have just the role for you.

In Deloitte's Audit and Assurance business, we make businesses and markets better. An audit is more than an obligation; it is an opportunity to see further and deeper into businesses. In our role as independent auditors, we enhance trust in the companies we audit, helping a multitrillion-dollar capital markets system function with greater confidence. As we aspire to the very highest standards of audit quality, we deliver deeper insights that can help clients become more effective organizations.

You will be joining our collaborative Data Science team, which leverages the most advanced technologies in machine learning and artificial intelligence to achieve the Deloitte Audit and Assurance vision of an AI-enabled audit. Some of our current projects focus on generative AI, prompt engineering, LLMs, anomaly detection, clustering, and knowledge graphs.

As a Data Scientist, you will support the Data Science group and actively participate in the entire lifecycle of a data science project, including exploratory data analysis, feature engineering, model creation, and model optimization to boost performance and accuracy. It is important that the person in this role is proactive and independent. You should be a problem solver with an open mind and an eagerness to pick up new skills.
Specifically, you will be expected to:
- Develop, test, and deploy machine learning/AI models
- Collaborate daily with Data Science Managers and Senior Data Scientists to receive guidance and feedback on your work, develop your skills, and ensure alignment with workstream objectives
- Contribute to the planning and direction of a project and effectively prioritize your tasks and objectives
- Contribute to internal discussions on emerging machine learning methodologies

Qualifications

Required:
- 2+ years of relevant experience
- Undergraduate degree in a quantitative field (computer science, engineering, mathematics, physics, machine learning, statistics)
- Experience using Python and relevant libraries (NumPy, Pandas, Scikit-learn, etc.)
- Coursework related to machine learning, deep learning, and programming
- At least an introductory understanding of LLMs and prompt engineering
- Hands-on application experience using common machine learning frameworks such as TensorFlow, PyTorch, OpenAI, and LangChain
- Demonstrated ability to write high-quality code
- Demonstrated ability to develop machine learning models
- Ability to travel as needed

Preferred:
- 3+ years of relevant experience
- Master's degree in a quantitative field (computer science, engineering, mathematics, physics, machine learning, statistics)
- Relevant work experience (internships, school jobs, etc.)
- Familiarity with the Microsoft Azure cloud-based ecosystem, including Azure DevOps
- Familiarity with machine learning model development life cycles
- Experience with version control practices and tools (Git, etc.)
- Public AI-related projects you have developed, available on GitHub

Recruiting tips
From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits
At Deloitte, we know that great people make a great organization.
We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development
From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 300881
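The posting above names anomaly detection among its current projects. As a hedged illustration of that task (synthetic data, not Deloitte's actual pipeline), here is an `IsolationForest` sketch with scikit-learn:

```python
# Illustrative only: flag atypical points with an isolation forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))  # typical records
outliers = rng.uniform(low=6.0, high=8.0, size=(5, 2))  # clearly atypical points
X = np.vstack([normal, outliers])

model = IsolationForest(contamination=0.05, random_state=0).fit(X)
labels = model.predict(X)  # +1 = inlier, -1 = anomaly
print("flagged anomalies:", int((labels == -1).sum()))
```

The `contamination` value is an assumption about the expected anomaly rate; in a real audit-analytics setting it would be tuned or replaced with a score threshold.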
Posted 2 weeks ago
12.0 years
0 Lacs
Pune, Maharashtra, India
On-site
As a Senior Manager for Data Science, Data Modelling & Analytics, you will lead a team of data scientists and analysts while actively contributing to the development and implementation of advanced analytics solutions. This role requires a blend of strategic leadership and hands-on technical expertise to drive data-driven decision-making across the organization.

Job Description:

Key Responsibilities

Hands-On Technical Contribution
- Design, develop, and deploy advanced machine learning models and statistical analyses to solve complex business problems.
- Utilize programming languages such as Python, R, and SQL to manipulate data and build predictive models.
- Understand end-to-end data pipelines, including data collection, cleaning, transformation, and visualization.
- Collaborate with IT and data engineering teams to integrate analytics solutions into production environments.
- Provide thought leadership on solutions and metrics based on an understanding of the business requirement.

Team Leadership & Development
- Lead, mentor, and manage a team of data scientists and analysts, fostering a collaborative and innovative environment.
- Provide guidance on career development, performance evaluations, and skill enhancement.
- Promote continuous learning and adoption of best practices in data science methodologies.
- Engage and manage a hierarchical team while fostering a culture of collaboration.

Strategic Planning & Execution
- Collaborate with senior leadership to define a data science strategy aligned with business objectives.
- Identify and prioritize high-impact analytics projects that drive business value.
- Ensure the timely delivery of analytics solutions, balancing quality, scope, and resource constraints.

Client Engagement & Stakeholder Management
- Serve as the primary point of contact for clients, understanding their business challenges and translating them into data science solutions.
- Lead client presentations, workshops, and discussions to communicate complex analytical concepts in an accessible manner.
- Develop and maintain strong relationships with key client stakeholders, ensuring satisfaction and identifying opportunities for further collaboration.
- Manage client expectations, timelines, and deliverables, ensuring alignment with business objectives.
- Develop and deliver regular reports and dashboards to senior management, market stakeholders, and clients, highlighting key insights and performance metrics.
- Act as a liaison between technical teams and business units to align analytics initiatives with organizational goals.

Cross-Functional Collaboration
- Work closely with cross-capability teams such as Business Intelligence, Market Analytics, and Data Engineering to integrate analytics solutions into business processes.
- Translate complex data insights into actionable recommendations for non-technical stakeholders.
- Facilitate workshops and presentations to promote data-driven conversations across the organization.
- Work closely with support functions to provide timely updates to leadership on operational metrics.

Governance & Compliance
- Ensure adherence to data governance policies, including data privacy regulations (e.g., GDPR, PDPA).
- Implement best practices for data quality, security, and ethical use of analytics.
- Stay informed about industry trends and regulatory changes impacting data analytics.

Qualifications

Education: Bachelor's or Master's degree in Data Science, Computer Science, Statistics, Mathematics, or a related field.

Experience:
- 12+ years of experience in advanced analytics, data science, data modelling, machine learning, generative AI, or a related field, with 5+ years in a leadership capacity.
- Proven track record of managing and delivering complex analytics projects.
- Familiarity with the BFSI / Hi-Tech / Retail / Healthcare industries and experience with product, transaction, and customer-level data
- Experience with media data will be advantageous

Technical Skills:
- Proficiency in programming languages like Python, R, or SQL.
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Familiarity with big data platforms (e.g., Hadoop, Spark) and cloud services (e.g., AWS, GCP, Azure).
- Knowledge of machine learning frameworks and libraries.

Soft Skills:
- Strong analytical and problem-solving abilities.
- Excellent communication and interpersonal skills.
- Ability to influence and drive change within the organization.
- Strategic thinker with a focus on delivering business outcomes.

Desirable Attributes

Proficient in the following advanced analytics techniques (should have proficiency in most):
- Descriptive Analytics: statistical analysis, data visualization.
- Predictive Analytics: regression analysis, time series forecasting, classification techniques, market mix modelling.
- Prescriptive Analytics: optimization, simulation modelling.
- Text Analytics: natural language processing (NLP), sentiment analysis.

Extensive knowledge of machine learning techniques (should have proficiency in most):
- Supervised Learning: linear regression, logistic regression, decision trees, support vector machines, random forests, gradient boosting machines, among others.
- Unsupervised Learning: k-means clustering, hierarchical clustering, principal component analysis (PCA), anomaly detection, among others.
- Reinforcement Learning: Q-learning, deep Q-networks, etc.

Experience with generative AI and large language models (LLMs) for text generation, summarization, and conversational agents (good to have):
- Researching, loading, and applying the best LLMs (GPT, Gemini, LLaMA, etc.) for various objectives
- Hyperparameter tuning
- Prompt engineering
- Embedding & vectorization
- Fine-tuning

Proficiency in data visualization tools such as Tableau or Power BI (good to have).
Strong skills in data management, structuring, and harmonization to support analytical needs (must have).

Location: Bengaluru
Brand: Merkle
Time Type: Full time
Contract Type: Permanent
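Among the unsupervised techniques this posting lists, k-means clustering can be sketched in a few lines. The "store segment" features below are invented for illustration; they are not client data.

```python
# Illustrative only: k-means on synthetic store-level features
# (weekly footfall, average basket value), scaled before clustering.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
seg_a = rng.normal([1000, 12], [50, 1], size=(30, 2))  # high traffic, small baskets
seg_b = rng.normal([200, 45], [20, 3], size=(30, 2))   # low traffic, large baskets
X = np.vstack([seg_a, seg_b])

X_scaled = StandardScaler().fit_transform(X)  # put features on one scale
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_scaled)
print("segment A labels:", set(labels[:30]), "segment B labels:", set(labels[30:]))
```

Scaling first matters here: without it, the footfall column would dominate the Euclidean distances that k-means minimizes.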
Posted 2 weeks ago
5.0 years
0 Lacs
India
On-site
Company Description
Every day, thousands of patients across India face delays, stress, and reduced quality of care due to cumbersome administrative processes and insurance claim challenges. At Vitraya, we directly confront these issues by using advanced AI to streamline healthcare operations, eliminate administrative friction, and ensure healthcare providers can fully focus on patient outcomes.

Role Description
This is a full-time role for a Senior Fullstack AI/ML Engineer. The selected candidate will be responsible for designing, developing, and implementing AI and machine learning models, with a focus on pattern recognition and neural networks. Daily tasks include coding and software development, data analysis, and improving existing algorithms. The Senior Fullstack AI/ML Engineer will work closely with cross-functional teams to ensure the integration of AI solutions into the claims settlement process. We are looking for a passionate Senior AI/ML Engineer committed to building transformative solutions that positively impact patient care and healthcare delivery.

Responsibilities
- Lead the design, development, and deployment of machine learning models for various use cases such as recommendation systems, computer vision, natural language processing (NLP), and predictive analytics.
- Work with large datasets to build, train, and optimize models using techniques such as classification, regression, clustering, and neural networks.
- Fine-tune pre-trained models and develop custom models based on specific business needs.
- Collaborate with data engineers to build scalable data pipelines and ensure the smooth integration of models into production.
- Collaborate with frontend/backend engineers to build AI-driven features into products or platforms.
- Build proof-of-concept or production-grade AI applications and tools with intuitive UIs or workflows.
- Ensure scalability and performance of deployed AI solutions within the full application stack.
- Implement model monitoring and maintenance strategies to ensure performance, accuracy, and continuous improvement of deployed models.
- Design and implement APIs or services that expose machine learning models to frontend or other systems.
- Utilize cloud platforms (AWS, GCP, Azure) to deploy, manage, and scale AI/ML solutions.
- Stay up to date with the latest advancements in AI/ML research, and apply innovative techniques to improve existing systems.
- Communicate effectively with stakeholders to understand business requirements and translate them into AI/ML-driven solutions.
- Document processes, methodologies, and results for future reference and reproducibility.

Why Join Us:
- Direct Patient Impact: Your contributions will significantly enhance patient care quality and reduce administrative burdens on healthcare providers.
- Innovation with Real Outcomes: Be part of pioneering AI applications that provide tangible, measurable improvements in healthcare.
- Professional Growth: Tackle unique, meaningful challenges that will sharpen your skills and offer growth in a supportive, collaborative environment.
- Empowered Team: Enjoy autonomy, flexibility, and the chance to influence core product decisions as part of a dedicated, mission-driven team.

Required Skills & Qualifications
• Experience: 5+ years of experience in AI/ML engineering roles, with a proven track record of successfully delivering machine learning projects.
• AI/ML Expertise: Strong knowledge of machine learning algorithms (supervised, unsupervised, reinforcement learning) and AI techniques, including NLP, computer vision, and recommendation systems.
• Programming Languages: Proficient in Python and relevant ML libraries such as TensorFlow, PyTorch, Scikit-learn, and Keras.
• Data Manipulation: Experience with data manipulation libraries such as Pandas and NumPy, and SQL for managing and processing large datasets.
• Model Development: Expertise in building, training, deploying, and fine-tuning machine learning models in production environments.
• Cloud Platforms: Experience with cloud platforms such as AWS, GCP, or Azure for the deployment and scaling of AI/ML models.
• MLOps: Knowledge of MLOps practices for model versioning, automation, and monitoring.
• Data Preprocessing: Proficient in data cleaning, feature engineering, and preparing datasets for model training.

Nice to Have
- Experience with deep learning architectures (CNNs, RNNs, GANs, etc.) and techniques.
- Knowledge of deployment strategies for AI models using APIs, Docker, or Kubernetes.
- Experience building full-stack applications powered by AI (e.g., chatbots, recommendation dashboards, AI assistants).
- Experience deploying AI/ML models in real-time environments using API gateways, microservices, or orchestration tools like Docker and Kubernetes.
- Solid understanding of statistics and probability.
- Experience working in Agile development environments.
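The responsibilities above include exposing a model through an API. Stripped of the web framework, the core of such a service is a function that turns a JSON-style payload into a prediction. The sketch below is a minimal, hypothetical shape of that contract (the dataset, function name, and payload keys are illustrative, not Vitraya's stack):

```python
# Illustrative only: a predict function of the kind an API endpoint would wrap.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

def predict_endpoint(payload: dict) -> dict:
    """Imitates a JSON request/response cycle: {'features': [...]} -> {'label': int}."""
    features = [payload["features"]]          # wrap one sample as a 2-D batch
    return {"label": int(model.predict(features)[0])}

print(predict_endpoint({"features": [5.1, 3.5, 1.4, 0.2]}))
```

In production this function would sit behind a framework route (FastAPI, Flask, or similar) with input validation and model versioning around it.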
Posted 2 weeks ago
3.0 years
0 Lacs
New Delhi, Delhi, India
On-site
Company Profile
Our client is a global IT services company with offices in India and the United States. It helps businesses with digital transformation, provides IT collaborations, and uses technology, innovation, and enterprise to have a positive impact on the world of business. With expertise in the fields of Data, IoT, AI, Cloud Infrastructure, and SAP, it helps accelerate digital transformation through key practice areas: IT staffing on demand, and innovation and growth driven by a focus on cost and problem solving.

Location & Work: New Delhi (On-Site), WFO
Employment Type: Full Time
Profile: AI/ML Engineer
Preferred Experience: 3-5 Years

The Role:
We are seeking a highly skilled AI/ML Engineer with strong expertise in traditional statistical modeling using R and end-to-end ML pipeline configuration on Databricks. The ideal candidate will play a key role in designing, developing, and deploying advanced machine learning models, optimizing performance, and ensuring scalability across large datasets on the Databricks platform.

Responsibilities:
- Design and implement traditional ML models using R (e.g., regression, classification, clustering, time series).
- Develop and maintain scalable machine learning pipelines on Databricks.
- Configure and manage Databricks workspaces, clusters, and MLflow integrations for model versioning and deployment.
- Collaborate with data engineers, analysts, and domain experts to collect, clean, and prepare data.
- Optimize models for performance, interpretability, and business impact.
- Automate data workflows and model retraining pipelines using Databricks notebooks and job scheduling.
- Monitor model performance in production and implement enhancements as needed.
- Ensure model explainability, compliance, and reproducibility in production environments.

Must-Have Qualifications
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- Minimum 3 years of experience in machine learning and data science roles.
- Strong proficiency in R for statistical modeling and traditional ML techniques.
- Hands-on experience with Databricks: cluster configuration, workspace management, notebook workflows, and performance tuning.
- Experience with MLflow, Delta Lake, and PySpark (optional but preferred).
- Strong understanding of MLOps practices, model lifecycle management, and CI/CD for ML.
- Familiarity with cloud platforms such as Azure Databricks, AWS, or GCP.

Preferred Qualifications:
- Certification in Databricks or relevant ML/AI platforms is a plus.
- Excellent problem-solving and communication skills.

Application Method
Apply online on this portal or by email at careers@speedmart.co.in
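The "scalable machine learning pipelines" this posting describes follow a common shape: preprocessing and model bundled into one object, then evaluated by cross-validation. The sketch below shows that shape with scikit-learn in Python for brevity; the role itself targets R on Databricks, so treat this purely as an illustration of the pattern, not the required stack.

```python
# Illustrative only: preprocessing + model as one pipeline, scored by 5-fold CV.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
pipe = Pipeline([
    ("scale", StandardScaler()),            # preprocessing travels with the model
    ("clf", LogisticRegression(max_iter=5000)),
])
scores = cross_val_score(pipe, X, y, cv=5)  # each fold refits scaler + model
print(f"mean CV accuracy: {scores.mean():.3f}")
```

Bundling the scaler inside the pipeline prevents leakage: each fold's scaler is fit only on that fold's training split. The same idea carries over to R pipelines and to MLflow-tracked runs on Databricks.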
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Artificial Intelligence & Machine Learning Intern
📍 Location: Remote (100% Virtual)
📅 Duration: 3 Months
💸 Stipend for Top Interns: ₹15,000
🎁 Perks: Certificate | Letter of Recommendation | Full-Time Offer (Based on Performance)

About INLIGHN TECH
INLIGHN TECH is a forward-thinking edtech startup offering project-driven virtual internships that prepare students for today's competitive tech landscape. Our AI & ML Internship is designed to immerse you in real-world applications of machine learning and artificial intelligence, helping you develop job-ready skills through hands-on projects.

🚀 Internship Overview
As an AI & ML Intern, you will explore machine learning algorithms, build predictive models, and work on projects that mimic real-world use cases, ranging from recommendation systems to AI-based automation tools. You'll gain experience with Python, Scikit-learn, TensorFlow, and more.

🔧 Key Responsibilities
- Work on datasets to clean, preprocess, and prepare them for model training
- Implement machine learning algorithms (regression, classification, clustering, etc.)
- Build and test models using Scikit-learn, TensorFlow, Keras, or PyTorch
- Analyze model performance and optimize it using evaluation metrics
- Collaborate with mentors to develop AI solutions for business or academic use cases
- Present findings and document all steps of the model-building process

✅ Qualifications
- Pursuing or recently completed a degree in Computer Science, Data Science, AI/ML, or related fields
- Proficient in Python and familiar with data science libraries (NumPy, Pandas, Matplotlib)
- Basic understanding of machine learning concepts and algorithms
- Experience with tools like Jupyter Notebook, Google Colab, or similar platforms
- Strong analytical mindset and interest in solving real-world problems using AI
- Enthusiastic about learning and exploring new technologies

🎓 What You'll Gain
- Hands-on experience with real-world AI and ML projects
- Exposure to end-to-end model development workflows
- A strong project portfolio to showcase your skills
- Internship Certificate upon successful completion
- Letter of Recommendation for high-performing interns
- Opportunity for a Full-Time Offer based on performance
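The "build and test models ... using evaluation metrics" loop described above can be sketched in a few lines of scikit-learn. The dataset here is synthetic and the hyperparameters are arbitrary starter choices, not INLIGHN TECH material:

```python
# Illustrative only: train/test split, fit, and held-out evaluation.
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))  # evaluate on unseen data only
print(f"held-out accuracy: {acc:.2f}")
```

The key habit for an intern is the one shown in the comment: the metric is computed on data the model never saw during fitting.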
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Data Science Intern
Company: INLIGHN TECH
Location: Remote (100% Virtual)
Duration: 3 Months
Stipend for Top Interns: ₹15,000
Certificate Provided | Letter of Recommendation | Full-Time Offer Based on Performance

About the Company:
INLIGHN TECH empowers students and fresh graduates with real-world experience through hands-on, project-driven internships. The Data Science Internship is designed to equip you with the skills required to extract insights, build predictive models, and solve complex problems using data.

Role Overview:
As a Data Science Intern, you will work on real-world datasets to develop machine learning models, perform data wrangling, and generate actionable insights. This internship will help you strengthen your technical foundation in data science while working on projects that have a tangible business impact.

Key Responsibilities:
- Collect, clean, and preprocess data from various sources
- Apply statistical methods and machine learning techniques to extract insights
- Build and evaluate predictive models for classification, regression, or clustering tasks
- Visualize data using libraries like Matplotlib and Seaborn, or tools like Power BI
- Document findings and present results to stakeholders in a clear and concise manner
- Collaborate with team members on data-driven projects and innovations

Qualifications:
- Pursuing or recently completed a degree in Data Science, Computer Science, Mathematics, or a related field
- Proficiency in Python and data science libraries (NumPy, Pandas, Scikit-learn, etc.)
- Understanding of statistical analysis and machine learning algorithms
- Familiarity with SQL and data visualization tools or libraries
- Strong analytical, problem-solving, and critical thinking skills
- Eagerness to learn and apply data science techniques to solve real-world problems

Internship Benefits:
- Hands-on experience with real datasets and end-to-end data science projects
- Certificate of Internship upon successful completion
- Letter of Recommendation for top performers
- A strong portfolio of data science projects and models
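The "collect, clean, and preprocess" step this internship describes usually starts with handling missing values and aggregating. A tiny pandas sketch on made-up rows (the cities and figures are invented for illustration):

```python
# Illustrative only: drop unusable rows, impute missing values, aggregate.
import pandas as pd

raw = pd.DataFrame({
    "city": ["Delhi", "Pune", "Delhi", None],
    "sales": [120.0, None, 95.0, 60.0],
})
clean = raw.dropna(subset=["city"]).copy()            # no city -> row unusable
clean["sales"] = clean["sales"].fillna(clean["sales"].mean())  # mean imputation
summary = clean.groupby("city", as_index=False)["sales"].mean()
print(summary)
```

Mean imputation is only one of several reasonable choices here; median imputation or dropping the row are equally common, and the right call depends on the dataset.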
Posted 2 weeks ago
7.5 years
0 Lacs
Pune, Maharashtra, India
On-site
Project Role: Application Tech Support Practitioner
Project Role Description: Act as the ongoing interface between the client and the system or application. Dedicated to quality, using exceptional communication skills to keep our world-class systems running. Can accurately define a client issue and can interpret and design a resolution based on deep product knowledge.
Must-Have Skills: Splunk Administration
Good-to-Have Skills: No Technology Specialization
Minimum Experience Required: 7.5 years
Educational Qualification: 15 years of full-time education

Job Requirements:

Key Responsibilities
a: Standardized Splunk agent/tool deployment, configuration, and maintenance across a variety of UNIX and Windows platforms
b: Experience with Splunk Searching and Reporting, Knowledge Objects administration, Clustering, and Forwarder Management
c: Support Splunk/tools on Unix, Linux, and Windows-based platforms

Technical Experience
a: At least 5 years of experience in IT, with a minimum of 3 years in Splunk/tools implementation
b: Skills in technical areas which support the deployment and integration of Splunk-based solutions, Splunk Apps and Add-ons for monitoring and data integrations, including Infrastructure, Network, OS, DB, Middleware, Storage, Virtualization, Cloud Architectures, etc.
c: Good to have knowledge of JavaScript, Python, and shell-scripting-based development

Professional Attributes
a: Excellent customer-facing skills
b: Experience working with a global team
c: Strong analytical and problem-solving skills
d: Good verbal and written communication skills

Additional Info: Splunk architect overview knowledge and Splunk app build knowledge
Posted 2 weeks ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Management Level: Ind & Func AI Decision Science Analyst – Level 11
Location: Gurgaon / Bangalore / Mumbai / Hyderabad

Must-have skills: Marketing Analytics, Data-Driven Merchandizing (Pricing/Promotions/Assortment Optimization), Statistical Time-Series Models, Store Clustering Algorithms, Descriptive Analytics, State Space Modeling, Mixed-Effect Regression, NLP Techniques, Large Language Models, Azure ML Tech Stack, SQL, R, Python, AI/ML Model Development, Cloud Platform Experience (Azure/AWS/GCP), Data Pipelines, Client Management, Insights Communication

Good-to-have skills: Non-linear Optimization, Resource Optimization, Cloud Capability Migration, Scalable Machine Learning Architecture Design Patterns, Econometric Modeling, AI Capability Building, Industry Knowledge (CPG, Retail)

Job Summary
As part of our Data & AI practice, you will join a worldwide network of smart and driven colleagues experienced in leading statistical tools, methods, and applications. From data to analytics and insights to actions, our forward-thinking consultants provide analytically informed, issue-based insights at scale to help our clients improve outcomes and achieve high performance.

Roles & Responsibilities
- Work through all phases of the project.
- Define data requirements for the Data-Driven Growth Analytics capability.
- Clean, aggregate, analyze, and interpret data, and carry out data quality analysis.
- Apply knowledge of market sizing and lift-ratio estimation.
- Work with non-linear optimization techniques.
- Apply statistical time-series models, store clustering algorithms, and descriptive analytics to support the merch AI capability.
- Build state space models and mixed-effect regressions hands-on.
- Develop AI/ML models in the Azure ML tech stack.
- Develop and manage data pipelines.
- Stay aware of common design patterns for scalable machine learning architectures, as well as tools for deploying and maintaining machine learning models in production.
- Apply knowledge of cloud platforms for pipelining, deploying, and scaling elasticity models.
- Apply working knowledge of resource optimization.
- Apply working knowledge of NLP techniques and large language models.
- Manage client relationships and expectations, and communicate insights and recommendations effectively.
- Contribute to capability building and thought leadership.
- Logical Thinking: able to think analytically and use a systematic and logical approach to analyze data, problems, and situations; notices discrepancies and inconsistencies in information and materials.
- Task Management: advanced level of task management knowledge and experience; able to plan own tasks, discuss and work on priorities, and track and report progress.

Professional & Technical Skills
- Must have at least 2+ years of work experience in Retail/CPG marketing analytics with a reputed organization.
- Must have knowledge of SQL, R, and Python, and at least one cloud-based technology (Azure, AWS, GCP).
- Must have knowledge of building price/discount elasticity models and conducting non-linear optimization.
- Must have good knowledge of NLP models and large language models, and their applicability to industry data.
- Must have AI capability migration experience from one cloud platform to another.
- Manage documentation of data models, architecture, and maintenance processes.

Additional Information
- Bachelor's/Master's degree in Statistics, Economics, Mathematics, Computer Science, or related disciplines with an excellent academic record
- Knowledge of the CPG and Retail industries
- Proficient in Excel, MS Word, PowerPoint, etc.
- Strong client communication skills

About Our Company | Accenture
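The price-elasticity modeling this posting requires is often estimated with a log-log regression, where the slope on log price is the elasticity. A hedged sketch on synthetic data with a known elasticity of -1.5 (the data-generating process is invented; real merchandising data needs controls for promotion, seasonality, and store effects):

```python
# Illustrative only: recover a constant price elasticity from log-log regression.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
price = rng.uniform(1.0, 10.0, size=500)
# Demand with constant elasticity -1.5 plus multiplicative noise:
# demand = A * price^(-1.5) * exp(noise)
demand = 100.0 * price ** -1.5 * np.exp(rng.normal(0, 0.05, size=500))

# log(demand) = log(A) + elasticity * log(price), so the fitted slope
# is the elasticity estimate.
model = LinearRegression().fit(np.log(price).reshape(-1, 1), np.log(demand))
print(f"estimated elasticity: {model.coef_[0]:.2f}")
```

An elasticity below -1 (as here) means revenue falls when price rises, which is exactly the quantity a non-linear price-optimization step would then consume.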
Posted 2 weeks ago
30.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Client
Our client is a market-leading company with over 30 years of experience in the industry. One of the world's leading professional services firms, with $19.7B in revenue and 333,640 associates worldwide, it helps clients modernize technology, reimagine processes, and transform experiences, enabling them to remain competitive in our fast-paced world. Its specialties include Intelligent Process Automation, Digital Engineering, Industry & Platform Solutions, Internet of Things, Artificial Intelligence, Cloud, Data, Healthcare, Banking, Finance, Fintech, Manufacturing, Retail, Technology, and Salesforce.

We are hiring for the below position.
Job Title: AI/ML Developer
Key Skills: AI/ML, Deep Learning, Machine Learning, Gen AI, Python, Image Processing, Computer Vision
Job Location: Hyderabad
Experience: 5 – 10 Years
Budget: 13 – 15 LPA
Education Qualification: Any Graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate – 15 Days
Interview Mode: 2 Rounds of Technical Interview + Client Round

Job Description
Key Focus Areas:
Image Analytics & Computer Vision (CV)
Machine Learning & Deep Learning
Predictive Analytics & Optimization
Generative AI (GenAI) & NLP (as secondary skills)

Primary Responsibilities:
Lead and contribute to projects centered around image analytics, computer vision, and visual data processing.
Develop and deploy CV models for tasks such as object detection, image classification, pattern recognition, and anomaly detection.
Apply deep learning frameworks (e.g., TensorFlow, Keras) to solve complex visual data challenges.
Integrate multi-sensor data fusion and multivariate analysis for industrial applications.
Collaborate with cross-functional teams to implement predictive maintenance, fault detection, and process monitoring solutions using visual and sensor data.

Mandatory Skills:
Strong hands-on experience in Computer Vision and Image Analytics.
Proficiency in Python and familiarity with AI/ML libraries such as OpenCV, TensorFlow, Keras, scikit-learn, and Matplotlib.
Solid understanding of machine learning techniques: classification, regression, clustering, anomaly detection, etc.
Experience with deep learning architectures (CNNs, autoencoders, etc.) for image-based applications.
Familiarity with Generative AI and LLMs is a plus.

Desirable Skills:
Knowledge of optimization techniques and simulation modeling.
Domain experience in Oil & Gas, Desalination, Motors & Pumps, or Industrial Systems.

Educational & Professional Background:
Bachelor's or Master's degree in Engineering (Mechanical, Electrical, Electronics, or Chemical preferred).
Master's in Industrial/Manufacturing/Production Engineering is a strong plus.
Demonstrated experience in solving real-world industrial problems using data-driven approaches.

Soft Skills & Attributes:
Strong analytical and problem-solving skills.
Ability to work independently and manage multiple projects.
Excellent communication and stakeholder engagement skills.
Proven thought leadership and innovation in AI/ML applications.

Interested candidates, please share your CV to sushma.n@people-prime.com
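As a toy illustration of the filtering operation underlying the image-analytics work above, the sketch below applies a 3x3 Sobel x-kernel to a tiny grayscale image in plain Python (correlation-style, without kernel flipping, as deep-learning libraries do). The image values are illustrative; real projects would use OpenCV or TensorFlow:

```python
def filter2d(image, kernel):
    """Valid-mode 2D correlation of a grayscale image (list of rows) with a kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = sum(image[i + u][j + v] * kernel[u][v]
                      for u in range(kh) for v in range(kw))
            row.append(acc)
        out.append(row)
    return out

# Sobel x-kernel responds strongly to vertical edges.
sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
image = [[0, 0, 255, 255]] * 4  # dark on the left, bright on the right
print(filter2d(image, sobel_x))  # → [[1020, 1020], [1020, 1020]]
```

The uniformly large responses mark the vertical dark-to-bright edge; CNNs learn stacks of such kernels rather than hand-coding them.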
Posted 2 weeks ago
10.0 - 15.0 years
0 Lacs
India
On-site
About Us:
At Articul8 AI, we relentlessly pursue excellence and create exceptional GenAI products that exceed customer expectations. We are a team of dedicated individuals who take pride in our work and strive for greatness in every aspect of our business. We believe in using our advantages to make a positive impact on the world and inspiring others to do the same.

Job Description:
Articul8 AI is seeking a Data Scientist to design, develop, and deploy AI-driven solutions that solve real-world problems at scale. You will work on machine learning models, large language models (LLMs), and AI applications while optimizing performance for production environments. This role requires expertise in AI/ML frameworks, cloud platforms, and software engineering best practices. You will develop and deploy advanced deep learning and generative AI models and algorithms to enhance existing products or to create new products that fulfill critical business needs. In this role, you will work closely with Product Management and Engineering teams to build GenAI products at scale. You will be responsible for transforming business needs into technical requirements and for leveraging state-of-the-art research to develop and deliver products. You will also support Engineering with testing and validation of the product.

Responsibilities
Design, develop, and deploy AI-driven solutions in production that solve real-world problems at scale.
Train, fine-tune, and optimize deep learning and LLM-based solutions to enhance existing products or to create new products.
Evaluate and implement state-of-the-art AI/ML algorithms to improve model accuracy and efficiency.
Optimize models to ensure low latency and high availability in cloud and on-prem environments.
Collaborate closely with engineering teams and product management to build GenAI products at scale.
Work with large-scale datasets, ensuring data quality, preprocessing, and feature engineering.
Develop APIs and microservices to serve AI models in production at scale.
Stay up to date with the latest AI trends, research, and best practices.
Ensure ethical AI practices, data privacy, and security compliance.

Required Qualifications
Master's degree in Science, Technology, Engineering, and Mathematics (STEM) or Statistics with 10 to 15 years of experience.
In-depth knowledge of and experience with algorithms for time-series analysis, including data pre-processing, pattern recognition, clustering, modeling, and anomaly detection.
Strong expertise in deep learning, machine learning, and generative AI models (including language, vision, audio, and multi-modal models).
Exposure to one or more of the following domains: optimization, stochastic processes, estimation theory.
Experience deploying deep learning models on multiple GPUs.
Experience developing models and algorithms using ML frameworks like PyTorch and TensorFlow.
Strong programming skills in one or more of the following languages: Python, Golang.
Experience building Docker images to create scalable, efficient, and portable applications.
Experience with Kubernetes for container orchestration, including writing YAML manifests to define how applications and services should be deployed.
Knowledge of at least one cloud platform (AWS, Azure, GCP) and its services for deploying applications.
Strong verbal and written communication skills.

Preferred Qualifications
Ph.D. in Science, Technology, Engineering, and Mathematics (STEM) or Statistics with 6 to 8 years of experience.
Deep expertise and experience in training and fine-tuning large language models on large GPU clusters.
Experience in parallel programming, including data, model, and tensor parallelism with PyTorch and TensorFlow.
Deep experience building and scaling machine learning, deep learning, or GenAI applications with Docker and Kubernetes.
Strong working experience with at least two cloud service providers (AWS, Azure, GCP).
Knowledge of CI/CD pipelines and MLOps tools like MLflow, Kubeflow, or TensorBoard.
Deep expertise and experience in one or more domains such as finance, healthcare, or engineering.
Ability to transform business needs into technical requirements and to define tasks, metrics, and milestones.
Ability to communicate technological challenges and achievements to various stakeholders.

What We Offer:
By joining our team, you become part of a community that embraces diversity, inclusiveness, and lifelong learning. We nurture curiosity and creativity, encouraging exploration beyond conventional wisdom. Through mentorship, knowledge exchange, and constructive feedback, we cultivate an environment that supports both personal and professional development. If you're ready to join a team that's changing the game, apply now to become a part of the Articul8 team. Join us on this adventure and help shape the future of Generative AI in the enterprise.
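The time-series anomaly detection called for in the qualifications above can be illustrated with a rolling z-score detector in plain Python; the window size, threshold, and signal are illustrative choices, not values from the posting:

```python
def rolling_zscore_anomalies(series, window=5, threshold=3.0):
    """Flag indices whose value lies more than `threshold` standard
    deviations from the mean of the preceding `window` points."""
    anomalies = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mean = sum(hist) / window
        std = (sum((x - mean) ** 2 for x in hist) / window) ** 0.5
        if std > 0 and abs(series[i] - mean) / std > threshold:
            anomalies.append(i)
    return anomalies

# A flat-ish signal with one injected spike at index 8.
signal = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.1, 10.0, 25.0, 10.1]
print(rolling_zscore_anomalies(signal))  # → [8]
```

Production systems typically replace the rolling z-score with seasonal decomposition or learned models, but the score-then-threshold structure stays the same.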
Posted 2 weeks ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Skills Required
Basic understanding of infrastructure architecture, industry standards, and best-practice methodologies
Highly motivated, with the ability to lead and influence
Experience in an infrastructure-related role, with proven success in large-scale projects
Strong technical design skills
Ability to effectively communicate, coordinate, and collaborate
Successful track record of effective vendor management
Knowledge of emerging technologies and the vendor landscape; ability to balance cost against benefits
Understanding of business drivers and all stakeholders' requirements
Understanding of documentation and frameworks
Experience in capacity and performance management; understanding of application lifecycle management
Hands-on experience and expertise with specific infrastructure technologies relevant to organizational needs, including operating systems software, virtualization, and automation (on multiple platforms)
Creative thinking and an innovative approach to solution design, implementation, and problem solving
Set and manage stakeholder expectations
Recommend solutions per RFP requirements or change orders from existing clients
Manage vendor relationships from a technical-matters perspective
Mediate between infrastructure and delivery/development groups
Establish and vet key vendor relationships
Assess emerging technologies from key OEMs
Guide the sales team on price-versus-performance issues

Experience
8–15 years of overall experience in the field of IT infrastructure is essential.
Must have at least 3 years of experience in designing and implementing products or solutions in any one of the domains below:
Enterprise Servers & Storage (cloud computing, virtualization, consolidation, data center, business continuity, backup, enterprise server/storage/tape technologies, clustering/high availability, etc.)
Enterprise Networking and Security (routing and switching protocols, network architecture, connectivity options, network management, remote access, data security, standards and compliance, identity management, log management, etc.)
Datacenter (Tier 3/3+ DC build including power & cooling, access control, building management systems, etc.)

Certifications
Any of the industry-standard IT infrastructure certifications such as RHCE, MCSE, MCTS, CCNA, CCIE, VCP, or CISSP is essential. PMP or ITIL certification will be an advantage.

Job Description
As part of the Solution Architecting Team (SAT), the IT Infrastructure Architect will be responsible for the design and delivery of end-to-end IT infrastructure solutions for clients across businesses. This will include:
Design of IT infrastructure solutions – developing technology strategy with logical and physical designs to meet client requirements, using standard architecture methodologies
Handling multiple infrastructure technologies based on project requirements
Preparation of bills of material and technical write-ups for solutions developed
Documentation of architecture designs to various levels of detail
Working with CxO-level executives to capture client technical requirements and articulate the solution, including detailed briefings with presentations for larger client audiences
Working as an individual contributor
Ensuring delivery of the infrastructure solutions designed, per scope and project timelines, through the right set of internal/external partners.
Posted 2 weeks ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Dreaming big is in our DNA. It’s who we are as a company. It’s our culture. It’s our heritage. And more than ever, it’s our future. A future where we’re always looking forward. Always serving up new ways to meet life’s moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together – when we combine your strengths with ours – is unstoppable. Are you ready to join a team that dreams as big as you do?

AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics. Do You Dream Big? We Need You.

Job Description
Job Title: Analyst – Junior Data Scientist
Location: Bangalore
Reporting to: Manager Analytics

Key Tasks & Accountabilities
Apply coding skills to develop ML and time-series models.
Focus on developing end-to-end analytical solutions and present them to business partners and stakeholders.
Bring a consultative mindset to produce solutions that can be easily understood and interpreted by business users.
Research newer technologies and algorithms to provide better solutions to current problems.
Interact with and manage the expectations of multiple stakeholders, building compelling narratives and helping drive value for businesses by helping them strengthen brands on the ground.

Qualifications, Experience & Skills
Level of Educational Attainment Required
Degree in business analytics, data science, statistics, or economics, and/or a degree in Engineering, Mathematics, or Computer Science

Previous Work Experience
Bachelor’s degree with a minimum of 2 years of data science experience
Hands-on experience implementing various machine learning algorithms (e.g., SVM, Random Forests, Gradient Boosting, log-log regression, XGBoost, Lasso, Ridge, clustering techniques) and time-series algorithms (e.g., ARIMA, ARIMAX, UCM, Holt-Winters, and others)
Strong programming skills with working knowledge of SQL and Python
Excellent problem-solving and analytical mindset
Strong communication and interpersonal skills
Strong storyboarding skills
And above all of this, an undying love for beer!

We dream big to create a future with more cheers.
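Of the time-series methods listed above, simple exponential smoothing is the easiest to sketch in plain Python; Holt-Winters adds trend and seasonal components on top of this same recursion. The alpha value and sales series are illustrative:

```python
def exponential_smoothing(series, alpha=0.5):
    """Simple exponential smoothing: s[t] = alpha*y[t] + (1-alpha)*s[t-1].
    Returns the smoothed series; its last value is the one-step-ahead forecast."""
    smoothed = [series[0]]
    for y in series[1:]:
        smoothed.append(alpha * y + (1 - alpha) * smoothed[-1])
    return smoothed

sales = [100, 120, 110, 130, 125]
forecast = exponential_smoothing(sales, alpha=0.5)[-1]
print(forecast)  # → 122.5
```

In practice one would use statsmodels' ExponentialSmoothing, which also fits alpha (and trend/seasonal parameters) by maximum likelihood rather than fixing it by hand.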
Posted 2 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Join us as an Infrastructure Analyst
Hone your analytical skills as you provide support to ensure the operational health of the platform, covering all aspects of service, risk, cost and people
You’ll be supporting the platform’s operational stability and technology performance, including maintaining any system utilities and tools provided by the platform
This is an opportunity to learn a variety of new skills in a constantly evolving environment, working closely with feature teams to continuously enhance your development
We're offering this role at associate level

What you'll do
As an Infrastructure Analyst, you’ll be providing input to and supporting the team’s activities to make sure that platform integrity is maintained in line with technical roadmaps, while supporting change demands from domains or centres of excellence. You’ll be supporting the delivery of a robust production management service for the relevant infrastructure platforms. In addition, you’ll be contributing to the delivery of customer outcomes, innovation and early learning by helping test products and services to identify early on whether they are viable and deliver the desired outcomes.

Your Role Will Involve
Contributing to the platform risk culture, making sure that risks are discussed and understood at every step, and effectively collaborating to mitigate risk
Contributing to the planning and execution of work within the platform and the timely servicing of feature development requests from cross-platform initiatives, and supporting the delivery of regulatory reporting
Participating in and seeking out opportunities to simplify the platform infrastructure, architecture, services and customer solutions, guarding against introducing new complexity
Building relationships with platform, domain and relevant cross-domain stakeholders
Making sure that controls are applied and constantly reviewed, primarily against SOX, to ensure full compliance with all our policies and regulatory obligations

The skills you'll need
To succeed in this role, you’ll need to be a very capable communicator with the ability to communicate complex technical concepts clearly to colleagues, including at management level. You’ll need a solid background working in an Agile or DevOps environment with continuous delivery and continuous integration.

We’ll Also Look To You To Demonstrate
Knowledge of Veritas or Red Hat clustering, partitioning, virtualization and storage administration, and their integration with operating systems
Experience working on Unix, Linux and Solaris; VERITAS clustering and VxVM implementation
Experience of disaster recovery planning and conducting DR tests
An understanding of SAN and NAS migrations
Understanding of operating system upgrades and patching
Posted 2 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform, Oracle Procedural Language Extensions to SQL (PL/SQL), PySpark
Good-to-have skills: NA
Minimum 3 years of experience is required
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging processes to ensure the applications function as intended, while continuously seeking ways to enhance application efficiency and user experience.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.
- Participate in code reviews to ensure adherence to best practices and coding standards.

Professional & Technical Skills:
- Backend engineer strong in niche backend skills, preferably Databricks, integration and reporting skill sets.
- Microservices architecture and REST patterns using leading industry-recommended security frameworks.
- Cloud and related technologies such as AWS, Google Cloud, Azure.
- Test automation skills using Behavior-Driven Development.
- Data integration (batch, real-time) following Enterprise Integration Patterns.
- Relational databases, NoSQL databases, DynamoDB, and data modeling.
- Database development and tuning (PL/SQL, XQuery).
- Performance (threading, indexing, clustering, caching).
- Document-centric data architecture (XML DB/NoSQL).
Additional Skills: Tableau, Angular, Performance Tuning

Additional Information:
- The candidate should have a minimum of 5 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.
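The caching item in the performance list above can be illustrated with Python's built-in memoization decorator, which skips repeated work for arguments already seen; `expensive_lookup` is a hypothetical stand-in for a slow database or service call:

```python
import functools

calls = {"count": 0}  # tracks how many real lookups were performed

@functools.lru_cache(maxsize=128)
def expensive_lookup(key):
    """Stand-in for a slow backend call; results are cached per key."""
    calls["count"] += 1
    return key.upper()

for _ in range(3):
    expensive_lookup("customer_42")  # only the first call does real work

print(calls["count"])  # → 1
```

The same idea scales up to query-result caches and materialized views: pay the cost once, serve repeats from memory, and bound memory with an eviction policy (here, LRU with `maxsize`).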
Posted 2 weeks ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: MS SQL Developer

Skills Required:
3 years of experience handling SQL Server 2005, 2008 R2, and 2012 UAT & production environments.
Strong knowledge of MS SQL development.
Experience in SQL performance tuning, SQL clustering, and SQL query performance tuning.
Hands-on experience writing and executing stored procedures, functions, and complex T-SQL queries; well-versed in jobs, views, indexes, and query performance tuning.
Good exposure to log shipping, mirroring, and data modeling.

Job Requirements:
Bachelor's degree in Computer Science or Information Technology, or an MCA.
3 years of experience in a relevant role.
Good analytical and problem-solving ability.
Detail-oriented with excellent written and verbal communication skills.
The ability to work independently as well as collaborate with a team.

Experience: 3 Years
Job Location: Pune, India
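Index-driven query tuning of the kind listed above can be sketched with Python's built-in sqlite3 module; SQLite stands in for SQL Server here, and the table and column names are illustrative. The query plan changes from a full table scan to an index search once the index exists:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
                 [(i % 100, i * 1.5) for i in range(1000)])

def plan(sql):
    """Return the query planner's description for a statement."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT amount FROM orders WHERE customer_id = 42"
print(plan(query))  # full table scan: no index on customer_id yet

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
print(plan(query))  # planner now searches via idx_orders_customer
```

SQL Server's equivalent diagnostics are the graphical execution plan and `SET SHOWPLAN_ALL`; the tuning loop is the same: read the plan, spot the scan, add or adjust the index.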
Posted 2 weeks ago
8.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Calling all innovators – find your future at Fiserv.
We’re Fiserv, a global leader in fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day – quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we’re involved. If you want to make an impact on a global scale, come make a difference at Fiserv.

Job Title: Tech Lead, Data Architecture

What does a successful Snowflake Advisor do?
We are seeking a highly skilled and experienced Snowflake Advisor to take ownership of our data warehousing strategy, implementation, maintenance and support. In this role, you will design, develop, and lead the adoption of Snowflake-based solutions to ensure scalable, efficient, and secure data systems that empower our business analytics and decision-making processes. As a Snowflake Advisor, you will collaborate with cross-functional teams, lead data initiatives, and act as the subject matter expert for Snowflake across the organization.

What You Will Do
Define and implement best practices for data modelling, schema design, and query optimization in Snowflake
Develop and manage ETL/ELT workflows to ingest, transform and load data into Snowflake from various sources
Integrate data from diverse systems like databases, APIs, flat files and cloud storage into Snowflake, using tools like StreamSets, Informatica or dbt to streamline data transformation processes
Monitor and tune Snowflake performance, including warehouse sizing, query optimization and storage management
Manage Snowflake caching, clustering and partitioning to improve efficiency
Analyze and resolve query performance bottlenecks
Monitor and resolve data quality issues within the warehouse
Collaborate with data analysts, data engineers and business users to understand reporting and analytics needs
Work closely with the DevOps team on automation, deployment and monitoring
Plan and execute strategies for scaling Snowflake environments as data volume grows
Monitor system health and proactively identify and resolve issues
Implement automation for regular tasks
Enable seamless integration of Snowflake with BI tools like Power BI, and create dashboards
Support ad hoc query requests while maintaining system performance
Create and maintain documentation related to data warehouse architecture, data flow, and processes
Provide technical support, troubleshooting, and guidance to users accessing the data warehouse
Optimize Snowflake queries and manage performance
Keep up to date with emerging trends and technologies in data warehousing and data management
Bring good working knowledge of the Linux operating system
Bring working experience with Git and other repository management solutions
Bring good knowledge of monitoring tools like Dynatrace and Splunk
Serve as a technical leader for Snowflake-based projects, ensuring alignment with business goals and timelines
Provide mentorship and guidance to team members in Snowflake implementation, performance tuning and data management
Collaborate with stakeholders to define and prioritize data warehousing initiatives and roadmaps
Act as the point of contact for Snowflake-related queries, issues and initiatives

What You Will Need To Have
8 to 10 years of experience with data management tools like Snowflake, StreamSets and Informatica
Experience with monitoring tools like Dynatrace and Splunk
Experience with Kubernetes cluster management, CloudWatch for monitoring and logging, and the Linux OS
Ability to track progress against assigned tasks, report status, and proactively identify issues
Ability to present information effectively in communications with peers and the project management team
Highly organized; works well in a fast-paced, fluid and dynamic environment

What Would Be Great To Have
Experience with EKS for managing Kubernetes clusters
Containerization technologies such as Docker and Podman
AWS CLI for command-line interactions
CI/CD pipelines using Harness
S3 for storage solutions and IAM for access management
Banking and financial services experience
Knowledge of software development life cycle best practices

Thank you for considering employment with Fiserv. Please:
Apply using your legal name
Complete the step-by-step profile and attach your resume (either is acceptable, both are preferable)

Our Commitment To Diversity And Inclusion
Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law.

Note To Agencies
Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions.

Warning About Fake Job Posts
Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.
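The clustering and partitioning work described above comes down to pruning: keeping data physically ordered by a key and recording per-partition min/max metadata so a query skips partitions that cannot match. A plain-Python sketch of the idea (the fixed partition size and sample rows are illustrative, not Snowflake's micro-partition internals):

```python
def build_partitions(rows, key, partition_size=3):
    """Sort rows by the clustering key, split into fixed-size partitions,
    and record each partition's min/max key for pruning."""
    rows = sorted(rows, key=lambda r: r[key])
    parts = []
    for i in range(0, len(rows), partition_size):
        chunk = rows[i:i + partition_size]
        keys = [r[key] for r in chunk]
        parts.append({"min": min(keys), "max": max(keys), "rows": chunk})
    return parts

def lookup(parts, key, value):
    """Scan only partitions whose [min, max] range could contain the value."""
    scanned, hits = 0, []
    for p in parts:
        if p["min"] <= value <= p["max"]:
            scanned += 1
            hits.extend(r for r in p["rows"] if r[key] == value)
    return hits, scanned

rows = [{"day": d, "amount": d * 10} for d in range(1, 10)]
parts = build_partitions(rows, "day")
hits, scanned = lookup(parts, "day", 5)
print(len(hits), scanned, len(parts))  # → 1 1 3
```

Only 1 of 3 partitions is touched; a poorly clustered table would have overlapping min/max ranges and force every partition to be scanned, which is exactly what re-clustering fixes.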
Posted 2 weeks ago
5.0 years
0 Lacs
Delhi, India
On-site
JOB_POSTING-3-71264-3

Job Description
Role Title: AVP, Reliability Engineer, EIS (L10)

Company Overview
Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #2 among India’s Best Companies to Work For by Great Place to Work. We were among the Top 50 India’s Best Workplaces in Building a Culture of Innovation by All by GPTW, and Top 25 among Best Workplaces in BFSI by GPTW. We have also been recognized by the AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and among Top-Rated Financial Services Companies. Synchrony celebrates ~51% women diversity, 105+ people with disabilities, and ~50 veterans and veteran family members. We offer flexibility and choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being. We provide career advancement and upskilling opportunities, focusing on advancing diverse talent into leadership roles.

Organizational Overview
The Enterprise Integration Services team plays a pivotal role in connecting different systems and applications within the organization. This team specializes in designing, implementing, and maintaining integration solutions that enhance business functionality. Synchrony Middleware is a critical application supplying data to different backend and front-end systems and Synchrony applications.

Role Summary/Purpose
The AVP, Reliability Engineer – Enterprise Integration Services plays a pivotal technical role within Synchrony Financial, providing technical expertise for the EIS applications and their components, which include Java Spring Boot, OpenSSL, ITX, and MQ. Additional responsibilities include leading the development and production support of Synchrony’s EIS services by creating and developing thoughtful solutions to anticipate bugs and maintain operational excellence.

Key Responsibilities
Develop, maintain, and optimize highly reliable software solutions using Java for enterprise applications.
Define and implement strategies to improve system reliability, availability, and performance across the application infrastructure.
Maintain close coordination with developers and solution architects to streamline and expedite deployment practices.
Continuously seek opportunities to enhance products or services through process improvements.
Keenly monitor deployment issues and address them with immediacy, identifying the root causes of failures and developing corrective actions to prevent recurrence.
Serve as a solution engineer to support non-functional requirements in development, deployment, and ongoing tuning, as necessary.
Troubleshoot and resolve technical issues related to the platform; create support tickets and work with IBM as needed; apply and promote patches.
Handle installation, configuration, and administration of server setup and management, as well as infrastructure and environment migrations.
Perform detailed code reviews to ensure quality, performance, and maintainability.
Provide on-call support periodically throughout the year to ensure system reliability and incident response.
Mentor and influence all levels of the team: in this role, you will have the opportunity to influence up and down the chain of command.

Required Skills/Knowledge
Strong experience with Java, Spring Boot, DevOps, and Agile-based development.
Good knowledge of IBM WebSphere / MQ clustering and administration.
Good knowledge of IBM ITX, including Design Studio, setup, and implementation.
Experience deploying IBM ITX/WTX (WebSphere Transformation Extender) and IBM MQ in Kubernetes containers.
Experience with cloud-based environments (AWS, GCP, or Azure) and associated container management tools.

Desired Skills/Knowledge
Working knowledge of containerization platforms such as Docker, and experience with Kubernetes orchestration.
Good knowledge of RESTful design, SOAP APIs, and API specifications like OpenAPI (Swagger).
Strong working knowledge of the financial industry and consumer lending.
Desire to work in a dynamic, fast-paced environment.
Excellent interpersonal skills with the ability to influence clients, team members, management, and external groups.

Eligibility Criteria
Bachelor’s degree and 5+ years of relevant experience in Information Technology, or, in lieu of a degree, 7+ years of relevant experience in Information Technology.

Work Timings: 2:00 PM to 11:00 PM IST
This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time and 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details.

For Internal Applicants
Understand the criteria and mandatory skills required for the role before applying.
Inform your manager and HRM before applying for any role on Workday.
Ensure that your professional profile is updated (fields such as education, prior experience, other skills); it is mandatory to upload your updated resume (Word or PDF format).
Must not be on any corrective action plan (First Formal/Final Formal).
L8+ employees who have completed 18 months in the organization and 12 months in their current role and level are eligible to apply.

Grade/Level: 10
Job Family Group: Information Technology
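Reliability engineering of the kind described above often starts with defensive patterns such as retry with exponential backoff around flaky integration calls. A minimal sketch in Python; the function names, delays, and failure pattern are illustrative, not part of Synchrony's stack:

```python
import time

def retry_with_backoff(fn, retries=3, base_delay=0.01):
    """Call fn(); on failure, sleep base_delay * 2**attempt and retry.
    Re-raises the last exception once retries are exhausted."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

attempts = {"n": 0}

def flaky_call():
    """Hypothetical integration call that fails twice, then succeeds."""
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(retry_with_backoff(flaky_call))  # → ok
```

In production this pattern is usually paired with jitter (to avoid synchronized retry storms) and a circuit breaker that stops retrying a dependency that is hard down.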
Posted 2 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow: people with a unique combination of skill and passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level. Job Summary Job Description: UPS Enterprise Data Analytics team is looking for a talented and motivated Data Scientist to use statistical modelling and state-of-the-art AI tools and techniques to solve complex and large-scale business problems for UPS operations. This role will also support debugging and enhancing existing AI applications in close collaboration with the Machine Learning Operations team. This position will work with multiple stakeholders across different levels of the organization to understand the business problem and develop and help implement robust, scalable solutions. You will be in a high-visibility position with the opportunity to interact with senior leadership to bring forth innovation within the operational space for UPS. Success in this role requires excellent communication to present your cutting-edge solutions to both technical and business leadership. Responsibilities Become a subject matter expert on UPS business processes and data to help define and solve business needs using data, advanced statistical methods, and AI. Be actively involved in understanding and converting business use cases into technical requirements for modelling. 
Query, analyze, and extract insights from large-scale structured and unstructured data from different sources, utilizing platforms, methods, and tools like BigQuery, Google Cloud Storage, etc. Understand and apply appropriate methods for cleaning and transforming data and engineering relevant features to be used for modelling. Actively drive modelling of business problems into ML/AI models, and work closely with stakeholders for model evaluation and acceptance. Work closely with the MLOps team to productionize new models, support enhancements, and resolve any issues within existing production AI applications. Prepare extensive technical documentation, dashboards, and presentations for technical and business stakeholders, including leadership teams. Qualifications Expertise in Python and SQL. Experienced in using data science packages like scikit-learn, numpy, pandas, tensorflow, keras, statsmodels, etc. Strong understanding of statistical concepts and methods (like hypothesis testing, descriptive stats, etc.) and machine learning techniques for regression, classification, and clustering problems, including neural networks and deep learning. Proficient in using GCP tools like Vertex AI, BigQuery, GCS, etc. for model development and other activities in the ML lifecycle. Strong ownership and collaborative qualities in the relevant domain. Takes initiative to identify and drive opportunities for improvement and process streamlining. Solid oral and written communication skills, especially around analytical concepts and methods. Ability to communicate data through a story framework to convey data-driven results to technical and non-technical audiences. Master’s degree in a quantitative field such as mathematics, computer science, physics, economics, engineering, or statistics (operations research, quantitative social science, etc.), an international equivalent, or equivalent job experience. 
Bonus Qualifications NLP, Gen AI, LLM knowledge/experience. Knowledge of Operations Research methodologies and experience with packages like CPLEX, PuLP, etc. Knowledge and experience in MLOps principles and tools in GCP. Experience working in an Agile environment and an understanding of Lean Agile principles. Contract Type: Permanent. At UPS, equal opportunity, fair treatment, and an inclusive work environment are key values to which we are committed.
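As a rough illustration of the statistical toolkit the qualifications above call for (hypothesis testing, regression with scikit-learn, scipy/statsmodels-style analysis), here is a minimal sketch on synthetic data; the delivery-time framing and all numbers are hypothetical, not from the posting:

```python
# Hedged sketch: hypothesis testing (scipy) and regression (scikit-learn)
# on toy data standing in for the operational metrics a role like this
# might analyze. All values are synthetic.
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Two-sample t-test: do two (hypothetical) route samples differ in mean time?
a = rng.normal(30, 5, 200)
b = rng.normal(32, 5, 200)
t_stat, p_value = stats.ttest_ind(a, b)

# Simple regression: predict a toy delivery time from a toy distance feature.
X = rng.uniform(1, 50, size=(300, 1))
y = 2.0 * X[:, 0] + rng.normal(0, 3, 300)
model = LinearRegression().fit(X, y)
print(round(model.coef_[0], 1))  # recovered slope, close to the true 2.0
```

The same pattern scales to real pipelines by swapping the synthetic arrays for data pulled from BigQuery or GCS.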
Posted 2 weeks ago
3.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About the job HERE Technologies is a location data and technology platform company. We empower our customers to achieve better outcomes – from helping a city manage its infrastructure or a business optimize its assets to guiding drivers to their destination safely. At HERE we take it upon ourselves to be the change we wish to see. We create solutions that fuel innovation, provide opportunity and foster inclusion to improve people’s lives. If you are inspired by an open world and driven to create positive change, join us. Learn more about us on our YouTube Channel. In this position you will be part of HERE’s Places Ingestion team, which is responsible for discovering Points of Interest (Places) by processing large volumes of raw data from a variety of sources to improve content coverage, accuracy, and freshness. You will be part of an energetic and dedicated team that works on challenging tasks in distributed processing of large data and streaming technologies. In addition to the technical challenges this position offers, you will have every opportunity to expand your career both technically and personally. What’s the role: You will help design and build the next iteration of processes to improve the quality of Place attributes employing machine learning. You will maintain up-to-date knowledge of research activities in the general fields of machine learning and LLMs. Utilize machine learning algorithms/LLMs to generate translations/transliterations and standardization/derivation rules, and extract place attributes such as name, address, category, and hours of operation from web sites using web scraping solutions. Participate in both algorithm and software development as part of a scrum team, and contribute artifacts (software, white papers, datasets) for project reviews and demos. Collaborate with internal and external team members (researchers and engineers) on expertly implementing new features in the products or enhancing existing features. 
You will own end-to-end aspects like developing, testing, and deploying. Who are you? You are determined and have the following to be successful in the role: an MS or PhD in a discipline such as Statistics, Applied Mathematics, Computer Science, Data Science, or another field with an emphasis or thesis work in one or more of the following areas: statistics/science/engineering, data analysis, machine learning, LLMs. 3+ years of experience in the Data Science field. Proficient with at least one of the deep learning frameworks such as TensorFlow, Keras, or PyTorch. Programming experience with Python and shell scripting. Applied statistics or experimentation (e.g., A/B testing, root cause analysis). Unsupervised machine learning methods (e.g., clustering, Bayesian methods). HERE is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, age, gender identity, sexual orientation, marital status, parental status, religion, sex, national origin, disability, veteran status, and other legally protected characteristics.
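The unsupervised-learning requirement above can be sketched with a small k-means example; the two synthetic point blobs are hypothetical stand-ins for, say, place-attribute embeddings:

```python
# Hedged sketch: k-means clustering of toy 2-D points with scikit-learn,
# illustrating the unsupervised ML methods named in the posting.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Two well-separated synthetic blobs (hypothetical embedding vectors).
pts = np.vstack([
    rng.normal([0, 0], 0.5, size=(100, 2)),
    rng.normal([5, 5], 0.5, size=(100, 2)),
])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pts)
# Each of the 200 points is assigned to one of two clusters.
print(sorted(set(km.labels_)))  # [0, 1]
```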
Posted 2 weeks ago
4.0 - 8.0 years
6 - 10 Lacs
Pune
Work from Office
RabbitMQ Administrator - Prog Leasing. Job Title: RabbitMQ Cluster Migration Engineer. Job Summary: We are seeking an experienced RabbitMQ Cluster Migration Engineer to lead and execute the seamless migration of our existing RabbitMQ infrastructure to a new AWS-based high-availability cluster environment. This role requires deep expertise in RabbitMQ, clustering, messaging architecture, and production-grade migrations with minimal downtime. Key Responsibilities: Design and implement a migration plan to move existing RabbitMQ instances to a new clustered setup. Evaluate the current messaging architecture, performance bottlenecks, and limitations. Configure, deploy, and test RabbitMQ clusters (with or without federation/mirroring as needed). Ensure high availability, fault tolerance, and disaster recovery configurations. Collaborate with development, DevOps, and SRE teams to ensure smooth cutover and rollback plans. Automate setup and configuration using tools such as Ansible, Terraform, or Helm (for Kubernetes). Monitor message queues during migration to ensure message durability and delivery guarantees. Document all aspects of the architecture, configurations, and migration process. Required Qualifications: Strong experience with RabbitMQ, especially in clustered and high-availability environments. Deep understanding of RabbitMQ internals: queues, exchanges, bindings, vhosts, federation, mirrored queues. Experience with RabbitMQ management plugins, monitoring, and performance tuning. Proficiency with scripting languages (e.g., Bash, Python) for automation. Hands-on experience with infrastructure-as-code tools (e.g., Ansible, Terraform, Helm). Familiarity with containerization and orchestration (e.g., Docker, Kubernetes). Strong understanding of messaging patterns and guarantees (at-least-once, exactly-once, etc.). Experience with zero-downtime migration and rollback strategies. Preferred Qualifications: Experience migrating RabbitMQ clusters in production environments. 
Working knowledge of cloud platforms (AWS, Azure, or GCP) and managed RabbitMQ services. Understanding of security in messaging systems (TLS, authentication, access control). Familiarity with alternative messaging systems (Kafka, NATS, ActiveMQ) is a plus.
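The at-least-once delivery guarantee mentioned in the qualifications above implies that consumers can see duplicate messages, especially during a cluster cutover; a standard mitigation is an idempotent consumer that deduplicates by a stable message ID. A minimal, broker-agnostic sketch (the message names and in-memory store are hypothetical; production code would use durable storage and a real client library):

```python
# Hedged sketch: idempotent consumption under at-least-once delivery.
# During a migration, redeliveries are expected; deduplicating by a
# stable message ID keeps processing effectively exactly-once.
processed_ids = set()   # in production this would be durable storage
results = []

def handle(message_id: str, body: str) -> bool:
    """Process a message once; return True only if it was newly processed."""
    if message_id in processed_ids:
        return False    # duplicate redelivery: skip side effects
    processed_ids.add(message_id)
    results.append(body)
    return True

# Simulate an at-least-once stream with one redelivery of msg-1.
deliveries = [("msg-1", "a"), ("msg-2", "b"), ("msg-1", "a")]
outcomes = [handle(mid, body) for mid, body in deliveries]
print(outcomes)  # [True, True, False]
```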
Posted 2 weeks ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Job Description: Senior Data Scientist Role Overview: We are seeking a highly skilled and experienced Senior Data Scientist with a minimum of 4 years of experience in Data Science and Machine Learning, preferably with experience in NLP, Generative AI, LLMs, MLOps, optimization techniques, and AI solution architecture. In this role, you will play a key part in the development and implementation of AI solutions, leveraging your technical expertise. The ideal candidate should have a deep understanding of AI technologies and experience in designing and implementing cutting-edge AI models and systems. Additionally, expertise in data engineering, DevOps, and MLOps practices will be valuable in this role. Responsibilities: Contribute to the design and implementation of state-of-the-art AI solutions. Assist in the development and implementation of AI models and systems, leveraging techniques such as Large Language Models (LLMs) and generative AI. Collaborate with stakeholders to identify business opportunities and define AI project goals. Stay updated with the latest advancements in generative AI techniques, such as LLMs, and evaluate their potential applications in solving enterprise challenges. Utilize generative AI techniques, such as LLMs and agentic frameworks, to develop innovative solutions for enterprise industry use cases. Integrate with relevant APIs and libraries, such as Azure OpenAI GPT models and Hugging Face Transformers, to leverage pre-trained models and enhance generative AI capabilities. 
Implement and optimize end-to-end pipelines for generative AI projects, ensuring seamless data processing and model deployment. Utilize vector databases, such as Redis, and NoSQL databases to efficiently handle large-scale generative AI datasets and outputs. Implement similarity search algorithms and techniques to enable efficient and accurate retrieval of relevant information from generative AI outputs. Collaborate with domain experts, stakeholders, and clients to understand specific business requirements and tailor generative AI solutions accordingly. Conduct research and evaluation of advanced AI techniques, including transfer learning, domain adaptation, and model compression, to enhance performance and efficiency. Establish evaluation metrics and methodologies to assess the quality, coherence, and relevance of generative AI outputs for enterprise industry use cases. Ensure compliance with data privacy, security, and ethical considerations in AI applications. Leverage data engineering skills to curate, clean, and preprocess large-scale datasets for generative AI applications. Requirements: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. A Ph.D. is a plus. Minimum 4 years of experience in Data Science and Machine Learning. In-depth knowledge of machine learning, deep learning, and generative AI techniques. Proficiency in programming languages such as Python, R, and frameworks like TensorFlow or PyTorch. Strong understanding of NLP techniques and frameworks such as BERT, GPT, or Transformer models. Familiarity with computer vision techniques for image recognition, object detection, or image generation. Experience with cloud platforms such as Azure, AWS, or GCP and deploying AI solutions in a cloud environment. Expertise in data engineering, including data curation, cleaning, and preprocessing. Knowledge of trusted AI practices, ensuring fairness, transparency, and accountability in AI models and systems. 
Strong collaboration with software engineering and operations teams to ensure seamless integration and deployment of AI models. Excellent problem-solving and analytical skills, with the ability to translate business requirements into technical solutions. Strong communication and interpersonal skills, with the ability to collaborate effectively with stakeholders at various levels. Understanding of data privacy, security, and ethical considerations in AI applications. Track record of driving innovation and staying updated with the latest AI research and advancements. Good to Have Skills: Apply trusted AI practices to ensure fairness, transparency, and accountability in AI models. Utilize optimization tools and techniques, including MIP (Mixed Integer Programming). Deep knowledge of classical AI/ML (regression, classification, time series, clustering). Drive DevOps and MLOps practices, covering CI/CD and monitoring of AI models. Implement CI/CD pipelines for streamlined model deployment and scaling processes. Utilize tools such as Docker, Kubernetes, and Git to build and manage AI pipelines. Apply infrastructure-as-code (IaC) principles, employing tools like Terraform or CloudFormation. Implement monitoring and logging tools to ensure AI model performance and reliability. Collaborate seamlessly with software engineering and operations teams for efficient AI model integration and deployment. Familiarity with DevOps and MLOps practices, including continuous integration, deployment, and monitoring of AI models. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. 
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
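The similarity-search responsibility in the EY posting above (retrieval over vector representations of generative AI outputs) can be sketched as a brute-force cosine search; the three-dimensional "embeddings" below are hypothetical toys, where a real system would use model-produced vectors and a vector database such as Redis:

```python
# Hedged sketch: brute-force cosine-similarity search over a small
# matrix of hypothetical embedding vectors.
import numpy as np

docs = np.array([
    [1.0, 0.0, 0.0],   # doc 0
    [0.9, 0.1, 0.0],   # doc 1, nearly parallel to doc 0
    [0.0, 1.0, 0.0],   # doc 2, orthogonal to doc 0
])
query = np.array([1.0, 0.0, 0.0])

def normalize(m):
    # Cosine similarity is the dot product of L2-normalised vectors.
    return m / np.linalg.norm(m, axis=-1, keepdims=True)

sims = normalize(docs) @ normalize(query)
best = int(np.argmax(sims))
print(best)  # doc 0 is the closest match
```

Approximate-nearest-neighbour indexes replace this O(n) scan once the corpus grows, but the scoring rule is the same.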
Posted 2 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Introduction In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your Role And Responsibilities As a Data Engineer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You’ll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you’ll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you’ll tackle obstacles related to database integration and untangle complex, unstructured data sets. In This Role, Your Responsibilities May Include Implementing and validating predictive models, as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques. Designing and implementing various enterprise search applications such as Elasticsearch and Splunk to meet client requirements. Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours. Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results. Preferred Education Master's Degree Required Technical And Professional Expertise Expertise in designing and implementing scalable data warehouse solutions on Snowflake, including schema design, performance tuning, and query optimization. 
Strong experience in building data ingestion and transformation pipelines using Talend to process structured and unstructured data from various sources. Proficiency in integrating data from cloud platforms into Snowflake using Talend and native Snowflake capabilities. Hands-on experience with dimensional and relational data modelling techniques to support analytics and reporting requirements. Preferred Technical And Professional Experience Understanding of optimizing Snowflake workloads, including clustering keys, caching strategies, and query profiling. Ability to implement robust data validation, cleansing, and governance frameworks within ETL processes. Proficiency in SQL and/or shell scripting for custom transformations and automation tasks.
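The data-validation and cleansing work described above can be illustrated with a small pandas sketch; the column names and rules are hypothetical, standing in for the checks a Talend/Snowflake pipeline would enforce:

```python
# Hedged sketch: ETL-style validation and cleansing in pandas.
# Rows failing basic checks are dropped; a text column is normalised.
import pandas as pd

raw = pd.DataFrame({
    "customer_id": [1, 2, None, 4],
    "email": ["A@X.COM", "b@y.com", "c@z.com", "not-an-email"],
})

# Validation: require a customer_id and a minimally plausible email.
valid = raw.dropna(subset=["customer_id"])
valid = valid[valid["email"].str.contains("@", regex=False)]

# Cleansing: normalise email casing.
clean = valid.assign(email=valid["email"].str.lower())
print(len(clean))  # 2 rows survive the checks
```

In a real pipeline these rules would live in a shared framework so the same checks run for every source, with rejected rows routed to a quarantine table rather than silently dropped.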
Posted 2 weeks ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Persistent We are an AI-led, platform-driven Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what’s next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative global companies, 60% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our disruptor’s mindset, commitment to client success, and agility to thrive in the dynamic environment have enabled us to sustain our growth momentum by reporting $1,409.1M revenue in FY25, delivering 18.8% Y-o-Y growth. Our 23,900+ global team members, located in 19 countries, have been instrumental in helping the market leaders transform their industries. We are also pleased to share that Persistent won in four categories at the prestigious 2024 ISG Star of Excellence™ Awards, including the Overall Award based on the voice of the customer. We were included in the Dow Jones Sustainability World Index, setting high standards in sustainability and corporate responsibility. We were awarded for our state-of-the-art learning and development initiatives at the 16th TISS LeapVault CLO Awards. In addition, we were cited as the fastest-growing IT services brand in the 2024 Brand Finance India 100 Report. Throughout our market-leading growth, we’ve maintained a strong employee satisfaction score of 8.2/10. At Persistent, we embrace diversity to unlock everyone's potential. Our programs empower our workforce by harnessing varied backgrounds for creative, innovative problem-solving. Our inclusive environment fosters belonging, encouraging employees to unleash their full potential. 
For more details please log in to www.persistent.com. About The Position We are looking for a DevOps Lead Engineer to be responsible for creating software deployment strategies that are essential for the successful deployment of software in the work environment. You will identify and implement data storage methods like clustering to improve the performance of the team. What you'll do Manage a group of highly motivated DevOps engineers and systems administrators. Participate in the agile ceremonies and interface with the agile team(s) and other program staff as required. Work with application teams to help them adopt continuous build, inspection, testing, and deployment. Participate in all aspects of DevOps engineering and promote industry-standard methodologies in DevOps engineering. Migrate code from TFS to Azure DevOps. Help to configure the DevOps stack with regard to performance monitoring, analytics, and auditability. Design and build a new code production pipeline. Develop "idealized" automated CI/CD processes and work with teams to implement those processes in SSGA's DevOps technology stack. Provide deployment and occasional off-hours support. Analyze existing standards to identify gaps and remedies; evaluate gaps related to DevOps best practices. Develop and maintain installation, configuration, and operations procedures. Develop JUnit tests to support code coverage as part of the CI/CD pipeline. Share best practices with a focus on re-use of application code. Work with the development and project/product management organizations to align projects, releases, patches, and other efforts. Implement automation tools and frameworks (CI/CD pipelines). Expertise you'll bring Qualifications: Bachelor's Degree in Computer Science, Computer Engineering, or a closely related field. A Bachelor's degree in Computer Science is preferable, while a Master's degree will carry more weight. Experience: 8+ years working in the related field. 
Additionally, experience in the following: Automating and orchestrating workloads for large-scale enterprise Java applications using Ansible. Working with cloud solutions at massive scale and resiliency. Deploying updates and fixes. Developing scripts to automate visualization. Writing scripts and automation using Perl, Python, Groovy, Java, or Bash, plus general shell scripting. Good to have skills: PostgreSQL, MySQL, NoSQL, and/or Cassandra. Migrating applications to the AWS cloud; AWS certifications. Test-Driven Development. Knowledge of Ruby or Python. Build tools like Ant, Maven, and Gradle, including configuring and adopting them. Scaled Agile Framework (SAFe) practices and tools; certification in Agile delivery (e.g., SAFe Agilist). Benefits Competitive salary and benefits package. Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications. Opportunity to work with cutting-edge technologies. Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards. Annual health check-ups. Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents. Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. Inclusive Environment We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. 
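The scripting-and-automation requirements above (Python/Bash deployment automation) often include a post-deploy health check with retry and exponential backoff; a minimal, endpoint-agnostic Python sketch, where the `fake_check` probe is a hypothetical stand-in for a real service check:

```python
# Hedged sketch: post-deployment health check with exponential backoff,
# the kind of small automation a CI/CD pipeline step might run.
import time

def wait_until_healthy(check, attempts=5, base_delay=0.01):
    """Call `check()` until it returns True, backing off exponentially."""
    for attempt in range(attempts):
        if check():
            return True
        time.sleep(base_delay * (2 ** attempt))  # 0.01, 0.02, 0.04, ...
    return False

# Simulate a service that becomes healthy on the third probe.
state = {"probes": 0}
def fake_check():
    state["probes"] += 1
    return state["probes"] >= 3

ok = wait_until_healthy(fake_check)
print(ok, state["probes"])  # True 3
```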
We are committed to creating an inclusive environment where all employees can thrive. Our company fosters a values-driven and people-centric work environment that enables our employees to: Accelerate growth, both professionally and personally Impact the world in powerful, positive ways, using the latest technologies Enjoy collaborative innovation, with diversity and work-life wellbeing at the core Unlock global opportunities to work and learn with the industry’s best Let’s unleash your full potential at Persistent - persistent.com/careers
Posted 2 weeks ago