
3165 Clustering Jobs

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the employer's job portal.

3.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Summary

Position Summary: AI & Data

In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships in vast troves of data, generate insights, and inform decision-making. The offering portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:
- Implement large-scale data ecosystems, including data management, governance and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms
- Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions
- Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing as-a-service offerings for continuous insights and improvements

AWS Consultant

The position is suited to individuals who can work in a constantly challenging environment and deliver effectively and efficiently. The individual will need to be adaptive and able to react quickly to changing business needs.

Work you’ll do:
- Plan, design and develop cloud-based applications
- Work in tandem with the engineering team to identify and implement the most optimal cloud-based solutions
- Design and deploy enterprise-wide scalable operations on cloud platforms
- Deploy and debug cloud applications in accordance with best practices throughout the development lifecycle
- Provide administration for cloud deployments and ensure the environments are appropriately configured and maintained
- Monitor environment stability and respond to any issues or service requests for the environment
- Educate teams on the implementation of new cloud-based initiatives, providing associated training as required
- Build and design web services in the cloud, and implement the set-up of geographically redundant services
- Orchestrate and automate cloud-based platforms
- Continuously monitor system effectiveness and performance and identify areas for improvement, collaborating with key stakeholders
- Provide guidance and coaching to team members as required; contribute to documenting the cloud operations playbook and provide thought leadership in development automation and CI/CD
- Provide insights for optimization of cloud computing costs

Required:
- 3-6 years of technology consulting experience
- A minimum of 2 years of experience in cloud operations
- High degree of knowledge of AWS services such as Lambda, Glue, S3, Redshift, SNS, SQS and more
- Strong scripting experience with Python, the ability to write SQL queries, and strong analytical skills
- Experience working with CI/CD/DevOps is nice to have
- Proven experience with agile/iterative methodologies implementing cloud projects
- Ability to translate business and technical requirements into technical design
- Good knowledge of end-to-end project delivery methodology for cloud projects
- Strong UNIX operating-system concepts and shell-scripting knowledge
- Good knowledge of cloud computing technologies and current computing trends
- Effective communication skills (written and verbal) to properly articulate complicated cloud reports to management and other IT development partners
- Ability to operate independently with a clear focus on schedule and outcomes
- Experience with algorithm development, including statistical and probabilistic analysis, clustering, recommendation systems, natural language processing, and performance analysis
- Exceptional problem-solving skills, with the ability to see and solve issues
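The clustering and statistical-analysis experience called for above can be pictured with a tiny, self-contained sketch; the data and the pure-Python k-means below are illustrative assumptions, not anything from the posting (a production version would use a library such as scikit-learn):

```python
import random

def kmeans_1d(points, k, iters=20, seed=0):
    """Toy k-means on 1-D data: assign each point to its nearest
    centroid, then recompute each centroid as its cluster mean."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two well-separated groups around 1.0 and 10.0 (made-up data)
data = [0.9, 1.0, 1.1, 9.9, 10.0, 10.1]
print(kmeans_1d(data, 2))  # centroids settle near 1.0 and 10.0
```

On well-separated data like this the loop converges in a few iterations regardless of which points are sampled as initial centroids.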
Our purpose

Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture

Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development

At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits to help you thrive

At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.
Recruiting tips

From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 301813

Posted 5 hours ago

Apply

3.0 - 6.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Summary

Position Summary: AI & Data

In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships in vast troves of data, generate insights, and inform decision-making. The offering portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:
- Implement large-scale data ecosystems, including data management, governance and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms
- Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions
- Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing as-a-service offerings for continuous insights and improvements

AWS Consultant

The position is suited to individuals who can work in a constantly challenging environment and deliver effectively and efficiently. The individual will need to be adaptive and able to react quickly to changing business needs.

Work you’ll do:
- Plan, design and develop cloud-based applications
- Work in tandem with the engineering team to identify and implement the most optimal cloud-based solutions
- Design and deploy enterprise-wide scalable operations on cloud platforms
- Deploy and debug cloud applications in accordance with best practices throughout the development lifecycle
- Provide administration for cloud deployments and ensure the environments are appropriately configured and maintained
- Monitor environment stability and respond to any issues or service requests for the environment
- Educate teams on the implementation of new cloud-based initiatives, providing associated training as required
- Build and design web services in the cloud, and implement the set-up of geographically redundant services
- Orchestrate and automate cloud-based platforms
- Continuously monitor system effectiveness and performance and identify areas for improvement, collaborating with key stakeholders
- Provide guidance and coaching to team members as required; contribute to documenting the cloud operations playbook and provide thought leadership in development automation and CI/CD
- Provide insights for optimization of cloud computing costs

Required:
- 3-6 years of technology consulting experience
- A minimum of 2 years of experience in cloud operations
- High degree of knowledge of AWS services such as Lambda, Glue, S3, Redshift, SNS, SQS and more
- Strong scripting experience with Python, the ability to write SQL queries, and strong analytical skills
- Experience working with CI/CD/DevOps is nice to have
- Proven experience with agile/iterative methodologies implementing cloud projects
- Ability to translate business and technical requirements into technical design
- Good knowledge of end-to-end project delivery methodology for cloud projects
- Strong UNIX operating-system concepts and shell-scripting knowledge
- Good knowledge of cloud computing technologies and current computing trends
- Effective communication skills (written and verbal) to properly articulate complicated cloud reports to management and other IT development partners
- Ability to operate independently with a clear focus on schedule and outcomes
- Experience with algorithm development, including statistical and probabilistic analysis, clustering, recommendation systems, natural language processing, and performance analysis
- Exceptional problem-solving skills, with the ability to see and solve issues
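As a small illustration of the recommendation-systems experience listed above, the sketch below scores user similarity with cosine similarity over rating vectors; all names and numbers are hypothetical, invented for the example:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Hypothetical item-rating vectors for three users
alice = [5, 4, 0, 0]
bob   = [4, 5, 0, 1]
carol = [0, 0, 5, 4]

print(round(cosine(alice, bob), 3))  # close to 1: similar tastes
print(cosine(alice, carol))          # 0.0: no overlapping ratings
```

A user-based recommender would then suggest to Alice the items her most similar neighbors rated highly.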
Our purpose

Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture

Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development

At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits to help you thrive

At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.
Recruiting tips

From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 301813

Posted 5 hours ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description

You will work with diverse datasets, from structured logs to unstructured events, to build intelligent systems for event correlation, root cause analysis, predictive maintenance, and autonomous remediation, ultimately driving significant operational efficiencies and improving service availability. This position requires a blend of deep technical expertise in machine learning, a strong understanding of IT operations, and a commitment to operationalizing AI solutions at scale.

Responsibilities

As a Senior Data Scientist, your responsibilities will include, but are not limited to:

Machine Learning Solution Development:
- Design, develop, and implement advanced machine learning models (supervised and unsupervised) to solve complex IT operations problems, including event correlation, anomaly detection, root cause analysis, predictive analytics, and auto-remediation.
- Leverage structured and unstructured datasets, performing extensive feature engineering and data preprocessing to optimize model performance.
- Apply strong statistical modeling, hypothesis testing, and experimental design principles to ensure rigorous model validation and reliable insights.

AI/ML Product & Platform Development:
- Lead the end-to-end development of data science products, from conceptualization and prototyping to deployment and maintenance.
- Develop and deploy AI agents for automating workflows in IT operations, particularly within the networks and cybersecurity domains.
- Implement RAG (Retrieval-Augmented Generation) retrieval frameworks for state-of-the-art models to enhance contextual understanding and response generation.
- Adopt AI to detect and redact sensitive data in logs, and implement central data tagging for all logs to improve AI model performance and governance.

MLOps & Deployment:
- Drive the operationalization of machine learning models through robust MLOps/LLMOps practices, ensuring scalability, reliability, and maintainability.
- Implement models as a service via APIs, utilizing containerization technologies (Docker, Kubernetes) for efficient deployment and management.
- Design, build, and automate resilient data pipelines in cloud environments (GCP/Azure) using AI agents and relevant cloud services.

Cloud & DevOps Integration:
- Integrate data science solutions with existing IT infrastructure and AIOps platforms (e.g., IBM Cloud Paks, Moogsoft, BigPanda, Dynatrace).
- Enable and optimize AIOps features within data analytics tools, monitoring tools, or dedicated AIOps platforms.
- Champion DevOps practices, including CI/CD pipelines (Jenkins, GitLab CI, GitHub Actions), infrastructure-as-code (Terraform, Ansible, CloudFormation), and automation to streamline development and deployment workflows.

Performance & Reliability:
- Monitor and optimize platform performance, ensuring systems run efficiently and meet defined service level agreements (SLAs).
- Lead incident management efforts related to data science systems and implement continuous improvements to enhance reliability and resilience.

Leadership & Collaboration:
- Translate complex business problems into data science solutions, understanding their strategic implications and potential business value.
- Collaborate effectively with cross-functional teams, including engineering, product management, and operations, to define project scope, requirements, and success metrics.
- Mentor junior data scientists and engineers, fostering a culture of technical excellence, continuous learning, and innovation.
- Clearly articulate complex technical concepts, findings, and recommendations to both technical and non-technical audiences, influencing decision-making and driving actionable outcomes.

Best Practices:
- Uphold best engineering practices, including rigorous code reviews, comprehensive testing, and thorough documentation.
- Maintain a strong focus on building maintainable, scalable, and secure systems.
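Anomaly detection over operational metrics, one of the responsibilities above, can be reduced to a minimal baseline: flag points whose z-score exceeds a threshold. The latency numbers below are synthetic, and a real AIOps pipeline would use far richer models:

```python
from statistics import mean, stdev

def zscore_anomalies(values, threshold=2.5):
    """Flag indices whose z-score exceeds the threshold,
    a baseline anomaly detector for operational metrics."""
    mu, sigma = mean(values), stdev(values)
    return [i for i, v in enumerate(values)
            if sigma and abs(v - mu) / sigma > threshold]

# Response times in ms with one obvious spike (synthetic data)
latencies = [102, 98, 101, 99, 100, 97, 103, 100, 450, 101]
print(zscore_anomalies(latencies))  # only the spike at index 8
```

Note the spike inflates both the mean and the standard deviation, which is why the threshold here is below the textbook 3.0; robust variants use the median and MAD instead.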
Qualifications

Education: Bachelor's or Master's degree in Computer Science, Data Science, Artificial Intelligence, Machine Learning, Statistics, or a related quantitative field.

Experience:
- 8+ years in IT, with 5+ years of progressive experience as a Data Scientist and a significant focus on applying ML/AI in IT operations, AIOps, or a related domain.
- Proven track record of building and deploying machine learning models into production environments.
- Demonstrated experience with MLOps/LLMOps principles and tools.
- Experience designing and implementing microservices and serverless architectures.
- Hands-on experience with containerization technologies (Docker, Kubernetes).

Technical Skills:
- Programming: Proficiency in at least one major programming language, preferably Python, sufficient to effectively communicate with and guide engineering teams (Java is also a plus).
- Machine Learning: Strong theoretical and practical understanding of various ML algorithms (e.g., classification, regression, clustering, time-series analysis, deep learning) and their application to IT operational data.
- Cloud Platforms: Expertise with Google Cloud Platform (GCP) services is highly preferred, including Dataflow, Pub/Sub, Cloud Logging, Compute Engine, Kubernetes Engine, Cloud Functions, BigQuery, Cloud Storage, and Vertex AI. Experience with other major cloud providers (AWS, Azure) is also valuable.
- DevOps & Tools: Experience with CI/CD pipelines (e.g., Jenkins, GitLab CI, GitHub Actions). Familiarity with infrastructure-as-code tools (e.g., Terraform, Ansible, CloudFormation).
- AIOps/Observability: Knowledge of AIOps platforms such as IBM Cloud Paks, Moogsoft, BigPanda, Dynatrace, etc. Experience with log analytics platforms and data tagging strategies.

Soft Skills:
- Exceptional analytical and problem-solving skills, with a track record of tackling ambiguous and complex challenges independently.
Strong communication and presentation skills, with the ability to articulate complex technical concepts and findings to diverse audiences and influence stakeholders. Ability to take end-to-end ownership of data science projects. Commitment to best engineering practices, including code reviews, testing, and documentation. A strong desire to stay current with the latest advancements in AI, ML, and cloud technologies.

Posted 5 hours ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description

You will work with diverse datasets, from structured logs to unstructured events, to build intelligent systems for event correlation, root cause analysis, predictive maintenance, and autonomous remediation, ultimately driving significant operational efficiencies and improving service availability. This position requires a blend of deep technical expertise in machine learning, a strong understanding of IT operations, and a commitment to operationalizing AI solutions at scale.

Responsibilities

As a Senior Data Engineer, your responsibilities will include, but are not limited to:

Machine Learning Solution Development:
- Design, develop, and implement advanced machine learning models (supervised and unsupervised) to solve complex IT operations problems, including event correlation, anomaly detection, root cause analysis, predictive analytics, and auto-remediation.
- Leverage structured and unstructured datasets, performing extensive feature engineering and data preprocessing to optimize model performance.
- Apply strong statistical modeling, hypothesis testing, and experimental design principles to ensure rigorous model validation and reliable insights.

AI/ML Product & Platform Development:
- Lead the end-to-end development of data science products, from conceptualization and prototyping to deployment and maintenance.
- Develop and deploy AI agents for automating workflows in IT operations, particularly within the networks and cybersecurity domains.
- Implement RAG (Retrieval-Augmented Generation) retrieval frameworks for state-of-the-art models to enhance contextual understanding and response generation.
- Adopt AI to detect and redact sensitive data in logs, and implement central data tagging for all logs to improve AI model performance and governance.

MLOps & Deployment:
- Drive the operationalization of machine learning models through robust MLOps/LLMOps practices, ensuring scalability, reliability, and maintainability.
- Implement models as a service via APIs, utilizing containerization technologies (Docker, Kubernetes) for efficient deployment and management.
- Design, build, and automate resilient data pipelines in cloud environments (GCP/Azure) using AI agents and relevant cloud services.

Cloud & DevOps Integration:
- Integrate data science solutions with existing IT infrastructure and AIOps platforms (e.g., IBM Cloud Paks, Dynatrace).
- Enable and optimize AIOps features within data analytics tools, monitoring tools, or dedicated AIOps platforms.
- Champion DevOps practices, including CI/CD pipelines (Jenkins, GitLab CI, GitHub Actions), infrastructure-as-code (Terraform, Ansible, CloudFormation), and automation to streamline development and deployment workflows.

Performance & Reliability:
- Monitor and optimize platform performance, ensuring systems run efficiently and meet defined service level agreements (SLAs).
- Lead incident management efforts related to data science systems and implement continuous improvements to enhance reliability and resilience.

Leadership & Collaboration:
- Translate complex business problems into data science solutions, understanding their strategic implications and potential business value.
- Collaborate effectively with cross-functional teams, including engineering, product management, and operations, to define project scope, requirements, and success metrics.
- Mentor junior data scientists and engineers, fostering a culture of technical excellence, continuous learning, and innovation.
- Clearly articulate complex technical concepts, findings, and recommendations to both technical and non-technical audiences, influencing decision-making and driving actionable outcomes.

Best Practices:
- Uphold best engineering practices, including rigorous code reviews, comprehensive testing, and thorough documentation.
- Maintain a strong focus on building maintainable, scalable, and secure systems.

Qualifications

Education: Master's or Ph.D. in Computer Science, Data Science, Artificial Intelligence, Machine Learning, Statistics, or a related quantitative field.

Experience:
- 8+ years of progressive experience as a Data Scientist, with a significant focus on applying ML/AI in IT operations, AIOps, or a related domain.
- Proven track record of building and deploying machine learning models into production environments.
- Demonstrated experience with MLOps/LLMOps principles and tools.
- Experience designing and implementing microservices and serverless architectures.
- Hands-on experience with containerization technologies (Docker, Kubernetes).

Technical Skills:
- Programming: Proficiency in at least one major programming language, preferably Python, sufficient to effectively communicate with and guide engineering teams (Java is also a plus).
- Machine Learning: Strong theoretical and practical understanding of various ML algorithms (e.g., classification, regression, clustering, time-series analysis, deep learning) and their application to IT operational data.
- Cloud Platforms: Expertise with Google Cloud Platform (GCP) services is highly preferred, including Dataflow, Pub/Sub, Cloud Logging, Compute Engine, Kubernetes Engine, Cloud Functions, BigQuery, Cloud Storage, and Vertex AI. Experience with other major cloud providers (AWS, Azure) is also valuable.
- DevOps & Tools: Experience with CI/CD pipelines (e.g., Jenkins, GitLab CI, GitHub Actions). Familiarity with infrastructure-as-code tools (e.g., Terraform, Ansible, CloudFormation).
- AIOps/Observability: Knowledge of AIOps platforms such as IBM Cloud Paks, Moogsoft, BigPanda, Dynatrace, etc. Experience with log analytics platforms and data tagging strategies.

Soft Skills:
- Exceptional analytical and problem-solving skills, with a track record of tackling ambiguous and complex challenges independently.
Strong communication and presentation skills, with the ability to articulate complex technical concepts and findings to diverse audiences and influence stakeholders. Ability to take end-to-end ownership of data science projects. Commitment to best engineering practices, including code reviews, testing, and documentation. A strong desire to stay current with the latest advancements in AI, ML, and cloud technologies.
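The event-correlation work described in this posting can be sketched as a naive pass that groups log events occurring within a short time window; the events and the 5-second window below are illustrative assumptions, not part of the role:

```python
def correlate(events, window=5):
    """Naive event correlation: group events whose timestamps fall
    within `window` seconds of the previous event in the group."""
    groups = []
    for ts, msg in sorted(events):
        if groups and ts - groups[-1][-1][0] <= window:
            groups[-1].append((ts, msg))
        else:
            groups.append([(ts, msg)])
    return groups

# Illustrative (timestamp, message) events: one burst, one stray event
events = [(0, "disk full"), (2, "db write failed"),
          (3, "api 500s"), (60, "cert renewed")]
groups = correlate(events)
print(len(groups))                    # 2 groups
print([msg for _, msg in groups[0]])  # the correlated burst
```

Temporal bucketing like this is only a first cut; production AIOps platforms also correlate on topology and message similarity before suggesting a root cause.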

Posted 5 hours ago

Apply

25.0 years

2 - 8 Lacs

Hyderābād

On-site

Recruitment Fraud Alert

We’ve learned that scammers are impersonating Commvault team members—including HR and leadership—via email or text. These bad actors may conduct fake interviews and ask for personal information, such as your social security number.

What to know: Commvault does not conduct interviews by email or text. We will never ask you to submit sensitive documents (including banking information, SSN, etc.) before your first day. If you suspect a recruiting scam, please contact us at wwrecruitingteam@commvault.com.

About Commvault

Commvault (NASDAQ: CVLT) is the gold standard in cyber resilience. The company empowers customers to uncover, take action, and rapidly recover from cyberattacks – keeping data safe and businesses resilient. The company’s unique AI-powered platform combines best-in-class data protection, exceptional data security, advanced data intelligence, and lightning-fast recovery across any workload or cloud at the lowest TCO. For over 25 years, more than 100,000 organizations and a vast partner ecosystem have relied on Commvault to reduce risks, improve governance, and do more with data.

Senior Engineer – Test (with Python)

The Opportunity

We have an outstanding career opportunity for a Senior Engineer – Test at our Bangalore, Pune and Hyderabad locations. The ideal candidate will have strong analytical skills and a proactive attitude towards testing and automating testing efforts. This role also includes developing test strategies, drawing up test documents, identifying faults, and reviewing QA reports.
What you’ll do…
- Review and validate requirements and technical specifications
- Develop and execute test plans and detailed test cases based on requirements and/or customer feedback and prioritization
- Document results; offer observations or improvements after analysis of test results and overall product quality
- Collaborate with the development team on bug-fix verification and validation (regression testing)
- Communicate professionally at all levels within and outside of the organization
- Support, design, develop, and enhance test processes and reporting for QA processes
- Manage testing efforts across many varied projects and tasks under tight deadlines
- Mentor and provide training assistance to Associate QA Engineers
- Implement testing procedures and oversee the QA process

Who you are…
- Bachelor's degree required
- Experience with virtualization and cloud administration
- Strong analytical skills and a proactive attitude towards testing and finding issues
- Hands-on experience with both Windows and Unix operating systems
- Strong understanding of clustering and networking
- Experience with automated testing tools or equivalent automation skills, such as Python, Selenium, C#, JavaScript, C++, Ansible, or Terraform
- Scripting skills, particularly in Python, for automating tests
- Security and SaaS experience strongly preferred

You’ll love working here because:
- Employee stock purchase plan (ESPP)
- Continuous professional development, product training, and career pathing
- Annual health check-ups, car lease program, and tuition reimbursement
- An inclusive company culture and an opportunity to join our Community Guilds
- Personal accident cover and term life cover

Ready to #makeyourmark at Commvault? Apply now!

Commvault is an equal opportunity workplace and an affirmative action employer.
We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or veteran status, and we will not discriminate on the basis of such characteristics or any other status protected by the laws or regulations in the locations where we work. Commvault’s goal is to make interviewing inclusive and accessible to all candidates and employees. If you have a disability or special need that requires accommodation to participate in the interview process or apply for a position at Commvault, please email accommodations@commvault.com. For any inquiries not related to an accommodation, please reach out to wwrecruitingteam@commvault.com.
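The Python test-automation skills this role asks for can be illustrated with a small assert-based check in the pytest style; the `is_valid_hostname` helper is a made-up example target, not Commvault code:

```python
def is_valid_hostname(name):
    """Simplified hostname check: dot-separated labels of letters,
    digits, and hyphens, none empty or edged with a hyphen."""
    labels = name.split(".")
    return all(
        label != "" and not label.startswith("-") and not label.endswith("-")
        and all(c.isalnum() or c == "-" for c in label)
        for label in labels
    )

# Minimal assert-based checks, pytest style
def test_hostname_validation():
    assert is_valid_hostname("node-1.cluster.local")
    assert not is_valid_hostname("-bad.example")
    assert not is_valid_hostname("a..b")

test_hostname_validation()
print("all checks passed")
```

Under pytest the `test_` function would be collected and run automatically; calling it directly keeps the sketch self-contained.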

Posted 5 hours ago

Apply

0 years

6 - 9 Lacs

Hyderābād

On-site

Ready to build the future with AI?

At Genpact, we don’t just keep up with technology—we set the pace. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what’s possible, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Manager, Data Scientist

We are looking for a highly motivated and technically proficient Senior Data Scientist with deep expertise in time series forecasting, machine learning, natural language processing (NLP), and Python programming. This role is critical to enhancing our forecasting capabilities, driving product segmentation strategies, and supporting data-driven decision-making. The candidate will work closely with cross-functional teams, including data engineers, analysts, and project managers, to develop, test, and deploy scalable forecasting models and analytical solutions in a cloud-based environment.

Responsibilities

1. Forecasting & Model Development
- Design, develop, and implement advanced time series forecasting models (e.g., ARIMA, Prophet, LSTM, XGBoost) tailored to different product categories and business needs.
- Evaluate and improve forecast accuracy by establishing robust metrics and conducting regular performance assessments throughout the sales cycle.
- Run what-if scenarios and simulations to assess the impact of various business conditions on forecast outcomes.

2. Segmentation & Clustering
- Collaborate with the Senior Data Scientist to perform segmentation and clustering of product identifiers using unsupervised learning techniques (e.g., K-means, DBSCAN, hierarchical clustering).
- Analyze product behavior patterns to identify slow- and fast-moving items, and generate actionable insights for inventory and sales planning.

3. Data Extraction & Feature Engineering
- Extract, clean, and transform data from multiple source systems (e.g., SQL databases, APIs, cloud storage) to support modeling and analysis.
- Engineer relevant features and variables to enhance model performance and interpretability.

4. Model Evaluation & Deployment
- Conduct comparative analysis of forecasting methods across different segments, tuning parameters and optimizing performance.
- Work in close coordination with data engineers and cloud platform teams to ensure seamless deployment of models into production environments (e.g., AWS, Azure, GCP).
- Monitor deployed models for drift, accuracy, and performance, and implement retraining pipelines as needed.

5. Collaboration & Communication
- Partner with cross-functional stakeholders to understand business requirements and translate them into analytical solutions.
- Present findings, insights, and recommendations to both technical and non-technical audiences through reports, dashboards, and presentations.

Qualifications we seek in you!

Minimum Qualifications / Skills
- Hands-on experience in data science, with a strong focus on forecasting and predictive analytics.
- Proficiency in Python and its data science ecosystem (Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch, Statsmodels, etc.).
Strong understanding of time series analysis, machine learning algorithms, and NLP techniques. Experience with data extraction and manipulation using SQL and/or cloud-based data tools. Familiarity with cloud platforms (AWS, Azure, or GCP) and model deployment workflows. Ability to work in a fast-paced, agile environment with shifting priorities and tight deadlines. Excellent problem-solving, analytical thinking, and communication skills. Preferred Qualifications/ Skills Master’s in computer science, Data Science, Statistics, or a related field Why join Genpact? Lead AI-first transformation – Build and scale AI solutions that redefine industries Make an impact – Drive change for global enterprises and solve business challenges that matter Accelerate your career—Gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills Grow with the best – Learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace Committed to ethical AI – Work in an environment where governance, transparency, and security are at the core of everything we build Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. 
Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. Job Manager Primary Location India-Hyderabad Education Level Bachelor's / Graduation / Equivalent Job Posting Aug 7, 2025, 1:03:41 PM Unposting Date Ongoing Master Skills List Digital Job Category Full Time
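The "Segmentation & Clustering" responsibility in the Genpact listing above can be illustrated with a minimal sketch. This is not Genpact's pipeline: the product identifiers, sales figures, and two-segment split are invented, and a tiny 1-D k-means stands in for a full scikit-learn workflow.

```python
# Illustrative only: segment product identifiers into slow- and fast-moving
# groups with a minimal 1-D k-means. All data below is made up.

def kmeans_1d(values, centers, iters=20):
    """Plain 1-D k-means: assign each value to its nearest center,
    then move each center to the mean of its assigned values."""
    for _ in range(iters):
        clusters = {i: [] for i in range(len(centers))}
        for v in values:
            nearest = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in clusters.items()]
    return centers

# Hypothetical average weekly unit sales per product identifier.
avg_weekly_sales = {"SKU-A": 3, "SKU-B": 5, "SKU-C": 4, "SKU-D": 120, "SKU-E": 95}

centers = kmeans_1d(list(avg_weekly_sales.values()), centers=[0.0, 100.0])
slow_center, fast_center = sorted(centers)

# Label each product by the nearer of the two learned centers.
segments = {
    sku: "fast" if abs(s - fast_center) < abs(s - slow_center) else "slow"
    for sku, s in avg_weekly_sales.items()
}
```

In practice the same segmentation would run over engineered features (sales velocity, seasonality, price band) with scikit-learn's KMeans or DBSCAN, and the resulting labels would feed inventory and sales planning.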

Posted 5 hours ago

Apply

3.0 - 6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Summary Position Summary AI & Data In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. The offering portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets. AI & Data will work with our clients to: Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements AWS Consultant The position is suited to individuals who can work in a constantly challenging environment and deliver effectively and efficiently, adapting quickly to changing business needs. Work you’ll do Plan, design and develop cloud-based applications Work in tandem with the engineering team to identify and implement the most optimal cloud-based solutions Design and deploy enterprise-wide scalable operations on Cloud Platforms Deploy and debug cloud applications in accordance with best practices throughout the development lifecycle Provide administration for cloud deployments and ensure the environments are appropriately configured and maintained Monitor environment stability and respond to any issues or service requests for the environment. 
Educate teams on the implementation of new cloud-based initiatives, providing associated training as required Exceptional problem-solving skills, with the ability to identify and resolve issues Build and design web services in the cloud, and implement the set-up of geographically redundant services Orchestrate and automate cloud-based platforms Continuously monitor system effectiveness and performance and identify areas for improvement, collaborating with key stakeholders Provide guidance and coaching to team members as required, contribute to documenting the cloud operations playbook, and provide thought leadership in development automation and CI/CD Contribute insights for optimization of cloud computing costs Required: 3-6 years of technology consulting experience A minimum of 2 years of experience in Cloud Operations Strong knowledge of AWS services such as Lambda, Glue, S3, Redshift, SNS, SQS and more Strong scripting experience with Python, the ability to write SQL queries, and strong analytical skills Experience working on CI/CD/DevOps is nice to have Proven experience with agile/iterative methodologies implementing Cloud projects Ability to translate business and technical requirements into technical design Good knowledge of end-to-end project delivery methodology implementing Cloud projects Strong UNIX operating system concepts and shell scripting knowledge Good knowledge of cloud computing technologies and current computing trends Effective communication skills (written and verbal) to properly articulate complicated cloud reports to management and other IT development partners Ability to operate independently with a clear focus on schedule and outcomes Experience with algorithm development, including statistical and probabilistic analysis, clustering, recommendation systems, natural language processing, and performance analysis. 
Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you. 
Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 301813

Posted 5 hours ago

Apply

10.0 years

5 - 8 Lacs

Hyderābād

Remote

We are united in our mission to make a positive impact on healthcare. Join Us! South Florida Business Journal, Best Places to Work 2024 Inc. 5000 Fastest-Growing Private Companies in America 2024 2024 Black Book Awards, ranked #1 EHR in 11 Specialties 2024 Spring Digital Health Awards, "Web-based Digital Health" category for EMA Health Records (Gold) 2024 Stevie American Business Award (Silver), New Product and Service: Health Technology Solution (Klara) Who we are: We Are Modernizing Medicine (WAMM)! We're a team of bright, passionate, and positive problem-solvers on a mission to place doctors and patients at the center of care through an intelligent, specialty-specific cloud platform. Our vision is a world where the software we build increases medical practice success and improves patient outcomes. Founded in 2010 by Daniel Cane and Dr. Michael Sherling, we have grown to over 3400 combined direct and contingent team members serving eleven specialties, and we are just getting started! ModMed's global headquarters is based in Boca Raton, FL, with a growing office in Hyderabad, India, and a robust remote workforce across the US, Chile, and Germany. Job Overview: ModMed is seeking an experienced and highly skilled Data Scientist with a strong background in machine learning, predictive analytics, and healthcare data. This individual will play a critical role in developing and implementing advanced analytics solutions to optimize healthcare delivery and patient outcomes. As part of our innovative team, you will work with large, complex datasets to build predictive models, uncover trends, and improve healthcare operations through data-driven insights. If you have a passion for leveraging data to solve complex healthcare challenges and are well-versed in the latest data science methodologies, we would love to hear from you. 
Key Responsibilities : Data Modeling and Analytics: Develop and implement machine learning models, algorithms, and statistical tools to process large volumes of healthcare data for predictive insights and optimization. Data Mining and Exploration: Analyze structured and unstructured data to discover trends, correlations, and patterns that can drive new product features or enhance existing offerings. Predictive Analytics: Build predictive models to anticipate patient needs, streamline operational workflows, and improve healthcare delivery. Research and Development: Keep abreast of the latest developments in machine learning, data science, and healthcare analytics to drive innovation and improvement in ModMed's AI offerings. Skillset & Qualification: Technical Expertise: Proficient in Python, R, SQL, and data manipulation libraries (e.g., Pandas, NumPy, SciPy). Strong experience with machine learning libraries and frameworks (e.g., TensorFlow, Scikit-learn, PyTorch). Familiarity with big data platforms like Hadoop, Spark, or AWS/GCP. Experience working with healthcare data formats (HL7, FHIR) and privacy standards (HIPAA) is a plus. Machine Learning & Statistics: Expertise in building and deploying machine learning models, including classification, regression, and clustering techniques. Education: Bachelor's or Master's degree in Data Science, Generative AI, Computer Science, Mathematics, Statistics, or a related technical field. Experience: 10+ years of experience in data science, Generative AI, machine learning, or predictive modeling, preferably in healthcare or a similar data-rich domain. Problem Solving: Strong analytical skills with the ability to translate complex business problems into data-driven solutions. Communication: Excellent verbal and written communication skills to explain technical concepts to non-technical stakeholders. 
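A hedged sketch of the "Predictive Analytics" responsibility above: a logistic-regression classifier fit by stochastic gradient descent on fabricated patient data. The feature, labels, and hyperparameters are invented for illustration; a production model would use established libraries and real, HIPAA-governed data.

```python
# Illustrative only: predict a binary outcome (e.g., readmission) from one
# synthetic feature (e.g., number of prior visits). Data is fabricated.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

X = [1.0, 2.0, 3.0, 8.0, 9.0, 10.0]   # synthetic feature values
y = [0, 0, 0, 1, 1, 1]                # synthetic binary outcomes

w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):                 # stochastic gradient descent on log-loss
    for xi, yi in zip(X, y):
        p = sigmoid(w * xi + b)
        w -= lr * (p - yi) * xi       # gradient of log-loss w.r.t. w
        b -= lr * (p - yi)            # gradient of log-loss w.r.t. b

def predict(xi):
    """Classify with the learned weights at a 0.5 probability threshold."""
    return int(sigmoid(w * xi + b) >= 0.5)
```

The same shape scales up to the classification, regression, and clustering techniques the listing names, typically via Scikit-learn or TensorFlow rather than hand-rolled updates.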
Preferred: Experience with healthcare-related analytics, including patient outcomes, clinical workflows, and operational efficiencies. ModMed Benefits Highlight: At ModMed, we believe it's important to offer a competitive benefits package designed to meet the diverse needs of our growing workforce. Eligible Modernizers can enroll in a wide range of benefits: India Meals & Snacks: Enjoy complimentary office lunches & dinners on select days and healthy snacks delivered to your desk, Insurance Coverage: Comprehensive health, accidental, and life insurance plans, including coverage for family members, all at no cost to employees, Allowances: Annual wellness allowance to support your well-being and productivity, Earned, casual, and sick leaves to maintain a healthy work-life balance, Bereavement leave for difficult times and extended medical leave options, Paid parental leaves, including maternity, paternity, adoption, surrogacy, and abortion leave, Celebration leave to make your special day even more memorable, and company-paid holidays to recharge and unwind. United States Comprehensive medical, dental, and vision benefits, including a company Health Savings Account contribution, 401(k): ModMed provides a matching contribution each payday of 50% of your contribution deferred on up to 6% of your compensation. After one year of employment with ModMed, 100% of any matching contribution you receive is yours to keep. 
Generous Paid Time Off and Paid Parental Leave programs, Company paid Life and Disability benefits, Flexible Spending Account, and Employee Assistance Programs, Company-sponsored Business Resource & Special Interest Groups that provide engaged and supportive communities within ModMed, Professional development opportunities, including tuition reimbursement programs and unlimited access to LinkedIn Learning, Global presence and in-person collaboration opportunities; dog-friendly HQ (US), Hybrid office-based roles and remote availability for some roles, Weekly catered breakfast and lunch, treadmill workstations, Zen, and wellness rooms within our BRIC headquarters. PHISHING SCAM WARNING: ModMed is among several companies recently made aware of a phishing scam involving imposters posing as hiring managers recruiting via email, text and social media. The imposters are creating misleading email accounts, conducting remote "interviews," and making fake job offers in order to collect personal and financial information from unsuspecting individuals. Please be aware that no job offers will be made from ModMed without a formal interview process, and valid communications from our hiring team will come from our employees with a ModMed email address (first.lastname@modmed.com). Please check senders' email addresses carefully. Additionally, ModMed will not ask you to purchase equipment or supplies as part of your onboarding process. If you are receiving communications as described above, please report them to the FTC website.

Posted 5 hours ago

Apply

3.0 years

1 - 2 Lacs

Gurgaon

On-site

Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential. Title and Summary Senior Data Scientist AI Garage is responsible for establishing Mastercard as an AI powerhouse. AI will be leveraged and implemented at scale within Mastercard providing a foundational, competitive advantage for the future. All internal processes, all products and services will be enabled by AI continuously advancing our value proposition, consumer experience, and efficiency. Opportunity Join Mastercard's AI Garage @ Gurgaon, a newly created strategic business unit executing on identified use cases for product optimization and operational efficiency securing Mastercard's competitive advantage through all things AI. The AI professional will be responsible for the creative application and execution of AI use cases, working collaboratively with other AI professionals and business stakeholders to effectively drive the AI mandate. 
Role Ensure all AI solution development is in line with industry standards for data management and privacy compliance including the collection, use, storage, access, retention, output, reporting, and quality of data at Mastercard Adopt a pragmatic approach to AI, capable of articulating complex technical requirements in a manner that is simple and relevant to stakeholder use cases Gather relevant information to define the business problem interfacing with global stakeholders Creative thinker capable of linking AI methodologies to identified business challenges Identify commonalities amongst use cases enabling a microservice approach to scaling AI at Mastercard, building reusable, multi-purpose models Develop AI/ML solutions/applications leveraging the latest industry and academic advancements Leverage open and closed source technologies to solve business problems Ability to work cross-functionally, and across borders drawing on a broader team of colleagues to effectively execute the AI agenda Partner with technical teams to implement developed solutions/applications in production environment Support a learning culture continuously advancing AI capabilities All About You Experience 3+ years of experience in the Data Sciences field with a focus on AI strategy and execution and developing solutions from scratch Demonstrated passion for AI, e.g., by competing in sponsored challenges such as Kaggle Previous experience with or exposure to: o Deep Learning algorithm techniques, open source tools and technologies, statistical tools, and programming environments such as Python, R, and SQL o Big Data platforms such as Hadoop, Hive, Spark, GPU Clusters for deep learning o Classical Machine Learning Algorithms like Logistic Regression, Decision trees, Clustering (K-means, Hierarchical and Self-organizing Maps), TSNE, PCA, Bayesian models, Time Series ARIMA/ARMA, Recommender Systems - Collaborative Filtering, FPMC, FISM, Fossil o Additional machine learning and deep learning techniques like Random Forest, 
GBM, KNN, SVM, Bayesian methods, text mining techniques, Multilayer Perceptron, and neural networks (feedforward, CNN, LSTMs, GRUs) is a plus. Optimization techniques – Activity regularization (L1 and L2), Adam, Adagrad, Adadelta concepts; Cost Functions in Neural Nets – Contrastive Loss, Hinge Loss, Binary Cross-Entropy, Categorical Cross-Entropy; developed applications in KRR, NLP, Speech and Image processing o Deep Learning frameworks for Production Systems like TensorFlow, Keras (for RPD and neural net architecture evaluation), PyTorch and XGBoost, Caffe, and Theano is a plus Exposure or experience using collaboration tools such as: o Confluence (Documentation) o Bitbucket/Stash (Code Sharing) o Shared Folders (File Sharing) o ALM (Project Management) Knowledge of the payments industry is a plus Experience with SAFe (Scaled Agile Framework) process is a plus Effectiveness Effective at managing and validating assumptions with key stakeholders in compressed timeframes, without hampering development momentum Capable of navigating a complex organization in a relentless pursuit of answers and clarity Enthusiasm for Data Sciences embracing the creative application of AI techniques to improve an organization's effectiveness Ability to understand technical system architecture and overarching function along with interdependency elements, as well as anticipate challenges for immediate remediation Ability to unpack complex problems into addressable segments and evaluate AI methods most applicable to addressing the segment Incredible attention to detail and focus instilling confidence without qualification in developed solutions Core Capabilities Strong written and oral communication skills Strong project management skills Concentration in Computer Science Some international travel required Corporate Security Responsibility All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every 
person working for, or on behalf of, Mastercard is responsible for information security and must: Abide by Mastercard’s security policies and practices; Ensure the confidentiality and integrity of the information being accessed; Report any suspected information security violation or breach, and Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.
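As a worked example of one of the cost functions named in the listing above (not Mastercard code), binary cross-entropy can be computed directly from its definition, BCE = -(1/N) · Σ [y·log(p) + (1-y)·log(1-p)], with invented labels and predictions:

```python
# Illustrative only: mean binary cross-entropy computed from its definition.
import math

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean binary cross-entropy; eps guards against log(0)."""
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)       # clip probabilities
        total += y * math.log(p) + (1.0 - y) * math.log(1.0 - p)
    return -total / len(y_true)

# Confident correct predictions cost little; confident wrong ones cost a lot.
good = binary_cross_entropy([1, 0], [0.9, 0.1])
bad = binary_cross_entropy([1, 0], [0.1, 0.9])
```

In TensorFlow or PyTorch the same quantity comes from the built-in loss functions; the clipped hand computation just makes the definition concrete.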

Posted 6 hours ago

Apply

3.0 - 6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Summary Position Summary AI & Data In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. The offering portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets. AI & Data will work with our clients to: Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements AWS Consultant The position is suited for individuals who have the ability to work in a constantly challenging environment and deliver effectively and efficiently. The individual will need to be adaptive and able to react quickly to changing business needs. Work you’ll do Planning, designing and developing cloud-based applications Work in tandem with engineering team to identify and implement the most optimal cloud-based solutions Design and deploy enterprise-wide scalable operations on Cloud Platforms Deploy and debug cloud applications in accordance with best practices throughout the development lifecycle Provides administration for cloud deployments and assures the environments are appropriately configured and maintained. Monitors the environment stability and responds to any issues or service requests for the environment. 
Educate teams on the implementation of new cloud-based initiatives, providing associated training as required Exceptional problem-solving skills, with the ability to see and solve issues Building and designing web services in the cloud, along with implementing the set-up of geographically redundant services. Orchestrating and automating cloud-based platforms Continuously monitor the system effectiveness and performance and identify the areas for improvement, collaborating with key stakeholders Provide guidance and coaching to the team members as required and also contribute to documenting cloud operations playbook and providing thought leadership in development automation, CI/CD Involve in providing insights for optimization of cloud computing costs Required : 3-6 Years of technology Consulting experience A minimum of 2 Years of experience in Cloud Operations High degree of knowledge using AWS services like lambda, GLUE, S3, Redshift, SNS, SQS and more. Strong scripting experience with python and ability to write SQL queries and string analytical skills. Experience working on CICD/DevOps is nice to have. Proven experience with agile/iterative methodologies implementing Cloud projects. Ability to translate business requirements and technical requirements into technical design. Good knowledge of end to end project delivery methodology implementing Cloud projects. Strong UNIX operating system concepts and shell scripting knowledge Good knowledge of cloud computing technologies and current computing trends. Effective communication skills (written and verbal) to properly articulate complicated cloud reports to management and other IT development partners. Ability to operate independently with clear focus on schedule and outcomes. Experience with algorithm development, including statistical and probabilistic analysis, clustering, recommendation systems, natural language processing, and performance analysis. 
Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India . Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that helps that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/ or other criteria. Learn more about what working at Deloitte can mean for you. 
Recruiting tips From developing a stand out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 301813

Posted 6 hours ago

Apply

3.0 years

28 Lacs

Gurgaon

On-site

Data Scientist Location: Gurugram, Haryana Need Immediate Joiners JD 1 – Data Scientist Statistical knowledge, good communication skills; coding skills in Python, SQL; knowledge of GenAI; Power BI, Tableau; hands-on testing, coding, APIs Key responsibilities: Analyze large and complex data sets to identify patterns, trends, and actionable insights using AI techniques. Develop and implement AI models, including machine learning, deep learning, and natural language processing algorithms, to address client-specific challenges. Design, develop, test, and deploy scalable AI solutions in production environments (a plus) Collaborate with cross-functional teams, including consultants and industry experts, to understand client needs and deliver AI-driven solutions Communicate findings and insights effectively to both technical and non-technical stakeholders. Skills: Minimum of 3 years of work experience in data science or a related field with a focus on AI Strong knowledge of AI techniques (including RAG implementation and agents), machine learning, deep learning, and natural language processing Proficiency in Python and related libraries (e.g., Pandas, NumPy, scikit-learn, Streamlit) is mandatory. Proficiency in query languages like SQL Experience with AI and machine learning frameworks and libraries (e.g., TensorFlow, PyTorch) Strong problem-solving skills and attention to detail JD 2 – Data Scientist Skills: Minimum of 6-9 years of experience with advanced knowledge of statistical and data mining techniques (regression, decision trees, clustering, neural networks, etc.). Experience working with large datasets and relational databases is highly desirable (SQL). Knowledge of additional programming languages is a plus (Python, C++, Java). Distinctive communication skills and the ability to communicate analytical and technical content in an easy-to-understand way. 
Intellectual curiosity, along with excellent problem-solving and quantitative skills, including the ability to disaggregate issues, identify root causes and recommend solutions. Proven leaders with the ability to inspire others, build strong relationships, and create a true followership; results-driven achievers. Strong people skills, team orientation, and a professional attitude. Our Advanced Analytics teams bring the latest analytical techniques plus a deep understanding of industry dynamics and corporate functions to help clients create the most value from data. Notice Period: Immediate, or a maximum of 2 weeks Location: Gurgaon, Hybrid mode Job Type: Permanent Pay: ₹2,800,000.00 per year Experience: Data Scientist: 3 years (Required) Coding skills in Python, SQL: 3 years (Required) Power BI, Tableau: 3 years (Required) Hands-on testing, coding, APIs: 3 years (Required) Work Location: In person
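The retrieval step behind the RAG skill this listing asks for can be sketched with plain bag-of-words cosine similarity. The corpus and query are invented; a real system would use dense embeddings and a vector store, then pass the retrieved text to an LLM as context.

```python
# Illustrative only: score documents against a query and return the best
# match, the "R" in retrieval-augmented generation.
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-count vectors (Counters)."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = {
    "refunds": "refunds are processed within five business days",
    "shipping": "orders ship within two days of purchase",
}
vecs = {name: Counter(text.split()) for name, text in docs.items()}

def retrieve(query):
    """Return the name of the document most similar to the query."""
    q = Counter(query.lower().split())
    return max(vecs, key=lambda name: cosine(q, vecs[name]))

best = retrieve("how long do refunds take")
```

The retrieved document would then be concatenated into the LLM prompt so the model answers from it rather than from memory alone.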

Posted 6 hours ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Area(s) of responsibility

About Us: Birlasoft, a global leader at the forefront of Cloud, AI, and Digital technologies, seamlessly blends domain expertise with enterprise solutions. The company’s consultative and design-thinking approach empowers societies worldwide, enhancing the efficiency and productivity of businesses. As part of the multibillion-dollar diversified CKA Birla Group, Birlasoft, with its 12,000+ professionals, is committed to continuing the Group’s 170-year heritage of building sustainable communities.

Job Summary: We are seeking a skilled Snowflake Developer with 8+ years of experience in designing, developing, and optimizing Snowflake data solutions. The ideal candidate will have strong expertise in Snowflake SQL, ETL/ELT pipelines, and cloud data integration. This role involves building scalable data warehouses, implementing efficient data models, and ensuring high-performance data processing in Snowflake.

Key Responsibilities

Snowflake Development & Optimization:
- Design and develop Snowflake databases, schemas, tables, and views following best practices.
- Write complex SQL queries, stored procedures, and UDFs for data transformation.
- Optimize query performance using clustering, partitioning, and materialized views.
- Implement Snowflake features (Time Travel, Zero-Copy Cloning, Streams & Tasks).

Data Pipeline Development:
- Build and maintain ETL/ELT pipelines using Snowflake, Snowpark, Python, or Spark.
- Integrate Snowflake with cloud storage (S3, Blob) and data ingestion tools (Snowpipe).
- Develop CDC (Change Data Capture) and real-time data processing solutions.

Data Modeling & Warehousing:
- Design star schema, snowflake schema, and data vault models in Snowflake.
- Implement data sharing, secure views, and dynamic data masking.
- Ensure data quality, consistency, and governance across Snowflake environments.

Performance Tuning & Troubleshooting:
- Monitor and optimize Snowflake warehouse performance (scaling, caching, resource usage).
- Troubleshoot data pipeline failures, latency issues, and query bottlenecks.
- Work with DevOps teams to automate deployments and CI/CD pipelines.

Collaboration & Documentation:
- Work closely with data analysts, BI teams, and business stakeholders to deliver data solutions.
- Document data flows, architecture, and technical specifications.
- Mentor junior developers on Snowflake best practices.

Required Skills & Qualifications:
- 8+ years in database development, data warehousing, or ETL.
- 4+ years of hands-on Snowflake development experience.
- Strong SQL or Python skills for data processing.
- Experience with Snowflake utilities (SnowSQL, Snowsight, Snowpark).
- Knowledge of cloud platforms (AWS/Azure) and data integration tools (Coalesce, Airflow, dbt).
- Certifications: SnowPro Core Certification (preferred).

Preferred Skills:
- Familiarity with data governance and metadata management.
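The CDC (Change Data Capture) responsibility above can be illustrated with a minimal, database-free Python sketch (all names are hypothetical, not any Snowflake API): given two snapshots of a table keyed by primary key, emit the insert/update/delete events a downstream pipeline would apply.

```python
def diff_snapshots(old_rows, new_rows, key="id"):
    """Compare two table snapshots and emit CDC-style change events.

    old_rows / new_rows are lists of dicts representing rows; `key`
    names the primary-key column. Returns (action, row) tuples.
    """
    old = {r[key]: r for r in old_rows}
    new = {r[key]: r for r in new_rows}
    events = []
    for k, row in new.items():
        if k not in old:
            events.append(("INSERT", row))
        elif row != old[k]:
            events.append(("UPDATE", row))
    for k, row in old.items():
        if k not in new:
            events.append(("DELETE", row))
    return events

# Example: one update, one insert, one delete between snapshots.
before = [{"id": 1, "price": 10}, {"id": 2, "price": 20}]
after = [{"id": 1, "price": 12}, {"id": 3, "price": 30}]
print(diff_snapshots(before, after))
```

In a real Snowflake pipeline this diffing is what Streams provide natively; the sketch only shows the event model such a pipeline consumes.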

Posted 6 hours ago

Apply

3.0 - 6.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Summary

Position Summary: AI & Data

In this age of disruption, organizations need to navigate the future with confidence, embracing decision-making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. The offering portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:
- Implement large-scale data ecosystems, including data management, governance and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms
- Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions
- Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements

AWS Consultant

The position is suited for individuals who can work in a constantly challenging environment and deliver effectively and efficiently. The individual will need to be adaptive and able to react quickly to changing business needs.

Work you’ll do:
- Plan, design and develop cloud-based applications
- Work in tandem with the engineering team to identify and implement the most optimal cloud-based solutions
- Design and deploy enterprise-wide scalable operations on cloud platforms
- Deploy and debug cloud applications in accordance with best practices throughout the development lifecycle
- Provide administration for cloud deployments and ensure the environments are appropriately configured and maintained
- Monitor environment stability and respond to any issues or service requests for the environment
- Educate teams on the implementation of new cloud-based initiatives, providing associated training as required
- Build and design web services in the cloud, and implement the set-up of geographically redundant services
- Orchestrate and automate cloud-based platforms
- Continuously monitor system effectiveness and performance and identify areas for improvement, collaborating with key stakeholders
- Provide guidance and coaching to team members as required; contribute to documenting the cloud operations playbook and provide thought leadership in development automation and CI/CD
- Provide insights for optimization of cloud computing costs

Required:
- 3-6 years of technology consulting experience
- A minimum of 2 years of experience in cloud operations
- High degree of knowledge of AWS services such as Lambda, Glue, S3, Redshift, SNS and SQS
- Strong scripting experience with Python, the ability to write SQL queries, and strong analytical skills
- Exceptional problem-solving skills, with the ability to see and solve issues
- Experience working on CI/CD/DevOps is nice to have
- Proven experience with agile/iterative methodologies implementing cloud projects
- Ability to translate business and technical requirements into technical design
- Good knowledge of end-to-end project delivery methodology for cloud projects
- Strong UNIX operating system concepts and shell scripting knowledge
- Good knowledge of cloud computing technologies and current computing trends
- Effective communication skills (written and verbal) to properly articulate complicated cloud reports to management and other IT development partners
- Ability to operate independently with a clear focus on schedule and outcomes
- Experience with algorithm development, including statistical and probabilistic analysis, clustering, recommendation systems, natural language processing, and performance analysis
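Since clustering is called out among the algorithm-development skills above, here is a minimal pure-Python sketch of 1-D k-means (toy data and a fixed iteration count; real work would use a library such as scikit-learn):

```python
import random

def kmeans_1d(points, k, iters=20, seed=42):
    """Naive 1-D k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: abs(p - centroids[j]))
            clusters[nearest].append(p)
        centroids = [sum(ps) / len(ps) if ps else centroids[i]
                     for i, ps in enumerate(clusters)]
    return sorted(centroids)

# Two well-separated groups; centroids converge near their means.
data = [1.0, 1.2, 0.8, 9.8, 10.1, 10.4]
print(kmeans_1d(data, k=2))
```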
Our purpose

Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture

Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development

At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits To Help You Thrive

At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.
Recruiting tips

From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 301813

Posted 6 hours ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Introduction

The IBM Infrastructure division builds Servers, Storage, Systems and Cloud Software, which are the building blocks for the next-generation IT infrastructure of enterprise customers and data centers. IBM Servers provide best-in-class reliability, scalability, performance, and end-to-end security to handle mission-critical workloads, and provide a seamless extension to hybrid multicloud environments.

India Systems Development Lab (ISDL) is part of the worldwide IBM Infrastructure division. Established in 1996, the ISDL Lab is headquartered in Bengaluru, with a presence in Pune and Hyderabad as well. ISDL teams work across the IBM Systems stack, including processor development (Power and IBM Z), ASICs, firmware, operating systems, systems software, storage software, cloud software, performance & security engineering, system test, etc. The lab also focuses on innovation, thanks to the creative energies of its teams, and has contributed 400+ patents in cutting-edge technologies and inventions so far. ISDL teams also ushered in new development models such as Agile, Design Thinking and DevOps.

Your Role And Responsibilities

As a Software Engineer at IBM India Systems Development Lab (IBM ISDL), you will get an opportunity to work on all phases of product development (design/development, test and support) across core Systems technologies, including operating systems, firmware, systems software, storage software & cloud software.

As a Software Developer at ISDL: You will focus on development of IBM Systems products, interfacing with development & product management teams and end users across geographies. You will analyze product requirements, determine the best course of design, implement/code the solution and test across the entire product development life cycle. You could also work on validation and support of IBM Systems products.
You get to work with vibrant, culture-driven and technically accomplished teams working to create world-class products and deployment environments, delivering an industry-leading user experience for our customers. You will be valued for your contributions in a growing organization with broader opportunities.

At ISDL, work is more than a job - it's a calling: To build. To design. To code. To invent. To collaborate. To think along with clients. To make new products and markets. Not just to do something better, but to attempt things you never thought possible. Are you ready to lead in this new era of technology and solve some of the most challenging problems in Systems Software technologies? If so, let’s talk.

Required Technical And Professional Expertise

Required Technical Expertise: Knowledge of Operating Systems, OpenStack, Kubernetes, container technologies, cloud concepts, security, virtualization management, REST APIs, DevOps (Continuous Integration) and microservice architecture. Strong programming skills in C, C++, Go, Python, Ansible and shell scripting. Comfortable working with GitHub and leveraging open-source tools.

AI Software Engineer: As a Software Engineer with the IBM AI on Z Solutions teams, you will get the opportunity to be involved in delivering best-in-class enterprise AI solutions on IBM Z, and to support IBM customers in adopting AI technologies and solutions into their businesses by building ethical, secure, trustworthy and sustainable AI solutions on IBM Z. You will be part of end-to-end solutions, working along with technically accomplished teams, as a full-stack developer: from understanding client challenges to providing solutions using AI.

Required Technical Expertise: Knowledge of AI/ML/DL, Jupyter Notebooks, Linux systems, Kubernetes, container technologies, REST APIs and UI skills. Strong programming skills in C, C++, R, Python and Go, and well versed with the Linux platform.
- Strong understanding of data science and of modern tools and techniques to derive meaningful insights
- Understanding of machine learning (ML) frameworks like scikit-learn, XGBoost etc.
- Understanding of deep learning (DL) frameworks like TensorFlow, PyTorch
- Understanding of deep learning compilers (DLC)
- Natural language processing (NLP) skills
- Understanding of different CPU architectures (little endian, big endian)
- Familiarity with open-source databases (PostgreSQL, MongoDB, CouchDB, CockroachDB, Redis) and with data sources, connectors and data preparation flows: integrating, cleansing and shaping data

IBM Storage Engineer: As a Storage Engineer Intern in a Storage Development Lab, you would support the design, testing, and validation of storage solutions used in enterprise or consumer products. This role involves working closely with hardware and software development teams to evaluate storage performance, ensure data integrity, and assist in building prototypes and test environments. The engineer contributes to the development lifecycle by configuring storage systems, automating test setups, and analyzing system behavior under various workloads. This position is ideal for individuals with a foundational understanding of storage technologies and a passion for hands-on experimentation and product innovation.

Preferred Technical Expertise: Practical working experience with Java, Python, Go or ReactJS; knowledge of AI/ML/DL, Jupyter Notebooks, storage systems, Kubernetes, container technologies, REST APIs and UI skills; exposure to cloud computing technologies such as Red Hat OpenShift, microservices architecture and Kubernetes/Docker deployment.
- Basic understanding of storage technologies: SAN, NAS, DAS
- Familiarity with RAID levels and disk configurations
- Knowledge of file systems (e.g., NTFS, ext4, ZFS)
- Experience with operating systems: Windows Server, Linux/Unix
- Basic networking concepts: TCP/IP, DNS, DHCP
- Scripting skills: Bash, PowerShell, or Python (for automation)
- Understanding of backup and recovery tools (e.g., Veeam, Commvault)
- Exposure to cloud storage: AWS S3, Azure Blob, or Google Cloud Storage

Linux Developer: As a Linux developer, you would be involved in the design and development of advanced features in the Linux OS for next-generation server platforms from IBM, in collaboration with the Linux community. You will collaborate with teams across hardware, firmware, and the upstream Linux kernel community to deliver these capabilities.

Preferred Technical Expertise:
- Excellent knowledge of the C programming language
- Knowledge of Linux kernel internals and implementation principles
- In-depth understanding of operating systems concepts, data structures, processor architecture, and virtualization
- Experience working on open-source software using tools such as Git and the associated community participation processes

Hardware Management Console (HMC) / NovaLink Software Developer: As a Software Developer in the HMC / NovaLink team, you will work on the design, development, and test of the Management Console for IBM Power Servers. You will be involved in user-centric graphical user interface development, and in backend development for server and virtualization management solutions, in an Agile environment.
Preferred Technical Expertise:
- Strong programming skills in Core Java 8 and C/C++
- Web development skills in JavaScript (frameworks such as Angular.js, React.js), HTML, CSS and related technologies
- Experience in developing rich HTML applications
- Web UI frameworks: Vaadin, React JS; UI styling libraries like Bootstrap/Material
- Knowledge of J2EE, JSP, RESTful web services and GraphQL APIs

AIX Developer: AIX is a proprietary Unix operating system which runs on IBM Power Servers. It’s a secure, scalable, and robust open-standards-based UNIX operating system designed to meet the needs of enterprise-class infrastructure. As an AIX developer, you would be involved in the development, test or support of AIX OS features, or in open-source software porting/development for the AIX OS.

Preferred Technical Expertise:
- Strong expertise in systems programming (C, C++)
- Strong knowledge of operating systems concepts, data structures and algorithms
- Strong knowledge of Unix/Linux internals (signals, IPC, shared memory, etc.)
- Expertise in developing/handling multi-threaded applications
- Good knowledge in any of the following areas: user-space applications; file systems and volume management; device drivers; Unix networking and security; container technologies; linkers/loaders; virtualization; high availability & clustering products
- Strong debugging and problem-solving skills

Performance Engineer: As a Performance Engineer, you will get an opportunity to conduct experiments and analysis to identify performance aspects of operating systems and enterprise servers. You will be responsible for advancing the product roadmap by using your expertise in the Linux operating system, building kernels, applying patches, performance characterization, optimization and hardware architecture to analyze the performance of software/hardware combinations.
You will be involved in conducting experiments and analysis to identify performance challenges and uncover optimization opportunities for IBM Power virtualization and cloud management software built on OpenStack. The areas of work will be the characterization, analysis and fine-tuning of application software to help deliver optimal performance on IBM Power.

Preferred Technical Expertise:
- Experience in C/C++ programming
- Knowledge of hypervisors and virtualization concepts
- Good understanding of system hardware, operating systems and systems architecture
- Strong scripting skills
- Good problem-solving, strong analytical and logical reasoning skills
- Familiarity with server performance management and capacity planning
- Familiarity with performance diagnostic methods and techniques

Firmware Engineer: As a firmware developer, you will be responsible for designing and developing components and features independently in IBM India Systems Development Lab. ISDL works on end-to-end design and development across the Power, Z and Storage portfolio. You would be part of the worldwide firmware development organization, involved in designing & developing cutting-edge features on the open-source OpenBMC stack (https://github.com/openbmc/) and developing the open-source embedded firmware code for bringing up the next-generation enterprise Power, Z and LinuxONE servers. You will get an opportunity to work alongside some of the best minds in the industry, forums and communities while contributing to the portfolio.

Preferred Technical Expertise:
- Strong system architecture knowledge
- Hands-on programming skills in C and C++ on Linux distros
- Experience/exposure in firmware/embedded software design & development
- Strong knowledge of the Linux OS and open-source development
- Experience with open-source tools & scripting languages: Git, Gerrit, Jenkins, Perl/Python

Other Skills (Common For All The Positions):
- Strong communication, analytical, interpersonal & problem-solving skills
- Ability to deliver on agreed goals and to coordinate activities in the team/collaborate with others to deliver on the team vision
- Ability to work effectively in a global team environment

Enterprise System Design Software Engineer: The Enterprise Systems Design team is keen on hiring passionate computer science and engineering graduates / Masters students who can blend their architectural knowledge and programming skills to build the complex infrastructure geared to Hybrid Cloud and AI workloads. We have several opportunities in the following areas of the system & chip development team:

Processor Verification Engineer: Develops the test infrastructure to verify the architecture and functionality of IBM server processors/SoCs or ASICs. Will be responsible for creatively thinking of all the scenarios to test and for reporting the coverage. Works with design as well as other key stakeholders in identifying, debugging & resolving logic design issues to deliver a quality design.

Processor Pre-/Post-Silicon Validation Engineer: As a validation engineer, you would design and develop algorithms for post-silicon validation of next-generation IBM server processors, SoCs and ASICs.

Electronic Design Automation (EDA) – Front-End & Back-End Tool Development: The EDA tools development team is responsible for developing state-of-the-art front-end verification, simulation and formal verification tools, as well as place & route and synthesis tools and flows critical for designing & verifying the high-performance hardware designs of IBM's next-generation Systems (IBM P and Z Systems), used in cognitive, ML, DL, and data center applications.
Required Professional And Technical Skills:
- Functional verification/validation of processors or ASICs
- Computer architecture knowledge: processor core design specifications, instruction set architecture and logic verification
- Multi-processor cache coherency, memory subsystem and IO subsystem knowledge; any of the protocols like PCIe/CXL, DDR, Flash, Ethernet etc.
- Strong C/C++ programming skills in a Unix/Linux environment required
- Great scripting skills - Perl/Python/Shell
- Development experience on Linux/Unix environments and in Git repositories, and a basic understanding of Continuous Integration and DevOps workflows
- Understanding of Verilog/VHDL and verification coverage closure
- Proven problem-solving skills and the ability to work in a team environment are a must
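As a small illustration of the RAID-level knowledge listed for the Storage Engineer role above, the following sketch computes approximate usable capacity per RAID level (a simplified model; real arrays also reserve space for hot spares and metadata):

```python
def raid_usable_tb(level, disks, disk_tb):
    """Approximate usable capacity for common RAID levels.

    Simplified: RAID 0 stripes all disks, RAID 1 mirrors a pair,
    RAID 5 loses one disk to parity, RAID 6 loses two, and RAID 10
    (striped mirrors) halves raw capacity.
    """
    if level == 0:
        return disks * disk_tb
    if level == 1:
        return disk_tb                   # mirrored pair
    if level == 5:
        return (disks - 1) * disk_tb     # needs >= 3 disks
    if level == 6:
        return (disks - 2) * disk_tb     # needs >= 4 disks
    if level == 10:
        return disks // 2 * disk_tb
    raise ValueError(f"unsupported RAID level: {level}")

# Eight 4 TB disks (32 TB raw) under each level.
for lvl in (0, 5, 6, 10):
    print(f"RAID {lvl}: {raid_usable_tb(lvl, 8, 4)} TB usable of 32 TB raw")
```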

Posted 6 hours ago

Apply

3.0 - 6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Summary

Position Summary: AI & Data

In this age of disruption, organizations need to navigate the future with confidence, embracing decision-making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. The offering portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:
- Implement large-scale data ecosystems, including data management, governance and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms
- Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions
- Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements

AWS Consultant

The position is suited for individuals who can work in a constantly challenging environment and deliver effectively and efficiently. The individual will need to be adaptive and able to react quickly to changing business needs.

Work you’ll do:
- Plan, design and develop cloud-based applications
- Work in tandem with the engineering team to identify and implement the most optimal cloud-based solutions
- Design and deploy enterprise-wide scalable operations on cloud platforms
- Deploy and debug cloud applications in accordance with best practices throughout the development lifecycle
- Provide administration for cloud deployments and ensure the environments are appropriately configured and maintained
- Monitor environment stability and respond to any issues or service requests for the environment
- Educate teams on the implementation of new cloud-based initiatives, providing associated training as required
- Build and design web services in the cloud, and implement the set-up of geographically redundant services
- Orchestrate and automate cloud-based platforms
- Continuously monitor system effectiveness and performance and identify areas for improvement, collaborating with key stakeholders
- Provide guidance and coaching to team members as required; contribute to documenting the cloud operations playbook and provide thought leadership in development automation and CI/CD
- Provide insights for optimization of cloud computing costs

Required:
- 3-6 years of technology consulting experience
- A minimum of 2 years of experience in cloud operations
- High degree of knowledge of AWS services such as Lambda, Glue, S3, Redshift, SNS and SQS
- Strong scripting experience with Python, the ability to write SQL queries, and strong analytical skills
- Exceptional problem-solving skills, with the ability to see and solve issues
- Experience working on CI/CD/DevOps is nice to have
- Proven experience with agile/iterative methodologies implementing cloud projects
- Ability to translate business and technical requirements into technical design
- Good knowledge of end-to-end project delivery methodology for cloud projects
- Strong UNIX operating system concepts and shell scripting knowledge
- Good knowledge of cloud computing technologies and current computing trends
- Effective communication skills (written and verbal) to properly articulate complicated cloud reports to management and other IT development partners
- Ability to operate independently with a clear focus on schedule and outcomes
- Experience with algorithm development, including statistical and probabilistic analysis, clustering, recommendation systems, natural language processing, and performance analysis
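The Lambda and S3 skills listed above can be illustrated with a minimal handler sketch. The event shape follows the standard S3 notification format, but the bucket/key values and the processing itself are placeholders; a real handler would fetch the objects (e.g. via boto3) and load them onward.

```python
def handler(event, context=None):
    """AWS Lambda entry point for S3 ObjectCreated notifications.

    Extracts (bucket, key) from each record and returns the list of
    object URIs it would process.
    """
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        processed.append(f"s3://{bucket}/{key}")
    return {"status": "ok", "objects": processed}

# Simulated S3 event, in the shape Lambda delivers it.
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "raw-zone"},
                "object": {"key": "orders/2024/01/part-0.json"}}}
    ]
}
print(handler(sample_event))
```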
Our purpose

Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture

Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development

At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits To Help You Thrive

At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.
Recruiting tips

From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 301813

Posted 6 hours ago

Apply

3.0 - 6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description: The Microsoft SQL Server Database Administrator (DBA) is responsible for managing and maintaining SQL Server databases, ensuring their availability, security, and performance. This role involves database design, installation, configuration, monitoring, troubleshooting, and optimization. Must have exposure to working with SQL Server 2017/2019/2022. The candidate must possess 3-6 years of experience in SQL Server administration.

Key Responsibilities:

1. Database Administration: Install, configure, and upgrade SQL Server instances. Create, modify, and manage databases, tables, statistics & indexes. Troubleshoot jobs. Understand and develop SQL queries & stored procedures. Knowledge of PowerShell commands and scripting.

2. Backup and Recovery: Develop and maintain backup and recovery strategies. Perform regular backups and test restoration procedures. Ensure data consistency and minimize downtime during recovery.

3. Performance Tuning: Optimize database performance by analyzing query execution plans. Identify and resolve performance issues related to indexing, query design, and resource utilization. Perform index maintenance and update statistics. Monitor server resources (CPU, memory, disk space) and adjust configurations as needed.

4. High Availability and Disaster Recovery: Configure and manage database clustering and Always On Availability Groups. Always On: plan and execute failover and failback procedures, and troubleshoot latency and data consistency across replicas. SQL Cluster: troubleshoot cluster issues, flip cluster roles, etc. Thorough knowledge of transactional replication: setup, failures, latency, and troubleshooting.

5. Good to have: Developing SQL queries & stored procedures. Linux basics. PowerShell scripting skills. PostgreSQL database administration.
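The backup-strategy responsibility above can be sketched as a small retention-policy check in Python (the policy values are hypothetical): keep all backups for 7 days, and Sunday backups for 28 days.

```python
from datetime import date, timedelta

def keep_backup(backup_date, today, daily_days=7, weekly_days=28):
    """Return True if a backup should be retained under a simple
    grandfather-father-son style policy: every backup for
    `daily_days`, Sunday backups for `weekly_days`."""
    age = (today - backup_date).days
    if age <= daily_days:
        return True
    # date.weekday(): Monday == 0 ... Sunday == 6
    return backup_date.weekday() == 6 and age <= weekly_days

today = date(2024, 3, 31)  # a Sunday
kept = [d for d in (today - timedelta(days=n) for n in range(40))
        if keep_backup(d, today)]
print(len(kept), "of the last 40 daily backups retained")
```

In practice the same rule would drive which full/differential backup files a cleanup job deletes from the backup share.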

Posted 6 hours ago

Apply

3.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

About Klook

We are Asia’s leading platform for experiences and travel services, and we believe that we can help bring the world closer together through experiences. Founded in 2014 by 3 avid travelers, Ethan Lin, Eric Gnock Fah and Bernie Xiong, Klook inspires and enables more moments of joy for travelers with over half a million curated quality experiences ranging from the biggest attractions to paragliding adventures, iconic museums to rich cultural tours, and other convenient local travel services across 2,700 destinations around the world.

Do you share our belief in the wonders of travel? Our international community of over 1,800 employees, based in 30+ locations, certainly does! Global citizens ourselves, Klookers are not only curating memorable experiences for others but also co-creating our world of joy within Klook. We work hard and play hard, upkeeping our high-performing culture as we are guided daily by our 6 core values:
- Customer First
- Push Boundaries
- Critical Thinking
- Build for Scale
- Less is More
- Win as One

We never settle, and together, we believe in achieving greater heights and realizing endless possibilities ahead of us in the dynamic new era of travel. Care to be a part of this revolution? Join us!

As a Data Scientist within the Pricing Strategy team, you will play a pivotal role in driving data-driven decision-making and optimizing pricing strategies. You will leverage your expertise in data science and analytics to develop and implement dynamic pricing models and predictive and prescriptive analysis, ultimately contributing to revenue growth and market competitiveness.

What you'll do:
- Dynamic Pricing Model Development: Develop and implement advanced dynamic pricing models to optimize product pricing across various channels and markets.
- Predictive Analytics: Utilize predictive modeling techniques to forecast demand, market trends, and customer behavior, enabling proactive pricing adjustments.
- Prescriptive Analysis: Employ prescriptive analytics to identify optimal pricing strategies based on specific business objectives and constraints.
- Data Exploration and Analysis: Conduct in-depth data exploration and analysis to uncover valuable insights and inform pricing decisions.
- Model Evaluation and Refinement: Continuously evaluate and refine pricing models to ensure their accuracy and effectiveness.
- Collaboration: Collaborate with cross-functional teams (e.g., marketing, sales, finance) to align pricing strategies with overall business goals.
- Stay Updated: Stay abreast of the latest advancements in data science and pricing optimization techniques.

What you'll need:
- Master's degree or PhD in Data Science, Statistics, Computer Science, Economics, or a related field.
- A minimum of 3-4 years of relevant experience in the field of data science.
- Strong programming skills in Python or R, including proficiency in data manipulation, analysis, and visualization libraries (e.g., Pandas, NumPy, Matplotlib, Seaborn).
- Experience with machine learning algorithms and techniques (e.g., regression, classification, clustering, time series analysis).
- Knowledge of statistical modeling and hypothesis testing.
- Experience with data warehousing and cloud computing platforms (e.g., AWS, GCP, Azure) is a plus.
- Excellent problem-solving and analytical skills.
- Ability to communicate complex technical concepts to both technical and non-technical audiences.
- Passion for data-driven decision-making and a continuous-learner mindset.

Klook is proud to be an equal opportunity employer. We hire talented and passionate people of all backgrounds. We believe that a joyful workplace is an inclusive workplace, one where employees from all walks of life have an equal opportunity to thrive. We’re dedicated to creating a welcoming and supportive culture where everyone belongs.
Klook does not accept unsolicited resumes from any temporary staffing agency, placement service or professional recruiter (“Agency”). Klook will not be responsible for, and will not pay, any fees, commissions or other payments related to such unsolicited resumes. An Agency must obtain advance written approval from Klook’s Talent Acquisition Team to submit resumes, and then only in conjunction with a valid fully-executed agreement for service and in response to a specific job opening for which the Agency has been requested to submit resumes. Klook will not be responsible for, and will not pay, any fees, commissions or other payments to any Agency that does not have such agreement in place or does not comply with the foregoing.

Posted 6 hours ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

We are seeking an experienced Advanced Analytics Specialist to join our dynamic team. This role focuses on leveraging advanced analytics techniques, including machine learning algorithms, Generative AI (GenAI), and large language models (LLMs), to drive data-driven decision-making within the retail/CPG domain. The ideal candidate will possess a strong quantitative background and a passion for transforming complex data into actionable insights. Job Description: Key Responsibilities: Develop, implement, and maintain advanced analytical models using machine learning algorithms and GenAI applications Utilize various advanced analytics techniques to uncover trends, patterns, and insights from large and complex datasets. Collaborate with cross-functional teams to identify business needs and deliver data-driven solutions. Create visually compelling dashboards and reports to present findings to stakeholders. Continuously evaluate and improve existing analytics methodologies and models to enhance accuracy and performance. Stay abreast of industry trends and advancements in analytics and machine learning to drive innovation within the team. Mentor junior team members and contribute to knowledge sharing within the organization. Basic Qualifications: Bachelor’s or Master’s degree in Data Science, Business Analytics, Mathematics, Statistics, or a related field. 3+ years of experience in advanced analytics, data science, machine learning, Generative AI or a related field. Strong experience with quantitative modeling, predictive analytics, text analytics, and forecasting methodologies Proficiency in SQL (or Google BigQuery), Python, visualization tools like Tableau/PowerBI Familiarity with the Retail/CPG/Tech industry and experience with product, transaction, and customer-level data. Excellent communication skills, both verbal and written, with the ability to convey complex concepts to non-technical stakeholders. 
Strong analytical and problem-solving skills, with an inquisitive mindset. Desired Skills: Proficient in the following advanced analytics techniques: Descriptive Analytics: Statistical analysis, data visualization. Predictive Analytics: Regression analysis, time series forecasting, classification techniques, market mix modeling. Prescriptive Analytics: Optimization, simulation modeling. Text Analytics: Natural Language Processing (NLP), sentiment analysis. Extensive knowledge of machine learning techniques, including: Supervised Learning: Linear regression, logistic regression, decision trees, support vector machines, random forests, gradient boosting machines, among others. Unsupervised Learning: K-means clustering, hierarchical clustering, principal component analysis (PCA), anomaly detection, among others. Reinforcement Learning: Q-learning, deep Q-networks, etc. Experience with Generative AI and large language models (LLMs) for text generation, summarization, and conversational agents: researching, loading, and applying the best-suited LLMs (GPT, Gemini, LLaMA, etc.) for various objectives; hyperparameter tuning; prompt engineering; embedding and vectorization; fine-tuning. Proficiency in data visualization tools such as Tableau or Power BI. Strong skills in data management, structuring, and harmonization to support analytical needs. What We Offer: Opportunities for professional development and career growth. A collaborative and innovative work environment. If you are passionate about data analytics and want to make a significant impact in the retail/CPG industry, we encourage you to apply! Location: Bangalore Brand: Merkle Time Type: Full time Contract Type: Permanent

Posted 8 hours ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At eBay, we're more than a global ecommerce leader — we’re changing the way the world shops and sells. Our platform empowers millions of buyers and sellers in more than 190 markets around the world. We’re committed to pushing boundaries and leaving our mark as we reinvent the future of ecommerce for enthusiasts. Our customers are our compass, authenticity thrives, bold ideas are welcome, and everyone can bring their unique selves to work — every day. We're in this together, sustaining the future of our customers, our company, and our planet. Join a team of passionate thinkers, innovators, and dreamers — and help us connect people and build communities to create economic opportunity for all. About The Team And The Role Join the Payments Technology team at eBay, which is at the forefront of transforming e-commerce payments. Our mission is to innovate and optimize payment experiences, providing diverse and efficient consumer choices for both buyers and sellers. As part of this rapidly evolving team, you will tackle new challenges and contribute to impactful projects that reinvigorate eBay's payment landscape. We foster a collaborative culture that encourages work-life balance while pushing the boundaries of technology. You will be integral to building the next generation of payments at eBay, excited by the scale and complexity of e-commerce payments systems. This role reports to engineering management, focusing on designing and implementing advanced payment solutions. What You Will Accomplish Drive the design and implementation of systems aligned with eBay's existing payments infrastructure to enhance functionality and scalability. Collaborate across multiple teams including engineering, product management, and QA, to develop robust solutions that meet market needs. Lead significant feature developments independently and collaborate on delivering more complex changes with fellow team members. 
Innovate and propose technical improvements, presenting detailed business cases for enhancements or new opportunities. Rapidly deliver iterative value as part of a cross-functional Agile team, staying responsive to customer needs. Engage in continuous learning and growth through tackling complex technical challenges in payment technology. What You Will Bring Advanced degree (MS/PhD) in Computer Science or related fields, with a strong emphasis on machine learning, data mining, and information retrieval. Proficiency in Java and other software development languages (J2EE, Scala, R, Perl) with a focus on creating scalable solutions. Solid understanding of database schema design, performance tuning, and both SQL and NoSQL databases like MongoDB. Experience with large-scale data-driven systems and web data analysis technologies. Familiarity with Hadoop development and data-mining technologies such as classification and clustering algorithms. Commitment to a collaborative work environment and willingness to engage with distributed teams across varying locations. Please see the Talent Privacy Notice for information regarding how eBay handles your personal data collected when you use the eBay Careers website or apply for a job with eBay. eBay is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, sexual orientation, gender identity, veteran status, and disability, or other legally protected status. If you have a need that requires accommodation, please contact us at talent@ebay.com. We will make every effort to respond to your request for accommodation as soon as possible. View our accessibility statement to learn more about eBay's commitment to ensuring digital accessibility for people with disabilities. The eBay Jobs website uses cookies to enhance your experience. By continuing to browse the site, you agree to our use of cookies. 
Visit our Privacy Center for more information.

Posted 8 hours ago

Apply


4.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge technology to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential to driving our success. Why Join Us? To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated, and we believe that when one of us wins, we all win. We offer a full benefits package, including exciting travel perks, generous time off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all designed to fuel our employees' passion for travel and ensure a rewarding career journey. We’re building a more open world. Join us. About The Role Are you passionate about data, analytics, and collaborating with top talent to tackle complex problems? If you're motivated to make an immediate impact on reimagining travel through insightful recommendations that drive improvements, we want to speak with you!
We are seeking an experienced analytics professional to join our AI Agent Product Analytics Team, supporting a portfolio of some of the world’s leading online travel brands. This is a mid-level individual contributor role, partnering with product managers and focusing on user experience optimization via GenAI-driven personalization. As a Data Scientist, you’ll collaborate with a multidisciplinary team to solve a wide range of problems. You will bring scientific rigor and statistical methods to the challenges of business growth and product development. This role supports key business decisions throughout the product development lifecycle to improve the user experience for millions of Expedia customers. You’ll be responsible for uncovering customer friction points, hypothesis generation and prioritization, experiment design, unbiased test analysis, actionable recommendations, and product optimization. If you are a self-starter who has demonstrated success using analytics to drive innovation and user engagement, this role is for you. 
In this role you will Apply expertise in quantitative analysis, machine learning, data mining, and data visualization to improve customer experience and business outcomes, with a focus on impactful analytics Guide product teams to actionable insights Own end-to-end testing excellence, including evaluating A/B and multivariate (MVT) tests, identifying opportunities, and recommending new features for test-and-learn initiatives Evaluate and apply alternative methods to supplement randomized testing Develop metrics frameworks for your product; build accurate, user-friendly dashboards/reports; and collaborate on goal setting, performance reviews, and strategic planning Take initiative to identify current and potential problems and recommend optimal solutions based on trade-offs Experience And Qualifications 4+ years of experience in quantitative analysis to solve business problems Bachelor’s degree in a science, technology, engineering, or mathematics (STEM) field Strong analytical skills, with the ability to break down complex business scenarios Experience with large datasets and proficiency in SQL or programming languages such as R or Python Excellent communicator with the ability to tell compelling stories using data Solid understanding of statistics and hypothesis testing (e.g., confidence intervals, regression, time series, clustering, factor analysis) Experience with visualization tools (e.g., Tableau) Familiarity with Adobe Analytics or Google Analytics is a plus Detail-oriented with a strong curiosity and passion for making an impact Accommodation requests If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request. We are proud to be named as a Best Place to Work on Glassdoor in 2024 and be recognized for award-winning culture by organizations like Forbes, TIME, Disability:IN, and others. 
Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50 Employment opportunities and job offers at Expedia Group will always come from Expedia Group’s Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you’re confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs. Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.

Posted 8 hours ago

Apply

3.0 - 7.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Summary: We are seeking Senior AI/ML Engineers with 3 to 7 years of experience in implementing, deploying, and scaling AI/ML solutions. This role involves working with generative AI, machine learning, deep learning, and data science to solve business challenges by designing, building, and maintaining scalable and efficient AI/ML applications. Key Responsibilities: AI: Architect scalable Generative AI and Machine Learning applications using AWS Cloud and other cutting-edge technologies. Extensive experience with LLMs and various prompt engineering techniques. Fine-tune and build custom LLMs. Deep understanding of LLM architecture and internal mechanisms. Experience with Langchain, Langgraph, Langfuse, Crew AI, LLM output evaluations, and agentic workflows. Build RAG (Retrieval-Augmented Generation) pipelines and integrate them with traditional applications. Data Science & Machine Learning: Solve complex data science problems and uncover insights using advanced EDA techniques. Implement automated pipelines for data cleaning, preprocessing, and model re-training. Hands-on experience with model experiment tracking and validation techniques. Deploy, track, and monitor models using AWS SageMaker. Strong knowledge of fundamental machine learning concepts, including supervised and unsupervised learning, deep learning, CNNs, and RNNs. Proficiency in working with databases for efficient data storage and retrieval. Experience with data warehouses and data lakes.  Computer Vision: Work on complex computer vision problems, including image classification, object detection, segmentation, and image captioning. Skills & Qualifications: 2-3 years of experience in implementing, deploying, and scaling Generative AI solutions. 3-7 years of experience in NLP, Data Science, Machine Learning, and Computer Vision. Proficiency in Python and ML frameworks such as Langchain, Langfuse, LLAMA Index, Langgraph, Crew AI, and LLM output evaluations. 
Experience with AWS Bedrock, OpenAI GPT models (GPT-4, GPT-4o, GPT-4o-mini), and LLMs such as Claude, LLaMa, Gemini, and DeepSeek. Experience with vector databases like Pinecone, OpenSearch, FAISS, and Chroma, with a strong understanding of indexing mechanisms. Expertise in forecasting, time series analysis, and predictive analytics. Experience with classification, regression, clustering, and other ML models. Proficiency in SageMaker for model training, evaluation, and deployment. Hands-on experience with ML libraries such as Scikit-learn, XGBoost, LightGBM, and CatBoost. Experience with deep learning frameworks such as PyTorch and TensorFlow. Familiarity with Docker, Uvicorn, FastAPI, and Flask for REST APIs. Proficiency in SQL and NoSQL databases, including PostgreSQL and AWS DynamoDB. Experience with caching technologies such as Redis and Memcached.

Posted 9 hours ago

Apply

7.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Minimum 7-10 years of experience with Python development. Excellent experience in Python 3.7 or higher, FastAPI, the Flask framework, Django, REST APIs, Pandas, HTML5, CSS3, JavaScript. Experience in consuming client models. Excellent knowledge of SQL and NoSQL databases, along with Docker and Kubernetes. Should have good knowledge of web applications, infrastructure, and web servers like Apache. Should have good experience with Bitbucket for source code management. Should have experience with Agile methodology and knowledge of the SDLC. Knowledge of Azure/AWS services and DevOps. Knowledge of networking and other components of web-application development is beneficial. Knowledge of Angular is nice to have. Should have good communication skills and a team-player attitude. Understanding of how to integrate multiple data sources and databases into one system. Understanding of the threading limitations of Python and multi-process architecture. Familiarity with event-driven programming in Python. Writing reusable, testable, and efficient code. Excellent pattern recognition and predictive modeling skills. Knowledge of a variety of machine learning techniques (clustering, decision tree learning, time series, etc.) and their real-world advantages/drawbacks.

Posted 9 hours ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Company Description Metro Brands Limited (MBL) is one of India's largest footwear specialty retailers, known for its aspirational brands in the footwear category. Having opened its first store in Mumbai in 1955 and established as a company in 1977, MBL operates 598 stores across 136 cities in India. The company boasts strong brands like Metro Shoes, Mochi, and Walkway, with premium offerings such as DaVinchi and J. Fontini. Metro Brands' strong vendor network, investment in technology, and continuous trend updates ensure fresh and innovative designs for customers. With a dedicated e-commerce platform and retail partnerships, the organization is supported by robust processes and a comprehensive supply chain network ensuring top-notch customer satisfaction. Role Description Analytical & Commercial Acumen: Strong ability to interpret financial and operational data to drive business insights and decisions. Retail FP&A Expertise: Proven hands-on experience in Financial Planning & Analysis, including budgeting, forecasting, and scenario planning within a retail environment. Merchandise Planning Knowledge: Demonstrated experience in pre-season and in-season merchandise planning, particularly within the fashion and footwear segments. Technical Proficiency: Advanced skills in MS Excel for modeling and analysis; familiarity with data visualization tools like Power BI or Tableau; exposure to platforms like Jupyter for data analysis is an added advantage. Demand Forecasting & Inventory Management: Understanding of forecasting techniques and the inventory lifecycle, including Open-to-Buy (OTB) frameworks. Clustering & Localization: Experience with store clustering methodologies and localized assortment planning based on store grade and regional preferences. Automation & Reporting: Ability to transition manual reports to automated, insight-driven dashboards and reports.
Stakeholder Management : Strong interpersonal and communication skills to collaborate with cross-functional teams and external analytics partners. Retail Context Exposure : Experience in fast-paced, multi-brand, and multi-channel retail setups Familiarity with Indian consumer behavior, regional buying trends, and seasonality patterns Qualifications Strong Analytical Skills and Finance knowledge Proficient in Market Research and Research methodologies Excellent Communication abilities, both written and verbal Experience in financial analysis, budgeting, and forecasting Bachelor’s degree in Finance, Economics, Business Administration, or a related field Advanced proficiency in Microsoft Excel and financial modeling tools Ability to work well in a team-oriented environment Prior experience in retail or consumer goods sector is a plus

Posted 9 hours ago

Apply

5.0 - 9.0 years

0 Lacs

haryana

On-site

The ideal candidate for this position should have 5 to 7 years of experience in database administration, specifically in managing MySQL, PostgreSQL, MongoDB, and Redis across various environments, including cloud (public/private) and on-premise setups. You will be responsible for installing, configuring, and upgrading databases, collaborating with development teams on schema design and code reviews, managing data backups and restorations, diagnosing and resolving performance issues, applying security patches, and ensuring database access control and user permissions. Additionally, you will be expected to define and enforce database standards, research emerging technologies, design monitoring systems, develop High Availability (HA) and Disaster Recovery (DR) strategies, and automate routine tasks. To qualify for this role, you should hold a Bachelor's degree in Information Technology, Computer Science, or a related field, along with at least 5-7 years of relevant experience in database administration. Strong expertise in PostgreSQL, MongoDB, and Redis administration is essential, as well as hands-on experience with replication, clustering, and partitioning. Proficiency in managing cloud-based databases, particularly on AWS, is required. The ideal candidate will possess strong analytical, problem-solving, and communication skills, and be able to work both independently and collaboratively in a fast-paced environment. Preferred qualifications include certification in Database Administration (DBA), experience with Kafka, BI/reporting platforms, and automation tools, as well as familiarity with DevOps practices and CI/CD for database operations. If you are looking to further develop your skills and expertise in database administration and are eager to contribute to a dynamic and innovative team, this position may be the perfect fit for you.

Posted 13 hours ago

Apply

Exploring Clustering Jobs in India

The job market for clustering roles in India is thriving, with numerous opportunities available for job seekers with expertise in this area. Clustering professionals are in high demand across various industries, including IT, data science, and research. If you are considering a career in clustering, this article will provide you with valuable insights into the job market in India.

Top Hiring Locations in India

Here are 5 major cities in India actively hiring for clustering roles:
1. Bangalore
2. Pune
3. Hyderabad
4. Mumbai
5. Delhi

Average Salary Range

The average salary range for clustering professionals in India varies based on experience levels. Entry-level positions may start at around INR 3-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-20 lakhs per annum.

Career Path

In the field of clustering, a typical career path may look like:
- Junior Data Analyst
- Data Scientist
- Senior Data Scientist
- Tech Lead

Related Skills

Apart from expertise in clustering, professionals in this field are often expected to have skills in:
- Machine Learning
- Data Analysis
- Python/R programming
- Statistics

Interview Questions

Here are 25 interview questions for clustering roles:
- What is clustering and how does it differ from classification? (basic)
- Explain the K-means clustering algorithm. (medium)
- What are the different types of distance metrics used in clustering? (medium)
- How do you determine the optimal number of clusters in K-means clustering? (medium)
- What is the Elbow method in clustering? (basic)
- Define hierarchical clustering. (medium)
- What is the purpose of clustering in machine learning? (basic)
- Can you explain the difference between supervised and unsupervised learning? (basic)
- What are the advantages of hierarchical clustering over K-means clustering? (advanced)
- How does the DBSCAN clustering algorithm work? (medium)
- What is the curse of dimensionality in clustering? (advanced)
- Explain the concept of the silhouette score in clustering. (medium)
- How do you handle missing values in clustering algorithms? (medium)
- What is the difference between agglomerative and divisive clustering? (advanced)
- How would you handle outliers in clustering analysis? (medium)
- Can you explain the concept of cluster centroids? (basic)
- What are the limitations of K-means clustering? (medium)
- How do you evaluate the performance of a clustering algorithm? (medium)
- What is the role of inertia in K-means clustering? (basic)
- Describe the process of feature scaling in clustering. (basic)
- How does the GMM algorithm differ from K-means clustering? (advanced)
- What is the importance of feature selection in clustering? (medium)
- How can you assess the quality of clustering results? (medium)
- Explain the concept of cluster density in DBSCAN. (advanced)
- How do you handle high-dimensional data in clustering? (medium)
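A few of the questions above (the K-means algorithm, choosing the optimal number of clusters, and the silhouette score) come up so often that it helps to be able to sketch them in code. The following is a minimal, illustrative Python sketch using scikit-learn on synthetic data; the dataset, parameter values, and variable names are assumptions for demonstration only, not from any specific interview.

```python
# Illustrative sketch: fit K-means for several values of k and pick the k
# with the best silhouette score. Well-separated synthetic data is assumed.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Three well-separated synthetic clusters (illustrative choice)
X, _ = make_blobs(
    n_samples=300,
    centers=[[0, 0], [10, 10], [-10, 10]],
    cluster_std=1.0,
    random_state=42,
)

scores = {}
for k in range(2, 7):
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    # Silhouette score: mean of (b - a) / max(a, b) over all samples, where
    # a = mean intra-cluster distance, b = mean distance to the nearest
    # other cluster. Higher is better; range is [-1, 1].
    scores[k] = silhouette_score(X, km.labels_)

best_k = max(scores, key=scores.get)
print(best_k)  # 3 for this well-separated synthetic data
```

For the Elbow method mentioned in the questions, one would instead plot `km.inertia_` (within-cluster sum of squared distances) against k and look for the point where the curve flattens; the silhouette score has the practical advantage of yielding a single number to maximize.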

Closing Remark

As you venture into the world of clustering jobs in India, remember to stay updated with the latest trends and technologies in the field. Equip yourself with the necessary skills and knowledge to stand out in interviews and excel in your career. Good luck on your job search journey!
