
7004 Hadoop Jobs - Page 26

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

10.0 - 14.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

You should have a B.Tech/B.E/MSc/MCA qualification along with a minimum of 10 years of experience. As a Software Architect - Cloud, your responsibilities will include architecting and implementing AI-driven Cloud/SaaS offerings. You will research and design new frameworks and features for various products, ensuring they meet high quality standards and are designed for scale, resiliency, and efficiency. You will also mentor lead and senior developers in their professional and technical growth, contribute to academic outreach programs, and participate in company branding activities.

To qualify for this position, you must have experience designing and delivering widely used enterprise-class SaaS applications, preferably in marketing technologies. Knowledge of cloud computing infrastructure and AWS certification are essential, as is hands-on experience with scalable distributed systems, AI/ML technologies, big data technologies, in-memory databases, caching systems, ETL tools, containerization solutions like Kubernetes, large-scale RDBMS deployments, SQL optimization, Agile and Scrum development processes, Java, Spring technologies, Git, and DevOps practices.

Posted 4 days ago

Apply

6.0 - 10.0 years

0 Lacs

Pune, Maharashtra

On-site

The Data Engineer will be responsible for designing, implementing, and maintaining the data infrastructure and pipelines necessary for AI/ML model training and deployment. You will work closely with data scientists and engineers to ensure data is clean, accessible, and efficiently processed. You should have 6-8 years of experience in data engineering, ideally in financial services. Strong proficiency in SQL, Python, and big data technologies (e.g., Hadoop, Spark) is required. Experience with cloud platforms (e.g., AWS, Azure, GCP) and data warehousing solutions is a must. Familiarity with ETL processes and tools, as well as knowledge of data governance, security, and compliance best practices, is essential.

Key Responsibilities:
- Build and maintain scalable data pipelines for data collection, processing, and analysis.
- Ensure data quality and consistency for training and testing AI models.
- Collaborate with data scientists and AI engineers to provide the required data for model development.
- Optimize data storage and retrieval to support AI-driven applications.
- Implement data governance practices to ensure compliance and security.

At GlobalLogic, we prioritize a culture of caring where people come first. You'll experience an inclusive culture of acceptance and belonging, and build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. We are committed to your continuous learning and development, offering opportunities to try new things, sharpen your skills, and advance your career. You will have the chance to work on projects that matter, engage your curiosity and problem-solving skills, and contribute to cutting-edge solutions shaping the world today. We support balance and flexibility in work and life, providing various career areas, roles, and work arrangements. Joining GlobalLogic means being part of a high-trust organization where integrity is key, and trust is fundamental to our relationships with employees and clients.

GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world's largest and most forward-thinking companies. Since 2000, we've been at the forefront of the digital revolution, collaborating with clients to transform businesses and industries through intelligent products, platforms, and services.
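To make the pipeline-building responsibility concrete, here is a minimal PySpark sketch of the kind of work described: ingest raw records, apply basic data quality checks, and write a curated table. All paths, column names, and the schema are hypothetical illustrations, not this employer's actual stack.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curated-transactions").getOrCreate()

# Ingest raw transaction data (path and schema are hypothetical).
raw = spark.read.json("s3://example-bucket/raw/transactions/")

# Basic quality checks: deduplicate and drop rows missing key fields.
clean = (
    raw.dropDuplicates(["transaction_id"])
       .filter(F.col("transaction_id").isNotNull() & F.col("amount").isNotNull())
       .withColumn("amount", F.col("amount").cast("double"))
)

# Flag suspect rows for review instead of silently dropping them.
clean = clean.withColumn("is_valid", F.col("amount") > 0)

# Write the curated dataset partitioned by date for efficient downstream reads.
(clean.write.mode("overwrite")
      .partitionBy("transaction_date")
      .parquet("s3://example-bucket/curated/transactions/"))
```

Flagging invalid rows rather than deleting them is a common governance choice: it preserves an audit trail while downstream consumers filter on the flag.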

Posted 4 days ago

Apply

0.0 - 4.0 years

0 Lacs

Karnataka

Hybrid

We are looking for someone who is enthusiastic to contribute to the implementation of a metadata-driven platform managing the full lifecycle of batch and streaming Big Data pipelines. This role involves applying ML and AI techniques in data management, such as anomaly detection for identifying and resolving data quality issues, and data discovery. The platform facilitates the delivery of Visa's core data assets to both internal and external customers. You will provide Platform-as-a-Service offerings that are easy to consume, scalable, secure, and reliable, using open source-based Cloud solutions for Big Data technologies.

Working at the intersection of infrastructure and software engineering, you will design and deploy data and pipeline management frameworks using open-source components such as Hadoop, Hive, Spark, HBase, Kafka streaming, and other Big Data technologies. Collaboration with various teams is essential to build and maintain innovative, reliable, secure, and cost-effective distributed solutions. Facilitating knowledge transfer to the Engineering and Operations teams, you will work on technical challenges and process improvements with geographically distributed teams.

Your responsibilities will include designing and implementing agile, innovative data pipeline and workflow management solutions that leverage technology advances for cost reduction, standardization, and commoditization. Driving the adoption of open standard toolsets to reduce complexity and support operational goals for increasing automation across the enterprise is a key aspect of this role. As a champion for the adoption of open infrastructure solutions that are fit for purpose, you will keep technology relevant. The role involves spending 80% of the time writing code in different languages, frameworks, and technology stacks.

At Visa, your uniqueness is valued. Working here provides an opportunity to make a global impact, invest in your career growth, and be part of an inclusive and diverse workplace. Join our global team of disruptors, trailblazers, innovators, and risk-takers who are driving economic growth worldwide, moving the industry forward creatively, and engaging in meaningful work that brings financial literacy and digital commerce to millions of unbanked and underserved consumers.

This position is hybrid, and the expectation of days in the office will be confirmed by your hiring manager.

**Basic Qualifications**:
- Minimum of 6 months of work experience or a bachelor's degree
- Bachelor's degree in Computer Science, Computer Engineering, or a related field
- Good understanding of data structures and algorithms
- Good analytical and problem-solving skills

**Preferred Qualifications**:
- 1 or more years of work experience or an advanced degree (e.g., Master's) in Computer Science
- Excellent programming skills with experience in at least one of the following: Python, Node.js, Java, Scala, GoLang
- MVC (model-view-controller) experience for end-to-end development
- Knowledge of SQL/NoSQL technology; familiarity with databases such as Oracle, DB2, SQL Server
- Proficiency in Unix-based operating systems and bash scripts
- Strong communication skills, including clear and concise written and spoken communication with professional judgment
- Team player with excellent interpersonal skills
- Demonstrated ability to lead and navigate through ambiguity
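As a hedged illustration of the anomaly-detection technique the listing mentions for data quality, here is a minimal robust z-score screen over daily pipeline row counts in Python. The metric, threshold, and numbers are invented for the example and do not reflect Visa's actual platform.

```python
import pandas as pd

def flag_anomalies(counts: pd.Series, threshold: float = 3.5) -> pd.Series:
    """Flag values far from the median using a robust z-score
    (median absolute deviation): a basic data quality screen."""
    median = counts.median()
    mad = (counts - median).abs().median()
    if mad == 0:
        return pd.Series(False, index=counts.index)
    robust_z = 0.6745 * (counts - median) / mad
    return robust_z.abs() > threshold

# Example: daily row counts from a pipeline run log (made-up numbers).
daily_counts = pd.Series(
    [10_120, 10_340, 9_980, 10_210, 120, 10_400],  # day 5 looks like a failed load
    index=pd.date_range("2024-01-01", periods=6),
)
print(flag_anomalies(daily_counts))  # only the 120-row day is flagged
```

The median-based score is used rather than a plain mean/std z-score because a single extreme day would otherwise inflate the standard deviation and mask itself; production systems typically add rolling windows or seasonal baselines on top of the same flag-and-review pattern.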

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

Delhi

On-site

As an AVP, Marketing Technology Audience Analyst at Synchrony, you will play a crucial role in understanding, building, and tracking audiences across various platforms. Your primary focus will be on developing best practices for audience governance and supporting the broader audience strategy development.

Your responsibilities will include performing audience analyses using internal and external data sources to optimize current audience campaigns and shape future campaign strategies. You will create data personas, collaborate with marketing partners to identify trends and audience opportunities, and work with cross-functional teams to collect data for audience insights. Additionally, you will be responsible for building audiences and managing the workflow from CRM data onboarding to audience segment delivery for programmatic and personalization campaigns. You will establish partnerships with cross-functional teams to understand their business needs and goals, delivering processes and opportunities accordingly.

To qualify for this role, you should have a Bachelor's degree with at least 5 years of experience in mining and analyzing digital audience performance. Alternatively, a minimum of 7 years of relevant experience in the financial domain will be considered in lieu of a degree. You should have a strong background in enterprise-level data science, analytics, and customer intelligence, with at least 3 years of professional digital marketing experience.

Desired characteristics include proficiency in data mining techniques and analytic programming languages such as Python, SQL, Java, and SAS. You should have leadership experience working with cross-functional partners and familiarity with analytic platforms and tools like Hadoop, R, Hive, and Tableau. Experience in areas such as probability and statistics, machine learning, and artificial intelligence will be advantageous. As an ideal candidate, you should be able to execute analyses with massive data sets, collaborate effectively with diverse teams, and provide strategic recommendations based on data insights. You should be a creative thinker with a history of synthesizing insights to drive business decisions and lead strategic discussions.

If you meet the eligibility criteria and possess the required skills and experience, we encourage you to apply for this role. This is a Level 10 position, and the work timings are from 2:00 PM to 11:00 PM IST. For internal applicants, it is essential to understand the criteria and mandatory skills needed for the role before applying. Informing your manager and HRM, updating your professional profile, and ensuring your resume is up to date are crucial steps in the application process. Employees at Level 8 and above who meet the specified tenure requirements are eligible to apply.

Join us at Synchrony and be part of a dynamic team that drives ROI, elevates brand presence, and fosters a culture of innovation in the ever-evolving market landscape.

Posted 4 days ago

Apply

2.0 - 6.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Software Engineer II at JPMorgan Chase within the Employee Platforms team, you will have the opportunity to enhance your software engineering career while working with a team of agile professionals. Your main responsibility will be to design and deliver cutting-edge technology products in a secure, stable, and scalable manner. You will play a crucial role in developing technology solutions across different technical areas to support the firm's business objectives.

Your key responsibilities will include executing innovative software solutions, developing high-quality production code, and identifying opportunities to enhance operational stability. You will lead evaluation sessions with external vendors and internal teams to drive architectural designs and technical applicability. Additionally, you will collaborate with various teams to drive feature development and produce documentation of cloud solutions.

To qualify for this role, you should have formal training or certification in software engineering concepts along with at least 2 years of practical experience. You must possess advanced skills in system design, application development, and testing. Proficiency in programming languages, automation, and continuous delivery methods is essential. An in-depth understanding of agile methodologies, such as CI/CD, Application Resiliency, and Security, is required. Knowledge of Python, Big Data technologies, and financial services industry IT systems will be advantageous.

Your success in this role will depend on your ability to innovate, collaborate with stakeholders, and excel in a diverse and improvement-focused environment. You should have a strong track record of technology implementation projects, along with expertise in software applications and technical processes within a technical discipline. Preferred skills include teamwork, initiative, knowledge of financial instruments, and experience with specific languages and frameworks such as Core Java 8, Spring, JPA/Hibernate, and React (JavaScript).

Posted 4 days ago

Apply

2.0 - 12.0 years

0 Lacs

Haryana

On-site

At American Express, the culture is built on a 175-year history of innovation, shared values, and Leadership Behaviors, with an unwavering commitment to supporting customers, communities, and colleagues. As a part of Team Amex, you will experience comprehensive support for your holistic well-being and numerous opportunities to learn new skills, develop as a leader, and advance your career. Your voice and ideas hold significance here, as your work creates an impact and contributes to defining the future of American Express.

Java Backend Developer: As a Java Backend Developer, you will serve as a core member of an agile team responsible for driving user story analysis and elaboration. You will design and develop responsive web applications using the best engineering practices. Your role will involve hands-on software development, including writing code, unit tests, proofs of concept, code reviews, and testing in ongoing sprints. Continuous improvement through ongoing code refactoring is essential. You will develop a deep understanding of integrations with other systems and platforms within the supported domains. Managing your time effectively, working both independently and as part of a team, is crucial. Bringing a culture of innovation, ideas, and continuous improvement is encouraged. Challenging the status quo, taking risks, and implementing creative ideas are key aspects of the role. Collaboration with product managers, back-end, and front-end engineers to implement versatile solutions to web development problems is expected, as is embracing emerging standards and promoting best practices and consistent framework usage.

Qualifications:
- BS or MS degree in computer science, computer engineering, or a related technical discipline
- Total experience: 3-12 years, with 2+ years of experience working in Java and demonstrated good Java knowledge
- Proficiency in Java 7 and Java 8 is preferred
- Demonstrated knowledge of web fundamentals and the HTTP protocol
- Positive attitude, effective communication skills, willingness to learn and collaborate
- 2+ years of development experience in Java applications within an enterprise setting
- Experience developing Java applications using frameworks such as Spring, Spring Boot, or Dropwizard is a plus
- Proficiency in Test-Driven Development (TDD) / Behavior-Driven Development (BDD) practices and various testing frameworks
- Experience in continuous integration and continuous delivery environments
- Working experience in an Agile or SAFe development environment is advantageous

Data Engineer: As a Data Engineer, you will be responsible for designing, developing, and maintaining data pipelines. Serving as a core member of an agile team, you will drive user story analysis, design, and development of responsive web applications. Collaborating closely with data scientists, analysts, and partners is essential to ensure seamless data flow. Building and optimizing reports for analytical and business purposes, monitoring and resolving data pipeline issues, and implementing data quality checks, validation processes, data governance policies, access controls, and security measures are all part of your responsibilities. Developing a deep understanding of integrations with other systems and platforms, fostering a culture of innovation, ideas, and continuous improvement, challenging the status quo, and taking risks to implement creative ideas are key aspects of the role. Managing your time effectively, working independently and as part of a team, adopting emerging standards, and promoting best practices and consistent framework usage are crucial. Collaborating with Product Owners to define requirements for new features and plan increments of work is also expected.

Qualifications:
- BS or MS degree in computer science, computer engineering, or a related technical subject area
- 3+ years of work experience
- At least 5 years of hands-on experience with SQL, including schema design, query optimization, and performance tuning
- Experience with distributed computing frameworks such as Hadoop, Hive, and Spark for processing large-scale data sets
- Proficiency in programming languages like Python and PySpark for building data pipelines and automation scripts
- Understanding of cloud computing and exposure to cloud services like GCP, AWS, or Azure
- Knowledge of CI/CD, Git commands, and deployment processes
- Strong analytical and problem-solving skills, with the ability to troubleshoot complex data issues and optimize data processing workflows
- Excellent communication and collaboration skills

American Express offers benefits that support your holistic well-being, including competitive base salaries, bonus incentives, financial well-being and retirement support, comprehensive medical, dental, vision, life insurance, and disability benefits, flexible working models, paid parental leave, access to wellness centers, counseling support, and career development and training opportunities. The offer of employment with American Express is subject to the successful completion of a background verification check, as per applicable laws and regulations.

Posted 4 days ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At PwC, our people in infrastructure focus on designing and implementing robust, secure IT systems that support business operations. They enable the smooth functioning of networks, servers, and data centres to optimise performance and minimise downtime. In infrastructure engineering at PwC, you will focus on designing and implementing robust and scalable technology infrastructure solutions for clients. Your work will involve network architecture, server management, and cloud computing experience.

Data Modeler Job Description: We are looking for candidates with a strong background in data modeling, metadata management, and data system optimization. You will be responsible for analyzing business needs, developing long-term data models, and ensuring the efficiency and consistency of our data systems. Key areas of responsibility include:
- Analyze and translate business needs into long-term solution data models.
- Evaluate existing data systems and recommend improvements.
- Define rules to translate and transform data across data models.
- Work with the development team to create conceptual data models and data flows.
- Develop best practices for data coding to ensure consistency within the system.
- Review modifications of existing systems for cross-compatibility.
- Implement data strategies and develop physical data models.
- Update and optimize local and metadata models.
- Utilize canonical data modeling techniques to enhance data system efficiency.
- Evaluate implemented data systems for variances, discrepancies, and efficiency.
- Troubleshoot and optimize data systems to ensure optimal performance.

Required expertise:
- Strong expertise in relational and dimensional modeling (OLTP, OLAP).
- Experience with data modeling tools (Erwin, ER/Studio, Visio, PowerDesigner).
- Proficiency in SQL and database management systems (Oracle, SQL Server, MySQL, PostgreSQL).
- Knowledge of NoSQL databases (MongoDB, Cassandra) and their data structures.
- Experience working with data warehouses and BI tools (Snowflake, Redshift, BigQuery, Tableau, Power BI).
- Familiarity with ETL processes, data integration, and data governance frameworks.
- Strong analytical, problem-solving, and communication skills.

Qualifications:
- Bachelor's degree in Engineering or a related field.
- 3 to 5 years of experience in data modeling or a related field.
- 4+ years of hands-on experience with dimensional and relational data modeling.
- Expert knowledge of metadata management and related tools.
- Proficiency with data modeling tools such as Erwin, PowerDesigner, or Lucid.
- Knowledge of transactional databases and data warehouses.

Preferred Skills:
- Experience in cloud-based data solutions (AWS, Azure, GCP).
- Knowledge of big data technologies (Hadoop, Spark, Kafka).
- Understanding of graph databases and real-time data processing.
- Certifications in data management, modeling, or cloud data engineering.
- Excellent communication and presentation skills.
- Strong interpersonal skills to collaborate effectively with various teams.
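To make the dimensional-modeling requirement concrete, here is a minimal star-schema sketch. The table and column names are invented for illustration; it uses Python's built-in sqlite3 so it runs anywhere.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A tiny star schema: one fact table keyed to two dimension tables.
cur.executescript("""
CREATE TABLE dim_date (
    date_key INTEGER PRIMARY KEY,
    full_date TEXT NOT NULL,
    month TEXT NOT NULL
);
CREATE TABLE dim_product (
    product_key INTEGER PRIMARY KEY,
    product_name TEXT NOT NULL,
    category TEXT NOT NULL
);
CREATE TABLE fact_sales (
    date_key INTEGER REFERENCES dim_date(date_key),
    product_key INTEGER REFERENCES dim_product(product_key),
    quantity INTEGER NOT NULL,
    revenue REAL NOT NULL
);
""")

cur.execute("INSERT INTO dim_date VALUES (20240101, '2024-01-01', '2024-01')")
cur.execute("INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware')")
cur.execute("INSERT INTO fact_sales VALUES (20240101, 1, 3, 29.97)")

# A typical OLAP-style rollup: revenue by month and category.
for row in cur.execute("""
    SELECT d.month, p.category, SUM(f.revenue)
    FROM fact_sales f
    JOIN dim_date d ON f.date_key = d.date_key
    JOIN dim_product p ON f.product_key = p.product_key
    GROUP BY d.month, p.category
"""):
    print(row)
```

The design choice being illustrated: facts (measures like quantity and revenue) live in a narrow, append-heavy table, while descriptive attributes live in small dimension tables, which keeps aggregation queries simple and fast.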

Posted 4 days ago

Apply

5.0 years

0 Lacs

Andhra Pradesh, India

On-site

At PwC, our people in infrastructure focus on designing and implementing robust, secure IT systems that support business operations. They enable the smooth functioning of networks, servers, and data centres to optimise performance and minimise downtime. In infrastructure engineering at PwC, you will focus on designing and implementing robust and scalable technology infrastructure solutions for clients. Your work will involve network architecture, server management, and cloud computing experience.

Data Modeler Job Description: We are looking for candidates with a strong background in data modeling, metadata management, and data system optimization. You will be responsible for analyzing business needs, developing long-term data models, and ensuring the efficiency and consistency of our data systems. Key areas of responsibility include:
- Analyze and translate business needs into long-term solution data models.
- Evaluate existing data systems and recommend improvements.
- Define rules to translate and transform data across data models.
- Work with the development team to create conceptual data models and data flows.
- Develop best practices for data coding to ensure consistency within the system.
- Review modifications of existing systems for cross-compatibility.
- Implement data strategies and develop physical data models.
- Update and optimize local and metadata models.
- Utilize canonical data modeling techniques to enhance data system efficiency.
- Evaluate implemented data systems for variances, discrepancies, and efficiency.
- Troubleshoot and optimize data systems to ensure optimal performance.

Required expertise:
- Strong expertise in relational and dimensional modeling (OLTP, OLAP).
- Experience with data modeling tools (Erwin, ER/Studio, Visio, PowerDesigner).
- Proficiency in SQL and database management systems (Oracle, SQL Server, MySQL, PostgreSQL).
- Knowledge of NoSQL databases (MongoDB, Cassandra) and their data structures.
- Experience working with data warehouses and BI tools (Snowflake, Redshift, BigQuery, Tableau, Power BI).
- Familiarity with ETL processes, data integration, and data governance frameworks.
- Strong analytical, problem-solving, and communication skills.

Qualifications:
- Bachelor's degree in Engineering or a related field.
- 5 to 9 years of experience in data modeling or a related field.
- 4+ years of hands-on experience with dimensional and relational data modeling.
- Expert knowledge of metadata management and related tools.
- Proficiency with data modeling tools such as Erwin, PowerDesigner, or Lucid.
- Knowledge of transactional databases and data warehouses.

Preferred Skills:
- Experience in cloud-based data solutions (AWS, Azure, GCP).
- Knowledge of big data technologies (Hadoop, Spark, Kafka).
- Understanding of graph databases and real-time data processing.
- Certifications in data management, modeling, or cloud data engineering.
- Excellent communication and presentation skills.
- Strong interpersonal skills to collaborate effectively with various teams.

Posted 4 days ago

Apply

7.0 - 10.0 years

0 Lacs

Greater Kolkata Area

Remote

Job Title: Senior Data Scientist
Location: Remote
Department: Data Science / Analytics / AI & ML
Experience: 7-10 years
Employment Type:

Summary: We are seeking an experienced and highly motivated Senior Data Scientist with 7-10 years of industry experience to lead advanced analytics initiatives and drive data-driven decision-making across the organization. The ideal candidate will be skilled in statistical modeling, machine learning, and data engineering, with a strong business sense and the ability to mentor junior team members.

Responsibilities:
- Lead end-to-end data science projects from problem definition through model deployment.
- Build, evaluate, and deploy machine learning models and statistical algorithms to solve complex business problems.
- Collaborate with cross-functional teams including Product, Engineering, and Business stakeholders to integrate data science solutions.
- Work with large, complex datasets using modern data tools (e.g., Spark, SQL, Airflow).
- Translate complex analytical results into actionable insights and present them to non-technical audiences.
- Mentor junior data scientists and provide technical guidance.
- Stay current with the latest trends in AI/ML, data science, and data engineering.
- Ensure reproducibility, scalability, and performance of machine learning systems in production.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, Engineering, or a related field; a PhD is a plus.
- 7-10 years of experience in data science, machine learning, or applied statistics roles.
- Strong programming skills in Python and/or R; proficiency in SQL.
- Deep understanding of statistical techniques, hypothesis testing, and predictive modeling.
- Hands-on experience with ML libraries such as scikit-learn, TensorFlow, PyTorch, XGBoost, etc.
- Familiarity with data processing tools like Spark, Hadoop, or equivalent.
- Experience deploying models into production environments (APIs, MLOps, CI/CD pipelines).
- Excellent communication skills and ability to convey technical insights to business audiences.
- Experience working in cloud environments such as AWS, GCP, or Azure.

(ref:hirist.tech)
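A minimal sketch of the train-and-persist step behind the model-deployment requirement, using synthetic data and a hypothetical artifact name; a real deployment would add validation, versioning, and monitoring.

```python
import joblib
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for business data: 1,000 rows, 5 features, binary target.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Persist the trained model; an API service (e.g., FastAPI) would load this
# artifact at startup and call predict_proba per request.
joblib.dump(model, "model-v1.joblib")  # hypothetical artifact name
```

Evaluating on a held-out split before persisting is the minimal reproducibility guard the listing's "reproducibility, scalability, and performance" bullet implies; MLOps tooling automates the same loop at scale.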

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

The Applications Development Senior Programmer Analyst position is an intermediate-level role where you will be responsible for participating in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. Your primary objective will be to contribute to applications systems analysis and programming activities.

Your responsibilities will include conducting tasks related to feasibility studies, time and cost estimates, IT planning, risk technology, applications development, model development, and establishing and implementing new or revised applications systems and programs to meet specific business needs or user areas. You will be responsible for monitoring and controlling all phases of the development process, including analysis, design, construction, testing, and implementation. Additionally, you will provide user and operational support on applications to business users.

You will utilize in-depth specialty knowledge of applications development to analyze complex problems and issues, evaluate business processes, system processes, and industry standards, and make evaluative judgments. Furthermore, you will recommend and develop security measures in post-implementation analysis of business usage to ensure successful system design and functionality. You will also consult with users/clients and other technology groups on issues, recommend advanced programming solutions, and install and assist customer exposure systems.

As the Applications Development Senior Programmer Analyst, you will ensure that essential procedures are followed, help define operating standards and processes, and serve as an advisor or coach to new or lower-level analysts. You will have the ability to operate with a limited level of direct supervision, exercise independence of judgment and autonomy, and act as a subject matter expert to senior stakeholders and/or other team members.

In this role, you will appropriately assess risk when business decisions are made, demonstrate particular consideration for the firm's reputation, and safeguard Citigroup, its clients, and assets by driving compliance with applicable laws, rules, and regulations. You will be required to have strong analytical and communication skills and must be results-oriented, willing, and able to take ownership of engagements. Additionally, experience in the banking domain is a must.

Qualifications:

Must have:
- 8+ years of application/software development/maintenance
- 5+ years of experience with Big Data technologies like Apache Spark, Hive, and Hadoop
- Knowledge of the Python, Java, or Scala programming languages
- Experience with Java, web services, XML, JavaScript, microservices, SOA, etc.
- Strong technical knowledge of Apache Spark, Hive, SQL, and the Hadoop ecosystem
- Ability to work independently, multi-task, and take ownership of various analyses or reviews

Good to have:
- Work experience in Citi or regulatory reporting applications
- Hands-on experience with cloud technologies, AI/ML integration, and creation of data pipelines
- Experience with vendor products like Tableau, Arcadia, Paxata, KNIME
- Experience with API development and use of data formats

Education:
- Bachelor's degree/University degree or equivalent experience

This is a high-level overview of the job responsibilities and qualifications. Other job-related duties may be assigned as required.

Posted 4 days ago

Apply

3.0 - 8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Associate

Job Description & Summary: A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation, and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients more effectively run their business and understand what business questions can be answered and how to unlock the answers.

Location: Gurugram
Experience: 3-8 years
Qualification: Any
Role: Mainframe Developer; Data Engineer; AWS Specialist; Business Analyst (Insurance Background)
Skills: Mainframe - COBOL, JCL, DB2, VSAM, IMS, CICS

Mainframe Developer Associate (3-6 years)

Mandatory skill sets: Captives (GCC) - Mainframe Developer / Data Engineer / Business Analyst / AWS Specialist

Preferred skill sets: Hadoop, Hive, SQL, Spark, CDC tools with AWS services (DMS, Glue, Glue Catalog, Athena, S3, Lake Formation); ETL concepts, SQL; BA from the insurance domain; Mainframe - COBOL, JCL, DB2, VSAM, IMS, CICS; Python, AWS API Gateway, AWS Lambda, SNS, S3, SQS, Event Notification, EventBridge, CloudWatch, DynamoDB, AWS SAM, VPC, Security, CloudFormation/Terraform, IAM, AWS CLI, etc.

Years of experience required: 2-5 years
Qualifications: Any

Required Skills: Business Analytics, Data Engineering, Mainframe Development
Optional Skills: Apache Hadoop, Apache Hive, Apache Spark, Structured Query Language (SQL), Virtual Private Cloud

Posted 4 days ago

Apply

2.0 - 6.0 years

0 Lacs

Pune, Maharashtra

On-site

As an Advisor, Statistical Analysis at Fiserv, you will utilize your expertise in Computer Science, Data Science, Statistics, Mathematics, or related fields to drive impactful insights. Your responsibilities will involve leveraging your proficiency in Python, SQL, and data science libraries such as pandas, scikit-learn, TensorFlow, and PyTorch to analyze and interpret data effectively.

To excel in this role, you should have hands-on experience with big data technologies like Spark and Hadoop, as well as cloud platforms including AWS, GCP, and Azure. Your strong problem-solving skills will be crucial in tackling complex challenges, and your ability to thrive in a dynamic and collaborative work environment will contribute to the success of our team.

If you are a forward-thinker with a passion for innovation, and are seeking a rewarding career where you can make a difference, Fiserv is the place for you. Join us in shaping the future of financial technology and unlock your full potential. To apply for this position, please submit your application using your legal name, complete the step-by-step profile, and attach your resume. We appreciate your interest in becoming part of the Fiserv team.

At Fiserv, we are committed to fostering an inclusive and diverse workplace where every individual is valued and respected. We believe that diversity drives innovation, and we embrace the unique perspectives and experiences of our employees.

Please note that Fiserv does not accept resume submissions from agencies without existing agreements. Kindly refrain from sending resumes to Fiserv associates, as we are not liable for any fees related to unsolicited submissions. We also caution applicants to be vigilant against fake job posts that are not affiliated with Fiserv; these fraudulent postings may be used by cybercriminals to obtain personal information or financial data. Any communication from a Fiserv representative will come from a legitimate Fiserv email address to ensure transparency and security throughout the hiring process.

Posted 4 days ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

As a highly skilled Senior Data Scientist, you will bring expertise in Python, Machine Learning (ML), Natural Language Processing (NLP), Generative AI (GenAI), and Azure Cloud Services to our team. Your primary responsibility will be to design, develop, and deploy advanced AI/ML models to facilitate data-driven decision-making processes. You must possess strong analytical skills, proficiency in AI/ML technologies, and hands-on experience with cloud-based solutions.

Your key responsibilities will include designing and developing ML, NLP, and GenAI models to address complex business challenges. You will be tasked with building, training, and optimizing AI models using Python and various ML frameworks, as well as implementing Azure AI/ML services for scalable deployment. Additionally, you will develop and integrate APIs for real-time model inference and decision-making, collaborate with cross-functional teams, and ensure adherence to software engineering best practices and Agile methodologies. To excel in this role, you must stay updated on cutting-edge AI/ML advancements, conduct research on emerging trends, and contribute to the development of innovative solutions. Providing technical mentorship to junior data scientists, optimizing model performance in a production environment, and continuously enhancing models and algorithms will also be part of your responsibilities.

The required skills and qualifications for this role include proficiency in Python and ML frameworks such as TensorFlow, PyTorch, or scikit-learn. Hands-on experience with NLP techniques, expertise in Generative AI models, strong knowledge of Azure AI/ML services, and familiarity with CI/CD pipelines are essential. Additionally, you should have a strong understanding of software engineering principles, experience working in an Agile development environment, excellent problem-solving skills, and a background in statistical analysis, data mining, and data visualization.

Preferred qualifications include experience in MLOps, knowledge of vector databases and retrieval-augmented generation techniques, exposure to big data processing frameworks, and familiarity with Graph Neural Networks and recommendation systems. Strong communication skills to convey complex ideas to technical and non-technical stakeholders, experience with AutoML frameworks, and hyperparameter tuning strategies will be advantageous in this role.

This is a full-time or part-time permanent position with benefits such as health insurance and provident fund. The work schedule includes day shifts from Monday to Friday with weekend availability, and an additional performance bonus will be provided based on your contribution to the team. If you have at least 8 years of experience in Python, Azure AI/ML services, and Senior Data Scientist roles, including work with ML, NLP, and GenAI models, we encourage you to apply for this opportunity. The work location is in person to foster collaboration and innovation within the team.

Posted 4 days ago

Apply

7.0 - 11.0 years

0 Lacs

Coimbatore, Tamil Nadu

On-site

As a Data Engineer specializing in supply chain applications, you will play a crucial role in the Supply Chain Analytics team at NovintiX, based in Coimbatore, India. Your primary responsibility will be to design, develop, and optimize scalable data solutions that support various aspects of logistics, procurement, and demand planning.

Your key responsibilities will include building and enhancing data pipelines for inventory, shipping, and procurement data; integrating data from ERP, PLM, and third-party sources; and creating APIs to facilitate seamless data exchange. Additionally, you will design and maintain enterprise-grade data lakes and warehouses while ensuring high standards of data quality, integrity, and security. Collaborating with stakeholders, you will develop reporting dashboards using tools like Power BI, Tableau, or QlikSense to support supply chain decision-making through data-driven insights. You will also build data models and algorithms for demand forecasting and logistics optimization, leveraging ML libraries and concepts for predictive analysis.

Your role will involve cross-functional collaboration with supply chain, logistics, and IT teams, translating complex technical solutions into business language to drive operational efficiency. Implementing robust data governance frameworks and ensuring data compliance and audit readiness will be essential aspects of the job.

To qualify for this position, you should have at least 7 years of experience in data engineering, a Bachelor's degree in Computer Science/IT or a related field, and expertise in technologies such as Python, Java, SQL, Spark SQL, Hadoop, PySpark, NoSQL, Power BI, Tableau, QlikSense, Azure Data Factory, Azure Databricks, and AWS. Strong collaboration and communication skills and experience in fast-paced, agile environments are also desired.

This is a full-time position based in Coimbatore, Tamil Nadu, requiring in-person work. If you are passionate about leveraging data to drive supply chain efficiency and are ready to take on this exciting challenge, please send your resume to shanmathi.saravanan@novintix.com before the application deadline on 13/07/2025.
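As a hedged sketch of the demand-forecasting work described, here is a deliberately simple lag-feature model on synthetic weekly demand; the data and model choice are illustrative assumptions, not NovintiX's actual approach.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic weekly demand with trend and noise (stand-in for real shipment data).
rng = np.random.default_rng(0)
weeks = np.arange(104)
demand = 200 + 1.5 * weeks + rng.normal(scale=15, size=104)

# Lag features: predict this week's demand from the previous two weeks.
X = np.column_stack([demand[:-2], demand[1:-1]])
y = demand[2:]

model = LinearRegression().fit(X[:-4], y[:-4])  # hold out the last 4 weeks
print("held-out predictions:", model.predict(X[-4:]).round(1))
print("actuals:             ", y[-4:].round(1))
```

Real supply-chain forecasting would add seasonality, promotions, and lead-time features, but the lag-feature framing of "predict next period from recent periods" is the common starting point.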

Posted 4 days ago

Apply

7.0 - 11.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a PySpark Data Reconciliation Engineer, you should have at least 7 years of relevant experience in technology and development. Your technical skill set should include proficiency in Java or Python, along with hands-on experience in Big Data, Hadoop, Spark, and Kafka. Familiarity with APIs and microservices architecture is essential, and UI development and integration experience would be a strong advantage.

Your domain expertise should lie in the capital markets domain, with a preference for experience in regulatory reporting or reconciliations within a technological context. It is crucial to have a proven track record of successfully delivering large-scale projects with globally distributed teams. Application development skills, knowledge of design paradigms, and any previous experience in the data domain would be beneficial for this role.

Stakeholder management and the ability to lead a global technology team are key requirements for this position. You should consistently demonstrate clear and concise written and verbal communication skills. If you meet these qualifications and are ready to take on challenging projects in a dynamic environment, we encourage you to apply for this opportunity.
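A minimal sketch of record-level reconciliation in PySpark, the core technique this role names: a full outer join between two feeds that should agree, surfacing missing rows and amount breaks. The feed names, columns, and tolerance are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("recon").getOrCreate()

# Two feeds that should agree (names and schemas are illustrative).
source = spark.createDataFrame(
    [("T1", 100.0), ("T2", 250.0), ("T3", 75.0)], ["trade_id", "amount"])
target = spark.createDataFrame(
    [("T1", 100.0), ("T2", 249.5)], ["trade_id", "amount"])

# Full outer join keeps breaks on both sides: missing rows and mismatches.
recon = (
    source.alias("s")
    .join(target.alias("t"), "trade_id", "full_outer")
    .withColumn(
        "status",
        F.when(F.col("s.amount").isNull(), "missing_in_source")
         .when(F.col("t.amount").isNull(), "missing_in_target")
         .when(F.abs(F.col("s.amount") - F.col("t.amount")) > 0.01, "amount_break")
         .otherwise("matched"),
    )
)
recon.filter(F.col("status") != "matched").show()
```

The tolerance (0.01 here) is a business decision in real reconciliations; regulatory reporting contexts typically also persist the full break report rather than only displaying it.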

Posted 4 days ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

At Medtronic, you can embark on a rewarding career dedicated to exploration and innovation, all while contributing to the advancement of healthcare access and equity for all. As a Digital Engineer at our new MiniMed India Hub, you will play a crucial role in leveraging technology to enhance healthcare solutions on a global scale. Specifically, as a PySpark Data Engineer, you will be tasked with designing, developing, and maintaining data pipelines using PySpark. Your collaboration with data scientists, analysts, and stakeholders will be essential in ensuring the efficient processing and analysis of large datasets, as well as handling complex transformations and aggregations.

This role offers an exciting opportunity to work within Medtronic's Diabetes business. As the Diabetes division prepares for separation to foster future growth and innovation, you will have the chance to operate with increased speed and agility. By working as a separate entity, there will be a focus on driving meaningful innovation and enhancing the impact on patient care.

Your responsibilities will include designing, developing, and maintaining scalable and efficient ETL pipelines using PySpark; working with structured and unstructured data from various sources; optimizing PySpark applications for performance and scalability; collaborating with data scientists and analysts to understand data requirements; implementing data quality checks; monitoring and troubleshooting data pipeline issues; documenting technical specifications; and staying updated on the latest trends and technologies in big data and distributed computing.

To excel in this role, you should possess a Bachelor's degree in computer science, engineering, or a related field, along with 4-5 years of experience in data engineering focusing on PySpark. Proficiency in Python and Spark, strong coding and debugging skills, knowledge of SQL and relational databases, hands-on experience with cloud platforms, familiarity with data warehousing solutions, experience with big data technologies, problem-solving abilities, and effective communication and collaboration skills are essential. Preferred skills include experience with Databricks, orchestration tools like Apache Airflow, knowledge of machine learning workflows, an understanding of data security and governance best practices, familiarity with streaming data platforms, and knowledge of CI/CD pipelines and version control systems.

Medtronic offers a competitive salary and flexible benefits package, along with a commitment to recognizing and supporting employees at every stage of their career and life. As part of the Medtronic team, you will contribute to the mission of alleviating pain, restoring health, and extending life by tackling the most challenging health problems facing humanity. Join us in engineering solutions that make a real difference in people's lives.
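A small sketch of the "complex transformations and aggregations" the role mentions: a per-patient window function followed by a daily rollup. The glucose-reading schema is a hypothetical example chosen only to fit the diabetes context, not an actual Medtronic pipeline.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("readings-agg").getOrCreate()

readings = spark.createDataFrame(
    [("p1", "2024-01-01", 110), ("p1", "2024-01-02", 145),
     ("p2", "2024-01-01", 98),  ("p2", "2024-01-02", 102)],
    ["patient_id", "day", "glucose"],
)

# Per-patient rolling average via a window over the current and prior reading.
w = Window.partitionBy("patient_id").orderBy("day").rowsBetween(-1, 0)
enriched = readings.withColumn("rolling_avg", F.avg("glucose").over(w))

# Daily rollup across all patients.
daily = (enriched.groupBy("day")
                 .agg(F.avg("glucose").alias("mean_glucose"),
                      F.max("glucose").alias("max_glucose")))
daily.show()
```

Window functions keep row-level detail while adding context from neighboring rows, whereas groupBy collapses rows; pipelines of this kind typically chain both, as here.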

Posted 4 days ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Description: Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company. A leader in the convenience store and fuel space, it has a footprint across 31 countries and territories. The Circle K India Data & Analytics team is an integral part of ACT's Global Data & Analytics Team, and the Data Scientist will be a key player on this team, helping grow analytics globally at ACT. The hired candidate will partner with multiple departments, including Global Marketing, Merchandising, Global Technology, and Business Units.

About The Role: The incumbent will be responsible for delivering advanced analytics projects that drive business results, including interpreting business needs, selecting the appropriate methodology, data cleaning, exploratory data analysis, model building, and creation of polished deliverables.

Responsibilities:

Analytics & Strategy
- Analyse large-scale structured and unstructured data; develop deep-dive analyses and machine learning models in retail, marketing, merchandising, and other areas of the business
- Utilize data mining, statistical and machine learning techniques to derive business value from store, product, operations, financial, and customer transactional data
- Apply multiple algorithms or architectures and recommend the best model with an in-depth description to evangelize data-driven business decisions
- Utilize cloud setup to extract processed data for statistical modelling and big data analysis, and visualization tools to represent large sets of time series/cross-sectional data

Operational Excellence
- Follow industry standards in coding solutions and follow the programming life cycle to ensure standard practices across the project
- Structure hypotheses, build thoughtful analyses, develop underlying data models, and bring clarity to previously undefined problems
- Partner with Data Engineering to build, design, and maintain core data infrastructure, pipelines, and data workflows to automate dashboards and analyses

Stakeholder Engagement
- Work collaboratively across multiple sets of stakeholders (business functions, data engineers, data visualization experts) to deliver on project deliverables
- Articulate complex data science models to business teams and present the insights in easily understandable and innovative formats

Job Requirements:

Education
- Bachelor's degree required, preferably with a quantitative focus (Statistics, Business Analytics, Data Science, Math, Economics, etc.)
- Master's degree preferred (MBA/MS Computer Science/M.Tech Computer Science, etc.)

Relevant Experience
- 3-4 years of relevant working experience in a data science/advanced analytics role

Behavioural Skills
- Delivery excellence
- Business disposition
- Social intelligence
- Innovation and agility

Knowledge
- Functional analytics (supply chain analytics, marketing analytics, customer analytics)
- Statistical modelling using analytical tools (R, Python, KNIME, etc.) and big data technologies
- Knowledge of statistics and experimental design (A/B testing, hypothesis testing, causal inference)
- Practical experience building scalable ML models, feature engineering, model evaluation metrics, and statistical inference
- Practical experience deploying models using MLOps tools and practices (e.g., MLflow, DVC, Docker, etc.)
- Strong coding proficiency in Python (Pandas, Scikit-learn, PyTorch/TensorFlow, etc.)
- Big data technologies and frameworks (AWS, Azure, GCP, Hadoop, Spark, etc.)
- Enterprise reporting systems, relational (MySQL, Microsoft SQL Server, etc.) and non-relational (MongoDB, DynamoDB) database management systems, and data engineering tools
- Business intelligence and reporting (Power BI, Tableau, Alteryx, etc.)
- Microsoft Office applications (MS Excel, etc.)
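As a small worked illustration of the A/B-testing knowledge called out above: a two-proportion z-test on made-up conversion counts, using statsmodels.

```python
from statsmodels.stats.proportion import proportions_ztest

# Made-up campaign results: conversions out of visitors for variants A and B.
conversions = [420, 480]
visitors = [10_000, 10_000]

z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# A p-value below 0.05 would suggest the difference in conversion rates
# (4.2% vs 4.8%) is unlikely to be due to chance alone (two-sided test).
```

In practice the sample size is fixed in advance via a power calculation, and peeking at interim results inflates the false-positive rate; the test itself is only the last step of the experimental design.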

Posted 4 days ago

Apply

4.0 - 8.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

The Machine Learning Engineer position at our fast-growing AI/ML startup is ideal for a highly skilled and self-motivated individual with at least 4 years of experience. As a Machine Learning Engineer, you will be responsible for designing and deploying intelligent systems and advanced algorithms tailored to real-world business problems across diverse industries. The role requires a creative thinker with a strong mathematical foundation, hands-on experience in machine learning and deep learning, and the ability to work independently in a dynamic, agile environment.

Your key responsibilities will include collaborating with cross-functional teams to design and develop machine learning and deep learning algorithms, translating complex client problems into mathematical models, building data pipelines and automated classification systems, conducting data mining, and applying supervised/unsupervised learning to extract meaningful insights. You will also be responsible for performing Exploratory Data Analysis (EDA), hypothesis generation, and pattern recognition; developing and implementing Natural Language Processing (NLP) techniques; extending and customizing ML libraries/frameworks; visualizing analytical findings; developing and integrating APIs; providing technical documentation and support; and staying updated with the latest trends in AI/ML.

To qualify for this role, you should have a B.Tech/BE or M.Tech/MS in Computer Science, Computer Engineering, or a related field; a solid understanding of data structures, algorithms, probability, and statistical methods; proficiency in Python, R, or Java; hands-on experience with ML/DL frameworks and libraries; experience with cloud services; a strong grasp of NLP, predictive analytics, and deep learning algorithms; familiarity with big data technologies; expertise in building and deploying scalable AI/ML models; exceptional analytical, problem-solving, and communication skills; and a strong portfolio of applied ML use cases.

Joining our team will provide you with the opportunity to work at the forefront of AI innovation, be part of a high-impact team driving AI solutions across industries, enjoy a flexible remote working culture with autonomy and ownership, receive competitive compensation, growth opportunities, and access to cutting-edge technology, and embrace our culture of Learning, Engaging, Achieving, and Pioneering (LEAP) in every project you touch. Apply now to be a part of our innovative and collaborative team!

Posted 5 days ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be part of a dynamic team of researchers, data scientists, and developers as an AI/ML Developer. Your primary responsibility will involve working on cutting-edge AI solutions in various industries like commerce, agriculture, insurance, financial markets, and procurement. Your focus will be on developing and optimizing machine learning and generative AI models to address real-world challenges effectively.

Your key responsibilities will include developing and optimizing ML, NLP, Deep Learning, and Generative AI models. You will be required to research and implement advanced algorithms for supervised and unsupervised learning. Additionally, you will work with large-scale datasets in distributed environments and understand business processes to select and apply the most suitable ML approaches. Ensuring the scalability and performance of ML solutions will also be a crucial part of your role.

Collaboration with cross-functional teams, such as product owners, designers, and developers, will be essential. You will be expected to solve intricate data integration and deployment challenges while effectively communicating results using data visualization. Working in global teams across different time zones will also be part of your job scope.

To be successful in this role, you should possess strong experience in Machine Learning, Deep Learning, NLP, and Generative AI. Hands-on expertise in frameworks like TensorFlow, PyTorch, or Hugging Face Transformers is required. Experience with LLMs, model fine-tuning, and prompt engineering will be beneficial. Proficiency in Python, R, or Scala for ML development is necessary, along with knowledge of cloud-based ML platforms such as AWS, Azure, and GCP. Experience with big data processing using tools like Spark, Hadoop, or Dask is also preferred. Your ability to scale ML models from prototypes to production and your strong analytical and problem-solving skills will be highly valued.

If you are enthusiastic about pushing the boundaries of ML and GenAI, we are excited to hear from you!

Posted 5 days ago

Apply

2.0 - 6.0 years

0 Lacs

Haryana

On-site

As a Consultant in Performance Analytics at Mastercard, you will be part of the Advisors & Consulting Services group, which specializes in Strategy & Transformation, Performance Analytics, Business Experimentation, Marketing, and Program Management. Your role will involve translating data into insights by utilizing both Mastercard and customer data to create, implement, and scale analytical solutions for clients. You will employ qualitative and quantitative analytical techniques along with enterprise applications to synthesize analyses into clear recommendations and impactful narratives.

Your responsibilities will include providing creative input on projects across various industries, contributing to the development of analytics strategies for regional and global clients, collaborating with the Mastercard team to understand client needs, and developing relationships with client analysts and managers. Additionally, you will collaborate with senior project delivery consultants, identify key findings, prepare presentations, deliver recommendations to clients, and lead internal and client meetings.

To qualify for this role, you should have an undergraduate degree with experience in data and analytics, proficiency in data analytics software such as Python, R, SQL, and SAS, and advanced skills in Word, Excel, and PowerPoint. You must be able to analyze large datasets, synthesize key findings, manage clients or internal stakeholders, and communicate effectively in English and the local office language. Preferred qualifications include additional experience with database structures, data visualization tools, the Hadoop framework, and relevant industry expertise.

As part of your role, you will be expected to abide by Mastercard's security policies, ensure the confidentiality and integrity of accessed information, report any suspected security violations, and complete mandatory security trainings. This position offers opportunities for professional growth and development through mentorship from performance analytics leaders.

If you are passionate about leveraging data to drive business insights and solutions, and possess the required qualifications and skills, we encourage you to explore the available positions in Performance Analytics at Mastercard and apply to join our dynamic team.

Posted 5 days ago

Apply

7.0 - 11.0 years

0 Lacs

Karnataka

On-site

As a Talend ETL Lead, you will be responsible for leading the design and development of scalable ETL pipelines using Talend, integrating with big data platforms, and mentoring junior developers. This is a high-impact, client-facing role requiring hands-on leadership and solution ownership.

Responsibilities:
- Lead the end-to-end development of ETL pipelines using Talend Data Fabric.
- Collaborate with data architects and business stakeholders to understand requirements.
- Build and optimize data ingestion, transformation, and loading processes.
- Ensure high performance, scalability, and reliability of data solutions.
- Mentor and guide junior developers in the team.
- Troubleshoot and resolve ETL-related issues quickly.
- Manage deployments and promote code through different environments.

Qualifications:
- 7+ years of experience in ETL/data engineering.
- Strong hands-on experience with Talend Data Fabric.
- Solid understanding of SQL and the Hadoop ecosystem (HDFS, Hive, Pig, etc.).
- Experience building robust data ingestion pipelines.
- Excellent communication and leadership skills.

Posted 5 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description: We are looking for leaders in the data science domain who are not only passionate about building new algorithms and filing patents but can also solve business problems using innovative and unique applied engineering concepts on big data. The Principal Data Scientist is responsible for ensuring that the delivery of data science solutions meets the laid-out standards and commitments that Quadratyx has made to its clients. Apart from continuously improving the data science expertise of the company, the Principal Data Scientist needs to handle the research and development team with equal focus.

Requisites:
- Education: A Bachelor's/Master's degree in a relevant field is required; a Ph.D. is preferred.
- Experience: 9+ years of relevant experience in delivering solutions using AI tools and machine learning algorithms, with proven expertise in data science research and awarded patents.

Job Responsibilities:
- Lead a team of data scientists in developing and delivering machine learning models that work in a real-life production environment
- Collaborate with other department heads on delivery of projects
- Lead pre-sales responsibilities to deliver proposals
- Monitor the projects and sprints planned for the data science team
- Ensure adoption of the latest data science technologies
- Continuously challenge the model outcome and better the results
- Research to solve business problems where creative ideas may have been exhausted
- Follow agile methodology in an iterative software development environment: committing to deadlines, owning work, and providing project status updates when necessary
- Own strategic thought leadership for the subject of enterprise-wide machine learning capabilities
- Dig deeper into data, understand characteristics of data, evaluate alternate models, and validate hypotheses through theoretical and empirical approaches
- Represent Quadratyx in data science conferences and interact with business leaders
- Publish in internal journals of repute and build patentable research outputs

Technical Skills:

Must have:
- Relevant industry experience in handling high volumes of structured and unstructured data
- Implementation expertise in Quantitative Analytics spanning statistical modeling, machine learning, optimization methods, econometrics, graph theories, artificial intelligence, text mining, and Natural Language Processing
- Solution delivery experience in deep learning methods
- Expertise in exploiting parallel computing environments, such as clusters on big data platforms like Hadoop
- Proven programming skills in languages such as Python, R, Scala, or MATLAB

Preferable to have:
- Problem-solving experience in maturing big data platforms
- Hands-on experience with tools and components used to assimilate and analyze big data
- Experience with data visualization tools to develop descriptive analyses
- Experience deploying solutions on cloud platforms like Azure and AWS

Other Skills:
- Provable excellence in past record
- Problem-solving and analytical skills
- A penchant to excel and a strong urge to research and deliver
- Strong communication, interpersonal, and leadership skills
- An inclusive attitude that respects diverse individuals and perspectives

Posted 5 days ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Who You'll Work With
Driving lasting impact and building long-term capabilities with our clients is not easy work. You are the kind of person who thrives in a high-performance/high-reward culture: doing hard things, picking yourself up when you stumble, and having the resilience to try another way forward. In return for your drive, determination, and curiosity, we'll provide the resources, mentorship, and opportunities you need to become a stronger leader faster than you ever thought possible. Your colleagues, at all levels, will invest deeply in your development, just as much as they invest in delivering exceptional results for clients. Every day, you'll receive apprenticeship, coaching, and exposure that will accelerate your growth in ways you won't find anywhere else.

When you join us, you will have:
- Continuous learning: Our learning and apprenticeship culture, backed by structured programs, is all about helping you grow while creating an environment where feedback is clear, actionable, and focused on your development. The real magic happens when you take the input from others to heart and embrace the fast-paced learning experience, owning your journey.
- A voice that matters: From day one, we value your ideas and contributions. You'll make a tangible impact by offering innovative ideas and practical solutions. We not only encourage diverse perspectives, they are critical in driving us toward the best possible outcomes.
- Global community: With colleagues across 65+ countries and over 100 different nationalities, our firm's diversity fuels creativity and helps us come up with the best solutions for our clients. Plus, you'll have the opportunity to learn from exceptional colleagues with diverse backgrounds and experiences.
- World-class benefits: On top of a competitive salary (based on your location, experience, and skills), we provide a comprehensive benefits package to enable holistic well-being for you and your family.

Your Impact
You will join our Client Capabilities Network in our Gurugram or Bengaluru office as part of the HALO (Healthcare Access, Quality and Outcomes) domain within the SHaPE practice. This team uses healthcare data (payer, provider, and third-party data, etc.), advanced analytics (a combination of descriptive and predictive analytics), advanced technologies (automation, digital, AI), and core consulting skills to answer some of the most pressing questions our healthcare payor clients face. HALO has been a fast-growing domain, bringing together a diverse mix of healthcare experts, data scientists, engineers, and more to drive enterprise transformation programs across the healthcare value chain. McKinsey's SHaPE fosters innovation driven by advanced analytics, user experience design thinking, and predictive forecasting to develop new products and services and integrate them into our client work. It is helping to shift our model toward asset-based consulting and is a foundation for our entrepreneurial culture.

As a Capabilities and Insights Analytics Analyst, you will be staffed on active client engagements focused on healthcare value, with the ultimate goal of making healthcare better, more affordable, and accessible for billions of people around the world. You will deliver high-quality analytical outputs to guide decisions on business problems for our clients, such as identifying potential opportunity areas in medical, network, admin, and pharmacy costs using descriptive analytics and, where needed, web-accessible dynamic reports using advanced analytics and cloud-based platforms.

You will be actively engaged with front-line client teams helping payors implement Healthcare Value Optimization (HVO) programs and develop capabilities for sustained impact that improve member and provider experience and enhance the quality and affordability of the healthcare system. You will use our proprietary big data and analytical platform, Nebula, which will become a core element of your toolkit. You will also support the development of HALO's Intellectual Property (IP), knowledge and product development, and drive the process of creating end-to-end solutions. You are expected to own the data models that you and your team create, develop deep healthcare content expertise, and build a consulting toolkit, all of which are critical in solving the overall business problem. You will also hone project management and client communication skills as you participate in problem-solving sessions with clients and McKinsey leadership. You will collaborate with other HALO colleagues, SHaPE domains (primarily healthcare domains such as Provider Performance Improvement (PPI), Episodes of Care, Population Health, Healthcare Data Management, etc.), McKinsey integrative consultants, and other McKinsey teams (Design, QuantumBlack, Academy, etc.).

Your Qualifications and Skills
- Bachelor's degree in business or engineering preferred
- 2+ years of experience
- Experience working with large databases, data visualization tools, machine learning, and statistics preferred
- Working knowledge of SQL; knowledge of Hadoop, Hive, Tableau, R, and Python is a plus
- Knowledge of process automation for efficient processes
- Strong problem-solving skills; ability to process complex information, break it into logical steps and tasks, and present it clearly to a range of audiences
- Strong entrepreneurial drive; committed to team and personal development
- Client service and operations mindset
- Strong team player in a dynamic and changing environment; ability to work well with multidisciplinary teams across continents and time zones
- Strong oral and written communication skills
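As a hedged illustration of the descriptive cost analytics this role describes (the claims extract, column names, and cost buckets are hypothetical; real engagements would query a platform such as Nebula via SQL or Hive rather than build a local frame), a minimal Python sketch might look like:

    import pandas as pd

    # Hypothetical claims extract for a payor cost review.
    claims = pd.DataFrame({
        "member_id":   [1, 1, 2, 3, 3, 3],
        "cost_bucket": ["medical", "pharmacy", "medical",
                        "admin", "medical", "pharmacy"],
        "paid_amount": [1200.0, 150.0, 300.0, 40.0, 900.0, 60.0],
    })

    # Descriptive cut: total spend and average paid per claim by bucket,
    # the kind of view used to flag potential opportunity areas.
    summary = (claims.groupby("cost_bucket")["paid_amount"]
               .agg(total="sum", avg_per_claim="mean")
               .sort_values("total", ascending=False))
    print(summary)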

Posted 5 days ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Where Data Does More. Join the Snowflake team.

We are looking for a Solutions Architect to join our Professional Services team to deploy cloud products and services for our customers' Global Competency Centers located in India. This person must be a hands-on self-starter who loves solving innovative problems in a fast-paced, agile environment. The ideal candidate will have a broad range of skills and experience, from data architecture to ETL, security, performance analysis, and analytics; the insight to make the connection between a customer's specific business problems and Snowflake's solution; the customer-facing skills to communicate that connection and vision to a wide variety of technical and executive audiences; and the technical skills not only to build demos and execute proofs of concept but also to provide consultative assistance on architecture and implementation.

The person we're looking for shares our passion for reinventing the data platform and thrives in a dynamic environment. That means having the flexibility and willingness to jump in and get it done to make Snowflake and our customers successful. It means keeping up to date on ever-evolving data and analytics technologies and working collaboratively with a broad range of people inside and outside the company to be an authoritative resource for Snowflake and its customers.

AS A SOLUTIONS ARCHITECT AT SNOWFLAKE YOU WILL:
- Be a technical expert on all aspects of Snowflake
- Present Snowflake technology and vision to executives and technical contributors at customers
- Position yourself as a trusted advisor to key customer stakeholders, with a focus on achieving their desired business outcomes
- Drive project teams towards the common goal of accelerating the adoption of Snowflake solutions
- Demonstrate and communicate the value of Snowflake technology throughout the engagement, from demo to proof of concept to running workshops, design sessions, and implementation with customers and stakeholders
- Create repeatable processes and documentation as a result of customer engagements
- Collaborate on and create industry-based solutions that are relevant to other customers in order to drive more value out of Snowflake
- Deploy Snowflake following best practices, including ensuring knowledge transfer so that customers are correctly enabled and can extend the capabilities of Snowflake on their own
- Maintain a deep understanding of competitive and complementary technologies and vendors, and how to position Snowflake in relation to them
- Work with System Integrator consultants at a deep technical level to successfully position and deploy Snowflake in customer environments
- Be able to position and sell the value of Snowflake professional services for ongoing delivery

OUR IDEAL SOLUTIONS ARCHITECT WILL HAVE:
- Minimum 10 years of experience working with customers in a pre-sales or post-sales technical role
- University degree in computer science, engineering, mathematics, or related fields, or equivalent experience
- Outstanding skills presenting to both technical and executive audiences, whether impromptu on a whiteboard or using presentations and demos
- Understanding of the complete data analytics stack and workflow, from ETL to data platform design to BI and analytics tools
- Strong skills in databases, data warehouses, and data processing
- Extensive hands-on expertise with SQL and SQL analytics
- Proficiency in implementing data security measures, access controls, and design within the Snowflake platform
- Extensive knowledge of and experience with large-scale database technology (e.g., Netezza, Exadata, Teradata, Greenplum)
- Software development experience with Python, Java, Spark, and other scripting languages
- Internal and/or external consulting experience
- Deep collaboration with Account Executives and Sales Engineers on account strategy

BONUS POINTS FOR EXPERIENCE WITH THE FOLLOWING:
- 1+ years of practical Snowflake experience
- Non-relational platforms and tools for large-scale data processing (e.g., Hadoop, HBase)
- Common BI and data exploration tools (e.g., Microstrategy, Looker, Tableau, PowerBI)
- OLAP data modeling and data architecture
- Large-scale infrastructure-as-a-service platforms (e.g., Amazon AWS, Microsoft Azure, GCP)
- AWS services such as S3, Kinesis, Elastic MapReduce, and Data Pipeline
- Delivering data migration projects
- A core vertical such as Financial Services, Retail, Media & Entertainment, Healthcare, or Life Sciences
- Hands-on Python, Java, or Scala

WHY JOIN OUR PROFESSIONAL SERVICES TEAM AT SNOWFLAKE:
- A unique opportunity to work on a truly disruptive software product
- Unique, hands-on experience with bleeding-edge data warehouse technology
- The chance to develop, lead, and execute industry-changing initiatives
- Learning from the best: a dedicated, experienced team of professionals

Snowflake is growing fast, and we're scaling our team to help enable and accelerate our growth. We are looking for people who share our values, challenge ordinary thinking, and push the pace of innovation while building a future for themselves and Snowflake. How do you want to make your impact?

For jobs located in the United States, please visit the job posting on the Snowflake Careers Site for salary and benefits information: careers.snowflake.com
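As a rough illustration of the hands-on SQL work named above (the account, credentials, stage, and table names are hypothetical, and this is a proof-of-concept sketch rather than a best-practice deployment), loading and querying Snowflake from Python with the snowflake-connector-python package might look like:

    import snowflake.connector

    # Hypothetical credentials; a real deployment would use key-pair
    # auth or SSO, never a password embedded in code.
    conn = snowflake.connector.connect(
        account="myorg-myaccount",
        user="poc_user",
        password="***",
        warehouse="POC_WH",
        database="POC_DB",
        schema="PUBLIC",
    )

    cur = conn.cursor()
    # Stage files from S3 and load them; a private bucket would also
    # need a storage integration or credentials on the stage.
    cur.execute("CREATE OR REPLACE STAGE s3_stage URL='s3://my-bucket/orders/'")
    cur.execute("COPY INTO orders FROM @s3_stage "
                "FILE_FORMAT=(TYPE=CSV SKIP_HEADER=1)")
    # Simple analytic query over the loaded table.
    cur.execute("""
        SELECT region, SUM(amount) AS total
        FROM orders
        GROUP BY region
        ORDER BY total DESC
    """)
    for region, total in cur.fetchall():
        print(region, total)
    conn.close()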

Posted 5 days ago

Apply

1.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At eBay, we're more than a global ecommerce leader: we're changing the way the world shops and sells. Our platform empowers millions of buyers and sellers in more than 190 markets around the world. We're committed to pushing boundaries and leaving our mark as we reinvent the future of ecommerce for enthusiasts. Our customers are our compass, authenticity thrives, bold ideas are welcome, and everyone can bring their unique selves to work, every day. We're in this together, sustaining the future of our customers, our company, and our planet. Join a team of passionate thinkers, innovators, and dreamers, and help us connect people and build communities to create economic opportunity for all.

Machine Learning Engineer
Level: T24
People Manager: No
Location: India
Team: BX Recs
Hiring Manager: rishaagarwal@ebay.com
Recruiter: TBD
Date: Jul 3, 2025

Job Description
Looking for a company that inspires passion, courage, and creativity, where you can be on the team shaping the future of global commerce? Want to shape how millions of people buy, sell, connect, and share around the world? If you're interested in joining a purpose-driven community that is dedicated to crafting an ambitious and inclusive work environment, join eBay, a company you can be proud to be with.

Our Recommendations team delivers recommendations at scale and in near real time to our buyers on our website and native app platforms. Recommendations are a core part of how our buyers navigate eBay's vast and varied inventory. Our team develops state-of-the-art recommendation systems, including deep-learning-based retrieval systems for personalized recommendations, machine-learned ranking models, GenAI/LLM-powered recommendations, and advanced MLOps in a high-volume-traffic industrial e-commerce setting. We are building cutting-edge recommender systems powered by the latest ML, NLP, LLM/GenAI/RAG, and AI technologies. Additionally, we are building production integrations with Google GCP Vertex AI platforms to supercharge our item recommendation algorithms. Come join our innovative engineering and applied research team!

This Is An Opportunity To:
- Influence how people will interact with eBay's recommender systems in the future, and how recommender systems technology will evolve
- Work with unique and large data sets of unstructured multimodal data representing eBay's vast and varied inventory, including billions of items and millions of users
- Develop and deploy state-of-the-art AI models to production with a direct, measurable impact on eBay buyers
- Deploy big data technology and large-scale data pipelines
- Drive marketplace GMB as well as advertising revenue via organic and sponsored recommendations

Qualifications:
- MS in Computer Science or a related area with 1+ years of relevant work experience (or BS/BA with 3+ years) in Engineering/Machine Learning/AI
- Experience building large-scale distributed applications and expertise in an OO/functional language (Scala, Java, etc.)
- Experience building with NoSQL databases and key-value stores (MongoDB, Redis, etc.)
- Generalist with a can-do attitude and a willingness to learn and pick up new skill sets as needed
- Experience with cloud services is a plus (GCP is a double plus)
- Experience with big data pipelines (Hadoop, Spark, Flink) is a plus
- Experience in applied AI research and industrial recommendation systems is a plus
- Experience with Large Language Models (LLMs) and prompt engineering is a plus

Links To Some Of Our Previous Work:
- Tech Blog 2025 (Multimodal GenAI)
- Tech Blog 2025 (GenAI Agentic Platform)
- RecSys 2024 Workshop paper
- Google Cloud Blog 2024
- eBay Tech Blog 2023
- eBay Tech Blog 2022
- RecSys 2021 paper

Please see the Talent Privacy Notice for information regarding how eBay handles your personal data collected when you use the eBay Careers website or apply for a job with eBay.

eBay is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, sexual orientation, gender identity, veteran status, disability, or other legally protected status. If you have a need that requires accommodation, please contact us at talent@ebay.com. We will make every effort to respond to your request for accommodation as soon as possible. View our accessibility statement to learn more about eBay's commitment to ensuring digital accessibility for people with disabilities.
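As a hedged, toy-scale illustration of the deep-learning-based retrieval idea described above (the vectors and item IDs are invented; a production system would learn embeddings with something like a two-tower model and serve them from an approximate nearest-neighbor index, e.g. on Vertex AI), a minimal Python sketch of embedding-based candidate retrieval:

    import numpy as np

    # Toy catalog: one unit-normalized embedding row per item.
    item_ids = ["item_a", "item_b", "item_c", "item_d"]
    item_vecs = np.random.default_rng(0).normal(size=(4, 8))
    item_vecs /= np.linalg.norm(item_vecs, axis=1, keepdims=True)

    def recommend(user_vec: np.ndarray, k: int = 2) -> list[str]:
        """Return the top-k items by cosine similarity to the user vector."""
        user_vec = user_vec / np.linalg.norm(user_vec)
        scores = item_vecs @ user_vec          # cosine similarity (unit vectors)
        top = np.argsort(scores)[::-1][:k]     # highest-scoring items first
        return [item_ids[i] for i in top]

    # Hypothetical user embedding, e.g. an average of recently viewed items.
    print(recommend(np.random.default_rng(1).normal(size=8)))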

Posted 5 days ago

Apply