0 years
4 - 8 Lacs
Calcutta
On-site
Job description
Some careers have more impact than others. If you're looking for a career where you can make a real impression, join HSBC and discover how valued you'll be. HSBC is one of the largest banking and financial services organizations in the world, with operations in 62 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realize their ambitions.

We are currently seeking an experienced professional to join our team in the role of Manager - Business Consulting.

Principal responsibilities
- Perform basic data ETL (Extract-Transform-Load) tasks: extract data from source systems, clean it, handle exceptions, and prepare it to feed into various systems/tools depending on the projects/PODs/delivery channels.
- Test and validate data models/data tables/data assets.
- Understand the data thoroughly and draw business insights, which need to be shared with various business stakeholders for informed decision-making and process improvement.
- Facilitate and manage regular meetings with senior managers, key stakeholders, subject matter experts and delivery partners to reach targeted milestones on time.
- Remain flexible in approach, ready to understand ever-changing business scenarios and cater to dynamic requirements in a smart and prompt manner.
- Drive automation efforts across multiple reports/processes, understand gaps in the process, and drive projects to move away from tactical solutions and adopt a more strategic way of delivering reports and insights.
- Adherence to controls and governance is a must while dealing with the data. The candidate must understand the basic risks arising from managing sensitive banking data and act accordingly to manage those risks in their everyday work.
- Complete mandatory training on time and be self-motivated to take up the ample training opportunities the bank provides to upskill for the future of work.
- Demonstrate excellent communication, problem-solving and critical-thinking skills.

Requirements
- Hands-on experience in Python, SQL, SAS, Excel and PySpark required.
- Prior project management experience with knowledge of Agile methodology and exposure to project management tools like Jira, Confluence, GitHub.
- Exposure to big data and Hadoop data lakes is a big plus.
- Knowledge of cloud platforms like GCP, Azure, AWS is a big plus.
- Expertise in visualization tools like QlikSense, Tableau, Power BI is good to have.
- Excellent verbal and written communication.
- Good stakeholder management skills.

You'll achieve more at HSBC. HSBC is an equal opportunity employer committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. We encourage applications from all suitably qualified persons irrespective of, but not limited to, their gender or genetic information, sexual orientation, ethnicity, religion, social status, medical care leave requirements, political affiliation, disability, color, national origin, veteran status, etc. We consider all applications based on merit and suitability to the role. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. ***Issued By HSBC Electronic Data Processing (India) Private LTD***
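The extract-clean-load workflow with exception handling described in the responsibilities above can be sketched in plain Python. This is a minimal illustration only; the record fields, cleaning rules, and rejection handling are hypothetical:

```python
def clean_record(raw):
    """Validate and normalize one raw record; raise ValueError on bad data."""
    amount = float(raw["amount"])          # may raise ValueError on bad input
    if amount < 0:
        raise ValueError("negative amount")
    return {"account": raw["account"].strip(), "amount": round(amount, 2)}

def etl(source_rows):
    """Extract -> transform (with exception handling) -> collect rows for load."""
    loaded, rejected = [], []
    for raw in source_rows:
        try:
            loaded.append(clean_record(raw))
        except (ValueError, KeyError) as exc:
            rejected.append((raw, str(exc)))   # route bad rows aside, don't crash
    return loaded, rejected

rows = [{"account": " A1 ", "amount": "10.5"}, {"account": "A2", "amount": "oops"}]
good, bad = etl(rows)
print(good)      # [{'account': 'A1', 'amount': 10.5}]
print(len(bad))  # 1
```

The same shape scales up: in practice the source would be a database or file feed and the load step would write to a downstream system, but the clean/except/route pattern is the core of the task described.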
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist.

In this role, you will:
- Data pipeline integration and management: bring expertise in Scala-Spark/Python-Spark development and work with an Agile application development team to implement data strategies.
- Design and implement scalable data architectures to support the bank's data needs.
- Develop and maintain ETL (Extract, Transform, Load) processes.
- Ensure the data infrastructure is reliable, scalable, and secure.
- Oversee the integration of diverse data sources into a cohesive data platform.
- Ensure data quality, data governance, and compliance with regulatory requirements.
- Monitor and optimize data pipeline performance.
- Troubleshoot and resolve data-related issues promptly.
- Implement monitoring and alerting systems for data processes.
- Troubleshoot and resolve technical issues, optimizing system performance and ensuring reliability.
- Create and maintain technical documentation for new and existing systems, ensuring that information is accessible to the team.
- Implement and monitor solutions that identify both system bottlenecks and production issues.

Requirements
To be successful in this role, you should meet the following requirements:
- 5+ years of experience in data engineering or a related field, with hands-on experience building and maintaining ETL data pipelines.
- Good experience designing and developing Spark applications using Scala or Python.
- Good experience with database technologies (SQL, NoSQL), data warehousing solutions, and big data technologies (Hadoop, Spark).
- Proficiency in programming languages such as Python, Java, or Scala.
- Optimization and performance tuning of Spark applications.
- Git experience creating, merging and managing repos.
- Perform unit testing and performance testing.
- Good understanding of ETL processes and data pipeline orchestration tools like Airflow, Control-M.
- Strong problem-solving skills and ability to work under pressure.
- Excellent communication and interpersonal skills.

The successful candidate will also meet the following requirements (good to have):
- Experience in the banking or financial services industry.
- Familiarity with regulatory requirements related to data security and privacy in the banking sector.
- Experience with cloud platforms (Google Cloud) and their data services.
- Certifications in cloud platforms (AWS Certified Data Analytics, Google Professional Data Engineer, etc.).

You'll achieve more when you join HSBC. www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment.
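Orchestration tools such as Airflow or Control-M, mentioned in the requirements above, schedule pipeline tasks by topologically ordering a dependency graph. The core idea can be sketched with the standard library; the task names below are hypothetical:

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on (an Airflow-style DAG).
dag = {
    "extract": set(),
    "transform": {"extract"},
    "validate": {"transform"},
    "load": {"validate"},
    "report": {"load"},
}

# static_order yields tasks so every dependency runs before its dependents.
order = list(TopologicalSorter(dag).static_order())
print(order)  # ['extract', 'transform', 'validate', 'load', 'report']
```

Real orchestrators add scheduling, retries, and parallelism on top, but the dependency-ordering step is exactly this.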
Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSDI
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Senior Consultant Specialist.

In this role, you will:
- Influence a cross-functional/cross-cultural team and the performance of individuals/teams against performance objectives and plans.
- Endorse team engagement initiatives, fostering an environment which encourages learning and collaboration to build a sense of community.
- Create environments where only the best will do and high standards are expected, regularly achieved and appropriately rewarded; encourage and support continual improvements within the team based on ongoing feedback.
- Develop a network of professional relationships across Wholesale Data & Analytics and our stakeholders to improve collaborative working and encourage openness - sharing ideas, information and collateral.
- Encourage individuals to network and collaborate with colleagues beyond their own business areas and/or the Group to shape change and benefit the business and its customers.
- Support the PMO and delivery managers in documenting accurate status reports as required, in a timely manner.

Requirements
To be successful in this role, you should meet the following requirements:
- Serve as voice of the customer.
- Develop/scope and define backlog items (epics/features/user stories) that guide the development team.
- Managing the capture,
analysis and documentation of data & analytics requirements and processes (both business and IT).
- Create analyses of customer journeys and own the product roadmap.
- Manage implementation of data & analytics solutions.
- Manage change interventions such as training and adoption planning, through to stakeholder management.
- Track and document progress and manage delivery status.
- Help define and track measures of success for products.
- Risk identification, risk reporting and devising interventions to mitigate risks.
- Budget management and forecasting.
- Management of internal delivery teams and/or external service providers.
- Data platform, service and product owner experience.
- Experience applying Design Thinking.
- A data-driven mindset.
- Outstanding cross-platform knowledge (incl. Teradata, Hadoop and GCP); understands the nuances of delivery across these platforms, both technically and functionally, and how they impact delivery.
- Communication capabilities, decision-making, problem-solving skills, lateral thinking, analytical and interpersonal skills.
- Experience leading a team and managing multiple, competing priorities and demands in a dynamic environment.
- Highly analytical with strong attention to detail.
- Demonstrable track record of delivery in a Banking and Financial Markets context.
- A strong and diverse technical background and foundation (ETL, SQL, NoSQL, APIs, Data Architecture, Data Management principles and patterns, Data Ingest, Data Refinery, Data Provision, etc.).
- Experience customising and managing integration tools, databases, warehouses and analytical tools.
- A passion for designing towards consistency and efficiency, always striving for continuous improvements via automated processes.
- Experience in Agile project methodologies and tools such as JIRA and Confluence.

You'll achieve more when you join HSBC. www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued, respected and opinions count.
We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSDI
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Ernakulam, Pune, Thrissur
Hybrid
Job Title: Data Engineer Azure Big Data | Allianz | Pune/Bangalore
Company: Claidroid Technologies
Location: Pune/Trivandrum (Hybrid)

Job Description:
At Claidroid, we are building a data-driven future, and our Data Development & Engineering (DDE) team is at the heart of it. We're looking for an experienced Data Engineer who can design and build scalable pipelines, integrate diverse datasets, and enable analytics on our Azure Global Data Platform. If you thrive on solving complex data challenges, mentoring others, and working with cutting-edge tech in an agile environment, we want to hear from you.

Key Responsibilities:
- Design & develop ETL/ELT pipelines, data warehouses, and provisioning systems on Azure.
- Integrate data from SQL servers, internal systems, external feeds, and real-time streams via Kafka.
- Assemble and transform large, complex data sets meeting business and performance requirements.
- Collaborate with Data Governance, Data Science, and BI teams to deliver business value.
- Coach and mentor junior engineers, instill best practices, and champion quality and reliability.

Skills & Experience:
Mandatory Skills:
- 4+ years of experience as a Data Engineer
- Programming: Python, SQL, PySpark
- Big Data: Hadoop, Hive, Spark, Kafka, HBase
- Cloud: Azure Data Lake, Azure Storage, Azure Fundamentals
- NoSQL: Cassandra, MongoDB
- Strong understanding of ETL, data warehousing, and pipelines
- Linux basics & scripting, GitHub, schedulers (Luigi/CRON)

Nice-to-Have:
- Financial Services / Insurance domain knowledge
- Tools: ELK Stack, QlikSense, Grafana, Prometheus
- Postgres, SAS
- Azure Kubernetes, Docker, CI/CD, TDDQF

Why Join Us?
At Claidroid, you'll work on high-impact, enterprise-grade projects that challenge you to innovate and deliver. We foster a collaborative, growth-oriented culture where your expertise makes a real difference.

Interested? Apply now with your updated resume at pratik.chavan@claidroid.com
Claidroid Technologies - Where expertise meets innovation.
Posted 1 week ago
5.0 - 10.0 years
0 - 0 Lacs
gurugram
On-site
1. 5-12 years of experience in Big Data & data-related technology
2. Expert-level understanding of distributed computing principles
3. Expert-level knowledge and experience in Apache Spark
4. Hands-on programming with Python
5. Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop
6. Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming
7. Experience with messaging systems such as Kafka or RabbitMQ
8. Good understanding of Big Data querying tools such as Hive and Impala
9. Experience integrating data from multiple data sources such as RDBMS (SQL Server, Oracle), ERP, and files
10. Good understanding of SQL queries, joins, stored procedures, relational schemas
11. Experience with NoSQL databases such as HBase, Cassandra, MongoDB
12. Knowledge of ETL techniques and frameworks
13. Performance tuning of Spark jobs
14. Experience with native cloud data services: AWS or Azure Databricks
15. Ability to lead a team efficiently
16. Experience designing and implementing Big Data solutions
17. Practitioner of Agile methodology
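Stream-processing systems like those in item 6 typically aggregate events over time windows. The core idea can be illustrated with a pure-Python tumbling-window count; the event stream and 10-second window size below are invented for illustration:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Group (timestamp, key) events into fixed, non-overlapping time windows
    and count occurrences per key -- the essence of a stream aggregation."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = (ts // window_seconds) * window_seconds
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

events = [(3, "click"), (7, "view"), (12, "click"), (14, "click")]
print(tumbling_window_counts(events, 10))
# {0: {'click': 1, 'view': 1}, 10: {'click': 2}}
```

Engines like Spark Streaming apply the same logic distributed over micro-batches, and add sliding windows, watermarks, and state management on top.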
Posted 1 week ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Flutter Entertainment
Flutter Entertainment is the world's largest sports betting and iGaming operator, with 13.9 million average monthly players worldwide and an annual revenue of $14Bn in 2024. We have a portfolio of iconic brands, including Paddy Power, Betfair, FanDuel, PokerStars, Junglee Games and Sportsbet. Flutter Entertainment is listed on both the New York Stock Exchange (NYSE) and the London Stock Exchange (LSE). In 2024, we were recognized in TIME's 100 Most Influential Companies under the 'Pioneers' category, a testament to our innovation and impact. Our ambition is to transform global gaming and betting to deliver long-term growth and a positive, sustainable future for our sector. Together, we are Changing the Game!

Working at Flutter is a chance to work with a growing portfolio of brands across a range of opportunities. We will support you every step of the way to help you grow. Just like our brands, we ensure our people have everything they need to succeed.

Flutter Entertainment India
Our Hyderabad office, located in one of India's premier technology parks, is the Global Capability Center for Flutter Entertainment. A center of expertise and innovation, this hub is now home to over 900 talented colleagues working across Customer Service Operations, Data and Technology, Finance Operations, HR Operations, Procurement Operations, and other key enabling functions. We are committed to crafting impactful solutions for all our brands and divisions to power Flutter's incredible growth and global impact. With the scale of a leader and the mindset of a challenger, we're dedicated to creating a brighter future for our customers, colleagues, and communities.

Overview Of The Role
Our Data Science and AI Competence Center is looking for a motivated and versatile expert who will work on developing and deploying AI & machine learning solutions for our business.
As a Senior Data Scientist (ML and AI), you will collaborate with an industry-leading team and enhance your skills in delivering groundbreaking solutions to continue driving innovation. This role follows a hybrid approach to working, allowing you to combine working from home with working in our modern office in Hyderabad, India.

What You Will Do
- Across areas such as Sports Betting, Gaming, and other products, develop advanced ML engineering solutions to streamline AI solutions, deploying a variety of algorithms in production and setting ML-Ops standards together with the Data Scientists.
- Work with diverse teams to become an architect of innovation with AI, driving the company's strategic edge by transforming raw data into actionable insights in this dynamic and thrilling environment.

To Excel In This Role, You Will Need
- A degree in a STEM subject along with 4 to 6 years of experience as a Data Scientist or ML Engineer.
- Strong proficiency in PySpark with a deep understanding of statistical and machine learning techniques.
- Experience with "Big Data" platforms (Google Cloud, Amazon Web Services, Azure, Oracle Cloud) and engines (Apache Hadoop, Apache Kafka, Apache Spark).
- Experience delivering several production-grade machine learning products in an academic or corporate environment is crucial.
- Hands-on experience in ML techniques such as tree-based learners, deep learning (e.g. CNNs, RNNs, GNNs & Transformers), reinforcement learning and/or unsupervised methods.
- Ability to manage the ML-Ops lifecycle (deploy-monitor-maintain-evolve).
- Knowledge of ML-Ops/DevOps technology (Docker, Kubernetes, CI/CD pipelines, ...).

Benefits We Offer
Aside from a generous base salary, we have a phenomenal benefits and rewards program that is designed to encourage personal and career development. This package includes:
- Access to Learnerbly, Udemy, and a Self-Development Fund for upskilling.
- Career growth through Internal Mobility Programs.
- Comprehensive Health Insurance for you and dependents.
- Well-Being Fund and 24/7 Assistance Program for holistic wellness.
- Hybrid Model: 2 office days/week with flexible leave policies, including maternity, paternity, and sabbaticals.
- Free Meals, Cab Allowance, and a Home Office Setup Allowance.
- Employer PF Contribution, gratuity, Personal Accident & Life Insurance.
- Sharesave Plan to purchase discounted company shares.
- Volunteering Leave and Team Events to build connections.
- Recognition through the Kudos Platform and Referral Rewards.

Why Choose Us
Flutter is an equal-opportunity employer and values the unique perspectives and experiences that everyone brings. Our message to colleagues and stakeholders is clear: everyone is welcome, and every voice matters. We have ambitious growth plans and goals for the future. Here's an opportunity for you to play a pivotal role in shaping the future of Flutter Entertainment India.
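The monitor step of the deploy-monitor-maintain-evolve ML-Ops lifecycle mentioned in this posting often begins with simple feature-drift checks. A toy sketch in plain Python; the threshold, feature values, and the mean-shift heuristic itself are illustrative assumptions, not any particular platform's method:

```python
def mean(xs):
    return sum(xs) / len(xs)

def drift_detected(baseline, live, threshold=0.25):
    """Flag retraining when the live feature mean drifts from the training
    baseline by more than `threshold` (relative) -- a minimal monitor step."""
    shift = abs(mean(live) - mean(baseline)) / (abs(mean(baseline)) or 1.0)
    return shift > threshold

baseline = [1.0, 1.2, 0.9, 1.1]   # feature distribution seen at training time
stable   = [1.05, 1.0, 1.15]      # production data, similar distribution
drifted  = [2.0, 2.2, 1.9]        # production data after an upstream change

print(drift_detected(baseline, stable))   # False
print(drift_detected(baseline, drifted))  # True
```

Production monitors would use distributional tests over many features and feed alerts back into retraining pipelines, but the compare-against-baseline loop is the same.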
Posted 1 week ago
3.0 years
2 - 17 Lacs
Kochi, Kerala
On-site
1. Software Development Engineer
Company: Triumph Edge Innovations Pvt. Ltd.
Location: Kozhikode, Kerala, India
Job Type: Full-time

Job Summary: Triumph Edge is seeking a skilled Software Development Engineer to build robust, scalable, and secure software systems for our AI-powered defense solutions. You will be crucial in developing the backend infrastructure and integrating AI/ML models into operational platforms for the Indian defense sector.

Key Responsibilities:
- Design, develop, and maintain high-performance, secure software applications.
- Integrate AI/ML models and data pipelines into production systems.
- Collaborate with AI/ML Engineers and Data Engineers to ensure seamless system functionality.
- Implement secure coding practices and ensure compliance with defense-grade security standards.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science or related field.
- [Specify years of experience, e.g., 3+ years] in software development.
- Proficiency in Python, Java, or C++.
- Experience with API development, databases (SQL/NoSQL), and cloud platforms.
- Strong understanding of software development lifecycle and secure coding principles.

2. Project Manager (AI/ML Focus)
Company: Triumph Edge Innovations Pvt. Ltd.
Location: Kozhikode, Kerala, India
Job Type: Full-time

Job Summary: Triumph Edge is looking for an experienced Project Manager to lead our cutting-edge AI/ML projects for the Indian defense sector. You will be responsible for planning, executing, and delivering complex R&D initiatives, ensuring projects align with strategic objectives and are completed on time and within budget.

Key Responsibilities:
- Define project scope, goals, and deliverables for AI/ML development in defense.
- Lead cross-functional teams (AI/ML, Software, Data Engineers, Domain Experts).
- Manage project timelines, budgets, resources, and risks.
- Communicate project status, challenges, and solutions to stakeholders, including defense partners.
- Ensure adherence to quality standards and regulatory compliance for defense applications.

Required Qualifications:
- Bachelor's degree in Engineering, Computer Science, or a related field; MBA or PMP certification preferred.
- [Specify years of experience, e.g., 7+ years] in project management, with significant experience in AI/ML or deep tech projects.
- Proven ability to manage complex R&D initiatives.
- Excellent leadership, communication, and stakeholder management skills.
- Understanding of the Indian defense ecosystem and project procurement processes.

3. Data Engineer
Company: Triumph Edge Innovations Pvt. Ltd.
Location: Kozhikode, Kerala, India
Job Type: Full-time

Job Summary: Triumph Edge is seeking a skilled Data Engineer to build and optimize robust data pipelines for our AI-powered image analysis solutions in the Indian defense sector. You will be responsible for managing large volumes of diverse data, ensuring its quality, accessibility, and security for AI model training and deployment.

Key Responsibilities:
- Design, construct, and maintain scalable data pipelines for ingesting, transforming, and storing image and video data.
- Ensure data quality, integrity, and security for sensitive defense datasets.
- Collaborate with AI/ML Engineers to provide clean, structured data for model development.
- Implement and manage data storage solutions (databases, data lakes) and ETL processes.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- [Specify years of experience, e.g., 2+ years] in data engineering.
- Proficiency in Python and SQL.
- Experience with big data technologies (e.g., Apache Spark, Hadoop) and cloud data services (AWS, Azure, or GCP).
- Strong understanding of data modeling, data warehousing, and data governance.
Job Types: Full-time, Part-time, Permanent, Fresher, Freelance Contract length: 24 months Pay: ₹266,866.36 - ₹1,794,918.81 per year Expected hours: 30 – 48 per week Benefits: Cell phone reimbursement Education: Bachelor's (Preferred) Work Location: In person Speak with the employer +91 9380767254
Posted 1 week ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We're Hiring: Data Engineer – Azure
Location: Pune / Trivandrum | Mode: Hybrid / On-Site

Do you thrive on architecting intelligent, cloud-native data platforms that drive real-time enterprise transformation? Claidroid Technologies Pvt. Ltd., a deep-tech innovation company at the forefront of AI, cloud infrastructure, cybersecurity, and platform engineering, is looking for a skilled Data Engineer – Azure to lead the design and deployment of global-scale, high-impact data infrastructure.

About the Project
This role places you at the center of a mission-critical data modernization program for a global enterprise. You'll be responsible for building and operationalizing secure, scalable, and performant data pipelines on Azure, empowering enterprise-wide analytics, decision intelligence, and machine learning capabilities.

About Claidroid
Claidroid is a deep-tech solutions provider solving complex real-world problems at the convergence of cloud computing, AI/ML and AIOps, cybersecurity and IAM, geospatial intelligence, and ServiceNow and platform engineering. With operational hubs in India, Finland, and the USA, we work with global enterprises to build resilient, intelligent, and secure digital infrastructure. Our engagements span ETL modernization, data observability, cloud security, and enterprise-scale data engineering.
Role Overview: Data Engineer – Azure
In this role, you will:
- Design and develop scalable ETL/ELT pipelines using Azure-native tools
- Ingest and transform both structured and unstructured datasets
- Build domain-specific data lakes and data warehouse layers
- Implement real-time data streaming architectures using Kafka and Spark
- Work extensively with Python, PySpark, Hive, and NoSQL databases such as MongoDB and Cassandra
- Integrate automation, monitoring, and TDD practices across data workflows
- Collaborate with BI, analytics, governance, and infrastructure teams
- Support production releases and CI/CD-enabled data deployment pipelines

Who You Are
- 4–8 years of experience in hands-on data engineering roles
- Strong proficiency in Python, SQL, PySpark, and Unix/Linux scripting
- Deep expertise with Azure Data Lake, Azure Storage, Docker, and Kubernetes
- Practical experience with Kafka, Hive, Spark, Hadoop, and the ELK stack
- Knowledge of metadata-driven ingestion frameworks and data streaming patterns
- Familiarity with Agile delivery, CI/CD pipelines, and automated testing practices
- Bonus: experience with Postgres, Oracle, SAS, or visualization tools like QlikSense
- A detail-oriented problem-solver with a performance-driven mindset

Position Details
Role: Data Engineer – Azure
Experience: 4–8 years
Location: Pune / Trivandrum (Hybrid)
Start Date: Immediate or within 30 days

Why Join Claidroid?
- Work on global transformation programs with real-world impact
- Join a cutting-edge deep-tech company shaping the future of cloud and data ecosystems
- Solve challenging problems across domains like finance, infrastructure, and cybersecurity
- Collaborate with exceptional engineers, architects, and innovators
- Accelerate your career in a purpose-driven, fast-paced environment

Ready to lead the next generation of enterprise data systems? Apply today and be a part of Claidroid's mission to transform data into intelligence.
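A metadata-driven ingestion framework, as named in the requirements above, routes each source through a pipeline described by configuration rather than hand-written code per source. A minimal sketch of the idea; the source names, config fields, and step names are invented for illustration:

```python
# Each entry describes a source declaratively; the engine interprets it.
SOURCES = [
    {"name": "orders",    "format": "csv"},
    {"name": "customers", "format": "json"},
]

def plan_ingestion(sources):
    """Turn declarative source metadata into concrete pipeline steps,
    so adding a source means adding config, not code."""
    steps = []
    for src in sources:
        reader = f"read_{src['format']}"        # e.g. read_csv / read_json
        steps.append((src["name"], [reader, "validate_schema", "write_lake"]))
    return steps

for name, pipeline in plan_ingestion(SOURCES):
    print(name, "->", " | ".join(pipeline))
# orders -> read_csv | validate_schema | write_lake
# customers -> read_json | validate_schema | write_lake
```

In a real framework the planned steps would dispatch to Spark or Azure Data Factory activities, but the config-to-plan translation is the defining pattern.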
Posted 1 week ago
4.0 years
10 - 13 Lacs
Mumbai Metropolitan Region
On-site
Education: Any graduate, but preferably B.E./B.Tech/MCA in Computer Science
Experience: 4+ years' experience, especially in ClickHouse technology

Mandatory Skills/Knowledge
- Helping build production-grade systems based on ClickHouse: advising how to design schemas, plan clusters, etc. Environments range from single-node setups to clusters with hundreds of nodes, cloud, and managed ClickHouse services.
- Working on infrastructure projects related to ClickHouse.
- Improving ClickHouse itself: fixing bugs, improving docs, creating test cases, etc.
- Studying new usage patterns, ClickHouse functions, and integration with other products.
- Working with the community: GitHub, Stack Overflow, Telegram.
- Installing multi-node clusters; configuring, backing up, recovering, and maintaining ClickHouse databases.
- Monitoring and optimizing database performance, ensuring high availability and responsiveness.
- Troubleshooting database issues; identifying and resolving performance bottlenecks.
- Designing and implementing database backup and recovery strategies.
- Developing and implementing database security policies and procedures.
- Collaborating with development teams to optimize database schema design and queries.
- Providing technical guidance and support to development and operations teams.
- Experience with big data stack components like Hadoop, Spark, Kafka, NiFi, ...
- Experience with data science/data analysis.
- Knowledge of SRE/DevOps stacks: monitoring/system management tools (Prometheus, Ansible, ELK, ...).
- Version control using Git.
- Handling support calls from customers using ClickHouse. This includes diagnosing problems connecting to ClickHouse, designing applications, deploying/upgrading ClickHouse, and operations.

Skills: Git, data science, troubleshooting, design, database, backup and recovery, stack, schema design, cluster planning, database security, infrastructure projects, database administration, database performance monitoring, operations, big data technologies (Hadoop, Spark, Kafka, NiFi), DevOps/SRE tools (Prometheus, Ansible, ELK), ClickHouse
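Schema-design advice for ClickHouse typically centers on choosing the table engine, partition key, and sorting key. A sketch that renders a hypothetical MergeTree table definition as a Python string (the table and column names are invented; MergeTree, PARTITION BY, and ORDER BY are standard ClickHouse DDL clauses):

```python
def merge_tree_ddl(table, columns, partition_by, order_by):
    """Render a ClickHouse MergeTree CREATE TABLE statement. The partition
    and sorting keys drive data pruning and compression in ClickHouse."""
    cols = ",\n    ".join(f"{name} {ctype}" for name, ctype in columns)
    return (
        f"CREATE TABLE {table} (\n    {cols}\n)\n"
        f"ENGINE = MergeTree\n"
        f"PARTITION BY {partition_by}\n"
        f"ORDER BY ({', '.join(order_by)})"
    )

ddl = merge_tree_ddl(
    table="events",
    columns=[("event_date", "Date"), ("user_id", "UInt64"), ("action", "String")],
    partition_by="toYYYYMM(event_date)",
    order_by=["event_date", "user_id"],
)
print(ddl)
```

Monthly partitioning with a date-leading sorting key is a common starting point for event tables; the right choice depends on query patterns and data volume.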
Posted 1 week ago
6.0 years
10 - 15 Lacs
Mumbai Metropolitan Region
On-site
Education: Any graduate, but preferably B.E./B.Tech/MCA in Computer Science
Experience: 6+ years' experience, especially in ClickHouse technology

Mandatory Skills/Knowledge
- Helping build production-grade systems based on ClickHouse: advising how to design schemas, plan clusters, etc. Environments range from single-node setups to clusters with hundreds of nodes, cloud, and managed ClickHouse services.
- Working on infrastructure projects related to ClickHouse.
- Improving ClickHouse itself: fixing bugs, improving docs, creating test cases, etc.
- Studying new usage patterns, ClickHouse functions, and integration with other products.
- Working with the community: GitHub, Stack Overflow, Telegram.
- Installing multi-node clusters; configuring, backing up, recovering, and maintaining ClickHouse databases.
- Monitoring and optimizing database performance, ensuring high availability and responsiveness.
- Troubleshooting database issues; identifying and resolving performance bottlenecks.
- Designing and implementing database backup and recovery strategies.
- Developing and implementing database security policies and procedures.
- Collaborating with development teams to optimize database schema design and queries.
- Providing technical guidance and support to development and operations teams.
- Experience with big data stack components like Hadoop, Spark, Kafka, NiFi, ...
- Experience with data science/data analysis.
- Knowledge of SRE/DevOps stacks: monitoring/system management tools (Prometheus, Ansible, ELK, ...).
- Version control using Git.
- Handling support calls from customers using ClickHouse. This includes diagnosing problems connecting to ClickHouse, designing applications, deploying/upgrading ClickHouse, and operations.

Skills: data science, troubleshooting, design, big data technologies, SRE, database, backup and recovery, stack, teams, schema design, version control (Git), database security, monitoring tools, operations, customer support, DevOps, performance optimization, ClickHouse, database design
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
coimbatore, tamil nadu
On-site
As a Machine Learning Engineer at our company located in Coimbatore, Tamil Nadu, India, you will play a crucial role in developing, validating, and implementing machine learning models and algorithms. Your main responsibilities will include analysing large and complex datasets to derive actionable insights and collaborating with cross-functional teams to seamlessly integrate models into our products and services. It will be essential for you to stay updated with the latest techniques in statistical modeling and machine learning to ensure we are leveraging cutting-edge technologies. To excel in this role, you must have proven experience in machine learning algorithms and statistical modeling. Proficiency in Python is a must, along with familiarity with data science libraries such as TensorFlow, Scikit-learn, and Pandas. Additionally, experience with cloud platforms is required, with expertise in Google Cloud being preferred, though experience with AWS or Azure is also valuable. Your strong analytical and problem-solving skills will be put to the test, and your excellent communication abilities will be crucial in explaining complex models to non-technical stakeholders. In addition to the technical requirements, having previous experience in ERP systems and familiarity with big data technologies like Hadoop and Spark will be advantageous. An advanced degree in Computer Science, Statistics, or a related field is also a key qualification for this position. What makes this job great is the opportunity to work with a team of smart individuals in a friendly and open culture. You will not have to deal with micromanagement or cumbersome tools, and rigid working hours will not be a concern. At our company, you will have real responsibilities and the chance to expand your knowledge across various business industries. 
Your work will involve creating content that directly impacts our users on a daily basis, presenting you with real responsibilities and challenges in a fast-evolving company. Join us in this exciting role where you can contribute to cutting-edge machine learning projects and be part of a dynamic and innovative team.
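The develop-and-validate workflow described above can be sketched in miniature. This example uses a hand-rolled least-squares fit on made-up data so it runs without scikit-learn or TensorFlow; in the role itself those libraries would do the fitting.

```python
# Minimal sketch of a train/validate loop: fit on the first 80% of the data,
# then measure error on the held-out 20%.

def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def mean_abs_error(model, xs, ys):
    slope, intercept = model
    return sum(abs((slope * x + intercept) - y) for x, y in zip(xs, ys)) / len(xs)

# Deliberately tiny, noiseless synthetic data: y = 2x + 1.
data = [(x, 2.0 * x + 1.0) for x in range(10)]
split = int(len(data) * 0.8)
train, valid = data[:split], data[split:]

model = fit_line([x for x, _ in train], [y for _, y in train])
print("validation MAE:", mean_abs_error(model, [x for x, _ in valid], [y for _, y in valid]))
# prints: validation MAE: 0.0
```

On noiseless linear data the holdout error is zero; with real data the validation score is what guides tuning and monitoring.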
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Delhi
On-site
Agoda is an online travel booking platform that connects travelers worldwide with a vast network of 4.7M hotels, holiday properties, flights, activities, and more. As part of Booking Holdings and with 7,100+ employees from 95+ nationalities, Agoda fosters a diverse, creative, and collaborative work environment. Through a culture of experimentation and ownership, we strive to enhance our customers' ability to experience the world. At Agoda, our purpose is to bridge the world through travel, allowing people to enjoy, learn, and experience the amazing world we live in. Travel brings individuals and cultures closer, fostering empathy, understanding, and happiness. Our team consists of skilled, driven individuals from across the globe, united by a passion to make a meaningful impact. By leveraging innovative technologies and strong partnerships, we aim to make travel easy and rewarding for everyone. The Data department at Agoda is responsible for managing all data-related requirements for the company. We strive to enable and enhance the use of data through creative approaches and the implementation of powerful resources such as operational and analytical databases, queue systems, BI tools, and data science technology. We recruit top talent globally to tackle this challenge, providing them with the necessary knowledge and tools for personal growth and success while upholding our culture of diversity and experimentation. The role of the Data team is crucial as various stakeholders depend on us to empower their decision-making processes and improve the customer search experience. We are committed to enhancing customer experience, accelerating search results, and safeguarding against fraudulent activities. We are currently seeking ambitious and agile data scientists to join our Data Science and Machine Learning (AI/ML) team. In this role, you will work on challenging machine learning and big data platforms, processing 600B events daily and making 5B predictions.
You will tackle real-world challenges such as dynamic pricing, real-time customer intent prediction, search result ranking optimization, content classification, personalized recommendations, and more. You will have the opportunity to work with one of the world's largest ML infrastructures, utilizing GPUs, CPU cores, and memory to drive innovations and improve user experiences. As a Data Scientist at Agoda, you will design, code, experiment, and implement models and algorithms to enhance customer experience, business outcomes, and infrastructure readiness. You will analyze vast amounts of customer data, user-generated events, supplier data, and pricing information to derive actionable insights for continuous improvements and innovation. Collaboration with developers and various business stakeholders is crucial for delivering high-quality results on a daily basis. Additionally, you will be tasked with researching, discovering, and implementing new ideas that can make a significant difference. To succeed in this role, you should have at least 4 years of hands-on data science experience, a strong understanding of AI/ML/DL and statistics, proficiency in coding using open-source libraries and frameworks, and excellent skills in SQL, Python, PySpark, or Scala. The ability to lead, work independently, and collaborate in a multicultural environment is essential. A background in Computer Science, Operations Research, Statistics, or other quantitative fields is preferred, along with experience in NLP, image processing, recommendation systems, and data engineering. We welcome both local and international applications for this role, with full visa sponsorship and relocation assistance available for eligible candidates. Agoda is an equal opportunity employer committed to creating a diverse and inclusive workplace.
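Search-result ranking of the kind mentioned above often reduces to sorting candidates by a blended score. The toy sketch below re-ranks hotels by a weighted mix of relevance and a predicted booking probability; the field names, weights, and values are all illustrative, not Agoda's actual model.

```python
# Toy sketch of search-result re-ranking: blend a relevance score with a
# predicted booking probability and sort descending by the combined score.

def rank_results(results, w_relevance=0.7, w_booking=0.3):
    """Sort results by a weighted blend of relevance and predicted conversion."""
    def score(r):
        return w_relevance * r["relevance"] + w_booking * r["p_booking"]
    return sorted(results, key=score, reverse=True)

candidates = [
    {"hotel": "A", "relevance": 0.90, "p_booking": 0.10},
    {"hotel": "B", "relevance": 0.70, "p_booking": 0.80},
    {"hotel": "C", "relevance": 0.50, "p_booking": 0.40},
]
print([r["hotel"] for r in rank_results(candidates)])  # prints: ['B', 'A', 'C']
```

Hotel B outranks A here despite lower relevance because its predicted conversion dominates under these weights, which is exactly the kind of trade-off such ranking experiments tune.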
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Hyderabad, Telangana
On-site
Agoda is an online travel booking platform that connects travelers with a vast global network of 4.7M hotels, holiday properties, flights, activities, and more. As part of Booking Holdings and based in Asia, Agoda's diverse team of 7,100+ employees from 95+ nationalities in 27 markets fosters an environment rich in diversity, creativity, and collaboration. At Agoda, innovation is driven by a culture of experimentation and ownership to enhance the customer experience of exploring the world. The Data department at Agoda, headquartered in Bangkok, is responsible for overseeing all data-related requirements within the company. The primary objective is to enable and enhance the use of data through creative approaches and the implementation of powerful resources such as operational and analytical databases, queue systems, BI tools, and data science technology. The Data team at Agoda consists of bright minds from around the world who are equipped with the necessary knowledge and tools to support the company's culture of diversity and experimentation. The team plays a critical role in empowering decision-making processes for business users, product managers, engineers, and other stakeholders, while also focusing on improving the search experience for customers and safeguarding against fraudulent activities. Agoda is seeking ambitious and agile data scientists to join the Data Science and Machine Learning (AI/ML) team in Bangkok. The role involves working on challenging machine learning and big data platforms, processing approximately 600B events daily and making 5B predictions. As part of this team, you will tackle real-world challenges such as dynamic pricing, customer intent prediction, personalized recommendations, and more. The role offers the opportunity to work with one of the world's largest ML infrastructures, employing advanced technologies like GPUs, CPU cores, and memory for innovation and user experience enhancement.
In this role, you will have the opportunity to design, code, experiment, and implement models and algorithms to maximize customer experience, business outcomes, and infrastructure readiness. You will analyze vast amounts of data to derive actionable insights for driving improvements and innovation, collaborating with developers and various business owners to deliver high-quality results. Successful candidates will have hands-on data science experience, a strong understanding of AI/ML/DL and statistics, proficiency in coding languages like Python, PySpark, and SQL, and excellent communication skills for multicultural teamwork. Preferred qualifications include a PhD or MSc in relevant fields, experience in NLP, image processing, recommendation systems, data engineering, and data science for e-commerce or OTA. Agoda welcomes applications from both local and international candidates, offering full visa sponsorship and relocation assistance for eligible individuals. Agoda is an Equal Opportunity Employer and keeps applications on file for future vacancies. For further information, please refer to the privacy policy on our website. Agoda does not accept third-party resumes and is not responsible for any fees related to unsolicited resumes.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Haryana
On-site
You are a skilled QA / Data Engineer with 3-5 years of experience, joining a team focused on ensuring the quality and reliability of data-driven applications. Your expertise lies in manual testing and SQL, with additional knowledge of automation and performance testing being highly valuable. Your responsibilities include performing thorough testing and validation to guarantee the integrity of the applications.

Must-have skills
- Extensive experience in manual testing within data-centric environments.
- Strong SQL skills for data validation and querying.
- Familiarity with data engineering concepts such as ETL processes, data pipelines, and data warehousing.
- Experience with geo-spatial data.
- A solid understanding of QA methodologies and best practices for software and data testing.
- Excellent communication skills.

Nice-to-have skills
- Experience with automation testing tools and frameworks, such as Selenium and JUnit, for data pipelines.
- Knowledge of performance testing tools such as JMeter and LoadRunner for evaluating data systems.
- Familiarity with data engineering tools and platforms such as Apache Kafka, Apache Spark, and Hadoop.
- Understanding of cloud-based data solutions (AWS, Azure, Google Cloud) and their testing methodologies.
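A concrete flavor of the SQL-based data validation this role calls for: reconcile a source and a target table after an ETL run and flag rows that failed to land. The sketch below uses an in-memory SQLite database with invented table names so it is self-contained; in practice the same queries would run against the real warehouse.

```python
# Sketch of an ETL completeness check: compare row counts and find source
# rows that are missing from the target after a load.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE source (id INTEGER, amount REAL);
    CREATE TABLE target (id INTEGER, amount REAL);
    INSERT INTO source VALUES (1, 10.0), (2, 20.0), (3, 30.0);
    INSERT INTO target VALUES (1, 10.0), (2, 20.0);  -- one row failed to load
""")

src_count, = conn.execute("SELECT COUNT(*) FROM source").fetchone()
tgt_count, = conn.execute("SELECT COUNT(*) FROM target").fetchone()

# Anti-join: rows present in source but absent from target.
missing = conn.execute(
    "SELECT s.id FROM source s LEFT JOIN target t ON s.id = t.id WHERE t.id IS NULL"
).fetchall()

print(f"source={src_count} target={tgt_count} missing_ids={[m[0] for m in missing]}")
# prints: source=3 target=2 missing_ids=[3]
```

Count reconciliation plus an anti-join is a classic first pass; column-level checksums and NULL-key checks usually follow the same pattern.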
Posted 1 week ago
3.0 years
0 Lacs
Greater Kolkata Area
On-site
Responsibilities
- Strategic Data & AI Advisory: Lead the development of data and AI strategies that align with clients' business objectives, enabling digital transformation and data-driven decision-making.
- Data & AI Strategy: Design comprehensive data and AI roadmaps, helping clients unlock insights and drive innovation through advanced analytics and automation.
- Data Governance Leadership: Design and implement data governance frameworks, including policies, roles, processes, and tools, to ensure data quality, security, compliance, and usability across the organization. Guide clients on data stewardship, metadata management, master data management (MDM), and regulatory compliance (e.g., GDPR, HIPAA).
- IT Infrastructure Optimization: Oversee the assessment and optimization of clients' IT infrastructure, systems, and processes to enhance efficiency and scalability.
- Technology Consulting: Provide expert guidance on the selection, deployment, and integration of technology solutions, including software, hardware, cloud, and network systems.
- Cross-functional Collaboration: Work closely with engineering, business, and data teams to deliver tailored, end-to-end IT solutions that meet client-specific needs.
- Assessment & Solution Design: Conduct detailed assessments of clients' IT environments and recommend targeted improvements or solutions.
- AI & ML Automation: Architect and deploy machine learning and AI solutions to optimize operations, automate decision-making, and generate business insights, using best-in-class technologies and platforms.
- Client Relationship Management: Serve as a trusted advisor, cultivating long-term relationships with clients and providing continuous strategic support.
- Innovation & Best Practices: Stay ahead of industry trends in data, AI, and IT strategy, bringing new ideas and continuous improvement to client engagements and internal consulting frameworks.
- Workshops & Enablement: Facilitate client workshops, training sessions, and capability-building engagements to foster adoption of data and AI practices.
- Governance & Risk Oversight: Identify and mitigate risks related to data use, technology implementation, and regulatory compliance.
- Methodology Development: Contribute to the evolution of internal consulting frameworks, tools, and best practices to enhance team performance and service quality.
- Monitoring & Reporting: Track project progress against milestones and communicate updates to clients and leadership, ensuring transparency and accountability.

Qualifications
- Education: Bachelor's or Master's degree in Computer Science, Data Science, Business Analytics, or a related field.
- Experience: 3 years in consulting roles with a focus on data strategy, AI/ML, and project management.
- Technical Skills: Proficiency in data analytics tools (e.g., Python, R), machine learning frameworks (e.g., TensorFlow, scikit-learn), and data visualization platforms (e.g., Tableau, Power BI).
- Certifications: Relevant certifications such as PMP, Certified Data Management Professional (CDMP), or equivalent are advantageous.
- Soft Skills: Excellent communication, problem-solving, and interpersonal skills. Ability to work collaboratively in a team.

Skills And Abilities
- Strong analytical and problem-solving skills.
- Excellent communication and documentation skills.
- Ability to work independently and as part of a team.
- Detail-oriented with a strong commitment to data security.
- Familiarity with data security tools and technologies.

Preferred Qualifications
- Experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and big data technologies (e.g., Hadoop, Spark).
- Familiarity with regulatory standards related to data privacy and security (e.g., GDPR, HIPAA).
- Demonstrated ability to drive business development and contribute to practice growth.
(ref:hirist.tech)
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
haryana
On-site
About Prospecta
Founded in 2002 in Sydney, Australia, with additional offices in India, North America, and Canada, and a local presence in Europe, the UK, and Southeast Asia, Prospecta is dedicated to providing top-tier data management and automation software for enterprise clients. Our journey began with a mission to offer innovative solutions, leading us to become a prominent data management software company over the years. Our flagship product, MDO (Master Data Online), is an enterprise Master Data Management (MDM) platform designed to streamline data management processes, ensuring accurate, compliant, and relevant master data creation, as well as efficient data disposal. With a strong presence in asset-intensive industries such as Energy and Utilities, Oil and Gas, Mining, Infrastructure, and Manufacturing, we have established ourselves as a trusted partner in the field.

Culture at Prospecta
At Prospecta, our culture is centered around growth and embracing new challenges. We boast a passionate team that collaborates seamlessly to deliver value to our customers. Our diverse backgrounds create an exciting work environment that fosters a rich tapestry of perspectives and ideas. We are committed to nurturing an environment that focuses on both professional and personal development. Career progression at Prospecta is not just about climbing the corporate ladder but about encountering a continuous stream of meaningful opportunities that enhance personal growth and technical proficiency, all under the guidance of exceptional leaders. Our organizational structure emphasizes agility, responsiveness, and achieving tangible outcomes. If you thrive in a dynamic environment, enjoy taking on various roles, and are willing to go the extra mile to achieve goals, Prospecta is the ideal workplace for you. We continuously push boundaries while maintaining a sense of fun and celebrating victories, both big and small.

About the Job
Position: Jr. Platform Architect / Sr. Backend Developer
Location: Gurgaon
Role Summary: In this role, you will be responsible for implementing technology solutions in a cost-effective manner by understanding project requirements and effectively communicating them to all stakeholders and facilitators.

Key Responsibilities
- Collaborate with enterprise architects, data architects, developers & engineers, data scientists, and information designers to identify and define necessary data structures, formats, pipelines, metadata, and workload orchestration capabilities.
- Possess expertise in service architecture and development, ensuring high performance and scalability.
- Demonstrate experience in Spark, Elastic Search, and SQL performance tuning and optimization.
- Showcase proficiency in the architectural design and development of large-scale data platforms and data applications.
- Hands-on experience with AWS, Azure, and OpenShift.
- Deep understanding of Spark and its internal architecture.
- Expertise in designing and building new cloud data platforms and optimizing them at the organizational level.
- Strong hands-on experience in Big Data technologies such as Hadoop, Sqoop, Hive, and Spark, including DevOps.
- Solid SQL (Hive/Spark) skills and experience in tuning complex queries.

Must-Have
- 7+ years of experience.
- Proficiency in Java, Spring Boot, Apache Spark, AWS, OpenShift, PostgreSQL, Elastic Search, message queues, microservice architecture, and Spark.

Nice-to-Have
- Knowledge of Angular, Python, Scala, Azure, and Kafka; various file formats like Parquet, AVRO, CSV, and JSON; and Hadoop, Hive, and HBase.

What will you get
Growth Path: At Prospecta, your career journey is filled with growth and opportunities. Depending on your career trajectory, you can kickstart your career or accelerate your professional development in a dynamic work environment. Your success is our priority, and as you exhibit your abilities and achieve results, you will have the opportunity to progress quickly into leadership roles.
We are dedicated to helping you enhance your experience and skills, providing you with the necessary tools, support, and opportunities to reach new heights in your career.

Benefits
- Competitive salary.
- Health insurance.
- Paid time off and holidays.
- Continuous learning and career progression.
- Opportunities to work onsite at various office locations and/or client sites.
- Participation in annual company events and workshops.
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
The ideal candidate for this position will be responsible for understanding business problems and formulating analytical solutions. You will apply machine learning, data mining, and text mining techniques to develop scalable solutions for various business challenges. You will also be involved in speech-to-text translation for video/audio NLP analysis and generative AI applications for market research data. Additionally, you will solve problems using advanced AI techniques, including GenAI methods, by designing NLP/LLM/GenAI applications and products with robust coding practices. Experience with LLMs such as PaLM, GPT-4, and Mistral is preferred. You will be expected to train, tune, validate, and monitor predictive models, as well as analyze and extract relevant information from large amounts of historical business data in structured and unstructured formats. Furthermore, you will establish scalable, efficient, automated processes for large-scale data analyses and develop and deploy data science models on cloud platforms like GCP, Azure, and AWS. Working with large, complex data sets using tools such as SQL, Google Cloud services, Hadoop, Alteryx, and Python will be part of your daily responsibilities. Preferred qualifications for this role include 2+ years of experience in market research, data mining, statistical analysis, GenAI, modeling, deep learning, optimization, or similar analytics. Additionally, 2+ years of work experience with Python, SQL, and/or visualization/dashboard tools like Tableau, Power BI, or Qlik Sense is desired. Comfort with working in an environment where problems are not always well-defined, and the ability to effectively advocate technical solutions to various audiences, are also important. Candidates must have 3-4 years of industry experience and a Bachelor's or Master's degree in a quantitative field such as Statistics, Computer Science, Economics, Mathematics, Data Science, or Operations Research.
You should have 3+ years of experience using SQL for acquiring and transforming data, as well as experience with real-world data, data cleaning, data collection, or other data wrangling challenges. Knowledge of fundamental text data processing and excellent problem-solving, communication, and data presentation skills are essential. Moreover, you should be flexible enough to work across multiple projects, domains, and tools, possess the collaborative skills to work with business teams, and be comfortable coding with TensorFlow and/or PyTorch, as well as NumPy, pandas, and scikit-learn. Experience with open-source NLP modules like spaCy, Hugging Face, TorchText, fastai.text, and others is beneficial. Proven quantitative modeling and statistical analysis skills, along with a proactive and inquisitive attitude towards learning new tools and techniques, are highly valued.
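The "fundamental text data processing" the posting asks for often starts with a pass like the one below: lowercase, strip punctuation, drop stop words, and count term frequencies. The stop-word list and the sample survey responses are invented for the example; production pipelines would use spaCy or Hugging Face tokenizers instead.

```python
# Minimal sketch of text preprocessing for market-research responses:
# normalize, tokenize, remove stop words, then count term frequencies.
import re
from collections import Counter

STOP_WORDS = {"the", "a", "and", "is", "was", "to", "of"}  # illustrative, not exhaustive

def tokenize(text: str) -> list[str]:
    tokens = re.findall(r"[a-z']+", text.lower())
    return [t for t in tokens if t not in STOP_WORDS]

responses = [
    "The delivery was fast and the packaging was great.",
    "Fast delivery, great support!",
]
freq = Counter(t for r in responses for t in tokenize(r))
print(freq.most_common(3))  # prints: [('delivery', 2), ('fast', 2), ('great', 2)]
```

Frequency tables like this feed directly into the downstream modeling the role describes, whether classic TF-IDF features or prompts for a GenAI summarizer.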
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
Our client is a global information technology, consulting, and business process services company headquartered in India, offering a wide range of services such as IT consulting, application development, business process outsourcing, and digital solutions to clients across diverse industries in over 167 countries. Its focus is on providing technology-driven solutions to enhance efficiency and innovation, contributing significantly to the digital transformation of businesses worldwide. The client is looking for a Software Engineer with 5-8 years of experience who is proficient in Scala programming and has hands-on experience working in a globally distributed team. It is essential for the candidate to have experience with big-data technologies like Spark/Databricks and Hadoop/ADLS. Additionally, experience with cloud platforms such as Azure (preferred), AWS, or Google Cloud is required, along with expertise in building data lakes and data pipelines using Azure, Databricks, or similar tools. The ideal candidate should be familiar with the development life cycle, including CI/CD pipelines, and have experience in Business Intelligence project development, preferably with the Microsoft SQL Server BI stack (SSAS/SSIS). Data warehousing knowledge, designing star schemas/snowflaking, working in source-controlled environments like GitHub and Subversion, and agile methodology are also necessary skills. You should have a proven track record of producing secure and clean code, strong analytical and problem-solving skills, and the ability to work effectively within the complexity and boundaries of a leading global organization. Being a flexible and resilient team player with strong interpersonal skills who takes the initiative to drive projects forward is highly valued. While experience in financial services is a plus, it is not required. Fluency in English is a must for this role. Interested candidates can respond by submitting their updated resumes.
For more job opportunities, please visit Jobs In India - VARITE. If you are not available or interested in this position, you are encouraged to refer potential candidates who might be a good fit. VARITE offers a Candidate Referral program through which you can earn a one-time referral bonus, based on the experience level of the referred candidate, upon their completion of a three-month assignment with VARITE. VARITE is a global staffing and IT consulting company that provides technical consulting and team augmentation services to Fortune 500 companies in the USA, UK, Canada, and India. VARITE is a primary and direct vendor to leading corporations in various verticals, including Networking, Cloud Infrastructure, Hardware and Software, Digital Marketing and Media Solutions, Clinical Diagnostics, Utilities, Gaming and Entertainment, and Financial Services. VARITE is an Equal Opportunity Employer committed to creating a diverse and inclusive workplace environment.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Punjab
On-site
You will be joining XenonStack, a company committed to being the Most Value Driven Cloud Native Platform Engineering and Decision Driven Analytics Company. As part of our team, you will have the opportunity to work on enterprise national and international projects worth millions of dollars. We take pride in our lively and purposeful work culture, emphasizing a people-oriented approach. At XenonStack, you can expect complete job and employee security, along with warm, authentic, and transparent communication. As a Data Analyst at XenonStack, your responsibilities will include determining organizational goals; mining data from primary and secondary sources; cleaning and pruning data; analyzing and interpreting results; pinpointing trends, patterns, and correlations in complex data sets; providing concise data reports with visualizations; and creating and maintaining relational databases and data systems. We are looking for candidates with technical expertise in statistical skills, programming languages (especially Python), advanced MS Excel, data warehousing, business intelligence, data analysis, data cleaning, and data visualization, along with knowledge of Spark, PySpark, SQL, NoSQL, SAS, Tableau, Hadoop, and JavaScript. Additionally, we value professional attributes such as excellent communication skills, attention to detail, analytical thinking, problem-solving aptitude, strong organizational skills, and visual thinking. Ideal candidates for this position should have a technical background, including a degree in BCA, BSc, B.Tech, MSc, or M.Tech, with programming skills. The desired experience for this role is 3-4 years. If you enjoy working in a collaborative environment, have a passion for data analysis, and possess the required technical skills, XenonStack welcomes you to join our team located at Plot No. C-184, Sixth Floor 603, Sector 75 Phase VIIIA, Mohali 160071. This position requires in-office work engagement.
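The clean-then-summarize loop in the responsibilities above can be sketched in a few lines: prune malformed rows, then aggregate to surface a trend. The field names and values below are made up for illustration; a real pipeline would read from a database or CSV export.

```python
# Sketch of a data analyst's clean-and-summarize pass: drop rows that fail
# validation, then report a per-month average to expose a trend.
from statistics import mean

raw_rows = [
    {"month": "2024-01", "revenue": "100"},
    {"month": "2024-01", "revenue": "n/a"},   # malformed: pruned below
    {"month": "2024-02", "revenue": "150"},
    {"month": "2024-02", "revenue": "170"},
]

def clean(rows):
    out = []
    for row in rows:
        try:
            out.append({"month": row["month"], "revenue": float(row["revenue"])})
        except (ValueError, KeyError):
            continue  # prune rows with missing or non-numeric values
    return out

def monthly_average(rows):
    by_month = {}
    for row in clean(rows):
        by_month.setdefault(row["month"], []).append(row["revenue"])
    return {m: mean(v) for m, v in sorted(by_month.items())}

print(monthly_average(raw_rows))  # prints: {'2024-01': 100.0, '2024-02': 160.0}
```

The same shape scales up directly to pandas or PySpark group-bys once the data no longer fits in a list.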
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Join us as an Application Engineer - PySpark Developer at Barclays, where you'll take part in the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer experiences. As part of the team, you will deliver the technology stack, using strong analytical and problem-solving skills to understand the business requirements and deliver quality solutions. You'll be working on complex technical problems that require detailed analysis, in conjunction with fellow engineers, business analysts, and business stakeholders.

To be successful as an Application Engineer - PySpark Developer you should have experience with:
- Hands-on programming in a Big Data Hadoop ecosystem.
- Proficiency in PySpark, Hive, and Impala.
- Exposure to MongoDB or any other NoSQL database.
- Solid experience with Unix shell.
- Scheduling tools like AutoSys and Airflow.
- A strong understanding of Agile methodologies and tools (JIRA, Confluence).
- CI/CD tools such as Jenkins, TeamCity, or GitLab.
- Excellent communication and collaboration skills.
- The ability to work independently and drive delivery with minimal supervision.

Some Other Highly Valued Skills Include
- A Bachelor's degree in Computer Science, Engineering, or a related field.
- Relevant certifications in Big Data or cloud technologies.

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based in Pune.

Purpose of the role: To design, develop and improve software, utilising various engineering methodologies, that provides business, platform, and technology capabilities for our customers and colleagues.
Accountabilities
- Development and delivery of high-quality software solutions by using industry-aligned programming languages, frameworks, and tools, ensuring that code is scalable, maintainable, and optimized for performance.
- Cross-functional collaboration with product managers, designers, and other engineers to define software requirements, devise solution strategies, and ensure seamless integration and alignment with business objectives.
- Collaboration with peers, participation in code reviews, and promotion of a culture of code quality and knowledge sharing.
- Staying informed of industry technology trends and innovations, and actively contributing to the organization's technology communities to foster a culture of technical excellence and growth.
- Adherence to secure coding practices to mitigate vulnerabilities, protect sensitive data, and ensure secure software solutions.
- Implementation of effective unit testing practices to ensure proper code design, readability, and reliability.

Analyst Expectations
- Will have an impact on the work of related teams within the area.
- Partner with other functions and business areas.
- Take responsibility for the end results of a team's operational processing and activities.
- Escalate breaches of policies/procedures appropriately.
- Take responsibility for embedding new policies/procedures adopted due to risk mitigation.
- Advise and influence decision-making within own area of expertise.
- Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to.
- Deliver your work and areas of responsibility in line with relevant rules, regulations, and codes of conduct.
- Maintain and continually build an understanding of how your own sub-function integrates with the function, alongside knowledge of the organisation's products, services, and processes within the function.
- Demonstrate understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation's sub-function.
Make evaluative judgements based on the analysis of factual information, paying attention to detail. Resolve problems by identifying and selecting solutions through the application of acquired technical experience and will be guided by precedents. Guide and persuade team members and communicate complex / sensitive information. Act as contact point for stakeholders outside of the immediate function, while building a network of contacts outside team and external to the organisation. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
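Much of the PySpark work described in this posting reduces to grouped aggregations over large tables. The sketch below mirrors that shape in plain Python so it runs without a Spark cluster; the column names and the PySpark equivalent shown in the comment are hypothetical, for illustration only.

```python
# Plain-Python stand-in for a grouped aggregation a PySpark job might run.
# Hypothetical PySpark equivalent:
#   df.groupBy("account").agg(F.sum("amount").alias("total"))
from collections import defaultdict

transactions = [
    {"account": "acc-1", "amount": 120.0},
    {"account": "acc-2", "amount": 75.5},
    {"account": "acc-1", "amount": 30.0},
]

totals = defaultdict(float)
for txn in transactions:
    totals[txn["account"]] += txn["amount"]  # the "sum" aggregation per group

print(dict(sorted(totals.items())))  # prints: {'acc-1': 150.0, 'acc-2': 75.5}
```

Spark distributes exactly this group-and-sum across executors; the logic is the same, the scale is not.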
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Join us as a Data Engineer - PySpark Developer at Barclays, where you'll take part in the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer experiences. As part of the team, you will deliver the technology stack, using strong analytical and problem-solving skills to understand the business requirements and deliver quality solutions. You'll be working on complex technical problems that require detailed analysis, in conjunction with fellow engineers, business analysts, and business stakeholders.

To be successful as a Data Engineer - PySpark Developer you should have experience with:
- Hands-on programming in a Big Data Hadoop ecosystem.
- Proficiency in PySpark, Hive, and Impala.
- Exposure to MongoDB or any other NoSQL database.
- Solid experience with Unix shell.
- Scheduling tools like AutoSys and Airflow.
- A strong understanding of Agile methodologies and tools (JIRA, Confluence).
- CI/CD tools such as Jenkins, TeamCity, or GitLab.
- Excellent communication and collaboration skills.
- The ability to work independently and drive delivery with minimal supervision.

Some Other Highly Valued Skills Include
- A Bachelor's degree in Computer Science, Engineering, or a related field.
- Relevant certifications in Big Data or cloud technologies.

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based in Pune.

Purpose of the role: To build and maintain the systems that collect, store, process, and analyse data, such as data pipelines, data warehouses, and data lakes, ensuring that all data is accurate, accessible, and secure.
Accountabilities: Building and maintaining data architecture pipelines that enable the transfer and processing of durable, complete and consistent data. Designing and implementing data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures. Developing processing and analysis algorithms fit for the intended data complexity and volumes. Collaborating with data scientists to build and deploy machine learning models.

Analyst Expectations: To perform prescribed activities in a timely manner and to a consistently high standard, driving continuous improvement. The role requires in-depth technical knowledge and experience in the assigned area of expertise, and a thorough understanding of the underlying principles and concepts within that area. If the position has leadership responsibilities, the holder leads and supervises a team, guiding and supporting professional development, allocating work requirements and coordinating team resources; People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. An individual contributor instead develops technical expertise in their work area, acting as an advisor where appropriate. The role will have an impact on the work of related teams within the area and partners with other functions and business areas. Takes responsibility for the end results of a team's operational processing and activities. Escalates breaches of policies/procedures appropriately. Takes responsibility for embedding new policies/procedures adopted due to risk mitigation. Advises and influences decision making within own area of expertise. Takes ownership for managing risk and strengthening controls in relation to the work owned or contributed to.
Deliver your work and areas of responsibility in line with relevant rules, regulations and codes of conduct. Maintain and continually build an understanding of how your own sub-function integrates with the wider function, alongside knowledge of the organisation's products, services and processes within the function. Demonstrate understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation's sub-function. Make evaluative judgements based on the analysis of factual information, paying attention to detail. Resolve problems by identifying and selecting solutions through the application of acquired technical experience, guided by precedents. Guide and persuade team members and communicate complex/sensitive information. Act as a contact point for stakeholders outside of the immediate function, while building a network of contacts outside the team and external to the organisation. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
Posted 1 week ago
3 - 7 years
0 Lacs
Telangana
On-site
Genpact is a global professional services and solutions firm with over 125,000 employees in 30+ countries. Driven by curiosity, entrepreneurial agility, and the desire to create value for clients, we serve leading enterprises worldwide with deep business knowledge, digital operations services, and expertise in data, technology, and AI. Our purpose is the relentless pursuit of a world that works better for people.

We are currently seeking a Business Analyst - Data Scientist to join our team. In this role, you will be responsible for developing and implementing NLP models and algorithms, extracting insights from textual data, and working collaboratively with cross-functional teams to deliver AI solutions.

Responsibilities:
Model Development:
- Proficiency in various statistical, ML, and ensemble algorithms
- Strong understanding of time-series algorithms and forecasting use cases
- Ability to evaluate model strengths and weaknesses for different problems
Data Analysis:
- Extracting meaningful insights from structured data
- Preprocessing data for ML/AI applications
Collaboration:
- Working closely with data scientists, engineers, and business stakeholders
- Providing technical guidance and mentorship to team members
Integration and Deployment:
- Integrating ML models into production systems
- Implementing CI/CD pipelines for continuous integration and deployment
Documentation and Training:
- Documenting processes, models, and results
- Providing training and support on NLP techniques and tools to stakeholders

Qualifications:
Minimum Qualifications / Skills:
- Bachelor's degree in Computer Science, Engineering, or a related field
- Strong programming skills in Python and R
- Experience with DS frameworks (scikit-learn, NumPy)
- Knowledge of machine learning concepts and frameworks (TensorFlow, PyTorch)
- Strong problem-solving and analytical skills
- Excellent communication and collaboration abilities
Preferred Qualifications / Skills:
- Experience in Predictive Analytics
and Machine Learning techniques
- Proficiency in Python/R/any other open-source programming language
- Knowledge of visualization tools such as Tableau, Power BI, QlikView, etc.
- Applied statistics skills
- Experience with big data technologies (Hadoop, Spark)
- Knowledge of cloud platforms (AWS, Azure, GCP)

If you are passionate about leveraging your skills in data science and analytics to drive innovation and value creation, we encourage you to apply for this exciting opportunity at Genpact.
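As a concrete illustration of the time-series forecasting and model-evaluation skills listed above, here is a minimal moving-average baseline in plain Python (the window size and the sales figures are invented for the example; a real project would compare ML/ensemble models against a baseline like this):

```python
def moving_average_forecast(series: list[float], window: int = 3) -> float:
    """Forecast the next value as the mean of the last `window` observations,
    a common baseline against which richer time-series models are judged."""
    if len(series) < window:
        raise ValueError("series shorter than window")
    recent = series[-window:]
    return sum(recent) / window

def mae(actual: list[float], predicted: list[float]) -> float:
    """Mean absolute error: one way to evaluate model strengths and weaknesses."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

sales = [10.0, 12.0, 11.0, 13.0, 12.0, 14.0]
forecast = moving_average_forecast(sales)
print(forecast)                 # mean of [13.0, 12.0, 14.0] = 13.0
print(mae([14.0], [forecast]))  # 1.0
```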
Posted 1 week ago
1 - 5 years
0 Lacs
Punjab
On-site
You should have a Bachelor's or Master's degree in Computer Science, Data Science, or a related field, along with proven experience (1-3 years) in machine learning, data science, or AI roles. Proficiency in programming languages such as Python, R, or Java is essential, as is experience with machine learning frameworks and libraries such as TensorFlow, PyTorch, and scikit-learn. A strong understanding of algorithms, data structures, and software design principles is also necessary. Familiarity with cloud platforms such as AWS and Azure, and with big data technologies such as Hadoop and Spark, is preferred. Excellent problem-solving skills, analytical thinking, communication and collaboration skills, and the ability to work methodically and meet deadlines are important attributes.

Your responsibilities will include developing and implementing machine learning models and algorithms for various applications, and collaborating with cross-functional teams to understand project requirements and deliver AI solutions. You will preprocess and analyze large datasets to extract meaningful insights, design and conduct experiments to evaluate model performance and fine-tune algorithms, and deploy machine learning models to production, ensuring scalability and reliability. You will also stay updated with the latest advancements in AI and machine learning technologies, document model development processes and maintain comprehensive project documentation, participate in code reviews and provide constructive feedback to team members, and contribute to the continuous improvement of our AI/ML capabilities and best practices.

Join our fast-paced team of like-minded individuals who share your passion and tackle new challenges every day. Work alongside an exceptionally talented team, gaining exposure to new concepts and technologies, in a friendly, high-growth environment that fosters learning and development.
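The experiment-design responsibility described above (evaluating model performance on held-out data) can be sketched with a toy, stdlib-only example; the 1-nearest-neighbour rule and the data points below stand in for a real framework-trained model:

```python
def nearest_neighbour_predict(train: list[float], label_of: dict, x: float) -> str:
    """Predict by copying the label of the closest training point (1-NN)."""
    closest = min(train, key=lambda t: abs(t - x))
    return label_of[closest]

# Invented training data: two well-separated clusters.
train_points = [1.0, 2.0, 8.0, 9.0]
labels = {1.0: "low", 2.0: "low", 8.0: "high", 9.0: "high"}

# Held-out evaluation set: points the "model" never saw.
holdout = [(1.5, "low"), (8.5, "high"), (2.2, "low")]
correct = sum(
    nearest_neighbour_predict(train_points, labels, x) == y for x, y in holdout
)
accuracy = correct / len(holdout)
print(accuracy)  # 1.0
```

The same pattern (fit on one split, score on another) scales up to cross-validation and the fine-tuning loops the posting mentions.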
We offer a competitive compensation package based on experience and skill. This is a full-time position with day, fixed, and morning shift schedules available. The ideal candidate should have a total of 3 years of work experience and be willing to work in person.
Posted 1 week ago
5 - 9 years
0 Lacs
Haryana
On-site
You will play a crucial role in enhancing the Analytics capabilities for our businesses. Your responsibilities will include engaging with key stakeholders to comprehend Fidelity's sales, marketing, client services, and propositions context. You will collaborate with internal teams such as the data support team and technology team to develop new tools, capabilities, and solutions. Additionally, you will work closely with IS Operations to expedite the development and sharing of customized data sets. Maximizing the adoption of Cloud-Based Data Management Services will be a significant part of your role. This involves setting up sandbox analytics environments using platforms like Snowflake, AWS, Adobe, and Salesforce. You will also support data visualization and data science applications to enhance business operations. In terms of stakeholder management, you will work with key stakeholders to understand business problems and translate them into suitable analytics solutions. You are expected to facilitate smooth execution, delivery, and implementation of these solutions through effective engagement with stakeholders. Your role will also involve collaborating with the team to share knowledge and best practices, including coaching on deep learning and machine learning methodologies. Taking independent ownership of projects and initiatives within the team is crucial, demonstrating leadership and accountability. Furthermore, you will be responsible for developing and evaluating tools, methodologies, or infrastructure to address long-term business challenges. This may involve enhancing modelling software, methodologies, data requirements, and optimization environments to elevate the team's capabilities. To excel in this role, you should possess 5 to 8 years of overall experience in Analytics, with at least 4 years of experience in SQL, Python, open-source Machine Learning Libraries, and Deep Learning. 
Experience working in an AWS environment, ideally with Snowflake, is preferred. Proficiency in analytics applications such as Python, SAS, and SQL, and in interpreting statistical results, is necessary. Knowledge of Spark, Hadoop, and big data platforms will be advantageous.
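The SQL-driven preparation of customised data sets mentioned in this role can be sketched with Python's built-in sqlite3 module standing in for a Snowflake-style warehouse (the table and column names are invented for illustration):

```python
import sqlite3

# In-memory database as a stand-in for a sandbox analytics environment.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("APAC", 120.0), ("EMEA", 80.0), ("APAC", 30.0)],
)

# A customised data set: total sales per region, largest first.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY 2 DESC"
).fetchall()
print(rows)  # [('APAC', 150.0), ('EMEA', 80.0)]
conn.close()
```

Against Snowflake the only change would be the connector and connection string; the SQL aggregation pattern is the same.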
Posted 1 week ago
0 - 2 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Remote Work: Hybrid

Overview: At Zebra, we are a community of innovators who come together to create new ways of working to make everyday life better. United by curiosity and care, we develop dynamic solutions that anticipate our customers' and partners' needs and solve their challenges. Being a part of Zebra Nation means being seen, heard, valued, and respected. Drawing from our diverse perspectives, we collaborate to deliver on our purpose. Here you are a part of a team pushing boundaries to redefine the work of tomorrow for organizations, their employees, and those they serve. You have opportunities to learn and lead at a forward-thinking company, defining your path to a fulfilling career while channeling your skills toward causes that you care about – locally and globally. We've only begun reimagining the future – for our people, our customers, and the world. Let's create tomorrow together.

A Data Scientist will be responsible for designing, developing, programming, and implementing machine learning solutions; implementing artificial/augmented intelligence systems, agentic workflows, and data engineering workflows; and performing statistical modelling and measurement. This involves applying data engineering, feature engineering, statistical methods, ML modelling, and AI techniques to structured, unstructured, and diverse "big data" sources of machine-acquired data to generate actionable insights and foresights for real-life business problem solutions and for product feature development and enhancement.
Responsibilities: Integrates state-of-the-art machine learning algorithms and develops new methods. Develops tools to support analysis and visualization of large datasets. Develops and codes software programs; implements industry-standard AutoML models (speech, computer vision, text data, LLM), statistical models, relevant ML models (for device/machine-acquired data), and AI models and algorithms. Identifies meaningful foresights based on predictive ML models over large data and metadata sources; interprets and communicates foresights, insights, and findings from experiments to product managers, service managers, business partners, and business managers. Makes use of rapid development tools (business intelligence tools, graphics libraries, data modelling tools) to effectively communicate research findings using visual graphics, data models, machine learning model features, and feature engineering/transformations to relevant stakeholders. Analyzes, reviews, and tracks trends and tools in the data science, machine learning, artificial intelligence, and IoT space. Interacts with cross-functional teams to identify questions and issues for data engineering and machine learning model feature engineering. Evaluates and makes recommendations to evolve data collection mechanisms to improve the efficacy of machine learning model predictions. Meets with customers, partners, product managers, and business leaders to present findings, predictions, and foresights; gathers customer-specific requirements for business problems/processes; identifies data collection constraints and alternatives for model implementation.

Skills: Working knowledge of MLOps, LLMs, and Agentic AI/workflows. Programming Skills: Proficiency in Python and experience with ML frameworks such as TensorFlow and PyTorch. LLM Expertise: Hands-on experience in training, fine-tuning, and deploying LLMs. Foundational Model Knowledge: Strong understanding of open-weight LLM architectures, including training methodologies, fine-tuning
techniques, hyperparameter optimization, and model distillation. Data Pipeline Development: Strong understanding of data engineering concepts, feature engineering, and workflow automation using Airflow or Kubeflow. Cloud & MLOps: Experience deploying ML models in cloud environments such as AWS, GCP (Google Vertex AI), or Azure using Docker and Kubernetes. Designs and implements predictive and optimisation models incorporating diverse data types; strong SQL and Azure Data Factory (ADF) skills.

Qualifications: Bachelor's degree required; Master's or PhD in statistics, mathematics, computer science, or a related discipline preferred. 0-2 years of experience in statistical modelling and algorithms. Machine learning experience, including deep learning, neural networks, genetic algorithms, etc. Working knowledge of big data technologies (Hadoop, Cassandra, Spark, R); hands-on experience preferred. Data mining, data visualization, and visualization analysis tools, including R. Work/project experience in sensors, IoT, or the mobile industry highly preferred. Excellent written and verbal communication; comfortable presenting to senior management and CxO-level executives. Self-motivated and self-starting with a high degree of work ethic.

Position Specific Information: Travel requirements (as a % of time): <10%. Able to telework? Yes, 70%; expected to visit the Zebra site 2-3 days a week or every other week. Personal Protective Equipment (PPE) required (safety glasses, steel-toed boots, gloves, etc.): No.

U.S. Only – Physical Demands summary: Sedentary work that primarily involves sitting/standing. Frequently communicates with others to exchange information, repeats motions that may include the wrists, hands, and/or fingers (typing), and assesses the accuracy, neatness, and thoroughness of assigned work. No other physical activities and no adverse environmental conditions are expected. Must be able to see color.
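The feature-engineering and statistical-modelling work this role describes can be illustrated with a stdlib-only z-score standardisation, a common transform applied to features before model fitting (the sensor readings below are invented):

```python
from statistics import mean, stdev

def zscore(values: list[float]) -> list[float]:
    """Standardise a feature to zero mean and unit variance (sample stdev),
    so features on different scales contribute comparably to a model."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

sensor_readings = [10.0, 12.0, 14.0, 16.0]
standardised = zscore(sensor_readings)
print([round(v, 3) for v in standardised])  # [-1.162, -0.387, 0.387, 1.162]
```

For machine-acquired IoT data, transforms like this typically run inside the feature-engineering stage of an Airflow or Kubeflow pipeline before training.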
To protect candidates from falling victim to online fraud involving fake job postings and employment offers, please be aware that our recruiters will always connect with you via @zebra.com email accounts. Applications are accepted only through our applicant tracking system, and personal identifying information is accepted only through that system. Our Talent Acquisition team will not ask you to provide personal identifying information via e-mail or outside of the system. If you are a victim of identity theft, contact your local police department.
Posted 1 week ago