
7140 Hadoop Jobs - Page 29

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

3.0 - 6.0 years

5 - 8 Lacs

Bengaluru

Work from Office

SLK Software is seeking a skilled and passionate Data Engineer to join our growing data team. The ideal candidate will have a strong understanding of data engineering principles, experience building and maintaining data pipelines, and a passion for working with data to solve business problems. Job Summary: The Data Engineer is responsible for designing, building, and maintaining the infrastructure that enables us to collect, process, and store data. This includes developing data pipelines, building data warehouses, and ensuring data quality and availability. You will play a crucial role in empowering our data scientists and analysts to extract valuable insights from our data. Responsibilities: Data Pipeline Development: Design, build, and maintain robust and scalable data pipelines to ingest, process, and transform data from various sources. Data Warehousing: Design and implement data warehouses and data lakes to store and manage large datasets. ETL Processes: Develop and optimize ETL processes.

Posted 6 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential. Title And Summary Software Engineer II We are the global technology company behind the world’s fastest payments processing network. We are a vehicle for commerce, a connection to financial systems for the previously excluded, a technology innovation lab, and the home of Priceless®. We ensure every employee has the opportunity to be a part of something bigger and to change lives. We believe as our company grows, so should you. We believe in connecting everyone to endless, priceless possibilities. The Mastercard Launch program is aimed at early career talent, to help you develop skills and gain cross-functional work experience. Over a period of 18 months, Launch participants will be assigned to a business unit, learn and develop skills, and gain valuable on the job experience. Mastercard has over 2 billion payment cards issued by 25,000+ banks across 190+ countries and territories, amassing over 10 petabytes of data. Millions of transactions are flowing to Mastercard in real-time providing an ideal environment to apply and leverage AI at scale. The AI team is responsible for building and deploying innovative AI solutions for all divisions within Mastercard securing a competitive advantage. Our objectives include achieving operational efficiency, improving customer experience, and ensuring robust value propositions of our core products (Credit, Debit, Prepaid) and services (recommendation engine, anti-money laundering, fraud risk management, cybersecurity) Role Gather relevant information to define the business problem Creative thinker capable of linking AI methodologies to identified business challenges Develop AI/ML applications leveraging the latest industry and academic advancements Ability to work cross-functionally, and across borders drawing on a broader team of colleagues to effectively execute the AI agenda All About You : Demonstrated passion for AI competing in sponsored challenges such as Kaggle Previous experience with or exposure to: Deep Learning algorithm techniques, open source tools and technologies, statistical tools, and programming environments such as Python, R, and SQL Big Data platforms such as Hadoop, Hive, Spark, GPU Clusters for deep learning Classical Machine Learning Algorithms like Logistic Regression, Decision trees, Clustering (K-means, Hierarchical and Self-organizing Maps), TSNE, PCA, Bayesian models, Time Series ARIMA/ARMA, Recommender Systems - Collaborative Filtering, FPMC, FISM, Fossil Deep Learning algorithm techniques like Random Forest, GBM, KNN, SVM, Bayesian, Text Mining techniques, Multilayer Perceptron, Neural Networks – Feedforward, CNN, LSTM’s GRU’s is a plus. 
Optimization techniques – activity regularization (L1 and L2), Adam, Adagrad, Adadelta concepts; cost functions in neural nets – contrastive loss, hinge loss, binary cross-entropy, categorical cross-entropy; developed applications in KRR, NLP, speech and image processing. Deep learning frameworks for production systems like TensorFlow, Keras (for RPD and neural net architecture evaluation), PyTorch, XGBoost, Caffe, and Theano is a plus. Concentration in Computer Science. Corporate Security Responsibility All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: Abide by Mastercard’s security policies and practices; Ensure the confidentiality and integrity of the information being accessed; Report any suspected information security violation or breach, and Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines. R-251749

Posted 6 days ago

Apply

2.0 - 7.0 years

2 - 4 Lacs

Chennai, Bengaluru

Work from Office

Required Skills: Hands-on experience in Big Data technologies. Proficient in Apache Hive writing complex queries, partitioning, bucketing, and performance tuning. Strong programming experience with PySpark – RDDs, DataFrames, Spark SQL, UDFs. Experience in working with Hadoop ecosystem (HDFS, YARN, Oozie, etc.). Good understanding of distributed computing principles and data formats like Parquet, Avro, ORC. Strong SQL and debugging skills. Familiarity with version control tools like Git and workflow schedulers like Airflow or Oozie. Preferred Skills: Exposure to cloud-based big data platforms such as AWS EMR, Azure Data Lake, or GCP Dataproc. Experience with performance tuning of Spark jobs and Hive queries. Knowledge of Scala or Java is a plus. Familiarity with data governance, data masking, and security best practices. Experience with CI/CD pipelines, Docker, or container-based deployments is an advantage.
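To illustrate the Hive and PySpark skills listed above, here is a minimal, hedged sketch of querying a partitioned Hive table from PySpark and applying a UDF; the database, table, and column names (sales_db.orders, txn_date, amount) are hypothetical and not taken from the posting.

```python
# Minimal PySpark sketch: read from a partitioned Hive table, apply a UDF,
# aggregate, and write Parquet. All names below are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType

spark = (
    SparkSession.builder
    .appName("hive-pyspark-example")
    .enableHiveSupport()          # required so Spark SQL can see Hive tables
    .getOrCreate()
)

# Filtering on the partition column (txn_date) lets Hive/Spark prune partitions
# instead of scanning the whole table.
orders = spark.sql("""
    SELECT order_id, customer_id, amount, txn_date
    FROM sales_db.orders
    WHERE txn_date = '2024-01-01'
""")

# A simple UDF; built-in functions are usually faster, but this shows the mechanics.
@F.udf(returnType=StringType())
def amount_band(amount):
    return "high" if amount and amount > 1000 else "low"

result = (
    orders
    .withColumn("band", amount_band(F.col("amount")))
    .groupBy("customer_id", "band")
    .agg(F.sum("amount").alias("total_amount"))
)

# Columnar formats such as Parquet keep downstream queries efficient.
result.write.mode("overwrite").parquet("/tmp/orders_by_band")
```

Partition pruning of this kind is the usual first step in the Hive and Spark performance tuning the role mentions.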

Posted 6 days ago

Apply

0.0 - 1.0 years

0 Lacs

Ratlam, Madhya Pradesh

On-site

# Data Analytics and Data Science Trainer

## About the Role
We are seeking an experienced Data Analytics and Data Science Trainer to develop and deliver comprehensive training programs for professionals looking to enhance their skills in data analysis, statistical methods, and machine learning. The ideal candidate will combine technical expertise with exceptional teaching abilities to create engaging learning experiences that transform complex concepts into practical skills.

## Key Responsibilities
- Design, develop, and deliver training courses covering data analytics, statistics, data visualization, and machine learning
- Create hands-on exercises, projects, and assessments that reinforce learning outcomes
- Adapt training content and delivery methods to accommodate different learning styles and skill levels
- Stay current with emerging trends and technologies in data science and analytics
- Evaluate training effectiveness and iterate on content based on participant feedback
- Provide mentorship and guidance to learners beyond formal training sessions
- Collaborate with subject matter experts to ensure technical accuracy of training materials
- Develop supplementary learning resources including documentation, guides, and reference materials

## Qualifications
- Bachelor's degree in Statistics, Computer Science, Mathematics, or related field; Master's degree preferred
- 3+ years of practical experience in data analytics, data science, or related field
- Proven track record of developing and delivering technical training
- Strong programming skills in Python or R with focus on data analysis libraries
- Experience with SQL, data visualization tools, and machine learning frameworks
- Excellent communication and presentation skills with ability to explain complex concepts clearly
- Knowledge of adult learning principles and instructional design techniques
- Experience with learning management systems and virtual training delivery platforms

## Preferred Skills
- Industry certifications in data science, analytics, or machine learning
- Experience mentoring junior data professionals
- Knowledge of business intelligence tools (Tableau, Power BI, etc.)
- Experience with big data technologies (Hadoop, Spark)
- Background in a specific industry vertical (finance, healthcare, retail, etc.)

## What We Offer
- Competitive salary and benefits package
- Continuous professional development opportunities
- Collaborative and innovative work environment
- Flexible work arrangements
- Opportunity to shape the data skills of future professionals

We value diversity of thought and experience and encourage applications from candidates of all backgrounds who are passionate about data education.

Job Type: Full-time
Ability to commute/relocate: Ratlam, Madhya Pradesh: Reliably commute or planning to relocate before starting work (Preferred)
Experience: Analytics: 1 year (Preferred)
Location: Ratlam, Madhya Pradesh (Preferred)
Work Location: In person

Posted 6 days ago

Apply

0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Roles and Responsibilities: Data Pipeline Development: Design, develop, and maintain scalable data pipelines to support ETL (Extract, Transform, Load) processes using tools like Apache Airflow, AWS Glue, or similar. Database Management: Design, optimize, and manage relational and NoSQL databases (such as MySQL, PostgreSQL, MongoDB, or Cassandra) to ensure high performance and scalability. SQL Development: Write advanced SQL queries, stored procedures, and functions to extract, transform, and analyze large datasets efficiently. Cloud Integration: Implement and manage data solutions on cloud platforms such as AWS, Azure, or Google Cloud, utilizing services like Redshift, BigQuery, or Snowflake. Data Warehousing: Contribute to the design and maintenance of data warehouses and data lakes to support analytics and BI requirements. Programming and Automation: Develop scripts and applications in Python or other programming languages to automate data processing tasks. Data Governance: Implement data quality checks, monitoring, and governance policies to ensure data accuracy, consistency, and security. Collaboration: Work closely with data scientists, analysts, and business stakeholders to understand data needs and translate them into technical solutions. Performance Optimization: Identify and resolve performance bottlenecks in data systems and optimize data storage and retrieval. Documentation: Maintain comprehensive documentation for data processes, pipelines, and infrastructure. Stay Current: Keep up-to-date with the latest trends and advancements in data engineering, big data technologies, and cloud services. Required Skills and Qualifications: Education: Bachelor’s or Master’s degree in Computer Science, Information Technology, Data Engineering, or a related field. Technical Skills: Proficiency in SQL and relational databases (PostgreSQL, MySQL, etc.). Experience with NoSQL databases (MongoDB, Cassandra, etc.). Strong programming skills in Python; familiarity with Java or Scala is a plus. Experience with data pipeline tools (Apache Airflow, Luigi, or similar). Expertise in cloud platforms (AWS, Azure, or Google Cloud) and data services (Redshift, BigQuery, Snowflake). Knowledge of big data tools like Apache Spark, Hadoop, or Kafka is a plus. Data Modeling: Experience in designing and maintaining data models for relational and non-relational databases. Analytical Skills: Strong analytical and problem-solving abilities with a focus on performance optimization and scalability. Soft Skills: Excellent verbal and written communication skills to convey technical concepts to non-technical stakeholders. Ability to work collaboratively in cross-functional teams. Certifications (Preferred): AWS Certified Data Analytics, Google Professional Data Engineer, or similar. Mindset: Eagerness to learn new technologies and adapt quickly in a fast-paced environment.
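As a rough illustration of the pipeline work this listing describes, below is a minimal sketch of a daily extract-transform-load flow using the Apache Airflow 2.x TaskFlow API; the source URL, warehouse connection string, and table names are placeholder assumptions, not details from the posting.

```python
# Hedged sketch of a daily ETL DAG (Airflow 2.4+ assumed); paths and names are hypothetical.
from datetime import datetime

import pandas as pd
from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False, tags=["etl"])
def daily_orders_etl():

    @task
    def extract() -> str:
        # Pull raw data from a source system; here a CSV export is assumed.
        df = pd.read_csv("https://example.com/exports/orders.csv")
        path = "/tmp/orders_raw.parquet"
        df.to_parquet(path, index=False)
        return path

    @task
    def transform(raw_path: str) -> str:
        # Basic cleansing: drop duplicates and rows missing required keys.
        df = pd.read_parquet(raw_path)
        df = df.drop_duplicates(subset=["order_id"]).dropna(subset=["order_id", "amount"])
        clean_path = "/tmp/orders_clean.parquet"
        df.to_parquet(clean_path, index=False)
        return clean_path

    @task
    def load(clean_path: str) -> None:
        # Load into the warehouse; a SQLAlchemy-compatible connection string is assumed.
        from sqlalchemy import create_engine

        engine = create_engine("postgresql://user:password@warehouse:5432/analytics")
        pd.read_parquet(clean_path).to_sql(
            "orders_curated", engine, if_exists="append", index=False
        )

    load(transform(extract()))


daily_orders_etl()
```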

Posted 6 days ago

Apply

4.0 - 6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Responsible for developing, optimizing, and maintaining business intelligence and data warehouse systems, ensuring secure, efficient data storage and retrieval, enabling self-service data exploration, and supporting stakeholders with insightful reporting and analysis. Grade - T5. Please note that the job will close at 12am on the posting close date, so please submit your application prior to the close date. Accountabilities What your main responsibilities are: Data Pipeline - Develop and maintain scalable data pipelines and build out new API integrations to support continuing increases in data volume and complexity. Data Integration - Connect offline and online data to continuously improve overall understanding of customer behavior and journeys for personalization; data pre-processing including collecting, parsing, managing, analyzing and visualizing large sets of data. Data Quality Management - Cleanse the data and improve data quality and readiness for analysis; drive standards, define and implement/improve data governance strategies, and enforce best practices to scale data analysis across platforms. Data Transformation - Process data by cleansing it and transforming it into the proper storage structure for querying and analysis using ETL and ELT processes. Data Enablement - Ensure data is accessible and usable to the wider enterprise to enable a deeper and more timely understanding of operations. Qualifications & Specifications Master's/Bachelor's degree in Engineering/Computer Science/Math/Statistics or equivalent. Strong programming skills in Python/PySpark/SAS. Proven experience with large data sets and related technologies – Hadoop, Hive, distributed computing systems, Spark optimization. Experience with cloud platforms (preferably Azure) and their services: Azure Data Factory (ADF), ADLS Storage, Azure DevOps. Hands-on experience with Databricks, Delta Lake, Workflows. Should have knowledge of DevOps processes and tools like Docker, CI/CD, Kubernetes, Terraform, Octopus. Hands-on experience with SQL and data modeling to support the organization's data storage and analysis needs. Experience with any BI tool like Power BI (good to have). Cloud migration experience (good to have). Cloud and Data Engineering certification (good to have). Working in an Agile environment. 4-6 years of relevant work experience is required. Experience with stakeholder management is an added advantage. What We Are Looking For Education: Bachelor's degree or equivalent in Computer Science, MIS, Mathematics, Statistics, or similar discipline. Master's degree or PhD preferred. Knowledge, Skills And Abilities: Fluency in English, Analytical Skills, Accuracy & Attention to Detail, Numerical Skills, Planning & Organizing Skills, Presentation Skills, Data Modeling and Database Design, ETL (Extract, Transform, Load) Skills, Programming Skills. FedEx was built on a philosophy that puts people first, one we take seriously. We are an equal opportunity/affirmative action employer and we are committed to a diverse, equitable, and inclusive workforce in which we enforce fair treatment, and provide growth opportunities for everyone. All qualified applicants will receive consideration for employment regardless of age, race, color, national origin, genetics, religion, gender, marital status, pregnancy (including childbirth or a related medical condition), physical or mental disability, or any other characteristic protected by applicable laws, regulations, and ordinances.
Our Company FedEx is one of the world's largest express transportation companies and has consistently been selected as one of the top 10 World’s Most Admired Companies by "Fortune" magazine. Every day FedEx delivers for its customers with transportation and business solutions, serving more than 220 countries and territories around the globe. We can serve this global network due to our outstanding team of FedEx team members, who are tasked with making every FedEx experience outstanding. Our Philosophy The People-Service-Profit philosophy (P-S-P) describes the principles that govern every FedEx decision, policy, or activity. FedEx takes care of our people; they, in turn, deliver the impeccable service demanded by our customers, who reward us with the profitability necessary to secure our future. The essential element in making the People-Service-Profit philosophy such a positive force for the company is where we close the circle, and return these profits back into the business, and invest back in our people. Our success in the industry is attributed to our people. Through our P-S-P philosophy, we have a work environment that encourages team members to be innovative in delivering the highest possible quality of service to our customers. We care for their well-being, and value their contributions to the company. Our Culture Our culture is important for many reasons, and we intentionally bring it to life through our behaviors, actions, and activities in every part of the world. The FedEx culture and values have been a cornerstone of our success and growth since we began in the early 1970’s. While other companies can copy our systems, infrastructure, and processes, our culture makes us unique and is often a differentiating factor as we compete and grow in today’s global marketplace.

Posted 6 days ago

Apply

5.0 years

7 - 9 Lacs

Hyderābād

On-site

Job description Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Senior Software Engineer. In this role, you will: Provide expert technical guidance and solutions to the POD for complex business problems. Design, develop, and implement technical solutions, ensuring they meet business requirements and are scalable and maintainable. Troubleshoot and resolve escalated technical issues promptly. Experience in providing risk assessment for new functionality and enhancements. As an ITSO (IT Service Owner), complete BOW tasks within the timelines and ensure that your application services are compliant with vulnerability, ICE, resiliency, and contingency testing. As an ITSO, ensure that applications have an effective escalation and support framework in place for all IT production incidents, one that shall meet the agreed operational and service level agreements of the business. Accountable for leading the POD. Sound knowledge of corporate finance, exhibiting knowledge of interest rate risk in the banking book. Experience with Agile delivery methodologies (JIRA, Scrum, FDD, SAFe). Experience with DevOps tools (Jenkins, Ansible, Git). Requirements To be successful in this role, you should meet the following requirements: Graduation in technology (B.E, B.Tech & above) with 5+ years of IT experience. Strong knowledge of the Pentaho ETL tool, including MapReduce build knowledge. Writing complex SQL queries. Good knowledge of shell scripting, Python, Java. Exposure to Hadoop and Big Data is a plus. Infrastructure as Code & CI/CD – Git, Ansible, Jenkins. Experience working in an Agile/DevOps environment. Monitoring, alerting, incident tracking, reporting, etc. A good understanding of Google Cloud and exposure to the latest tools/technologies will be an added advantage. You’ll achieve more when you join HSBC. www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India

Posted 6 days ago

Apply

5.0 years

6 - 9 Lacs

Hyderābād

On-site

Our vision is to transform how the world uses information to enrich life for all . Micron Technology is a world leader in innovating memory and storage solutions that accelerate the transformation of information into intelligence, inspiring the world to learn, communicate and advance faster than ever. Join Micron’s ambitious Global Facilities SMART Facilities team, where you will play a pivotal role in transforming data into actionable insights to optimize our world-class facilities! We are seeking a dynamic and innovative manager to lead our efforts and develop our team members. Responsibilities: Lead and manage a team of data scientists and data engineers, encouraging a collaborative and innovative environment. Develop and implement data strategies that support the company's global facilities operations. Create, build, and maintain data pipelines to process and analyze large volumes of facilities data. Design and deploy machine learning models and data analytics tools to optimize facilities management and operations. Collaborate with cross-functional teams to integrate data solutions into existing systems. Design, develop, deploy, and maintain AI solutions that provide operational benefits. Minimum Qualifications: Bachelor’s or Master’s degree in Data Science, Computer Science, Engineering, or a related field. Minimum of 5 years of experience in data science and data engineering roles. Proven track record of leading and managing high-performing teams. Excellent communication and interpersonal skills. Strong problem-solving and analytical abilities. Preferred Qualifications: Experience with data warehousing solutions like Amazon Redshift, Google BigQuery, or Snowflake. Proficiency in programming languages such as Python, Java, and SQL. Knowledge of sophisticated analytics and predictive modeling. Familiarity with cloud computing platforms such as AWS, Azure, and Google Cloud. Understanding of big data technologies and frameworks like Hadoop and Spark. This is an outstanding chance to create a significant impact on the efficiency and effectiveness of Micron’s global facilities through the power of data. About Micron Technology, Inc. We are an industry leader in innovative memory and storage solutions transforming how the world uses information to enrich life for all . With a relentless focus on our customers, technology leadership, and manufacturing and operational excellence, Micron delivers a rich portfolio of high-performance DRAM, NAND, and NOR memory and storage products through our Micron® and Crucial® brands. Every day, the innovations that our people create fuel the data economy, enabling advances in artificial intelligence and 5G applications that unleash opportunities — from the data center to the intelligent edge and across the client and mobile user experience. To learn more, please visit micron.com/careers All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status. To request assistance with the application process and/or for reasonable accommodations, please contact hrsupport_india@micron.com Micron Prohibits the use of child labor and complies with all applicable laws, rules, regulations, and other international and industry labor standards. Micron does not charge candidates any recruitment fees or unlawfully collect any other payment from candidates as consideration for their employment with Micron. 
AI alert: Candidates are encouraged to use AI tools to enhance their resume and/or application materials. However, all information provided must be accurate and reflect the candidate's true skills and experiences. Misuse of AI to fabricate or misrepresent qualifications will result in immediate disqualification. Fraud alert: Micron advises job seekers to be cautious of unsolicited job offers and to verify the authenticity of any communication claiming to be from Micron by checking the official Micron careers website.

Posted 6 days ago

Apply

5.0 - 7.0 years

6 - 9 Lacs

Hyderābād

On-site

Job Description: About Us At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us! Global Business Services Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services. Process Overview* Policy Oversight and Governance is responsible for management, oversight, and creation of the Data, Records, Regulatory and risk Reporting, Standards and quality assurance of adherence. The process revolves around building a robust governance structure to ensure policy adherence across enterprise. Overall QA process shall ensure quality and standard conformance based on set policy guidelines. Job Description* As part of the ESDGO, Policy and QA Enablement team, the role is focused on supporting policy owners and QA owners through solution that provides execution and operational efficiencies. This role requires the ability to develop and own end to end processes, maintain existing code, analyze report requirements, and develop based on requirements using SharePoint, JavaScript, Tableau, and/or other technology recommended tools. Knowledge of Regulatory Reporting Policy and Risk Data Aggregation Policy is a benefit. This position requires the ability to communicate effectively and work across multiple teams to support various initiatives. Self motivated, responsible, and due diligence are key drivers to be successful in this role. Responsibilities* Responsible for supporting business process automation through SharePoint, JavaScript and other necessary tools. Responsible for performing more complex analysis for minimizing risk and operating losses and/or other financial and marketing exposures. Utilizes portfolio trends to propose policy/procedural changes within segmentation structure to produce optimal results. Excels at risk/reward trade off. Build relationships with internal partners. Duties primarily include the regular use of discretion, independent judgment, the ability to communicate with multiple levels of management and the utilization of core leadership behaviors. 
Experience with systems functional analysis, technology business analysis, and a basic understanding of the different technical platforms, SharePoint (or similar tools), databases, and related technologies Proficiency in SharePoint, DFFS, Nintex, JavaScript/jQuery Good expertise in SQL/T-SQL Experience with enterprise databases: MS SQL Server, Oracle, Hadoop, Teradata Ability to work in a fast-paced environment Ability to translate high-level business requirements into technical data requirements Strong communication skills (verbal, written and presentations) Strong attention to detail and due diligence Requirements* Education* Graduation / Post Graduation Certifications If Any: Experience Range* 5 - 7 Years Foundational Skills* Experience with systems functional analysis, technology business analysis, and a basic understanding of the different technical platforms, SharePoint (or similar tools), databases, and related technologies Proficiency in SharePoint, DFFS, Nintex, JavaScript/jQuery Good expertise in SQL/T-SQL Desired Skills* Tableau expertise Alteryx experience Proficiency in SQL/T-SQL Work Timings* 11:30 am – 8:30 pm IST Job Location* Hyderabad

Posted 6 days ago

Apply

0 years

2 - 9 Lacs

Hyderābād

On-site

Job description We are seeking an experienced professional for the role of Consultant Specialist. In this role, you will have to: Be a senior full stack automation test engineer with experience and knowledge in software automation testing using Tosca, Selenium and other tools, knowledge of ETL tools like DataStage, Dataflow, SQL, shell scripting, Control-M, API development, design patterns, SDLC, IaC tools, testing and site reliability engineering. A deep understanding of desktop, web, and data warehouse application automation testing, API testing, and related ways to design and develop automation frameworks is needed. Proven experience in writing automation test scripts, conducting reviews, and building test packs for regression, smoke, and integration testing scenarios. Identify ways to increase test coverage, provide metric-based test status reporting, and manage the defect lifecycle. Define and implement best practices for software automation testing (framework, patterns, standards, reviews, coverage, requirement traceability), including testing methodologies. Be a generalist with the breadth and depth of experience in CI/CD best practices and core experience in testing (i.e. TDD/BDD/automated testing/contract testing/API testing/desktop/web apps, DW test automation). Able to see a problem or an opportunity with the ability to engineer a solution, be respected for what they deliver not just what they say, think about the business impact of their work, and take a holistic view of problem-solving. Apply thinking to many problems across multiple technical domains and suggest ways to solve the problems. Contribute to architectural discussions by asking the right questions to ensure a solution matches the business needs. Excellent verbal and written communication skills to articulate technical concepts to both technical and non-technical stakeholders. Represent at Scrum meetings and all other key project meetings and provide a single point of accountability and escalation for automation testing within the scrum teams. Work with cross-functional teams, with the opportunity to work with software product, development, and support teams, and be capable of handling tasks to accelerate testing delivery and improve quality for applications at HSBC. Be willing to adapt, learn innovative technologies/tools, and be flexible to work on projects as demanded by the business. Requirements To be successful in this role, you must meet the following requirements: Experience in software testing approaches to automation testing using Tosca, Selenium, and the Cucumber BDD framework. Experienced in writing test plans, test strategy, and test data management, including test artifacts management for both automation and manual testing. Experience in setting up CI/CD pipelines and working with GitHub and Jenkins, along with integration with Cucumber and Jira. Experience in agile methodology and proven experience working on agile projects. Experience in analysis of bug tracking, prioritizing, and bug reporting with bug-tracking tools. API automation using REST Assured. Communicate effectively with stakeholders across the Bank. Experience in SQL, Unix, Control-M, ETL, data testing, API testing. Expert-level experience with Jira and Zephyr. Good to have skills: Knowledge of the latest technologies and tools, such as GitHub Copilot, Python scripting, Tricentis Tosca, Dataflow, Hive, DevOps, REST API, Hadoop, the Kafka framework, GCP, and AWS, will be an added advantage. You’ll achieve more when you join HSBC.
www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India

Posted 6 days ago

Apply

0 years

6 - 9 Lacs

Hyderābād

On-site

Job description Some careers have more impact than others. If you’re looking for a career where you can make a real impression, join HSBC and discover how valued you’ll be. HSBC is one of the largest banking and financial services organisations in the world, with operations in 62 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Data Analyst Principal responsibilities Performing exploratory data analysis across one or multiple data domains / business subject areas to understand the data’s structure and relationships that would support business data requirements. Perform queries on data platforms to validate analysis / hypothesis and ensure good quality / trusted data are being identified for the business. Collaborate with upstream data domain / data platform owners to source trusted enterprise-level data, ensuring good quality data is provisioned downstream. Participate in agile ceremonies within the assigned pods, adopting agile ways of working and best practices. Ensure deliverables meet data governance standards, such as data accuracy, data lineage transparency, data consistency and security. Understand, follow and demonstrate compliance with all relevant internal and external rules, regulations and procedures that apply to the conduct of the business in which you are involved, specifically Internal Controls and any Compliance policy including, inter alia, the Group Compliance policy. Maintain HSBC Internal Control standards, including the timely implementation of internal and external audit points together with any issues raised by external regulators. Be aware of the Operational Risk scenario associated with your role and act in a manner that takes account of operational risk considerations. This job description is non-contractual and is intended only as a summary of your role and responsibilities from time to time. This document will be subject to review by you and your line manager as appropriate during the course of your employment. The jobholder will continually reassess the operational risks associated with the role and inherent in the business, taking account of changing legal and regulatory requirements, operating procedures and practices, management restructurings, and the impact of new technology This will be achieved by ensuring all actions take into account the likelihood of operational risk events, and by addressing any areas of concern in conjunction with line management and/or the appropriate department. The role will implement the Operational Risk control framework and per the BRCMs – “Three Lines of Defence” Requirements University degree in relevant disciplines. Strong analytical and problem-solving skills. Experience working within the Hadoop and GCP ecosystems in addition to strong technical skills in analytical languages such as Python, R, SQL, SAS. Good understanding of banking operations and processes, preferably in Risk, Compliance and Finance functions. Proven experience working in Agile environments (Kanban / Scrum) and familiarity with Agile tools like JIRA, Confluence, MS Teams & SharePoint. Excellent stakeholder engagement and management skills. Ability to navigate within the organization Proficient skills in MS Excel and PowerPoint. 
You’ll achieve more at HSBC. HSBC is an equal opportunity employer committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. We encourage applications from all suitably qualified persons irrespective of, but not limited to, their gender or genetic information, sexual orientation, ethnicity, religion, social status, medical care leave requirements, political affiliation, people with disabilities, color, national origin, veteran status, etc. We consider all applications based on merit and suitability to the role. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. ***Issued By HSBC Electronic Data Processing (India) Private LTD***

Posted 6 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description Enthusiastic and self-motivated, with the ability to execute Supply Chain Analytics projects proactively. Meticulous attention to detail, with an overall passion for continuous improvement. Innovative and creative, with a logical and methodical approach to problem solving. Credible and articulate, with excellent communication, presentation, and interpersonal skills. Responsibilities Execute high-impact business projects with time-bound and effective project management, leveraging tools like Rally and Jira. Gather business requirements, convert them into analytical problems, and identify relevant tools, techniques, and an overall framework to provide solutions. Use statistical methodologies leveraging analytical tools to support different business initiatives. Continually enhance statistical techniques and their applications in solving business objectives. Compile and analyze the results from modeling output and translate them into actionable insights through dashboards. Acquire and share deep knowledge of data utilized by the team and its business partners. Participate in global conference calls and meetings as needed and manage multiple customer interfaces. Execute analytics special studies and ad hoc analyses with a quick turnaround time. Evaluate new tools and technologies to improve analytical processes. Efforts will focus on the following key areas: Domain – Supply Chain Analytics. Hands-on with Machine Learning. Good understanding of various classical statistical techniques such as Regression and Multivariate Analysis. Data & Text Mining, NLP, Gen AI, Large Language Models, Time Series based forecasting modeling. Experience with SQL and data warehousing (e.g. GCP/Hadoop/Teradata/Oracle/DB2). Experience using tools in BI, ETL, Reporting/Visualization/Dashboards - Qlik Sense/Power BI etc. Programming experience in languages like Python. Exposure to Big Data based analytical solutions. Good soft skills - able to communicate clearly with stakeholders. Good analysis and problem-solving skills. Ability to get insights from data, provide visualization, and storytelling. Flexibility to explore and work with newer technologies and tools. Ability to learn quickly, adapt, and set direction when faced with ambiguous scenarios. Excellent collaborative communication and team skills. Qualifications Bachelor's/Master's degree. Candidates should have significant hands-on experience with analytics projects or related quantitative techniques across various functions. Candidates will be expected to successfully prioritize and manage multiple analytical projects. Good technical depth with strong analytical/programming skills and the ability to apply technical knowledge. Experience in Python, SQL, GCP or any other cloud platforms highly desired.

Posted 6 days ago

Apply

3.0 - 5.0 years

10 - 12 Lacs

Cochin

On-site

Location: Cochin Employment Type: Full-time Department: Data Engineer About Us Digitrell, a JoeNJack Touch Venture, is a leading digital transformation company specializing in web development, mobile app development, data solutions, SAP implementation, and IT support. At Digitrell, we’re building the future of content creation through cutting-edge Generative AI. Our focus lies in combining creativity with AI-powered tools to bring visionary storytelling to life—at scale and speed. From realistic visuals to immersive video experiences, we’re on a mission to redefine what’s possible with prompt engineering and generative video technology. We are seeking an experienced and driven Data Engineer with 3-5 years of hands-on experience in building scalable data infrastructure and systems. You will play a key role in designing and developing robust, high-performance ETL pipelines and managing large-scale datasets to support critical business functions. This role requires deep technical expertise, strong problem-solving skills, and the ability to thrive in a fast-paced, evolving environment. Key Responsibilities 1) Design, develop, and maintain scalable and reliable ETL/ELT pipelines for processing large volumes of data (terabytes and beyond). 2) Model and structure data for performance, scalability, and usability. 3) Work with cloud infrastructure (preferably Azure) to build and optimize data workflows. 4) Build and manage data lake/lakehouse architectures in alignment with best practices. 5) Optimize ETL performance and manage cost-effective data operations. 6) Collaborate closely with cross-functional teams including data science, analytics, and software engineering. 7) Ensure data quality, integrity, and security across all stages of the data lifecycle. Required Skills & Qualifications 1) 3 to 5 years of relevant experience in data engineering. 2) Advanced proficiency in Python, including libraries such as Pandas and NumPy. 3) Strong skills in SQL for complex data manipulation and analysis. 4) Hands-on experience with Apache Spark, Hadoop, or similar distributed systems. 5) Proven track record of handling large-scale datasets (TBs) in production environments. 6) Cloud development experience with Azure(preferred), AWS, or GCP. 7) Solid understanding of data lake and data lakehouse architectures. 8) Expertise in ETL performance tuning and cost optimization techniques. 9) Knowledge of data structures, algorithms, and modern software engineering practices. Soft Skills 1) Strong communication skills with the ability to explain complex technical concepts clearly and concisely. 2) Self-starter who learns quickly and takes ownership. 3) High attention to detail with a strong sense of data quality and reliability. 4) Comfortable working in an agile, fast-changing environment with incomplete requirements. Preferred Qualifications 1) Experience with tools like Azure Data Factory, or similar. 2) Familiarity with CI/CD and DevOps in the context of data engineering. 3) Knowledge of data governance, cataloging, and access control principles. Job Types: Full-time, Permanent Pay: ₹1,000,000.00 - ₹1,200,000.00 per year Work Location: In person Application Deadline: 28/07/2025 Expected Start Date: 28/07/2025
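As a small, hedged illustration of the large-dataset Python skills this posting asks for, the sketch below aggregates an oversized CSV in bounded memory by processing it in chunks with pandas and NumPy; the file paths and column names are made up for the example.

```python
# Illustrative sketch only: process a large CSV in chunks so the full dataset
# never has to fit in memory. Paths and columns are hypothetical.
import numpy as np
import pandas as pd

CHUNK_SIZE = 1_000_000  # rows per chunk; tune to available memory

totals = {}

for chunk in pd.read_csv("/data/raw/events.csv", chunksize=CHUNK_SIZE):
    # Cleansing: drop rows with missing keys and clip obviously bad values.
    chunk = chunk.dropna(subset=["user_id", "amount"])
    chunk["amount"] = np.clip(chunk["amount"], 0, None)

    # Partial aggregation per chunk keeps memory usage bounded.
    grouped = chunk.groupby("user_id")["amount"].sum()
    for user_id, amount in grouped.items():
        totals[user_id] = totals.get(user_id, 0.0) + amount

# Persist the aggregate as Parquet, a typical columnar format for lake/lakehouse storage.
result = pd.DataFrame({"user_id": list(totals), "total_amount": list(totals.values())})
result.to_parquet("/data/curated/user_totals.parquet", index=False)
```

At true terabyte scale the same pattern would normally move to a distributed engine such as Spark, which the posting also lists.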

Posted 6 days ago

Apply

5.0 years

0 Lacs

India

On-site

Data Engineer Astreya offers comprehensive IT support and managed services. These services include Data Center and Network Management, Digital Workplace Services (like Service Desk, Audio Visual, and IT Asset Management), as well as Next-Gen Digital Engineering services encompassing Software Engineering, Data Engineering, and cybersecurity solutions. Astreya's expertise lies in creating seamless interactions between people and technology to help organizations achieve operational excellence and growth. Job Description We are seeking an experienced Data Engineer to join our analytics division. You will be aligned with our Data Analytics and BI vertical. You will conceptualize and own the build-out of problem-solving data marts for consumption by data science and BI teams, evaluating design and operational tradeoffs within systems. Design, develop, and maintain robust data pipelines and ETL processes using data platforms for the organization's centralized data warehouse. Create or contribute to frameworks that improve the efficacy of logging data, while working with the Engineering team to triage and resolve issues. Validate data integrity throughout the collection process, performing data profiling to identify and comprehend data anomalies. Influence product and cross-functional (engineering, data science, operations, strategy) teams to identify data opportunities to drive impact. Requirements Experience & Education Bachelor's degree in Computer Science, Mathematics, a related field, or equivalent practical experience. 5 years of experience coding with SQL or one or more programming languages (e.g., Python, Java, R, etc.) for data manipulation, analysis, and automation. 5 years of experience designing data pipelines (ETL) and dimensional data modeling for synchronous and asynchronous system integration and implementation. Experience in managing and troubleshooting technical issues and working with Engineering and Sales Services teams. Preferred qualifications: Master’s degree in Engineering, Computer Science, Business, or a related field. Experience with cloud-based services relevant to data engineering, data storage, data processing, data warehousing, real-time streaming, and serverless computing. Experience with experimentation infrastructure and measurement approaches in a technology platform. Experience with data processing software (e.g., Hadoop, Spark) and algorithms (e.g., MapReduce, Flume).
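To give a flavour of the data-profiling and validation work mentioned above, here is a minimal pandas sketch that reports null rates, distinct counts, and duplicate keys before a data mart load; the sample DataFrame, key column, and 5% threshold are illustrative assumptions only.

```python
# Hedged data-profiling sketch; thresholds and column names are assumptions.
import pandas as pd


def profile(df: pd.DataFrame, key_column: str) -> pd.DataFrame:
    """Return per-column null rates plus simple anomaly flags for a data mart load."""
    report = pd.DataFrame({
        "dtype": df.dtypes.astype(str),
        "null_rate": df.isna().mean(),
        "distinct_values": df.nunique(),
    })
    report["suspicious"] = report["null_rate"] > 0.05  # flag columns >5% null
    duplicate_keys = df.duplicated(subset=[key_column]).sum()
    print(f"{duplicate_keys} duplicate value(s) found in key column '{key_column}'")
    return report


if __name__ == "__main__":
    sample = pd.DataFrame({
        "order_id": [1, 2, 2, 4],
        "amount": [10.0, None, 5.0, 7.5],
        "country": ["IN", "IN", "US", None],
    })
    print(profile(sample, key_column="order_id"))
```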

Posted 6 days ago

Apply

4.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY-Consulting - Data and Analytics – Senior – AWS EY's Consulting Services is a unique, industry-focused business unit that provides a broad range of integrated services that leverage deep industry experience with strong functional and technical capabilities and product knowledge. EY’s financial services practice provides integrated Consulting services to financial institutions and other capital markets participants, including commercial banks, retail banks, investment banks, broker-dealers & asset management firms, and insurance firms from leading Fortune 500 Companies. Within EY’s Consulting Practice, Data and Analytics team solves big, complex issues and capitalize on opportunities to deliver better working outcomes that help expand and safeguard the businesses, now and in the future. This way we help create a compelling business case for embedding the right analytical practice at the heart of client’s decision-making. The opportunity We’re looking for Senior – Cloud Experts with design experience in Bigdata cloud implementations. Your Key Responsibilities AWS Experience with Kafka, Flume and AWS tool stack such as Redshift and Kinesis are preferred. Experience building on AWS using S3, EC2, Redshift, Glue,EMR, DynamoDB, Lambda, QuickSight, etc. Experience in Pyspark/Spark / Scala Experience using software version control tools (Git, Jenkins, Apache Subversion) AWS certifications or other related professional technical certifications Experience with cloud or on-premise middleware and other enterprise integration technologies Experience in writing MapReduce and/or Spark jobs Demonstrated strength in architecting data warehouse solutions and integrating technical components Good analytical skills with excellent knowledge of SQL. 4+ years of work experience with very large data warehousing environment Excellent communication skills, both written and verbal 7+ years of experience with detailed knowledge of data warehouse technical architectures, infrastructure components, ETL/ ELT and reporting/analytic tools. 4+ years of experience data modelling concepts 3+ years of Python and/or Java development experience 3+ years’ experience in Big Data stack environments (EMR, Hadoop, MapReduce, Hive) Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution. Excellent communicator (written and verbal formal and informal). Ability to multi-task under pressure and work independently with minimal supervision. Strong verbal and written communication skills. Must be a team player and enjoy working in a cooperative and collaborative team environment. Adaptable to new technologies and standards. Skills And Attributes For Success Use an issue-based approach to deliver growth, market and portfolio strategy engagements for corporates Strong communication, presentation and team building skills and experience in producing high quality reports, papers, and presentations. Experience in executing and managing research and analysis of companies and markets, preferably from a commercial due diligence standpoint. Exposure to tools like Tableau, Alteryx etc. 
To qualify for the role, you must have BE/BTech/MCA/MBA Minimum 4+ years hand-on experience in one or more key areas. Minimum 7+ years industry experience What We Look For A Team of people with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment An opportunity to be a part of market-leading, multi-disciplinary team of 1400 + professionals, in the only integrated global transaction business worldwide. Opportunities to work with EY Consulting practices globally with leading businesses across a range of industries What Working At EY Offers At EY, we’re dedicated to helping our clients, from start–ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around Opportunities to develop new skills and progress your career The freedom and flexibility to handle your role in a way that’s right for you EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 6 days ago

Apply

175.0 years

9 - 9 Lacs

Gurgaon

On-site

At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express. Join Team Amex and let's lead the way together. From building next-generation apps and microservices in Kotlin to using AI to help protect our franchise and customers from fraud, you could be doing entrepreneurial work that brings our iconic, global brand into the future. As a part of our tech team, we could work together to bring ground-breaking and diverse ideas to life that power our digital systems, services, products and platforms. If you love to work with APIs, contribute to open source, or use the latest technologies, we’ll support you with an open environment and learning culture. Function Description: American Express is looking for energetic, successful and highly skilled Engineers to help shape our technology and product roadmap. Our Software Engineers not only understand how technology works, but how that technology intersects with the people who count on it every day. Today, innovative ideas, insight and new points of view are at the core of how we create a more powerful, personal and fulfilling experience for our customers and colleagues, with batch/real-time analytical solutions using ground-breaking technologies to deliver innovative solutions across multiple business units. This Engineering role is based in our Global Risk and Compliance Technology organization and will have a keen focus on platform modernization, bringing to life the latest technology stacks to support the ongoing needs of the business as well as compliance against global regulatory requirements. Qualifications: Support the Compliance and Operations Risk data delivery team in India to lead and assist in the design and actual development of applications. Responsible for specific functional areas within the team, this involves project management and taking business specifications. The individual should be able to independently run projects/tasks delegated to them. Technology Skills: Bachelor degree in Engineering or Computer Science or equivalent 2 to 5 years experience is required GCP professional certification - Data Engineer Expert in Google BigQuery tool for data warehousing needs. Experience on Big Data (Spark Core and Hive) preferred Familiar with GCP offerings, experience building data pipelines on GCP a plus Hadoop Architecture, having knowledge on Hadoop, Map Reduce, Hbase. UNIX shell scripting experience is good to have Creative problem solving (Innovative) We back you with benefits that support your holistic well-being so you can be and deliver your best. 
This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally: Competitive base salaries Bonus incentives Support for financial-well-being and retirement Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location) Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need Generous paid parental leave policies (depending on your location) Free access to global on-site wellness centers staffed with nurses and doctors (depending on location) Free and confidential counseling support through our Healthy Minds program Career development and training opportunities American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.

Posted 6 days ago

Apply

1.0 - 3.0 years

5 - 6 Lacs

Mohali

On-site

What You Need for this Position:
Bachelor's or Master's degree in Computer Science, Data Science, or a related field
Proven experience (1-3 years) in machine learning, data science, or AI roles
Proficiency in programming languages such as Python, R, or Java
Experience with machine learning frameworks and libraries (e.g., TensorFlow, PyTorch, scikit-learn)
Strong understanding of algorithms, data structures, and software design principles
Familiarity with cloud platforms (e.g., AWS, Azure) and big data technologies (e.g., Hadoop, Spark)
Excellent problem-solving skills and analytical thinking
Strong communication and collaboration skills
Ability to work methodically and meet deadlines

What You Will Be Doing:
Develop and implement machine learning models and algorithms for various applications
Collaborate with cross-functional teams to understand project requirements and deliver AI solutions
Preprocess and analyze large datasets to extract meaningful insights
Design and conduct experiments to evaluate model performance and fine-tune algorithms
Deploy machine learning models to production and ensure scalability and reliability
Stay updated with the latest advancements in AI and machine learning technologies
Document model development processes and maintain comprehensive project documentation
Participate in code reviews and provide constructive feedback to team members
Contribute to the continuous improvement of our AI/ML capabilities and best practices

Top Reasons to Work with Us:
Join a fast-paced team of like-minded individuals who share your passion and tackle new challenges every day
Work alongside an exceptionally talented and intellectual team, gaining exposure to new concepts and technologies
Enjoy a friendly, high-growth work environment that fosters learning and development
Competitive compensation package based on experience and skill

Job Type: Full-time
Pay: ₹500,000.00 - ₹600,000.00 per year
Schedule: Day shift (fixed, morning)
Experience: total work: 2 years (Required)
Work Location: In person
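For illustration only, here is a minimal scikit-learn sketch of the train-and-evaluate loop this role describes, using a bundled toy dataset; none of the names or figures come from the listing.

```python
# Illustrative model-development loop: split, train, evaluate.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Hold-out evaluation before any fine-tuning or deployment.
print(classification_report(y_test, model.predict(X_test)))
```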

Posted 6 days ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role Summary: Data Engineer

Position Overview
We are searching for a talented and motivated Data Engineer to join our team. The ideal candidate will have expertise in data modeling, analytical thinking, and developing ETL processes using Python. In this role, you will be pivotal in transforming raw data from landing tables into reliable, curated master tables, ensuring accuracy, accessibility, and integrity within our Snowflake data platform.

Main Responsibilities
Design, Develop, and Maintain ETL Processes: Build and maintain scalable ETL pipelines in Python to extract, transform, and load data into Snowflake master tables. Automate data mastering, manage incremental updates, and ensure consistency between landing and master tables.
Data Modeling: Create and optimize logical and physical data models in Snowflake for efficient querying and reporting. Translate business needs into well-structured data models, defining tables, keys, relationships, and constraints.
Analytical Thinking and Problem Solving: Analyze complex datasets, identify trends, and work with analysts and stakeholders to resolve data challenges. Investigate data quality issues and design robust solutions aligned with business goals.
Data Quality and Governance: Implement routines for data validation, cleansing, and error handling to ensure accuracy and reliability in Snowflake. Support the creation and application of data governance standards.
Automation and Optimization: Seek automation opportunities for data engineering tasks, enhance ETL processes for performance, and scale systems as data volumes grow within Snowflake.
Documentation and Communication: Maintain thorough documentation of data flows, models, transformation logic, and pipeline configurations. Clearly communicate technical concepts to all stakeholders.
Collaboration: Work closely with data scientists, analysts, and engineers to deliver integrated data solutions, contributing to cross-functional projects with your data engineering expertise.

Required Qualifications
Bachelor's or Master's degree in Computer Science, IT, Engineering, Mathematics, or a related field
At least 2 years of experience as a Data Engineer or in a similar role
Strong Python skills, including experience developing ETL pipelines and automation scripts
Solid understanding of relational and dimensional data modeling
Experience with Snowflake for SQL, schema design, and managing pipelines
Proficient in SQL for querying and data analysis in Snowflake
Strong analytical and problem-solving skills
Familiarity with data warehousing and best practices
Knowledge of data quality, cleansing, and validation techniques
Experience with version control systems like Git and collaborative workflows
Excellent communication, both verbal and written

Preferred Qualifications
In-depth knowledge of Snowflake features like Snowpipe, Streams, Tasks, and Time Travel
Experience with cloud platforms such as AWS, Azure, or Google Cloud
Familiarity with workflow orchestration tools like Apache Airflow or Luigi
Understanding of big data tools like Spark, Hadoop, or distributed databases
Experience with CI/CD pipelines in data engineering
Background in streaming data and real-time processing
Experience deploying data pipelines in production

Sample Responsibilities in Practice
Develop automated ETL pipelines in Python to ingest daily CSVs into a Snowflake landing table, validate data, and merge clean records into a master table, handling duplicates and change tracking.
Design scalable data models in Snowflake to support business intelligence reporting, ensuring both integrity and query performance.
Collaborate with business analysts to adapt data models and pipelines to evolving needs.
Monitor pipeline performance and troubleshoot inconsistencies, documenting causes and solutions.

Key Skills and Competencies
Technical Skills: Python (including pandas, SQLAlchemy); Snowflake SQL and management; schema design; ETL process development
Analytical Thinking: Ability to translate business requirements into technical solutions; strong troubleshooting skills
Collaboration and Communication: Effective team player; clear technical documentation
Adaptability: Willingness to adopt new technologies and proactively improve processes

Our Data Environment
Our organization manages diverse data sources, including transactional systems, third-party APIs, and unstructured data. We are dedicated to building a top-tier Snowflake data infrastructure for analytics, reporting, and machine learning. In this role, you will influence our data architecture, implement modern data engineering practices, and contribute to a culture driven by data.
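The first sample responsibility above (merging validated landing records into a master table while handling duplicates) is commonly implemented with a Snowflake MERGE driven from Python. Below is a hedged sketch; the connection details, table names (landing_customers, master_customers) and columns are assumptions for illustration only.

```python
# Hedged sketch of a landing -> master merge via the Snowflake Python connector.
# All identifiers and credentials are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="CORE",
)

MERGE_SQL = """
MERGE INTO master_customers AS m
USING (
    -- Keep only the newest landing row per key to handle duplicates.
    SELECT customer_id, name, email, updated_at
    FROM landing_customers
    QUALIFY ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY updated_at DESC) = 1
) AS l
ON m.customer_id = l.customer_id
WHEN MATCHED AND l.updated_at > m.updated_at THEN
    UPDATE SET name = l.name, email = l.email, updated_at = l.updated_at
WHEN NOT MATCHED THEN
    INSERT (customer_id, name, email, updated_at)
    VALUES (l.customer_id, l.name, l.email, l.updated_at)
"""

with conn.cursor() as cur:
    cur.execute(MERGE_SQL)
conn.close()
```

The QUALIFY clause deduplicates the landing data before the merge, which is one common way to provide the change tracking the listing mentions.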

Posted 6 days ago

Apply

3.0 - 6.0 years

13 - 18 Lacs

Bengaluru

Work from Office

We are looking to hire a Data Engineer for the Platform Engineering team. The team is a collection of highly skilled individuals, ranging from development to operations, with a security-first mindset who strive to push the boundaries of technology. We champion a DevSecOps culture and raise the bar on how and when we deploy applications to production. Our core principles are centered around automation, testing, quality, and immutability, all via code. The role is responsible for building self-service capabilities that improve our security posture and productivity and reduce time to market, with automation at the core of these objectives. The individual collaborates with teams across the organization to ensure applications are designed for Continuous Delivery (CD) and are well-architected for their targeted platform, which can be on-premises or in the cloud. If you are passionate about developer productivity, cloud native applications, and container orchestration, this job is for you!

Principal Accountabilities:
The incumbent is mentored by senior individuals on the team to capture the flow and bottlenecks in the holistic IT delivery process and define future tool sets

Skills and Software Requirements:
Experience with a language such as Python, Go, SQL, Java, or Scala
GCP data services (BigQuery, Dataflow, Dataproc, Cloud Composer, Pub/Sub, Google Cloud Storage, IAM)
Experience with Jenkins, Maven, Git, Ansible, or Chef
Experience working with containers, orchestration tools (such as Kubernetes, Mesos, Docker Swarm) and container registries (GCE, Docker Hub, etc.)
Experience with [SPI]aaS - Software-as-a-Service, Platform-as-a-Service, or Infrastructure-as-a-Service
Acquire, cleanse, and ingest structured and unstructured data on the cloud
Combine data from disparate sources into a single, unified, authoritative view of data (e.g., a Data Lake)
Enable and support data movement from one system or service to another
Experience implementing or supporting automated solutions to technical problems
Experience working in a team environment, proactively executing tasks while meeting agreed delivery timelines
Ability to contribute to effective and timely solutions
Excellent oral and written communication skills
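As one hedged example of "acquire, cleanse, and ingest structured data on the cloud", the sketch below loads a CSV from Google Cloud Storage into a BigQuery table with the google-cloud-bigquery client; the bucket, dataset and table names are assumptions, not part of the listing.

```python
# Minimal sketch: ingest a CSV from Cloud Storage into BigQuery.
# Bucket, dataset and table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,          # skip the header row
    autodetect=True,              # infer the schema for illustration
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(
    "gs://my-raw-bucket/events/2024-01-01.csv",
    "my-gcp-project.data_lake.events",
    job_config=job_config,
)
load_job.result()  # block until the load job finishes

table = client.get_table("my-gcp-project.data_lake.events")
print(f"Table now holds {table.num_rows} rows")
```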

Posted 6 days ago

Apply

5.0 - 6.0 years

8 - 15 Lacs

India

On-site

We are seeking a highly skilled Python Developer with expertise in Machine Learning and Data Analytics to join our team. The ideal candidate should have 5-6 years of experience in developing end-to-end ML-driven applications and handling data-driven projects independently. You will be responsible for designing, developing, and deploying Python-based applications that leverage data analytics, statistical modeling, and machine learning techniques.

Key Responsibilities:
Design, develop, and deploy Python applications for data analytics and machine learning
Work independently on machine learning model development, evaluation, and optimization
Develop ETL pipelines and process large-scale datasets for analysis
Implement scalable and efficient algorithms for predictive analytics and automation
Optimize code for performance, scalability, and maintainability
Collaborate with stakeholders to understand business requirements and translate them into technical solutions
Integrate APIs and third-party tools to enhance functionality
Document processes, code, and best practices for maintainability

Required Skills & Qualifications:
5-6 years of professional experience in Python application development
Strong expertise in Machine Learning, Data Analytics, and AI frameworks (TensorFlow, PyTorch, scikit-learn, etc.)
Proficiency in Python libraries such as Pandas, NumPy, SciPy, and Matplotlib
Experience with SQL and NoSQL databases (PostgreSQL, MongoDB, etc.)
Hands-on experience with big data technologies (Apache Spark, Delta Lake, Hadoop, etc.)
Strong experience in developing APIs and microservices using FastAPI, Flask, or Django
Good understanding of data structures, algorithms, and software development best practices
Strong problem-solving and debugging skills
Ability to work independently and handle multiple projects simultaneously
Good to have: working knowledge of cloud platforms (Azure/AWS/GCP) for deploying ML models and data applications

Job Type: Full-time
Pay: ₹800,000.00 - ₹1,500,000.00 per year
Schedule: Day shift
Experience: Python: 5 years (Required)
Work Location: In person
Expected Start Date: 01/08/2025
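A brief, illustrative sketch of one requirement above: serving a pre-trained model behind a FastAPI microservice. The model file, feature layout and endpoint are hypothetical, not taken from the listing.

```python
# Hedged sketch: expose a trained estimator via a FastAPI endpoint.
# "model.joblib" and the feature layout are placeholders.
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # e.g. a scikit-learn estimator trained offline


class Features(BaseModel):
    values: list[float]  # flat feature vector, order assumed fixed


@app.post("/predict")
def predict(features: Features) -> dict:
    # Reshape to a single-row batch and return the model's prediction.
    prediction = model.predict(np.array([features.values]))
    return {"prediction": prediction.tolist()[0]}

# Run locally with, for example: uvicorn main:app --reload
```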

Posted 6 days ago

Apply

6.0 years

6 - 6 Lacs

Noida

On-site

Data Engineering – Technical Lead

About Us:
Paytm is India's leading digital payments and financial services company, focused on driving consumers and merchants to its platform by offering them a variety of payment use cases. Paytm provides consumers with services like utility payments and money transfers, while empowering them to pay via Paytm Payment Instruments (PPI) like Paytm Wallet, Paytm UPI, Paytm Payments Bank Netbanking, Paytm FASTag and Paytm Postpaid - Buy Now, Pay Later. To merchants, Paytm offers acquiring devices like Soundbox, EDC, QR and Payment Gateway, where payment aggregation is done through PPI and other banks' financial instruments. To further enhance merchants' business, Paytm offers them commerce services through advertising and the Paytm Mini app store. Leveraging this platform, the company then offers credit services such as merchant loans, personal loans and BNPL, sourced by its financial partners.

About the Role:
This position requires someone to work on complex technical projects and closely with peers in an innovative and fast-paced environment. For this role, we require someone with a strong product design sense who is specialized in Hadoop and Spark technologies.

Requirements:
Minimum 6+ years of experience in Big Data technologies.

The position:
Grow our analytics capabilities with faster, more reliable tools, handling petabytes of data every day.
Brainstorm and create new platforms that can help in our quest to make data available to cluster users in all shapes and forms, with low latency and horizontal scalability.
Make changes to our systems, diagnosing any problems across the entire technical stack.
Design and develop a real-time events pipeline for data ingestion for real-time dashboarding (a sketch of this kind of job follows the listing).
Develop complex and efficient functions to transform raw data sources into powerful, reliable components of our data lake.
Design and implement new components and various emerging technologies in the Hadoop ecosystem, and ensure successful execution of various projects.
Be a brand ambassador for Paytm – Stay Hungry, Stay Humble, Stay Relevant!

Skills that will help you succeed in this role:
Strong hands-on experience with Hadoop, MapReduce, Hive, Spark, PySpark, etc.
Excellent programming/debugging skills in Python/Java/Scala.
Experience with any scripting language such as Python, Bash, etc.
Good to have experience working with NoSQL databases like HBase and Cassandra.
Hands-on programming experience with multithreaded applications.
Good to have experience with databases, SQL, and messaging queues like Kafka.
Good to have experience developing streaming applications, e.g. Spark Streaming, Flink, Storm, etc.
Good to have experience with AWS and cloud technologies such as S3.
Experience with caching architectures like Redis, etc.

Why join us:
Because you get an opportunity to make a difference, and have a great time doing that. You are challenged and encouraged here to do stuff that is meaningful for you and for those we serve. You should work with us if you think seriously about what technology can do for people. We are successful, and our successes are rooted in our people's collective energy and unwavering focus on the customer, and that's how it will always be.

Compensation:
If you are the right fit, we believe in creating wealth for you. With 500 mn+ registered users, 21 mn+ merchants and the depth of data in our ecosystem, we are in a unique position to democratize credit for deserving consumers & merchants – and we are committed to it.
India’s largest digital lending story is brewing here. It’s your opportunity to be a part of the story!
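The real-time events pipeline mentioned above is typically built with Spark Structured Streaming reading from Kafka. The hedged sketch below illustrates the shape of such a job; the broker address, topic, schema and output paths are assumptions rather than Paytm specifics.

```python
# Illustrative PySpark Structured Streaming job: Kafka events -> data lake files.
# Broker, topic, schema and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("events-ingest").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("merchant_id", StringType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "payment-events")
    .load()
    # Kafka values arrive as bytes; parse the JSON payload into columns.
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "/data/lake/payment_events")
    .option("checkpointLocation", "/data/checkpoints/payment_events")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```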

Posted 6 days ago

Apply

8.0 - 10.0 years

32 - 35 Lacs

Hyderabad

Work from Office

Position Summary
MetLife established a Global Capability Center (MGCC) in India to scale and mature Data & Analytics and technology capabilities in a cost-effective manner and make MetLife future ready. The center is integral to Global Technology and Operations, with a focus on protecting and building MetLife IP, promoting reusability, and driving experimentation and innovation. The Data & Analytics team in India mirrors the Global D&A team, with an objective to drive business value through trusted data, scaled capabilities, and actionable insights.

Role Value Proposition
MetLife Global Capability Center (MGCC) is looking for a Senior Cloud Data Engineer responsible for building ETL/ELT, data warehousing and reusable components using Azure, Databricks and Spark. He/she will collaborate with business systems analysts, technical leads, project managers and business/operations teams in building data enablement solutions across different LOBs and use cases.

Job Responsibilities
Collect, store, process and analyze large datasets to build and implement extract, transform, load (ETL) processes
Develop metadata- and configuration-based reusable frameworks to reduce development effort
Develop quality code with integral performance optimizations in place right at the development stage
Collaborate with the global team in driving the delivery of projects and recommend development and performance improvements
Extensive experience with various database types and the knowledge to leverage the right one for the need
Strong understanding of data tools and the ability to leverage them to understand the data and generate insights
Hands-on experience in building and designing at-scale data lakes, data warehouses and data stores for analytics consumption, on-prem and in the cloud (real-time as well as batch use cases)
Ability to interact with business analysts and functional analysts in gathering requirements and implementing ETL solutions

Education, Technical Skills & Other Critical Requirements
Education: Bachelor's degree in Computer Science, Engineering, or a related discipline
Experience (in years): 8 to 10 years of working experience on Azure Cloud using Databricks or Synapse

Technical Skills
Experience in transforming data using Python, Spark or Scala
Technical depth in Cloud Architecture Framework, Lakehouse Architecture and OneLake solutions
Experience in implementing data ingestion and curation processes using Azure with tools such as Azure Data Factory, Databricks Workflows, Azure Synapse, Cosmos DB, Spark (Scala/Python) and Databricks
Experience writing cloud-optimized code on Azure using Databricks, Synapse dedicated SQL pools and serverless pools, and Cosmos DB SQL APIs, including loading and consumption optimizations
Scripting experience, primarily in shell/bash/PowerShell, is desirable
Experience in writing SQL and performing data analysis for data anomaly detection and data quality assurance

Other Preferred Skills
Expertise in Python and experience writing Azure Functions using Python/Node.js
Experience using Event Hubs for data integrations
Working knowledge of Azure DevOps pipelines
Self-starter with the ability to adapt to changing business needs
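To illustrate the metadata/configuration-driven ingestion frameworks mentioned above, here is a hedged PySpark sketch for Databricks that reads raw files from ADLS and appends to a curated Delta table; the storage account, container and table names are placeholders.

```python
# Hedged sketch of a config-driven ingestion step on Databricks:
# read raw CSVs from ADLS, clean lightly, append to a curated Delta table.
# Storage account, container and table names below are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql.functions import current_timestamp

spark = SparkSession.builder.getOrCreate()

config = {
    "source_path": "abfss://raw@mystorageacct.dfs.core.windows.net/claims/2024/",
    "target_table": "curated.claims",
    "format": "csv",
}

df = (
    spark.read.format(config["format"])
    .option("header", "true")
    .load(config["source_path"])
    .dropDuplicates()                         # basic data-quality step
    .withColumn("ingested_at", current_timestamp())  # simple lineage column
)

df.write.format("delta").mode("append").saveAsTable(config["target_table"])
```

Driving the same code from a table of such config entries is one common way to get the reusability the listing asks for.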

Posted 6 days ago

Apply

6.0 - 11.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Minimum of 6+ years of experience in the IT industry
Creating data models, building data pipelines, and deploying fully operational data warehouses within Snowflake
Writing and optimizing SQL queries, tuning database performance, and identifying and resolving performance bottlenecks
Integrating Snowflake with other tools and platforms, including ETL/ELT processes and third-party applications
Implementing data governance policies, maintaining data integrity, and managing access controls
Creating and maintaining technical documentation for data solutions, including data models, architecture, and processes
Familiarity with cloud platforms and their integration with Snowflake
Basic coding skills in languages like Python or Java can be helpful for scripting and automation
Outstanding ability to communicate, both verbally and in writing
Strong analytical and problem-solving skills
Experience in the Banking domain

Posted 6 days ago

Apply

4.0 - 7.0 years

5 - 10 Lacs

Bengaluru

Work from Office

1. Have a good understanding of AWS services, specifically in the following areas: RDS, S3, EC2, VPC, KMS, ECS, Lambda, AWS Organizations and IAM policy setup. Python is also a main skill.
2. Architect, design and code database infrastructure deployment using Terraform. Should be able to write Terraform modules that deploy database services in AWS.
3. Provide automation solutions using Python Lambdas for repetitive tasks such as running quarterly audits and daily health checks on RDS across multiple accounts (see the sketch after this listing).
4. Have a fair understanding of Ansible to automate Postgres infrastructure deployment and automate repetitive tasks for on-prem servers.
5. Knowledge of Postgres and PL/pgSQL functions.
6. Hands-on experience with Ansible and Terraform and the ability to contribute to ongoing projects with minimal coaching.
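Item 3 mentions Python Lambdas for daily RDS health checks; below is a minimal boto3 sketch of that idea for a single account. The function name, thresholds and return shape are illustrative, and cross-account role assumption and pagination are omitted for brevity.

```python
# Hedged sketch of a daily RDS health-check Lambda using boto3.
import boto3


def lambda_handler(event, context):
    rds = boto3.client("rds")
    unhealthy = []

    # Flag any instance that is not in the "available" state.
    for db in rds.describe_db_instances()["DBInstances"]:
        if db["DBInstanceStatus"] != "available":
            unhealthy.append(
                {"id": db["DBInstanceIdentifier"], "status": db["DBInstanceStatus"]}
            )

    # In practice the result would be published to SNS/Slack or written to S3.
    return {"unhealthy_count": len(unhealthy), "details": unhealthy}
```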

Posted 6 days ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Where Data Does More. Join the Snowflake team.

Our Solutions Engineering organization is seeking a Data Platform Architect to join our Field CTO team who can provide leadership in working with both technical and business executives on the design and architecture of the Snowflake Cloud Data Platform as a critical component of their enterprise data architecture and overall ecosystem. In this role you will work directly with the sales team and channel partners to understand the needs of our customers, strategize on how to navigate winning sales cycles, provide compelling value-based demonstrations, support enterprise Proofs of Concept, and ultimately close business. You will leverage your expertise, best practices and reference architectures highlighting Snowflake's Cloud Data Platform capabilities across Data Warehouse, Data Lake, and Data Engineering workloads. You are equally comfortable in both a business and technical context, interacting with executives and talking shop with technical audiences.

IN THIS ROLE YOU WILL GET TO:
Apply your multi-cloud data architecture expertise while presenting Snowflake technology and vision to executives and technical contributors at strategic prospects, customers, and partners
Work hands-on with prospects and customers to demonstrate and communicate the value of Snowflake technology throughout the sales cycle, from demo to proof of concept to design and implementation
Immerse yourself in the ever-evolving industry, maintaining a deep understanding of competitive and complementary technologies and vendors and how to position Snowflake in relation to them
Collaborate with Product Management, Engineering, and Marketing to continuously improve Snowflake's products and marketing
Help to scope security-related feature enhancements on behalf of customers and work with Product Management on ways to improve the product from a data engineering perspective
Assist in clarifying concepts of data engineering, data warehousing, data collaboration, AI and Machine Learning; share industry best practices and features of the Snowflake platform
Assist in building demos and prototypes using Snowflake in conjunction with other integrated solutions; engage with other relevant Snowflake stakeholders internally
Collaborate with cross-functional teams, including Sales, Product, Engineering, Marketing, and Support, in order to drive customer security feature adoption
Partner with our Product Marketing team to define and support Snowflake's AI and Data Cloud and workload awareness and pipeline building via marketing initiatives including conferences, trade shows, and events
Work and collaborate with other Field CTO members in areas such as Enterprise AI, Gen AI, Data Engineering, Security, and Applications; as a Data Engineering expert you may be involved in collaborative engagements
Partner with the best-in-the-industry product team, helping to shape vision and capabilities for Snowflake Data Marketplace

ON DAY ONE, WE WILL EXPECT YOU TO HAVE:
5+ years of architecture and data engineering experience within the Enterprise Data space
5+ years of experience within a pre-sales environment
Outstanding presentation skills to both technical and executive audiences, whether impromptu on a whiteboard or using presentations and demos
A broad range of experience within large-scale Database and/or Data Warehouse technology, ETL, analytics and cloud technologies; for example, Data Lake, Data Mesh, Data Fabric
Hands-on development experience with technologies such as SQL, Python, Pandas, Spark, PySpark, Hadoop, Hive and other Big Data technologies
Knowledge and hands-on experience with Big Data processing tools (NiFi, Spark, etc.) and common optimization techniques
Knowledge and hands-on experience with CI/CD for data/ML pipelines
Knowledge and design expertise in stream processing pipelines
Knowledge and hands-on experience with containerisation and orchestration, including Kubernetes, EKS, AKS, Terraform and equivalent technologies
Knowledge and hands-on experience with cloud platforms and services (AWS, Azure, GCP)
Ability to connect a customer's specific business problems with Snowflake's solutions
Ability to do deep discovery of a customer's architecture framework and connect it with the Snowflake Data Architecture
Prior knowledge of data engineering tools for ingestion, transformation and curation
Familiarity with real-time or near real-time use cases (e.g. CDC) and technologies (e.g. Kafka, Flink), and a deep understanding of integration services and tools for building ETL and ELT data pipelines and their orchestration, such as Matillion, Fivetran, Airflow, Informatica, Azure Data Factory, etc.
Strong architectural expertise in data engineering to confidently present and demo to business executives and technical audiences, and effectively handle any impromptu questions
Bachelor's degree required; Master's degree in computer science, engineering, mathematics or related fields, or equivalent experience, preferred

BONUS POINTS FOR THE FOLLOWING:
Hands-on expertise with SQL, Python, Java, Scala and APIs
AI/ML skills and hands-on experience and knowledge of model building and training, and MLOps
Experience selling enterprise SaaS software

Snowflake is growing fast, and we're scaling our team to help enable and accelerate our growth. We are looking for people who share our values, challenge ordinary thinking, and push the pace of innovation while building a future for themselves and Snowflake. How do you want to make your impact?

For jobs located in the United States, please visit the job posting on the Snowflake Careers Site for salary and benefits information: careers.snowflake.com

Posted 6 days ago

Apply