
4851 Hadoop Jobs - Page 48

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 8.0 years

15 - 25 Lacs

Noida, Greater Noida, Delhi / NCR

Work from Office


Streaming Data Engineer - technical skills requirements:
Mandatory skills: hands-on experience with Spark, Scala, and AWS (Lambda, Glue, S3). Experience: 5+ years.
- Solid hands-on and solution-architecting experience in Big Data technologies (AWS preferred)
- Hands-on experience with AWS DynamoDB, EKS, Kafka, Kinesis, Glue, and EMR
- Hands-on experience with a programming language such as Scala, used with Spark
- Hands-on working experience on a data engineering analytics platform (Hortonworks, Cloudera, MapR, AWS), AWS preferred
- Hands-on working experience with AWS services such as EMR, Kinesis, S3, CloudFormation, Glue, API Gateway, and Lake Formation
- Hands-on working experience with AWS Athena
- Data warehouse exposure to Apache NiFi, Apache Airflow, and Kylo
- Operationalization of ML models on AWS (e.g. deployment, scheduling, model monitoring)
- Hands-on working experience analysing source system data and data flows, working with structured and unstructured data
- Very strong SQL query-writing skills
- Strengthen the data engineering team with Big Data solutions
- Strong technical, analytical, and problem-solving skills
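For illustration only, here is a minimal PySpark sketch of the kind of S3-based Spark batch transformation this role describes. The bucket paths and column names are hypothetical, and it assumes the S3 (hadoop-aws) connector and credentials are already configured; a production Glue/EMR job would differ.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical example: read raw JSON events from S3, aggregate, write Parquet back to S3.
spark = SparkSession.builder.appName("s3-batch-transform-example").getOrCreate()

raw = spark.read.json("s3a://example-raw-bucket/events/")            # assumed input path

daily_totals = (
    raw.withColumn("event_date", F.to_date("event_timestamp"))       # assumed column name
       .groupBy("event_date", "customer_id")                         # assumed column name
       .agg(F.count("*").alias("event_count"),
            F.sum("amount").alias("total_amount"))                   # assumed column name
)

daily_totals.write.mode("overwrite").partitionBy("event_date") \
    .parquet("s3a://example-curated-bucket/daily_totals/")           # assumed output path

spark.stop()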

Posted 1 week ago

Apply

2.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Job Overview: The Sr. Data Engineer II performs development activities within a data engineering team and helps guide, onboard, and train Data Engineer I team members. You will work closely with product management, engineering, account management, ETL, data warehouse, business intelligence, and reporting teams as you develop data pipelines and enhancements and investigate and troubleshoot issues. You possess an understanding of multiple data structures, including relational and non-relational data models.

Roles and Responsibilities: Extracting, cleansing, and loading data. Building data pipelines using SQL, Kafka, and other technologies. Investigating and documenting new data sets. Triaging incoming bugs and incidents. Performing technical operations tasks. Investigating and troubleshooting issues with data and data pipelines. Participating in sprint refinement, planning, and kick-off to help estimate stories and raise awareness of additional implementation details. Helping monitor areas of the data pipeline and raising awareness to the team when issues arise. Implementing new quality assurance rules to maintain consistent and accurate data.

Knowledge: A solid understanding of data science concepts is required. Data analysis expertise. Working knowledge of ETL tools. Knowledge of BI tools. Handling DevOps tasks is preferable. Experience with Big Data technologies such as Hadoop and Kafka. Extensive experience with ML frameworks and libraries, including TensorFlow, Spark, PyTorch, and mlpack.

Skills (Technical): Experience designing and implementing a full-scale data warehouse solution based on Snowflake. A minimum of three years' experience developing production-ready data ingestion and processing pipelines using Java, Spark, Scala, or Python. Experience with complex data warehouse solutions on Teradata, Oracle, or DB2 platforms, with 2 years of hands-on experience. Excellent proficiency with Snowflake internals and integration of Snowflake with other technologies for data processing and reporting. Experience with or knowledge of Excel and analytical tools such as Tableau, MicroStrategy, or Power BI would be an added advantage.

Abilities (Competencies): Works independently. Collaborates with team members. Self-motivated.

Typical Experience: 4-6 years

Posted 1 week ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site


Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential. Title And Summary Consultant, Performance Analytics, Advisors & Consulting Services Advisors & Consulting Services Services within Mastercard is responsible for acquiring, engaging, and retaining customers by managing fraud and risk, enhancing cybersecurity, and improving the digital payments experience. We provide value-added services and leverage expertise, data-driven insights, and execution. Our Advisors & Consulting Services team combines traditional management consulting with Mastercard’s rich data assets, proprietary platforms, and technologies to provide clients with powerful strategic insights and recommendations. Our teams work with a diverse global customer base across industries, from banking and payments to retail and restaurants. The Advisors & Consulting Services group has five specializations: Strategy & Transformation, Performance Analytics, Business Experimentation, Marketing, and Program Management. Our Performance Analytics consultants translate data into insights by leveraging Mastercard and customer data to design, implement, and scale analytical solutions for customers. They use qualitative and quantitative analytical techniques and enterprise applications to synthesize analyses into clear recommendations and impactful narratives. Positions for different specializations and levels are available in separate job postings. 
Please review our consulting specializations to learn more about all opportunities and apply for the position that is best suited to your background and experience: https://careers.mastercard.com/us/en/consulting-specializations-at-mastercard Roles and Responsibilities Client Impact Provide creative input on projects across a range of industries and problem statements Contribute to the development of analytics strategies and programs for regional and global clients by leveraging data and technology solutions to unlock client value Collaborate with Mastercard team to understand clients’ needs, agenda, and risks Develop working relationship with client analysts/managers, and act as trusted and reliable partner Team Collaboration & Culture Collaborate with senior project delivery consultants to identify key findings, prepare effective presentations, and deliver recommendations to clients Independently identify trends, patterns, issues, and anomalies in defined area of analysis, and structure and synthesize own analysis to highlight relevant findings Lead internal and client meetings, and contribute to project management Contribute to the firm's intellectual capital Receive mentorship from performance analytics leaders for professional growth and development Qualifications Basic qualifications Undergraduate degree with data and analytics experience in business intelligence and/or descriptive, predictive, or prescriptive analytics Experience managing clients or internal stakeholders Ability to analyze large datasets and synthesize key findings Proficiency using data analytics software (e.g., Python, R, SQL, SAS) Advanced Word, Excel, and PowerPoint skills Ability to perform multiple tasks with multiple clients in a fast-paced, deadline-driven environment Ability to communicate effectively in English and the local office language (if applicable) Eligibility to work in the country where you are applying, as well as apply for travel visas as required by travel needs Preferred Qualifications Additional data and analytics experience in building, managing, and maintaining database structures, working with data visualization tools (e.g., Tableau, Power BI), or working with Hadoop framework and coding using Impala, Hive, or PySpark Ability to analyze large datasets and synthesize key findings to provide recommendations via descriptive analytics and business intelligence Experience managing tasks or workstreams in a collaborative team environment Ability to identify problems, brainstorm and analyze answers, and implement the best solutions Relevant industry expertise Corporate Security Responsibility All Activities Involving Access To Mastercard Assets, Information, And Networks Comes With An Inherent Risk To The Organization And, Therefore, It Is Expected That Every Person Working For, Or On Behalf Of, Mastercard Is Responsible For Information Security And Must: Abide by Mastercard’s security policies and practices; Ensure the confidentiality and integrity of the information being accessed; Report any suspected information security violation or breach, and Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines. 
R-249730

Posted 1 week ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Description: Do you want to be at the forefront of engineering big data solutions that take Transportation models to the next generation? Do you have solid analytical thinking and metrics-driven decision making, and want to solve problems with solutions that will meet the growing worldwide need? We are looking for top-notch Data Engineers to be part of our world-class Transportation Business Intelligence team. We are building real-time analytical platforms using big data tools and AWS technologies like Hadoop, Spark, EMR, SNS, SQS, Lambda, Kinesis Firehose, and DynamoDB Streams. The ideal candidate relishes working with large volumes of data, enjoys the challenge of highly complex technical contexts, and, above all else, is passionate about data and analytics. He/she is an expert in data modeling, ETL design, and business intelligence tools, and passionately partners with the business to identify strategic opportunities where improvements in data infrastructure create outsized business impact. He/she is a self-starter, comfortable with ambiguity, able to think big (while paying careful attention to detail), and enjoys working in a fast-paced and global team. It's a big ask, and we're excited to talk to those up to the challenge!

4 to 7 years of experience performing quantitative analysis, preferably for an Internet company with large, complex data sources. Hands-on experience with big data technologies and frameworks: Hive, Spark, Hadoop, SQL on Big Data, Redshift. Experience in near-real-time analytics. Experience with scripting languages, i.e. Python, Perl, etc. Experience with ETL, data modeling, and working with large-scale datasets. Extremely proficient in writing performant SQL working with large data volumes. Ability to manage competing priorities simultaneously and drive projects to completion. Bachelor's degree or higher in a quantitative/technical field (e.g. Computer Science, Statistics, Engineering). Experience with large-scale data processing, data structure optimization, and scalability of algorithms is a plus.

Basic Qualifications: 3+ years of data engineering experience. Experience with SQL. Experience with data modeling, warehousing, and building ETL pipelines.

Preferred Qualifications: Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions. Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases).

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner. Company: ADCI HYD 13 SEZ. Job ID: A2992773

Posted 1 week ago

Apply

10.0 years

0 Lacs

Gurgaon, Haryana, India

On-site


Job Title: Lead Data Engineer

Job Summary: The Lead Data Engineer will provide technical expertise in the analysis, design, development, rollout, and maintenance of data integration initiatives. This role will contribute to implementation methodologies and best practices, as well as work on project teams to analyse, design, develop, and deploy business intelligence / data integration solutions to support a variety of customer needs. The position oversees a team of Data Integration Consultants at various levels, ensuring their success on projects, goals, trainings, and initiatives through mentoring and coaching. It provides technical expertise in needs identification, data modelling, data movement, and transformation mapping (source to target), automation, and testing strategies, translating business needs into technical solutions with adherence to established data guidelines and approaches from a business unit or project perspective, whilst leveraging best-fit technologies (e.g., cloud, Hadoop, NoSQL, etc.) and approaches to address business and environmental challenges. It works with stakeholders to identify and define self-service analytic solutions, dashboards, actionable enterprise business intelligence reports, and business intelligence best practices, and is responsible for repeatable, lean, and maintainable enterprise BI design across organizations. Effectively partners with the client team. Leadership is expected not only in the conventional sense; within a team we expect people to be leaders. The candidate should exhibit leadership qualities such as innovation, critical thinking, optimism/positivity, communication, time management, collaboration, problem-solving, acting independently, knowledge sharing, and approachability.

Responsibilities: Design, develop, test, and deploy data integration processes (batch or real-time) using tools such as Microsoft SSIS, Azure Data Factory, Databricks, Matillion, Airflow, Sqoop, etc. Create functional and technical documentation, e.g. ETL architecture documentation, unit testing plans and results, data integration specifications, data testing plans, etc. Provide a consultative approach with business users, asking questions to understand the business need and deriving the data flow, conceptual, logical, and physical data models based on those needs. Perform data analysis to validate data models and to confirm the ability to meet business needs. May serve as project or DI lead, overseeing multiple consultants from various competencies. Stay current with emerging and changing technologies to best recommend and implement beneficial technologies and approaches for Data Integration. Ensure proper execution/creation of methodology, training, templates, resource plans, and engagement review processes. Coach team members to ensure understanding of projects and tasks, providing effective feedback (critical and positive) and promoting growth opportunities when appropriate. Coordinate and consult with the project manager, client business staff, client technical staff, and project developers on data architecture best practices and anything else that is data related at the project or business unit level. Architect, design, develop, and set direction for enterprise self-service analytic solutions, business intelligence reports, visualisations, and best-practice standards. Toolsets include, but are not limited to, SQL Server Analysis and Reporting Services, Microsoft Power BI, Tableau, and Qlik.
Work with the report team to identify, design, and implement a reporting user experience that is consistent and intuitive across environments and report methods, defines security, and meets usability and scalability best practices.

Required Qualifications: 10 years of industry implementation experience with data integration tools such as AWS services (Redshift, Athena, Lambda, Glue, S3), ETL, etc. 5-8 years of management experience required. 5-8 years of consulting experience preferred. Minimum of 5 years of data architecture, data modelling, or similar experience. Bachelor's degree or equivalent experience; Master's degree preferred. Strong data warehousing, OLTP systems, data integration, and SDLC. Strong experience in orchestration, with working experience in cloud-native / 3rd-party ETL data load orchestration. Understanding and experience with major data architecture philosophies (Dimensional, ODS, Data Vault, etc.). Understanding of on-premises and cloud infrastructure architectures (e.g. Azure, AWS, GCP). Strong experience in Agile processes (Scrum cadences, roles, deliverables), with working experience in Azure DevOps, JIRA, or similar, and experience in CI/CD using one or more code management platforms. Strong Databricks experience required, including creating notebooks in PySpark. Experience using major data modelling tools (examples: ERwin, ER/Studio, PowerDesigner, etc.). Experience with major database platforms (e.g. SQL Server, Oracle, Azure Data Lake, Hadoop, Azure Synapse/SQL Data Warehouse, Snowflake, Redshift, etc.). Strong experience in orchestration, with working experience in Data Factory, HDInsight, Data Pipeline, Cloud Composer, or similar. Understanding of modern data warehouse capabilities and technologies such as real-time, cloud, and Big Data. 3-5 years' development experience in decision support / business intelligence environments utilizing tools such as SQL Server Analysis and Reporting Services, Microsoft Power BI, Tableau, Looker, etc.

Preferred Skills & Experience: Knowledge of and working experience with data integration processes, such as Data Warehousing, EAI, etc. Experience providing estimates for data integration projects, including testing, documentation, and implementation. Ability to analyse business requirements as they relate to the data movement and transformation processes, and to research, evaluate, and recommend alternative solutions. Ability to provide technical direction to other team members, including contractors and employees. Ability to contribute to conceptual data modelling sessions to accurately define business processes independently of data structures, and then combine the two together. Proven experience leading team members, directly or indirectly, in completing high-quality major deliverables with superior results. Demonstrated ability to serve as a trusted advisor who builds influence with client management beyond simply EDM. Can create documentation and presentations such that they stand on their own. Can advise sales on evaluation of data integration efforts for new or existing client work. Can contribute to internal/external data integration proofs of concept.
Demonstrates the ability to create new and innovative solutions to problems that have not previously been encountered. Ability to work independently on projects as well as collaborate effectively across teams. Must excel in a fast-paced, agile environment where critical thinking and strong problem-solving skills are required for success. Strong team-building, interpersonal, analytical, and problem identification and resolution skills. Experience working with multi-level business communities. Can effectively utilise SQL and/or an available BI tool to validate/elaborate business rules. Demonstrates an understanding of EDM architectures and applies this knowledge in collaborating with the team to design effective solutions to business problems/issues. Effectively influences and, at times, oversees business and data analysis activities to ensure sufficient understanding and quality of data. Demonstrates a complete understanding of, and utilises, DSC methodology documents to efficiently complete assigned roles and associated tasks. Deals effectively with all team members and builds strong working relationships/rapport with them. Understands and leverages a multi-layer semantic model to ensure scalability, durability, and supportability of the analytic solution. Understands modern data warehouse concepts (real-time, cloud, Big Data) and how to enable such capabilities from a reporting and analytics standpoint.

Posted 1 week ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Role: Manager, Data Science (Travel domain). Location: Gurgaon. Work from Office.

We are seeking a talented, energetic data enthusiast to join our growing team as a Data Scientist. As a Data Scientist with us, you will play a crucial role in analysing large, complex datasets, developing predictive models, solving optimization problems, and delivering actionable insights that significantly impact our strategic decision-making processes. This role offers an exciting opportunity to work at the intersection of cutting-edge technology and aviation, where your expertise in data science will be instrumental. We embrace a culture that values innovation and harnesses the power of data and cutting-edge data science to drive transformative advancements in the aviation industry while prioritizing excellence in service delivery. The ideal candidate has a robust foundation in statistical modelling, machine learning, and artificial intelligence techniques. Proficiency in programming, particularly in Python and SQL, is essential, with knowledge of R considered advantageous.

Role & responsibilities:
• Collaborate closely with cross-functional teams to understand business requirements and translate them into analytics and data science solutions.
• Architect and design scalable data pipelines and infrastructure to support data processing, modelling, and analysis.
• Develop and deploy machine learning models into production, ensuring scalability, reliability, and performance.
• Develop frameworks and tools for automated data analysis, model training, and deployment.
• Monitor model performance using appropriate metrics and iterate on models to improve accuracy and efficiency.
• Lead cross-functional initiatives to innovate and optimize business processes using data-driven insights.
• Create compelling data visualizations and reports to communicate findings to stakeholders effectively.
• Stay abreast of industry trends and advancements in data science, machine learning, and artificial intelligence, and provide thought leadership within the organization.

Preferred candidate profile:
• Bachelor's or Master's degree in computer science, statistics, mathematics, or a related field.
• Minimum 7+ years of experience in data science, with a proven track record of leading complex projects and delivering impactful solutions.
• Expertise in statistical modelling, machine learning, and deep learning techniques, with practical experience applying these methods to real-world problems.
• Proficiency in programming languages such as Python, R, and SQL; experience with big data technologies (e.g., BigQuery, Hadoop, Spark) is a plus.
• Experience in designing and implementing scalable data architectures and solutions.
• Strong problem-solving skills and a strategic mindset to drive innovation and operational efficiency through data-driven approaches.
• Hands-on experience implementing AI and ML solutions for Travel & Technology is advantageous.
• Strong leadership and communication skills, with the ability to effectively collaborate with diverse teams and stakeholders.
• Demonstrated ability to mentor and coach junior team members, fostering a culture of continuous learning and growth.

Posted 1 week ago

Apply

6.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Position Requirements: 6-9 years of experience in IT and the database/data warehouse domain. Experience writing SQL in Hive and Impala. Exposure to scripting languages such as shell scripting and Python is required. Strong SQL skills are essential; experience writing complex queries spanning multiple tables is required. Experience in AWS is a plus. Working knowledge of performance tuning of database queries (understanding the inner workings of the query optimizer, query plans, indexes, partitions, etc.). Working knowledge of data modelling and dimensional modelling tools and techniques. Experience in data quality, data governance, DataOps, and the latest data management techniques is a bonus.

Must Have: Big Data stacks: Hive/Impala/Hadoop/PySpark. Advanced Python knowledge. Advanced SQL knowledge. GitHub/Bitbucket. Shell scripting.
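As a hedged illustration of the Hive/Impala-style SQL and partition-aware query tuning this posting mentions, here is a small PySpark sketch. The database, table, and column names are hypothetical, and it assumes a Hive metastore is configured for the Spark session.

from pyspark.sql import SparkSession

# Hypothetical example: query a Hive table and rely on partition pruning for performance.
spark = (SparkSession.builder
         .appName("hive-partition-pruning-example")
         .enableHiveSupport()          # requires a configured Hive metastore
         .getOrCreate())

# Filtering on the partition column (txn_date, assumed) lets the engine scan only
# the matching partitions instead of the whole table.
daily_summary = spark.sql("""
    SELECT txn_date, account_id, SUM(amount) AS total_amount, COUNT(*) AS txn_count
    FROM   sales_db.transactions          -- assumed table, partitioned by txn_date
    WHERE  txn_date BETWEEN '2024-01-01' AND '2024-01-07'
    GROUP  BY txn_date, account_id
""")

daily_summary.explain()   # inspect the query plan to confirm partition filters are pushed down
daily_summary.show(20)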

Posted 1 week ago

Apply

5.0 - 7.0 years

16 - 27 Lacs

Bengaluru

Work from Office


We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (18000+ experts across 38 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in!

REQUIREMENTS: Total experience of 5+ years. Excellent knowledge of and experience in big data engineering. Strong hands-on experience with Apache Spark and Python. Solid experience with Hadoop, MapReduce, Hive, and SQL-like languages. Familiarity with GCP Pub/Sub, Kafka, and Trino. Experience building end-to-end data pipelines and integrating various data sources. Understanding of both relational (e.g., PostgreSQL) and NoSQL (e.g., MongoDB) databases. Experience with Git, CI/CD tools, and Agile development practices. Experience working on Google Cloud Platform (GCP), particularly with BigQuery, GCS, Airflow, and Kubernetes. Excellent problem-solving and analytical skills. Strong verbal and written communication abilities.

RESPONSIBILITIES: Writing and reviewing great quality code. Understanding the client's business use cases and technical requirements and converting them into a technical design that elegantly meets the requirements. Mapping decisions to requirements and translating them for developers. Identifying different solutions and narrowing down the best option that meets the clients' requirements. Defining guidelines and benchmarks for NFR considerations during project implementation. Writing and reviewing design documents explaining the overall architecture, framework, and high-level design of the application for the developers. Reviewing architecture and design on various aspects like extensibility, scalability, security, design patterns, user experience, NFRs, etc., and ensuring that all relevant best practices are followed. Developing and designing the overall solution for defined functional and non-functional requirements, and defining the technologies, patterns, and frameworks to materialize it. Understanding and relating technology integration scenarios and applying these learnings in projects. Resolving issues raised during code review through exhaustive, systematic analysis of the root cause, and justifying the decisions taken. Carrying out POCs to make sure that the suggested design/technologies meet the requirements.

Posted 1 week ago

Apply

1.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


What you’ll do: This position is at the forefront of Equifax's post-cloud transformation, focusing on developing and enhancing Java applications within the Google Cloud Platform (GCP) environment. The ideal candidate will combine strong Java development skills with cloud expertise to drive innovation and improve existing systems.

Key Responsibilities: Design, develop, test, deploy, maintain, and improve software applications on GCP. Enhance existing applications and contribute to new initiatives leveraging cloud-native technologies. Implement best practices in serverless computing, microservices, and cloud architecture. Collaborate with cross-functional teams to translate functional and technical requirements into detailed architecture and design. Participate in code reviews and maintain high development and security standards. Provide technical oversight and direction for Java and GCP implementations.

What Experience You Need: Bachelor's or Master's degree in Computer Science or equivalent experience. 1+ years of IT experience with a strong focus on Java development. Experience in modern Java development and cloud computing concepts. Familiarity with agile methodologies and test-driven development (TDD). Strong understanding of software development best practices, including continuous integration and automated testing.

What could set you apart: Experience with GCP or other cloud platforms (AWS, Azure). Active cloud certifications (e.g., Google Cloud Professional certifications). Experience with big data technologies (Spark, Kafka, Hadoop) and NoSQL databases. Knowledge of containerization and orchestration tools (Docker, Kubernetes). Familiarity with the financial services industry. Experience with open-source frameworks (Spring, Ruby, Apache Struts, etc.). Experience with Python.

Posted 1 week ago

Apply

3.0 - 5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site


Job Title: Data Engineer

Job Summary: Data Engineers will be responsible for the design, development, testing, maintenance, and support of data assets, including Azure Data Lake and data warehouse development, modeling, package creation, SQL script creation, stored procedure development, and integration services support, among other responsibilities. Candidates must have at least 3-5 years of hands-on Azure experience as a Data Engineer, must be experts in SQL, and must have extensive expertise building data pipelines. The candidate will be accountable for meeting deliverable commitments, including schedule and quality compliance, and must be able to plan and schedule their own work activities and coordinate with other cross-functional team members to meet project goals.

Basic understanding of:
- Scheduling and workflow management, with working experience in ADF, Informatica, Airflow, or similar
- Enterprise data modelling and semantic modelling, with working experience in ERwin, ER/Studio, PowerDesigner, or similar
- Logical/physical models on Big Data sets or a modern data warehouse, with working experience in ERwin, ER/Studio, PowerDesigner, or similar
- Agile process (Scrum cadences, roles, deliverables), with basic understanding of Azure DevOps, JIRA, or similar
- Architecture and data modelling for a data lake on cloud, with working experience in Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP)
- Build and release management, with working experience in Azure DevOps, AWS CodeCommit, or similar

Strong in:
- Writing code in a programming language, with working experience in Python, PySpark, Scala, or similar
- Big Data frameworks, with working experience in Spark, Hadoop, or Hive (including derivatives such as PySpark (preferred), Spark Scala, or Spark SQL) or similar
- Data warehouse concepts and development using SQL on single platforms (SQL Server, Oracle, or similar) and parallel platforms (Azure SQL Data Warehouse or Snowflake)
- Code management, with working experience in GitHub, Azure DevOps, or similar
- End-to-end architecture and ETL processes, with working experience in an ETL tool or similar
- Reading data formats, with working experience in JSON, XML, or similar
- Data integration processes (batch and real time), with working experience in Informatica PowerCenter and/or Cloud, Microsoft SSIS, MuleSoft, DataStage, Sqoop, or similar
- Writing requirements and functional and technical documentation, with working experience in integration design documents, architecture documentation, data testing plans, or similar
- SQL queries, with working experience in SQL code, stored procedures, functions, views, or similar
- Databases, with working experience in any database such as MS SQL, Oracle, or similar
- Analytical problem-solving skills, with working experience resolving complex problems
- Communication (reading and writing in English), collaboration, and presentation skills, with working experience as a team player

Good to have:
- Stream processing, with working experience in Databricks Streaming, Azure Stream Analytics, HDInsight, Kinesis Data Analytics, or similar
- Analytical warehouse, with working experience in SQL Data Warehouse, Amazon Athena, AWS Redshift, BigQuery, or similar
- Real-time store, with working experience in Azure Cosmos DB, Amazon DynamoDB, Cloud Bigtable, or similar
- Batch ingestion, with working experience in Data Factory, Amazon Kinesis, Lambda, Cloud Pub/Sub, or similar
- Storage, with working experience in Azure Data Lake Storage Gen1/Gen2, Amazon S3, Cloud Storage, or similar
- Batch data processing, with working experience in Azure Databricks, HDInsight, Amazon EMR, AWS Glue, or similar
- Orchestration, with working experience in Data Factory, HDInsight, Data Pipeline, Cloud Composer, or similar

Posted 1 week ago

Apply

4.0 - 6.0 years

0 Lacs

Gurgaon, Haryana, India

On-site


Decision Analytics: EXL (NASDAQ: EXLS) is a leading operations management and analytics company that helps businesses enhance growth and profitability in the face of relentless competition and continuous disruption. Using our proprietary, award-winning Business EXLerator Framework™, which integrates analytics, automation, benchmarking, BPO, consulting, industry best practices and technology platforms, we look deeper to help companies improve global operations, enhance data-driven insights, increase customer satisfaction, and manage risk and compliance. EXL serves the insurance, healthcare, banking and financial services, utilities, travel, transportation and logistics industries. Headquartered in New York, New York, EXL has more than 24,000 professionals in locations throughout the United States, Europe, Asia (primarily India and Philippines), Latin America, Australia and South Africa.

EXL Analytics provides data-driven, action-oriented solutions to business problems through statistical data mining, cutting-edge analytics techniques and a consultative approach. Leveraging proprietary methodology and best-of-breed technology, EXL Analytics takes an industry-specific approach to transform our clients' decision making and embed analytics more deeply into their business processes. Our global footprint of nearly 2,000 data scientists and analysts assists client organizations with complex risk minimization methods, advanced marketing, pricing and CRM strategies, internal cost analysis, and cost and resource optimization within the organization. EXL Analytics serves the insurance, healthcare, banking, capital markets, utilities, retail and e-commerce, travel, transportation and logistics industries. Please visit www.exlservice.com for more information about EXL Analytics.

Role Overview: Transform raw data into meaningful insights and reports using Power BI. Design and develop interactive dashboards and visualizations to support business decision-making. Integrate data from various sources, ensuring data accuracy and consistency. Work with business stakeholders to understand their requirements and translate them into technical solutions. Optimize data models and queries for better performance. Maintain documentation for data models, reports, and dashboards. Training and support: provide training and support to end-users on Power BI tools and reports. Do independent research and analyze and present data as assigned. Manage the learning and development of new team members. Identify and participate in continuous improvement initiatives. Ensure compliance with Data Privacy and Protection Guidelines. Participate in business development activities. Experience in insurance is preferred but not mandatory.

Key Responsibilities & Skillsets:
Common skillsets: 4-6 years of experience in reporting and transformation, analytics, dashboarding, ETL, Power BI, and associated data management. Deep-rooted knowledge and understanding of the analytics tools Power BI, Power Automate, and Power Apps. Must have a consulting background with structured thinking. Excellent communication and presentation skills with a consultative mindset. Should be able to lead a team independently and prepare/manage client-ready deliverables. Able to understand cross-cultural differences and work with clients across the globe.
Data management skillsets: Strong familiarity with BI tools (Power BI) and related software applications. Good knowledge of database management and SQL, Python, Hadoop/Spark, and ETL tools. Strong ability to translate functional specifications/requirements into technical requirements. Exposure to cloud data management and Azure services (preferred).

Candidate Profile: Bachelor's/Master's degree in economics, mathematics, computer science/engineering, operations research, or related analytics areas. Data analysis experience. Superior analytical and problem-solving skills. Outstanding written and verbal communication skills. Able to work in a fast-paced, continuously evolving environment and ready to take on uphill challenges.

What We Offer: EXL Analytics offers an exciting, fast-paced and innovative environment, which brings together a group of sharp and entrepreneurial professionals who are eager to influence business decisions. From your very first day, you get an opportunity to work closely with highly experienced, world-class analytics consultants. You can expect to learn many aspects of the businesses that our clients engage in. You will also learn effective teamwork and time-management skills - key aspects for personal and professional growth. Analytics requires different skill sets at different levels within the organization. At EXL Analytics, we invest heavily in training you in all aspects of analytics as well as in leading analytical tools and techniques. We provide guidance/coaching to every employee through our mentoring program, wherein every junior-level employee is assigned a senior-level professional as an advisor. Sky is the limit for our team members. The unique experiences gathered at EXL Analytics set the stage for further growth and development in our company and beyond.

Posted 1 week ago

Apply

5.0 - 8.0 years

0 - 3 Lacs

Pune, Chennai

Hybrid


Hello Connections, Exciting Opportunity Alert!! We're on the hunt for passionate individuals to join our dynamic team as Data Engineers. Job Profile: Data Engineers. Experience: minimum 5 to maximum 8 years. Location: Chennai / Hyderabad / Bangalore / Mumbai / Pune. Mandatory Skills: Big Data | Hadoop | PySpark | Spark | Spark SQL | Hive. Qualification: B.TECH / B.E / MCA / Computer Science background - any specialization. How to Apply? Send your CV to: sipriyar@sightspectrum.in Contact Number - 6383476138 Don't miss out on this amazing opportunity to accelerate your professional career! #bigdata #dataengineer #hadoop #spark #python #hive #pyspark

Posted 1 week ago

Apply

5.0 - 8.0 years

0 - 3 Lacs

Pune, Chennai, Bengaluru

Hybrid


Hello Connections, Exciting Opportunity Alert!! We're on the hunt for passionate individuals to join our dynamic team as Data Engineers. Job Profile: Data Engineers. Experience: minimum 5 to maximum 8 years. Location: Chennai / Hyderabad / Bangalore / Mumbai / Pune. Mandatory Skills: Big Data | Hadoop | Scala | Spark | Spark SQL | Hive. Qualification: B.TECH / B.E / MCA / Computer Science background - any specialization. How to Apply? Send your CV to: sipriyar@sightspectrum.in Contact Number - 6383476138 Don't miss out on this amazing opportunity to accelerate your professional career! #bigdata #dataengineer #hadoop #spark #python #hive #pyspark

Posted 1 week ago

Apply

7.0 years

0 - 4 Lacs

Hyderābād

On-site


About the Role: We are looking for a Senior Data Engineer to contribute to high-impact initiatives within the Payments Core and Payments Compliance Data Engineering team. In this critical role, you will design, build, and optimize data systems that support the scalability, accuracy, and compliance of our global payments infrastructure. You'll work closely with cross-functional partners in Engineering, Product, Risk, and Compliance to transform complex financial and transactional data into reliable pipelines, actionable insights, and audit-ready datasets. The ideal candidate is a strong individual contributor who combines deep technical expertise with a pragmatic approach to solving real-world data challenges. You are comfortable navigating ambiguity, driving projects forward, and have a passion for improving payment systems through robust, high-quality data solutions.

What the Candidate Will Need / Bonus Points

What the Candidate Will Do: Partner with engineers, analysts, and product managers to define technical solutions that support business goals. Contribute to the architecture and implementation of distributed data systems and platforms. Identify inefficiencies in data processing and proactively drive improvements in performance, reliability, and cost. Serve as a thought leader and mentor in data engineering best practices across the organization.

Basic Qualifications: 7+ years of hands-on experience in software engineering with a focus on data engineering. Proficiency in at least one programming language such as Python, Java, or Scala. Strong SQL skills and experience with large-scale data processing frameworks (e.g., Apache Spark, Flink, MapReduce, Presto). Demonstrated experience designing, implementing, and operating scalable ETL pipelines and data platforms. Proven ability to work collaboratively across teams and communicate technical concepts to diverse stakeholders.

Preferred Qualifications: Deep understanding of data warehousing concepts and data modeling best practices. Hands-on experience with Hadoop ecosystem tools (e.g., Hive, HDFS, Oozie, Airflow, Spark, Presto). Familiarity with streaming technologies such as Kafka or Samza. Expertise in performance optimization, query tuning, and resource-efficient data processing. Strong problem-solving skills and a track record of owning systems from design to production.
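Purely as an illustrative sketch of the streaming-ingestion work this posting describes (Kafka feeding a large-scale processing framework), here is a minimal PySpark Structured Streaming example. The broker address, topic, event fields, and output paths are hypothetical, and it assumes the spark-sql-kafka connector is on the classpath; it is not the team's actual pipeline.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StringType, DoubleType

# Hypothetical example: consume payment events from Kafka and land them as Parquet.
spark = SparkSession.builder.appName("kafka-stream-example").getOrCreate()

schema = (StructType()
          .add("payment_id", StringType())
          .add("currency", StringType())
          .add("amount", DoubleType()))          # assumed event fields

events = (spark.readStream
          .format("kafka")                                       # needs spark-sql-kafka package
          .option("kafka.bootstrap.servers", "broker:9092")      # assumed broker
          .option("subscribe", "payments")                       # assumed topic
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

query = (events.writeStream
         .format("parquet")
         .option("path", "/tmp/payments_parquet")                # assumed output location
         .option("checkpointLocation", "/tmp/payments_ckpt")
         .start())

query.awaitTermination()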

Posted 1 week ago

Apply

0 years

2 - 3 Lacs

Hyderābād

On-site


Job Description: About Us: At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We're devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!

Global Business Services: Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.

Process Overview: DAIT (Data Analytics and Insights Technology) provides end-to-end technology solutions for multiple lines of business.

Job Description: The individual will be part of the Production Support L2 team (Batch Operations), with technical expertise in Hadoop, Teradata, DataStage, Autosys, and Linux. Responsible for platform stability, proactive application and job monitoring, issue management and resolution, triage, reporting, and timely escalation. Responsible for break-fix activities, including reviewing root cause analysis, making small changes to code, reviewing unit test results, and helping to deploy to production following the release management and code deployment processes. The ideal candidate must be highly self-motivated and proactive, with attention to detail and good documentation and communication skills to interact with partners such as TI, Application, and other production support teams (CCO, L1, L2, L3), as well as Application and Business stakeholders as required. Ability to think of process improvements to improve platform stability and resiliency.

Responsibilities: Monitor and support applications to meet 100% of SLAs. On-call support. Production ticket/issue triage. Preparing Root Cause Analysis (RCA) documents. Partner with the Application team, CCO, L1, and Level 2 support teams to resolve issues. Prepare and/or review impact analysis based on issue analysis. Hands-on experience with Batch Ops (L1/L2) and L3 support workload. Write scripts to automate mundane daily BAU tasks. Willing to provide support after office hours and on weekends, and to stay on call when the business needs it. Identify root causes in the code and perform break-fix activities in the code and/or DB. Work on additional projects for improving production efficiency as well as reducing risk.

Requirements: Education: B.E. / B.Tech / M.E. / M.Tech / B.Sc. / M.Sc. / BCA / MCA (IT/CS specialization preferred). Certifications, if any: BFSI domain certifications (not mandatory). Experience range: 6-10 years.

Foundational skills: Experience in Big Data (Hadoop). Experience in UNIX and shell scripting. Experience in ETL (DataStage/Informatica). Experience in databases (Oracle/Exadata), Teradata, DB2. Experience in job scheduling tools like Autosys. Aware of ITIL concepts like Incident and Problem Management. Experience in application development or production support (preferably in batch processing, scheduling, monitoring, and triaging).

Desired skills: Experience with Hadoop architecture, Hive, Impala, and coding in Python. Experience in DataStage 11.7 and above. Working experience with SQL, Teradata, Oracle, DB2.

Work Timings: 06:30 a.m. to 03:30 p.m. and 11:30 a.m. to 08:30 p.m. Job Location: Chennai

Posted 1 week ago

Apply

8.0 years

30 - 38 Lacs

Gurgaon

Remote


Role: AWS Data Engineer Location: Gurugram Mode: Hybrid Type: Permanent Job Description: We are seeking a talented and motivated Data Engineer with requisite years of hands-on experience to join our growing data team. The ideal candidate will have experience working with large datasets, building data pipelines, and utilizing AWS public cloud services to support the design, development, and maintenance of scalable data architectures. This is an excellent opportunity for individuals who are passionate about data engineering and cloud technologies and want to make an impact in a dynamic and innovative environment. Key Responsibilities: Data Pipeline Development: Design, develop, and optimize end-to-end data pipelines for extracting, transforming, and loading (ETL) large volumes of data from diverse sources into data warehouses or lakes. Cloud Infrastructure Management: Implement and manage data processing and storage solutions in AWS (Amazon Web Services) using services like S3, Redshift, Lambda, Glue, Kinesis, and others. Data Modeling: Collaborate with data scientists, analysts, and business stakeholders to define data requirements and design optimal data models for reporting and analysis. Performance Tuning & Optimization: Identify bottlenecks and optimize query performance, pipeline processes, and cloud resources to ensure cost-effective and scalable data workflows. Automation & Scripting: Develop automated data workflows and scripts to improve operational efficiency using Python, SQL, or other scripting languages. Collaboration & Documentation: Work closely with data analysts, data scientists, and other engineering teams to ensure data availability, integrity, and quality. Document processes, architectures, and solutions clearly. Data Quality & Governance: Ensure the accuracy, consistency, and completeness of data. Implement and maintain data governance policies to ensure compliance and security standards are met. Troubleshooting & Support: Provide ongoing support for data pipelines and troubleshoot issues related to data integration, performance, and system reliability. Qualifications: Essential Skills: Experience: 8+ years of professional experience as a Data Engineer, with a strong background in building and optimizing data pipelines and working with large-scale datasets. AWS Experience: Hands-on experience with AWS cloud services, particularly S3, Lambda, Glue, Redshift, RDS, and EC2. ETL Processes: Strong understanding of ETL concepts, tools, and frameworks. Experience with data integration, cleansing, and transformation. Programming Languages: Proficiency in Python, SQL, and other scripting languages (e.g., Bash, Scala, Java). Data Warehousing: Experience with relational and non-relational databases, including data warehousing solutions like AWS Redshift, Snowflake, or similar platforms. Data Modeling: Experience in designing data models, schema design, and data architecture for analytical systems. Version Control & CI/CD: Familiarity with version control tools (e.g., Git) and CI/CD pipelines. Problem-Solving: Strong troubleshooting skills, with an ability to optimize performance and resolve technical issues across the data pipeline. Desirable Skills: Big Data Technologies: Experience with Hadoop, Spark, or other big data technologies. Containerization & Orchestration: Knowledge of Docker, Kubernetes, or similar containerization/orchestration technologies. Data Security: Experience implementing security best practices in the cloud and managing data privacy requirements. 
Data Streaming: Familiarity with data streaming technologies such as AWS Kinesis or Apache Kafka. Business Intelligence Tools: Experience with BI tools (Tableau, Quicksight) for visualization and reporting. Agile Methodology: Familiarity with Agile development practices and tools (Jira, Trello, etc.) Job Type: Permanent Pay: ₹3,000,000.00 - ₹3,800,000.00 per year Benefits: Work from home Schedule: Day shift Monday to Friday Experience: AWS: 4 years (Required) Data Engineering: 6 years (Required) Python: 3 years (Required) Pyspark/Spark: 3 years (Required) Work Location: In person
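To illustrate (not prescribe) the kind of event-driven processing implied by the S3/Lambda/Glue stack in the posting above, here is a minimal Python Lambda handler; the bucket, key prefix, and processing logic are hypothetical placeholders rather than a real design.

import json
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    """Hypothetical handler: triggered by an S3 put event, reads the new object,
    counts its JSON-lines records, and writes a small summary object back to S3."""
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        row_count = sum(1 for line in body.splitlines() if line.strip())

        summary_key = f"summaries/{key}.json"              # assumed output prefix
        s3.put_object(
            Bucket=bucket,
            Key=summary_key,
            Body=json.dumps({"source_key": key, "row_count": row_count}),
        )
        results.append({"key": key, "rows": row_count})

    return {"statusCode": 200, "body": json.dumps(results)}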

Posted 1 week ago

Apply

175.0 years

7 - 8 Lacs

Gurgaon

On-site


At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express. How will you make an impact in this role? GTLS MIS & Analytics (within GS MIS) team provides analytical and reporting support to T&LS stakeholders in US and 22 international markets. The candidate will be part of Enablement team in GTLS MIS & Analytics team. The main objective is to provide support for automation and platform enablement, manage platform and data governance, ETL builds, Data Quality, BI & downstream data enablement in collaboration with tech organization. Key responsibilities: Understanding business use cases and be able to convert to technical design Part of a cross-disciplinary team, working closely with other data engineers, software engineers, data scientists, data managers and business partners. You will be designing scalable, testable and maintainable data pipelines Identify areas for data governance improvements and help to resolve data quality problems through the appropriate choice of error detection and correction, process control and improvement, or process design changes Developing metrics to measure effectiveness and drive adoption of Data Governance policies and standards that will be applied to mitigate identified risks across the data lifecycle (e.g., capture / production, aggregation / processing, reporting / consumption). You will continuously monitor, troubleshoot, and improve data pipelines and workflows to ensure optimal performance and cost-effectiveness. Reviewing architecture and design on various aspects like scalability, security, design patterns, user experience, non-functional requirements and ensure that all relevant best practices are followed. Key Skills required : 2-4 years of experience in data engineering roles. Advanced SQL skills with a focus on optimisation techniques Big data and Hadoop experience, with a focus on Spark, Hive (or other query engines), big data storage formats (such as Parquet, ORC, Avro). Cloud experience (GCP preferred) with solutions designed and implemented at production scale Strong understanding of key GCP services, especially those related to data processing [Batch/Real Time] Big Query, Cloud Scheduler, Airflow, Cloud Logging and Monitoring Hands-on experience with Git, advanced automation capabilities & shell scripting. Experience in design, development and implementation of data pipelines for Data Warehousing applications Hands on experience in performance tuning and debugging ETL jobs We back you with benefits that support your holistic well-being so you can be and deliver your best. 
This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally: Competitive base salaries Bonus incentives Support for financial-well-being and retirement Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location) Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need Generous paid parental leave policies (depending on your location) Free access to global on-site wellness centers staffed with nurses and doctors (depending on location) Free and confidential counseling support through our Healthy Minds program Career development and training opportunities American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.

Posted 1 week ago

Apply

15.0 years

0 Lacs

India

Remote


About the Role: We are looking for a hands-on AWS Data Architect or Lead Engineer to design and implement scalable, secure, and high-performing data solutions. This is an individual contributor role where you will work closely with data engineers, analysts, and stakeholders to build modern, cloud-native data architectures across real-time and batch pipelines. Experience: 7-15 years. Location: Fully remote. Company: Armakuni India.

Key Responsibilities:
Data Architecture Design: Develop and maintain a comprehensive data architecture strategy that aligns with the business objectives and technology landscape.
Data Modeling: Create and manage logical, physical, and conceptual data models to support various business applications and analytics.
Database Design: Design and implement database solutions, including data warehouses, data lakes, and operational databases.
Data Integration: Oversee the integration of data from disparate sources into unified, accessible systems using ETL/ELT processes.
Data Governance: Implement and enforce data governance policies and procedures to ensure data quality, consistency, and security.
Technology Evaluation: Evaluate and recommend data management tools, technologies, and best practices to improve data infrastructure and processes.
Collaboration: Work closely with data engineers, data scientists, business analysts, and other stakeholders to understand data requirements and deliver effective solutions.
Documentation: Create and maintain documentation related to data architecture, data flows, data dictionaries, and system interfaces.
Performance Tuning: Optimize database performance through tuning, indexing, and query optimization.
Security: Ensure data security and privacy by implementing best practices for data encryption, access controls, and compliance with relevant regulations (e.g., GDPR, CCPA).

Required Skills: Helping project teams with solutions architecture, troubleshooting, and technical implementation assistance. Proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, Oracle, SQL Server). Minimum 7 to 15 years of experience in data architecture or related roles. Experience with big data technologies (e.g., Hadoop, Spark, Kafka, Airflow). Expertise with cloud platforms (e.g., AWS, Azure, Google Cloud) and their data services. Knowledge of data integration tools (e.g., Informatica, Talend, Fivetran, Meltano). Understanding of data warehousing concepts and tools (e.g., Snowflake, Redshift, Synapse, BigQuery). Experience with data governance frameworks and tools.

Posted 1 week ago

Apply

0 years

0 Lacs

Gurgaon

On-site


About NCR Atleos
NCR Atleos, headquartered in Atlanta, is a leader in expanding financial access. Our dedicated 20,000 employees optimize the branch, improve operational efficiency, and maximize self-service availability for financial institutions and retailers across the globe.

Overview:
Data is at the heart of our global financial network. In fact, the ability to consume, store, analyze, and gain insight from data has become a key component of our competitive advantage. Our goal is to build and maintain a leading-edge data platform that provides highly available, consistent data of the highest quality for all users of the platform, including our customers, operations teams, and data scientists. We focus on evolving our platform to deliver exponential scale to NCR Atleos, powering our future growth.

Data & AI Engineers at NCR Atleos experience working at one of the largest and most recognized financial companies in the world, while being part of a software development team responsible for next-generation technologies and solutions. Our engineers design and build large-scale data storage, computation, and distribution systems. They partner with data and AI experts to deliver high-quality AI solutions and derived data to our consumers. We are looking for Data & AI Engineers who like to innovate and seek complex problems. We recognize that strength comes from diversity and will embrace your unique skills, curiosity, drive, and passion while giving you the opportunity to grow technically and as an individual. Engineers looking to work in the areas of orchestration, data modelling, data pipelines, APIs, storage, distribution, distributed computation, consumption, and infrastructure are ideal candidates.

Responsibilities
As a Data Engineer, you will join a Data & AI team transforming our global financial network and improving the quality of the products and services we provide to our customers. You will be responsible for designing, implementing, and maintaining data pipelines and systems to support the organization's data needs. Your role will involve collaborating with data scientists, analysts, and other stakeholders to ensure data accuracy, reliability, and accessibility.

Key Responsibilities:
- Data Pipeline Development: Design, build, and maintain scalable and efficient data pipelines to collect, process, and store structured and unstructured data from various sources.
- Data Integration: Integrate data from multiple sources such as databases, APIs, flat files, and streaming platforms into centralized data repositories.
- Data Modeling: Develop and optimize data models and schemas to support analytical and operational requirements. Implement data transformation and aggregation processes as needed.
- Data Quality Assurance: Implement data validation and quality assurance processes to ensure the accuracy, completeness, and consistency of data throughout its lifecycle.
- Performance Optimization: Monitor and optimize data processing and storage systems for performance, reliability, and cost-effectiveness. Identify and resolve bottlenecks and inefficiencies in data pipelines and leverage automation and AI to improve overall operations.
- Infrastructure Management: Manage and configure cloud-based or on-premises infrastructure components such as databases, data warehouses, compute clusters, and data processing frameworks.
- Collaboration: Collaborate with cross-functional teams including data scientists, analysts, software engineers, and business stakeholders to understand data requirements and deliver solutions that meet business objectives.
- Documentation and Best Practices: Document data pipelines, systems architecture, and best practices for data engineering. Share knowledge and provide guidance to colleagues on data engineering principles and techniques.
- Continuous Improvement: Stay updated with the latest technologies, tools, and trends in data engineering and recommend improvements to existing processes and systems.

Qualifications and Skills:
- Bachelor's degree or higher in Computer Science, Engineering, or a related field.
- Proven experience in data engineering or related roles, with a strong understanding of data processing concepts and technologies.
- Mastery of programming languages such as Python, Java, or Scala.
- Knowledge of database systems such as SQL, NoSQL, and data warehousing solutions.
- Knowledge of stream processing technologies such as Kafka or Apache Beam.
- Experience with distributed computing frameworks such as Apache Spark, Hadoop, or Apache Flink.
- Experience deploying pipelines in cloud platforms such as AWS, Azure, or Google Cloud Platform.
- Experience implementing enterprise systems in production settings for AI and natural language processing; exposure to self-supervised learning, transfer learning, and reinforcement learning is a plus.
- Full-stack experience to build best-fit solutions leveraging Large Language Models (LLMs) and Generative AI, with a focus on privacy, security, and fairness.
- Good engineering skills to design AI output with nodes and nested nodes in JSON, array, or HTML formats for as-is consumption and display on dashboards/portals.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork abilities.
- Experience with containerization and orchestration tools such as Docker and Kubernetes.
- Familiarity with data visualization tools such as Tableau or Power BI.

Offers of employment are conditional upon passage of screening criteria applicable to the job.

EEO Statement
NCR Atleos is an equal-opportunity employer. It is NCR Atleos policy to hire, train, promote, and pay associates based on their job-related qualifications, ability, and performance, without regard to race, color, creed, religion, national origin, citizenship status, sex, sexual orientation, gender identity/expression, pregnancy, marital status, age, mental or physical disability, genetic information, medical condition, military or veteran status, or any other factor protected by law.

Statement to Third Party Agencies
To ALL recruitment agencies: NCR Atleos only accepts resumes from agencies on the NCR Atleos preferred supplier list. Please do not forward resumes to our applicant tracking system, NCR Atleos employees, or any NCR Atleos facility. NCR Atleos is not responsible for any fees or charges associated with unsolicited resumes.
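The stream-processing and pipeline responsibilities above (Kafka, Spark, structured and unstructured sources) can be made concrete with a small sketch. Below is a minimal Spark Structured Streaming example; the broker address, topic name, event schema, and sink paths are hypothetical placeholders, not details from the posting.

```python
# Minimal Spark Structured Streaming sketch: consume JSON events from a Kafka
# topic and append them to a Parquet sink. Broker, topic, schema, and paths
# are illustrative assumptions only.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("txn-stream").getOrCreate()

schema = StructType([
    StructField("txn_id", StringType()),
    StructField("terminal_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
         .option("subscribe", "atm-transactions")            # placeholder topic
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

query = (
    events.writeStream
          .format("parquet")
          .option("path", "/data/curated/txns")               # placeholder sink
          .option("checkpointLocation", "/data/checkpoints/txns")
          .start()
)
```

A production pipeline would layer schema validation, dead-letter handling, and monitoring on top of this skeleton.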

Posted 1 week ago

Apply

2.0 years

0 Lacs

India

On-site


Ascendeum is looking for mathematicians, with extensive hands-on experience in the field of data science, who can analyze large data sets efficiently to generate actionable business intelligence that drives peak performance for our clients.

About Us: We provide AdTech strategy consulting to leading internet websites and apps hosting over 200 million monthly audiences worldwide. Since 2015, our consultants and engineers have consistently delivered intelligent solutions that enable enterprise-level websites and apps to maximize their digital advertising returns.

Job Responsibilities:
- Identify valuable data sources and automate collection processes.
- Undertake preprocessing of structured and unstructured data.
- Analyze large amounts of information to discover trends and patterns.
- Help develop reports and analyses.
- Present information using data visualisation techniques.
- Assess tests, implement new or upgraded software, and assist with strategic decisions on new systems.
- Evaluate changes and updates to source production systems.
- Develop, implement, and maintain leading-edge analytic systems, taking complicated problems and building simple frameworks.
- Provide technical expertise in data storage structures, data mining, and data cleansing.
- Propose solutions and strategies to business challenges.

Desired Skills and Experience:
- Relevant 2+ years of experience in Data Analysis.
- Complete understanding of Operations Research, Data Modelling, ML, and AI concepts.
- Knowledge of Python is mandatory; familiarity with MySQL, SQL, Scala, Java, or C++ is an asset.
- Experience using visualization tools (e.g. Jupyter Notebook) and data frameworks (e.g. Hadoop).
- Analytical mind and business acumen.
- Strong math skills (e.g. statistics, algebra).
- Problem-solving aptitude.
- Excellent communication and presentation skills.
- Bachelor's / Master's Degree in Computer Science, Engineering, Data Science, or another quantitative or relevant field is preferred.

Thank you for your interest in joining Ascendeum.
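As a small illustration of the preprocessing and trend-analysis work the posting describes, here is a hedged pandas sketch; the file name, column names, and grouping choices are assumptions for the example only, not requirements of the role.

```python
# Illustrative pandas sketch: load ad-revenue data, clean it, standardize a key
# column, and compute a weekly revenue trend per site.
import pandas as pd

df = pd.read_csv("ad_revenue.csv", parse_dates=["date"])   # hypothetical input file

df = df.dropna(subset=["revenue"])                          # cleansing: drop unusable rows
df["site"] = df["site"].str.strip().str.lower()             # standardize the join key

weekly = (
    df.groupby([pd.Grouper(key="date", freq="W"), "site"])["revenue"]
      .sum()
      .reset_index()                                        # weekly trend per site
)

print(weekly.head())
```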

Posted 1 week ago

Apply

0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site


Data Modeller JD

We are seeking a skilled Data Modeller to join our Corporate Banking team. The ideal candidate will have a strong background in creating data models for various banking services, including Current Account Savings Account (CASA), Loans, and Credit Services. This role involves collaborating with the Data Architect to define data model structures within a data mesh environment and coordinating with multiple departments to ensure cohesive data management practices.

Data Modelling:
- Design and develop data models for CASA, Loan, and Credit Services, ensuring they meet business requirements and compliance standards.
- Create conceptual, logical, and physical data models that support the bank's strategic objectives.
- Ensure data models are optimized for performance, security, and scalability to support business operations and analytics.

Collaboration with Data Architect:
- Work closely with the Data Architect to establish the overall data architecture strategy and framework.
- Contribute to the definition of data model structures within a data mesh environment.

Data Quality and Governance:
- Ensure data quality and integrity in the data models by implementing best practices in data governance.
- Assist in the establishment of data management policies and standards.
- Conduct regular data audits and reviews to ensure data accuracy and consistency across systems.

Data Modelling Tools: ERwin, IBM InfoSphere Data Architect, Oracle Data Modeler, Microsoft Visio, or similar tools.
Databases: SQL, Oracle, MySQL, MS SQL Server, PostgreSQL, Neo4j (graph).
Data Warehousing Technologies: Snowflake, Teradata, or similar.
ETL Tools: Informatica, Talend, Apache NiFi, Microsoft SSIS, or similar.
Big Data Technologies: Hadoop, Spark (optional but preferred).
Cloud Technologies: Experience with data modelling on cloud platforms, e.g., Microsoft Azure (Synapse, Data Factory).
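To ground the conceptual-to-physical modelling work the posting describes, here is a deliberately simplified sketch of a physical model for CASA-style data, expressed as DDL and run through sqlite3 purely so the example executes; a real design would target the bank's chosen warehouse platform, use its modelling tools, and cover a far richer set of entities.

```python
# Hypothetical simplified star schema for CASA balances: customer and account
# dimensions plus a daily end-of-day balance fact. Names and types are
# illustrative assumptions only.
import sqlite3

ddl = """
CREATE TABLE dim_customer (
    customer_key INTEGER PRIMARY KEY,
    customer_id  TEXT NOT NULL,
    segment      TEXT
);

CREATE TABLE dim_account (
    account_key  INTEGER PRIMARY KEY,
    account_no   TEXT NOT NULL,
    product_type TEXT CHECK (product_type IN ('CASA', 'LOAN', 'CREDIT')),
    customer_key INTEGER REFERENCES dim_customer(customer_key)
);

CREATE TABLE fact_balance_daily (
    account_key  INTEGER REFERENCES dim_account(account_key),
    balance_date TEXT NOT NULL,
    eod_balance  REAL NOT NULL,
    PRIMARY KEY (account_key, balance_date)
);
"""

with sqlite3.connect(":memory:") as conn:
    conn.executescript(ddl)
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    print("created tables:", tables)
```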

Posted 1 week ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site


What You’ll Do
- Work in Data and Analytics under the close supervision of the team manager or senior associates.
- Leverage coding best practices to ensure efficient execution of code against large datasets.
- Run standard processes to ensure metrics, reports, and insights are delivered consistently to stakeholders.
- Leverage knowledge of data structures to prepare data for ingestion and analysis, assembling data from disparate sources for the creation of insights.
- Integrate Equifax, customer, and third-party data to solve basic internal or customer analytical problems and report findings to managers and internal stakeholders.
- Review code output for anomalies, perform analysis to determine the cause, and work with Data, Analytics, Product, and Technology counterparts to implement corrective measures.
- Support discussion of the impact and importance of findings on the business (either Equifax or an external customer).
- Ensure proper use of Equifax data assets by working closely with data governance and compliance professionals.

What Experience You Need
- 2 years of proven experience as a Data Analyst or Data Scientist.
- Cognizance of the BFSI or marketing analytics landscape.
- Experience working with Python (mandatory); R and SQL.
- Experience using business intelligence tools (e.g. Tableau) and data frameworks (e.g. Hadoop, BigQuery).
- Analytical mind and business acumen.
- Strong math skills (e.g. statistics, algebra).
- Problem-solving aptitude.
- Excellent communication and presentation skills.
- BSc/BA/BTech in Computer Science, Engineering, or a relevant field; graduate degree in Data Science or other quantitative STEM streams.

What Could Set You Apart
- Cloud certification such as GCP strongly preferred.
- Self-starter.
- Excellent communicator / client facing.
- Ability to work in a fast-paced environment.
- Flexibility to work across A/NZ time zones based on project needs.
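For the "review output for anomalies" responsibility above, a routine check might look like the hedged pandas sketch below; the metric name, file, rolling window, and threshold are all assumptions made for illustration.

```python
# Illustrative anomaly check on a delivered metric: flag days that deviate more
# than 3 sigma from a 28-day rolling trend. Column names are placeholders.
import pandas as pd

metrics = pd.read_csv("daily_metrics.csv", parse_dates=["run_date"])  # hypothetical output file

series = metrics.set_index("run_date")["approval_rate"]
zscores = (series - series.rolling(28).mean()) / series.rolling(28).std()

anomalies = series[zscores.abs() > 3]
if not anomalies.empty:
    print("Investigate these run dates:")
    print(anomalies)
```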

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


About McDonald’s: One of the world’s largest employers with locations in more than 100 countries, McDonald’s Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Position Summary: Looking to hire a Data Engineer at the G4 level who has a deep understanding of the Data Product Lifecycle, Standards, and Practices. Will be responsible for building scalable and efficient data solutions to support the Brand Marketing / Menu function, with a specific focus on the Menu Data product and initiatives. As a Data Engineer, you will collaborate with data scientists, analysts, and other cross-functional teams to ensure the availability, reliability, and performance of data systems. Leads initiatives to enable trusted Menu data, supports decision-making, and partners with business and technology teams to deliver scalable data solutions that drive insights into menu performance, customer preferences, and marketing effectiveness. Expertise in cloud computing platforms, technologies, and data engineering best practices will play a crucial role within this domain.

Who we’re looking for:

Primary Responsibilities:
- Builds and maintains relevant and reliable Menu data products that support menu and marketing analytics.
- Develops and implements new technology solutions as needed to ensure ongoing improvement, with data reliability and observability in view.
- Participates in new software development and leads data engineering initiatives supporting Product Mix Analytics, ensuring timely and accurate delivery of marketing and menu-related products.
- Works closely with the Product Owner to help define the business rules that determine the quality of Menu datasets.
- Drives and implements best practices for pipeline development, data governance, data security, and quality across marketing and menu-related datasets.
- Ensures scalability, maintainability, and quality of data systems powering menu item tracking, promotion data, and marketing analytics.
- Stays up to date with emerging data engineering technologies, trends, and best practices, and evaluates their applicability to meet evolving Product Mix analytics needs.
- Documents data engineering processes, workflows, and solutions for knowledge sharing and future reference.
- Mentors and coaches junior data engineers, particularly in areas related to menu item tracking, promotion data, and marketing analytics.
- Ability and flexibility to coordinate and work with teams distributed across time zones, as needed.

Skills:
- Leads teams to drive scalable data engineering practices and technical excellence within the Menu Data ecosystem.
- Bachelor's or master's degree in computer science or a related engineering field, and deep experience with cloud computing.
- 5+ years of professional experience in data engineering or related fields.
- Proficiency in Python, Java, or Scala for data processing and automation.
- Hands-on experience with data orchestration tools (e.g., Apache Airflow, Luigi) and big data ecosystems (e.g., Hadoop, Spark, NoSQL).
- Expert knowledge of data quality functions like cleansing, standardization, parsing, de-duplication, mapping, hierarchy management, etc.
- Ability to perform extensive data analysis (comparing multiple datasets) using a variety of tools.
- Proven ability to mentor team members and lead technical initiatives across multiple workstreams.
- Effective communication and stakeholder management skills to drive alignment and adoption of data engineering standards.
- Demonstrated experience in data management and data governance capabilities.
- Familiarity with data warehousing principles and best practices.
- Excellent problem solver - uses data and technology to solve problems or answer complex data-related questions.
- Excellent collaboration skills to work effectively in cross-functional teams.

Work location: Hyderabad, India
Work pattern: Full time role.
Work mode: Hybrid.
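The data quality functions listed above (cleansing, standardization, de-duplication) can be illustrated with a short, hedged pandas sketch applied to a hypothetical menu-item feed; the column names, sample rows, and rules are invented for the example and are not part of the role.

```python
# Minimal data-quality sketch: cleanse, standardize, and de-duplicate a
# hypothetical menu-item feed. All columns and values are illustrative.
import pandas as pd

items = pd.DataFrame({
    "item_name": [" Big Burger", "big burger", "Fries ", "Fries"],
    "price":     [5.99, 5.99, None, 2.49],
    "market":    ["US", "US", "US", "US"],
})

# Cleansing: drop rows missing a mandatory field
items = items.dropna(subset=["price"])

# Standardization: normalize the business key before matching
items["item_name_std"] = items["item_name"].str.strip().str.title()

# De-duplication: keep one record per (market, standardized name)
deduped = items.drop_duplicates(subset=["market", "item_name_std"], keep="first")

print(deduped)
```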

Posted 1 week ago

Apply

8.0 years

0 Lacs

Kochi, Kerala, India

On-site


Introduction
Software Developers at IBM are the backbone of our strategic initiatives to design, code, test, and provide industry-leading solutions that make the world run today - planes and trains take off on time, bank transactions complete in the blink of an eye, and the world remains safe because of the work our software developers do. Whether you are working on projects internally or for a client, software development is critical to the success of IBM and our clients worldwide. At IBM, you will use the latest software development tools, techniques, and approaches and work with leading minds in the industry to build solutions you can be proud of.

Your Role and Responsibilities
This candidate is responsible for DB2 installation and configuration in the following environments: on-premises, multi-cloud, Red Hat OpenShift clusters, HADR, and both non-DPF and DPF.
- Migration of other databases to Db2 (e.g., Teradata, Snowflake, SAP, or Cloudera to Db2).
- Create high-level and detail-level designs, and maintain product roadmaps that include both modernization and leveraging cloud solutions.
- Design scalable, performant, and cost-effective data architectures within the Lakehouse to support diverse workloads, including reporting, analytics, data science, and AI/ML.
- Perform health checks of the databases, make recommendations, and deliver tuning at the database and system level.
- Deploy DB2 databases as containers within Red Hat OpenShift clusters.
- Configure containerized database instances, persistent storage, and network settings to optimize performance and reliability.
- Lead the architectural design and implementation of solutions on IBM watsonx.data, ensuring alignment with the overall enterprise data strategy and business objectives.
- Define and optimize the watsonx.data ecosystem, including integration with other IBM watsonx components (watsonx.ai, watsonx.governance) and existing data infrastructure (DB2, Netezza, cloud data sources).
- Establish best practices for data modeling, schema evolution, and data organization within the watsonx.data lakehouse.
- Act as a subject matter expert on Lakehouse architecture, providing technical leadership and guidance to data engineering, analytics, and development teams.
- Mentor junior architects and engineers, fostering their growth and knowledge in modern data platforms.
- Participate in the development of architecture governance processes and promote best practices across the organization.
- Communicate complex technical concepts to both technical and non-technical stakeholders.

Required Technical and Professional Expertise
- 8+ years of experience in data architecture, data engineering, or a similar role, with significant hands-on experience in cloud data platforms.
- Strong proficiency in DB2, SQL, and Python.
- Strong understanding of: database design and modelling (dimensional, normalized, NoSQL schemas); normalization and indexing; data warehousing and ETL processes; cloud platforms (AWS, Azure, GCP); big data technologies (e.g., Hadoop, Spark).
- Database migration project experience from one database to another (target database Db2).
- Experience deploying DB2 databases as containers within Red Hat OpenShift clusters and configuring containerized database instances, persistent storage, and network settings to optimize performance and reliability.
- Excellent communication, collaboration, problem-solving, and leadership skills.

Preferred Technical and Professional Experience
- Experience with machine learning environments and LLMs.
- Certification in IBM watsonx.data or related IBM data and AI technologies.
- Hands-on experience with Lakehouse platforms (e.g., Databricks, Snowflake).
- Exposure to, or an understanding of, DB replication processes.
- Experience integrating watsonx.data with GenAI or LLM initiatives (e.g., RAG architectures).
- Experience with NoSQL databases (e.g., MongoDB, Cassandra).
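Database-to-Db2 migrations of the kind described above are usually followed by validation passes. The sketch below shows one such pass — comparing row counts per table between source and target — using sqlite3 as a stand-in for the real source (e.g., Teradata) and target (Db2) drivers purely so the example runs; table names and data are illustrative.

```python
# Minimal post-migration validation sketch: compare per-table row counts between
# a source and a target connection. sqlite3 is a stand-in for the actual drivers.
import sqlite3

def row_counts(conn, tables):
    """Return {table: row_count} for the given DB-API connection."""
    return {t: conn.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0] for t in tables}

source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")
for conn in (source, target):
    conn.execute("CREATE TABLE accounts (id INTEGER)")
    conn.executemany("INSERT INTO accounts VALUES (?)", [(i,) for i in range(100)])

tables = ["accounts"]
src_counts = row_counts(source, tables)
tgt_counts = row_counts(target, tables)

mismatches = {t: (src_counts[t], tgt_counts[t])
              for t in tables if src_counts[t] != tgt_counts[t]}
print("mismatched tables:", mismatches or "none")
```

Row counts are only a first-pass check; column-level profiling or checksums would normally follow before sign-off.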

Posted 1 week ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site


- Design, develop, and deploy machine learning models and algorithms using Python.
- Implement computer vision solutions for image and video analysis.
- Develop natural language processing (NLP) applications for text analysis and understanding.
- Utilize analytics workbench tools to create and manage data pipelines and workflows.
- Collaborate with data scientists, software engineers, and other stakeholders to understand project requirements and deliver effective solutions.
- Conduct research and stay updated with the latest advancements in AI and ML technologies.
- Ensure the scalability and performance of AI/ML solutions through rigorous testing and optimization.

Requirements
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proven experience as an AI/ML Engineer or in a similar role.
- Strong proficiency in Python and its libraries (e.g., TensorFlow, PyTorch, scikit-learn).
- Expertise in computer vision techniques and tools (e.g., OpenCV, YOLO, Mask R-CNN).
- Experience with NLP libraries and frameworks (e.g., NLTK, spaCy, Transformers).
- Familiarity with analytics workbench tools and data pipeline management.
- Excellent problem-solving skills and attention to detail.
- Strong communication and teamwork abilities.

Preferred Qualifications
- Experience with cloud platforms (e.g., AWS, Azure, Google Cloud) for deploying AI/ML solutions.
- Knowledge of big data technologies (e.g., Hadoop, Spark).
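To illustrate the NLP side of the work described above, here is a hedged scikit-learn sketch of a tiny text-classification pipeline; the inline dataset and labels are invented purely so the example runs end to end and do not reflect any real project data.

```python
# Illustrative NLP pipeline: TF-IDF features feeding a logistic regression
# classifier. The toy dataset below is made up for demonstration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great product, works well", "terrible, broke in a day",
         "very satisfied with support", "awful experience, never again"]
labels = [1, 0, 1, 0]  # 1 = positive sentiment, 0 = negative

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["support was great", "broke immediately"]))
```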

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies