
2732 Data Quality Jobs - Page 14

JobPe aggregates job listings for easy access; you apply directly on the original job portal.

7.0 - 12.0 years

4 - 8 Lacs

Bengaluru

Work from Office

About the Role
We are seeking a highly skilled Data Engineer with deep expertise in PySpark and the Cloudera Data Platform (CDP) to join our data engineering team. As a Data Engineer, you will be responsible for designing, developing, and maintaining scalable data pipelines that ensure high data quality and availability across the organization. This role requires a strong background in big data ecosystems, cloud-native tools, and advanced data processing techniques. The ideal candidate has hands-on experience with data ingestion, transformation, and optimization on the Cloudera Data Platform, along with a proven track record of implementing data engineering best practices. You will work closely with other data engineers to build solutions that drive impactful business insights.

Responsibilities
- Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
- Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
- Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
- Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing runtime of ETL processes.
- Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline (see the sketch after this listing).
- Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem.

Education and Experience
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- 3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.

Technical Skills
- PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
- Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
- Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
- Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
- Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
- Scripting and Automation: Strong scripting skills in Linux.
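A minimal sketch of the kind of PySpark data quality check described above, assuming a Hive-backed table on CDP; the table and column names (customer_txn, txn_id, amount) are illustrative placeholders, and a real pipeline would plug into the team's own schemas and orchestration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical table names, for illustration only.
SOURCE_TABLE = "raw.customer_txn"
TARGET_TABLE = "curated.customer_txn"

spark = (
    SparkSession.builder
    .appName("customer-txn-quality-check")
    .enableHiveSupport()          # read Hive-managed tables on CDP
    .getOrCreate()
)

df = spark.table(SOURCE_TABLE)

# Basic data quality checks: row count, null keys, duplicate keys.
total_rows = df.count()
null_keys = df.filter(F.col("txn_id").isNull()).count()
dup_keys = df.groupBy("txn_id").count().filter(F.col("count") > 1).count()

if null_keys > 0 or dup_keys > 0:
    raise ValueError(
        f"Quality check failed: {null_keys} null keys, {dup_keys} duplicate keys "
        f"out of {total_rows} rows"
    )

# Only write the curated table once the checks pass.
(
    df.dropDuplicates(["txn_id"])
      .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
      .write.mode("overwrite")
      .saveAsTable(TARGET_TABLE)
)
```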

Posted 6 days ago

Apply

9.0 - 14.0 years

5 - 8 Lacs

Bengaluru

Work from Office

Kafka Data Engineer
We are looking for a Data Engineer to build and manage data pipelines that support batch and streaming data solutions. The role requires expertise in creating seamless data flows across platforms such as a Data Lake/Lakehouse on Cloudera, Azure Databricks, and Kafka, for both batch and stream data pipelines.

Responsibilities
- Develop, test, and maintain data pipelines (batch and stream) using Cloudera, Spark, Kafka, and Azure services such as ADF, Cosmos DB, Databricks, and NoSQL/Mongo DB.
- Strong programming skills in Spark, Python or Scala, and SQL.
- Optimize data pipelines to improve speed, performance, and reliability, ensuring that data is available to data consumers as required.
- Create ETL pipelines for downstream consumers by transforming data as per business logic.
- Work closely with Data Architects and Data Analysts to align data solutions with business needs and ensure the accuracy and accessibility of data.
- Implement data validation checks and error-handling processes to maintain high data quality and consistency across data pipelines.
- Strong analytical and problem-solving skills, with a focus on optimizing data flows and addressing impacts in the data pipeline.

Qualifications
- 8+ years of IT experience, with at least 5+ years in data engineering and cloud-based data platforms.
- Strong experience with Cloudera (or any Data Lake), Confluent/Apache Kafka, and Azure Data Services (ADF, Databricks, Cosmos DB).
- Deep knowledge of NoSQL databases (Cosmos DB, MongoDB) and data modeling for performance and scalability.
- Proven expertise in designing and implementing batch and streaming data pipelines using Databricks, Spark, or Kafka (see the streaming sketch after this listing).
- Experience in creating scalable, reliable, and high-performance data solutions with robust data governance policies.
- Strong collaboration skills to work with stakeholders, mentor junior Data Engineers, and translate business needs into actionable solutions.
- Bachelor's or Master's degree in Computer Science, IT, or a related field.
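A minimal sketch of a streaming pipeline of the kind this role describes, using Spark Structured Streaming to read from Kafka; the broker address, topic name, event schema, and output paths are illustrative assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

# Hypothetical event schema for the Kafka topic.
schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")   # illustrative broker
    .option("subscribe", "orders")                        # illustrative topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers bytes; parse the JSON payload and drop malformed records.
orders = (
    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("o"))
       .select("o.*")
       .filter(F.col("order_id").isNotNull())
)

query = (
    orders.writeStream
    .format("parquet")                                    # or "delta" on Databricks
    .option("path", "/data/curated/orders")               # illustrative path
    .option("checkpointLocation", "/data/checkpoints/orders")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```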

Posted 6 days ago

Apply

8.0 - 13.0 years

5 - 9 Lacs

Pune

Work from Office

Responsibilities / Qualifications:
- Candidate must have 5-6 years of IT working experience; at least 3 years of experience on an AWS Cloud environment is preferred.
- Ability to understand the existing system architecture and work towards the target architecture.
- Experience with data profiling activities: discovering data quality challenges and documenting them.
- Experience with development and implementation of a large-scale Data Lake and data analytics platform on AWS Cloud.
- Develop and unit test data pipeline architecture for data ingestion processes using AWS native services (see the sketch after this listing).
- Experience with development on AWS Cloud using AWS data stores such as Redshift, RDS, S3, Glue Data Catalog, Lake Formation, Apache Airflow, Lambda, etc.
- Experience with development of a data governance framework, including the management of data, operating model, data policies, and standards.
- Experience with orchestration of workflows in an enterprise environment.
- Working experience with Agile methodology.
- Experience working with source code management tools such as AWS CodeCommit or GitHub.
- Experience working with Jenkins or any CI/CD pipelines using AWS services.
- Experience working with an onshore/offshore model and collaborating on deliverables.
- Good communication skills to interact with the onshore team.
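A minimal sketch of an AWS Glue (PySpark) ingestion job of the kind listed above, assuming a hypothetical Glue Data Catalog database raw_db, table customers, and S3 bucket; all names are placeholders, not a prescribed implementation.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the source table registered in the Glue Data Catalog (names are illustrative).
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="customers"
)

# Simple profiling/cleansing step: drop rows with a missing business key.
df = dyf.toDF().filter(F.col("customer_id").isNotNull())

# Write the curated output to the data lake as partitioned Parquet
# (assumes an ingest_date column exists on the source table).
(
    df.write.mode("overwrite")
      .partitionBy("ingest_date")
      .parquet("s3://example-data-lake/curated/customers/")   # illustrative bucket
)

job.commit()
```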

Posted 6 days ago

Apply

9.0 - 14.0 years

11 - 16 Lacs

Bengaluru

Work from Office

Educational: Bachelor of Engineering, Bachelor of Technology
Service Line: Enterprise Package Application Services

Responsibilities
A day in the life of an Infoscion: As part of the Infosys consulting team, your primary role would be to lead the engagement effort of providing high-quality and value-adding consulting solutions to customers at different stages, from problem definition to diagnosis to solution design, development, and deployment. You will review the proposals prepared by consultants, provide guidance, and analyze the solutions defined for the client's business problems to identify any potential risks and issues. You will identify change management requirements and propose a structured approach to the client for managing the change using multiple communication mechanisms. You will also coach and create a vision for the team, provide subject matter training for your focus areas, and motivate and inspire team members through effective and timely feedback and recognition for high performance. You will be a key contributor in unit-level and organizational initiatives with an objective of providing high-quality, value-adding consulting solutions to customers, adhering to the guidelines and processes of the organization. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Additional Responsibilities:
- Good knowledge of software configuration management systems
- Strong business acumen, strategy, and cross-industry thought leadership
- Awareness of latest technologies and industry trends
- Logical thinking and problem-solving skills, along with an ability to collaborate
- Knowledge of two or three industry domains
- Understanding of the financial processes for various types of projects and the various pricing models available
- Client interfacing skills
- Knowledge of SDLC and agile methodologies
- Project and team management

Location of posting: Infosys Ltd. is committed to ensuring you have the best experience throughout your journey with us. We currently have open positions in a number of locations across India - Bangalore, Pune, Hyderabad, Chennai, Chandigarh, Mysore, Kolkata, Trivandrum, Indore, Nagpur, Mangalore, Noida, Bhubaneswar, Coimbatore, Mumbai, Jaipur, Hubli, Vizag. While we work in accordance with business requirements, we shall strive to offer you the location of your choice, where possible.

Technical and Professional:
- Must have implemented S/4 Data Migration projects, either as a hands-on ETL developer or in an SAP functional module lead capacity
- Good understanding of Greenfield, Bluefield, and the data migration life cycle and phases
- Ability to lead workshops at client sites with multiple stakeholders
- Must have developed data migration strategy and approach for multi-year, multi-country global rollout programs (template, pilot, waves)
- Must have executed data quality and validation programs and have worked with SAP Migration Cockpit
- Ability to define functional dependencies and project planning involving the client's business/IT, project functional, and technical teams
- Must have participated in the presales process and be able to develop RFP responses with estimation, resource plan, work package planning, and orals

Preferred Skills: ETL - Data Integration - SAP BusinessObjects Data Services (SAP BODS); Data Migration - SAP-specific Data Migration

Posted 6 days ago

Apply

9.0 - 14.0 years

12 - 16 Lacs

Pune

Work from Office

Skills required: Strong SQL (minimum 6-7 years of experience), data warehouse, ETL.

The Data and Client Platform Tech project provides all data-related services to internal and external clients of the SST business. The Ingestion team is responsible for getting and ingesting data into the Datalake. This is a global team with development teams in Shanghai, Pune, Dublin, and Tampa. The Ingestion team uses Big Data technologies such as Impala, Hive, Spark, and HDFS, and cloud technologies such as Snowflake for cloud data storage.

Responsibilities:
- Gain an understanding of the complex domain model and define the logical and physical data model for the Securities Services business.
- Constantly improve the ingestion, storage, and performance processes by analyzing them and automating them wherever possible.
- Define standards and best practices for the team in the areas of code standards, unit testing, continuous integration, and release management.
- Improve performance of queries from lake tables and views (see the tuning sketch after this listing).
- Work with a wide variety of stakeholders - source systems, business sponsors, product owners, scrum masters, enterprise architects - and possess excellent communication skills to articulate challenging technical details to various classes of people.
- Work in Agile Scrum and complete all assigned tasks/JIRAs as per sprint timelines and standards.

Qualifications
- 5-8 years of relevant experience in data development, ETL, data ingestion, and performance optimization.
- Strong SQL skills are essential; experience writing complex queries spanning multiple tables is required.
- Knowledge of Big Data technologies (Impala, Hive, Spark) is nice to have.
- Working knowledge of performance tuning of database queries: understanding the inner workings of the query optimizer, query plans, indexes, partitions, etc.
- Experience in systems analysis and programming of software applications in SQL and other Big Data query languages.
- Working knowledge of data modelling and dimensional modelling tools and techniques.
- Knowledge of working with high-volume data ingestion and high-volume historic data processing is required.
- Exposure to a scripting language such as shell scripting or Python is required.
- Working knowledge of consulting/project management techniques and methods.
- Knowledge of working in Agile Scrum teams and processes.
- Experience in data quality, data governance, DataOps, and the latest data management techniques is a plus.

Education: Bachelor's degree/University degree or equivalent experience.
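A minimal sketch of the kind of lake-query tuning mentioned above, using Spark SQL against a hypothetical partitioned table (lake.trades, partitioned by trade_date); the table and column names are illustrative assumptions.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("lake-query-tuning")
    .enableHiveSupport()
    .getOrCreate()
)

# Unselective query: scans every partition of the lake table.
slow = """
SELECT counterparty, SUM(notional) AS total_notional
FROM lake.trades
GROUP BY counterparty
"""

# Tuned query: prunes partitions via the partition column and only reads what is needed.
fast = """
SELECT counterparty, SUM(notional) AS total_notional
FROM lake.trades
WHERE trade_date BETWEEN '2024-01-01' AND '2024-01-31'
GROUP BY counterparty
"""

# Compare physical plans; the tuned plan should show partition filters
# instead of a full table scan.
spark.sql("EXPLAIN " + slow).show(truncate=False)
spark.sql("EXPLAIN " + fast).show(truncate=False)

spark.sql(fast).show()
```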

Posted 6 days ago

Apply

8.0 - 11.0 years

10 - 13 Lacs

Pune

Work from Office

Job Title: Lead Business Functional Analyst for Adjustments Acceleration, VP
Location: Pune, India

Role Description
The Credit Risk Data Unit provides quality-assured and timely Finance-relevant risk information and analysis to key stakeholders in a transparent and controlled manner, covering the end-to-end processes for all relevant metrics in an efficient and regulatory-compliant way. This role is for the Global Risk Data Control and Validation Group Function team, responsible for aggregating, quality assuring, and submitting credit exposure data into FDW on time as per BCBS standards. This data impacts all downstream regulatory and regional reporting of the Bank, including key metrics like Credit Risk RWA, Leverage Exposure, and Regulatory Capital. RDV-GF is part of the Credit Risk Data Unit (CRDU) team within Group Finance, and its key stakeholders include but are not limited to CRDU, Business Finance, Accounting Close, Book Runners, and Source & FDW IT Support teams. This group process is centrally based out of Pune.

What we'll offer you
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your key responsibilities
- Proactively manage the resolution of Data Quality (DQ) issues relating to the sourcing of good-quality input data into FDW from various source systems (i.e. LS2, SUMMIT, RMS, Magellan, etc.), including strategic, non-strategic, and manual data feeds.
- Support the change book of work as set out by the FRM KD workstream by engaging with Business, Finance, Change teams, and Technology on initiatives for strategic implementations and Data Quality (DQ) remediation.
- Navigate through the complex logic and algorithms built into the data enrichment layers (i.e. FCL, EOS, Kannon, risk engine) to perform root cause analysis on data quality issues.
- Provide input into relevant governance processes relating to Data Quality issues, ensuring accurate monitoring, tracking, and escalation.
- Provide subject matter expertise and analytics to support Finance and the Risk team regarding risk and regulatory topics or initiatives, e.g. optimization topics.
- Represent the team in relevant Production and Change forums and raise issues relating to month-end data quality issues and their resolution.

Your skills and experience
- Minimum 8-9 years of experience in Credit Risk Controls, Banking Operations, Business Process Reengineering, Change, Audit, or the Finance industry.
- Good understanding of banking products (Debt, SFT, and Derivatives) with working knowledge of Global Markets financial products.
- A good working knowledge of the front-to-back system architecture within an investment bank.
- Advanced skills in MS applications (Excel, Word, PowerPoint, and Access); working knowledge of SQL is a plus.
- Strong quantitative analysis skills.
- Strong stakeholder management skills; able to manage diverse stakeholders across regions.

How we'll support you

Posted 6 days ago

Apply

4.0 - 9.0 years

37 - 40 Lacs

Bengaluru

Work from Office

Job Title: Operations Manager, AVP
Location: Bangalore, India

Role Description
The successful candidate will be joining the ORDS team as part of the Reference Data Accelerator (RDA) project as it moves into the BAU stage. This is a key regulatory requirement in providing a single obligor view of authorised data to ensure adherence to BCBS 239 compliance, and is a Management Board objective. The primary responsibility will be to provide cRDS data oversight into the RDA model.

About Operational Reference Data Services (ORDS)
The Operational Reference Data Services (ORDS) function comprises Client Data, Tax & Regulatory teams (including Instrument Reference Data). The group provides operational services across the Global Markets and Corporate Investment Banking (CIB) clients globally, which enable client business, regulatory and tax compliance, protect against client lifecycle risk, and drive up data standards within the firm. The ORDS function is focused on driving compliance within operations. The primary focus is Client Data, which has a significant impact on how we perform onboarding and KYC of our customers, maintenance of client accounts, and downstream operations.

About the Organization
Deutsche Bank's Operations group provides support for all of DB's businesses to enable them to deliver operational transactions and processes to clients. Our people work in established global financial centres such as London, New York, Frankfurt, and Singapore, as well as specialist development and operations centres in locations including Birmingham, Jacksonville, Bangalore, Jaipur, Pune, Dublin, Bucharest, Moscow, and Cary. We move over EUR 1.6 trillion across the Bank's platforms, support thousands of trading desks, and enable millions of banking transactions, share trades, and emails every day. Our goal is to deliver world-class client service at exceptional value to internal partners and clients. A dynamic and diverse division, our objective is to make sure that all our services are executed in a timely and professional manner, that risk is minimised, and that the client experience is positive. We are proud of the professionalism of our people and the service they deliver. In return, we offer career development opportunities to foster skills and talent. We work across a wide range of product groups, including derivatives, securities, global finance and foreign exchange, cash and trade loans, and trust and securities services, as well as cross-product functions. Operations' interface with Regulatory and Tax is a growing area of interest and helps Deutsche Bank to be compliant at all times.

What we'll offer you
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your key responsibilities (ORDS cRDS RDA objectives)
- Legal entity creation, modification, and clearing of data quality exceptions through independent external research, where possible.
- Remediate all known data issues and gaps to ensure all data is ready for use by the various processes built around dbCAR.
- Reduce time to market and increase accuracy for tax and regulatory compliance programs.
- Identify root causes and work towards prioritising strategic improvements to mitigate issues overall.
- Hand-hold the Legal Entity Master process end to end to ensure data quality satisfies consumer expectations and cRDS standards and policies.
- Analyse the root cause of any DQ issues and work with the relevant team to resolve them.
- Ensure agreed standards and procedures are duly followed, with an eye for delivering against the set timelines.
- Understand client hierarchy and relationship structures and identify gaps for proactive remediation on a systematic basis.
- Identify, review, and remediate duplicate relationship structures in order to improve reporting accuracy.
- Work closely with SMEs within the larger Client Data Services organisation across locations to obtain the skillsets required to support end-to-end data remediation.
- Create and agree required KOPs and KPIs around key processes; manage delivery against agreed timelines and milestones with key stakeholders/business.
- Set the stage for measurement of KPIs, KRIs, and other metrics.
- Record, track, and report on risks and issues.

Responsibilities and tasks
- Evaluate availability of required information in line with stakeholders' requirements.
- Maintain and document data changes.
- Ensure RDA business rules are adhered to, refining where required, and analyse data inconsistencies.
- Remediate and prioritise cRDS data challenges.

Your skills and experience
- 2+ years of experience in investment banking, especially in a Client Onboarding / KYC / Client Reference Data function.
- A proven, high level of analytical and problem-solving experience; ability to break down complex situations into easy-to-understand components.
- Experience with DQ tools and SQL.
- Experience in Data Management and Data Analytics.
- Strong and well-developed relationship/stakeholder skills.
- Excellent communication skills.
- High motivation and a proactive approach to situations.
- Open-minded; able to share information, knowledge, and expertise with peers and team members.
- High emphasis on teamwork and leading situations.

How we'll support you

Posted 6 days ago

Apply

5.0 - 10.0 years

5 - 10 Lacs

Hyderabad

Work from Office

Responsibilities
- Design, develop, and maintain ETL processes using Ab Initio and other ETL tools.
- Manage and optimize data pipelines on AWS.
- Write and maintain complex PL/SQL queries for data extraction, transformation, and loading.
- Provide Level 3 support for ETL processes, troubleshooting and resolving issues promptly.
- Collaborate with data architects, analysts, and other stakeholders to understand data requirements and deliver effective solutions.
- Ensure data quality and integrity through rigorous testing and validation.
- Stay updated with the latest industry trends and technologies in ETL and cloud computing.

Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Certification in Ab Initio.
- Proven experience with AWS and cloud-based data solutions.
- Strong proficiency in PL/SQL and other ETL tools.
- Experience in providing Level 3 support for ETL processes.
- Excellent problem-solving skills and attention to detail.
- Strong communication and teamwork abilities.

Preferred Qualifications
- Experience with other ETL tools such as Informatica, Talend, or DataStage.
- Knowledge of data warehousing concepts and best practices.
- Familiarity with scripting languages (e.g., Python, Shell scripting).

Posted 6 days ago

Apply

3.0 - 5.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Educational: Bachelor of Engineering, Bachelor of Technology (Integrated), BCA, BSc, MTech, MSc, Master of Business Management
Service Line: Enterprise Package Application Services

Responsibilities
You will be part of an innovative team that drives our Celonis initiatives, diving into business processes to determine root causes, quantify potential, and establish and drive improvement initiatives that make businesses more efficient. You will set up and maintain the data models that form the basis of the analyses and work closely with the business analysts to generate the customized set of analytics that serves as a single source of truth for business performance measurement as well as data-driven decision making. You are responsible for setting the data dictionary and maintaining data governance on the created structure. You identify the best possible strategy for data collection, ensure data quality, and work together with the stakeholders responsible for the data input to ensure we can correctly measure and track all necessary information. You will collaborate with source system experts to ensure the source systems are set up correctly to gather all relevant information and support the most effective data structures. You will create and maintain comprehensive documentation for data models, processes, and systems to facilitate knowledge sharing.

Technical and Professional: Celonis
- 2+ years of relevant work experience in process and data modelling.
- Experience working with data from ERP systems such as SAP.
- A proven track record in using SQL and Python.
- A team player who can communicate data structural concepts and ideas to both technical and non-technical stakeholders.
- Strong analytical skills and an affinity with business concepts.
- Celonis Data Engineer/Implementation Professional certification is an advantage.
- Celonis project experience is a big plus.

Preferred Skills: Foundational - Business Process Management - Business Process Model and Notation (BPMN) ver 2.0 - Celonis

Posted 6 days ago

Apply

2.0 - 7.0 years

13 - 18 Lacs

Bengaluru

Work from Office

Educational: Bachelor of Engineering, Bachelor of Technology (Integrated), Bachelor of Technology, Bachelor of Business Adm., Master of Business Adm., Master of Science (Technology), Master of Technology, Master of Technology (Integrated)
Service Line: Enterprise Package Application Services

Responsibilities
A day in the life of an Infoscion: As part of the Infosys consulting team, your primary role would be to actively aid the consulting team in different phases of the project, including problem definition, effort estimation, diagnosis, solution generation, design, and deployment. You will explore alternatives to the recommended solutions based on research that includes literature surveys, information available in public domains, vendor evaluation information, etc., and build POCs. You will create requirement specifications from the business needs, define the to-be processes, and produce detailed functional designs based on requirements. You will support configuring solution requirements on the products; understand any issues, diagnose their root cause, seek clarifications, and then identify and shortlist solution alternatives. You will also contribute to unit-level and organizational initiatives with an objective of providing high-quality, value-adding solutions to customers. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Additional Responsibilities:
- Ability to work with clients to identify business challenges and contribute to client deliverables by refining, analyzing, and structuring relevant data
- Awareness of latest technologies and trends
- Logical thinking and problem-solving skills, along with an ability to collaborate
- Ability to assess current processes, identify improvement areas, and suggest technology solutions
- Knowledge of one or two industry domains

Technical and Professional:
- At least 2 years of configuration and development experience with implementation of OFSAA solutions (such as ERM, EPM, etc.)
- Expertise in implementing OFSAA technical areas covering OFSAAI and frameworks: Data Integrator, Metadata Management, Data Modelling
- Perform and understand data mapping from source systems to OFSAA staging; execute OFSAA batches and analyse result area tables and derived entities
- Perform data analysis using OFSAA metadata (i.e. Technical Metadata, Rule Metadata, Business Metadata), identify any data mapping gaps, and report them to stakeholders
- Participate in requirements workshops, help in implementation of the designed solution, testing (UT, SIT), coordinate user acceptance testing, etc.
- Knowledge of and experience with the full SDLC lifecycle
- Experience with Lean/Agile development methodologies

Preferred Skills: Technology - Oracle Industry Solutions - Oracle Financial Services Analytical Applications (OFSAA)

Posted 6 days ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: SAP Master Data Migration
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and troubleshooting to ensure that the applications function as intended, contributing to the overall success of the projects you are involved in.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application processes and workflows.
- Engage in continuous learning to stay updated with the latest technologies and methodologies.

Professional & Technical Skills:
- Must-have skills: Proficiency in SAP Master Data Migration.
- Strong understanding of data migration processes and best practices.
- Experience with data mapping and transformation techniques.
- Familiarity with SAP modules and their integration points.
- Ability to troubleshoot and resolve data-related issues efficiently.

Additional Information:
- The candidate should have a minimum of 3 years of experience in SAP Master Data Migration.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.

Qualification: 15 years full-time education

Posted 6 days ago

Apply

15.0 - 20.0 years

5 - 9 Lacs

Navi Mumbai

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: SAP Data Migration
Good-to-have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing solutions that align with business objectives, and ensuring that applications are optimized for performance and usability. You will also engage in problem-solving activities, providing support and guidance to your team members while continuously seeking opportunities for improvement and innovation in application development.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and ensure alignment with business goals.

Professional & Technical Skills:
- Must-have skills: Proficiency in SAP Data Migration.
- Strong understanding of data integration techniques and methodologies.
- Experience with data mapping and transformation processes.
- Familiarity with SAP modules and their data structures.
- Ability to troubleshoot and resolve data migration issues efficiently.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in SAP Data Migration.
- This position is based in Mumbai.
- A 15 years full-time education is required.

Qualification: 15 years full-time education

Posted 6 days ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Manhattan Warehouse Solutions Technical
Good-to-have skills: Warehouse Management Solutions
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. Your typical day will involve collaborating with team members to develop innovative solutions and ensure seamless application functionality.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Develop and implement software solutions to meet business requirements.
- Collaborate with cross-functional teams to ensure application functionality.
- Conduct code reviews and provide technical guidance to team members.
- Troubleshoot and resolve application issues in a timely manner.
- Stay updated on industry trends and technologies to enhance application development.

Professional & Technical Skills:
- Must-have skills: Proficiency in Manhattan Warehouse Solutions Technical.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Manhattan Warehouse Solutions Technical.
- This position is based at our Hyderabad office.
- A 15 years full-time education is required.

Qualification: 15 years full-time education

Posted 6 days ago

Apply

3.0 - 7.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Talend
- Design, develop, and document existing Talend ETL processes, technical architecture, data pipelines, and performance scaling, using tools to integrate Talend data and ensure data quality in a big data environment.

AWS / Snowflake
- Design, develop, and maintain data models using SQL and Snowflake / AWS Redshift-specific features.
- Collaborate with stakeholders to understand the requirements of the data warehouse.
- Implement data security, privacy, and compliance measures.
- Perform data analysis, troubleshoot data issues, and provide technical support to end users.
- Develop and maintain data warehouse and ETL processes, ensuring data quality and integrity.
- Stay current with new AWS/Snowflake services and features and recommend improvements to the existing architecture.
- Design and implement scalable, secure, and cost-effective cloud solutions using AWS / Snowflake services.
- Collaborate with cross-functional teams to understand requirements and provide technical guidance.

Posted 6 days ago

Apply

8.0 - 13.0 years

4 - 8 Lacs

Hyderabad

Work from Office

This role will be instrumental in building and maintaining robust, scalable, and reliable data pipelines using Confluent Kafka, ksqlDB, Kafka Connect, and Apache Flink. The ideal candidate will have a strong understanding of data streaming concepts, experience with real-time data processing, and a passion for building high-performance data solutions. This role requires excellent analytical skills, attention to detail, and the ability to work collaboratively in a fast-paced environment.

Essential Responsibilities
- Design and develop data pipelines for real-time and batch data ingestion and processing using Confluent Kafka, ksqlDB, Kafka Connect, and Apache Flink (see the sketch after this listing).
- Build and configure Kafka Connectors to ingest data from various sources (databases, APIs, message queues, etc.) into Kafka.
- Develop Flink applications for complex event processing, stream enrichment, and real-time analytics.
- Develop and optimize ksqlDB queries for real-time data transformations, aggregations, and filtering.
- Implement data quality checks and monitoring to ensure data accuracy and reliability throughout the pipeline.
- Monitor and troubleshoot data pipeline performance, identify bottlenecks, and implement optimizations.
- Automate data pipeline deployment, monitoring, and maintenance tasks.
- Stay up to date with the latest advancements in data streaming technologies and best practices.
- Contribute to the development of data engineering standards and best practices within the organization.
- Participate in code reviews and contribute to a collaborative and supportive team environment.
- Work closely with other architects and tech leads in India and the US, and create POCs and MVPs.
- Provide regular updates on tasks, status, and risks to the project manager.

The experience we are looking to add to our team

Required
- Bachelor's degree or higher from a reputed university.
- 8 to 10 years of total experience, with the majority related to ETL/ELT, big data, Kafka, etc.
- Proficiency in developing Flink applications for stream processing and real-time analytics.
- Strong understanding of data streaming concepts and architectures.
- Extensive experience with Confluent Kafka, including Kafka Brokers, Producers, Consumers, and Schema Registry.
- Hands-on experience with ksqlDB for real-time data transformations and stream processing.
- Experience with Kafka Connect and building custom connectors.
- Extensive experience in implementing large-scale data ingestion and curation solutions.
- Good hands-on experience in a big data technology stack with any cloud platform.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work independently and as part of a team.

Good to have
- Experience in Google Cloud.
- Healthcare industry experience.
- Experience in Agile.
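A minimal PyFlink Table API sketch of the kind of streaming aggregation described above, assuming an illustrative orders topic, broker address, and output topic, and that the Flink Kafka connector JAR is on the job's classpath; a ksqlDB persistent query could express the same windowed aggregation.

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

# Streaming TableEnvironment; the Kafka connector must be available to the job.
t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Source: a hypothetical "orders" topic (broker, topic, and schema are illustrative).
t_env.execute_sql("""
    CREATE TABLE orders (
        order_id STRING,
        amount DOUBLE,
        event_time TIMESTAMP(3),
        WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'orders',
        'properties.bootstrap.servers' = 'broker1:9092',
        'scan.startup.mode' = 'latest-offset',
        'format' = 'json'
    )
""")

# Sink: per-minute revenue written back to another Kafka topic.
t_env.execute_sql("""
    CREATE TABLE revenue_per_minute (
        window_start TIMESTAMP(3),
        total_amount DOUBLE
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'revenue-per-minute',
        'properties.bootstrap.servers' = 'broker1:9092',
        'format' = 'json'
    )
""")

# Continuous aggregation: tumbling one-minute windows over the order stream.
t_env.execute_sql("""
    INSERT INTO revenue_per_minute
    SELECT TUMBLE_START(event_time, INTERVAL '1' MINUTE), SUM(amount)
    FROM orders
    GROUP BY TUMBLE(event_time, INTERVAL '1' MINUTE)
""")
```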

Posted 6 days ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Hyderabad

Work from Office

Responsibilities
- Design, develop, and maintain ETL processes using Talend.
- Manage and optimize data pipelines on Amazon Redshift (see the loading sketch after this listing).
- Implement data transformation workflows using DBT (Data Build Tool).
- Write efficient, reusable, and reliable code in PySpark.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver effective solutions.
- Ensure data quality and integrity through rigorous testing and validation.
- Stay updated with the latest industry trends and technologies in data engineering.

Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Data Engineer or in a similar role.
- High proficiency in Talend.
- Strong experience with Amazon Redshift.
- Expertise in DBT and PySpark.
- Experience with data modeling, ETL processes, and data warehousing.
- Familiarity with cloud platforms and services.
- Excellent problem-solving skills and attention to detail.
- Strong communication and teamwork abilities.

Preferred Qualifications
- Experience with other data engineering tools and frameworks.
- Knowledge of machine learning frameworks and libraries.
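A minimal sketch of one common Redshift pipeline step referenced above: bulk-loading staged S3 data with COPY and running a basic quality check, using psycopg2 as the client. The cluster endpoint, credentials, IAM role, table, and S3 path are all illustrative placeholders; real values would come from configuration or a secrets manager.

```python
import psycopg2

# Illustrative connection details only.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="********",
)
conn.autocommit = True

COPY_SQL = """
    COPY staging.orders
    FROM 's3://example-bucket/landing/orders/2024-01-31/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS PARQUET;
"""

with conn.cursor() as cur:
    # Bulk-load the staged files into the staging table.
    cur.execute(COPY_SQL)

    # Simple quality gate before promoting the data downstream.
    cur.execute("SELECT COUNT(*) FROM staging.orders WHERE order_id IS NULL;")
    null_keys = cur.fetchone()[0]
    if null_keys:
        raise ValueError(f"{null_keys} rows loaded without an order_id")

conn.close()
```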

Posted 6 days ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Pune

Work from Office

Responsibilities
- Design, develop, and maintain ETL processes using Ab Initio and other ETL tools.
- Manage and optimize data pipelines on AWS.
- Write and maintain complex PL/SQL queries for data extraction, transformation, and loading.
- Provide Level 3 support for ETL processes, troubleshooting and resolving issues promptly.
- Collaborate with data architects, analysts, and other stakeholders to understand data requirements and deliver effective solutions.
- Ensure data quality and integrity through rigorous testing and validation.
- Stay updated with the latest industry trends and technologies in ETL and cloud computing.

Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Certification in Ab Initio.
- Proven experience with AWS and cloud-based data solutions.
- Strong proficiency in PL/SQL and other ETL tools.
- Experience in providing Level 3 support for ETL processes.
- Excellent problem-solving skills and attention to detail.
- Strong communication and teamwork abilities.

Preferred Qualifications
- Experience with other ETL tools such as Informatica, Talend, or DataStage.
- Knowledge of data warehousing concepts and best practices.
- Familiarity with scripting languages (e.g., Python, Shell scripting).

Posted 6 days ago

Apply

5.0 - 10.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Airflow with PySpark: Expertise in designing, developing, and deploying data pipelines using Apache Airflow. The focus is on creating, managing, and monitoring workflows, ensuring data quality, and collaborating with other data teams (see the sketch after this listing).
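A minimal sketch of an Airflow DAG that orchestrates a PySpark job of the kind described above; the DAG name, script path, connection id, and placeholder quality check are illustrative assumptions, and the Spark provider package (apache-airflow-providers-apache-spark) is assumed to be installed.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator


def check_row_count(**_):
    # Placeholder quality gate; a real check would query the target table
    # and fail the task if counts are off.
    print("row-count check passed")


with DAG(
    dag_id="daily_sales_pipeline",          # illustrative DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:

    transform = SparkSubmitOperator(
        task_id="transform_sales",
        application="/opt/jobs/transform_sales.py",   # illustrative PySpark script
        conn_id="spark_default",
        conf={"spark.executor.memory": "4g"},
    )

    quality_check = PythonOperator(
        task_id="quality_check",
        python_callable=check_row_count,
    )

    transform >> quality_check
```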

Posted 6 days ago

Apply

4.0 - 9.0 years

4 - 8 Lacs

Gurugram

Work from Office

Data Engineer
Location: PAN India | Work mode: Hybrid | Work timing: 2 PM to 11 PM

Primary Skill: Data Engineer
- Experience in data engineering, with a proven focus on data ingestion and extraction using Python/PySpark.
- Extensive AWS experience is mandatory, with proficiency in Glue, Lambda, SQS, SNS, AWS IAM, AWS Step Functions, S3, and RDS (Oracle, Aurora Postgres).
- 4+ years of experience working with both relational and non-relational/NoSQL databases is required.
- Strong SQL experience is necessary, demonstrating the ability to write complex queries from scratch. Experience in Redshift is required along with other SQL database experience.
- Strong scripting experience with the ability to build intricate data pipelines using AWS serverless architecture; understanding of building an end-to-end data pipeline.
- Strong understanding of Kinesis, Kafka, and CDK. Experience with Kafka and ECS is also required.
- Strong understanding of data concepts related to data warehousing, business intelligence (BI), data security, data quality, and data profiling is required.
- Experience in Node.js and CDK.

Responsibilities
- Lead the architectural design and development of a scalable, reliable, and flexible metadata-driven data ingestion and extraction framework on AWS using Python/PySpark.
- Design and implement a customizable data processing framework using Python/PySpark, capable of handling diverse scenarios and evolving data processing requirements.
- Implement data pipelines for data ingestion, transformation, and extraction leveraging AWS cloud services (see the sketch after this listing).
- Seamlessly integrate a variety of AWS services - including S3, Glue, Kafka, Lambda, SQL, SNS, Athena, EC2, RDS (Oracle, Postgres, MySQL), and AWS Crawler - to construct a highly scalable and reliable data ingestion and extraction pipeline.
- Facilitate configuration and extensibility of the framework to adapt to evolving data needs and processing scenarios.
- Develop and maintain rigorous data quality checks and validation processes to safeguard the integrity of ingested data.
- Implement robust error handling, logging, monitoring, and alerting mechanisms to ensure the reliability of the entire data pipeline.

Qualifications - Must Have
- Over 6 years of hands-on experience in data engineering, with a proven focus on data ingestion and extraction using Python/PySpark.
- Extensive AWS experience is mandatory, with proficiency in Glue, Lambda, SQS, SNS, AWS IAM, AWS Step Functions, S3, and RDS (Oracle, Aurora Postgres).
- 4+ years of experience working with both relational and non-relational/NoSQL databases.
- Strong SQL experience, demonstrating the ability to write complex queries from scratch; strong working experience in Redshift along with other SQL databases.
- Strong scripting experience with the ability to build intricate data pipelines using AWS serverless architecture; complete understanding of building an end-to-end data pipeline.

Nice to have
- Strong understanding of Kinesis, Kafka, and CDK.
- A strong understanding of data concepts related to data warehousing, business intelligence (BI), data security, data quality, and data profiling.
- Experience in Node.js and CDK.
- Experience with Kafka and ECS.
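A minimal sketch of one serverless ingestion step of the kind described above: an AWS Lambda handler that reacts to an S3 object-created event and starts a Glue job via boto3. The Glue job name, bucket, and argument names are illustrative; a production framework would add validation, retries, and alerting.

```python
import json
import boto3

glue = boto3.client("glue")

GLUE_JOB_NAME = "ingest-landing-to-curated"   # hypothetical Glue job name


def handler(event, context):
    """Triggered by an S3 object-created event; starts the ingestion Glue job."""
    runs = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Pass the newly arrived object to the Glue job as job arguments.
        response = glue.start_job_run(
            JobName=GLUE_JOB_NAME,
            Arguments={
                "--source_bucket": bucket,
                "--source_key": key,
            },
        )
        runs.append(response["JobRunId"])

    return {"statusCode": 200, "body": json.dumps({"job_runs": runs})}
```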

Posted 6 days ago

Apply

15.0 - 20.0 years

10 - 14 Lacs

Gurugram

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: SAP PP Production Planning & Control Discrete Industries
Good-to-have skills: NA
Minimum 12 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for managing the team and ensuring successful project delivery. Your typical day will involve collaborating with multiple teams, making key decisions, and providing solutions to problems that apply across multiple teams. With your expertise and leadership, you will contribute to the success of the project and drive innovation in application development.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Lead the effort to design, build, and configure applications.
- Act as the primary point of contact for the project.
- Manage the team and ensure successful project delivery.
- Collaborate with multiple teams to make key decisions.

Professional & Technical Skills:
- Must-have skills: Proficiency in SAP PP Production Planning & Control Discrete Industries.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.

Additional Information:
- The candidate should have a minimum of 12 years of experience in SAP PP Production Planning & Control Discrete Industries.
- This position is based at our Gurugram office.
- A 15 years full-time education is required.

Qualification: 15 years full-time education

Posted 6 days ago

Apply

8.0 - 13.0 years

5 - 10 Lacs

Pune

Work from Office

Data Engineer

Position Summary
The Data Engineer is responsible for building and maintaining data pipelines, ensuring the smooth operation of data systems, and optimizing workflows to meet business requirements. This role will support data integration and processing for various applications.

Minimum Qualifications
- 6 years of overall IT experience, with a minimum of 4 years of work experience in the tech skills below.

Tech Skills
- Proficient in Python scripting and PySpark for data processing tasks.
- Strong SQL capabilities, with hands-on experience managing big data using ETL tools like Informatica.
- Experience with the AWS cloud platform and its data services, including S3, Redshift, Lambda, EMR, Airflow, Postgres, SNS, and EventBridge.
- Skilled in Bash shell scripting.
- Understanding of data lakehouse architecture, particularly with the Iceberg format, is a plus.

Preferred
- Experience with Kafka and MuleSoft API.
- Understanding of healthcare data systems is a plus.
- Experience in Agile methodologies.
- Strong analytical and problem-solving skills.
- Effective communication and teamwork abilities.

Responsibilities
- Develop and maintain data pipelines and ETL processes to manage large-scale datasets.
- Collaborate on designing and testing data architectures to align with business needs.
- Implement and optimize data models for efficient querying and reporting.
- Assist in the development and maintenance of data quality checks and monitoring processes.
- Support the creation of data solutions that enable analytical capabilities.
- Contribute to aligning data architecture with overall organizational solutions.

Posted 6 days ago

Apply

5.0 - 10.0 years

11 - 16 Lacs

Bengaluru

Work from Office

Responsible for driving the strategy, development, and deployment of IBM's MDM solutions. This role involves translating business needs into product requirements, leading cross-functional teams, and ensuring the success of MDM products within IBM's portfolio.

Key Responsibilities:
- Strategy and Vision: Defining the MDM product roadmap, identifying market opportunities, and aligning product strategy with overall business objectives.
- Requirements Gathering and Analysis: Working with stakeholders to understand customer needs, conducting market research, and translating these into detailed product requirements.
- Product Development: Leading the development process, including prioritizing features, managing the product backlog, and collaborating with engineering and design teams.
- Product Launch and Support: Overseeing the product launch process, providing ongoing product support, and ensuring customer satisfaction.
- Market Analysis and Competitive Intelligence: Staying up to date on market trends, competitor offerings, and customer feedback to identify opportunities for product innovation.
- Cross-functional Collaboration: Working effectively with various teams, including engineering, marketing, sales, and product design, to ensure product success.
- Data Quality and Governance: Understanding and promoting best practices for data governance and quality within the context of MDM.
- Technology: Staying informed about new technologies related to data management and artificial intelligence.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- 5+ years in software product management, with a focus on using data to inform prioritization and decision-making.
- 5+ years of experience engaging in user research, testing, and optimizing workflows to enhance user experiences.
- 5+ years in agile product delivery, including writing detailed requirements, managing backlogs, and performing user acceptance testing (UAT).
- 5+ years in technical roles, with a good understanding of AI, foundation models, and related technologies.
- 5+ years demonstrating exceptional communication and problem-solving skills, with the ability to bridge technical and business perspectives.

Preferred technical and professional experience
- Background with IBM Master Data Management, data quality, or governance software solutions.

Posted 6 days ago

Apply

12.0 - 17.0 years

14 - 19 Lacs

Hyderabad

Work from Office

Data Tester

Highlights:
- 5+ years of experience in data testing.
- ETL Testing: Validating the extraction, transformation, and loading (ETL) of data from various sources.
- Data Validation: Ensuring the accuracy, completeness, and integrity of data in databases and data warehouses.
- SQL Proficiency: Writing and executing SQL queries to fetch and analyze data.
- Data Modeling: Understanding data models, data mappings, and architectural documentation.
- Test Case Design: Creating test cases and test data, and executing test plans.
- Troubleshooting: Identifying and resolving data-related issues.
- Dashboard Testing: Validating dashboards for accuracy, functionality, and user experience.
- Collaboration: Working with developers and other stakeholders to ensure data quality and functionality.

Primary Responsibilities - Dashboard Testing Components:
- Functional Testing: Simulating user interactions and clicks to ensure dashboards are functioning correctly.
- Performance Testing: Evaluating dashboard responsiveness and load times.
- Data Quality Testing: Verifying that the data displayed on dashboards is accurate, complete, and consistent (see the reconciliation sketch after this listing).
- Usability Testing: Assessing the ease of use and navigation of dashboards.
- Data Visualization Testing: Ensuring charts, graphs, and other visualizations are accurate and present data effectively.
- Security Testing: Verifying that dashboards are secure and protect sensitive data.

Tools and Technologies:
- SQL: Used for querying and validating data; hands-on Snowflake experience.
- ETL Tools: Talend, Informatica, or Azure Data Factory, used for data extraction, transformation, and loading.
- Data Visualization Tools: Tableau, Power BI, or other BI tools used for creating and testing dashboards.
- Testing Frameworks: Frameworks like Selenium or JUnit used for automating testing tasks.
- Cloud Platforms: AWS platforms used for data storage and processing.
- Healthcare domain knowledge is a plus.

Secondary Skills: Automation frameworks, life science domain experience, UI testing, API testing, any other ETL tools.
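A minimal sketch of the kind of source-to-target reconciliation test described above, written as pytest cases; the in-memory sqlite3 connections stand in for the real source system and warehouse (e.g. Snowflake), and the table and column names are illustrative.

```python
import sqlite3
import pytest


@pytest.fixture
def source_conn():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (order_id TEXT, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)",
                     [("o1", 10.0), ("o2", 25.5), ("o3", 7.25)])
    return conn


@pytest.fixture
def target_conn():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE dw_orders (order_id TEXT, amount REAL)")
    conn.executemany("INSERT INTO dw_orders VALUES (?, ?)",
                     [("o1", 10.0), ("o2", 25.5), ("o3", 7.25)])
    return conn


def test_row_counts_match(source_conn, target_conn):
    # Reconcile record counts between source and warehouse.
    src = source_conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
    tgt = target_conn.execute("SELECT COUNT(*) FROM dw_orders").fetchone()[0]
    assert src == tgt, f"row count mismatch: source={src}, target={tgt}"


def test_amount_totals_match(source_conn, target_conn):
    # Reconcile a key measure to catch truncation or transformation errors.
    src = source_conn.execute("SELECT ROUND(SUM(amount), 2) FROM orders").fetchone()[0]
    tgt = target_conn.execute("SELECT ROUND(SUM(amount), 2) FROM dw_orders").fetchone()[0]
    assert src == tgt, f"amount total mismatch: source={src}, target={tgt}"
```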

Posted 6 days ago

Apply

7.0 - 12.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Data Modeller

We are seeking a skilled Data Modeller to join our Corporate Banking team. The ideal candidate will have a strong background in creating data models for various banking services, including Current Account Savings Account (CASA), Loans, and Credit Services. This role involves collaborating with the Data Architect to define data model structures within a data mesh environment and coordinating with multiple departments to ensure cohesive data management practices.

Data Modelling:
- Design and develop data models for CASA, Loan, and Credit Services, ensuring they meet business requirements and compliance standards.
- Create conceptual, logical, and physical data models that support the bank's strategic objectives.
- Ensure data models are optimized for performance, security, and scalability to support business operations and analytics.

Collaboration with Data Architect:
- Work closely with the Data Architect to establish the overall data architecture strategy and framework.
- Contribute to the definition of data model structures within a data mesh environment.

Data Quality and Governance:
- Ensure data quality and integrity in the data models by implementing best practices in data governance.
- Assist in the establishment of data management policies and standards.
- Conduct regular data audits and reviews to ensure data accuracy and consistency across systems.

Tools and Technologies:
- Data Modelling Tools: ERwin, IBM InfoSphere Data Architect, Oracle Data Modeler, Microsoft Visio, or similar tools.
- Databases: SQL, Oracle, MySQL, MS SQL Server, PostgreSQL, Neo4j (graph).
- Data Warehousing Technologies: Snowflake, Teradata, or similar.
- ETL Tools: Informatica, Talend, Apache NiFi, Microsoft SSIS, or similar.
- Big Data Technologies: Hadoop, Spark (optional but preferred).
- Cloud Technologies: Experience with data modelling on cloud platforms, e.g. Microsoft Azure (Synapse, Data Factory).

Posted 6 days ago

Apply

6.0 - 11.0 years

8 - 13 Lacs

Bengaluru

Work from Office

Job Title: Data Analyst / Technical Business Analyst

Job Summary
We are looking for a skilled Data Analyst to support a large-scale data migration initiative within the banking and insurance domain. The role involves analyzing, validating, and transforming data from legacy systems to modern platforms, ensuring regulatory compliance, data integrity, and business continuity.

Key Responsibilities
- Collaborate with business stakeholders, data architects, and IT teams to gather and understand data migration requirements.
- Analyze legacy banking and insurance systems (e.g., core banking, policy admin, claims, CRM) to identify data structures and dependencies.
- Work with large-scale datasets and understand big data architectures (e.g., Hadoop, Spark, Hive) to support scalable data migration and transformation.
- Perform data profiling, cleansing, and transformation using SQL and ETL tools, with the ability to understand and write complex SQL queries and interpret the logic implemented in ETL workflows.
- Develop and maintain data mapping documents and transformation logic specific to financial and insurance data (e.g., customer KYC, transactions, policies, claims).
- Validate migrated data against business rules, regulatory standards, and reconciliation reports.
- Support UAT by preparing test cases and validating migrated data with business users.
- Ensure data privacy and security compliance throughout the migration process.
- Document issues, risks, and resolutions related to data quality and migration.

Required Skills & Qualifications
- Bachelor's degree in Computer Science, Information Systems, Finance, or a related field.
- 5+ years of experience in data analysis or data migration projects in banking or insurance.
- Strong SQL skills and experience with data profiling and cleansing.
- Familiarity with ETL tools (e.g., Informatica, Talend, SSIS) and data visualization tools (e.g., Power BI, Tableau).
- Experience working with big data platforms (e.g., Hadoop, Spark, Hive) and handling large volumes of structured and unstructured data.
- Understanding of banking and insurance data domains (e.g., customer data, transactions, policies, claims, underwriting).
- Knowledge of regulatory and compliance requirements (e.g., AML, KYC, GDPR, IRDAI guidelines).
- Excellent analytical, documentation, and communication skills.

Preferred Qualifications
- Experience with core banking systems (e.g., Finacle, Flexcube) or insurance platforms.
- Exposure to cloud data platforms (e.g., AWS, Azure, GCP).
- Experience working in Agile/Scrum environments.
- Certification in Business Analysis (e.g., CBAP, CCBA) or Data Analytics.

Posted 6 days ago

Apply