
3436 Data Quality Jobs - Page 11

Set up a Job Alert
JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

2.0 - 3.0 years

4 - 5 Lacs

Hyderabad

Work from Office

We are seeking a detail-oriented Data Annotator to join the Oracle Analytics team. As a member of our data operations team, you will be responsible for creating and annotating natural language and structured data to facilitate the development of new AI solutions for Oracle's Analytics product portfolio. You will work with applied science and data science teams to design and evaluate new AI capabilities, test systems ahead of deployment, and support customers with implementing and customizing AI solutions for industry verticals or specific use cases. The ideal candidate will have in-depth familiarity with Oracle's analytics products (Fusion Data Intelligence, Oracle Analytics Cloud), SQL, Python, and data visualization, as well as subject matter experience in ERP, Finance, Supply Chain Management, or Human Capital Management. Knowledge of data collection management practices and quality control processes is a strong plus.

Career Level - IC2

Key Responsibilities:
- Collaborate with applied scientists and product managers to gather data requirements.
- Help develop and improve data annotation guidelines and quality control mechanisms (a hedged agreement-check sketch follows this listing).
- Produce and deliver data meeting predefined quality targets and timelines.
- Help test and evaluate AI solutions.
- Partner with customers and solution architects to improve usability of AI products.

Required Qualifications:
- Bachelor's degree in Business Administration, Information Systems, Data Science, Linguistics, or a related field.
- 2-3 years of industry experience in data operations, data annotation, business analytics, or a related role.
- Experience with Oracle Analytics tools and dashboards, SQL, and general task management tools (e.g., Jira).
- Excellent communication and collaboration abilities.
- Attention to detail, sense of ownership, and a customer-focused mindset.
- Software engineering skills (SQL, Python).

Preferred Qualifications:
- Knowledge of machine learning concepts and how annotated data supports model development.
- Previous experience with large-scale data collection/annotation and data quality control processes.
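Quality control for annotation work is often quantified with inter-annotator agreement. A minimal sketch, assuming two annotators labeled the same sample; the metric choice, labels, and threshold are illustrative assumptions, not taken from the posting:

    # Minimal sketch: inter-annotator agreement as an annotation QC gate.
    # Labels and the 0.6 threshold are hypothetical.
    from sklearn.metrics import cohen_kappa_score

    annotator_a = ["revenue", "expense", "revenue", "headcount", "expense"]
    annotator_b = ["revenue", "expense", "headcount", "headcount", "expense"]

    kappa = cohen_kappa_score(annotator_a, annotator_b)
    print(f"Cohen's kappa: {kappa:.2f}")

    # Low agreement usually signals ambiguous guidelines rather than bad annotators.
    if kappa < 0.6:
        print("Low agreement: revise annotation guidelines before scaling labeling.")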

Posted 5 days ago

Apply

2.0 - 7.0 years

4 - 9 Lacs

Bengaluru

Work from Office

Product Owner Quality Management (MDM Specific)
BSH Household Appliances Manufacturing Private Limited | Full time | Bangalore - Whitefield

Your responsibilities:
- Responsible for the further development and management of the "Master Data Management (MDM) Data Quality" area.
- Collecting, assessing, and prioritizing requirements in close collaboration with business units and IT.
- Leading the implementation of data quality initiatives and ensuring high data quality across the organization.
- Reporting, analyzing, and visualizing data quality metrics using tools such as Power BI (a hedged metrics sketch follows this listing).
- Data integration and creation of dashboards.

Tools and technologies: Microsoft Power BI, backend development, Denodo, SAP R/3, S/4HANA, data integration.

Your profile:
- Excellent stakeholder management and moderation skills.
- Structured, solution-oriented, and independent working style.
- Demonstrated experience working in an Agile environment.
- Proven ability to adapt to changing priorities and collaborate effectively with cross-functional teams.
- 6 or more years of experience designing, developing, and maintaining interactive Power BI dashboards, and at least 2 years as a Product Owner.
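Data quality reporting of the kind described here typically starts from a handful of rule-based metrics. A minimal sketch, assuming a pandas DataFrame of master data records; the column names are hypothetical, not BSH's actual model:

    # Minimal sketch: completeness and uniqueness metrics for an MDM data
    # quality dashboard. Columns are hypothetical.
    import pandas as pd

    records = pd.DataFrame({
        "material_id": ["M001", "M002", "M002", "M004"],
        "description": ["Oven door", None, "Hinge", "Gasket"],
        "plant": ["BLR", "BLR", "MUC", None],
    })

    metrics = {
        "completeness_description": records["description"].notna().mean(),
        "completeness_plant": records["plant"].notna().mean(),
        "uniqueness_material_id": records["material_id"].nunique() / len(records),
    }

    # Per-rule scores like these are what a Power BI dashboard would track over time.
    for rule, score in metrics.items():
        print(f"{rule}: {score:.0%}")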

Posted 5 days ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Mumbai

Work from Office

Title: Data Specialist Team Lead, Level II, Mumbai
Location: Vashi, Navi Mumbai

Company: Morningstar is a leading provider of independent investment research in North America, Europe, Australia, and Asia. We offer a wide variety of products and solutions that serve market participants of all kinds, including individual and institutional investors in public and private capital markets, financial advisors, asset managers, retirement plan providers and sponsors, and issuers of securities. Morningstar India has been a Great Place to Work-certified company for the past eight consecutive years.

Role: As the Data Level II Team Lead, you will manage a team providing Level II support in the APAC shift for complex data cases escalated from the Level I support teams. You will work closely with Client Service Support and Data Operations to ensure resolution of these issues. Your team will be required to research identified issues, ensure cycle time requirements are met, and communicate outcomes to all appropriate team members. Timely resolution requires coordinating across various data teams. By managing this Level II team, you will greatly influence the client experience in a positive way. The position is based in our Mumbai office. You will report directly to the Senior Manager.

Shift: AU

Responsibilities:
- Lead the team to achieve operational excellence, process efficiency, and data quality control.
- Support process improvement by applying Lean Six Sigma tools and implementing process automation.
- Understand priorities and organize team workload based on business requirements.
- Provide the team with the requisite knowledge of Morningstar's business, products, and the finance industry, and create a learning culture in the team.
- Conduct performance reviews of each team member and set career development paths and opportunities.
- Work with the team on a queue of data questions within the prescribed cycle times and track ongoing progress of cases via CRM tools.
- Identify process defects (for example, the amount of time required to respond to data cases) to help improve overall responsiveness to Level II cases.
- Contribute to ongoing education of Morningstar Client Service teams on common issues, which may involve data processing and calculations, to reduce Level II ticket volumes.

Qualifications:
- Master's/Bachelor's degree in finance/business with an exceptional academic record.
- 3-5 years of professional work experience at various levels of seniority; prior team management experience would be a plus.
- Understanding of the financial industry; curious and passionate about investment data.
- Excellent writing, communication, problem-solving, organizational, analytical, and interpersonal skills.
- Commitment to deliver honest, high-quality, and personal service to our clients.
- Experience in process improvement and project management is preferred.
- Proficiency in the Microsoft suite, especially Excel; knowledge of SQL and Power BI will be an advantage.

Posted 5 days ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Hyderabad

Work from Office

Description & Requirements

Summary: As an AI/ML Developer, you'll play a pivotal role in creating and delivering cutting-edge enterprise applications and automations using Infor's AI, RPA, and OS platform technology. Your mission will be to identify innovative use cases, develop proof of concepts (PoCs), and deliver enterprise automation solutions that elevate workforce productivity and improve business performance for our customers.

Key Responsibilities:
- Use Case Identification: Dive deep into customer requirements and business challenges. Identify innovative use cases that can be addressed through AI/ML solutions.
- Data Insights: Perform exploratory data analysis on large and complex datasets. Assess data quality, extract insights, and share findings.
- Data Preparation: Gather relevant datasets for training and testing. Clean, preprocess, and augment data to ensure suitability for AI tasks.
- Model Development: Train and fine-tune AI/ML models. Evaluate performance using appropriate metrics and benchmarks, optimizing for efficiency (a hedged sketch follows this listing).
- Integration and Deployment: Collaborate with software engineers and developers to seamlessly integrate AI/ML models into enterprise systems and applications. Handle production deployment challenges.
- Continuous Improvement: Evaluate and enhance the performance and capabilities of deployed AI products. Monitor user feedback and iterate on models and algorithms to address limitations and enhance user experience.
- Proof of Concepts (PoCs): Develop PoCs to validate the feasibility and effectiveness of proposed solutions. Showcase the art of the possible to our clients.
- Collaboration with Development Teams: Work closely with development teams on new use cases.
- Best Practices and Requirements: Collaborate with team members to determine best practices and requirements.
- Innovation: Contribute to our efforts in enterprise automation and cloud innovation.

Key Requirements:
- Experience: A minimum of 3 years of hands-on experience implementing AI/ML models in enterprise systems.
- AI/ML Concepts: In-depth understanding of supervised and unsupervised learning, reinforcement learning, deep learning, and probabilistic models.
- Programming Languages: Proficiency in Python or R, along with querying languages like SQL.
- Data Handling: Ability to work with large datasets, perform data preprocessing, and wrangle data effectively.
- Cloud Infrastructure: Experience with AWS SageMaker or Azure ML for implementing ML solutions is highly preferred.
- Frameworks and Libraries: Familiarity with scikit-learn, Keras, TensorFlow, PyTorch, or NLTK is a plus.
- Analytical Skills: Strong critical thinking abilities to identify problems, formulate hypotheses, and design experiments.
- Business Process Understanding: Good understanding of business processes and how they can be automated.
- Domain Expertise: Familiarity with Demand Forecasting, Anomaly Detection, Pricing, Recommendation, or Analytics solutions.
- Global Project Experience: Proven track record of working with global customers on multiple projects.
- Customer Interaction: Experience facing customers and understanding their needs.
- Communication Skills: Excellent verbal and written communication skills.
- Analytical Mindset: Strong analytical and problem-solving skills.
- Collaboration: Ability to work independently and collaboratively.
- Educational Background: Bachelor's or Master's degree in Computer Science, Mathematics, Statistics, or a related field.
- Specialization: Coursework or specialization in AI, ML, Statistics & Probability, Deep Learning, Computer Vision, or NLP/NLU is advantageous.
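The train-and-evaluate loop described above can be pictured with a minimal scikit-learn sketch; the synthetic dataset, model choice, and metric are assumptions for illustration, not Infor's stack:

    # Minimal sketch of a model development loop: train, then evaluate with a
    # metric suited to the business problem. Data and model are illustrative.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)

    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)

    # F1 balances precision and recall; pick the metric the use case demands.
    print(f"Test F1: {f1_score(y_test, model.predict(X_test)):.3f}")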

Posted 5 days ago

Apply

5.0 - 7.0 years

7 - 9 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

The primary responsibility of this role is delivery of CIB credit risk reporting and MI, working directly with and supporting the group CIB credit risk team, which rolls up to the Global Head of Reporting, Risk & CFCR. The risk reports are used by risk managers, Group Risk Framework Owners, committees and senior management, regulators, and other internal stakeholders. The team is involved in every stage of the reporting process:
- Requirement gathering and collaboration with group and country teams to identify metrics for reporting.
- Evaluating data availability and proactively managing data quality requirements (a hedged check sketch follows this listing).
- Tactical solutions to bridge gaps while implementing strategic solutions, from testing through go-live.
- Partnering with Data Strategy/Technology for strategic sourcing and self-service models.
- Insights and commentaries; enhancements to the reports.
- Risk identification and controls review for the process; BCBS 239 requirements.

Candidate profile:
- Experience with multinational banks in credit risk management and underwriting.
- Managed specific industry profiles such as Oil & Gas, Shipping, and Commodities.
- Hands-on Python skills.
- Awareness of the trading environment, market moves, and their potential impact on exposures.
- Excellent verbal and written communication skills; able to work in a global team.
- Project management experience and documentation skills.
- Able to implement strategic ideas and enhance productivity and efficiency levels.
- Significant experience in major regulatory programs in the risk/finance/banking industry (BCBS, IMM, CVA, etc.).
- Postgraduate/professional qualification with 12+ years of experience in the risk domain.
- Risk certifications from established institutions are recommended.
- Technical certification in advanced MS Office tools, Python, or Tableau will be preferred.
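Proactive data quality management in risk reporting often reduces to automated rule checks that gate a report run. A minimal sketch, assuming a pandas DataFrame of exposures; the rules, column names, and thresholds are illustrative assumptions:

    # Minimal sketch: pre-publication data quality gates for a credit risk
    # report. Columns and rules are hypothetical.
    import pandas as pd

    exposures = pd.DataFrame({
        "counterparty": ["A", "B", "C"],
        "exposure_usd": [1_200_000, -50_000, None],
        "industry": ["Oil & Gas", "Shipping", None],
    })

    issues = []
    if exposures["exposure_usd"].isna().any():
        issues.append("Missing exposure values")
    if (exposures["exposure_usd"].dropna() < 0).any():
        issues.append("Negative exposures; check sign conventions")
    if exposures["industry"].isna().any():
        issues.append("Unmapped industry classifications")

    # Under BCBS 239-style controls, a failed check blocks the report run.
    if issues:
        raise ValueError("Data quality checks failed: " + "; ".join(issues))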

Posted 5 days ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Hyderabad

Work from Office

Summary: The GFMD Process Manager role provides quality services to Novartis Group entities and is responsible for executing the maintenance of master data used in Novartis. The GFMD Process Manager is responsible for daily operations, high quality of service, and timely processing of master data changes for Customer Master Data and Finance and Controlling Master Data. This role leads and coordinates a team of Master Data Specialists and Senior Master Data Specialists, supervises the end-to-end process within NOCC, and ensures high quality and effectiveness of data entries. The associate ensures reporting and analysis are in place and managed in accordance with defined data standards and operating procedures. This includes ensuring all necessary master data is available, accurate, and compliant, and providing regular and timely reporting to support FRA, enabling improved performance, compliance, and cost reduction.

About the Role

Key responsibilities:
- Oversee and ensure the processing of all master data changes within the system of record (e.g., SAP), ensuring creation and maintenance of Customer and Finance Master Data entities, attributes, and hierarchies (Customer Master Data; Finance and Controlling Master Data including Intercompany, Internal Orders, WBS, and Fixed Assets), and any other data entity defined by the Novartis Finance Core GFMD cycle.
- Onboard new team members, assign a buddy to each new joiner, and monitor onboarding progress. Support the manager with the hiring process.
- Act as SPOC for all process-relevant topics. Ensure the coordination of Country FRA Connects.
- Prepare and coordinate KPIs and other analyses. Closely monitor process trends and deviations.
- Collaborate closely with other FRA and GFMD processes, especially GPO and Governance. Handle second-level escalations.
- Monitor risk on an ongoing basis, highlighting issues to leaders and FC&C within GFMD.
- Maintain an efficient back-up system and up-to-date process documentation for consistent onboarding.
- Oversee and ensure correct and complete execution of User Acceptance Testing for any tool related to Finance and Controlling and Customer Master Data entities.
- Ensure applicable policies, processes, and procedures are fully documented and consistently applied throughout the master data management lifecycle. Maintain process documentation for master data objects and share process and technical knowledge with team members, locally and globally.
- Ensure that guidelines, enhancements to existing procedures, and escalation paths are properly followed for master data management activities.
- Deliver quality and productivity targets and contribute to the achievement of master data performance and quality standards per defined SLAs.
- Interact proactively with all functional areas of the NOCC to ensure appropriate master data management activities are accomplished, e.g., month-end closing with relevant exceptions only.
- Proactively challenge the status quo and make fact-based recommendations to drive continuous improvement.
- Escalate data issues (process and/or technical) and conduct data impact assessments to ensure data quality is maintained to agreed standards. Work with technical teams to resolve any technical issue that affects master data integrity and processes.
- Ensure application of agreed Novartis control procedures and actively support delivery of SOX certification. Promote a strong control environment, follow up on audit issues, and support the implementation of corrective measures.
- Ensure talent development for team members. Actively support the NOCC working style by being inclusive, proactive, respectful, and results-driven, in alignment with Novartis values and behaviors.
- Perform any other duties reasonably required in line with the accountabilities of the position.

Essential Requirements:
- University degree in Business Administration, Economics, Finance, or IT-related fields.
- 5+ years of experience with good functional exposure to finance processes (accounting, reporting, tax, treasury, forecasting) from a finance master data management perspective.
- Knowledge of master data concepts and master data application setup. SAP knowledge: experience in SAP ECC and Business Warehouse; familiarity with SAP MDG. Conceptual thinking and the ability to align specific functional requirements with the overall MDM strategy.
- Demonstrated ability to work effectively in a multinational organization.
- Previous experience working in projects to drive performance improvements.
- Good track record of people management.

Benefits and Rewards: Read our handbook to learn about all the ways we'll help you thrive personally and professionally.

Posted 5 days ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office

We are seeking a highly skilled Senior Data Engineer/Developer with 5+ years of experience to join our talented team in Bangalore. In this role, you will be responsible for designing, implementing, and optimizing data pipelines, ETL processes, and data integration solutions using Python, Spark, SQL, Snowflake, dbt, and other relevant technologies. Additionally, you will bring strong domain expertise in operations organizations, with a focus on supply chain and manufacturing functions. If you're a seasoned data engineer with a proven track record of delivering impactful data solutions in operations contexts, we want to hear from you.

Responsibilities:
- Lead the design, development, and optimization of data pipelines, ETL processes, and data integration solutions using Python, Spark, SQL, Snowflake, dbt, and other relevant technologies.
- Apply strong domain expertise in operations organizations, particularly in functions like supply chain and manufacturing, to understand data requirements and deliver tailored solutions.
- Utilize big data processing frameworks such as Apache Spark to process and analyze large volumes of operational data efficiently.
- Implement data transformations, aggregations, and business logic to support analytics, reporting, and operational decision-making (a hedged PySpark sketch follows this listing).
- Leverage cloud-based data platforms such as Snowflake to store and manage structured and semi-structured operational data at scale.
- Utilize dbt (Data Build Tool) for data modeling, transformation, and documentation to ensure data consistency, quality, and integrity.
- Monitor and optimize data pipelines and ETL processes for performance, scalability, and reliability in operations contexts.
- Conduct data profiling, cleansing, and validation to ensure data quality and integrity across different operational data sets.
- Collaborate closely with cross-functional teams, including operations stakeholders, data scientists, and business analysts, to understand operational challenges and deliver actionable insights.
- Stay updated on emerging technologies and best practices in data engineering and operations management, contributing to continuous improvement and innovation within the organization.

All listed requirements are deemed essential functions of this position; however, business conditions may require reasonable accommodations for additional tasks and responsibilities.

Preferred Experience/Education/Skills:
- Bachelor's degree in Computer Science, Engineering, Operations Management, or a related field.
- 5+ years of experience in data engineering, with proficiency in Python, Spark, SQL, Snowflake, dbt, and other relevant technologies.
- Strong domain expertise in operations organizations, particularly in functions like supply chain and manufacturing.
- Strong domain expertise in life sciences manufacturing equipment, with a deep understanding of industry-specific challenges, processes, and technologies.
- Experience with big data processing frameworks such as Apache Spark and cloud-based data platforms such as Snowflake.
- Hands-on experience with data modeling, ETL development, and data integration in operations contexts.
- Familiarity with dbt (Data Build Tool) for managing data transformation and modeling workflows.
- Familiarity with reporting and visualization tools such as Tableau and Power BI.
- Good understanding of advanced data engineering and data science practices and technologies such as PySpark, SageMaker, Cloudera, and MLflow.
- Experience with SAP, SAP HANA, and Teamcenter applications is a plus.
- Excellent problem-solving skills, analytical thinking, and attention to detail.
- Strong communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams and operations stakeholders.
- Eagerness to learn and adapt to new technologies and tools in a fast-paced environment.

Key skills: PySpark, SQL
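The transformation-and-aggregation work described above can be pictured with a short PySpark sketch; the supply-chain table and column names are assumptions for illustration:

    # Minimal PySpark sketch: aggregating operational data to support
    # reporting. Table and columns are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("ops-aggregation").getOrCreate()

    orders = spark.createDataFrame(
        [("P-100", "PlantA", 120, 5.0), ("P-100", "PlantB", 80, 5.5),
         ("P-200", "PlantA", 40, 12.0)],
        ["part_id", "plant", "qty", "unit_cost"],
    )

    # Business logic: total spend per part across plants, highest spend first.
    spend = (orders
             .withColumn("spend", F.col("qty") * F.col("unit_cost"))
             .groupBy("part_id")
             .agg(F.sum("spend").alias("total_spend"),
                  F.sum("qty").alias("total_qty"))
             .orderBy(F.desc("total_spend")))

    spend.show()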

Posted 5 days ago

Apply

10.0 - 12.0 years

35 - 40 Lacs

Hyderabad

Work from Office

Job Title: Manager - Data Analyst, Marketing
Department: Marketing
Location: Hyderabad, Corporate
Experience: 10-12 Years
Reporting To: Head of Marketing / Chief Marketing Officer
Employment Type: Full-Time

Role Overview: We are looking for a highly experienced and strategic Manager - Data Analyst to join our Marketing team. This role requires a data leader who can go beyond reporting: someone who can transform complex data into strategic marketing insights, influence decision-making at the leadership level, and enhance performance across digital, offline, and integrated campaigns.

Key Responsibilities:
- Lead the design, development, and automation of marketing dashboards and performance reports.
- Analyze customer behavior, channel performance, campaign effectiveness, lead funnel, and ROI across all marketing activities.
- Partner with senior marketing leadership to define KPIs and drive insights for brand, digital, content, and CRM teams.
- Develop models for segmentation, targeting, predictive analytics, and marketing mix optimization.
- Build scalable data processes and reporting frameworks aligned to business growth and campaign scale.
- Guide experimentation and A/B testing frameworks; evaluate marketing hypotheses with statistical rigor (a hedged example follows this listing).
- Mentor and guide junior analysts and data associates within the team.
- Present findings to senior stakeholders using visualization tools and compelling storytelling.
- Collaborate with Data Engineering, IT, and external partners to ensure marketing data quality and governance.

Key Skills & Qualifications:
- Bachelor's or Master's degree in Statistics, Mathematics, Computer Science, Business Analytics, or a related field.
- 10-12 years of experience in data analytics, with a significant focus on marketing analytics.
- Deep understanding of multi-channel marketing campaigns (digital, offline, CRM) and customer journeys.
- Advanced proficiency in SQL, Excel, Power BI/Tableau, and Google Analytics or similar tools.
- Experience with CRM and marketing platforms such as Salesforce, HubSpot, Adobe Analytics, or Marketo.
- Strong statistical knowledge and experience in predictive modeling and marketing mix modeling.
- Proven ability to influence strategic decisions through data-driven insights.
- Excellent stakeholder management, communication, and presentation skills.
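Evaluating a marketing hypothesis with statistical rigor typically means a significance test on campaign variants. A minimal sketch of a two-variant conversion test; the counts and 0.05 threshold are made up for illustration:

    # Minimal sketch: two-proportion z-test for an A/B campaign test.
    # Conversion counts and threshold are hypothetical.
    from statsmodels.stats.proportion import proportions_ztest

    conversions = [420, 480]    # variant A, variant B
    visitors = [10_000, 10_000]

    z_stat, p_value = proportions_ztest(conversions, visitors)
    print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

    # With p below a pre-registered threshold, the uplift is treated as
    # significant; otherwise keep collecting data.
    if p_value < 0.05:
        print("Variant difference is statistically significant.")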

Posted 5 days ago

Apply

12.0 - 15.0 years

45 - 50 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

The primary responsibility of this role is delivery of CIB credit risk reporting and MI, working directly with and supporting the group CIB credit risk team, which rolls up to the Global Head of Reporting, Risk & CFCR. The risk reports are used by risk managers, Group Risk Framework Owners, committees and senior management, regulators, and other internal stakeholders. The team is involved in every stage of the reporting process:
- Requirement gathering and collaboration with group and country teams to identify metrics for reporting.
- Evaluating data availability and proactively managing data quality requirements.
- Tactical solutions to bridge gaps while implementing strategic solutions, from testing through go-live.
- Partnering with Data Strategy/Technology for strategic sourcing and self-service models.
- Insights and commentaries; enhancements to the reports.
- Risk identification and controls review for the process; BCBS 239 requirements.

Candidate profile:
- Experience with multinational banks in credit risk management and underwriting.
- Managed specific industry profiles such as Oil & Gas, Shipping, and Commodities.
- Hands-on Python skills.
- Awareness of the trading environment, market moves, and their potential impact on exposures.
- Excellent verbal and written communication skills; able to work in a global team.
- Project management experience and documentation skills.
- Able to implement strategic ideas and enhance productivity and efficiency levels.
- Significant experience in major regulatory programs in the risk/finance/banking industry (BCBS, IMM, CVA, etc.).
- Postgraduate/professional qualification with 12+ years of experience in the risk domain.
- Risk certifications from established institutions are recommended.
- Technical certification in advanced MS Office tools, Python, or Tableau will be preferred.

Posted 5 days ago

Apply

2.0 - 4.0 years

11 - 16 Lacs

Bengaluru

Work from Office

Your Impact: As a Python Developer in the Debricked data science team, you will work on enhancing data intake processes and optimizing data pipelines. You will apply many different approaches, depending on the needs of the product and the challenges you encounter. In some cases, we use AI/LLM techniques, and we expect the number of such cases to increase. Your contributions will directly impact Debricked's scope and quality and will help ensure future commercial growth of the product.

What the role offers:
- Innovative Data Solutions: Develop and optimize data pipelines that improve the efficiency, accuracy, and automation of the Debricked SCA tool's data intake processes (a hedged sketch follows this listing).
- Collaborative Environment: Work closely with engineers and product managers from Sweden and India to create impactful, data-driven solutions.
- Continuous Improvement: Play an essential role in maintaining and improving the data quality that powers Debricked's analysis, improving the product's competitiveness.
- Skill Development: Collaborate across teams and leverage OpenText's resources (including an educational budget) to develop your expertise in software engineering, data science, and AI, expanding your skill set in both traditional and cutting-edge technologies.

What you need to succeed:
- 2-4 years of experience in Python development, with a focus on optimizing data processes and improving data quality.
- Proficiency in Python and related tools and libraries such as Jupyter, Pandas, and NumPy.
- A degree in Computer Science or a related discipline.
- An interest in application security.
- An asset: skills in Go, Java, LLMs (specifically Gemini), GCP, Kubernetes, MySQL, Elastic, or Neo4j.
- A strong understanding of how to manage and improve data quality in automated systems and pipelines.
- Ability to address complex data challenges and develop solutions to optimize systems.
- Comfortable working in a distributed team, collaborating across different time zones.
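Data intake optimization of this kind often comes down to normalizing and deduplicating records before they enter the pipeline. A minimal pandas sketch, where the package-record schema is an assumption for illustration, not Debricked's actual format:

    # Minimal sketch: normalizing and deduplicating incoming records in a
    # data intake step. The schema is hypothetical.
    import pandas as pd

    raw = pd.DataFrame({
        "package": ["Lodash", "lodash ", "react", "React"],
        "version": ["4.17.21", "4.17.21", "18.2.0", "18.2.0"],
    })

    clean = (raw
             .assign(package=raw["package"].str.strip().str.lower())
             .drop_duplicates(subset=["package", "version"])
             .reset_index(drop=True))

    # Two of the four raw rows were case/whitespace duplicates.
    print(clean)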

Posted 5 days ago

Apply

5.0 - 10.0 years

10 - 14 Lacs

Navi Mumbai

Work from Office

About The Role

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: MicroStrategy Business Intelligence, Microsoft Power Business Intelligence (BI)
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for managing the team and ensuring successful project delivery. Your typical day will involve collaborating with multiple teams, making key decisions, and providing solutions to problems for your immediate team and across multiple teams.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Lead the effort to design, build, and configure applications.
- Act as the primary point of contact for the project.
- Manage the team and ensure successful project delivery.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in MicroStrategy Business Intelligence, Microsoft Power Business Intelligence (BI).
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.

Additional Information:
- The candidate should have a minimum of 5 years of experience in MicroStrategy Business Intelligence.
- This position is based at our Indore office.
- A 15 years full-time education is required.

Qualification: 15 years full time education

Posted 5 days ago

Apply

3.0 - 8.0 years

9 - 13 Lacs

Kolkata

Work from Office

About The Role

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: PySpark
Good to have skills: Python (Programming Language)
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models. You will play a crucial role in shaping the data platform components.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Assist with the data platform blueprint and design.
- Collaborate with Integration Architects and Data Architects.
- Ensure cohesive integration between systems and data models.
- Implement data platform components effectively.
- Provide insights for enhancing data platform performance.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in PySpark.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity (a hedged PySpark sketch follows this listing).

Additional Information:
- The candidate should have a minimum of 3 years of experience in PySpark.
- This position is based at our Kolkata office.
- A 15 years full time education is required.

Qualification: 15 years full time education
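The data munging skills named here (cleaning, transformation, normalization) map to common DataFrame operations. A minimal PySpark sketch, with a hypothetical sensor table standing in for real platform data:

    # Minimal PySpark sketch of data munging: cleaning, transforming, and
    # normalizing a column. The table is hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("munging-sketch").getOrCreate()

    readings = spark.createDataFrame(
        [("s1", 10.0), ("s1", None), ("s2", 30.0), ("s2", 50.0)],
        ["sensor", "value"],
    )

    stats = readings.agg(F.min("value").alias("lo"),
                         F.max("value").alias("hi")).first()

    cleaned = (readings
               .dropna(subset=["value"])                 # cleaning
               .withColumn("value_norm",                 # min-max normalization
                           (F.col("value") - stats["lo"])
                           / (stats["hi"] - stats["lo"])))

    cleaned.show()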

Posted 5 days ago

Apply

15.0 - 20.0 years

4 - 8 Lacs

Hyderabad

Work from Office

About The Role

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: NA
Minimum 2 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems, contributing to the overall efficiency and reliability of data management within the organization.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Collaborate with cross-functional teams to understand data requirements and deliver effective solutions.
- Monitor and optimize data pipelines for performance and reliability.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data modeling and database design principles.
- Experience with ETL tools and processes.
- Familiarity with cloud platforms and data storage solutions.
- Knowledge of programming languages such as Python or Scala.

Additional Information:
- The candidate should have a minimum of 2 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- A 15 years full time education is required.

Qualification: 15 years full time education

Posted 5 days ago

Apply

7.0 - 12.0 years

5 - 9 Lacs

Pune

Work from Office

About The Role

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: Apache Spark, PySpark
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. You will be responsible for creating efficient and scalable applications that align with the organization's goals and objectives. Your typical day will involve collaborating with cross-functional teams, analyzing business requirements, designing application architecture, and implementing solutions to meet user needs. You will also be involved in troubleshooting and resolving application issues, as well as ensuring the security and integrity of the applications.

Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Design, build, and configure applications to meet business process and application requirements.
- Analyze business requirements and translate them into technical specifications.
- Collaborate with cross-functional teams to gather requirements and understand user needs.
- Design application architecture and develop efficient and scalable solutions.
- Implement and test application functionality and ensure its alignment with business objectives.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Good To Have Skills: Experience with Apache Spark, PySpark, Talend ETL.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.

Qualification: 15 years full time education

Posted 5 days ago

Apply

5.0 - 10.0 years

25 - 30 Lacs

Bengaluru

Remote

Senior Data Developer with Strong MS/Oracle SQL, Python Skills and Critical Thinking

Description: The EDA team seeks a dedicated and detail-oriented Senior Developer I to join our dynamic team. The successful candidate will handle repetitive technical tasks, such as Healthy Planet MS SQL file loads into a data warehouse, monitor Airflow DAGs, manage alerts, and rerun failed processes. The role also involves monitoring various daily and weekly jobs, which may include generation of revenue cycle reports and data delivery to external vendors. The ideal candidate will have robust experience with MS/Oracle SQL, Python, Epic Health Systems, and other relevant technologies.

Overview: As a Senior Developer I on the Useready team, you will play a vital role in improving the operation of our data load and management processes. Your primary responsibilities will be to ensure the accuracy and timeliness of data loads, maintain the health of data pipelines, and verify that all scheduled jobs complete successfully. You will collaborate with cross-functional teams to identify and resolve issues, improve processes, and maintain a high standard of data integrity.

Responsibilities:
- Manage and perform Healthy Planet file loads into a data warehouse.
- Monitor Airflow DAGs for successful completion, manage alerts, and rerun failed tasks as necessary (a hedged DAG sketch follows this listing).
- Monitor and oversee other daily and weekly jobs, including FGP cash reports and external reports.
- Collaborate with the data engineering team to streamline data processing workflows.
- Develop automation scripts in SQL and Python to reduce manual intervention in repetitive tasks.
- Ensure all data-related tasks are performed accurately and on time.
- Investigate and resolve data discrepancies and processing issues.
- Prepare and maintain documentation for processes and workflows.
- Conduct periodic data audits to ensure data integrity and compliance with defined standards.

Skillset Requirements:
- MS/Oracle SQL
- Python
- Data warehousing and ETL processes
- Monitoring tools such as Apache Airflow
- Data quality and integrity assurance
- Strong analytical and problem-solving abilities
- Excellent written and verbal communication

Additional Skillset: Familiarity with monitoring and managing Apache Airflow DAGs.

Experience: Minimum of 5 years in a similar role, with a focus on data management and process automation. Proven track record of successfully managing complex data processes and meeting deadlines.

Education: Bachelor's degree in Computer Science, Information Technology, Data Science, or a related field.

Certifications: Certifications in Epic Cogito, MS/Oracle SQL, Python, or data management are a plus.

Notice period: 0-30 days
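Monitoring and rerunning scheduled loads usually centers on an Airflow DAG with retries and a failure alert. A minimal sketch, assuming Airflow 2.4+; the DAG name, schedule, and load function are illustrative, not this team's actual pipeline:

    # Minimal Airflow 2.4+ sketch: a daily load task with retries and an
    # alert callback. Names and schedule are hypothetical.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def load_file_to_warehouse():
        # Placeholder for the actual MS SQL file load logic.
        print("Loading Healthy Planet extract into the warehouse...")

    def notify_on_failure(context):
        # Hook an email/Slack alert here; printing keeps the sketch self-contained.
        print(f"Task {context['task_instance'].task_id} failed.")

    with DAG(
        dag_id="healthy_planet_daily_load",
        start_date=datetime(2025, 1, 1),
        schedule="@daily",
        catchup=False,
        default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
    ) as dag:
        PythonOperator(
            task_id="load_file",
            python_callable=load_file_to_warehouse,
            on_failure_callback=notify_on_failure,
        )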

Posted 5 days ago

Apply

15.0 - 20.0 years

18 - 22 Lacs

Bengaluru

Work from Office

About The Role

Project Role: Data Platform Architect
Project Role Description: Architects the data platform blueprint and implements the design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: NA
Minimum 12 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Platform Architect, you will architect the data platform blueprint and implement the design. Your typical day will involve collaborating with the team to develop and implement solutions that align with the organization's goals and objectives. You will utilize your expertise in Databricks Unified Data Analytics Platform to create efficient and effective applications that enhance business processes and drive innovation.

Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Collaborate with stakeholders to gather requirements and understand business needs.
- Design and develop applications using Databricks Unified Data Analytics Platform.
- Configure and customize applications to meet specific business process requirements.
- Perform code reviews and ensure adherence to coding standards.
- Provide technical guidance and mentorship to junior team members.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Good To Have Skills: Experience with Microsoft Azure Data Services, Microsoft Azure Analytics Services.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.

Additional Information:
- The candidate should have a minimum of 12 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Chennai office.
- A full-time education is required.

Qualification: 15 years full time education

Posted 5 days ago

Apply

1.0 - 3.0 years

20 - 25 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

[{"Salary":"12-24 Lakhs Per Annum" , "Remote_Job":false , "Posting_Title":"Data Engineer" , "Is_Locked":false , "City":"Mumbai" , "Industry":"Financial Services","Job_Description":" Build and maintain Sav\u2019s data infrastructure, pipelines, and storage systems for structured and unstructured data Ingest and normalize 350+ user-level data points across banking APIs, behavioral events, transaction data, and 3rd party services Architect a data mesh that allows distributed ownership but unified insight Set up and maintain modern ELT/ETL pipelines with performance, cost, and scale in mind Integrate LLM tooling (Ollama, Gemini, OpenAI) into data workflows for real-time inference and analysis Enable self-service data access and build visualization layers using tools like Looker, Metabase, or Superset Collaborate with data scientists and ML engineers to build model-ready datasets and ensure feature consistency Own data quality, governance, security, and documentation Continuously improve latency, observability, and cost-e\uFB03ciency of data systems Requirements 4\u20137 years of hands-on experience as a Data Engineer or in a similar data infrastructure role Strong programming skills in Python, SQL, and experience with dbt or similar tools Experience building data pipelines using modern stack: Airflow, Kafka, Spark, or equivalent Familiarity with cloud data platforms (GCP preferred), BigQuery, Firestore, or similar Experience handling large-scale transactional and behavioral data Bonus: Prior experience integrating LLMs/AI APIs (Ollama, OpenAI, Gemini) into analytics or automation workflows Good understanding of data warehousing, governance, and privacy-by-design principles Self-starter, comfortable in 0 \u2192 1 environments, and excited to work cross-functionally with tech and non-tech teams Clear communication and documentation skills Benefits Be part of a mission-driven fintech scaling across the GCC. Work alongside a passionate and visionary leadership team. A flat, fast-paced, no-drama culture with high ownership and high trust Work alongside a globally distributed team of experienced builders Compensation: INR 12-24LPA + ESOPs + Employee benefits ","Work_Experience":"5+ years","Job_Type":"Full time","Job_Opening_Name":"Data Engineer" , "State":"Maharashtra" , "Country":"India" , "Zip_Code":"400058" , "id":"781666000000580021" , "Publish":true , "Date_Opened":"2025-06-11" , "Keep_on_Career_Site":false}]

Posted 5 days ago

Apply

3.0 - 8.0 years

20 - 25 Lacs

Bengaluru

Work from Office

As a Software Engineer III at JPMorgan Chase within Consumer and Community Banking - JP Wealth Management, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.

Job responsibilities:
- Execute software solutions, design, development, and technical troubleshooting, with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems.
- Create secure and high-quality production code and maintain algorithms that run synchronously with appropriate systems.
- Produce architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development.
- Recommend improvements and modifications to new and existing data and ETL pipelines.
- Create data analytics for data scientists to innovate, build, and optimize our ecosystem.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Ingest data into our data lake and provide frameworks and services for operating on that data, including the use of Spark.

Required qualifications, capabilities, and skills:
- Formal training or certification in software engineering concepts and 3+ years of applied experience.
- Hands-on practical experience in system design, application development, testing, and operational stability.
- Proficiency in coding in one or more languages.
- Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages.
- Overall knowledge of the Software Development Life Cycle.
- Ability to analyze, debug, and correct issues with data pipelines.
- Ability to operate on or build solutions for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, AWS, and Spark technologies.
- Collaborate with product teams to resolve data quality issues promptly.

Preferred qualifications, capabilities, and skills:
- Experience implementing data pipelines, preferably with Hadoop or Spark, in Java or Python.
- Exceptional coding and design skills in Java or Python and PySpark.
- Experience with API calls for data extraction (a hedged sketch follows this listing).
- Good knowledge of SQL and analytical queries.
- Experience with Java, PySpark, and AWS implementations.
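The extract-and-load pattern named here (API calls for data extraction feeding a data lake) can be sketched briefly; the endpoint, fields, and output path are hypothetical, and the Parquet write assumes pyarrow (and s3fs for S3 paths) is installed:

    # Minimal sketch: extracting records via an API call and landing them in
    # a data lake as partitioned Parquet. Endpoint and path are hypothetical.
    import pandas as pd
    import requests

    resp = requests.get("https://api.example.com/v1/positions", timeout=30)
    resp.raise_for_status()

    frame = pd.DataFrame(resp.json()["records"])
    frame["as_of_date"] = pd.Timestamp.utcnow().date().isoformat()

    # Partitioned Parquet keeps downstream Spark/SQL queries efficient.
    frame.to_parquet("s3://example-data-lake/positions/",
                     partition_cols=["as_of_date"])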

Posted 5 days ago

Apply

10.0 - 15.0 years

20 - 25 Lacs

Bengaluru

Work from Office

Job Title: Senior AWS Data Engineer

About the Role: We are seeking an experienced and highly skilled Senior AWS Data Engineer with over 10 years of experience to join our dynamic team. The ideal candidate will have a deep understanding of data engineering principles, extensive experience with AWS services, and a proven track record of designing and implementing scalable data solutions.

Key Responsibilities:
- Design and implement robust, scalable, and efficient data pipelines and architectures on AWS.
- Develop data models and schemas to support business intelligence and analytics requirements.
- Utilize AWS services such as S3, Redshift, EMR, Glue, Lambda, and Kinesis to build and optimize data solutions (a hedged sketch follows this listing).
- Implement data security and compliance measures using AWS IAM, KMS, and other security services.
- Design and develop ETL processes to ingest, transform, and load data from various sources into data warehouses and lakes.
- Ensure data quality and integrity through validation, cleansing, and transformation processes.
- Optimize data storage and retrieval performance through indexing, partitioning, and other techniques.
- Monitor and troubleshoot data pipelines to ensure high availability and reliability.
- Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions.
- Provide technical leadership and mentorship to junior data engineers and team members.
- Identify opportunities to automate and streamline data processes for increased efficiency.
- Participate in on-call rotations to provide 24/7 support for critical systems and services.

Required Skills and Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Over 10 years of experience in data engineering, with a strong focus on AWS services.
- Proficiency in programming languages such as Python, Java, or Scala.
- Extensive experience with SQL and NoSQL databases.
- Strong understanding of data warehousing concepts and technologies.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.
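One common composition of the AWS services listed (S3, Lambda, Glue) is an event-driven load: an S3 upload triggers a Lambda that starts a Glue job. A minimal boto3 sketch; the job name and argument names are assumptions, and the S3 event notification is configured separately:

    # Minimal sketch: a Lambda handler that starts a Glue ETL job when a new
    # object lands in S3. Job name and arguments are hypothetical.
    import boto3

    glue = boto3.client("glue")

    def handler(event, context):
        record = event["Records"][0]["s3"]
        run = glue.start_job_run(
            JobName="nightly-etl-job",
            Arguments={
                "--input_path": f's3://{record["bucket"]["name"]}'
                                f'/{record["object"]["key"]}',
            },
        )
        print(f"Started Glue run {run['JobRunId']}")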

Posted 5 days ago

Apply

3.0 - 8.0 years

20 - 25 Lacs

Bengaluru

Work from Office

As a Software Engineer III at JPMorgan Chase within the Risk Central development team, you will have the opportunity to work at the intersection of critical front office, product control facing data analytics, and big data technologies. You will be part of a high-energy team, helping us build our next-generation business analytics platform by landing financial data from various front office and product control platforms. This role offers a chance to be the primary subject matter expert for data in our analytics platform, develop data quality rules and controls, and analyze and solve query performance bottlenecks in cloud-based warehouses like Redshift and AWS Glue.

Job Responsibilities:
Land data from various firm sources into the big data warehouse.
Investigate data issues and provide support on them.
Develop automation for data extraction.
Design and tune schemas for data landed on the platform.
Partner with information modelling teams on firmwide logical data models.
Act as the primary subject matter expert (SME) for data in the analytics platform.
Develop data quality rules and controls for data.
Analyze and solve query performance bottlenecks in cloud-based warehouses like Redshift and AWS Glue.

Required qualifications, capabilities and skills:
Formal training or certification on Computer Science concepts and 3+ years of applied experience.
Strong hands-on coding in Python, Java, Apache Spark, and SQL.
Strong CS fundamentals (data structures, algorithms) with a good understanding of big data.
Experience with AWS application development, including services such as Lambda, Glue, and ECS/EKS.
Experience with Unix/Linux and shell scripting.
Excellent communication skills are a must for this position.

Preferred qualifications, capabilities and skills:
Good understanding of data modelling challenges with big data.
Good understanding of financial data, especially in front office investment banking, is a major plus.
Ability to code in Apache Spark using Scala is an added advantage.
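The data quality rules and controls mentioned above can be sketched as SQL predicates evaluated over a landed table. This is an illustrative pattern, not JPMorgan's actual framework, and the table and column names (trades, trade_id, notional, desk) are hypothetical:

# Minimal sketch of data quality rules expressed as SQL predicates over a
# landed table. Table, column, and desk names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dq-rules").getOrCreate()
trades = spark.read.parquet("s3://example-bucket/landed/trades/")
trades.createOrReplaceTempView("trades")

# Each rule is a name plus a predicate that failing rows satisfy.
rules = {
    "missing_trade_id": "trade_id IS NULL",
    "non_positive_notional": "notional <= 0",
    "unknown_desk": "desk NOT IN ('RATES', 'CREDIT', 'FX')",
}

for name, predicate in rules.items():
    failures = spark.sql(f"SELECT COUNT(*) AS n FROM trades WHERE {predicate}")
    n = failures.first()["n"]
    print(f"rule={name} failing_rows={n}")

Keeping rules as data rather than code makes it straightforward to add controls without redeploying the pipeline.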

Posted 5 days ago

Apply

5.0 - 9.0 years

3 - 7 Lacs

Bengaluru

Work from Office

We are looking for a meticulous and experienced BI Reporting QA professional to lead the team and ensure the accuracy and reliability of our Business Intelligence (BI) reports and data visualizations. The BI Reporting QA plays a critical role in validating and verifying the quality of reports and dashboards, ensuring they provide dependable insights for decision-making.

Responsibilities:
Design and develop the QA strategy for BI by identifying all relevant tests and validations.
Develop and execute comprehensive test plans and test cases for BI reports, dashboards, and data visualizations to identify defects and discrepancies.
Verify data accuracy by comparing data sources to report outputs, identifying inconsistencies, anomalies, and data quality issues.
Evaluate the performance and responsiveness of reports and dashboards, ensuring they load efficiently and meet performance expectations.
Perform regression testing to confirm that report modifications or updates do not introduce new issues or affect existing functionality.
Collaborate with end-users and stakeholders to conduct UAT and ensure that reports meet business requirements and user expectations.
Document and track defects and issues, working closely with BI developers to ensure timely resolution.
Create and maintain documentation of test cases, results, and validation procedures for reference and reporting.
Ensure that BI reports adhere to data governance principles, including data accuracy, data quality, and data security.
Manage and maintain test environments, including data sets and configurations, to support effective testing.

Required Skills:
Proven experience in Power BI reporting quality assurance.
Proficiency in designing and writing SQL statements for data querying and QA validation.
Travel industry experience is essential.
Strong understanding of BI reporting tools and platforms.
Proficiency in data validation, data comparison, and data quality assessment.
Expertise in implementing automation in QA processes.

Preferred:
Relevant BI reporting tool certifications (Microsoft Certified: Power BI).
Relevant quality assurance certifications (e.g., ISTQB Certified Tester).

Qualifications: Graduate
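Source-to-report reconciliation, the core of the data accuracy checks listed above, can be sketched in a few lines of Python. The tables and figures here are hypothetical, with an in-memory SQLite database standing in for the real source and report datasets:

# Minimal sketch of source-to-report reconciliation: recompute an aggregate
# from the source tables and compare it to the figure a report displays.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bookings (region TEXT, revenue REAL);
    INSERT INTO bookings VALUES ('EMEA', 120.0), ('EMEA', 80.0), ('APAC', 50.0);
    -- report_summary stands in for the dataset behind a published dashboard.
    CREATE TABLE report_summary (region TEXT, revenue REAL);
    INSERT INTO report_summary VALUES ('EMEA', 200.0), ('APAC', 55.0);
""")

# Recompute the report's figures from source and flag any mismatch.
query = """
    SELECT s.region, s.total, r.revenue
    FROM (SELECT region, SUM(revenue) AS total FROM bookings GROUP BY region) s
    JOIN report_summary r ON r.region = s.region
"""
for region, source_total, report_total in conn.execute(query):
    status = "OK" if abs(source_total - report_total) < 0.01 else "MISMATCH"
    print(f"{region}: source={source_total} report={report_total} {status}")

In this toy data, EMEA reconciles cleanly while APAC surfaces a mismatch, exactly the kind of discrepancy this role would document and track to resolution.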

Posted 5 days ago

Apply

3.0 - 4.0 years

5 - 6 Lacs

Bengaluru

Work from Office

About the role: Support SFRM management to adhere to various governance standards (e.g. EUA guidelines) and manage operational resilience. Oversee the various aspects of market data licensing, facilitate cross-departmental discussions, and manage relationships with various stakeholders. Identify and initiate projects and processes that aid in improving the management of audits and data reviews. Facilitate various SFRM-wide initiatives (e.g. process documentation) and track them to closure. Manage effective communication channels within the team to facilitate easier audits, control reviews, etc.

About the team: The SFRM Governance and Data Management team within Solvency and Financial Risk Management (SFRM) oversees and reports on various governance topics. Examples include adherence of departmental activities to data quality standards, adherence to End User Application guidelines, and management of market data licensing agreements. Additionally, the team manages master data in the core IT systems to facilitate seamless reporting across multiple dimensions. We rely on strong communication skills and IT capabilities for an efficient implementation of our mandate.

About you:
Good academic track record in Finance or an MBA.
3-4 years of experience in the financial services or insurance sector with a good track record of management reporting.
Excellent communication skills and strong management presence.
Ability to collaborate effectively with a broad range of people and roles in a diverse cultural environment.
Strong project management skills with a consulting and can-do mindset.
Attention to detail and a drive to understand concepts from start to finish.

About Swiss Re: Swiss Re is one of the world's leading providers of reinsurance, insurance and other forms of insurance-based risk transfer, working to make the world more resilient. We anticipate and manage a wide variety of risks, from natural catastrophes and climate change to cybercrime. We cover both Property & Casualty and Life & Health. Combining experience with creative thinking and cutting-edge expertise, we create new opportunities and solutions for our clients. This is possible thanks to the collaboration of more than 14,000 employees across the world. If you are an experienced professional returning to the workforce after a career break, we encourage you to apply for open positions that match your skills and experience.

Reference Code: 134590

Posted 5 days ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Bengaluru

Work from Office

About The Role
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: GCP Dataflow
Good to have skills: Google BigQuery
Minimum 3 year(s) of experience is required.
Educational Qualification: Any Graduate

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and optimize data workflows, ensuring that the data infrastructure supports the organization's analytical needs and business objectives.

Roles & Responsibilities:
Expected to perform independently and become an SME.
Active participation and contribution in team discussions is required.
Contribute to providing solutions to work-related problems.
Develop and maintain robust data pipelines to support data processing and analytics.
Monitor and troubleshoot data workflows to ensure optimal performance and reliability.

Professional & Technical Skills:
Must have: proficiency in GCP Dataflow.
Good to have: experience with Google BigQuery.
Strong understanding of data modeling and database design principles.
Experience with ETL tools and data integration techniques.
Familiarity with cloud computing concepts and services.

Additional Information:
The candidate should have a minimum of 3 years of experience in GCP Dataflow.
This position is based at our Bengaluru office.
Any Graduate is required.
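Dataflow pipelines are typically authored with the Apache Beam SDK. A minimal Python sketch of the kind of pipeline this posting describes follows; paths and field names are hypothetical, and a real Dataflow run would also pass --runner=DataflowRunner plus project and region options:

# Minimal Apache Beam sketch of a pipeline that could run on GCP Dataflow.
# Bucket paths and event fields (user_id, amount) are hypothetical.
import json
import apache_beam as beam

def parse_event(line):
    """Parse a JSON event, keeping only the fields downstream steps need."""
    event = json.loads(line)
    return {"user_id": event.get("user_id"), "amount": event.get("amount")}

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "ReadEvents" >> beam.io.ReadFromText("gs://example-bucket/events/*.json")
        | "Parse" >> beam.Map(parse_event)
        | "DropIncomplete" >> beam.Filter(lambda e: e["user_id"] is not None)
        | "Serialize" >> beam.Map(json.dumps)
        | "Write" >> beam.io.WriteToText("gs://example-bucket/clean/events")
    )

The DropIncomplete step is where data quality rules of the kind this role owns would live.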

Posted 5 days ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Bengaluru

Work from Office

About The Role
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Informatica Data Quality
Good to have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing application features, and ensuring that the applications function seamlessly to support business operations. You will also engage in testing and troubleshooting to enhance application performance and user experience, while continuously seeking ways to improve processes and deliver high-quality solutions.

Roles & Responsibilities:
Expected to perform independently and become an SME.
Active participation and contribution in team discussions is required.
Contribute to providing solutions to work-related problems.
Assist in the documentation of application processes and workflows.
Engage in code reviews to ensure adherence to best practices and standards.

Professional & Technical Skills:
Must have: proficiency in Informatica Data Quality.
Strong understanding of data profiling and cleansing techniques.
Experience with data integration and transformation processes.
Familiarity with ETL processes and data warehousing concepts.
Ability to troubleshoot and resolve data quality issues efficiently.

Additional Information:
The candidate should have a minimum of 3 years of experience in Informatica Data Quality.
This position is based at our Bengaluru office.
A 15 years full time education is required.
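Informatica Data Quality itself is configured through its own tooling rather than hand-written code, so purely as a generic illustration of the profiling and cleansing techniques the posting names, here is a small pandas sketch with hypothetical columns:

# Generic illustration of data profiling and cleansing; this is not IDQ code.
# Columns (customer_id, email) and values are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 2, 2, None],
    "email": [" A@X.COM ", "b@y.com", "b@y.com", None],
})

# Profiling: completeness and cardinality per column.
profile = pd.DataFrame({
    "null_ratio": df.isna().mean(),
    "distinct_values": df.nunique(),
})
print(profile)

# Cleansing: standardize values, then drop incomplete and duplicate records.
df["email"] = df["email"].str.strip().str.lower()
cleaned = df.dropna(subset=["customer_id"]).drop_duplicates()
print(cleaned)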

Posted 5 days ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Chennai

Work from Office

About The Role
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. You will collaborate with teams, make team decisions, and provide solutions to problems. Your typical day will involve designing and developing applications, collaborating with multiple teams, and contributing to key decisions.

Roles & Responsibilities:
Expected to be an SME.
Collaborate with and manage the team to perform.
Responsible for team decisions.
Engage with multiple teams and contribute to key decisions.
Provide solutions to problems for the immediate team and across multiple teams.
Design and develop applications.
Collaborate with cross-functional teams.

Professional & Technical Skills:
Must have: proficiency in the Databricks Unified Data Analytics Platform.
Strong understanding of statistical analysis and machine learning algorithms.
Experience with data visualization tools such as Tableau or Power BI.
Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering.
Solid grasp of data munging techniques, including data cleaning, transformation, and normalization, to ensure data quality and integrity.

Additional Information:
The candidate should have a minimum of 5 years of experience in the Databricks Unified Data Analytics Platform.
This position is based at our Bengaluru office.
A 15 years full time education is required.
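A minimal sketch of the kind of work described: a Spark ML pipeline combining normalization (the data munging the posting mentions) with logistic regression, runnable on Databricks or any Spark cluster. The dataset and column names are hypothetical:

# Minimal Spark ML sketch: scale features, then fit a logistic regression.
# The toy churn dataset and its columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler, StandardScaler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("churn-model").getOrCreate()
df = spark.createDataFrame(
    [(35.0, 2.0, 0.0), (120.0, 9.0, 1.0), (60.0, 4.0, 0.0), (150.0, 12.0, 1.0)],
    ["monthly_spend", "support_tickets", "churned"],
)

# Assemble raw columns into a vector, normalize, then classify.
assembler = VectorAssembler(
    inputCols=["monthly_spend", "support_tickets"], outputCol="raw")
scaler = StandardScaler(inputCol="raw", outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="churned")

model = Pipeline(stages=[assembler, scaler, lr]).fit(df)
model.transform(df).select("churned", "prediction").show()

The same Pipeline structure accommodates the other algorithms the posting lists, such as decision trees or clustering, by swapping the final stage.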

Posted 5 days ago

Apply