4.0 - 9.0 years
6 - 11 Lacs
Gurugram
Work from Office
About Us: We turn customer challenges into growth opportunities. Material is a global strategy partner to the world's most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences. We use deep human insights, design innovation, and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve. Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using deep technology expertise and strategic partnerships with top-tier technology partners.
Job Summary: We are seeking a Senior Data Engineer (Databricks) with a strong development background in Azure Databricks and Python, who will be instrumental in building and optimising scalable data pipelines and solutions across the Azure ecosystem. This role requires hands-on development experience with PySpark, data modelling, and Azure Data Factory. You will collaborate closely with data architects, analysts, and business stakeholders to ensure reliable and high-performance data solutions.
Experience Required: 4+ years as a Lead/Senior Data Engineer (Microsoft Azure, Databricks, Data Factory, Data Engineering, Data Modelling)
Key Responsibilities:
- Develop and Maintain Data Pipelines: Design, implement, and optimise scalable data pipelines using Azure Databricks (PySpark) for both batch and streaming use cases.
- Azure Platform Integration: Work extensively with Azure services including Data Factory, ADLS Gen2, Delta Lake, and Azure Synapse for end-to-end data pipeline orchestration and storage.
- Data Transformation & Processing: Write efficient, maintainable, and reusable PySpark code for data ingestion, transformation, and validation processes within the Databricks environment.
- Collaboration: Partner with data architects, analysts, and data scientists to understand requirements and deliver robust, high-quality data solutions.
- Performance Tuning and Optimisation: Optimise Databricks cluster configurations, notebook performance, and resource consumption to ensure cost-effective and efficient data processing.
- Testing and Documentation: Implement unit and integration tests for data pipelines. Document solutions, processes, and best practices to enable team growth and maintainability.
- Security and Compliance: Ensure data governance, privacy, and compliance are upheld across all engineered solutions, following Azure security best practices.
Preferred Skills:
- Strong hands-on experience with Delta Lake, including table management, schema evolution, and implementing ACID-compliant pipelines (a minimal upsert sketch follows this listing).
- Skilled in developing and maintaining Databricks notebooks and jobs for large-scale batch and streaming data processing.
- Experience writing modular, production-grade PySpark and Python code, including reusable functions and libraries for data transformation.
- Experience with streaming data ingestion and Structured Streaming in Databricks for near real-time data solutions.
- Knowledge of performance tuning techniques in Spark, including job optimization, caching, and partitioning strategies.
- Exposure to data quality frameworks and testing practices (e.g., pytest, data validation libraries, custom assertions).
- Basic understanding of Unity Catalog for managing data governance, access controls, and lineage tracking from a developer's perspective.
- Familiarity with Power BI: able to structure data models and views in Databricks or Synapse to support BI consumption.
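As a rough illustration of the Delta Lake work described above, here is a minimal sketch of an ACID-style upsert on Azure Databricks; the ADLS paths, the customer_id join key, and the JSON source format are assumptions for the example, not details from the posting.

```python
# Minimal sketch of an ACID upsert into a Delta table on Azure Databricks.
# Paths, column names, and the updates_df source are illustrative assumptions.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

updates_df = (
    spark.read.format("json")
    .load("abfss://raw@examplelake.dfs.core.windows.net/customers/")  # hypothetical ADLS Gen2 path
)

target = DeltaTable.forPath(spark, "abfss://curated@examplelake.dfs.core.windows.net/customers")

(
    target.alias("t")
    .merge(updates_df.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()      # update existing customers
    .whenNotMatchedInsertAll()   # insert new ones; the merge runs as one ACID transaction
    .execute()
)
```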
Posted 1 week ago
2.0 - 7.0 years
4 - 9 Lacs
Pune
Work from Office
Our team members are at the heart of everything we do. At Cencora, we are united in our responsibility to create healthier futures, and every person here is essential to us being able to deliver on that purpose. If you want to make a difference at the center of health, come join our innovative company and help us improve the lives of people and animals everywhere. Apply today!
Job Details:
1. Abstract and digitize Prime Vendor Agreements (PVAs) to support pricing tool data/input
2. Process Health Systems (HS) contracts, Community Retail & Long-Term Care (CRLTC) PVAs, and other contracts across the organization
3. Review and digitize contracts per the Service Level Agreement (SLA)
4. Collaborate with various functional teams regarding data interpretation and entry
5. Leverage the current abstraction methodology and format for determining terms, conditions, fields, and grids for digitization
6. Present to the team on decisions and the rationale behind data input/digitization assumptions
7. Provide status updates to supervisors and other team members
8. Discuss/escalate day-to-day tasks with the team supervisor as needed
9. Prepare for the quarterly data quality review process/audit
10. Assist and support department managers in preparing for meetings
11. Maintain and promote positive and professional working relationships with associates and management
12. Comply with all appropriate policies, procedures, and regulations
13. Any other duties assigned
WORK EXPERIENCE AND EDUCATION:
- Bachelor's degree or equivalent in a relevant field
- 2+ years of experience in legal contract management and data abstraction
- Experienced in reviewing PVAs, Amendments, and Term Sheets
- Skilled in using Excel and CLM platforms, especially Icertis
- Helped standardize abstraction formats for better consistency
- Maintains confidentiality and data security
- Strong communication and presentation skills
What Cencora offers: Benefit offerings outside the US may vary by country and will be aligned to local market practice. The eligibility and effective date may differ for some benefits and for team members covered under collective bargaining agreements.
Full time
Affiliated Companies: CENCORA BUSINESS SERVICES INDIA PRIVATE LIMITED
Equal Employment Opportunity: Cencora is committed to providing equal employment opportunity without regard to race, color, religion, sex, sexual orientation, gender identity, genetic information, national origin, age, disability, veteran status or membership in any other class protected by federal, state or local law. Cencora is committed to providing reasonable accommodations to individuals with disabilities during the employment process, consistent with legal requirements. If you wish to request an accommodation while seeking employment, please call 888.692.2272 or email hrsc@cencora.com. We will make accommodation determinations on a request-by-request basis. Messages and emails regarding anything other than accommodation requests will not be returned.
Posted 1 week ago
2.0 - 5.0 years
4 - 7 Lacs
Ahmedabad
Work from Office
Not Applicable. Specialism: Data, Analytics & AI. Management Level: Senior Associate.
Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.
Responsibilities:
- Design and build data pipelines and data lakes to automate ingestion of structured and unstructured data, providing fast, optimized, and robust end-to-end solutions (an illustrative Glue job skeleton follows this listing).
- Knowledge of data lake and data warehouse concepts.
- Experience working with AWS big data technologies.
- Improve the quality and reliability of data pipelines through monitoring, validation, and failure detection.
- Deploy and configure components to production environments.
Technology: Redshift, S3, AWS Glue, Lambda, SQL, PySpark
Mandatory skill sets: AWS Data Engineer
Preferred skill sets: AWS Data Engineer
Years of experience required: 4-8
Education qualification: B.Tech/MBA/MCA
Education Degrees/Field of Study required: Master of Business Administration, Bachelor of Technology. Degrees/Field of Study preferred:
Required Skills: AWS Development, Data Engineering
Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}
Travel Requirements Government Clearance Required?
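To make the Glue/PySpark stack above concrete, here is a hedged skeleton of a Glue job that reads a catalog table, applies a simple quality gate, and writes partitioned Parquet to S3; the database, table, column names, and bucket are placeholder assumptions.

```python
# Illustrative PySpark Glue job: catalog table -> transform -> partitioned Parquet on S3.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (hypothetical database/table names)
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales_raw", table_name="orders"
)

df = dyf.toDF()
clean = (
    df.dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_date"))
    .filter(F.col("amount") > 0)  # basic data-quality gate
)

# Write partitioned Parquet to the lake; Redshift could load it via COPY or Spectrum
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"
)

job.commit()
```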
Posted 1 week ago
4.0 - 9.0 years
6 - 11 Lacs
Hyderabad
Work from Office
Not Applicable. Specialism: Data, Analytics & AI.
Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.
Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Summary: The primary purpose of this role is to translate business requirements and functional specifications into logical program designs and to deliver dashboards, schemas, data pipelines, and software solutions. Design, develop, and maintain scalable data pipelines to process and transform large volumes of structured and unstructured data. Build and maintain ETL/ELT workflows for data ingestion from various sources (APIs, databases, files, cloud). Ensure data quality, integrity, and governance across the pipeline. This includes developing, configuring, or modifying data components within various complex business and/or enterprise application solutions in various computing environments.
We are currently seeking a Senior Data Engineer who can perform data integration to build custom data pipelines and manage data transformation, performance optimization, automation, data governance, and data quality.
Mandatory skill sets (must-have knowledge, skills and experience): GCP Dataproc (PySpark, Spark SQL), Dataflow (Apache Beam), Cloud Composer, BigQuery, API Management (a minimal Beam pipeline sketch follows this listing).
Preferred skill sets (good-to-have knowledge, skills and experience): Experience in building data pipelines; experience with software lifecycle tools for CI/CD and version control systems such as Git; familiarity with Agile methodologies is a plus.
Years of experience required: 4 to 12 years. Notice period: Immediate to 30 days. Location: Hyderabad (3 days/week work from the client office).
Education qualification: BE, B.Tech, ME, M.Tech, MBA, MCA (60% or above)
Education Degrees/Field of Study required: Master of Business Administration, Bachelor of Technology, Master of Engineering, Bachelor of Engineering. Degrees/Field of Study preferred:
Required Skills: Structured Query Language (SQL)
Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 32 more}
Government Clearance Required? No
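For a sense of the Dataflow/Apache Beam work named in the mandatory skills, the following is a minimal Beam pipeline sketch that reads JSON events from Cloud Storage and appends them to BigQuery; the project, bucket, dataset, and schema are invented placeholders.

```python
# Hedged sketch of an Apache Beam pipeline runnable on Dataflow.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",       # use "DirectRunner" for local testing
    project="example-project",
    region="asia-south1",
    temp_location="gs://example-bucket/tmp",
)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadEvents" >> beam.io.ReadFromText("gs://example-bucket/events/*.json")
        | "Parse" >> beam.Map(json.loads)
        | "KeepValid" >> beam.Filter(lambda e: e.get("user_id") is not None)
        | "WriteToBQ" >> beam.io.WriteToBigQuery(
            "example-project:analytics.events",
            schema="user_id:STRING,event_type:STRING,ts:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        )
    )
```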
Posted 1 week ago
3.0 - 6.0 years
5 - 8 Lacs
Kolkata
Work from Office
Responsibilities (position responsibilities and expectations):
- Designing and building analytical/DL/ML algorithms using Python, R, and other statistical tools.
- Strong data representation and lucid presentation (of analysis/modelling output) using Python, R Markdown, PowerPoint, Excel, etc.
- Ability to learn a new scripting language or analytics platform.
Technical skills required (must have):
- Hands-on exposure to Generative AI (design and development of GenAI applications in production).
- Strong understanding of RAG, vector databases, LangChain, and multimodal AI applications (a minimal retrieval sketch follows this listing).
- Strong understanding of deploying and optimizing AI applications in production.
- Strong knowledge of statistical and data mining techniques such as linear and logistic regression analysis, decision trees, bagging, boosting, time series, and non-parametric analysis.
- Strong knowledge of DL and neural network architectures (CNN, RNN, LSTM, Transformers, etc.).
- Strong knowledge of SQL and R/Python, and experience with distributed data/computing tools and IDEs.
- Experience in advanced text analytics (NLP, NLU, NLG).
- Strong hands-on experience in end-to-end statistical model development and implementation.
- Understanding of LLMOps and MLOps for scalable ML development.
- Basic understanding of DevOps and deployment of models into production (PyTorch, TensorFlow, etc.).
- Expert-level proficiency in algorithm-building languages like SQL, R, and Python, and data visualization tools like Shiny, Qlik, Power BI, etc.
- Exposure to cloud platform (Azure, AWS, or GCP) technologies and services such as Azure AI / SageMaker / Vertex AI, AutoML, Azure Index, Azure Functions, OCR, OpenAI, storage, scaling, etc.
Technical skills required (any one or more):
- Experience in video/image analytics (computer vision).
- Experience in IoT/machine-log data analysis.
- Exposure to data analytics platforms like Domino Data Lab, C3.ai, H2O, Alteryx, or KNIME.
- Expertise in cloud analytics platforms (Azure, AWS, or Google).
- Experience in process mining with expertise in Celonis or other tools.
- Proven capability in using Generative AI services like OpenAI and Google (Gemini).
- Understanding of agentic AI frameworks (LangGraph, AutoGen, etc.).
- Understanding of fine-tuning for pre-trained models like GPT, LLaMA, Claude, etc. using LoRA, QLoRA, and PEFT techniques.
- Proven capability in building customized models from open-source distributions like Llama and Stable Diffusion.
Mandatory skill sets: AI chatbots, data structures, GenAI, object-oriented programming, IDE, API, LLM prompts, Streamlit
Preferred skill sets: AI chatbots, data structures, GenAI, object-oriented programming, IDE, API, LLM prompts, Streamlit
Years of experience required: 3-6 years
Education qualification: BE, B.Tech, M.Tech, M.Stat, Ph.D., M.Sc. (Stats/Maths)
Education Degrees/Field of Study required: Doctor of Philosophy, Bachelor of Engineering, Bachelor of Technology. Degrees/Field of Study preferred:
Required Skills: Chatbots, Data Structures, Generative AI
Accepting Feedback, Active Listening, AI Implementation, C++ Programming Language, Communication, Complex Data Analysis, Data Analysis, Data Infrastructure, Data Integration, Data Modeling, Data Pipeline, Data Quality, Deep Learning, Emotional Regulation, Empathy, GPU Programming, Inclusion, Intellectual Curiosity, Java (Programming Language), Machine Learning, Machine Learning Libraries, Named Entity Recognition, Natural Language Processing (NLP), Natural Language Toolkit (NLTK) {+ 20 more}
Travel Requirements Government Clearance Required?
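To ground the RAG items above, here is a minimal retrieval sketch using sentence-transformers embeddings and cosine similarity; the corpus is toy data, and call_llm is a hypothetical stand-in for whichever hosted model (OpenAI, Gemini, etc.) a project would actually use.

```python
# Minimal RAG retrieval sketch: embed a small corpus, retrieve top-k passages,
# and assemble a grounded prompt for an LLM.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

corpus = [
    "Policy A covers reimbursement of travel expenses.",
    "Policy B describes the data-retention schedule.",
    "Policy C defines the incident-escalation matrix.",
]
doc_vecs = model.encode(corpus, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    q = model.encode([question], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # cosine similarity on normalized vectors
    return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

question = "How long do we retain customer data?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# answer = call_llm(prompt)  # hypothetical LLM call; swap in the real client
```

In production, the in-memory corpus and numpy scan would typically be replaced by a vector database, but the embed-retrieve-prompt flow stays the same.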
Posted 1 week ago
2.0 - 5.0 years
4 - 7 Lacs
Hyderabad
Work from Office
YOUR IMPACT
Are you passionate about developing mission-critical, high-quality software solutions, using cutting-edge technology, in a dynamic environment? We are Compliance Engineering, a global team of more than 300 engineers and scientists who work on the most complex, mission-critical problems. We:
- build and operate a suite of platforms and applications that prevent, detect, and mitigate regulatory and reputational risk across the firm;
- have access to the latest technology and to massive amounts of structured and unstructured data;
- leverage modern frameworks to build responsive and intuitive UX/UI and Big Data applications.
Compliance Engineering is looking to fill several big data software engineering roles. Your first deliverable and success criteria will be the deployment, in 2025, of new complex data pipelines and surveillance models to detect inappropriate trading activity.
HOW YOU WILL FULFILL YOUR POTENTIAL
As a member of our team, you will:
- partner globally with sponsors, users, and engineering colleagues across multiple divisions to create end-to-end solutions;
- learn from experts;
- leverage various technologies including Java, Spark, Hadoop, Flink, MapReduce, HBase, JSON, Protobuf, Presto, Elasticsearch, Kafka, and Kubernetes (a streaming sketch follows this listing);
- be able to innovate and incubate new ideas;
- have an opportunity to work on a broad range of problems, including negotiating data contracts, capturing data quality metrics, processing large-scale data, and building surveillance detection models;
- be involved in the full life cycle: defining, designing, implementing, testing, deploying, and maintaining software systems across our products.
QUALIFICATIONS
A successful candidate will possess the following attributes:
- A Bachelor's or Master's degree in Computer Science, Computer Engineering, or a similar field of study.
- Expertise in Java, as well as proficiency with databases and data manipulation.
- Experience in end-to-end solutions, automated testing, and SDLC concepts.
- The ability (and tenacity) to clearly express ideas and arguments in meetings and on paper.
Experience in some of the following is desired and can set you apart from other candidates:
- developing in large-scale systems, such as MapReduce on Hadoop/HBase;
- data analysis using tools such as SQL, Spark SQL, and Zeppelin/Jupyter;
- API design, such as to create interconnected services;
- knowledge of the financial industry and compliance or risk functions;
- ability to influence stakeholders.
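As an illustration of the streaming side of such surveillance pipelines (in Python rather than the Java emphasized above), here is a hedged Structured Streaming sketch that reads trade events from Kafka and flags high-volume traders; the broker, topic, schema, and threshold are invented for the example.

```python
# Hedged sketch of a Structured Streaming job over Kafka trade events.
from pyspark.sql import SparkSession, functions as F, types as T

spark = SparkSession.builder.appName("trade-surveillance-sketch").getOrCreate()

schema = T.StructType([
    T.StructField("trader_id", T.StringType()),
    T.StructField("symbol", T.StringType()),
    T.StructField("qty", T.LongType()),
    T.StructField("ts", T.TimestampType()),
])

trades = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "trades")                     # placeholder topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("t"))
    .select("t.*")
)

# Toy detection rule: flag traders exceeding 10k shares in a 5-minute window
alerts = (
    trades.withWatermark("ts", "10 minutes")
    .groupBy(F.window("ts", "5 minutes"), "trader_id")
    .agg(F.sum("qty").alias("total_qty"))
    .filter(F.col("total_qty") > 10_000)
)

alerts.writeStream.format("console").outputMode("update").start().awaitTermination()
```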
Posted 1 week ago
3.0 - 9.0 years
5 - 11 Lacs
Bengaluru
Work from Office
Not Applicable. Specialism: Data, Analytics & AI. Management Level: Senior Associate.
Summary: In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.
Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Job Accountabilities:
- Hands-on experience with Azure data components such as ADF, Databricks, and Azure SQL.
- Good programming logic sense in SQL.
- Good PySpark knowledge for Azure Databricks.
- Understanding of data lake and data warehouse concepts.
- Understanding of unit and integration testing (a test sketch follows this listing).
- Good communication skills to express thoughts and interact with business users.
- Understanding of data security and data compliance.
- Understanding of the Agile model.
- Understanding of project documentation.
- Certification (good to have).
- Domain knowledge.
Mandatory skill sets: Azure DE, ADB, ADF, ADL
Preferred skill sets: Azure DE, ADB, ADF, ADL
Years of experience required: 3 to 9 years
Education qualification: Graduate Engineer or Management Graduate
Education Degrees/Field of Study required: Master of Engineering, Bachelor of Engineering. Degrees/Field of Study preferred:
Required Skills: Microsoft Azure
Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more}
Travel Requirements Government Clearance Required?
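The unit-testing expectation above might look like the following pytest sketch for a small PySpark transformation; the dedupe_orders function and its column names are illustrative, not part of the posting.

```python
# Sketch of a pytest-style unit test for a PySpark transformation.
import pytest
from pyspark.sql import SparkSession, functions as F

def dedupe_orders(df):
    """Keep one row per order_id, preferring the latest updated_at."""
    from pyspark.sql.window import Window
    w = Window.partitionBy("order_id").orderBy(F.col("updated_at").desc())
    return df.withColumn("rn", F.row_number().over(w)).filter("rn = 1").drop("rn")

@pytest.fixture(scope="session")
def spark():
    # Local Spark session so the test runs without a cluster
    return SparkSession.builder.master("local[2]").appName("tests").getOrCreate()

def test_dedupe_keeps_latest(spark):
    df = spark.createDataFrame(
        [("o1", "2024-01-01"), ("o1", "2024-02-01"), ("o2", "2024-01-15")],
        ["order_id", "updated_at"],
    )
    out = {r["order_id"]: r["updated_at"] for r in dedupe_orders(df).collect()}
    assert out == {"o1": "2024-02-01", "o2": "2024-01-15"}
```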
Posted 1 week ago
3.0 - 6.0 years
5 - 8 Lacs
Kolkata
Work from Office
Not Applicable. Specialism: Data, Analytics & AI. Management Level: Associate.
Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in artificial intelligence and machine learning at PwC will focus on developing and implementing advanced AI and ML solutions to drive innovation and enhance business processes. Your work will involve designing and optimising algorithms, models, and systems to enable intelligent decision-making and automation.
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
Responsibilities (position responsibilities and expectations):
- Designing and building analytical/DL/ML algorithms using Python, R, and other statistical tools.
- Strong data representation and lucid presentation (of analysis/modelling output) using Python, R Markdown, PowerPoint, Excel, etc.
- Ability to learn a new scripting language or analytics platform.
Technical skills required (must have):
- Hands-on exposure to Generative AI (design and development of GenAI applications in production).
- Strong understanding of RAG, vector databases, LangChain, and multimodal AI applications.
- Strong understanding of deploying and optimizing AI applications in production.
- Strong knowledge of statistical and data mining techniques such as linear and logistic regression analysis, decision trees, bagging, boosting, time series, and non-parametric analysis.
- Strong knowledge of DL and neural network architectures (CNN, RNN, LSTM, Transformers, etc.).
- Strong knowledge of SQL and R/Python, and experience with distributed data/computing tools and IDEs.
- Experience in advanced text analytics (NLP, NLU, NLG).
- Strong hands-on experience in end-to-end statistical model development and implementation.
- Understanding of LLMOps and MLOps for scalable ML development.
- Basic understanding of DevOps and deployment of models into production (PyTorch, TensorFlow, etc.).
- Expert-level proficiency in algorithm-building languages like SQL, R, and Python, and data visualization tools like Shiny, Qlik, Power BI, etc.
- Exposure to cloud platform (Azure, AWS, or GCP) technologies and services such as Azure AI / SageMaker / Vertex AI, AutoML, Azure Index, Azure Functions, OCR, OpenAI, storage, scaling, etc.
Technical skills required (any one or more):
- Experience in video/image analytics (computer vision).
- Experience in IoT/machine-log data analysis.
- Exposure to data analytics platforms like Domino Data Lab, C3.ai, H2O, Alteryx, or KNIME.
- Expertise in cloud analytics platforms (Azure, AWS, or Google).
- Experience in process mining with expertise in Celonis or other tools.
- Proven capability in using Generative AI services like OpenAI and Google (Gemini).
- Understanding of agentic AI frameworks (LangGraph, AutoGen, etc.).
- Understanding of fine-tuning for pre-trained models like GPT, LLaMA, Claude, etc. using LoRA, QLoRA, and PEFT techniques.
- Proven capability in building customized models from open-source distributions like Llama and Stable Diffusion.
Mandatory skill sets: AI chatbots, data structures, GenAI, object-oriented programming, IDE, API, LLM prompts, Streamlit
Preferred skill sets: AI chatbots, data structures, GenAI, object-oriented programming, IDE, API, LLM prompts, Streamlit
Years of experience required: 3-6 years
Education qualification: BE, B.Tech, M.Tech, M.Stat, Ph.D., M.Sc. (Stats/Maths)
Education Degrees/Field of Study required: Doctor of Philosophy, Bachelor of Engineering, Bachelor of Technology. Degrees/Field of Study preferred:
Required Skills: Chatbots, Data Structures, Generative AI
Accepting Feedback, Active Listening, AI Implementation, C++ Programming Language, Communication, Complex Data Analysis, Data Analysis, Data Infrastructure, Data Integration, Data Modeling, Data Pipeline, Data Quality, Deep Learning, Emotional Regulation, Empathy, GPU Programming, Inclusion, Intellectual Curiosity, Java (Programming Language), Machine Learning, Machine Learning Libraries, Named Entity Recognition, Natural Language Processing (NLP), Natural Language Toolkit (NLTK) {+ 20 more}
Travel Requirements Government Clearance Required?
Posted 1 week ago
7.0 - 12.0 years
9 - 14 Lacs
Bengaluru
Work from Office
QUALIFICATIONS
A successful candidate will possess the following attributes:
- A Bachelor's, Master's, or PhD degree in Computer Science, Machine Learning, Mathematics, or a similar field of study.
- Preferably 7+ years of AI/ML industry experience for Bachelor's/Master's, 4+ years for PhD, with a focus on language models.
- Strong foundation in machine learning algorithms, including deep learning architectures (e.g., transformers, RNNs, CNNs).
- Proficiency in Python and relevant libraries/frameworks such as TensorFlow, PyTorch, Hugging Face Transformers, and scikit-learn.
- Demonstrated expertise in GenAI techniques, including but not limited to Retrieval-Augmented Generation (RAG), model fine-tuning (a LoRA sketch follows this listing), prompt engineering, AI agents, and evaluation techniques.
- Experience working with embedding models and vector databases.
- Experience with MLOps practices, including model deployment, containerization (Docker, Kubernetes), CI/CD, and model monitoring.
- Strong verbal and written communication skills.
- Curiosity, ownership, and willingness to work in a collaborative environment.
- Proven ability to mentor and guide junior engineers.
Experience in some of the following is desired and can set you apart from other candidates:
- Experience with agentic frameworks (e.g., LangChain, AutoGen) and their application to real-world problems.
- Understanding of scalability and performance optimization techniques for real-time inference, such as quantization, pruning, and knowledge distillation.
- Experience with model interpretability techniques.
- Prior experience in code reviews/architecture design for distributed systems.
- Experience with data governance and data quality principles.
- Familiarity with financial regulations and compliance requirements.
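As a concrete (and heavily hedged) illustration of the model fine-tuning item, here is a minimal LoRA setup with Hugging Face PEFT; the gpt2 base model, rank, and target module are example choices that vary from model to model.

```python
# Minimal sketch: attach LoRA adapters to a small causal LM with Hugging Face PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "gpt2"  # placeholder checkpoint; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    r=8,                        # low-rank dimension
    lora_alpha=16,              # scaling factor
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection in GPT-2; varies per model
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of weights are trainable
# From here, train with transformers.Trainer or a custom loop as usual.
```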
Posted 1 week ago
2.0 - 8.0 years
20 - 25 Lacs
Hyderabad
Work from Office
We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible. As a Lead Software Engineer at JPMorgan Chase within Corporate Technology, you play a vital role in an agile team dedicated to enhancing, building, and delivering reliable, market-leading technology products in a secure, stable, and scalable manner. As a key technical contributor, you are tasked with implementing essential technology solutions across diverse technical domains, supporting various business functions to achieve the firm's strategic goals.
Job responsibilities:
- Develop appropriate-level designs and ensure consensus from peers where necessary.
- Collaborate with software engineers and cross-functional teams to design and implement deployment strategies using AWS Cloud and Databricks pipelines.
- Work with software engineers and teams to design, develop, test, and implement solutions within applications.
- Engage with technical experts, key stakeholders, and team members to resolve complex problems effectively.
- Understand leadership objectives and proactively address issues before they impact customers.
- Design, develop, and maintain robust data pipelines to ingest, process, and store large volumes of data from various sources.
- Implement ETL (Extract, Transform, Load) processes to ensure data quality and integrity using tools like Apache Spark and PySpark (an ETL skeleton follows this listing).
- Monitor and optimize the performance of data systems and pipelines.
- Implement best practices for data storage, retrieval, and processing.
- Maintain comprehensive documentation of data systems, processes, and workflows.
- Ensure compliance with data governance and security policies.
Required qualifications, capabilities, and skills:
- Formal training or certification in software engineering concepts and 5+ years of applied experience.
- Formal training or certification in AWS/Databricks with 10+ years of applied experience.
- Expertise in programming languages such as Python and PySpark.
- 10+ years of professional experience in designing and implementing data pipelines in a cloud environment.
- Proficient in design, architecture, and development using AWS services, Databricks, Spark, Snowflake, etc.
- Experience with continuous integration and continuous delivery tools like Jenkins, GitLab, or Terraform.
- Familiarity with container and container orchestration technologies such as ECS, Kubernetes, and Docker.
- Ability to troubleshoot common Big Data and Cloud technologies and issues.
- Practical cloud-native experience.
Preferred qualifications, capabilities, and skills:
- 5+ years of experience in leading and developing data solutions in the AWS cloud.
- 10+ years of experience in building, implementing, and managing data pipelines using Databricks on Spark or similar cloud technologies.
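The ETL responsibilities above could be sketched, under invented bucket names and thresholds, as a simple extract-transform-quality-gate-load job in PySpark:

```python
# Illustrative ETL skeleton: extract -> transform -> quality gate -> load.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

raw = spark.read.parquet("s3://example-raw/transactions/")            # extract
txn = (
    raw.filter(F.col("amount").isNotNull())                           # transform
       .withColumn("txn_date", F.to_date("event_ts"))
)

# Simple quality gate: fail the run loudly rather than load bad data
null_ratio = txn.filter(F.col("account_id").isNull()).count() / max(txn.count(), 1)
if null_ratio > 0.01:
    raise ValueError(f"account_id null ratio {null_ratio:.2%} exceeds 1% threshold")

txn.write.mode("append").partitionBy("txn_date").parquet(             # load
    "s3://example-curated/transactions/"
)
```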
Posted 1 week ago
7.0 - 9.0 years
20 - 25 Lacs
Bengaluru
Work from Office
You're ready to gain the skills and experience needed to grow within your role and advance your career, and we have the perfect Applied AI/ML opportunity for you. As an Applied AI/ML Associate at JPMorgan Chase within Commercial and Investment Banking, you'll be an integral part of an agile team that works to enhance, build, and deliver trusted market-leading technology products in a secure, stable, and scalable way. As a core technical contributor, you are responsible for conducting critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives. You will be working with cutting-edge technologies to ensure scalable, reliable, and efficient AI solutions.
Job Responsibilities:
- Design and maintain continuous integration and continuous deployment (CI/CD) pipelines to automate the testing, deployment, and updating of machine learning models.
- Manage and optimize the infrastructure required for running machine learning models in AWS, including cloud services, containerization (e.g., Docker), and orchestration tools (e.g., Kubernetes).
- Implement monitoring and logging solutions to track model performance, detect anomalies, and ensure models are operating as expected in production.
- Maintain version control for models and data, ensuring traceability and compliance with governance policies, and ensure that deployed models adhere to security best practices and comply with relevant regulations and standards.
- Identify opportunities to eliminate or automate remediation of recurring issues to improve the overall operational stability of software applications and systems.
Required qualifications, capabilities, and skills:
- Formal training or certification in machine learning concepts and 2+ years of applied experience.
- Strong expertise in deploying and managing machine learning models in production environments.
- Advanced Python programming skills, including Pandas, NumPy, and scikit-learn.
- Proficiency in building and maintaining CI/CD pipelines for machine learning workflows.
- Proficient in all aspects of the Software Development Life Cycle.
- Advanced understanding of agile methodologies such as CI/CD, application resiliency, and security.
- Demonstrated proficiency in software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile).
- Expertise in AWS cloud and containerization technologies (e.g., Docker, Kubernetes).
- Experience with monitoring and logging tools (e.g., Prometheus, Grafana, ELK Stack).
- Excellent problem-solving skills, attention to detail, and strong communication skills to collaborate effectively with cross-functional teams.
- Hands-on practical experience delivering system design, application development, testing, and operational stability.
Preferred qualifications, capabilities, and skills:
- Proven experience in deploying and managing machine learning models in production environments.
- Strong ability to monitor ML models in production, addressing model performance and data quality issues effectively.
- Working knowledge of security best practices and compliance standards for machine learning systems.
- Experience with infrastructure optimization techniques to enhance performance and efficiency.
- Development of REST APIs using frameworks such as Flask or FastAPI for seamless integration into business solutions (a minimal serving sketch follows this listing).
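The REST-API item in the preferred qualifications might look like this minimal FastAPI serving sketch; model.pkl, the feature names, and the endpoint path are placeholder assumptions (pydantic v2 API).

```python
# Minimal FastAPI model-serving sketch for a pre-trained scikit-learn model.
import joblib
import pandas as pd
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.pkl")  # hypothetical serialized model

class Features(BaseModel):
    amount: float
    tenure_months: int

@app.post("/predict")
def predict(features: Features):
    # Build a one-row frame matching the training feature layout
    df = pd.DataFrame([features.model_dump()])
    return {"prediction": float(model.predict(df)[0])}

# Run locally with: uvicorn app:app --port 8000
```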
Posted 1 week ago
1.0 - 5.0 years
3 - 6 Lacs
Gurugram
Work from Office
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.
Global Commercial Services (GCS) serves millions of business customers around the world, from mom-and-pop shops to approximately 60% of the Fortune 500. We are the number one issuer of commercial cards globally, the number one issuer of small business cards in the U.S., and represent approximately 40% of the company's total revenues. Our vision is to be essential to our customers' businesses every day. We do that by offering a diverse range of payment and cashflow tools our customers need to run and grow their businesses: from a wide range of traditional card products, to working capital and supply chain financing, to new digital solutions that make it easy for our customers to manage a full range of their financial and payment needs.
The Client Hierarchy COE focuses on understanding client needs, providing consultancy, and executing to provide accurate and compliant data to global commercial clients. The team partners closely with the client, strategic partners, and multiple internal American Express teams: Client Management (CLM), Implementation (IM), Central Onboarding Team (COT), Account Consultancy Team (ACT), PA Servicing, Global New Accounts (GNA), Technologies, and Operational Excellence (OE). This role is an exciting opportunity to be part of the GCS growth story in a dynamic, intellectually stimulating, open, fun, and collaborative environment. The incumbent in this role will be responsible for onboarding program managers of global commercial clients, solving a major client pain point with data quality, and driving the enterprise requirement of digital product adoption with enhanced data capabilities.
Key responsibilities include the following:
- Onboard clients to manage their programs digitally.
- Build and maintain global reporting for corporate clients.
- Provide consultation to Client Management, Onboarding, Program Administrators, etc. on maintaining online data security per enterprise policies.
- Provide extraordinary customer issue resolution for queries relating to the @work tool (this requires customer centricity to understand each client's needs, as each client has a different reporting setup).
- Other departmental and administrative work.
Good-to-have skills:
- Data analysis and presentation.
- Report/dashboard creation and change management.
- Technical troubleshooting experience.
- Consulting on insights and recommendations.
Minimum Qualifications:
- Graduate with a minimum of 2 years of analytical experience in a client servicing environment.
- Excellent communication skills, both verbal and written.
- Proven ability to learn new skills in a technical environment; a basic technology background is an added advantage.
- Strong flexibility/adaptability to manage multiple tasks within stringent time frames while working with minimal supervision.
- A team player who possesses the skills to work with and support team members.
- Good Excel skills, with the ability to perform basic functions like VLOOKUP, IF, etc.
- The incumbent should have a performance rating of G3L3 or better in the last review.
Enterprise Leadership Behaviors:
- Set The Agenda: Define What Winning Looks Like, Put Enterprise Thinking First, Lead with an External Perspective.
- Bring Others With You: Build the Best Team, Seek & Provide Coaching & Feedback, Make Collaboration Essential.
- Do It The Right Way: Communicate Frequently, Candidly & Clearly; Make Decisions Quickly & Effectively; Live the Blue Box Values; Great Leadership Demands Courage.
Posted 1 week ago
1.0 - 3.0 years
3 - 6 Lacs
Chennai
Work from Office
Why Quvia: Founded in 2019, Quvia is a fast-growing, Series A tech startup passionate about making connectivity and digital experiences better for everyone, everywhere. Our industry-first solutions are already addressing major challenges for companies in the aviation and cruise industries, and we're just getting started. Quvia is headquartered in the greater Miami region, with offices in the UK and India, and remote teams around the world. As an early-stage company, all Quvia employees have the opportunity to make a significant impact on our growth trajectory and the future of the industries we serve. Quvia is backed by Columbia Capital, a respected venture capital firm founded in 1989 that has raised over $5 Bn of fund commitments.
What You Will Do: As a Data Engineer, you will lead the design and development of scalable, reliable, and high-performance data systems that power analytics and decision-making across the organization. You will serve as a technical authority in data engineering and work cross-functionally to elevate our data infrastructure, accelerate innovation, and drive strategic outcomes. Your key responsibilities will include:
- Data Architecture & Pipeline Development: Design and build end-to-end data pipelines, including streaming and batch ETL processes, using modern tools and platforms (e.g., Kafka, Airflow, dbt); a minimal DAG sketch follows this listing.
- Data Modeling & Warehousing: Develop optimized data models and warehouse schemas to support business intelligence, reporting, and machine learning use cases.
- Collaboration & Communication: Work closely with analytics, data science, DevOps, and product engineering teams to integrate and align data infrastructure with company goals.
- Data Quality & Governance: Implement robust monitoring, testing, and governance practices to ensure data accuracy, consistency, and compliance.
- Performance Tuning & Optimization: Continuously optimize pipeline performance, storage strategies, and query execution for scalability and efficiency.
- Mentorship & Leadership: Mentor junior data engineers, share best practices, and help foster a high-performing, collaborative data team.
- Innovation & Improvement: Drive continuous improvement by evaluating new tools, frameworks, and processes to enhance our data ecosystem.
What You Will Need
Required Skills & Experience:
- Strong programming skills in Python or Java for data pipeline and backend development.
- Deep understanding of SQL and modern data warehousing practices.
- Proficiency with at least one major cloud platform (AWS, GCP, Azure).
- Experience with Kafka for real-time data streaming and ingestion.
- Hands-on experience with Git, Docker, dbt, Jenkins, and Airflow for version control, containerization, and orchestration.
- Ability to lead technical projects and collaborate effectively with cross-functional stakeholders.
Nice to Have:
- Experience with ClickHouse, including table design, performance tuning, and MergeTree optimization.
- Familiarity with Kubernetes for orchestration of containerized applications.
What We Offer:
- Deep domain exposure in the aviation and maritime industries.
- Cross-functional collaboration with global teams.
- Day-one health benefits.
- Generous paid time off.
- Performance-based bonuses.
- A fast-paced, collaborative environment with real ownership and impact.
Quvia is an Equal Opportunity Employer. Employment opportunities at Quvia are based upon one's qualifications and capabilities to perform the essential functions of a particular job.
All employment opportunities are provided without regard to race, religion, sex (including sexual orientation and transgender status), pregnancy, childbirth or related medical conditions, national origin, age, veteran status, disability, genetic information, or any other characteristic protected by law.
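For the pipeline-development bullet in the listing above, here is a hedged Airflow DAG sketch wiring a placeholder ingestion task into a dbt run, with the retry and alerting behaviour such roles typically describe; the task bodies, schedule, and alert address are assumptions.

```python
# Hedged Airflow DAG sketch: ingest -> dbt transform, with retries and failure alerting.
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.operators.bash import BashOperator

def ingest_events():
    # Placeholder: consume from Kafka / land files in object storage
    print("ingesting...")

default_args = {
    "retries": 3,
    "retry_delay": timedelta(minutes=5),
    "email": ["data-oncall@example.com"],  # hypothetical alert address
    "email_on_failure": True,
}

with DAG(
    dag_id="events_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@hourly",   # Airflow 2.4+ keyword
    catchup=False,
    default_args=default_args,
) as dag:
    ingest = PythonOperator(task_id="ingest", python_callable=ingest_events)
    transform = BashOperator(task_id="dbt_run", bash_command="dbt run --select events")
    ingest >> transform
```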
Posted 1 week ago
1.0 - 8.0 years
3 - 10 Lacs
Chennai
Work from Office
As a Business Analyst for Gen AI, you will play a crucial role in bridging the gap between business needs and technological solutions. You will work with cross-functional teams to identify opportunities where artificial intelligence and data-driven insights can enhance business processes and decision-making. Your primary responsibilities will include:
Responsibilities:
- Collaborate with business stakeholders to gather and document requirements for AI-driven projects and initiatives.
- Analyze and interpret data to extract actionable insights and identify opportunities for AI integration.
- Work with data scientists and developers to design AI and machine learning solutions that align with business goals.
- Manage and oversee AI projects from conception to implementation, ensuring timely delivery and meeting project objectives.
- Establish and enforce data governance best practices to ensure data quality and compliance with regulations.
- Identify inefficiencies in current business processes and propose AI-driven improvements.
- Communicate project progress and findings to stakeholders, including non-technical team members.
- Create comprehensive documentation of project requirements, processes, and outcomes.
- Conduct testing and validation of AI models to ensure accuracy and reliability.
- Stay updated on industry trends and advancements in AI and data analytics to suggest innovative solutions.
Skillset Requirements:
- Proficiency in data analysis tools and techniques, including data visualization and statistical analysis.
- Strong understanding of AI and machine learning concepts and their practical applications.
- Excellent verbal and written communication skills to bridge the gap between technical and non-technical stakeholders.
- Experience in project management methodologies and tools.
- Familiarity with programming languages such as Python and data manipulation libraries (e.g., Pandas).
- Understanding of the specific industry or domain in which the AI solutions will be implemented.
- Strong problem-solving skills to identify business challenges and propose AI-driven solutions.
- Ability to adapt to rapidly changing technology and business environments.
- Knowledge of ethical considerations related to AI and data privacy.
- Collaborative mindset to work effectively with cross-functional teams.
- Understanding of business operations, objectives, and strategies.
Education and Experience:
- Bachelor's degree in a relevant field (e.g., Business, Computer Science, Data Science).
- Previous experience as a Business Analyst or in a similar role, preferably in AI or data-driven projects.
- Relevant certifications in AI, data analysis, or project management (e.g., CBAP, PMP, AI-related certifications).
This job description and the skillset requirements provide a comprehensive overview of the role of a Business Analyst for Gen AI, highlighting the key responsibilities and qualifications needed for success in this position. Tailor it to your specific organization's needs and requirements as necessary.
Posted 1 week ago
5.0 - 10.0 years
9 - 13 Lacs
Hyderabad
Work from Office
- End-to-end impact: Build full-stack analytics, from lakehouse pipelines to real-time dashboards.
- Fail-safe engineering: TDD, CI/CD, DAX optimization, Unity Catalog, cluster tuning.
- Modern stack: Databricks, PySpark, Delta Lake, Power BI, Airflow.
- Mentorship culture: Lead code reviews, share best practices, grow as a domain expert.
- Mission-critical context: Help enterprises migrate legacy analytics into cloud-native, governed platforms.
- Compliance-first mindset: Work in HIPAA-aligned environments where precision matters.
Requirements
Key Responsibilities:
- Build scalable pipelines using SQL, PySpark, and Delta Live Tables on Databricks (a minimal DLT sketch follows this listing).
- Orchestrate workflows with Databricks Workflows or Airflow; implement SLA-backed retries and alerting.
- Design dimensional models (star/snowflake) with Unity Catalog and Great Expectations validation.
- Deliver robust Power BI solutions: dashboards, semantic layers, paginated reports, DAX.
- Migrate legacy SSRS reports to Power BI with zero loss of logic or governance.
- Optimize compute and cost through cache tuning, partitioning, and capacity monitoring.
- Document everything, from pipeline logic to RLS rules, in Git-controlled formats.
- Collaborate cross-functionally to convert product analytics needs into resilient BI assets.
- Champion mentorship by reviewing notebooks and dashboards and sharing platform standards.
Must-Have Skills:
- 5+ years in analytics engineering, with 3+ in production Databricks/Spark contexts.
- Advanced SQL (incl. windowing), expert PySpark, Delta Lake, Unity Catalog.
- Power BI mastery: DAX optimization, security rules, paginated reports.
- SSRS-to-Power BI migration experience (RDL logic replication).
- Strong Git and CI/CD familiarity, and cloud platform know-how (Azure/AWS).
- Communication skills to bridge technical and business audiences.
Nice-to-Have Skills:
- Databricks Data Engineer Associate certification.
- Streaming pipeline experience (Kafka, Structured Streaming).
- dbt, Great Expectations, or similar data quality frameworks.
- BI diversity: experience with Tableau, Looker, or similar platforms.
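The Delta Live Tables responsibility above might be sketched as follows; note the dlt module only resolves inside a Databricks DLT pipeline, and the source path, columns, and expectation rule are invented for the example.

```python
# Sketch of a Delta Live Tables definition with a data-quality expectation.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw orders ingested from cloud storage")
def orders_raw():
    # spark is provided by the DLT runtime; the landing path is a placeholder
    return spark.read.format("json").load("/mnt/landing/orders/")

@dlt.table(comment="Validated orders for BI consumption")
@dlt.expect_or_drop("valid_amount", "amount > 0")  # failing rows are dropped and counted
def orders_clean():
    return (
        dlt.read("orders_raw")
        .withColumn("order_date", F.to_date("order_ts"))
        .dropDuplicates(["order_id"])
    )
```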
Posted 1 week ago
2.0 - 7.0 years
4 - 9 Lacs
Mumbai
Work from Office
Manage collateral disputes. Perform daily margin exchange: same-day settlement and exception management. Perform supervisory controls around collateral exposure. Supervise MTM breaks, including data quality and strategic projects. Partner with Middle Offices, Credit Risk, VCG, etc. Focus on deep dives and fixing upstream issues to keep breaks to a minimum. Play a key role in regulatory compliance: CFTC, EMIR, NCMR, etc. Improve controls in the process; ensure 100% accuracy and compliance with regulatory rules. Manage any new analysis requirements across multiple stakeholders. Provide regular updates to senior management on BAU, projects, etc. Supervise UAT testing. Manage strategic automation projects.
Required qualifications, capabilities and skills: CA/MBA with 2 years, or Graduate or Post-Graduate with 4 years of experience in operations. Familiarity with a global bank's process and operational environment, including management and external reporting, is a must. Strong business knowledge, i.e., Investment Banking, including OTC product, process, and system knowledge. Ability to think and act strategically: deal with day-to-day issues as well as plan and execute projects/initiatives, ensuring the team's activities support Operations in attaining its strategic goals. Excellent attention to detail, and an ability to know when a deep-dive approach is appropriate. Ability to drive results through a "hands-on" approach. Excellent verbal and written communication skills, adept at communicating with all levels of the business and technical parts of the organization. Skilled in MS Office applications, including Outlook, PowerPoint, Excel, Word, Access, and Project. Can operate effectively in a dynamic environment with tight deadlines, and can prioritize one's own and the team's work to achieve them. Flexibility to work global hours and willingness to travel globally, if needed.
Preferred qualifications, capabilities and skills: Knowledge of CFTC, EMIR, and NCMR regulations preferable. Experience with OTC Confirmations, Collateral Management, and Reconciliation platforms will be an advantage.
Posted 1 week ago
15.0 - 20.0 years
25 - 30 Lacs
Chennai, Bengaluru
Work from Office
Our client is a global banking firm which provides industry-focused services for clients across geographies. We are currently looking for a seasoned finance leader to manage the capital requirements of the Corporate Investment Banking (CIB) division in Bangalore. Please contact Apoorva Sharma or email your CV directly in Word format with Job ID: 15199 to .
Apply for this Job
Key responsibilities
Reviewing, analyzing, and finalizing monthly CIB RWA (Risk-Weighted Assets) and leverage reporting, ensuring accuracy and completeness.
Identifying gaps in capital processes and driving corrective actions to enhance data quality and integrity.
Designing and maintaining high-quality MIS reports for RWA and leverage on a monthly and daily basis.
Engaging with Risk, Operations, Technology, and Finance teams to track, remediate, and log RWA issues.
Providing subject matter expertise on PRA regulatory guidelines and Basel 3.0 / 3.1 capital requirements for CIB products.
Role requirements
15+ years of experience in Business Finance and Risk Management within the banking or financial services sector.
Deep understanding of CIB products across Trade, Markets, and Banking, along with Credit Risk RWA calculations.
Proven expertise in UK regulatory (PRA) guidelines and Basel capital frameworks.
Excellent communication, presentation, and stakeholder management skills across senior business and risk partners.
High proficiency in MS Office, data warehouse tools, and financial reporting systems.
Posted 1 week ago
6.0 - 9.0 years
3 - 4 Lacs
Noida, Mumbai
Work from Office
Job Description: Senior Analyst, Supply Chain Finance Systems & Analytics
Purpose of the Role:
This role supports the supply chain finance planning process, with a focus on rolling forecast execution, flash reporting, and system data accuracy. It includes working closely with operations, plant finance, and digital tools such as SAC to ensure financial visibility and control.
Key Responsibilities:
Assist in preparing cost forecasts, budget templates, and flash reports for supply chain.
Oversee all Supply Chain Finance operations, e.g., rolling forecasts, MI packs, business partnering, monthly report production, and financial results and variance-reporting commentaries on key business drivers.
Ensure timely resolution of all ad-hoc queries and manage client expectations around the same.
Proactively identify, address, and/or escalate any issues and ensure proper risk mitigation.
Support the business units in effectively maintaining the good health of Supply Chain Finance service delivery.
Success Metrics:
Accuracy and timeliness of data loads and reports.
Quality of flash reporting and variance analysis.
Responsiveness to business partner requests and planning updates.
Key Interfaces:
Internal: Manager SCF, Plant Controllers, Supply Chain Team.
External: SAC Support, Digital Enablement Partners.
Required Experience & Skills:
6-9 years of experience in supply chain finance or operations finance.
Proficiency in Excel, reporting tools, and financial modelling.
Experience with process automation and continuous improvement initiatives.
Experience in Alteryx, Adaptive Planning, or SAC, and familiarity with IBP tools, will be an added advantage.
Strong attention to detail and data quality.
Effective communicator and collaborator in cross-functional teams.
Within-country relocation support is available, and for candidates voluntarily moving internationally, some minimal support is offered through our Volunteer International Transfer Policy.
Business Unit Summary
Job Type: Regular | Finance Planning & Performance Management | Finance
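As an illustration of the flash-reporting and variance-analysis work described above, here is a minimal, tool-agnostic Python sketch comparing actuals against a rolling forecast per plant; the figures, plant codes, and 5% commentary threshold are all invented for the example.

```python
import pandas as pd

# Hypothetical monthly cost data: actuals vs. rolling forecast per plant.
df = pd.DataFrame({
    "plant":    ["P01", "P02", "P03"],
    "forecast": [120.0, 95.0, 210.0],
    "actual":   [131.5, 90.2, 205.8],
})

df["variance"] = df["actual"] - df["forecast"]
df["variance_pct"] = (df["variance"] / df["forecast"] * 100).round(1)

# Flag drivers that warrant commentary in the flash report.
df["needs_commentary"] = df["variance_pct"].abs() > 5
print(df.sort_values("variance_pct", ascending=False))
```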
Posted 1 week ago
5.0 - 10.0 years
30 - 35 Lacs
Hyderabad
Work from Office
We are currently seeking an experienced professional to join our team in the role of Assistant Vice President, Data Modeller.
Principal responsibilities
The FDS Data Team is seeking to recruit a Data Modeller with a passion for organising and transforming complex Finance data into actionable insights, represented within data model structures that are fit for purpose. The role requires a strong analytical mindset and a good understanding of various data modelling techniques and tools, with a proven track record. The individual should have experience designing and implementing efficient data models that cater to the data sourcing, storage, and usage needs of the Finance business and/or front-to-back business domains within a global financial institution.
Support the design and development of FDS conceptual, logical, and application data models as per HSBC's Future State Architecture (Data Asset Strategy), and work across Finance business teams to drive understanding, interpretation, design, and implementation.
Support Finance business and change teams in migrating to target-state data models and Data Asset delivery, driving improvement on current feeds and data issues.
Develop data modelling schemas aligned with enterprise data models and supporting Finance Data Assets.
Contribute to FDS programme model development planning and scheduling.
Continuously improve the FDS data modelling estate, adhering to risk, controls, security, and regulatory compliance standards.
Advise on and support Finance data modelling requirements for new use cases and data changes.
Serve as the FDS data modelling subject matter expert.
Seek opportunities to simplify, automate, rationalise, and improve the efficiency of Finance IT and modelling solutions.
Update and maintain the key FDS modelling artefacts (i.e., Confluence, SharePoint, documents, reports, roadmap, and other domain artefacts).
Provide data modelling and technical advice, and maintain ongoing relationships.
Provide feedback in a timely manner to ensure that model development or modification meets the business need.
Requirements
A minimum of 5 years' experience in data management and modelling solutions working as a Data Modeller within the financial services sector is essential, preferably in a Treasury/Finance function and/or a related front office environment.
Good communication skills, with the ability to influence and present data models (as well as concepts) to technology and business stakeholders.
Good collaboration skills, with demonstrated experience achieving outcomes in a matrixed environment, partnering with data modellers from other domains to build and join shared, reusable data assets.
Experience working with Agile and Scrum in a large, scalable Agile environment, including participation and progress reporting in daily standups.
Experience with leading data modelling tools and modelling documentation using tools such as Visual Paradigm, ERwin, PowerDesigner, ER/Studio, etc.
Knowledge of data modelling standards and modelling technical documentation using Entity Relationship Diagrams (ERD), Unified Modelling Language (UML), or BIAN.
Results-oriented, with the ability to produce solutions that deliver organisational benefit.
Understanding of issue and data quality management, prioritisation, business case development, remediation planning, and tactical or strategic solution delivery.
Exposure to data governance initiatives such as lineage, masking, retention policy, and data quality.
Strong analytical and problem-solving skills, with the ability to work unsupervised and take ownership of key deliverables.
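To make the modelling vocabulary above concrete, here is a minimal sketch of how one relationship from a logical model might be expressed as physical DDL on a lakehouse platform: a slowly changing dimension keyed by a surrogate key, and a fact table referencing it, as an ERD would depict. The schema, table, and column names are hypothetical, and the DDL assumes a Delta-enabled Spark environment; it is an illustration, not HSBC's actual model.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("""
    CREATE TABLE IF NOT EXISTS finance.dim_counterparty (
        counterparty_key BIGINT,      -- surrogate key
        counterparty_id  STRING,      -- natural/business key
        legal_name       STRING,
        country          STRING,
        valid_from       DATE,        -- SCD2 effective dating
        valid_to         DATE
    ) USING DELTA
""")

spark.sql("""
    CREATE TABLE IF NOT EXISTS finance.fact_exposure (
        exposure_date    DATE,
        counterparty_key BIGINT,      -- references dim_counterparty
        product_code     STRING,
        exposure_amount  DECIMAL(18,2)
    ) USING DELTA
    PARTITIONED BY (exposure_date)
""")
```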
Posted 1 week ago
5.0 - 10.0 years
22 - 27 Lacs
Hyderabad
Work from Office
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.
HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.
We are currently seeking an experienced professional to join our team in the role of Consultant Specialist.
In this role, you will:
Design, develop, and optimize data pipelines using Azure Databricks, PySpark, and Prophecy.
Implement and maintain ETL/ELT pipelines using Azure Data Factory (ADF) and Apache Airflow for orchestration.
Develop and optimize complex SQL queries and Python-based data transformation logic.
Work with version control systems (GitHub, Azure DevOps) to manage code and deployment processes.
Automate deployment of data pipelines using CI/CD practices in Azure DevOps.
Ensure data quality, security, and compliance with best practices.
Monitor and troubleshoot performance issues in data pipelines.
Collaborate with cross-functional teams to define data requirements and strategies.
Requirements
5+ years of experience in data engineering, working with Azure Databricks, PySpark, and SQL.
Hands-on experience with Prophecy for data pipeline development.
Proficiency in Python for data processing and transformation.
Experience with Apache Airflow for workflow orchestration.
Strong expertise in Azure Data Factory (ADF) for building and managing ETL processes.
Familiarity with GitHub and Azure DevOps for version control and CI/CD automation.
Solid understanding of data modelling, warehousing, and performance optimization.
Ability to work in an agile environment and manage multiple priorities effectively.
Excellent problem-solving skills and attention to detail.
Experience with Delta Lake and Lakehouse architecture.
Hands-on experience with Terraform or Infrastructure as Code (IaC).
Understanding of machine learning workflows in a data engineering context.
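As a flavour of the orchestration work mentioned above, here is a minimal Airflow DAG sketch that triggers a Databricks job daily with retries and failure alerting. The DAG id, Databricks job id, connection id, and alert address are all hypothetical; it assumes the apache-airflow-providers-databricks package is installed.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator

default_args = {
    "owner": "data-engineering",
    "retries": 2,                          # retry transient failures
    "retry_delay": timedelta(minutes=10),
    "email_on_failure": True,              # simple alerting hook
    "email": ["de-alerts@example.com"],    # hypothetical address
}

with DAG(
    dag_id="daily_sales_ingest",           # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",         # run at 02:00 daily
    catchup=False,
    default_args=default_args,
) as dag:
    run_databricks_job = DatabricksRunNowOperator(
        task_id="run_ingest_job",
        databricks_conn_id="databricks_default",
        job_id=12345,                      # hypothetical Databricks job id
    )
```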
Posted 1 week ago
4.0 - 7.0 years
7 - 11 Lacs
Bengaluru
Work from Office
About Our Company/Team
At Oracle Finergy, we are committed to delivering innovative solutions to the Banking, Financial Services, and Insurance (BFSI) sector. Our team of experts leverages proven methodologies and cutting-edge technologies to address the complex financial needs of our clients. We pride ourselves on being a leading provider of end-to-end banking solutions, enhancing operational efficiency, and ensuring technology aligns with our clients' business goals. Our mission is to empower financial institutions to succeed in a rapidly changing world.
Job Summary
As a Microsoft Fabric Data Engineer/Developer, you will play a vital role in designing, developing, and implementing robust and scalable data solutions within the Microsoft Fabric ecosystem. You will collaborate closely with data architects, business stakeholders, and cross-functional teams to transform raw data into actionable insights, driving informed decision-making across the organization. If you are passionate about data engineering, possess a strong technical background, and excel in collaborative environments, we invite you to join our growing data team.
Career Level: IC2
Microsoft Fabric Development:
Design, develop, and deploy end-to-end data solutions using various components of Microsoft Fabric, including Lakehouse, Data Warehouse, Data Factory, and Data Engineering.
Implement and optimize data pipelines for ingestion, transformation, and curation of data from diverse sources (e.g., Azure Data Lake Storage Gen2, on-premises databases, APIs, third-party systems).
Develop and optimize data models within Microsoft Fabric, ensuring adherence to best practices for performance, scalability, and data quality.
Utilize Power BI for data visualization and reporting, ensuring seamless integration with Fabric data assets.
Azure Data Services Integration:
Demonstrate strong hands-on experience with core Microsoft Azure data services, including Azure Data Factory (for ETL/ELT orchestration), Azure Databricks (for advanced analytics and processing), and Azure Data Lake Storage Gen2.
Integrate Microsoft Fabric solutions with existing Azure data services and other enterprise systems.
Data Architecture & Governance:
Contribute to the design and implementation of robust, scalable, and secure data architectures within the Microsoft Fabric platform.
Implement data quality, validation, and reconciliation processes to ensure data integrity and accuracy.
Apply data governance best practices, including security, access controls (e.g., role-based access control), and compliance within Fabric and Azure Purview.
Documentation & Knowledge Sharing:
Maintain comprehensive documentation for data architectures, pipelines, data models, and processes.
Stay updated with the latest advancements in Microsoft Fabric, Azure data services, and data engineering best practices.
Qualifications & Skills
Mandatory:
Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
4-7 years of professional experience as a Data Engineer, Data Developer, or in a similar role.
Hands-on experience with Microsoft Fabric, including its core components (Lakehouse, Data Warehouse, Data Factory, Data Engineering).
Strong expertise in Microsoft Azure data services: Azure Data Factory (ADF) and Azure Data Lake Storage Gen2.
Proven experience in designing, developing, and maintaining scalable data pipelines.
Solid understanding of data warehousing concepts, dimensional modeling, and data lakehouse architectures.
Proficiency in SQL for data manipulation and querying.
Experience with version control systems (e.g., Git, Azure Repos).
Strong analytical and problem-solving skills with meticulous attention to detail.
Excellent communication skills (written and verbal) and the ability to collaborate effectively with cross-functional teams.
Good-to-Have:
Certification in Microsoft Azure or Microsoft Fabric.
Experience with cloud-based data platforms, such as Amazon Web Services (AWS) or Google Cloud Platform (GCP).
Knowledge of data governance frameworks and best practices.
Additional Notes
Ideal to have some background knowledge of Finance / Investment Banking / Fixed Income / OCIO business.
Self-Assessment Questions
To help you determine if this role is a good fit, please consider the following questions:
1) Can you describe your experience with Microsoft Fabric and its core components, highlighting specific projects or accomplishments?
2) How do you ensure data quality, validation, and reconciliation in your data pipelines, and can you provide an example from a previous project?
3) Can you explain your approach to data governance, including security, access controls, and compliance, and how you've applied this in a previous role?
4) How do you stay up-to-date with the latest advancements in Microsoft Fabric, Azure data services, and data engineering best practices?
5) Can you provide an example of a complex data problem you've solved in the past, highlighting your analytical and problem-solving skills?
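To illustrate the ingestion-with-validation work described above, here is a minimal PySpark sketch in the style of a Fabric or Databricks notebook: land files from ADLS Gen2 into a Lakehouse Delta table behind a basic data quality gate. The storage path, column name, and table name are assumptions.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical ADLS Gen2 landing path for raw invoice files.
src = "abfss://raw@contosolake.dfs.core.windows.net/invoices/"
df = spark.read.option("header", True).csv(src)

# Simple data quality gate: reject the load if required keys are missing.
null_keys = df.filter(F.col("invoice_id").isNull()).count()
if null_keys > 0:
    raise ValueError(f"{null_keys} rows missing invoice_id; aborting load")

# Append validated rows to a bronze-layer Lakehouse table (hypothetical name).
df.write.format("delta").mode("append").saveAsTable("lakehouse.invoices_bronze")
```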
Posted 1 week ago
4.0 - 10.0 years
14 - 19 Lacs
Bengaluru
Work from Office
Specialism: Data, Analytics & AI
Management Level: Senior Associate
Summary
In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.
Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above.
Summary
Looking for an SAP Data Migration Consultant with extensive experience working with SAP Syniti to join a large global brand's S/4HANA programme.
Responsibilities
Proven experience working with data migration on S/4HANA.
Extensive hands-on experience working with Syniti.
Experience with Syniti ADMM, SDR, and Syniti DQ.
End-to-end data migration experience: DMC, LSMW, BAPI, IDocs.
Ability to lead cross-functional teams.
Experience with deduplication, data quality, data reconciliation, and pre-load and post-load reports.
Experience in managing SAP S/4HANA data migration, implementing an SAP ERP solution, and working with various SAP modules (FICO, MM, PP, PS, SD, SC, and EWM).
Understanding of SAP database schema and data loading concepts.
Ability to design and implement an SAP technical solution and a data solution.
Debugging and LTMOM code development experience.
Strong knowledge of MS SQL Server programming, ETL tools (Syniti/BackOffice), and Master Data Management.
Responsibilities
Contribute to cross-functional teams to deliver large-scale SAP implementations.
Collaborate with stakeholders to gather requirements and define project scope.
Maintain high-quality standards throughout the project lifecycle.
Analyze and resolve complex technical issues related to SAP S/4HANA and Syniti ADMM.
Mandatory skill sets
Minimum 4 years of experience working with Syniti ADMM and BODS.
Strong oral and written communication skills.
Flexibility to travel onsite for client meetings and project implementations.
Preferred skill sets
Minimum 4 years of experience working with Syniti ADMM and BODS.
Strong oral and written communication skills.
Flexibility to travel onsite for client meetings and project implementations.
Years of experience required: 4 to 10 years
Education qualification: Graduate Engineer or Management Graduate
Degrees/Field of Study required: Bachelor Degree, Master Degree
Required Skills: SAP Advanced Data Migration and Management (ADMM)
Additional skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more}
Travel Requirements | Government Clearance Required
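Much of the migration work described above hinges on reconciling legacy extracts against loaded data before sign-off. Here is a minimal, tool-agnostic Python sketch of a pre-/post-load record reconciliation; the file names and key column are hypothetical, and a real project would use the Syniti/BODS reconciliation reports mentioned in the posting.

```python
import pandas as pd

# Hypothetical extracts: the legacy source file and a post-load export
# from the target S/4HANA system.
source = pd.read_csv("customer_extract.csv")
loaded = pd.read_csv("s4_customer_load.csv")

source_ids = set(source["customer_id"])
loaded_ids = set(loaded["customer_id"])

report = {
    "source_rows": len(source),
    "loaded_rows": len(loaded),
    "missing_in_target": len(source_ids - loaded_ids),
    "unexpected_in_target": len(loaded_ids - source_ids),
}
for metric, value in report.items():
    print(f"{metric}: {value}")
```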
Posted 1 week ago
1.0 - 3.0 years
8 - 12 Lacs
Gurugram
Work from Office
Donaldson is committed to solving the world's most complex filtration challenges. Together, we make cool things. As an established technology and innovation leader, we are continuously evolving to meet the filtration needs of our changing world. Join a culture of collaboration and innovation that matters and a chance to learn, effect change, and make meaningful contributions at work and in communities.
We are seeking a skilled and motivated Data Engineer II to join the Corporate Technology Data Engineering Team. This role is important for developing and sustaining our data infrastructure, which supports a wide range of R&D, sensor-based, and modeling technologies. The Data Engineer II will design and maintain pipelines that enable the use of complex datasets. This position directly empowers faster decision making by building trustworthy data flows and access for engineers and scientists.
Primary Role Responsibilities:
Develop and maintain data ingestion and transformation pipelines across on-premise and cloud platforms.
Develop scalable ETL/ELT pipelines that integrate data from a variety of sources (e.g., form-based entries, SQL databases, Snowflake, SharePoint).
Collaborate with data scientists, data analysts, simulation engineers, and IT personnel to deliver data engineering and predictive data analytics projects.
Implement data quality checks, logging, and monitoring to ensure reliable operations.
Follow and maintain data versioning, schema evolution, and governance controls and guidelines.
Help administer Snowflake environments for cloud analytics.
Work with more senior staff to improve solution architectures and automation.
Stay updated with the latest data engineering technologies and trends.
Participate in code reviews and knowledge-sharing sessions.
Participate in and plan new data projects that impact business and technical domains.
Required Qualifications & Relevant Experience:
Bachelor's or master's degree in computer science, data engineering, or a related field.
1-3 years of experience in data engineering, ETL/ELT development, and/or backend software engineering.
Demonstrated expertise in Python and SQL.
Demonstrated experience working with data lakes and/or data warehouses (e.g., Snowflake, Databricks, or similar).
Familiarity with source control and development practices (e.g., Git, Azure DevOps).
Strong problem-solving skills and eagerness to work with cross-functional, globalized teams.
Preferred Qualifications (in addition to the required qualifications):
Working experience and knowledge of scientific and R&D workflows, including simulation data and LIMS systems.
Demonstrated ability to balance operational support and longer-term project contributions.
Experience with Java.
Strong communication and presentation skills.
Motivated and self-driven learner.
Donaldson Company has been made aware that there are several recruiting scams targeting job seekers. These scams have attempted to solicit money for job applications and/or collect confidential information; Donaldson will never solicit money during the application or recruiting process. Donaldson only accepts online applications through our Careers | Donaldson Company, Inc. website, and any communication from a Donaldson recruiter would be sent using a donaldson.com email address. If you have any questions about the legitimacy of an employment opportunity, please reach out to talentacquisition@donaldson.com to verify that the communication is from Donaldson.
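To illustrate the "data quality checks, logging, and monitoring" responsibility above, here is a minimal Python sketch of a batch validation gate with logging; the column names and the checks themselves are assumptions chosen to fit the sensor-data context.

```python
import logging

import pandas as pd

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("dq")

def validate_batch(df: pd.DataFrame) -> bool:
    """Return True when the batch passes all checks; log each failure."""
    ok = True
    if df["sensor_id"].isna().any():
        log.error("null sensor_id values: %d", df["sensor_id"].isna().sum())
        ok = False
    if (df["reading"] < 0).any():
        log.error("negative readings: %d", (df["reading"] < 0).sum())
        ok = False
    return ok

# Tiny invented batch to exercise both checks.
batch = pd.DataFrame({"sensor_id": ["A1", "A2", None], "reading": [0.4, -1.2, 0.9]})
if not validate_batch(batch):
    log.warning("batch rejected; routing to quarantine")
```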
Posted 1 week ago
8.0 - 15.0 years
8 - 13 Lacs
Hyderabad
Work from Office
Key Roles and Responsibilities
1. Stakeholder Collaboration & Business Engagement
Participate in data quality implementation discussions with business stakeholders.
Engage in architecture discussions around the Data Quality framework.
Collaborate with O2C, Tax, Trading Partner Finance, and Logistics teams to gather business rules.
Provide business requirements to BODS/LSMW teams to support data loads.
Review and track exceptions related to data quality rules; incorporate logic into tools.
Capture sign-offs from key stakeholders at critical project milestones.
Identify and drive continuous improvements in data standards, data quality rules, and maintenance processes in collaboration with business and MDM teams.
2. Data Quality Rules & Standards Definition
Define and document data quality rules based on business and compliance needs.
Translate business rules into technical definitions to support tool development.
Support the development of rules in data quality (DQ) tools.
Validate rule outputs and communicate findings to business stakeholders.
3. Data Profiling, Cleansing & Monitoring
Create and maintain data quality dashboards and reports.
Review customer master data profiling results and suggest cleansing opportunities.
Support the design and execution of a Data Quality Framework.
Conduct root cause analysis and suggest process/system improvements.
Execute pre- and post-validation checks as part of cleansing efforts.
4. Customer Master Data Management (SAP)
Manage and maintain customer master data directly in SAP where BODS cannot be used.
Ensure accuracy, consistency, and completeness across customer records.
Lead initiatives for identifying obsolete records and define deactivation criteria.
5. Data Migration, Integration & Tool Support
Collaborate with IT teams on data migration, cleansing, and validation activities during SAP projects or enhancements.
Analyze business processes and translate them into functional specifications to enhance master data processes.
Recommend system/process enhancements to improve data quality and governance.
Skills Required
Core Experience
10+ years of experience with the SAP Customer Master module.
Strong knowledge of SAP O2C (Order to Cash) T-codes and fields.
Extensive hands-on experience in creating, maintaining, and validating customer master data.
Domain Knowledge
Familiarity with industry-specific customer processes, including credit management, intercompany trading, trading partner finance, and tax configuration.
In-depth understanding of the customer master lifecycle: creation, change, extension, obsolescence, and deletion.
Data Quality & Governance
Strong skills in defining, documenting, and enforcing data quality rules.
Experience in data profiling, cleansing, and standardization.
Ability to perform root cause analysis and define remediation strategies.
Proficiency in documenting data standards and definitions.
Tools & Technologies
SAP ECC / S/4HANA; SAP SD / O2C; SAP BODS (BusinessObjects Data Services); SAP LSMW (recording, field mapping, batch input); SAP SE16N, SQVI, and custom reporting; Excel, Power BI, SAP IS, Tableau; experience with data quality tools (DQ dashboards, exception reporting).
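As an illustration of the profiling-and-cleansing work described above, here is a minimal Python sketch that profiles a customer master extract for common defects (missing names, missing countries, duplicate tax IDs). The field names follow SAP's KNA1 customer master conventions, but the extract file and the specific checks are hypothetical.

```python
import pandas as pd

# Hypothetical extract of the KNA1 customer master table.
customers = pd.read_csv("kna1_extract.csv")

profile = {
    "total_records": len(customers),
    "missing_name": customers["NAME1"].isna().sum(),       # customer name
    "missing_country": customers["LAND1"].isna().sum(),    # country key
    "duplicate_tax_ids": customers["STCD1"].dropna().duplicated().sum(),
}

# Surface rule violations for the cleansing backlog.
for rule, violations in profile.items():
    print(f"{rule}: {violations}")
```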
Posted 1 week ago