
468 Data Lake Jobs - Page 6

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 - 6.0 years

7 - 8 Lacs

Bengaluru

Work from Office

Diverse Lynx is looking for a Snowflake Developer to join our dynamic team and embark on a rewarding career journey.
- Design and develop data solutions within the Snowflake cloud data platform, including data warehousing, data lake, and data modeling solutions.
- Participate in the design and implementation of data migration strategies.
- Ensure the quality of custom solutions through appropriate testing and debugging procedures.
- Provide technical support and troubleshoot issues as needed.
- Stay up to date with the latest developments in the Snowflake platform and data warehousing technologies.
- Contribute to the ongoing improvement of development processes and best practices.

Posted 2 weeks ago


3.0 - 6.0 years

5 - 9 Lacs

Hubli, Mangaluru, Mysuru

Work from Office

Build an interim archive solution for EDI on the client's Maestr platform, a self-service, cloud-based platform on Azure Data Lake Storage. Experience in storage and retrieval of EDI messages provided by the cloud-based Seeburger EDI platform. Experience as an Azure Data Integration Engineer with expertise in Azure Data Factory, Databricks, Data Lake, Key Vault, and Azure Active Directory. Understanding of security/encryption considerations and options.
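A minimal sketch of what archiving an EDI payload to Azure Data Lake Storage Gen2 could look like from Python using the azure-storage-file-datalake SDK; the account, container, and path names are hypothetical:

```python
# Minimal ADLS Gen2 archive sketch; account, container, and path
# names are hypothetical placeholders.
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://myaccount.dfs.core.windows.net",
    credential="<storage-account-key>",
)
filesystem = service.get_file_system_client("edi-archive")

def archive_message(message_id: str, payload: bytes) -> None:
    # Partition archived EDI messages by date for cheap later retrieval.
    file = filesystem.get_file_client(f"inbound/2024/06/{message_id}.edi")
    file.upload_data(payload, overwrite=True)

archive_message("ORD-0001", b"ISA*00*...~IEA*1*1~")
```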

Posted 2 weeks ago


7.0 - 12.0 years

15 - 20 Lacs

Mumbai

Work from Office

Hi, we have an opening for Manager / Senior Manager - Data Science at our Mumbai location.

Job Summary: The role works as an integral part of the Data & Analytics team at Sun Pharma. It involves leading and managing business-critical data science projects and AI/ML initiatives. The role requires an understanding of business processes in the pharma industry as well as data science and analytics, and the ability to communicate effectively with both technical and functional stakeholders. The position offers a significant opportunity for all-round business understanding and impact, working with in-house teams, business and corporate functions, and technology consultants and partners.

Areas of Responsibility: For designated complex and business-critical digital and technology-based data & AI projects:
- Building and expanding the Data & Analytics CoE and data architecture for global use cases.
- Maintaining a clear understanding of the expected business outcome, business value, and specific outputs/deliverables.
- Managing stakeholders and their expectations from the use of data science and analytics.
- Working in tandem with data guardians from different functions, in line with their requirements and vision.
- Monitoring proofs of concept for new requirements, coordinating with the stakeholders and partners involved, and ensuring timely completion of the pre-agreed objectives.
- Planning benefits realization for data science projects and the execution strategy.
- Reusing data streams wherever applicable to eliminate duplication in the data footprint.
- Managing a team of data engineers, cloud architects, MLOps engineers, and database engineers.
- Structuring the delivery team organization and managing the engagement with the relevant partner ecosystem.
- Managing CI/CD pipelines and building ML products for internal teams.
- Managing budgets, scope, time, and quality; measuring and managing change.
- Targeted communication, orchestration, and governance.
- System/solution usage and adoption, benefit realization and measurement.

Educational Qualification: B.Tech / MBA

Skills:
- Knowledge of data science, ML, statistical techniques, data modelling, data transformation, and cloud hyperscaler environments.
- Use of cloud-based tools for automating ML pipelines and deploying cloud-based models for end users.
- Ability to understand and interpret complex datasets and devise means of merging disparate data sources.
- Strong fundamentals in cloud services and data lake architecture.
- Work involves handling sensitive data and confidential information, requiring discretion on the role holder's part.
- Ability to work under deadlines and comfort with a certain level of ambiguity.
- Ability to multitask and successfully adapt to changes in work priorities.
- Comfortable operating at a strategic level while keeping an eye for detail; self-motivated.

Experience: 7-12 years (preferably IT services / pharma industry experience, from organizations of repute); experience in handling data and data models for different functions.

Posted 2 weeks ago


19.0 - 23.0 years

60 - 75 Lacs

Bengaluru, Delhi / NCR

Hybrid

Preferred candidate profile: A Solution Architect - Data to lead the design and implementation of scalable, secure, and high-performance data solutions. You will play a key role in defining the data architecture and strategy across enterprise platforms, ensuring alignment with business goals and IT standards.
- 18+ years of IT experience, with at least 5 years working as an architect.
- Experience working as a Data Architect.
- Experience architecting reporting and analytics solutions.
- Experience architecting AI & ML solutions.
- Experience with Databricks.

Posted 2 weeks ago


8.0 - 13.0 years

32 - 45 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Job Title: Data Architect
Location: Bangalore, Hyderabad, Chennai, Pune, Gurgaon (hybrid, 2-3 days WFO)
Experience: 8+ years

Position Overview: We are seeking a highly skilled and strategic Data Architect to design, build, and maintain the organization's data architecture. The ideal candidate will be responsible for aligning data solutions with business needs, ensuring data integrity, and enabling scalable and efficient data flows across the enterprise. This role requires deep expertise in data modeling, data integration, cloud data platforms, and governance practices.

Key Responsibilities:
- Architectural Design: Define and implement enterprise data architecture strategies, including data warehousing, data lakes, and real-time data systems.
- Data Modeling: Develop and maintain logical, physical, and conceptual data models to support analytics, reporting, and operational systems.
- Platform Management: Select and oversee implementation of cloud and on-premises data platforms (e.g., Snowflake, Redshift, BigQuery, Azure Synapse, Databricks).
- Integration & ETL: Design robust ETL/ELT pipelines and data integration frameworks using tools such as Apache Airflow, Informatica, dbt, or native cloud services.
- Data Governance: Collaborate with stakeholders to implement data quality, data lineage, metadata management, and security best practices.
- Collaboration: Work closely with data engineers, analysts, software developers, and business teams to ensure seamless and secure data access.
- Performance Optimization: Tune databases, queries, and storage strategies for performance, scalability, and cost-efficiency.
- Documentation: Maintain comprehensive documentation for data structures, standards, and architectural decisions.

Required Qualifications:
- Bachelor's or master's degree in Computer Science, Information Systems, or a related field.
- 5+ years of experience in data architecture, data engineering, or database development.
- Strong expertise in data modeling and in relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB, Cassandra).
- Experience with modern data platforms and cloud ecosystems (AWS, Azure, or GCP).
- Hands-on experience with data warehousing solutions and tools (e.g., Snowflake, Redshift, BigQuery).
- Proficiency in SQL and data scripting languages (e.g., Python, Scala).
- Familiarity with data privacy regulations (e.g., GDPR, HIPAA) and security standards.

Tech Stack:
- AWS Cloud: S3, EC2, EMR, Lambda, IAM; Snowflake DB
- Databricks: Spark/PySpark, Python
- Good knowledge of Bedrock and Mistral AI
- RAG & NLP: LangChain and LangRAG
- LLMs: Anthropic Claude, Mistral, LLaMA, etc.

Posted 2 weeks ago


6.0 - 10.0 years

20 - 30 Lacs

Bengaluru

Remote

Title: Data Lake Implementation Specialist / Consultant
Job Type: Full-time

Job Summary: As a Senior Data Lake Implementation Specialist/Consultant, you will be responsible for designing and implementing data lake solutions on Alibaba Cloud, utilizing either Alibaba services or PostgreSQL, among other compatible solutions. Your expertise will be crucial in integrating the data lake with Zoho and various legacy systems, including reservation systems and legacy ERPs such as Otalio. You will leverage your strong design, architecture, communication, and problem-solving skills to deliver high-quality solutions that meet our clients' needs in the hospitality sector.

Role & responsibilities:
- Lead the design and implementation of data lake solutions on Alibaba Cloud, ensuring scalability, performance, and security.
- Utilize Alibaba Cloud services or PostgreSQL to create efficient data structure, storage, and processing architectures tailored for the hospitality industry.
- Integrate the data lake with Zoho CRM and other legacy systems, including reservation systems and ERPs like Otalio, to ensure seamless data flow and accessibility.
- Collaborate with clients to understand their data requirements and provide tailored solutions that enhance data analytics and reporting capabilities.
- Develop and maintain documentation for data lake architecture, integration processes, and best practices.
- Conduct training sessions and workshops for client teams to ensure effective utilization of the data lake and associated tools.
- Provide ongoing support and troubleshooting for clients post-implementation, addressing any issues that arise.
- Stay updated on the latest trends and technologies in data lake solutions and cloud computing, sharing insights with the team and clients.
- Travel to client locations in Saudi Arabia as needed to provide on-site support and consultation.

Preferred candidate profile (qualifications):
- Bachelor's degree in Computer Science, Information Technology, Data Science, or a related field.
- 8+ years of experience in data lake implementation, with a strong focus on Alibaba Cloud solutions, PostgreSQL, or other compatible technologies.
- Proven experience integrating data lakes with Zoho and legacy systems, including reservation systems and ERPs like Otalio.
- Strong design and architecture skills, with the ability to create scalable and efficient data solutions.
- Excellent communication and interpersonal skills, with a customer-centric approach.
- Strong analytical and problem-solving skills, with the ability to troubleshoot complex issues.
- Ability to work independently and collaboratively in a fast-paced environment.
- Willingness to travel to client locations in Saudi Arabia on demand.

Posted 2 weeks ago


4.0 - 7.0 years

5 - 13 Lacs

Hyderabad

Hybrid

Summary:
- Design, develop, and implement scalable batch/real-time data pipelines (ETL) to integrate data from a variety of sources into the Data Warehouse and Data Lake.
- Design and implement data model changes that align with warehouse dimensional modeling standards.
- Proficient in Data Lake and Data Warehouse concepts and dimensional data models.
- Responsible for maintenance and support of all database environments; design and develop data pipelines, workflows, and ETL solutions in both on-prem and cloud-based environments.
- Design and develop SQL stored procedures, functions, views, and triggers.
- Design, code, test, document, and troubleshoot deliverables; collaborate with others to test and resolve issues with deliverables.
- Maintain awareness of and ensure adherence to Zelis standards regarding privacy.
- Create and maintain design documents, source-to-target mappings, unit test cases, and data seeding.
- Perform data analysis and data quality tests and create audits for the ETLs (see the sketch below).
- Perform continuous integration and deployment using Azure DevOps and Git.

Requirements:
- 3+ years with the Microsoft BI stack (SSIS, SSRS, SSAS).
- 3+ years of data engineering experience, including data analysis.
- 3+ years programming SQL objects (procedures, triggers, views, functions) in SQL Server; experience optimizing SQL queries.
- Advanced understanding of T-SQL, indexes, stored procedures, triggers, functions, views, etc.
- Experience designing and implementing a Data Warehouse.
- Working knowledge of Azure/AWS architecture and Data Lake.
- Must be detail-oriented and able to work under limited supervision.
- Must demonstrate good analytical skills as they relate to data identification and mapping, and excellent oral communication skills.
- Must be flexible, able to multitask, and able to work within deadlines; team-oriented, but also able to work independently.

Preferred Skills:
- Experience working with an ETL tool (DBT preferred).
- Experience designing and developing Azure/AWS Data Factory pipelines.
- Working understanding of columnar MPP cloud data warehousing using Snowflake.
- Working knowledge of managing data in the Data Lake.
- Business analysis experience to analyze data, write code, and drive solutions.
- Working knowledge of Git, Azure DevOps, Agile, Jira, and Confluence.
- Healthcare and/or payment processing experience.

Independence/Accountability: Requires minimal daily supervision. Receives detailed instruction on new assignments and determines next steps with guidance. Regularly reviews goals and objectives with the supervisor. Demonstrates competence in relevant job responsibilities, which allows for an increasing level of independence. Ability to manage and prioritize multiple tasks, work under pressure, and meet deadlines.

Problem Solving: Makes logical suggestions of likely causes of problems and independently suggests solutions. Excellent organizational skills are required to prioritize responsibilities and complete work in a timely fashion. Outstanding ability to multitask as required. Excellent project management and/or business analysis skills. Attention to detail and concern for impact are essential.
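A minimal sketch of the kind of ETL audit described above, using pyodbc against SQL Server; the connection string, stored procedure, and table names are hypothetical:

```python
# Minimal ETL audit sketch with pyodbc against SQL Server; the connection
# string, stored procedure, and table names are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver;DATABASE=edw;Trusted_Connection=yes;"
)
cur = conn.cursor()

# Run a load procedure, then audit staged vs. loaded row counts.
cur.execute("EXEC etl.load_claims_fact @batch_id = ?", 42)
conn.commit()

staged = cur.execute("SELECT COUNT(*) FROM staging.claims").fetchval()
loaded = cur.execute(
    "SELECT COUNT(*) FROM dw.claims_fact WHERE batch_id = ?", 42
).fetchval()
if staged != loaded:
    raise RuntimeError(f"Audit failed: staged {staged} rows, loaded {loaded}")
conn.close()
```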

Posted 2 weeks ago


10.0 - 17.0 years

25 - 30 Lacs

Mumbai, Thane

Work from Office

Manage end-to-end deliveries for the data engineering, EDW, and Data Lake platform. Data modelling experience; 3+ years of experience writing complex SQL queries, procedures, views, functions, and other database objects. Minimum 3 years of experience in cloud computing required.

Posted 2 weeks ago


3.0 - 6.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design to help CxOs envision and build what's next for their businesses.

Your Role:
- Should have developed or worked on at least one Gen AI project.
- Has data pipeline implementation experience with any of these cloud providers: AWS, Azure, GCP.
- Experience with cloud storage, cloud databases, cloud data warehousing and data lake solutions such as Snowflake, BigQuery, AWS Redshift, ADLS, S3.
- Good knowledge of cloud compute services and load balancing.
- Good knowledge of cloud identity management, authentication and authorization.
- Proficiency in using cloud utility functions such as AWS Lambda, AWS Step Functions, Cloud Run, Cloud Functions, Azure Functions (a minimal Lambda sketch follows this listing).
- Experience using cloud data integration services for structured, semi-structured and unstructured data, such as Azure Databricks, Azure Data Factory, Azure Synapse Analytics, AWS Glue, AWS EMR, Dataflow, Dataproc.

Your Profile:
- Good knowledge of infra capacity sizing and costing of cloud services to drive optimized solution architecture, leading to optimal infra investment vs. performance and scaling.
- Able to contribute to architectural choices using various cloud services and solution methodologies.
- Expertise in programming using Python.
- Very good knowledge of cloud DevOps practices such as infrastructure as code, CI/CD components, and automated deployments on cloud.
- Must understand networking, security, design principles and best practices in cloud.

What you will love about working here: We recognize the significance of flexible work arrangements. Be it remote work or flexible work hours, you will get an environment that supports a healthy work-life balance. At the heart of our mission is your career growth; our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

About Capgemini: Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over-55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
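A minimal sketch of the kind of cloud utility function mentioned above: an AWS Lambda handler that copies newly landed S3 objects into a raw data-lake zone. The bucket names, prefixes, and trigger wiring are hypothetical:

```python
# Minimal AWS Lambda handler for event-driven ingestion; bucket names
# and prefixes are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Triggered by an S3 put notification: copy each new object into
    # the raw zone of a hypothetical data lake bucket.
    for record in event["Records"]:
        src_bucket = record["s3"]["bucket"]["name"]
        src_key = record["s3"]["object"]["key"]
        s3.copy_object(
            Bucket="datalake-raw",
            Key=f"ingest/{src_key}",
            CopySource={"Bucket": src_bucket, "Key": src_key},
        )
    return {"status": "ok", "copied": len(event["Records"])}
```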

Posted 2 weeks ago


3.0 - 5.0 years

5 - 7 Lacs

Mumbai, New Delhi, Bengaluru

Work from Office

Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)

Must-have skills: Apache Hudi, Flink, Iceberg, Apache Airflow, Spark, AWS, Azure, GCP, Kafka, SQL

Nomupay is looking for someone to:
- Design, build, and optimize scalable ETL pipelines using Apache Airflow or similar frameworks to process and transform large datasets efficiently (a minimal DAG sketch follows this listing).
- Utilize Spark (PySpark), Kafka, Flink, or similar tools to enable distributed data processing and real-time streaming solutions.
- Deploy, manage, and optimize data infrastructure on cloud platforms such as AWS, GCP, or Azure, ensuring security, scalability, and cost-effectiveness.
- Design and implement robust data models, ensuring data consistency, integrity, and performance across warehouses and lakes.
- Enhance query performance through indexing, partitioning, and tuning techniques for large-scale datasets.
- Manage cloud-based storage solutions (Amazon S3, Google Cloud Storage, Azure Blob Storage) and ensure data governance, security, and compliance.
- Work closely with data scientists, analysts, and software engineers to support data-driven decision-making, while maintaining thorough documentation of data processes.

Required skills:
- Strong proficiency in Python and SQL, with additional experience in languages such as Java or Scala.
- Hands-on experience with frameworks like Spark (PySpark), Kafka, Apache Hudi, Iceberg, Apache Flink, or similar tools for distributed data processing and real-time streaming.
- Familiarity with cloud platforms like AWS, Google Cloud Platform (GCP), or Microsoft Azure for building and managing data infrastructure.
- Strong understanding of data warehousing concepts and data modeling principles.
- Experience with ETL tools such as Apache Airflow or comparable data transformation frameworks.
- Proficiency in working with data lakes and cloud-based storage solutions like Amazon S3, Google Cloud Storage, or Azure Blob Storage.
- Expertise in Git for version control and collaborative coding.
- Expertise in performance tuning for large-scale data processing, including partitioning, indexing, and query optimization.
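A minimal sketch of the kind of Airflow ETL pipeline described above (Airflow 2.x); the DAG id and task bodies are hypothetical placeholders for real extract/transform/load logic:

```python
# Minimal Airflow 2.x ETL DAG sketch; names and task bodies are
# hypothetical placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    ...  # e.g., pull yesterday's events from an API or object store

def transform():
    ...  # e.g., clean and aggregate with pandas or Spark

def load():
    ...  # e.g., write partitioned Parquet to cloud storage or a warehouse

with DAG(
    dag_id="daily_events_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_transform >> t_load
```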

Posted 2 weeks ago


5.0 - 8.0 years

7 - 10 Lacs

Mumbai, New Delhi, Bengaluru

Work from Office

Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)

Must-have skills: Data Governance, Lakehouse architecture, Medallion architecture, Azure Databricks, Azure Synapse, Data Lake Storage, Azure Data Factory

Intelebee LLC is looking for a Data Engineer: We are seeking a skilled and hands-on Cloud Data Engineer with 5-8 years of experience to drive end-to-end data engineering solutions. The ideal candidate will have a deep understanding of dimensional modeling, data warehousing (DW), Lakehouse architecture, and the Medallion architecture. This role will focus on leveraging the Azure/AWS ecosystem to build scalable, efficient, and secure data solutions. You will work closely with customers to understand requirements, create technical specifications, and deliver solutions that scale across both on-premise and cloud environments.

Key Responsibilities (end-to-end data engineering):
- Lead the design and development of data pipelines for large-scale data processing, utilizing Azure tools such as Azure Data Factory, Azure Synapse, Azure Functions, Logic Apps, Azure Databricks, and Data Lake Storage, or AWS tools such as AWS Lambda and AWS Glue.
- Develop and implement dimensional modeling techniques and data warehousing solutions for effective data analysis and reporting.
- Build and maintain Lakehouse and Medallion architecture solutions for streamlined, high-performance data processing (a minimal bronze-to-silver sketch follows this listing).
- Implement and manage Data Lakes on Azure/AWS, ensuring that data storage and processing is both scalable and secure.
- Handle large-scale databases (both on-prem and cloud), ensuring high availability, security, and performance.
- Design and enforce data governance policies for data security, privacy, and compliance within the Azure ecosystem.
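A minimal bronze-to-silver Medallion hop sketched in PySpark with Delta Lake; the ADLS paths and column names are hypothetical:

```python
# Minimal medallion (bronze -> silver) hop in PySpark with Delta Lake;
# paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided by the Databricks runtime

# Bronze: land raw JSON as-is, stamped with ingestion metadata.
raw = (spark.read.json("abfss://landing@myacct.dfs.core.windows.net/orders/")
       .withColumn("_ingested_at", F.current_timestamp()))
raw.write.format("delta").mode("append").save("/mnt/lake/bronze/orders")

# Silver: deduplicated, typed, validated records.
bronze = spark.read.format("delta").load("/mnt/lake/bronze/orders")
silver = (bronze.dropDuplicates(["order_id"])
          .withColumn("order_ts", F.to_timestamp("order_ts"))
          .filter(F.col("order_id").isNotNull()))
silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/orders")
```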

Posted 2 weeks ago


5.0 - 10.0 years

20 - 35 Lacs

Hyderabad

Hybrid

Job Title: Big Data Engineer
Experience: 5-9 years
Location: Hyderabad (hybrid)
Employment Type: Full-Time

Job Summary: We are seeking a skilled Big Data Engineer with 5-9 years of experience in building and managing scalable data pipelines and analytics solutions. The ideal candidate will have strong expertise in Big Data, Hadoop, Apache Spark, SQL, and Data Lake / Data Warehouse architectures. Experience working with any cloud platform (AWS, Azure, or GCP) is preferred. Candidates from top universities or colleges are preferred.

Posted 2 weeks ago


4.0 - 6.0 years

6 - 8 Lacs

Bengaluru, Bellandur

Hybrid

Hiring an AWS Data Engineer for a 6-month hybrid contractual role based in Bellandur, Bengaluru. The ideal candidate will have 4-6 years of experience in data engineering, with strong expertise in AWS services (S3, EC2, RDS, Lambda, EKS), PostgreSQL, Redis, Apache Iceberg, and Graph/Vector Databases. Proficiency in Python or Golang is essential. Responsibilities include designing and optimizing data pipelines on AWS, managing structured and in-memory data, implementing advanced analytics with vector/graph databases, and collaborating with cross-functional teams. Prior experience with CI/CD and containerization (Docker/Kubernetes) is a plus.

Posted 2 weeks ago


5.0 - 10.0 years

15 - 30 Lacs

Hyderabad

Work from Office

Lead Data Engineer - Data Management

Company Overview: Accordion works at the intersection of sponsors and management teams throughout every stage of the investment lifecycle, providing hands-on, execution-focused support to elevate data and analytics capabilities. So, what does it mean to work at Accordion? It means joining 1,000+ analytics, data science, finance & technology experts in a high-growth, agile, and entrepreneurial environment while transforming how portfolio companies drive value. It also means making your mark on Accordion's future by embracing a culture rooted in collaboration and a firm-wide commitment to building something great, together. Headquartered in New York City with 10 offices worldwide, Accordion invites you to join our journey.

Data & Analytics (Accordion | Data & Analytics): Accordion's Data & Analytics (D&A) team delivers cutting-edge, intelligent solutions to a global clientele, leveraging a blend of domain knowledge, sophisticated technology tools, and deep analytics capabilities to tackle complex business challenges. We partner with Private Equity clients and their Portfolio Companies across diverse sectors, including Retail, CPG, Healthcare, Media & Entertainment, Technology, and Logistics. The D&A team delivers data and analytical solutions designed to streamline reporting capabilities and enhance business insights across vast and complex data sets ranging from Sales, Operations, Marketing, Pricing, Customer Strategies, and more.

Location: Hyderabad

Role Overview: Accordion is looking for a Lead Data Engineer, responsible for the design, development, configuration/deployment, and maintenance of the relevant technology stack. He/she must have an in-depth understanding of the various tools and technologies in this domain to design and implement robust and scalable solutions that address client requirements, current and future, at optimal cost. The Lead Data Engineer should be able to evaluate existing architectures and recommend ways to upgrade and improve their performance, for both on-premises and cloud-based solutions. A successful Lead Data Engineer should possess strong working business knowledge and familiarity with multiple tools and techniques, along with industry standards and best practices in Business Intelligence and Data Warehousing environments, as well as strong organizational, critical thinking, and communication skills.

What you will do:
- Partner with clients to understand their business and create comprehensive business requirements.
- Develop an end-to-end Business Intelligence framework based on requirements, including recommending appropriate architecture (on-premises or cloud), analytics, and reporting.
- Work closely with the business and technology teams to guide solution development and implementation.
- Work closely with business teams on methodologies to develop KPIs and metrics.
- Work with the Project Manager to develop and execute project plans within the assigned schedule and timeline.
- Develop standard reports and functional dashboards based on business requirements.
- Conduct training programs and knowledge transfer sessions for junior developers when needed.
- Recommend improvements to provide optimal reporting solutions.
- Stay curious about new tools and technologies to provide futuristic solutions for clients.

Ideally, you have:
- An undergraduate degree (B.E/B.Tech.); tier-1/tier-2 colleges preferred.
- More than 5 years of experience in a related field.
- Proven expertise in SSIS, SSAS, and SSRS (the MSBI suite).
- In-depth knowledge of databases (SQL Server, MySQL, Oracle, etc.) and a data warehouse (any one of Azure Synapse, AWS Redshift, Google BigQuery, Snowflake, etc.).
- In-depth knowledge of business intelligence tools (any one of Power BI, Tableau, Qlik, DOMO, Looker, etc.).
- Good understanding of Azure or AWS: Azure (Data Factory & Pipelines, SQL Database & Managed Instances, DevOps, Logic Apps, Analysis Services) or AWS (Glue, Aurora Database, DynamoDB, Redshift, QuickSight).
- Proven ability to take initiative and be innovative.
- An analytical mind with a problem-solving attitude.

Why explore a career at Accordion:
- High-growth environment: Semi-annual performance management and promotion cycles, coupled with a strong meritocratic culture, enable a fast track to leadership responsibility.
- Cross-domain exposure: Interesting and challenging work streams across industries and domains that always keep you excited, motivated, and on your toes.
- Entrepreneurial environment: Intellectual freedom to make decisions and own them. We expect you to spread your wings and assume larger responsibilities.
- Fun culture and peer group: Non-bureaucratic and fun working environment; a strong peer environment that will challenge you and accelerate your learning curve.

Other benefits for full-time employees:
- Health and wellness programs, including employee health insurance covering immediate family members and parents, term life insurance for employees, free health camps for employees, discounted health services (including vision and dental) for employees and family members, free doctor consultations, counsellors, etc.
- Corporate meal card options for ease of use and tax benefits.
- Team lunches, company-sponsored team outings, and celebrations.
- Cab reimbursement for women employees beyond a certain time of day.
- Robust leave policy to support work-life balance, with a specially designed leave structure to support women employees for maternity and related requests.
- Reward and recognition platform to celebrate professional and personal milestones.
- A positive and transparent work environment, including various employee engagement and employee benefit initiatives to support personal and professional learning and development.

Posted 2 weeks ago


7.0 - 10.0 years

32 - 40 Lacs

Bengaluru

Work from Office

Job Title: Project & Change Lead, AVP
Location: Bangalore, India

Role Description: We are looking for an experienced Business Implementation Change Manager to lead a variety of regional/global change initiatives. Utilizing the tenets of PMI, you will lead and/or support cross-functional initiatives that transform the way we run our operations. If you like to solve complex problems, have a get-things-done attitude, and are looking for a highly visible, dynamic role where your voice is heard and your experience is appreciated, come talk to us!

What we'll offer you:
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your key responsibilities:
- Responsible for business implementation change management planning, execution, and reporting, adhering to governance standards and ensuring transparency around progress status.
- Using data to tell the implementation story; maintain risk management controls; monitor, resolve as appropriate, and communicate initiative risks; collaborate with other departments as required to execute on timelines and meet the strategic goals.
- As part of the larger team, accountable for the delivery and adoption of the global change portfolio, including but not limited to business case development/analysis, reporting, measurement and reporting of adoption success measures, and continuous improvement.
- As required, using data to tell the story, participate in Working Group and Steering Committee meetings to achieve the right level of decision-making and progress/transparency, establishing strong partnerships and collaborative relationships with various stakeholder groups to remove constraints to adoption success and carry learnings forward to future projects.
- As required, develop and document end-to-end roles and responsibilities, including process flows, operating procedures, and required controls; gather and document business requirements (user stories), including liaising with end users and performing analysis of gathered data; train on new features/functions; support hypercare and adoption constraints.
- Heavily involved in the product development journey.

Your skills and experience:
- Overall experience of at least 7-10 years providing business implementation management to complex change programs/projects, communicating and driving transformation initiatives using the tenets of PMI in a highly matrixed environment.
- Banking/finance/regulated industry experience, of which at least 2 years in the change/transformation space or associated with change/transformation initiatives, is a plus.
- Knowledge of client lifecycle processes and procedures, and experience with KYC data structures/data flows, is preferred.
- Experience working with management reporting is preferred.
- Bachelor's degree.

How we'll support you / About us and our teams: Please visit our company website for further information: https://www.db.com/company/company.htm. We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative, and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.

Posted 2 weeks ago


7.0 - 10.0 years

32 - 40 Lacs

Jaipur

Work from Office

Job Title: Project & Change Lead, AVP
Location: Jaipur, India

Role Description: We are looking for an experienced Business Implementation Change Manager to lead a variety of regional/global change initiatives. Utilizing the tenets of PMI, you will lead and/or support cross-functional initiatives that transform the way we run our operations. If you like to solve complex problems, have a get-things-done attitude, and are looking for a highly visible, dynamic role where your voice is heard and your experience is appreciated, come talk to us!

What we'll offer you:
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your key responsibilities:
- Responsible for business implementation change management planning, execution, and reporting, adhering to governance standards and ensuring transparency around progress status.
- Using data to tell the implementation story; maintain risk management controls; monitor, resolve as appropriate, and communicate initiative risks; collaborate with other departments as required to execute on timelines and meet the strategic goals.
- As part of the larger team, accountable for the delivery and adoption of the global change portfolio, including but not limited to business case development/analysis, reporting, measurement and reporting of adoption success measures, and continuous improvement.
- As required, using data to tell the story, participate in Working Group and Steering Committee meetings to achieve the right level of decision-making and progress/transparency, establishing strong partnerships and collaborative relationships with various stakeholder groups to remove constraints to adoption success and carry learnings forward to future projects.
- As required, develop and document end-to-end roles and responsibilities, including process flows, operating procedures, and required controls; gather and document business requirements (user stories), including liaising with end users and performing analysis of gathered data; train on new features/functions; support hypercare and adoption constraints.
- Heavily involved in the product development journey.

Your skills and experience:
- Overall experience of at least 7-10 years providing business implementation management to complex change programs/projects, communicating and driving transformation initiatives using the tenets of PMI in a highly matrixed environment.
- Banking/finance/regulated industry experience, of which at least 2 years in the change/transformation space or associated with change/transformation initiatives, is a plus.
- Knowledge of client lifecycle processes and procedures, and experience with KYC data structures/data flows, is preferred.
- Experience working with management reporting is preferred.
- Bachelor's degree.

How we'll support you

Posted 2 weeks ago


5.0 - 9.0 years

1 - 6 Lacs

Bengaluru

Work from Office

Job Title: Data Engineer
Experience: 5-7 years
Location: Bangalore
Job Type: Full-time with NAM

Job Summary: We are seeking an experienced Data Engineer with 5 to 7 years of experience in building and optimizing data pipelines and architectures on modern cloud data platforms. The ideal candidate will have strong expertise across Google Cloud Platform (GCP), DBT, Snowflake, Apache Airflow, and Data Lake architectures.

Key Responsibilities:
- Design, build, and maintain robust, scalable, and efficient ETL/ELT pipelines.
- Implement data ingestion processes using Fivetran and integrate various structured and unstructured data sources into GCP-based environments.
- Develop data models and transformation workflows using DBT and manage version-controlled pipelines.
- Build and manage data storage solutions using Snowflake, optimizing for cost, performance, and scalability (a minimal Snowflake sketch follows this listing).
- Orchestrate workflows and pipeline dependencies using Apache Airflow.
- Design and support Data Lake architecture for raw and curated data zones.
- Collaborate with data analysts, scientists, and product teams to ensure availability and quality of data.
- Monitor data pipeline performance, ensure data integrity, and handle error recovery mechanisms.
- Follow best practices in CI/CD, testing, data governance, and security standards.

Required Skills:
- 5-7 years of professional experience in data engineering roles.
- Hands-on experience with GCP services: BigQuery, Cloud Storage, Pub/Sub, Dataflow, Composer, etc.
- Expertise in Fivetran and experience integrating APIs and external sources.
- Proficiency in writing modular SQL transformations and data modeling using DBT.
- Deep understanding of Snowflake warehousing: performance tuning, cost optimization, security.
- Experience with Airflow for pipeline orchestration and DAG management.
- Familiarity with designing and implementing Data Lake solutions.
- Proficiency in Python and/or SQL.
- Strong understanding of data governance, data quality frameworks, and DevOps practices.

Preferred Qualifications:
- GCP Professional Data Engineer certification is a plus.
- Experience in agile development environments.
- Exposure to data catalog tools and data observability platforms.

Send profiles to narasimha@nam-it.com

Thanks & regards,
Narasimha.B
Staffing Executive
NAM Info Pvt Ltd, 29/2B-01, 1st Floor, K.R. Road, Banashankari 2nd Stage, Bangalore - 560070.
+91 9182480146 (India)
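A minimal sketch of running a modular SQL transformation against Snowflake from Python; the account, warehouse, and table names are hypothetical:

```python
# Minimal Snowflake transformation sketch; account, warehouse, and
# table names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",
    user="etl_user",
    password="********",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="CURATED",
)
try:
    with conn.cursor() as cur:
        # Incremental-style merge from a raw landing table into a curated model.
        cur.execute("""
            MERGE INTO curated.orders AS tgt
            USING raw.orders_landing AS src
            ON tgt.order_id = src.order_id
            WHEN MATCHED THEN UPDATE SET tgt.status = src.status
            WHEN NOT MATCHED THEN INSERT (order_id, status)
                VALUES (src.order_id, src.status)
        """)
finally:
    conn.close()
```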

Posted 2 weeks ago


5.0 - 9.0 years

1 - 6 Lacs

Pune, Chennai, Bengaluru

Work from Office

Job Title: Data Engineer
Experience: 5-7 years
Location: Bangalore
Job Type: Full-time with NAM

Job Summary: We are seeking an experienced Data Engineer with 5 to 7 years of experience in building and optimizing data pipelines and architectures on modern cloud data platforms. The ideal candidate will have strong expertise across Google Cloud Platform (GCP), DBT, Snowflake, Apache Airflow, and Data Lake architectures.

Key Responsibilities:
- Design, build, and maintain robust, scalable, and efficient ETL/ELT pipelines.
- Implement data ingestion processes using Fivetran and integrate various structured and unstructured data sources into GCP-based environments.
- Develop data models and transformation workflows using DBT and manage version-controlled pipelines.
- Build and manage data storage solutions using Snowflake, optimizing for cost, performance, and scalability.
- Orchestrate workflows and pipeline dependencies using Apache Airflow.
- Design and support Data Lake architecture for raw and curated data zones.
- Collaborate with data analysts, scientists, and product teams to ensure availability and quality of data.
- Monitor data pipeline performance, ensure data integrity, and handle error recovery mechanisms.
- Follow best practices in CI/CD, testing, data governance, and security standards.

Required Skills:
- 5-7 years of professional experience in data engineering roles.
- Hands-on experience with GCP services: BigQuery, Cloud Storage, Pub/Sub, Dataflow, Composer, etc.
- Expertise in Fivetran and experience integrating APIs and external sources.
- Proficiency in writing modular SQL transformations and data modeling using DBT.
- Deep understanding of Snowflake warehousing: performance tuning, cost optimization, security.
- Experience with Airflow for pipeline orchestration and DAG management.
- Familiarity with designing and implementing Data Lake solutions.
- Proficiency in Python and/or SQL.
- Strong understanding of data governance, data quality frameworks, and DevOps practices.

Preferred Qualifications:
- GCP Professional Data Engineer certification is a plus.
- Experience in agile development environments.
- Exposure to data catalog tools and data observability platforms.

Send profiles to narasimha@nam-it.com

Thanks & regards,
Narasimha.B
Staffing Executive
NAM Info Pvt Ltd, 29/2B-01, 1st Floor, K.R. Road, Banashankari 2nd Stage, Bangalore - 560070.
+91 9182480146 (India)

Posted 2 weeks ago


5.0 - 9.0 years

8 - 12 Lacs

Noida

Work from Office

5-9 years in data engineering and software development, including ELT/ETL, data extraction, and manipulation in Data Lake / Data Warehouse environments. Expert-level hands-on experience with the following (a PySpark sketch follows this listing):
- Python, SQL, PySpark
- DBT and Apache Airflow
- DevOps, Jenkins, CI/CD
- Data governance and data quality frameworks
- Data lakes and data warehouses
- AWS services including S3, SNS, SQS, Lambda, EMR, Glue, Athena, EC2, VPC, etc.
- Source code control: GitHub, VSTS, etc.

Mandatory Competencies: Python; Database - SQL; Data on Cloud - AWS S3; DevOps - CI/CD; DevOps - GitHub; ETL - AWS Glue; Behavioral - Communication.
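A minimal PySpark ELT sketch over S3 in the spirit of the stack above (e.g., on EMR or Glue, where the cluster is provided); the bucket names and columns are hypothetical:

```python
# Minimal PySpark ELT step over S3; bucket names and columns are
# hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_elt").getOrCreate()

orders = spark.read.parquet("s3://raw-bucket/orders/")
daily_revenue = (orders
                 .filter(F.col("status") == "COMPLETE")
                 .groupBy(F.to_date("order_ts").alias("order_date"))
                 .agg(F.sum("amount").alias("revenue")))
(daily_revenue.write.mode("overwrite")
              .partitionBy("order_date")
              .parquet("s3://curated-bucket/daily_revenue/"))
```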

Posted 2 weeks ago


5.0 - 10.0 years

10 - 15 Lacs

Pune

Work from Office

We are looking for a highly skilled and experienced Data Engineer with over 5 years of experience to join our growing data team. The ideal candidate will be proficient in Databricks, Python, PySpark, and Azure, and have hands-on experience with Delta Live Tables. In this role, you will be responsible for developing, maintaining, and optimizing data pipelines and architectures to support advanced analytics and business intelligence initiatives. You will collaborate with cross-functional teams to build robust data infrastructure and enable data-driven decision-making.

Key Responsibilities:
- Design, develop, and manage scalable and efficient data pipelines using PySpark and Databricks.
- Build and optimize Spark jobs for processing large volumes of structured and unstructured data.
- Integrate data from multiple sources into data lakes and data warehouses on the Azure cloud.
- Develop and manage Delta Live Tables for real-time and batch data processing (a minimal DLT sketch follows this listing).
- Collaborate with data scientists, analysts, and business teams to ensure data availability and quality.
- Ensure adherence to best practices in data governance, security, and compliance.
- Monitor, troubleshoot, and optimize data workflows and ETL processes.
- Maintain up-to-date technical documentation for data pipelines and infrastructure components.

Qualifications:
- 5+ years of hands-on experience in Databricks platform development.
- Proven expertise in Delta Lake and Delta Live Tables.
- Strong SQL and Python/Scala programming skills.
- Experience with cloud platforms such as Azure, AWS, or GCP (preferably Azure).
- Familiarity with data modeling and data warehousing concepts.
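A minimal Delta Live Tables sketch; it runs only inside a Databricks DLT pipeline (where `spark` and `dlt` are available), and the table and path names are hypothetical:

```python
# Minimal Delta Live Tables sketch; runs only inside a Databricks DLT
# pipeline. Table and path names are hypothetical.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw orders landed incrementally from cloud storage.")
def orders_bronze():
    return (spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/mnt/landing/orders/"))

@dlt.table(comment="Cleaned orders with a basic quality expectation.")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")
def orders_silver():
    return (dlt.read_stream("orders_bronze")
            .withColumn("order_ts", F.to_timestamp("order_ts")))
```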

Posted 2 weeks ago


3.0 - 5.0 years

5 - 8 Lacs

Noida

Work from Office

Must have:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related discipline.
- 3-5+ years of experience in SQL development and data engineering.
- Strong hands-on skills in T-SQL, including complex joins, indexing strategies, and query optimization.
- Proven experience in Power BI development, including building dashboards, writing DAX expressions, and using Power Query.

Should have:
- At least 1 year of hands-on experience with one or more components of the Azure Data Platform: Azure Data Factory (ADF), Azure Databricks, Azure SQL Database, Azure Synapse Analytics.
- Solid understanding of data warehouse architecture, including star and snowflake schemas, and data lake design principles.
- Familiarity with Data Lake and Delta Lake concepts, Lakehouse architecture, and data governance, data lineage, and security controls within Azure.

Posted 2 weeks ago


2.0 - 7.0 years

0 - 1 Lacs

Mumbai

Remote

Data Engineer

Company Name: Fluid AI

Role Overview: As a Data Engineer, you will be responsible for designing and maintaining the data frameworks that power our Gen-AI products. You'll work closely with engineering, product, and AI research teams to ensure our data models are scalable, secure, and optimized for real-world performance across diverse use cases. This is a hands-on and strategic role, ideal for someone who thrives in fast-paced, innovative environments.

Key Responsibilities:
- Design, implement, and optimize data architectures to support large-scale AI and machine learning systems.
- Collaborate with cross-functional teams to define data models, APIs, and integration flows.
- Architect secure, scalable data pipelines for structured and unstructured data.
- Oversee data governance, access controls, and compliance (GDPR, SOC 2, etc.).
- Select appropriate data storage technologies (SQL/NoSQL/data lakes) for various workloads.
- Work with MLOps and DevOps teams to enable real-time data availability and model serving.
- Evaluate and integrate third-party APIs, datasets, and connectors.
- Contribute to system documentation and data architecture diagrams.
- Support AI researchers with high-quality, well-structured data pipelines.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- 5+ years of experience as a Data Architect, Data Engineer, or in a similar role.
- Expertise in designing cloud-based data architectures (AWS, Azure, GCP).
- Strong knowledge of SQL, NoSQL, and distributed databases (PostgreSQL, MongoDB, Cassandra, etc.).
- Experience with big data tools like Spark, Kafka, Airflow, or similar.
- Familiarity with data warehousing tools (Redshift, BigQuery, Snowflake).
- Solid understanding of data privacy, compliance, and governance best practices.

Preferred Qualifications:
- Experience working on AI/ML or Gen AI-related products.
- Proficiency in Python or another scripting language used for data processing.
- Exposure to building APIs for data ingestion and consumption.
- Prior experience supporting enterprise-level SaaS products.
- Strong analytical and communication skills.

Travel & Documentation Requirement:
- Candidate must hold a valid passport.
- Willingness to travel overseas for 1 week (as part of client collaboration).
- A valid US visa (e.g., B1/B2, H1B, Green Card) is a strong advantage.

Why Join Us:
- Work on high-impact, cutting-edge Generative AI products.
- Collaborate with some of the best minds in AI, engineering, and product.
- Flexible work culture with global exposure.
- Opportunity to work on deeply technical challenges with real-world impact.

Posted 2 weeks ago


6.0 - 10.0 years

20 - 25 Lacs

Pune

Work from Office

Azure Data Factory, Azure Synapse Analytics, Data Lake Storage Gen2, Blob Storage, Docker, Azure DevOps, Airflow, Microsoft Purview, Power BI, Azure ML, Azure Cognitive Services, Azure Key Vault, Azure Policy, Log Analytics. Design and develop the MDM solution.

Posted 3 weeks ago


4.0 - 7.0 years

3 - 6 Lacs

Noida

Work from Office

We are looking for a skilled AWS Data Engineer with 4 to 7 years of experience in data engineering, preferably in the employment firm or recruitment services industry. The ideal candidate should have a strong background in computer science, information systems, or computer engineering.

Roles and Responsibility:
- Design and develop solutions based on technical specifications.
- Translate functional and technical requirements into detailed designs.
- Work with partners for regular updates, requirement understanding, and design discussions.
- Lead a team, providing technical/functional support, conducting code reviews, and optimizing code/workflows.
- Collaborate with cross-functional teams to achieve project goals.
- Develop and maintain large-scale data pipelines using the AWS Cloud platform services stack.

Job Requirements:
- Strong knowledge of the Python/PySpark programming languages.
- Experience with AWS Cloud platform services such as S3, EC2, EMR, Lambda, RDS, DynamoDB, Kinesis, SageMaker, Athena, etc.
- Basic SQL knowledge and exposure to data warehousing concepts such as Data Warehouse, Data Lake, dimensions, etc.
- Excellent communication skills and ability to work in a fast-paced environment.
- Ability to lead a team and provide technical/functional support.
- Strong problem-solving skills and attention to detail.
- A B.E./Master's degree in Computer Science, Information Systems, or Computer Engineering is required.

The company offers a dynamic and supportive work environment, with opportunities for professional growth and development. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 3 weeks ago


5.0 - 10.0 years

4 - 8 Lacs

Noida

Work from Office

We are looking for a skilled Senior Azure Data Engineer with 5 to 10 years of experience to design and implement scalable data pipelines using Azure technologies, driving data transformation, analytics, and machine learning. The ideal candidate will have a strong background in data engineering and proficiency in Python, PySpark, and Spark pools.

Roles and Responsibility:
- Design and implement scalable Databricks data pipelines using PySpark.
- Transform raw data into actionable insights through data analysis and machine learning.
- Build, deploy, and maintain machine learning models using MLlib or TensorFlow (a minimal MLlib pipeline sketch follows this listing).
- Optimize cloud data integration from Azure Blob Storage, Data Lake, and SQL/NoSQL sources.
- Execute large-scale data processing using Spark pools and fine-tune configurations for efficiency.
- Collaborate with cross-functional teams to identify business requirements and develop solutions.

Job Requirements:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- Minimum 5 years of experience in data engineering, with at least 3 years specializing in Azure Databricks, PySpark, and Spark pools.
- Proficiency in Python, PySpark, Pandas, NumPy, SciPy, Spark SQL, DataFrames, RDDs, Delta Lake, Databricks Notebooks, and MLflow.
- Hands-on experience with Azure Data Lake, Blob Storage, Synapse Analytics, and other relevant technologies.
- Strong understanding of data modeling, data warehousing, and ETL processes.
- Experience with agile development methodologies and version control systems.
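A minimal sketch of a Spark MLlib training pipeline like the one described above, assuming a hypothetical Delta feature table and column names:

```python
# Minimal Spark MLlib training pipeline sketch; the feature table and
# column names are hypothetical.
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import VectorAssembler
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.read.format("delta").load("/mnt/lake/silver/churn_features")

assembler = VectorAssembler(inputCols=["tenure", "usage", "tickets"],
                            outputCol="features")
lr = LogisticRegression(labelCol="label", featuresCol="features")
model = Pipeline(stages=[assembler, lr]).fit(df)
model.write().overwrite().save("/mnt/models/churn_lr")
```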

Posted 3 weeks ago
