
282 Data Lake Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

8.0 - 13.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Employment Type: Contract. Skills: Azure Data Factory, SQL, Azure Blob, Azure Logic Apps

Posted 7 hours ago

Apply

10.0 - 15.0 years

10 - 15 Lacs

Pune

Work from Office

Provides technical expertise, including addressing and resolving complex technical issues. Demonstrable experience assessing application workloads and the technology landscape for Cloud suitability, and developing the business case and Cloud adoption roadmap. Expertise in data ingestion, data loading, Data Lake, bulk processing and transformation using Azure services, and in migrating on-premises services to various Azure environments. Good experience with a range of services from the Microsoft Azure Cloud Platform, including infrastructure and security related services such as Azure AD, IaaS, Containers, Storage, Networking and Azure Security. Good experience of enterprise solution shaping and Microsoft Azure Cloud architecture development, including excellent documentation skills. Good understanding of the Azure and AWS cloud service offerings covering Compute, Storage, Network, WebApp, Functions, Gateway, Clustering, Key Vault and AD. Design and develop high-performance, scalable and secure cloud-native applications on Microsoft Azure, following Azure best practices and recommendations. Design, implement and improve automation for cloud environments using native or third-party tools such as Terraform, Salt, Chef, Puppet, Databricks, etc. Create business cases for transformation and modernization, including analysis of both total cost of ownership and the potential cost and revenue impacts of the transformation. Advise and engage with customer executives on their Azure and AWS cloud strategy roadmap, improvements and alignment, bringing in industry best practices and trends, and work on further improvements with the required business case analysis and presentations. Provide Microsoft Azure architecture collaboration with other technical teams. Document solutions (e.g. architecture, configuration and setup). Work within a project management/agile delivery methodology in a leading role as part of a wider team. Provide effective knowledge transfer and upskilling to relevant customer personnel to ensure an appropriate level of future self-sufficiency, and assist in the transition of projects to Enterprise Services teams.

Skills Required: Strong knowledge of Cloud security standards and principles, including Identity and Access Management in Azure. It is essential to have strong, in-depth and demonstrable hands-on experience with the following technologies: Microsoft Azure and its relevant build, deployment, automation, networking and security technologies in cloud and hybrid environments. Azure Stack Hub and Azure Stack HCI/Hyper-V clusters. Microsoft Azure IaaS and Platform as a Service (PaaS) products such as Azure SQL, App Services, Logic Apps, Functions and other serverless services. Understanding of Microsoft identity and access management products such as Azure AD and Azure AD B2C. Microsoft Azure operational and monitoring tools, including Azure Monitor, App Insights and Log Analytics. Microsoft Windows Server, System Center, Hyper-V and Storage Spaces. Knowledge of PowerShell, Git, ARM templates and deployment automation. Hands-on experience with Azure and AWS cloud-native automation frameworks, along with experience in Python and Azure services such as Databricks, Data Factory, Azure Functions, StreamSets etc. Hands-on experience with IaC (Infrastructure as Code), containers, Kubernetes (AKS), Ansible, Terraform, Docker, Linux system administration (RHEL/Ubuntu/Alpine), Jenkins, and building CI/CD pipelines in Azure DevOps. Ability to define and design the technical architecture with the best-suited Azure components, ensuring a seamless end-to-end workflow from data source to Power BI/Portal/Dashboards/UI.

Skills Good to Have: Experience in building big data solutions using Azure and AWS services such as Analysis Services and DevOps; databases such as SQL Server, Cosmos DB, DynamoDB and MongoDB; and web service integration. Possession of either the Developing Microsoft Azure Solutions or the Architecting Microsoft Azure certification.
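The ARM-template and deployment-automation skills listed above can be illustrated with a short Python sketch that assembles an ARM deployment parameters file. The schema URL is Azure's published parameters-file schema; the storage-account name and parameter keys are hypothetical, invented for illustration only.

```python
import json

def arm_parameters(**params):
    """Wrap plain key/value pairs in the ARM parameters-file envelope."""
    return {
        "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
        "contentVersion": "1.0.0.0",
        "parameters": {k: {"value": v} for k, v in params.items()},
    }

# Hypothetical parameters for a storage-account template.
doc = arm_parameters(storageAccountName="stdatalakedev01", skuName="Standard_LRS")
print(json.dumps(doc, indent=2))
```

A file generated this way would typically be passed to a deployment with `az deployment group create --parameters @file.json`.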

Posted 8 hours ago

Apply

12.0 - 17.0 years

17 - 22 Lacs

Noida

Work from Office

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. As part of our strategic initiative to build a centralized capability around data and cloud engineering, we are establishing a dedicated Azure Cloud Data Engineering practice under RMI – Optum Advisory umbrella. This team will be at the forefront of designing, developing, and deploying scalable data solutions on cloud primarily using Microsoft Azure platform. The practice will serve as a centralized team, driving innovation, standardization, and best practices across cloud-based data initiatives. New hires will play a pivotal role in shaping the future of our data landscape, collaborating with cross-functional teams, clients, and stakeholders to deliver impactful, end-to-end solutions. Primary Responsibilities: Design and implement secure, scalable, and cost-effective cloud data architectures using cloud services such as Azure Data Factory (ADF), Azure Databricks, Azure Storage, Key Vault, Snowflake, Synapse Analytics, MS Fabric/Power BI etc. 
Define and lead the data & cloud strategy, including migration plans, modernization of legacy systems, and adoption of new cloud capabilities. Collaborate with clients to understand business requirements and translate them into optimal cloud architecture solutions, balancing performance, security, and cost. Evaluate and compare cloud services (e.g., Databricks, Snowflake, Synapse Analytics) and recommend the best-fit solutions based on project needs and organizational goals. Lead the full lifecycle of data platform and product implementations, from planning and design to deployment and support. Drive cloud migration initiatives, ensuring a smooth transition from on-premise systems while engaging and upskilling existing teams. Lead and mentor a team of cloud and data engineers, fostering a culture of continuous learning and technical excellence. Plan and guide the team in building Proofs of Concept (POCs), exploring new cloud capabilities, and validating emerging technologies. Establish and maintain comprehensive documentation for cloud setup processes, architecture decisions, and operational procedures. Work closely with internal and external stakeholders to gather requirements, present solutions, and ensure alignment with business objectives. Ensure all cloud solutions adhere to security best practices, compliance standards, and governance policies. Prepare case studies and share learnings from implementations to build organizational knowledge and improve future projects. Build and analyze data engineering processes, acting as an SME to troubleshoot performance issues and suggest improvements. Develop and maintain CI/CD processes using Jenkins, GitHub, GitHub Actions, Maven etc. Build a test framework for Databricks notebook jobs for automated testing before code deployment. Continuously explore new Azure services and capabilities and assess their applicability to business needs. Create detailed documentation for cloud processes, architecture, and implementation patterns. Contribute to full lifecycle project implementations, from design and development to deployment and monitoring. Identify solutions to non-standard requests and problems. Mentor and support existing on-prem developers in the cloud environment. Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications: Undergraduate degree or equivalent experience. 12+ years of overall experience in Data & Analytics engineering. 10+ years of solid experience working as an architect designing data platforms using Azure, Databricks, Snowflake, ADF, Data Lake, Synapse Analytics, Power BI etc. 10+ years of experience working with a data platform or product using PySpark and Spark-SQL. In-depth experience designing complex Azure architecture for various business needs, and the ability to come up with efficient designs and solutions. Solid experience with CI/CD tools such as Jenkins, GitHub, GitHub Actions, Maven etc. Experience in team leadership and people management. Highly proficient, hands-on experience with Azure services, Databricks/Snowflake development etc. Excellent communication and stakeholder management skills.

Preferred Qualifications: Snowflake and Airflow experience. Power BI development experience. Experience or knowledge of health care concepts: E&I, M&R, C&S LOBs, Claims, Members, Provider, Payers, Underwriting.

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission. #NIC

Posted 8 hours ago

Apply

7.0 - 12.0 years

7 - 11 Lacs

Hyderabad

Work from Office

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. As part of our strategic initiative to build a centralized capability around data and cloud engineering, we are establishing a dedicated Azure Cloud Data Engineering practice. This team will be at the forefront of designing, developing, and deploying scalable data solutions on cloud primarily using Microsoft Azure platform. The practice will serve as a centralized team, driving innovation, standardization, and best practices across cloud-based data initiatives. New hires will play a pivotal role in shaping the future of our data landscape, collaborating with cross-functional teams, clients, and stakeholders to deliver impactful, end-to-end solutions. 
Primary Responsibilities: Ingest data from multiple on-prem and cloud data sources using various tools and capabilities in Azure. Design and develop Azure Databricks processes using PySpark/Spark-SQL. Design and develop orchestration jobs using ADF and Databricks Workflows. Analyze the data engineering processes being developed, acting as an SME to troubleshoot performance issues and suggest improvements. Develop and maintain CI/CD processes using Jenkins, GitHub, GitHub Actions etc. Build a test framework for Databricks notebook jobs for automated testing before code deployment. Design and build POCs to validate new ideas, tools, and architectures in Azure. Continuously explore new Azure services and capabilities and assess their applicability to business needs. Create detailed documentation for cloud processes, architecture, and implementation patterns. Work with the data & analytics team to build and deploy efficient data engineering processes and jobs on the Azure cloud. Prepare case studies and technical write-ups to showcase successful implementations and lessons learned. Work closely with clients, business stakeholders, and internal teams to gather requirements and translate them into technical solutions using best practices and appropriate architecture. Contribute to full lifecycle project implementations, from design and development to deployment and monitoring. Ensure solutions adhere to security, compliance, and governance standards. Monitor and optimize data pipelines and cloud resources for cost and performance efficiency. Identify solutions to non-standard requests and problems. Support and maintain the self-service BI warehouse. Mentor and support existing on-prem developers in the cloud environment. Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications: Undergraduate degree or equivalent experience. 7+ years of overall experience in Data & Analytics engineering. 5+ years of experience working with Azure, Databricks, ADF and Data Lake. 5+ years of experience working with a data platform or product using PySpark and Spark-SQL. Solid experience with CI/CD tools such as Jenkins, GitHub, GitHub Actions, Maven etc. In-depth understanding of Azure architecture, and the ability to come up with efficient designs and solutions. Highly proficient in Python and SQL. Proven excellent communication skills.

Preferred Qualifications: Snowflake and Airflow experience. Power BI development experience. Experience or knowledge of health care concepts: E&I, M&R, C&S LOBs, Claims, Members, Provider, Payers, Underwriting.

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission. #NIC
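The "test framework for Databricks notebook jobs" responsibility above can be sketched as a pre-deployment gate: declarative checks run against a transformation's output, and deployment is blocked if any check fails. This is plain Python with no Spark dependency (rows are modelled as dicts), and the transformation, check names, and sample data are all hypothetical.

```python
def transform(rows):
    """Example transformation under test: keep active members, normalise names."""
    return [{**r, "name": r["name"].strip().title()} for r in rows if r["active"]]

# Declarative quality checks; each maps a name to a predicate over the output rows.
CHECKS = {
    "no_empty_names": lambda out: all(r["name"] for r in out),
    "only_active_rows": lambda out: all(r["active"] for r in out),
}

def predeploy_gate(rows):
    """Run every check; return the names of failed checks (empty list = deployable)."""
    out = transform(rows)
    return [name for name, check in CHECKS.items() if not check(out)]

failures = predeploy_gate([
    {"name": "  ada lovelace ", "active": True},
    {"name": "grace hopper", "active": False},
])
print("deployable" if not failures else f"blocked: {failures}")
```

In a real pipeline the gate would run in CI (e.g. a GitHub Actions job) against sample data before the notebook is promoted.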

Posted 8 hours ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Bengaluru

Work from Office

Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design to help CxOs envision and build what's next for their businesses.

Your Role: Solution Design & Architecture; Implementation & Deployment; Technical Leadership & Guidance; Client Engagement & Collaboration; Performance Monitoring & Optimization.

Your Profile: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 3-8 years of experience in designing, implementing, and managing data solutions. 3-8 years of hands-on experience working with Google Cloud Platform (GCP) data services. Strong expertise in core GCP data services, including BigQuery (data warehousing), Cloud Storage (data lake), Dataflow (ETL/ELT), Cloud Composer (workflow orchestration with Apache Airflow), Pub/Sub and Dataflow (streaming data), Cloud Data Fusion (graphical data integration), and Dataproc (managed Hadoop and Spark). Proficiency in SQL and experience with data modeling techniques. Experience with at least one programming language (e.g., Python, Java, Scala). Experience with Infrastructure-as-Code (IaC) tools such as Terraform or Cloud Deployment Manager. Understanding of data governance, security, and compliance principles in a cloud environment. Experience with CI/CD pipelines and DevOps practices. Excellent problem-solving, analytical, and communication skills. Ability to work independently and as part of a collaborative team.

What you will love about working here: We recognize the significance of flexible work arrangements. Be it remote work or flexible work hours, you will get an environment that supports a healthy work-life balance. At the heart of our mission is your career growth; our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

About Capgemini: Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong heritage of over 55 years, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.

Posted 9 hours ago

Apply

3.0 - 6.0 years

5 - 8 Lacs

Hyderabad, Bengaluru, Delhi / NCR

Work from Office

We are seeking an experienced Azure Data Engineer with 3-6 years of experience for a 6-month remote contract. The candidate will be responsible for developing and supporting IT solutions using technologies such as Azure Data Factory, Azure Databricks, Azure Synapse, Python, PySpark, Teradata, and Snowflake. The role involves designing ETL pipelines, developing Databricks notebooks, handling CI/CD pipelines via Azure DevOps, and working on data warehouse modeling and integration. Strong skills in SQL, data lake storage, and deployment/monitoring are required. Prior experience in Power BI and the DP-203 certification are a plus. Location: Remote - Bengaluru, Hyderabad, Delhi / NCR, Chennai, Pune, Kolkata, Ahmedabad, Mumbai

Posted 12 hours ago

Apply

4.0 - 7.0 years

10 - 20 Lacs

Hyderabad

Work from Office

Job Summary: We are seeking a skilled and detail-oriented Azure Data Engineer to join our data team. In this role, you will be responsible for designing, building, and maintaining scalable data pipelines and solutions on the Microsoft Azure cloud platform. You will collaborate with data analysts, the reporting team, and business stakeholders to ensure efficient data availability, quality, and governance. Must-have skills: Strong hands-on experience with Azure Data Factory, Azure Data Lake Storage, and Azure SQL. Good-to-have skills: Working knowledge of Databricks, Azure Synapse Analytics, Azure Functions, Logic App workflows, Log Analytics and Azure DevOps. Roles and Responsibilities: Design and implement scalable data pipelines using Azure Data Factory, Azure SQL, Databricks, and other Azure services. Develop and maintain data lakes and data warehouses on Azure. Integrate data from various on-premises and cloud-based sources. Create and manage ETL/ELT processes, ensuring data accuracy and performance. Optimize and troubleshoot data pipelines and workflows. Ensure data security, compliance, and governance. Collaborate with business stakeholders to define data requirements and deliver actionable insights. Monitor and maintain Azure data services for performance and cost-efficiency. Design, develop, and maintain SQL Server databases and ETL processes. Write complex SQL queries, stored procedures, functions, and triggers to support application development and data analysis. Optimize database performance through indexing, partitioning, and other performance tuning techniques.
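The indexing and tuning duties this posting describes can be illustrated with Python's stdlib sqlite3 (the table and column names are made up): `EXPLAIN QUERY PLAN` shows whether a selective predicate is served by an index rather than a full table scan, which is the basic check behind most index-tuning work.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, amount REAL)")
con.executemany(
    "INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

# Without this index, the predicate below forces a full table scan.
con.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM orders WHERE customer_id = 42"
).fetchall()
detail = " ".join(row[-1] for row in plan)
print(detail)  # expected to mention the index, e.g. 'USING INDEX idx_orders_customer'
```

The same check translates directly to SQL Server's estimated execution plans, where an "Index Seek" operator plays the role of SQLite's "USING INDEX".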

Posted 13 hours ago

Apply

8.0 - 13.0 years

40 - 45 Lacs

Pune

Work from Office

Job Title: Data Lake Lead Cloud Engineer, VP. Location: Pune, India.

Role Description: As a Lead Cloud Engineer, you will work closely with the Data Lake solution architect and engineering team to design, maintain, and develop innovative solutions within a scalable GCP cloud environment. This is a leadership position in which you'll mentor a growing team while driving operational excellence and ensuring the Data Lake solution is efficient, secure, and cost-effective. You are highly technical, understand cloud technologies, and understand the complex world of cloud ecosystems and integrations.

What we'll offer you: 100% reimbursement under the childcare assistance benefit (gender neutral). Sponsorship for industry-relevant certifications and education. Accident and term life insurance.

Your key responsibilities. Cloud Infrastructure Management: Take ownership of cloud environments, optimising and maintaining systems to ensure high performance, scalability, and reliability, focusing on GCP services. Collaborate with architecture teams to implement cloud-based solutions aligned with business objectives. Ensure that cloud resources are optimized for cost, performance, and security. Automation and Scripting: Design and architect GCP-based Data Lake solutions that align with industry best practices and security standards, while developing detailed technical documentation, including system diagrams, deployment procedures, and operational guidelines. Provision and configure GCP resources, including virtual machines, storage accounts, virtual networks, and load balancers, using Infrastructure-as-Code tools like Terraform. Develop and maintain automation scripts for various environments, ensuring smooth and efficient deployment processes. Use Python and other scripting languages to automate workflows, integrate APIs, and manage infrastructure as code (IaC). Leadership & Team Development: Lead and mentor a team of cloud engineers, delegating tasks and ensuring project milestones are achieved on time. Oversee recruitment and onboarding of new team members, fostering a collaborative and high-performing team culture. Report and escalate issues and dependency challenges to senior management and follow through end-to-end until remediation. Collaborate with stakeholders to understand business requirements and translate them into scalable and efficient GCP solutions. Provide ongoing support and guidance to both junior engineers and contractors. Cloud Optimisation and Innovation: Keep up with the latest advancements in cloud technologies, continuously suggesting improvements to existing systems. Identify and drive opportunities for optimizing cloud infrastructure, performance, and cost management. Change & Operations Management: Manage change management processes in live environments, ensuring minimal disruption to service and complying with the Bank's compliance requirements. Automate routine tasks such as deployments, scaling, and monitoring to improve operational efficiency. Monitor the consumption of resources to ensure they remain within agreed budgets, and participate in on-call rotations to provide timely response and resolution to critical incidents. Take ownership of deliverables, troubleshoot, and resolve issues. Establish product support procedures and serve as the final L3 engineering escalation.

Your skills and experience: Minimum 8+ years of engineering experience, with 5+ years in one of the GCP (preferred), AWS or Azure cloud platforms. In-depth understanding of GCP services and capabilities, including virtual machines, storage, networking, security, and monitoring. Strong expertise in infrastructure as code (IaC) and automation using tools such as Terraform, CloudFormation, or similar. Familiarity with DevOps practices and tools such as GitLab or Jenkins. Relevant certifications in GCP, cybersecurity and related fields. Experience in a banking or financial services environment. Proven ability to lead and manage teams, including task delegation and workload management. Knowledge of various security technologies, with strong knowledge of current security threats and the corresponding technologies. Experience with change management, ensuring minimal disruption during cloud updates and production releases. Strong communication skills with multi-cultural/global teams, both written and verbal, with a collaborative approach to problem-solving. Experience with business tools including Jira, Confluence, SharePoint, and Microsoft 365.

About us and our teams: Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
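The "monitor consumption against agreed budgets" duty mentioned in this role reduces to a simple comparison once costs are exported from billing data. This plain-Python sketch flags overruns; the project names, spend figures, and tolerance are invented for illustration, and a real implementation would pull the cost data from GCP billing exports.

```python
def over_budget(costs, budgets, tolerance=0.0):
    """Return {project: overrun} for projects whose spend exceeds budget * (1 + tolerance)."""
    return {
        project: round(spend - budgets[project], 2)
        for project, spend in costs.items()
        if project in budgets and spend > budgets[project] * (1 + tolerance)
    }

# Hypothetical monthly spend vs. agreed budgets, with a 2% tolerance band.
monthly_spend = {"datalake-prod": 10450.0, "datalake-dev": 890.0}
budgets = {"datalake-prod": 10000.0, "datalake-dev": 1200.0}
overruns = over_budget(monthly_spend, budgets, tolerance=0.02)
print(overruns)
```

A scheduled job running this kind of check can page the on-call rotation, or simply post to a team channel, whenever the returned dict is non-empty.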

Posted 14 hours ago

Apply

5.0 - 7.0 years

13 - 18 Lacs

Pune

Work from Office

Administer multiple databases in AWS RDS on Linux and Windows systems, including MySQL and MS-SQL (Oracle a plus). You will be involved in backing up and restoring databases, and in developing and performing structured queries. Required Candidate Profile: The candidate should have 5-7 years of experience using AWS, MySQL, MS-SQL, RDS Oracle, BI tools, Windows and Linux. Knowledge of Snowflake and Data Lake is good to have.
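Backup duties like those above usually come with a retention policy. This sketch covers only the date arithmetic of choosing which nightly backups to prune, keeping the last n days; actual RDS snapshot listing and deletion would go through boto3, which is deliberately not shown, and the dates are made up.

```python
from datetime import date, timedelta

def prune(backup_dates, today, keep_days=7):
    """Return the backup dates older than the retention window, sorted oldest first."""
    cutoff = today - timedelta(days=keep_days)
    return sorted(d for d in backup_dates if d < cutoff)

today = date(2024, 3, 15)
backups = [today - timedelta(days=n) for n in range(10)]  # ten nightly backups
to_delete = prune(backups, today, keep_days=7)
print(to_delete)
```

Note that automated RDS snapshots already have a built-in retention period; logic like this matters mainly for manual snapshots or exported dumps.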

Posted 1 day ago

Apply

6.0 - 8.0 years

16 - 20 Lacs

Gurugram

Work from Office

JOB SUMMARY: Define the vision and roadmap for the business intelligence team and champion data culture within Axis Max Life. Lead and enable transformation to embrace automation and provide concise, real-time insights. Responsible for the delivery of accurate and timely reports/dashboards. Coach and mentor the team to continuously improve skills and capabilities. Lead a team of 3+ professionals, including partners. KEY RESPONSIBILITIES: Handle distribution reporting requirements across functions and job families to enable strategic priorities and performance management; responsibilities also include supporting CXO reports and dashboards. Ensure timely and accurate delivery of reporting across functions and roles. Proactively identify opportunities to automate reporting requirements and build delivery capabilities on digital information assets in liaison with the technology function. Integrate analytics with reporting assets. Drive a data democratization culture by designing data assets. Collaborate to design and build data products for distribution teams. Partner with the data team to build the data infrastructure necessary to facilitate efficient analysis and reporting that stays relevant to the needs of the organization. Develop, coach and mentor the team to become a best-in-class business intelligence team. Education: Master's degree (M.Sc., M.Tech., MBA or any quantitative field). Experience: Financial services exposure is desirable and preferred. Python and SQL; Power BI would be an advantage. At least 6-8 years of relevant experience working in business reporting teams. Key competencies/skills required: Demonstrated experience working with senior leadership/management. Experience in standardizing, streamlining and automating business reporting. Technical proficiency in the BI tech stack: SQL Server Reporting Services, SAP BO, Python etc. Experience with open-source BI tools. Well versed in data architecture/data warehousing/data lakes. Domain understanding of the BFSI industry. Excellent interpersonal skills and strong verbal and written communication skills.

Posted 1 day ago

Apply

3.0 - 8.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Role Purpose: Principal Consultants are expected to have a deep area of consulting expertise, a good understanding of the client's business landscape, and the ability to manage the delivery of consulting solutions that achieve clear business value. The role may carry managerial responsibilities in leading a team of consultants and managing quality and internal compliance in business operations. Principal Consultants develop and support closure of sales opportunities through their consulting expertise and client relationships, and must achieve high personal billability.

Key Responsibilities: Design, develop, and maintain scalable data pipelines and ETRM systems. Work on data integration projects within the Energy Trading and Risk Management (ETRM) domain. Collaborate with cross-functional teams to integrate data from ETRM trading systems like Allegro, RightAngle, and Endur. Optimize and manage data storage solutions in Data Lake and Snowflake. Develop and maintain ETL processes using Azure Data Factory and Databricks. Write efficient and maintainable code in Python for data processing and analysis. Ensure data quality and integrity across various data sources and platforms. Ensure data accuracy, integrity, and availability across various trading systems. Collaborate with traders, analysts, and IT teams to understand data requirements and deliver robust solutions. Optimize and enhance data architecture for performance and scalability.

Mandatory Skills: Azure Data Factory (ADF), Data Lake, Snowflake/SQL, Python/PySpark, FastAPI, Databricks. Good to have: experience in the ETRM domain, Streamlit, and integration with trading systems like Allegro, RightAngle, Endur.

Strategic Objectives (select or modify the relevant measures with your manager):
Deliver growth in consulting revenues: Support business performance for direct consulting against relevant quarterly/annual targets; lead the end-to-end sales cycle for specific pursuits; improve the quality of consulting by flawlessly leading and delivering strategic advisory/transformation engagements, with ownership of client expectation management, quality control and delivery assurance, issue management, client insight and value capture, work planning and execution, and effective client communications. Measures: % revenue achievement (actual vs. target); % personal utilisation achievement (against target); number of RFI/RFP responses led/supported; number of strategic advisory and transformation engagements delivered; number of referenceable clients and testimonials; average CSAT and PCSAT across projects.
Generate impact: Enable pull-through business and impact for Wipro through front-end consulting engagements, deal pursuit and client relationships. Measures: number and value of downstream opportunities generated/converted for GCG and the larger Wipro.
Grow market positioning: Elevate Wipro's positioning in existing accounts through thought leadership and active contribution to clients' strategic transformations; lead the development of thought leadership, offerings and assets for the practice to support business growth. Measures: eminence and thought leadership demonstrated through content, citations and testimonials; number of white papers authored; evidence of assets such as repeatable IP, frameworks and methods authored or contributed to; number of senior-level thought leadership sessions/roadshows with clients and industry forums delivered from the front.
Provide consulting leadership to accounts: Generate growth and integration across the consulting services, grow the client relationship profile, and support the achievement of Wipro-wide account objectives; work with the GCP/CCP/GCG Account Lead and account team to grow the consulting service portfolio, ensuring integration of propositions and collaboration across GCG. Measures: number of credible business-side relationships built in client organizations; number and $ value of integrated consulting deals supported.
Grow the consulting talent: Grow consulting team talent at B3 and below levels in line with business demand and the Consulting Competency Framework. Measures: meritocracy and actions (number of consultants rewarded/recognized); cross-skilling (number of reporting consultants who have worked on joint projects cutting across the different practices within GCG); self-development (minimum 32 hours of training per year, combining online and classroom).
Build the consulting community: Contribute individually to people development and collaboration effectiveness to the level expected of others performing this role, with distinct participation in and demonstration of collaboration across GCG (contribution to cross-practice offerings; sharing of best practices and industrial/technological expertise; sharing of the talent pool) and knowledge management. Measures: number of webinars, knowledge-sharing and thought leadership sessions conducted; number of assets owned and contributed to Consulting Central.

Posted 1 day ago

8.0 - 10.0 years

27 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Key Responsibilities:
- Design and develop data models to support the organization's data and business intelligence requirements.
- Collaborate with data architects, data engineers, and stakeholders to ensure data models align with business requirements.
- Optimize and tune data models for performance and scalability.
- Ensure data accuracy, consistency, and integrity by implementing data quality and governance standards.
- Participate in data migration and integration projects, ensuring seamless data flow across systems.

Technical Skills:
- Expertise in data modeling tools (e.g., ERwin, PowerDesigner, SQL Developer Data Modeler)
- Proficiency in SQL and database management systems (e.g., Oracle, SQL Server, Snowflake)
- Understanding of data warehousing and ETL processes
- Familiarity with data governance and data quality management tools and practices

Posted 1 day ago

8.0 - 12.0 years

10 - 14 Lacs

Mumbai, New Delhi, Bengaluru

Work from Office

We are seeking an experienced Data Architect to join our team, working with our UAE-based client. The ideal candidate has 8-12 years of hands-on experience in data architecture, including at least 4 years in architectural design and integration. Strong expertise in MS Dynamics and data lake architecture is required, along with proficiency in ETL, data modeling, data integration, and data quality assurance. The candidate should have a strong problem-solving mindset, the ability to handle architectural issues, and experience in troubleshooting. They should also be a proactive contributor and a team player with a flexible attitude. The role requires immediate availability and the ability to work UAE hours. Location: Chennai, Hyderabad, Kolkata, Pune, Ahmedabad, Remote

Posted 1 day ago

3.0 - 8.0 years

5 - 11 Lacs

Pune, Mumbai (All Areas)

Hybrid

Overview: TresVista is looking to hire an Associate in its Data Intelligence Group, who will be primarily responsible for managing clients as well as monitoring/executing projects for both clients and internal teams. The Associate may directly manage a team of up to 3-4 Data Engineers and Analysts across multiple data engineering efforts for our clients with varied technologies. They would be joining a team of 70+ members comprising Data Engineers, Data Visualization Experts, and Data Scientists.

Roles and Responsibilities:
- Interacting with the client (internal or external) to understand their problems and work on solutions that address their needs
- Driving projects and working closely with a team to ensure proper requirements are identified, useful user stories are created, and work is planned logically and efficiently to deliver solutions that support changing business requirements
- Managing the various activities within the team: strategizing how to approach tasks, creating timelines and goals, and distributing information/tasks to team members
- Conducting meetings, documenting, and communicating findings effectively to clients, management, and cross-functional teams
- Creating ad-hoc reports for internal requests across departments
- Automating processes using data transformation tools

Prerequisites:
- Strong analytical, problem-solving, interpersonal, and communication skills
- Advanced knowledge of DBMS and data modelling, along with advanced querying capabilities using SQL
- Working experience in cloud technologies (GCP/AWS/Azure/Snowflake)
- Prior experience in building and deploying ETL/ELT pipelines using CI/CD and orchestration tools such as Apache Airflow, GCP Workflows, etc.
- Proficiency in Python for building ETL/ELT processes and data modeling
- Proficiency in creating reports and dashboards using Power BI/Tableau
- Knowledge of building ML models and leveraging Gen AI for modern architectures
- Experience working with version control platforms like GitHub
- Familiarity with IaC tools like Terraform and Ansible is good to have
- Stakeholder management and client communication experience would be preferred
- Experience in the Financial Services domain will be an added plus
- Experience with Machine Learning tools and techniques is good to have

Experience: 3-7 years
Education: BTech/MTech/BE/ME/MBA in Analytics
Compensation: The compensation structure will be as per industry standards
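The ETL/ELT pipeline work this listing describes can be pictured with a minimal, self-contained Python sketch (the table and column names here are invented for illustration; a real pipeline would pull from source systems and run under an orchestrator such as Apache Airflow):

```python
import sqlite3

def extract(rows):
    """Extract: in practice this would pull from an API, file drop, or source DB."""
    return rows

def transform(rows):
    """Transform: normalise client names and drop records with missing amounts."""
    return [
        {"client": r["client"].strip().title(), "amount": r["amount"]}
        for r in rows
        if r.get("amount") is not None
    ]

def load(conn, rows):
    """Load: write the cleaned records into a reporting table."""
    conn.execute("CREATE TABLE IF NOT EXISTS revenue (client TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO revenue (client, amount) VALUES (:client, :amount)", rows
    )
    conn.commit()

source = [
    {"client": "  acme corp ", "amount": 1200.0},
    {"client": "Globex", "amount": None},   # dropped by the quality filter
    {"client": "initech", "amount": 450.5},
]

conn = sqlite3.connect(":memory:")
load(conn, transform(extract(source)))
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM revenue").fetchone()
print(total)  # (2, 1650.5)
```

In production the three steps would typically be separate Airflow tasks with retries and monitoring; the shape of the code, extract, transform, load as composable functions, stays the same.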

Posted 2 days ago

2.0 - 4.0 years

10 - 18 Lacs

Bengaluru

Work from Office

Role & responsibilities:
- Design and Build Data Infrastructure: Develop scalable data pipelines and data lake/warehouse solutions for real-time and batch data using cloud and open-source tools.
- Develop & Automate Data Workflows: Create Python-based ETL/ELT processes for data ingestion, validation, integration, and transformation across multiple sources.
- Ensure Data Quality & Governance: Implement monitoring systems, resolve data quality issues, and enforce data governance and security best practices.
- Collaborate & Mentor: Work with cross-functional teams to deliver data solutions, and mentor junior engineers as the team grows.
- Explore New Tech: Research and implement emerging tools and technologies to improve system performance and scalability.
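As a rough sketch of the data-quality monitoring such a role involves (the rule names, record shape, and thresholds below are made up for illustration, not from the listing):

```python
def run_quality_checks(records):
    """Apply simple data-quality rules and return a rule -> failing-row-count report."""
    failures = {"missing_id": 0, "negative_amount": 0, "bad_country_code": 0}
    for r in records:
        if not r.get("id"):
            failures["missing_id"] += 1          # completeness check
        if r.get("amount", 0) < 0:
            failures["negative_amount"] += 1     # validity check
        if len(r.get("country", "")) != 2:
            failures["bad_country_code"] += 1    # conformity check (ISO 3166 alpha-2)
    return failures

batch = [
    {"id": "a1", "amount": 10.0, "country": "IN"},
    {"id": "",   "amount": -5.0, "country": "IND"},  # fails all three rules
    {"id": "a3", "amount": 7.5,  "country": "US"},
]

report = run_quality_checks(batch)
print(report)  # {'missing_id': 1, 'negative_amount': 1, 'bad_country_code': 1}
```

Real deployments usually express such rules declaratively in a data-quality framework and alert when failure counts cross a threshold, but the check-and-report loop is the core of it.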

Posted 2 days ago

7.0 - 12.0 years

5 - 15 Lacs

Bengaluru

Work from Office

BE/B.Tech or equivalent. The data modeler designs, implements, and documents data architecture and data modeling solutions, including relational, dimensional, and NoSQL databases. These solutions support enterprise information management, business intelligence, machine learning, data science, and other business interests. The role requires 7+ years of hands-on relational, dimensional, and/or analytic experience (using RDBMS, dimensional, and NoSQL data platform technologies, and ETL and data ingestion protocols). Experience with data warehouse, data lake, and enterprise big data platforms in multi-data-center contexts is required, as is good knowledge of metadata management, data modeling, and related tools (Erwin, ER/Studio, or others). Experience in team management, communication, and presentation is expected. The data modeler is responsible for developing conceptual, logical, and physical data models; implementing RDBMS, operational data stores (ODS), data marts, and data lakes on target platforms (SQL/NoSQL); and overseeing and governing the expansion of existing data architecture and the optimization of data query performance via best practices. The candidate must be able to work both independently and collaboratively.

Responsibilities:
- Implement business and IT data requirements through new data strategies and designs across all data platforms (relational, dimensional, and NoSQL) and data tools (reporting, visualization, analytics, and machine learning).
- Work with business and application/solution teams to implement data strategies, build data flows, and develop conceptual/logical/physical data models.
- Define and govern data modeling and design standards, tools, best practices, and related development for enterprise data models.
- Identify the architecture, infrastructure, and interfaces to data sources, tools supporting automated data loads, security concerns, analytic models, and data visualization.
- Hands-on modeling, design, configuration, installation, performance tuning, and sandbox POCs.
- Work proactively and independently to address project requirements, and articulate issues/challenges to reduce project delivery risks.

Required Skills: Data Modeling, Dimensional Modeling, Erwin, Data Management, RDBMS, SQL/NoSQL, ETL
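The dimensional-modeling skills listed above come down to structures like a star schema: a fact table keyed to dimension tables. A toy version in SQLite (table and column names are invented for illustration only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One dimension table and one fact table keyed to it: a minimal star schema.
cur.executescript("""
CREATE TABLE dim_product (
    product_key  INTEGER PRIMARY KEY,
    product_name TEXT,
    category     TEXT
);
CREATE TABLE fact_sales (
    sale_id     INTEGER PRIMARY KEY,
    product_key INTEGER REFERENCES dim_product(product_key),
    quantity    INTEGER,
    revenue     REAL
);
""")

cur.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
                [(1, "Widget", "Hardware"), (2, "Gadget", "Hardware")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                [(10, 1, 3, 30.0), (11, 2, 1, 25.0), (12, 1, 2, 20.0)])

# A typical dimensional query: facts aggregated by a dimension attribute.
row = cur.execute("""
    SELECT d.category, SUM(f.revenue)
    FROM fact_sales f JOIN dim_product d USING (product_key)
    GROUP BY d.category
""").fetchone()
print(row)  # ('Hardware', 75.0)
```

Tools like Erwin or ER/Studio model exactly these entities and relationships at the conceptual/logical level before they are generated as physical DDL.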

Posted 2 days ago

4.0 - 9.0 years

10 - 14 Lacs

Pune

Work from Office

Job Title: Sales Excellence - Client Success - Data Engineering Specialist - CF
Management Level: ML9
Location: Open
Must have skills: GCP, SQL, Data Engineering, Python
Good to have skills: Managing ETL pipelines

Job Summary:
We are: Sales Excellence. Sales Excellence at Accenture empowers our people to compete, win, and grow. We provide everything they need to grow their client portfolios, optimize their deals, and enable their sales talent, all driven by sales intelligence. The team will be aligned to Client Success, a new function supporting Accenture's approach to putting client value and client experience at the heart of everything we do to foster client love. Our ambition is that every client loves working with Accenture and believes we're the ideal partner to help them create and realize their vision for the future, beyond their expectations.

You are: A builder at heart, curious about new tools and their usefulness, eager to create prototypes, and adaptable to changing paths. You enjoy sharing your experiments with a small team and are responsive to the needs of your clients.

The work: The Center of Excellence (COE) enables Sales Excellence to deliver best-in-class service offerings to Accenture leaders, practitioners, and sales teams. As a member of the COE Analytics Tools & Reporting team, you will help build and enhance the data foundation for reporting and analytics tools, providing insights into underlying trends and key drivers of the business.

Roles & Responsibilities:
- Collaborate with Client Success, the Analytics COE, the CIO Engineering/DevOps team, and stakeholders to build and enhance the Client Success data lake.
- Write complex SQL scripts to transform data for dashboards or reports, and validate the accuracy and completeness of the data.
- Build automated solutions to support business operations and data transfers.
- Document and build efficient data models for reporting and analytics use cases.
- Assure the accuracy, consistency, and timeliness of data lake data while ensuring user acceptance and satisfaction.
- Work with Client Success, Sales Excellence COE members, the CIO Engineering/DevOps team, and Analytics Leads to standardize data in the data lake.

Professional & Technical Skills:
- Bachelor's degree or equivalent experience in Data Engineering, analytics, or a similar field.
- At least 4 years of professional experience in developing and managing ETL pipelines.
- A minimum of 2 years of GCP experience.
- Ability to write complex SQL and prepare data for dashboarding.
- Experience in managing and documenting data models.
- Understanding of data governance and policies.
- Proficiency in Python and SQL scripting.
- Ability to translate business requirements into technical specifications for the engineering team.
- Curiosity, creativity, a collaborative attitude, and attention to detail.
- Ability to explain technical information to technical as well as non-technical users.
- Ability to work remotely with minimal supervision in a global environment.
- Proficiency with Microsoft Office tools.

Additional Information:
- Master's degree in analytics or a similar field.
- Data visualization or reporting using text data as well as sales, pricing, and finance data.
- Ability to prioritize workload and manage downstream stakeholders.

About Our Company | Accenture

Qualification
Experience: Minimum 5+ years of experience is required
Educational Qualification: Bachelor's degree or equivalent experience in Data Engineering, analytics, or a similar field

Posted 2 days ago

3.0 - 8.0 years

3 - 7 Lacs

Hyderabad

Work from Office

Project Role: Application Support Engineer
Project Role Description: Act as software detectives; provide a dynamic service identifying and solving issues within multiple components of critical business systems.
Must have skills: EPIC Systems
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Support Engineer, you will act as a software detective, providing a dynamic service that identifies and resolves issues within various components of critical business systems. Your typical day will involve collaborating with team members to troubleshoot software problems, analyzing system performance, and ensuring that applications run smoothly to support business operations effectively. You will engage with users to understand their challenges and work diligently to implement solutions that enhance system functionality and user experience.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Assist in the development and implementation of application support processes to improve efficiency.
- Provide training and support to junior team members to enhance their skills and knowledge.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in EPIC Systems.
- Strong analytical skills to diagnose and resolve software issues.
- Experience with troubleshooting and debugging applications.
- Familiarity with system integration and data flow management.
- Ability to communicate technical information effectively to non-technical stakeholders.

Additional Information:
- The candidate should have a minimum of 3 years of experience in EPIC Systems.
- This position is based at our Hyderabad office.
- 15 years of full time education is required.

Posted 2 days ago

15.0 - 20.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Project Role: Application Support Engineer
Project Role Description: Act as software detectives; provide a dynamic service identifying and solving issues within multiple components of critical business systems.
Must have skills: Cloud Data Architecture
Good to have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

Data Architect
Kemper is seeking a Data Architect to join our team. You will work as part of a distributed team and with Infrastructure, Enterprise Data Services, and Application Development teams to coordinate the creation, enrichment, and movement of data throughout the enterprise. Your central responsibility as an architect will be improving the consistency, timeliness, quality, security, and delivery of data as part of Kemper's Data Governance framework. In addition, the Architect must streamline data flows and optimize cost management in a hybrid cloud environment. Your duties may include assessing architectural models and supervising data migrations across IaaS, PaaS, SaaS, and on-premises systems, as well as data platform selection and on-boarding of data management solutions that meet the technical and operational needs of the company. To succeed in this role, you should know how to examine new and legacy requirements and define cost-effective patterns to be implemented by other teams. You must then be able to represent the required patterns during implementation projects. The ideal candidate will have proven experience in cloud (Snowflake, AWS, and Azure) architectural analysis and management.

Responsibilities:
- Define architectural standards and guidelines for data products and processes. Assess and document when and how to use existing and newly architected producers and consumers, the technologies to be used for various purposes, and models of selected entities and processes. The guidelines should encourage reuse of existing data products, as well as address issues of security, timeliness, and quality.
- Work with Information & Insights, Data Governance, Business Data Stewards, and Implementation teams to define standard and ad-hoc data products and data product sets.
- Work with Enterprise Architecture, Security, and Implementation teams to define the transformation of data products throughout hybrid cloud environments, assuring that both functional and non-functional requirements are addressed. This includes ownership, frequency of movement, the source and destination of each step, how the data is transformed as it moves, and any aggregation or calculations.
- Work with Data Governance and project teams to model and map data sources, including descriptions of the business meaning of the data, its uses, its quality, the applications that maintain it, and the technologies in which it is stored. Documentation of a data source must describe the semantics of the data so that occasional subtle differences in meaning are understood.
- Define integrative views of data to draw together data from across the enterprise. Some views will use stores of extracted data and others will bring together data in near real time. Solutions must consider data currency, availability, response times, data volumes, etc.
- Work with modeling and storage teams to define conceptual, logical, and physical data views, limiting technical debt as data flows through transformations.
- Investigate and lead participation in POCs of emerging technologies and practices.
- Leverage and evolve existing core data products and patterns.
- Communicate and lead understanding of data architectural services across the enterprise.
- Ensure a focus on data quality by working effectively with data and system stewards.

Qualifications:
- Bachelor's degree in Computer Science, Computer Engineering, or equivalent experience.
- A minimum of 3 years of experience in a similar role.
- Demonstrable knowledge of Secure DevOps and SDLC processes.
- AWS or Azure experience is a must.
- Experience with Data Vault 2 is required; Snowflake is a plus.
- Familiarity with system concepts and tools within an enterprise architecture framework, including cataloging, MDM, RDM, data lakes, storage patterns, etc.
- Excellent organizational and analytical abilities; outstanding problem solver.
- Good written and verbal communication skills.

Qualification: 15 years full time education

Posted 2 days ago

12.0 - 15.0 years

7 - 12 Lacs

Hyderabad

Work from Office

Project Role: Cloud Migration Engineer
Project Role Description: Provides assessment of existing solutions and infrastructure to migrate to the cloud. Plan, deliver, and implement application and data migration with scalable, high-performance solutions using private and public cloud technologies, driving next-generation business outcomes.
Must have skills: Data Warehouse ETL Testing
Good to have skills: Risk Analytics Modelling and Reporting
Minimum 12 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Cloud Migration Engineer, you will provide a comprehensive assessment of existing solutions and infrastructure to facilitate their migration to the cloud. Your typical day will involve planning, delivering, and implementing application and data migration strategies that leverage both private and public cloud technologies. You will focus on creating scalable and high-performance solutions that drive next-generation business outcomes, ensuring that the migration process aligns with organizational goals and enhances operational efficiency.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform; responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Facilitate knowledge sharing and training sessions to enhance team capabilities.
- Monitor and evaluate the performance of migrated applications to ensure optimal functionality.
- Responsible for the specification, design, build, and test of the StatistiX business services.
- Proactively identify improvement areas and work towards their implementation.
- Deliver software changes in a reliable and transparent manner, in scope, time, and budget.
- Align proactively with project managers, peers from Clearing & Risk IT, and neighboring IT teams, as well as IT Operations and control functions, where required.
- Foster an environment of proactive communication and strive for cross-location collaboration and continuous improvement.
- Work actively on the team's and team members' personal development, making an effective contribution to increasing the team's diversity.
- Manage project priorities, deadlines, and deliverables.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in data modeling and the data lake stack.
- Good To Have Skills: Project management skills, capital markets knowledge, cryptography, privileged access management, security information and event management, vulnerability management.
- Solid understanding of data modelling, data warehousing, and data platform design.
- Strong understanding of cloud migration strategies and best practices.
- Experience with data integration tools and methodologies.
- Familiarity with performance tuning and optimization techniques for cloud environments.
- Good know-how of financial markets; know-how of clearing, trading, and risk business processes will be an added advantage.
- Strong influencing and communication skills to liaise with teams across different demographics.
- Proficiency in written and spoken English is a must; German know-how is an added advantage.

Additional Information:
- The candidate should have a minimum of 12 years of experience in Data Warehouse ETL Testing.
- This position is based at our Hyderabad office.
- 15 years of full time education is required.

Posted 2 days ago

15.0 - 20.0 years

9 - 13 Lacs

Bengaluru

Work from Office

Project Role: Software Development Lead
Project Role Description: Develop and configure software systems, either end-to-end or for a specific stage of the product lifecycle. Apply knowledge of technologies, applications, methodologies, processes, and tools to support a client, project, or entity.
Must have skills: Manufacturing Engineering MES
Good to have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Software Development Lead, you will be responsible for developing and configuring software systems, either end-to-end or for specific stages of the product lifecycle. Your typical day will involve collaborating with various teams to ensure that the software meets the required specifications and quality standards. You will apply your knowledge of technologies and methodologies to support projects effectively, ensuring that all aspects of the software development process are executed smoothly and efficiently. Engaging with stakeholders, you will gather requirements and provide insights that drive the project forward, while also mentoring team members to enhance their skills and performance.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform; responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge sharing sessions to enhance team capabilities.
- Monitor project progress and ensure timely delivery of milestones.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Manufacturing Engineering MES.
- Good To Have Skills: Experience with software development methodologies such as Agile or Scrum.
- Strong understanding of system integration and data flow within manufacturing environments.
- Experience with programming languages relevant to MES development.
- Familiarity with database management systems and data analytics tools.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Manufacturing Engineering MES.
- This position is based at our Bengaluru office.
- 15 years of full time education is required.

Posted 2 days ago

5.0 - 10.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must have skills: Microsoft Azure Data Services
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. You will create data pipelines, ensure data quality, and implement ETL processes to migrate and deploy data across systems, and be involved in the end-to-end data management process.

Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Lead data architecture design and implementation.
- Optimize data delivery and re-design infrastructure for greater scalability.
- Implement data security and privacy measures.
- Collaborate with data scientists and analysts to understand data needs.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Microsoft Azure Data Services.
- Strong understanding of cloud-based data solutions.
- Experience with data warehousing and data lakes.
- Knowledge of SQL and NoSQL databases.
- Hands-on experience with data integration tools.
- Good To Have Skills: Experience with Azure Machine Learning.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft Azure Data Services.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Posted 2 days ago

12.0 - 15.0 years

5 - 5 Lacs

Thiruvananthapuram

Work from Office

Senior Data Architect - Big Data & Cloud Solutions
Experience: 10+ Years
Industry: Information Technology / Data Engineering / Cloud Computing

Job Summary: We are seeking a highly experienced and visionary Data Architect to lead the design and implementation of scalable, high-performance data solutions. The ideal candidate will have deep expertise in Apache Kafka, Apache Spark, AWS Glue, PySpark, and cloud-native architectures, with a strong background in solution architecture and enterprise data strategy.

Key Responsibilities:
- Design and implement end-to-end data architecture solutions on AWS using Glue, S3, Redshift, and other services.
- Architect and optimize real-time data pipelines using Apache Kafka and Spark Streaming.
- Lead the development of ETL/ELT workflows using PySpark and AWS Glue.
- Collaborate with stakeholders to define data strategies, governance, and best practices.
- Ensure data quality, security, and compliance across all data platforms.
- Provide technical leadership and mentorship to data engineers and developers.
- Evaluate and recommend new tools and technologies to improve data infrastructure.
- Translate business requirements into scalable and maintainable data solutions.

Required Skills & Qualifications:
- 10+ years of experience in data engineering, architecture, or related roles.
- Strong hands-on experience with: Apache Kafka (event streaming, topic design, schema registry); Apache Spark (batch and streaming); AWS Glue, S3, Redshift, Lambda, CloudFormation/Terraform; PySpark for large-scale data processing.
- Proven experience in solution architecture and designing cloud-native data platforms.
- Deep understanding of data modeling, data lakes, and data warehousing concepts.
- Strong programming skills in Python and SQL.
- Experience with CI/CD pipelines and DevOps practices for data workflows.
- Excellent communication and stakeholder management skills.

Preferred Qualifications:
- AWS Certified Solutions Architect or Big Data Specialty certification.
- Experience with data governance tools and frameworks.
- Familiarity with containerization (Docker, Kubernetes) and orchestration tools (Airflow, Step Functions).
- Exposure to machine learning pipelines and MLOps is a plus.

Required Skills: Apache Kafka, PySpark, AWS Cloud
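The real-time pipelines this listing names (Kafka feeding Spark Streaming) are too heavyweight to demo inline, but the core operation, aggregating an event stream over fixed tumbling windows, can be sketched in plain Python. The event shape and window size below are invented for illustration:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Group (timestamp, key) events into fixed windows and count per key.
    This is roughly what a Spark Structured Streaming
    groupBy(window(...), key).count() computes, minus distribution and state."""
    counts = defaultdict(int)
    for ts, key in events:
        # Align each event to the start of its window.
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return dict(counts)

# Simulated stream: (epoch-second timestamp, event key), as a Kafka topic might carry.
stream = [(0, "click"), (10, "click"), (59, "view"), (61, "click"), (130, "view")]
result = tumbling_window_counts(stream)
print(result)
# {(0, 'click'): 2, (0, 'view'): 1, (60, 'click'): 1, (120, 'view'): 1}
```

A production version additionally has to handle late and out-of-order events (watermarks) and keep window state fault-tolerant, which is exactly what engines like Spark Streaming provide.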

Posted 2 days ago

8.0 - 10.0 years

3 - 6 Lacs

Chennai

Work from Office

Naukri logo

About the Role: Senior Business Intelligence Analyst

The Business Intelligence Analyst is responsible for collecting and analyzing data from multiple source systems to help the organization make better business decisions. This role is crucial in maintaining data quality, compliance, and accessibility while driving data-driven decision-making and reporting for Mindsprint clients. The role requires a combination of OLAM business domain expertise, problem-solving skills, and business acumen.

Responsibilities:
- Create, review, validate, and manage data as it is collected; act as custodian of the data being generated.
- Develop policies and procedures for the collection and analysis of data.
- Apply analytical skills to derive meaningful insights from data, and generate predictive and insightful reports.
- Build daily reports and schedule internal weekly and monthly meetings, preparing in advance to share relevant and beneficial information.
- Data Ownership: Assume ownership of specific datasets, data dictionaries, metadata, and master data, and ensure data accuracy, completeness, and relevance.
- Data Integration: Collaborate with system owners, data engineers, domain experts, and integration teams to facilitate the smooth integration of financial data from multiple systems/entities into the financial transactional and analytical data marts.
- Data Quality Assurance: Establish and enforce data quality standards and policies within the financial domain; collaborate with data engineers, analytics teams, data stewards, and data custodians to monitor and improve data quality.
- Data Access Control: Control and manage access to data, ensuring appropriate permissions and security measures are in place; monitor and audit data access to prevent unauthorized use.
- Data Reporting and Analysis: Collaborate with finance teams to generate accurate and timely financial reports; perform data analysis to identify trends, anomalies, and insights in financial data, supporting financial modelling, forecasting, and predictive decision-making.
- Collaborate with co-workers and management to implement improvements.

Job Qualifications:
- Master's/Bachelor's degree in finance and accounting or related fields; an advanced degree is a plus.
- Proven experience in financial data management, data governance, and data analysis.
- Demonstrated ability to approach complex problems with analytical and critical thinking skills.
- Excellent written and verbal communication skills.
- Leadership skills and the ability to collaborate effectively with cross-functional teams.
- Ability to influence and interact with senior management.

Preferred Qualifications & Skills:
- Knowledge of Big Data, Data Lake, Azure Data Factory (ADF), Snowflake, Databricks, Synapse, MonteCarlo, Atlin, and DevOps tools like DBT.
- Agile project management skills with knowledge of JIRA and Confluence.
- Good understanding of financial concepts such as balance sheet, P&L, TB, direct costs management, fair value, book value, production/standard costs, stock valuations, ratios, and sustainability finance.
- Experience working with ERP data, especially SAP FI and SAP CO.
- A strategic mindset and the ability to identify opportunities to use data to drive business growth, thinking creatively to find innovative solutions to complex problems.

Posted 2 days ago

Apply

4.0 - 9.0 years

11 - 20 Lacs

Pune

Work from Office
Role: Sr. Database Engineer
Location: Pune
Exp: 5-7 years

Job Description: Oracle PL/SQL Developer

The Associate shall perform the role of PL/SQL Developer and shall be responsible for the following:
• Hands-on coding in SQL and PL/SQL
• Create/implement database architecture for new applications and enhancements to existing applications
• Hands-on experience in data modeling, SSAS, cubes, and query optimization
• Create/implement strategies for partitioning, archiving, and maturity models for applications
• Review queries created by other developers for adherence to standards and performance issues
• PL/SQL, T-SQL, SQL query optimization, data models, data lakes
• Interact with database and application analysts and business users for estimations
• Perform impact analysis of existing applications and suggest the best ways of incorporating new requirements
• Proactively engage in the remediation of software issues related to code quality, security, and/or patterns/frameworks

Interested candidates can share their resume at Neesha1@damcogroup.com

Posted 3 days ago

Apply

Exploring Data Lake Jobs in India

The data lake job market in India is experiencing significant growth as organizations continue to invest in big data technologies to drive business insights and decision-making. Data lake professionals are in high demand across various industries, offering lucrative career opportunities for job seekers with relevant skills and experience.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Pune
  4. Hyderabad
  5. Delhi/NCR

Average Salary Range

The average salary range for data lake professionals in India varies based on experience levels. Entry-level positions may start at around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-15 lakhs per annum.

Career Path

Typically, a career in data lake progresses from roles such as Data Engineer or Data Analyst to Senior Data Engineer, Data Architect, and eventually to a Data Science Manager or Chief Data Officer. Advancement in this field is often based on gaining experience working with large datasets, implementing data management best practices, and demonstrating strong problem-solving skills.

Related Skills

In addition to expertise in data lake technologies like Apache Hadoop, Apache Spark, and AWS S3, data lake professionals are often expected to have skills in data modeling, data warehousing, SQL, programming languages like Python or Java, and experience with ETL (Extract, Transform, Load) processes.
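As a concrete (and deliberately simplified) illustration of the ETL process mentioned above, the sketch below implements extract, transform, and load stages in pure Python. The CSV data, field names, and the "load" target are hypothetical; a production pipeline would typically use Spark, Azure Data Factory, or a similar engine rather than hand-rolled code:

```python
import csv
import io

def extract(raw_csv: str) -> list:
    """Extract: parse raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list) -> list:
    """Transform: normalize names, cast amounts, and drop malformed rows."""
    cleaned = []
    for row in rows:
        try:
            cleaned.append({
                "customer": row["customer"].strip().title(),
                "amount": float(row["amount"]),
            })
        except (KeyError, ValueError):
            continue  # skip records that fail validation
    return cleaned

def load(rows: list) -> dict:
    """Load: aggregate amount per customer (a stand-in for writing to a real sink)."""
    totals = {}
    for row in rows:
        totals[row["customer"]] = totals.get(row["customer"], 0.0) + row["amount"]
    return totals

raw = "customer,amount\n alice ,10.5\nBOB,4.5\nalice,2.0\nbad,notanumber\n"
result = load(transform(extract(raw)))
print(result)  # {'Alice': 12.5, 'Bob': 4.5}
```

Keeping the three stages as separate functions mirrors how real pipelines isolate ingestion, cleansing, and loading so each step can be tested and retried independently.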

Interview Questions

  • What is a data lake and how does it differ from a data warehouse? (basic)
  • Explain the components of Hadoop ecosystem and their roles in data processing. (medium)
  • How do you ensure data quality and consistency in a data lake environment? (medium)
  • What are the key challenges of managing metadata in a data lake? (advanced)
  • Can you explain how data partitioning works in Apache Spark? (medium)
  • What are the best practices for optimizing data storage in a data lake? (advanced)
  • Describe a complex data transformation process you implemented in a data lake project. (medium)
  • How do you handle data security and access control in a data lake architecture? (medium)
  • What are the benefits of using columnar storage in a data lake? (basic)
  • Explain the concept of data lineage and its importance in data lake management. (medium)
  • How do you handle schema evolution in a data lake environment? (advanced)
  • What are the differences between batch processing and real-time processing in a data lake? (basic)
  • Can you discuss the role of Apache Hive in data lake analytics? (medium)
  • How do you monitor and troubleshoot performance issues in a data lake cluster? (advanced)
  • What are the key considerations for designing a scalable data lake architecture? (medium)
  • Explain the concept of data lake governance and its impact on data management. (medium)
  • How do you optimize data ingestion processes in a data lake to handle large volumes of data? (medium)
  • Describe a scenario where you had to deal with data quality issues in a data lake project. How did you resolve it? (medium)
  • What are the best practices for data lake security in a cloud environment? (advanced)
  • Can you explain the concept of data catalog and its role in data lake management? (medium)
  • How do you ensure data privacy compliance in a data lake architecture? (medium)
  • What are the advantages of using Apache Flink for real-time data processing in a data lake? (advanced)
  • Describe a successful data lake implementation project you were involved in. What were the key challenges and how did you overcome them? (medium)
  • How do you handle data retention policies in a data lake to ensure data governance and compliance? (medium)
  • What are the key considerations for disaster recovery planning in a data lake environment? (advanced)
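Several of these questions reward having a concrete mental model. For the question on data partitioning, the toy pure-Python sketch below mimics the idea behind Spark's default HashPartitioner (hash the key, take it modulo the partition count); it illustrates the concept only and is not Spark code:

```python
def hash_partition(records, num_partitions, key_fn):
    """Place each record into a partition chosen by hashing its key,
    the same idea Spark's HashPartitioner uses (key hash mod partition count)."""
    partitions = [[] for _ in range(num_partitions)]
    for record in records:
        index = hash(key_fn(record)) % num_partitions
        partitions[index].append(record)
    return partitions

# Hypothetical (country, sale_id) records, keyed by country.
events = [("IN", 1), ("US", 2), ("IN", 3), ("UK", 4), ("US", 5)]
parts = hash_partition(events, 4, key_fn=lambda rec: rec[0])

# Every record with a given key lands in the same partition, which is what
# lets per-key operations (reduceByKey, joins) proceed without reshuffling.
for key in ("IN", "US", "UK"):
    owners = {i for i, part in enumerate(parts) for rec in part if rec[0] == key}
    assert len(owners) == 1
```

Python salts string hashes per process, so the exact partition indices vary between runs, but the property that matters for the interview answer, same key always maps to the same partition within a job, holds regardless.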

Closing Remark

As the demand for data lake professionals continues to rise in India, job seekers should focus on honing their skills in big data technologies and data management practices to stand out in the competitive job market. Prepare thoroughly for interviews by mastering both technical and conceptual aspects of data lake architecture and be confident in showcasing your expertise to potential employers. Good luck in your job search!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

