
490 Data Lake Jobs - Page 16

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

2.0 - 7.0 years

5 - 15 Lacs

Hyderabad, Bengaluru

Work from Office

Job Description - Data Warehouse Senior Engineer / Lead
Location: Bangalore or Hyderabad

Responsibilities
1) Own the design and development of complex data integrations from multiple systems
2) Coordinate with onshore teams to obtain clarity on requirements, scope, etc.
3) Develop high-quality BI reports that meet the needs of the customer
4) Demonstrate good communication and interpersonal skills; be a team player

Qualifications Required
1) Strong knowledge of Azure data warehousing and integration solutions such as Azure Data Factory and Synapse Analytics
2) Working knowledge of the Power Platform - Power BI, Power Apps, Dataverse, Power Automate
3) Working knowledge of Azure app integrations - Logic Apps, Function Apps
4) Good knowledge of Azure data storage solutions - Data Lake, Cosmos DB, Storage accounts, SQL Database
5) Strong data modelling experience (snowflake, dimensional, etc.) and SQL expertise
6) Strong data analysis skills
7) Knowledge of Microsoft Fabric

Optional Skills
1) Knowledge of other data integration/streaming services (Databricks, Azure data streaming services, Event Grid, Kafka, etc.) is a plus.
2) Knowledge of the Microsoft Dynamics 365 platform, including working knowledge of exporting/importing data from Dataverse, is a plus.
3) An Azure data engineering certification is a plus.
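The modelling requirement above (dimensional/snowflake schemas plus strong SQL) boils down to fact and dimension tables joined at query time. A minimal sketch of the pattern, using SQLite in place of an Azure SQL warehouse; every table, column, and value here is invented for illustration:

```python
import sqlite3

# Illustrative star schema: one fact table, two dimensions (names invented).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, calendar_date TEXT);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, product_name TEXT);
CREATE TABLE fact_sales  (
    date_key    INTEGER REFERENCES dim_date(date_key),
    product_key INTEGER REFERENCES dim_product(product_key),
    quantity    INTEGER,
    amount      REAL
);
INSERT INTO dim_date    VALUES (20240101, '2024-01-01'), (20240102, '2024-01-02');
INSERT INTO dim_product VALUES (1, 'Widget'), (2, 'Gadget');
INSERT INTO fact_sales  VALUES
    (20240101, 1, 3, 30.0),
    (20240101, 2, 1, 25.0),
    (20240102, 1, 2, 20.0);
""")

# A typical BI-style query: revenue per product across all dates.
rows = conn.execute("""
    SELECT p.product_name, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY p.product_name
    ORDER BY revenue DESC
""").fetchall()
print(rows)  # [('Widget', 50.0), ('Gadget', 25.0)]
```

A snowflake schema differs only in that the dimensions themselves are further normalized into sub-dimension tables.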

Posted 1 month ago

Apply

3.0 - 7.0 years

5 - 9 Lacs

Hyderabad

Work from Office

We are looking for an experienced Azure Data Engineer with 2+ years of hands-on experience in Azure Data Lake and Azure Data Factory. The ideal candidate will have a strong background in connecting data sources to the Data Lake, writing PySpark SQL code, and building SSIS packages. Additionally, experience in data architecture, data modeling, and creating visualizations is essential.

Key Responsibilities:
- Work with Azure Data Lake and Azure Data Factory to design, implement, and manage data pipelines.
- Connect various data sources (applications, databases, etc.) to the Azure Data Lake for storage and processing.
- Write PySpark SQL code and SSIS packages for data retrieval and transformation from different data sources.
- Design and develop efficient data architecture and data modeling solutions to support business requirements.
- Create data visualizations to communicate insights to stakeholders and decision-makers.
- Optimize data workflows and pipelines for better performance and scalability.
- Collaborate with cross-functional teams to ensure seamless data integration and delivery.
- Ensure data integrity, security, and compliance with best practices.

Skills and Qualifications:
- 2+ years of experience working with Azure Data Lake, Azure Data Factory, and related Azure services.
- Proficiency in writing PySpark SQL code for data extraction and transformation.
- Experience in developing SSIS packages for data integration and automation.
- Strong understanding of data architecture and data modeling concepts.
- Experience in creating effective and insightful data visualizations using tools like Power BI or similar.
- Familiarity with cloud-based storage and computing concepts and best practices.
- Strong problem-solving skills with an ability to troubleshoot and optimize data workflows.
- Ability to collaborate effectively in a team environment and communicate with stakeholders.

Preferred Qualifications:
- Certifications in Azure (e.g., Azure Data Engineer or similar) would be a plus.
- Experience with other Azure tools like Azure Synapse, Databricks, etc.
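Connecting sources to a data lake, as this posting describes, usually means landing raw files in a partitioned folder layout before transformation. As a rough local stand-in for ADLS (not the real service), here is a toy landing helper; the `land` function name, the layout, and the records are all invented:

```python
import json
import tempfile
from datetime import date
from pathlib import Path

# Toy stand-in for landing raw data in a lake: JSON-lines files written into
# the source/year=/month=/day= partition layout lakes commonly use.
def land(root, source, day, records):
    part = (Path(root) / source / f"year={day.year}"
            / f"month={day.month:02d}" / f"day={day.day:02d}")
    part.mkdir(parents=True, exist_ok=True)
    path = part / "part-0000.json"
    path.write_text("\n".join(json.dumps(r) for r in records))
    return path

root = tempfile.mkdtemp()
p = land(root, "orders", date(2024, 1, 2), [{"id": 1}, {"id": 2}])
print(p.relative_to(root).as_posix())  # orders/year=2024/month=01/day=02/part-0000.json
```

In a real pipeline, Data Factory would copy from the source system into ADLS and PySpark would read the partitioned paths back for transformation.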

Posted 1 month ago

Apply

14.0 - 24.0 years

35 - 55 Lacs

Hyderabad, Bengaluru, Delhi / NCR

Hybrid

About the role
We are seeking a Sr. Practice Manager at Insight. You will be involved in all phases of the software development lifecycle, including analysis, design, development, and deployment. We will count on you to be proficient in software design and development, data modelling, data processing, and data visualization. Along the way, you will get to:
- Help customers leverage existing data resources and implement new technologies and tooling to enable data science and data analytics
- Track the performance of our resources and related capabilities
- Mentor and manage other data engineers and ensure data engineering best practices are being followed
- Constantly evolve and scale our capabilities along with the growth of the business and the needs of our customers

Be Ambitious: This opportunity is not just about what you do today but also about where you can go tomorrow. As a Practice Manager, you are positioned for swift advancement within our organization through a structured career path. When you bring your hunger, heart, and harmony to Insight, your potential will be met with continuous opportunities to upskill, earn promotions, and elevate your career.

What we're looking for in a Sr. Practice Manager:
- 14+ years of relevant experience in total, with at least 5-6 years in people management, managing a team of 20+
- Minimum 12 years of experience in data technology
- Experience in data warehousing and excellent command of SQL, data modeling, and ETL development
- Hands-on experience with SQL Server and Microsoft Azure (Data Factory, Data Lake, Databricks)
- Experience in MSBI (SSRS, SSIS, SSAS), writing queries and stored procedures (good to have)
- Experience using Power BI, MDX, DAX, MDS, DQS (good to have)
- Experience developing designs related to predictive analytics models
- Ability to handle performance improvement tasks and data archiving
- Proficiency in provisioning the relevant Azure resources, forecasting hardware usage, and managing to a budget

Posted 1 month ago

Apply

1.0 - 5.0 years

6 - 10 Lacs

Hyderabad

Work from Office

Coeo are trusted data management and analytics experts, delivering technology strategy and support for business. The team have deep technical and commercial experience working with Microsoft Data Services to help our clients optimise their costs and maximise the benefits from their investments in these technologies. Coeo have a strong emphasis on consulting skills, and we expect our team members to be customer-facing and have a growth mindset. This role sits within our consulting team, and we have clear expectations that our team members understand the importance of personal utilisation and have the ability to spot opportunities within our clients that can be passed back to our business development team.

Coeo has been established for over 14 years and has focused exclusively on Microsoft technologies. Our mission is to help our clients predict their future through the better use of data, technology, people, and processes. To do this our business has always focused on:
- Managed Services
- Database Consultancy
- Data Engineering and Analytics Consultancy
- Adoption and Change Management

There has never been a more exciting time to join us: we're a fast-growing professional services and managed services business, and consistent growth is enabling us to expand our project management and delivery teams.

Role Overview:
We are looking for an experienced Data Engineer with a strong background in SQL Server, SSIS, and data warehousing. This role will involve developing and optimizing ETL pipelines, designing data models, and delivering scalable, high-performance data solutions that support analytics and business intelligence.

Key Responsibilities:
- Design and maintain ETL processes using SQL Server Integration Services (SSIS).
- Work with SQL Server to create and optimize queries and stored procedures.
- Build and manage data warehouses to support reporting and analytics.
- Develop scalable data pipelines to support business needs.
- Collaborate with stakeholders to gather requirements and deliver data solutions.
- Monitor and optimize database and pipeline performance.
- Implement data management and ETL best practices.

Required Skills:
- Strong expertise in SQL Server and SSIS.
- In-depth understanding of data warehousing and data modeling concepts.
- Ability to design and optimize stored procedures, functions, and complex queries.
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.

Preferred Qualifications:
- Familiarity with cloud platforms (Azure, AWS, GCP) and data lake architecture.
- Experience with big data tools (Hadoop, Spark) is a plus.
- Mentoring or leadership experience is a bonus.

Education & Experience:
- Bachelor's degree in Computer Science or a related field.
- Several years of hands-on experience in SQL-based data engineering.

Additional Information:
- Hybrid working with flexible office visits in Hyderabad.
- Competitive compensation package with benefits such as healthcare, a gym pass, and more.
- Supportive and inclusive culture with career progression opportunities.
- Apply via our Careers page or visit our LinkedIn, Facebook, and Twitter profiles for more about Coeo.

Diversity and Inclusion:
Coeo is an equal opportunity employer committed to diversity and inclusion. All qualified applicants will be considered.
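The extract-transform-load work described above is normally built in SSIS, but the pattern itself is simple to sketch in plain Python. This is an illustration of the ETL pattern, not of SSIS; the source data, cleansing rules, and table name are invented:

```python
import csv
import io
import sqlite3

# Invented source data: one row has a missing amount and should be dropped.
RAW = """order_id,customer,amount
1,acme,100.50
2,globex,
3,acme,49.50
"""

def extract(text):
    """Read the raw CSV feed into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Cleanse: drop rows with a missing amount, normalise customer names, cast types."""
    out = []
    for r in rows:
        if not r["amount"]:
            continue
        out.append((int(r["order_id"]), r["customer"].upper(), float(r["amount"])))
    return out

def load(rows, conn):
    """Idempotent load into the warehouse table (SQLite stands in for SQL Server)."""
    conn.execute("CREATE TABLE IF NOT EXISTS orders "
                 "(order_id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")
    conn.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(total)  # 150.0
```

An SSIS package expresses the same three stages as a source component, transformation components, and a destination, with the idempotent load making reruns safe.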

Posted 1 month ago

Apply

12.0 - 20.0 years

22 - 37 Lacs

Bengaluru

Hybrid

12+ years of experience in data architecture. Strong in Azure Data Services and Databricks, including Delta Lake and Unity Catalog. Experience in Azure Synapse, Purview, ADF, DBT, Apache Spark, DWH, data lakes, NoSQL, and OLTP.
Notice period: immediate.
Contact: sachin@assertivebs.com

Posted 1 month ago

Apply

8.0 - 12.0 years

30 - 35 Lacs

Bengaluru

Work from Office

Good-to-have skills: cloud, SQL, data analysis.
Location: Pune - Kharadi - WFO - 3 days/week.

Job Description:
We are seeking a highly skilled and experienced Python Lead to join our team. The ideal candidate will have strong expertise in Python coding and development, along with good-to-have skills in cloud technologies, SQL, and data analysis.

Key Responsibilities:
- Lead the development of high-quality, scalable, and robust Python applications.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Ensure the performance, quality, and responsiveness of applications.
- Develop RESTful applications using frameworks like Flask, Django, or FastAPI.
- Utilize Databricks, PySpark SQL, and strong data analysis skills to drive data solutions.
- Implement and manage modern data solutions using Azure Data Factory, Data Lake, and Databricks.

Mandatory Skills:
- Proven experience with cloud platforms (e.g., AWS).
- Strong proficiency in Python, PySpark, and R, and familiarity with additional programming languages such as C++, Rust, or Java.
- Expertise in designing ETL architectures for batch and streaming processes, database technologies (OLTP/OLAP), and SQL.
- Experience with Apache Spark and multi-cloud platforms (AWS, GCP, Azure).
- Knowledge of data governance and GxP data contexts; familiarity with the pharma value chain is a plus.

Good-to-Have Skills:
- Experience with modern data solutions via Azure.
- Knowledge of the principles summarized in the Microsoft Cloud Adoption Framework.
- Additional expertise in SQL and data analysis.

Educational Qualifications:
Bachelor's/Master's degree or equivalent with a focus on software engineering.

If you are a passionate Python developer with a knack for cloud technologies and data analysis, we would love to hear from you. Join us in driving innovation and building cutting-edge solutions!

Posted 1 month ago

Apply

9.0 - 14.0 years

25 - 40 Lacs

Bengaluru

Hybrid

Greetings from tsworks Technologies India Pvt. We are hiring a Sr. Data Engineer - Snowflake with AWS. If you are interested, please share your CV with mohan.kumar@tsworks.io.

Position: Senior Data Engineer
Experience: 9+ years
Location: Bengaluru, India (Hybrid)

Mandatory Required Qualifications
- Strong proficiency in AWS data services such as S3, Glue and the Glue Catalog, EMR, Athena, Redshift, DynamoDB, QuickSight, etc.
- Strong hands-on experience building data lakehouse solutions on Snowflake, using features such as streams, tasks, dynamic tables, data masking, data exchange, etc.
- Hands-on experience using scheduling tools such as Apache Airflow, DBT, and AWS Step Functions, and data governance products such as Collibra.
- Expertise in DevOps and CI/CD implementation.
- Excellent communication skills.

In This Role, You Will
- Design, implement, and manage scalable and efficient data architecture on the AWS cloud platform.
- Develop and maintain data pipelines for efficient data extraction, transformation, and loading (ETL) processes.
- Perform complex data transformations and processing using PySpark (AWS Glue, EMR, or Databricks), Snowflake's data processing capabilities, or other relevant tools.
- Work hands-on with data lake solutions such as Apache Hudi, Delta Lake, or Iceberg.
- Develop and maintain data models within Snowflake and related tools to support reporting, analytics, and business intelligence needs.
- Collaborate with cross-functional teams to understand data requirements and design appropriate data integration solutions.
- Integrate data from various sources, both internal and external, ensuring data quality and consistency.

Skills & Knowledge
- Bachelor's degree in computer science, engineering, or a related field.
- 9+ years of experience in information technology, designing, developing, and executing solutions.
- 4+ years of hands-on experience designing and executing data solutions on the AWS and Snowflake cloud platforms as a data engineer.
- Strong proficiency in AWS services such as Glue, EMR, and Athena, and in Databricks, with file formats such as Parquet and Avro.
- Hands-on experience in data modelling and in batch and real-time pipelines, using Python, Java, or JavaScript, and experience working with RESTful APIs.
- Hands-on experience handling real-time data streams from Kafka or Kinesis.
- Expertise in DevOps and CI/CD implementation.
- Hands-on experience with SQL and NoSQL databases.
- Hands-on experience in data modelling, implementation, and management of OLTP and OLAP systems.
- Knowledge of data quality, governance, and security best practices.
- Familiarity with machine learning concepts and integration of ML pipelines into data workflows.
- Hands-on experience working in an Agile setting.
- Self-driven, naturally curious, and able to adapt to a fast-paced work environment.
- Able to articulate, create, and maintain technical and non-technical documentation.
- AWS and Snowflake certifications are preferred.
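Snowflake streams and tasks, mentioned above, typically feed a MERGE statement that applies changed rows to a target table. As a rough stand-in for that incremental-load pattern (not Snowflake itself), SQLite's UPSERT expresses the same idea; the table and the change batch are invented:

```python
import sqlite3

# Target table with the current state of each customer (invented data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                 [(1, "Asha", "Bengaluru"), (2, "Ravi", "Hyderabad")])

# A batch of changes, as a stream might deliver them: one update, one new row.
changes = [(2, "Ravi", "Pune"), (3, "Meera", "Kochi")]

# MERGE-style apply: insert new keys, update existing ones.
conn.executemany("""
    INSERT INTO customers (id, name, city) VALUES (?, ?, ?)
    ON CONFLICT(id) DO UPDATE SET name = excluded.name, city = excluded.city
""", changes)

rows = conn.execute("SELECT id, city FROM customers ORDER BY id").fetchall()
print(rows)  # [(1, 'Bengaluru'), (2, 'Pune'), (3, 'Kochi')]
```

In Snowflake, a task would run this apply step on a schedule, consuming only the delta the stream has captured since the last run.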

Posted 1 month ago

Apply

2.0 - 5.0 years

5 - 11 Lacs

Hyderabad, Pune

Work from Office

What you'll do on a typical day:
- Data Visualization: Design, develop, and maintain interactive data visualizations and reports using Looker.
- Data Modeling: Create and optimize data models to support business requirements.
- Data Integration: Integrate Looker reports into other applications for enhanced business capabilities.
- Performance Optimization: Monitor and optimize the performance of Looker reports and dashboards.
- Collaboration: Work with business stakeholders to understand their data visualization and business intelligence needs.
- Continuous Improvement: Continuously improve technical design patterns, workflows, and tools, defining and enforcing standards when necessary to sustain the platform's effectiveness and sustainability, and experimenting with and promoting the adoption of new tools and approaches when appropriate.
- Security: Implement security measures on data and ensure compliance with data governance policies.
- Documentation: Document the processes and methodologies used in developing reporting solutions.

What you need to succeed at XPO:
At a minimum, you'll need:
- Qualification: Bachelor's/master's degree in computer science, information technology, or a related field.
- Experience: 3+ years of experience in data analysis, data visualization, and business intelligence using BI tools (Looker, Power BI, Tableau, etc.).
- Technical Skills: Proficiency in writing SQL queries; solid understanding of data warehouse and data modeling concepts.
- Analytical Skills: Strong analytical and problem-solving skills.
- Communication: Excellent communication and teamwork skills.
- Experience with cloud platforms such as Google Cloud Platform and Google BigQuery.
- Experience with programming languages like Python, R, etc.
- Understanding of version control tools: GitHub, SVN, TFS, etc.
- A Google Cloud Platform or Looker certification is a big plus.

Be part of something big.

Posted 1 month ago

Apply

5.0 - 8.0 years

5 - 15 Lacs

Kochi

Work from Office

Job Summary:
We are looking for a seasoned Data Engineer with 5-8 years of experience, specializing in Microsoft Fabric. The ideal candidate will play a key role in designing, building, and optimizing scalable data pipelines and models. You will work closely with analytics and business teams to drive data integration, ensure quality, and support data-driven decision-making in a modern cloud environment.

Key Responsibilities:
- Design, develop, and optimize end-to-end data pipelines using Microsoft Fabric (Data Factory, Dataflows Gen2).
- Create and maintain data models, semantic models, and data marts for analytical and reporting purposes.
- Develop and manage SQL-based ETL processes, integrating various structured and unstructured data sources.
- Collaborate with BI developers and analysts to develop Power BI datasets, dashboards, and reports.
- Implement robust data integration solutions across diverse platforms and sources (on-premises, cloud).
- Ensure data integrity, quality, and governance through automated validation and error-handling mechanisms.
- Work with business stakeholders to understand data requirements and translate them into technical specifications.
- Optimize data workflows for performance and cost-efficiency in a cloud-first architecture.
- Provide mentorship and technical guidance to junior data engineers.

Required Skills:
- Strong hands-on experience with Microsoft Fabric, including Dataflows Gen2, Pipelines, and OneLake.
- Proficiency in Power BI, including building reports and dashboards and working with semantic models.
- Solid understanding of data modeling techniques: star schema, snowflake, normalization/denormalization.
- Deep experience with SQL, stored procedures, and query optimization.
- Experience in data integration from diverse sources such as APIs, flat files, databases, and streaming data.
- Knowledge of data governance, lineage, and data catalog capabilities within the Microsoft ecosystem.
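The "automated validation and error handling" responsibility above is often implemented as a rule-driven split of incoming rows into accepted and quarantined sets, with rejection reasons recorded for reprocessing. A minimal sketch in plain Python; the rules and field names are invented for illustration:

```python
# Invented validation rules: field name -> predicate a valid value must satisfy.
RULES = {
    "order_id": lambda v: isinstance(v, int) and v > 0,
    "amount":   lambda v: isinstance(v, (int, float)) and v >= 0,
    "country":  lambda v: v in {"IN", "US", "GB"},
}

def validate(rows):
    """Split rows into accepted rows and (row, failed_fields) pairs for quarantine."""
    good, bad = [], []
    for row in rows:
        failed = [f for f, ok in RULES.items() if not ok(row.get(f))]
        if failed:
            bad.append((row, failed))
        else:
            good.append(row)
    return good, bad

rows = [
    {"order_id": 1, "amount": 10.0, "country": "IN"},
    {"order_id": -5, "amount": 10.0, "country": "IN"},   # bad order_id
    {"order_id": 2, "amount": 3.5, "country": "FR"},     # bad country
]
good, bad = validate(rows)
print(len(good), [reasons for _, reasons in bad])  # 1 [['order_id'], ['country']]
```

In a Fabric pipeline the same split would route rejected rows to an error sink (for example a quarantine table) while accepted rows continue downstream.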

Posted 1 month ago

Apply

8.0 - 12.0 years

12 - 15 Lacs

Bengaluru

Work from Office

Job Position: Python Lead
Total experience required: 6+ years
Relevant experience required: around 5 years
Mandatory skills: strong Python coding and development.
Good-to-have skills: cloud, SQL, data analysis.
Location: Pune - Kharadi - WFO - 3 days/week.

Job Description:
We are seeking a highly skilled and experienced Python Lead to join our team. The ideal candidate will have strong expertise in Python coding and development, along with good-to-have skills in cloud technologies, SQL, and data analysis.

Key Responsibilities:
- Lead the development of high-quality, scalable, and robust Python applications.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Ensure the performance, quality, and responsiveness of applications.
- Develop RESTful applications using frameworks like Flask, Django, or FastAPI.
- Utilize Databricks, PySpark SQL, and strong data analysis skills to drive data solutions.
- Implement and manage modern data solutions using Azure Data Factory, Data Lake, and Databricks.

Mandatory Skills:
- Proven experience with cloud platforms (e.g., AWS).
- Strong proficiency in Python, PySpark, and R, and familiarity with additional programming languages such as C++, Rust, or Java.
- Expertise in designing ETL architectures for batch and streaming processes, database technologies (OLTP/OLAP), and SQL.
- Experience with Apache Spark and multi-cloud platforms (AWS, GCP, Azure).
- Knowledge of data governance and GxP data contexts; familiarity with the pharma value chain is a plus.

Good-to-Have Skills:
- Experience with modern data solutions via Azure.
- Knowledge of the principles summarized in the Microsoft Cloud Adoption Framework.
- Additional expertise in SQL and data analysis.

Educational Qualifications:
Bachelor's/Master's degree or equivalent with a focus on software engineering.

If you are a passionate Python developer with a knack for cloud technologies and data analysis, we would love to hear from you. Join us in driving innovation and building cutting-edge solutions!
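The RESTful-service responsibility above would normally use Flask, Django, or FastAPI; to keep this sketch dependency-free it uses the stdlib `http.server` instead. The `/health` route and its payload are invented for illustration:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    """Minimal JSON endpoint: GET /health returns a status payload."""
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Bind to port 0 so the OS picks a free port, then serve in the background.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/health"
reply = json.loads(urllib.request.urlopen(url).read())
print(reply)  # {'status': 'ok'}
server.shutdown()
```

A framework such as FastAPI replaces the manual routing and header handling with declarative route decorators and automatic serialization, but the request/response shape is the same.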

Posted 1 month ago

Apply

7.0 - 12.0 years

18 - 22 Lacs

Hyderabad

Work from Office

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together. We are looking for a talented and hands-on Azure Engineer to join our team. The ideal candidate will have significant experience working on Azure, as well as a solid background in cloud data engineering, data pipelines, and analytics solutions. You will be responsible for designing, building, and managing scalable data architectures, enabling seamless data integration, and leveraging advanced analytics capabilities to drive business insights. 
Primary Responsibilities

Azure Platform Implementation:
- Develop, manage, and optimize data pipelines using the AML workspace on Azure.
- Design and implement end-to-end data processing workflows, leveraging Databricks notebooks and jobs for data transformation, modeling, and analysis.
- Build and maintain scalable data models in Databricks using Apache Spark for big data processing.
- Integrate Databricks with other Azure services, including Azure Data Lake, Azure Synapse, and Azure Blob Storage.

Data Engineering & ETL Development:
- Design and implement robust ETL/ELT pipelines to ingest, transform, and load large volumes of data.
- Optimize data processing jobs for performance, reliability, and scalability.
- Use Apache Spark and other Databricks features to process structured, semi-structured, and unstructured data efficiently.

Azure Cloud Architecture:
- Work with Azure cloud services to design and deploy cloud-based data solutions.
- Architect and implement data lakes, data warehouses, and analytics solutions within the Azure ecosystem.
- Ensure security, compliance, and governance best practices for cloud-based data solutions.

Collaboration & Analytics:
- Collaborate with data scientists, analysts, and business stakeholders to deliver actionable insights.
- Build advanced analytics models and solutions using Databricks, leveraging Python, SQL, and Spark-based technologies.
- Provide guidance and technical expertise to other teams on best practices for working with Databricks and Azure.

Performance Optimization & Monitoring:
- Monitor and optimize the performance of data pipelines and Databricks jobs.
- Troubleshoot and resolve performance and reliability issues within the data engineering pipelines.
- Ensure high availability, fault tolerance, and efficient resource utilization on Databricks.

Continuous Improvement:
- Stay up to date with the latest features of Databricks, Azure, and related technologies.
- Continuously improve data architectures, pipelines, and processes for better performance and scalability.
- Propose and implement innovative solutions to meet evolving business needs.

Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or reassignment to different work locations, changes in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
- 10+ years of hands-on experience with the Azure ecosystem.
- Solid experience with cloud-based data engineering, particularly with Azure services (Azure Data Lake, Azure Synapse, Azure Blob Storage, etc.).
- Experience with Databricks notebooks and managing Databricks environments.
- Hands-on experience with data storage technologies (data lake, data warehouse, blob storage).
- Solid knowledge of SQL and Python for data processing and transformation.
- Familiarity with cloud infrastructure management on Azure and using Azure DevOps for CI/CD.
- Solid understanding of data modeling, data warehousing, and data lake architectures.
- Expertise in building and managing ETL/ELT pipelines using Apache Spark, Databricks, and related technologies.
- Proficiency in Apache Spark (PySpark, Scala, SQL).
- Proven problem-solving skills with a proactive approach to identifying and addressing issues.
- Proven ability to communicate complex technical concepts to non-technical stakeholders.
- Excellent collaboration skills for working effectively with cross-functional teams.

Preferred Qualifications
- Certifications in Azure (Azure Data Engineer, Azure Solutions Architect).
- Experience with advanced analytics techniques, including machine learning and AI, using Databricks.
- Experience with other big data processing frameworks or platforms.
- Experience with data governance and security best practices in cloud environments.
- Knowledge of DevOps practices and CI/CD pipelines for cloud environments.

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location, and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.

Posted 1 month ago

Apply

4.0 - 7.0 years

10 - 14 Lacs

Gurugram

Work from Office

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
As a Senior Data Engineering Analyst, you will be instrumental in driving our data initiatives and enhancing our data infrastructure to support strategic decision-making and business operations. You will lead the design, development, and optimization of complex data pipelines and architectures, ensuring the efficient collection, storage, and processing of large volumes of data from diverse sources. Leveraging your advanced expertise in data modeling and database management, you will ensure that our data systems are scalable, reliable, and optimized for high performance.

A core aspect of your role will involve developing and maintaining robust ETL (Extract, Transform, Load) processes to facilitate seamless data integration and transformation, thereby supporting our analytics and reporting efforts. You will implement best practices in data warehousing and data lake management, organizing and structuring data to enable easy access and analysis for various stakeholders across the organization. Ensuring data quality and integrity will be paramount: you will establish and enforce rigorous data validation and cleansing procedures to maintain high standards of accuracy and consistency within our data repositories.

In collaboration with cross-functional teams, including data scientists, business analysts, and IT professionals, you will gather and understand their data requirements, delivering tailored technical solutions that align with business objectives. Your ability to communicate complex technical concepts to non-technical stakeholders will be essential in fostering collaboration and ensuring alignment across departments. Additionally, you will mentor and provide guidance to junior data engineers and analysts, promoting a culture of continuous learning and professional growth within the data engineering team.

You will take a proactive role in performance tuning and optimization of our data systems, identifying and resolving bottlenecks to enhance efficiency and reduce latency. Staying abreast of the latest advancements in data engineering technologies and methodologies, you will recommend and implement innovative solutions that drive our data capabilities forward. Your strategic input will be invaluable in planning and executing data migration and integration projects, ensuring seamless transitions between systems with minimal disruption to operations.

Maintaining comprehensive documentation of data processes, architectural designs, and technical specifications will be a key responsibility, supporting knowledge sharing and maintaining organizational standards. You will generate detailed reports on data quality, system performance, and the effectiveness of data engineering initiatives, providing valuable insights to inform strategic decisions. Additionally, you will oversee data governance protocols, ensuring compliance with relevant data protection regulations and industry standards, thereby safeguarding the integrity and security of our data assets.

Your leadership and expertise will contribute significantly to the enhancement of our data infrastructure, enabling the organization to leverage data-driven insights for sustained growth and competitive advantage. By fostering innovation, ensuring data excellence, and promoting best practices, you will play a critical role in advancing our data engineering capabilities and supporting the overall success of the business.

Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or reassignment to different work locations, changes in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).
The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field.
- Experience: 5+ years in data engineering, data analysis, or a similar role with a proven track record.
- Technical Skills: Advanced proficiency in SQL and experience with relational databases (Oracle, MySQL, SQL Server); expertise in ETL processes and tools; solid understanding of data modeling, data warehousing, and data lake architectures; proficiency in programming languages such as Python or Java; familiarity with cloud platforms (Azure) and their data services; knowledge of data governance principles and data protection regulations (GDPR, HIPAA, CCPA).
- Soft Skills: Proven analytical and problem-solving abilities; solid communication and collaboration skills; leadership experience and the ability to mentor junior team members; a proactive mindset with a commitment to continuous learning and improvement.

Preferred Qualifications
- Relevant certifications.
- Experience with version control systems (Git).

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location, and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.

Posted 1 month ago

Apply

3.0 - 7.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Overall Responsibilities: Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy. Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP. Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements. Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing runtime of ETL processes. Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline. Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem. Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes. Collaboration: Work closely with other data engineers, analysts, product managers, and other stakeholders to understand data requirements and support various data-driven initiatives. Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations.
Category-wise Technical Skills: PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques. Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase. Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala). Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools. Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks. Scripting and Automation: Strong scripting skills in Linux. Experience: 3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform. Proven track record of implementing data engineering best practices. Experience in data ingestion, transformation, and optimization on the Cloudera Data Platform. Day-to-Day Activities: Design, develop, and maintain ETL pipelines using PySpark on CDP. Implement and manage data ingestion processes from various sources. Process, cleanse, and transform large datasets using PySpark. Conduct performance tuning and optimization of ETL processes. Implement data quality checks and validation routines. Automate data workflows using orchestration tools. Monitor pipeline performance and troubleshoot issues. Collaborate with team members to understand data requirements. Maintain documentation of data engineering processes and configurations. Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field. Relevant certifications in PySpark and Cloudera technologies are a plus. Soft Skills: Strong analytical and problem-solving skills. Excellent verbal and written communication abilities.
Ability to work independently and collaboratively in a team environment. Attention to detail and commitment to data quality.
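The listing above repeatedly calls for "data quality checks, monitoring, and validation routines" inside the pipeline. As a minimal sketch of that idea (the rule names, field names, and thresholds here are illustrative assumptions, not tied to any specific CDP pipeline), a row-level validator can split incoming records into accepted and rejected sets, tagging each reject with a reason for downstream monitoring:

```python
# Hypothetical row-level data-quality gate: split rows into valid and
# rejected, recording a reason for each rejection. Field names ("id",
# "amount") and rules are illustrative assumptions for the sketch.

def validate_rows(rows, required_fields=("id", "amount")):
    """Return (valid, rejected) where rejected pairs each row with a reason."""
    valid, rejected = [], []
    for row in rows:
        missing = [f for f in required_fields if row.get(f) is None]
        if missing:
            rejected.append((row, f"missing fields: {missing}"))
        elif row["amount"] < 0:
            rejected.append((row, "negative amount"))
        else:
            valid.append(row)
    return valid, rejected

good, bad = validate_rows([
    {"id": 1, "amount": 10.0},
    {"id": 2, "amount": -5.0},
    {"id": None, "amount": 3.0},
])
print(len(good), len(bad))  # → 1 2
```

In a real PySpark job the same checks would typically be expressed as DataFrame filters, with the reject set written to a quarantine table for alerting.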

Posted 1 month ago

Apply

5.0 - 8.0 years

27 - 42 Lacs

Bengaluru

Work from Office

Years of Exp - 5-12 Yrs. Location - PAN India. OFSAA Data Modeler: Experience in designing, building, and customizing the OFSAA data model, and validating the data model. Excellent knowledge of data model guidelines for staging, processing, and reporting tables. Knowledge of data model support for configuring UDPs and subtype/supertype relationship enhancements. Experience on the OFSAA platform (OFSAAI) with one or more of the following OFSAA modules: OFSAA Financial Solution Data Foundation (preferred); OFSAA Data Integrated Hub (optional). Good in SQL and PL/SQL. Strong in data warehouse principles and ETL/data flow tools. Should have excellent analytical and communication skills. OFSAA Integration SME - DIH/batch run framework: Experience in ETL processes, familiar with OFSAA. DIH setup in EDS, EDD, T2T, etc. Familiar with different seeded tables, SCD, DIM, hierarchies, lookups, etc. Worked with FSDF, knowing the STG, CSA, and FACT table structures. Experience working with different APIs and out-of-the-box connectors, etc. Familiar with Oracle patching and SRs.
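The listing mentions SCD (slowly changing dimension) handling among the seeded-table concepts. The standard Type-2 pattern, which OFSAA batch frameworks also implement, is: when a tracked attribute changes, end-date the current dimension row and insert a new current row. A minimal pure-Python sketch of that close-and-insert logic (column names like `valid_from`/`is_current` are illustrative assumptions, not OFSAA's actual schema):

```python
from datetime import date

# Sketch of SCD Type 2: close the current version of a changed row and
# append a new current version. Column names are illustrative assumptions.

def scd2_apply(dim_rows, incoming, today):
    """dim_rows: dicts with key, attr, valid_from, valid_to, is_current."""
    for new in incoming:
        current = next((r for r in dim_rows
                        if r["key"] == new["key"] and r["is_current"]), None)
        if current is None:
            dim_rows.append({**new, "valid_from": today,
                             "valid_to": None, "is_current": True})
        elif current["attr"] != new["attr"]:
            current["valid_to"] = today          # close the old version
            current["is_current"] = False
            dim_rows.append({**new, "valid_from": today,
                             "valid_to": None, "is_current": True})
        # unchanged rows are left untouched
    return dim_rows

dim = [{"key": "C1", "attr": "Bronze", "valid_from": date(2024, 1, 1),
        "valid_to": None, "is_current": True}]
dim = scd2_apply(dim, [{"key": "C1", "attr": "Gold"}], date(2024, 6, 1))
print(len(dim))  # → 2
```

In SQL this is usually a MERGE or a paired UPDATE + INSERT; the point of Type 2 is that history is preserved, so point-in-time reporting against the staging and reporting tables stays correct.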

Posted 1 month ago

Apply

7.0 - 12.0 years

10 - 18 Lacs

Bengaluru

Hybrid

Job Goals Design and implement resilient data pipelines to ensure data reliability, accuracy, and performance. Collaborate with cross-functional teams to maintain the quality of production services and smoothly integrate data processes. Oversee the implementation of common data models and data transformation pipelines, ensuring alignment with standards. Drive continuous improvement in internal data frameworks and support the hiring process for new Data Engineers. Regularly engage with collaborators to discuss considerations and manage the impact of changes. Support architects in shaping the future of the data platform and help land new capabilities into business-as-usual operations. Identify relevant emerging trends and build compelling cases for adoption, such as tool selection. Ideal Skills & Capabilities A minimum of 6 years of experience in a comparable Data Engineer position is required. Data Engineering Expertise: Proficiency in designing and implementing resilient data pipelines, ensuring data reliability, accuracy, and performance, with practical knowledge of modern cloud data technology stacks (Azure). Technical Proficiency: Experience with Azure Data Factory and Databricks, and skilled in Python, Apache Spark, or other distributed data programming frameworks. Operational Knowledge: In-depth understanding of data concepts, data structures, modelling techniques, and provisioning data to support varying consumption needs, along with accomplished ETL/ELT engineering skills. Automation & DevOps: Experience using DevOps toolchains for managing CI/CD and an automation-first mindset in building solutions, including self-healing and fault-tolerant methods. Data Management Principles: Practical application of data management principles such as security and data privacy, with experience handling sensitive data through techniques like anonymisation/tokenisation/pseudo-anonymisation.
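The listing asks for "self-healing and fault-tolerant methods" in pipeline code. One common building block is retry with exponential backoff around a flaky step (a transient API or storage failure). This is a generic sketch, not tied to Azure Data Factory or Databricks APIs; the function names are invented for the example:

```python
import time

# Generic retry-with-exponential-backoff wrapper: a common "self-healing"
# primitive for transient pipeline failures. Names are illustrative.

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn, retrying on exception with exponentially growing delays."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # exhausted: surface the failure to the orchestrator
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}
def flaky_step():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(with_retries(flaky_step))  # → ok (succeeds on the third attempt)
```

Production variants usually add jitter to the delay, retry only on known-transient exception types, and emit metrics on each retry so monitoring can distinguish self-healed blips from real outages.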

Posted 1 month ago

Apply

5.0 - 10.0 years

13 - 23 Lacs

Pune, Bengaluru, Delhi / NCR

Hybrid

Role & responsibilities: Prefers immediate joiners. Please complete the mandatory questions below to proceed further. Job Summary: The Azure Data Engineer (Standard) is a senior-level role responsible for designing and implementing complex data processing solutions on the Azure platform. They work with other data engineers and architects to develop scalable, reliable, and efficient data pipelines that meet business requirements. Core Skills: - Proficiency in Azure data services such as Azure SQL Database, Azure Cosmos DB, and Azure Data Lake Storage. - Experience with ETL (Extract, Transform, Load) processes and data integration. - Strong SQL and database querying skills. - Familiarity with data modeling and database design. Preferred candidate profile

Posted 2 months ago

Apply

4.0 - 9.0 years

10 - 20 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

JD: • Good experience in Apache Iceberg, Apache Spark, Trino • Proficiency in SQL and data modeling • Experience with open Data Lakehouse using Apache Iceberg • Experience with Data Lakehouse architecture with Apache Iceberg and Trino

Posted 2 months ago

Apply

4.0 - 5.0 years

10 - 14 Lacs

Bengaluru

Work from Office

About the Role - We are seeking a highly skilled and experienced Senior Data Scientist to join our data science team. - As a Senior Data Scientist, you will play a critical role in driving data-driven decision making across the organization by developing and implementing advanced analytical solutions. - You will leverage your expertise in data science, machine learning, and statistical analysis to uncover insights, build predictive models, and solve complex business challenges. Key Responsibilities - Develop and implement statistical and machine learning models (e.g., regression, classification, clustering, time series analysis) to address business problems. - Analyze large and complex datasets to identify trends, patterns, and anomalies. - Develop predictive models for forecasting, churn prediction, customer segmentation, and other business outcomes. - Conduct A/B testing and other experiments to optimize business decisions. - Communicate data insights effectively through visualizations, dashboards, and presentations. - Develop and maintain interactive data dashboards and reports. - Present findings and recommendations to stakeholders in a clear and concise manner. - Work with data engineers to design and implement data pipelines and data warehousing solutions. - Ensure data quality and integrity throughout the data lifecycle. - Develop and maintain data pipelines for data ingestion, transformation, and loading. - Stay up-to-date with the latest advancements in data science, machine learning, and artificial intelligence. - Research and evaluate new technologies and tools to improve data analysis and modeling capabilities. - Explore and implement new data science techniques and methodologies. - Collaborate effectively with data engineers, business analysts, product managers, and other stakeholders. - Communicate technical information clearly and concisely to both technical and non-technical audiences. 
Qualifications Essential - 4+ years of experience as a Data Scientist or in a related data science role. - Strong proficiency in statistical analysis, machine learning algorithms, and data mining techniques. - Experience with programming languages like Python (with libraries like scikit-learn, pandas, NumPy) or R. - Experience with data visualization tools (e.g., Tableau, Power BI). - Experience with data warehousing and data lake technologies. - Excellent analytical, problem-solving, and communication skills. - Master's degree in Statistics, Mathematics, Computer Science, or a related field
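The responsibilities above include conducting A/B tests to optimize business decisions. The textbook tool for comparing two conversion rates is a two-proportion z-test; the sketch below uses only the standard library, with made-up conversion counts (not data from any real experiment):

```python
from math import sqrt
from statistics import NormalDist

# Two-proportion z-test for an A/B experiment: are the conversion rates
# of variants A and B significantly different? Counts are invented.

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)     # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return z, p_value

# A: 200/2000 convert (10%); B: 260/2000 convert (13%)
z, p = two_proportion_z(conv_a=200, n_a=2000, conv_b=260, n_b=2000)
print(round(z, 2), round(p, 4))
```

With these counts the lift is significant at the usual 5% level; in practice the test is paired with a pre-registered sample-size calculation so the experiment is not stopped early on noise.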

Posted 2 months ago

Apply

5.0 - 6.0 years

8 - 13 Lacs

Hyderabad

Work from Office

About the Role - We are seeking a highly skilled and experienced Senior Azure Databricks Engineer to join our dynamic data engineering team. - As a Senior Azure Databricks Engineer, you will play a critical role in designing, developing, and implementing data solutions on the Azure Databricks platform. - You will be responsible for building and maintaining high-performance data pipelines, transforming raw data into valuable insights, and ensuring data quality and reliability. Key Responsibilities - Design, develop, and implement data pipelines and ETL/ELT processes using Azure Databricks. - Develop and optimize Spark applications using Scala or Python for data ingestion, transformation, and analysis. - Leverage Delta Lake for data versioning, ACID transactions, and data sharing. - Utilize Delta Live Tables for building robust and reliable data pipelines. - Design and implement data models for data warehousing and data lakes. - Optimize data structures and schemas for performance and query efficiency. - Ensure data quality and integrity throughout the data lifecycle. - Integrate Azure Databricks with other Azure services (e.g., Azure Data Factory, Azure Synapse Analytics, Azure Blob Storage). - Leverage cloud-based data services to enhance data processing and analysis capabilities. Performance Optimization & Troubleshooting - Monitor and analyze data pipeline performance. - Identify and troubleshoot performance bottlenecks. - Optimize data processing jobs for speed and efficiency. - Collaborate effectively with data engineers, data scientists, data analysts, and other stakeholders. - Communicate technical information clearly and concisely. - Participate in code reviews and contribute to the improvement of development processes. Qualifications Essential - 5+ years of experience in data engineering, with at least 2 years of hands-on experience with Azure Databricks. - Strong proficiency in Python and SQL. 
- Expertise in Apache Spark and its core concepts (RDDs, DataFrames, Datasets). - In-depth knowledge of Delta Lake and its features (e.g., ACID transactions, time travel). - Experience with data warehousing concepts and ETL/ELT processes. - Strong analytical and problem-solving skills. - Excellent communication and interpersonal skills. - Bachelor's degree in Computer Science, Computer Engineering, or a related field.
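The listing leans on Delta Lake, whose central write primitive is MERGE (upsert): update target rows that match on the merge key, insert the rest, all within one ACID transaction. The real API is Spark SQL, but the matched/not-matched semantics can be sketched in plain Python over dicts (field names are illustrative assumptions):

```python
# Pure-Python sketch of the semantics of a Delta Lake MERGE:
#   WHEN MATCHED THEN UPDATE, WHEN NOT MATCHED THEN INSERT.
# In Delta this runs as one atomic transaction; here it is just logic.

def merge_upsert(target, updates, key="id"):
    by_key = {row[key]: dict(row) for row in target}
    for row in updates:
        if row[key] in by_key:
            by_key[row[key]].update(row)   # matched: update in place
        else:
            by_key[row[key]] = dict(row)   # not matched: insert
    return sorted(by_key.values(), key=lambda r: r[key])

target = [{"id": 1, "val": "a"}, {"id": 2, "val": "b"}]
updates = [{"id": 2, "val": "b2"}, {"id": 3, "val": "c"}]
merged = merge_upsert(target, updates)
print(merged)
```

The Spark equivalent is `MERGE INTO target USING updates ON target.id = updates.id WHEN MATCHED THEN UPDATE SET * WHEN NOT MATCHED THEN INSERT *`; Delta's transaction log is also what enables the time travel the listing mentions.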

Posted 2 months ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Dear Candidate, We are hiring a Data Platform Engineer to build scalable infrastructure for data ingestion, processing, and analysis. Key Responsibilities: Architect distributed data systems. Enable data discoverability and quality. Develop data tooling and platform APIs. Required Skills & Qualifications: Experience with Spark, Kafka, and Delta Lake. Proficiency in Python, Scala, or Java. Familiar with cloud-based data platforms. Soft Skills: Strong troubleshooting and problem-solving skills. Ability to work independently and in a team. Excellent communication and documentation skills. Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Kandi Srinivasa Reddy Delivery Manager Integra Technologies

Posted 2 months ago

Apply

6.0 - 11.0 years

20 - 25 Lacs

Noida

Work from Office

Position Overview: Working with the Finance Systems Manager, the role will ensure that the ERP system is available and fit for purpose. The ERP Systems Developer will be developing the ERP system, providing comprehensive day-to-day support and training, and developing the current ERP system for the future. Key Responsibilities: As a Sr. DW BI Developer, the candidate will participate in the design, development, customization, and maintenance of software applications. As a DW BI Developer, the person should analyse the different applications/products, and design and implement the DW using best practices. Rich data governance experience: data security, data quality, provenance/lineage. The candidate will also be maintaining a close working relationship with the other application stakeholders. Experience of developing secured and high-performance web application(s). Knowledge of software development life-cycle methodologies, e.g. Iterative, Waterfall, Agile, etc. Designing and architecting future releases of the platform. Participating in troubleshooting application issues. Jointly working with other teams and partners handling different aspects of the platform creation. Tracking advancements in software development technologies and applying them judiciously in the solution roadmap. Ensuring all quality controls and processes are adhered to. Planning the major and minor releases of the solution. Ensuring robust configuration management. Working closely with the Engineering Manager on different aspects of product lifecycle management. Demonstrate the ability to independently work in a fast-paced environment requiring multitasking and efficient time management. Required Skills and Qualifications: End-to-end lifecycle of data warehousing, data lakes, and reporting. Experience with maintaining/managing data warehouses. Responsible for the design and development of large, scaled-out, real-time, high-performing data lake / data warehouse systems (including big data and cloud).
Strong SQL and analytical skills. Experience in Power BI, Tableau, QlikView, Qlik Sense, etc. Experience in Microsoft Azure services. Experience in developing and supporting ADF pipelines. Experience in Azure SQL Server / Databricks / Azure Analysis Services. Experience in developing tabular models. Experience in working with APIs. Minimum 2 years of experience in a similar role. Experience with data warehousing and data modelling. Strong experience in SQL. 2-6 years of total experience in building DW/BI systems. Experience with ETL and working with large-scale datasets. Proficiency in writing and debugging complex SQLs. Prior experience working with global clients. Hands-on experience with Kafka, Flink, Spark, Snowflake, Airflow, NiFi, Oozie, Pig, Hive, Impala, and Sqoop. Storage such as HDFS, object storage (S3, etc.), RDBMS, MPP, and NoSQL databases. Experience with distributed data management and failover, including databases (relational, NoSQL, big data), data analysis, data processing, data transformation, high availability, and scalability. Experience in end-to-end project implementation in the cloud (Azure / AWS / GCP) as a DW BI Developer. Rich data governance experience: data security, data quality, provenance/lineage. Understanding of industry trends and products in DataOps, continuous intelligence, augmented analytics, and AI/ML. Prior experience of working in clouds like Azure, AWS, and GCP. Prior experience of working with global clients.
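The listing above stresses writing and debugging complex SQL over warehouse data. A staple of that work is a window function, e.g. a per-partition running total over a fact table. The sketch below uses the stdlib `sqlite3` module (modern builds support window functions) so the query is runnable; the `sales` table and its columns are invented for the example:

```python
import sqlite3

# Runnable window-function example against an in-memory SQLite DB:
# running total of sales per region, ordered by month. Table and column
# names are invented for illustration.

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, month TEXT, amount INT);
    INSERT INTO sales VALUES
        ('N', '2024-01', 100), ('N', '2024-02', 150),
        ('S', '2024-01', 80),  ('S', '2024-02', 60);
""")
rows = conn.execute("""
    SELECT region, month, amount,
           SUM(amount) OVER (PARTITION BY region ORDER BY month) AS running
    FROM sales
    ORDER BY region, month
""").fetchall()
for r in rows:
    print(r)
```

The same `SUM(...) OVER (PARTITION BY ... ORDER BY ...)` pattern carries over unchanged to Azure SQL, Databricks SQL, Hive, and Snowflake, which is why it is a common screening question for DW/BI roles.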

Posted 2 months ago

Apply

10.0 - 16.0 years

25 - 27 Lacs

Chennai

Work from Office

We at Dexian India are looking to hire a Cloud Data PM with over 10 years of hands-on experience in AWS/Azure, DWH, and ETL. The role is based in Chennai with a shift from 2.00pm to 11.00pm IST. Key qualifications we seek in candidates include: - Solid understanding of SQL and data modeling - Proficiency in DWH architecture, including EDW/DM concepts and Star/Snowflake schema - Experience in designing and building data pipelines on the Azure Cloud stack - Familiarity with Azure Data Explorer, Data Factory, Databricks, Synapse Analytics, Azure Fabric, Azure Analysis Services, and Azure SQL Data Warehouse - Knowledge of Azure DevOps and CI/CD pipelines - Previous experience managing scrum teams and working as a Scrum Master or Project Manager on at least 2 projects - Exposure to on-premise transactional database environments like Oracle, SQL Server, Snowflake, MySQL, and/or Postgres - Ability to lead enterprise data strategies, including data lake delivery - Proficiency in data visualization tools such as Power BI or Tableau, and statistical analysis using R or Python - Strong problem-solving skills with a track record of deriving business insights from large datasets - Excellent communication skills and the ability to provide strategic direction to technical and business teams - Prior experience in presales, RFP and RFI responses, and proposal writing is mandatory - Capability to explain complex data solutions clearly to senior management - Experience in implementing, managing, and supporting data warehouse projects or applications - Track record of leading full-cycle implementation projects related to Business Intelligence - Strong team and stakeholder management skills - Attention to detail, accuracy, and ability to meet tight deadlines - Knowledge of application development, APIs, microservices, and integration components. Tools & Technology Experience Required: - Strong hands-on experience in SQL or PL/SQL - Proficiency in Python - SSIS or Informatica (one of the tools is mandatory) - BI: Power BI or Tableau (one of the tools is mandatory)

Posted 2 months ago

Apply

2.0 - 5.0 years

2 - 4 Lacs

Mumbai, Mumbai Suburban, Mumbai (All Areas)

Work from Office

Role & responsibilities 3 to 4+ years of hands-on experience in SQL database design, data architecture, ETL, Data Warehousing, Data Mart, Data Lake, Big Data, Cloud, and Data Governance domains. • Take ownership of the technical aspects of implementing data pipeline & migration requirements, ensuring that the platform is being used to its fullest potential through designing and building applications around business stakeholder needs. • Interface directly with stakeholders to gather requirements and own the automated end-to-end data engineering solutions. • Implement data pipelines to automate the ingestion, transformation, and augmentation of structured, unstructured, and real-time data, and provide best practices for pipeline operations. • Troubleshoot and remediate data quality issues raised by pipeline alerts or downstream consumers. Implement Data Governance best practices. • Create and maintain clear documentation on data models/schemas as well as transformation/validation rules. • Implement tools that help data consumers to extract, analyze, and visualize data faster through data pipelines. • Implement data security, privacy, and compliance protocols to ensure safe data handling in line with regulatory requirements. • Optimize data workflows and queries to ensure low latency, high throughput, and cost efficiency. • Leading the entire software lifecycle including hands-on development, code reviews, testing, deployment, and documentation for batch ETLs. • Work directly with our internal product/technical teams to ensure that our technology infrastructure is seamlessly and effectively integrated. • Migrate current data applications & pipelines to the Cloud leveraging technologies in the future. Preferred candidate profile • Graduate with Engineering Degree (CS/Electronics/IT) / MCA / MCS or equivalent with substantial data engineering experience.
• 3+ years of recent hands-on experience with a modern programming language (Scala, Python, Java) is required; Spark/PySpark is preferred. • Experience with configuration management and version control apps (i.e., Git) and experience working within a CI/CD framework is a plus. • 3+ years of recent hands-on SQL programming experience in a Big Data environment is required. • Working knowledge of PostgreSQL, RDBMS, NoSQL, and columnar databases. • Experience developing and maintaining ETL applications and data pipelines using big data technologies is required; Apache Kafka, Spark, and Airflow experience is a must. • Knowledge of API and microservice integration with applications. • Experience with containerization (e.g., Docker) and orchestration (e.g., Kubernetes). • Experience building data solutions for Power BI and web visualization applications. • Experience with Cloud is a plus. • Experience in managing multiple projects and stakeholders with excellent communication and interpersonal skills. • Ability to develop and organize high-quality documentation. • Superior analytical skills and a strong sense of ownership in your work. • Collaborate with data scientists on several projects. Contribute to development and support of analytics including AI/ML. • Ability to thrive in a fast-paced environment, and to manage multiple, competing priorities simultaneously. • Prior Energy & Utilities industry experience is a big plus. Experience (Min. Max. in yrs.): 3+ years of core/relevant experience. Location: Mumbai (Onsite)
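The responsibilities above center on pipelines that ingest, transform, and load data. The staged shape of such a pipeline can be sketched with Python generators, so each stage streams records instead of materializing the whole dataset; the stage names, record format, and filter rule are illustrative assumptions, not a real pipeline:

```python
# Minimal generator-based ingest → transform → load sketch. Each stage
# streams records; stage logic and field names are invented for the example.

def ingest(raw_lines):
    for line in raw_lines:            # extract: parse CSV-ish input rows
        name, qty = line.split(",")
        yield {"name": name.strip(), "qty": int(qty)}

def transform(records):
    for rec in records:               # transform: filter out zero-qty, enrich
        if rec["qty"] > 0:
            yield {**rec, "name": rec["name"].upper()}

def load(records):
    return list(records)              # load: here, just materialize the batch

out = load(transform(ingest(["widget, 3", "gadget, 0", "gizmo, 5"])))
print(out)
```

In production the same staging maps onto Kafka topics or Spark/Airflow tasks, but the design point survives: keep stages small, composable, and streaming-friendly so backpressure and retries stay manageable.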

Posted 2 months ago

Apply

2.0 - 4.0 years

6 - 11 Lacs

Pune

Hybrid

What’s the role all about? As a BI Developer, you’ll be a key contributor to developing reports in a multi-region, multi-tenant SaaS product. You’ll collaborate with the core R&D team to build high-performance reports to serve the use cases of several applications in the suite. How will you make an impact? Take ownership of the software development lifecycle, including design, development, unit testing, and deployment, working closely with QA teams. Ensure that architectural concepts are consistently implemented across the product. Act as a product expert within R&D, understanding the product’s requirements and its market positioning. Work closely with cross-functional teams (Product Managers, Sales, Customer Support, and Services) to ensure successful product delivery. Design and build reports for given requirements. Create design documents and test cases for the reports. Develop SQL to address ad hoc report requirements and conduct analyses. Create visualizations and reports as per the requirements. Execute unit testing, functional and performance testing, and document the results. Conduct peer reviews and ensure quality is met at all stages. Have you got what it takes? Bachelor/Master of Engineering degree in Computer Science, Electronic Engineering, or equivalent from a reputed institute. 2-4 years of BI report development experience. Expertise in SQL and any cloud-based database; able to write SQL against any DB for any business need. Experience in BI tools like Tableau, Power BI, MicroStrategy, etc. Experience working in enterprise data warehouse / data lake systems. Strong knowledge of analytical databases and schemas. Development experience building solutions that leverage SQL and NoSQL databases. Experience/knowledge of Snowflake is an advantage. In-depth understanding of database management systems, online analytical processing (OLAP), and the ETL (extract, transform, load) framework. Experience working in functional testing, performance testing, etc.
Experience with public cloud infrastructure and technologies such as AWS/Azure/GCP etc Experience working in Continuous Integration and Delivery practices using industry standard tools such as Jenkins Experience working in an Agile methodology development environment and using work item management tools like JIRA What’s in it for you? Join an ever-growing, market disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr! Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Reporting into: Tech Manager Role Type: Individual Contributor

Posted 2 months ago

Apply

6.0 - 11.0 years

15 - 30 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Warm Greetings from SP Staffing Services Private Limited!! We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested. Relevant Experience: 6-15 Yrs. Location: Pan India. Job Description: Primarily looking for a data engineer with expertise in processing data pipelines using Databricks Spark SQL on Hadoop distributions like AWS EMR, Databricks, Cloudera, etc. Should be very proficient in doing large-scale data operations using Databricks and overall very comfortable using Python. Familiarity with AWS compute, storage, and IAM concepts. Experience in working with S3 Data Lake as the storage tier. Any ETL background (Talend, AWS Glue, etc.) is a plus but not required. Cloud warehouse experience (Snowflake, etc.) is a huge plus. Carefully evaluates alternative risks and solutions before taking action. Optimizes the use of all available resources. Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit. Skills: Hands-on experience with Databricks, Spark SQL, and the AWS Cloud platform, especially S3, EMR, Databricks, Cloudera, etc.
Experience in shell scripting. Exceptionally strong analytical and problem-solving skills. Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses. Strong experience with relational databases and data access methods, especially SQL. Excellent collaboration and cross-functional leadership skills. Excellent communication skills, both written and verbal. Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment. Ability to leverage data assets to respond to complex questions that require timely answers. Has working knowledge of migrating relational and dimensional databases to the AWS Cloud platform. If interested, please share your resume to sankarspstaffings@gmail.com with the below inline details. Over All Exp : Relevant Exp : Current CTC : Expected CTC : Notice Period :

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies