4.0 - 7.0 years
10 - 14 Lacs
Gurugram
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities

As a Senior Data Engineering Analyst, you will be instrumental in driving our data initiatives and enhancing our data infrastructure to support strategic decision-making and business operations. You will lead the design, development, and optimization of complex data pipelines and architectures, ensuring the efficient collection, storage, and processing of large volumes of data from diverse sources. Leveraging your advanced expertise in data modeling and database management, you will ensure that our data systems are scalable, reliable, and optimized for high performance.

A core aspect of your role will involve developing and maintaining robust ETL (Extract, Transform, Load) processes to facilitate seamless data integration and transformation, thereby supporting our analytics and reporting efforts. You will implement best practices in data warehousing and data lake management, organizing and structuring data to enable easy access and analysis for various stakeholders across the organization. Ensuring data quality and integrity will be paramount; you will establish and enforce rigorous data validation and cleansing procedures to maintain high standards of accuracy and consistency within our data repositories.

In collaboration with cross-functional teams, including data scientists, business analysts, and IT professionals, you will gather and understand their data requirements, delivering tailored technical solutions that align with business objectives. Your ability to communicate complex technical concepts to non-technical stakeholders will be essential in fostering collaboration and ensuring alignment across departments. Additionally, you will mentor and provide guidance to junior data engineers and analysts, promoting a culture of continuous learning and professional growth within the data engineering team.

You will take a proactive role in performance tuning and optimization of our data systems, identifying and resolving bottlenecks to enhance efficiency and reduce latency. Staying abreast of the latest advancements in data engineering technologies and methodologies, you will recommend and implement innovative solutions that drive our data capabilities forward. Your strategic input will be invaluable in planning and executing data migration and integration projects, ensuring seamless transitions between systems with minimal disruption to operations.

Maintaining comprehensive documentation of data processes, architectural designs, and technical specifications will be a key responsibility, supporting knowledge sharing and maintaining organizational standards. You will generate detailed reports on data quality, system performance, and the effectiveness of data engineering initiatives, providing valuable insights to inform strategic decisions.
Additionally, you will oversee data governance protocols, ensuring compliance with relevant data protection regulations and industry standards, thereby safeguarding the integrity and security of our data assets. Your leadership and expertise will contribute significantly to the enhancement of our data infrastructure, enabling the organization to leverage data-driven insights for sustained growth and competitive advantage. By fostering innovation, ensuring data excellence, and promoting best practices, you will play a critical role in advancing our data engineering capabilities and supporting the overall success of the business.

Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field
- Experience: 5+ years in data engineering, data analysis, or a similar role with a proven track record
- Technical skills:
  - Advanced proficiency in SQL and experience with relational databases (Oracle, MySQL, SQL Server)
  - Expertise in ETL processes and tools
  - Solid understanding of data modeling, data warehousing, and data lake architectures
  - Proficiency in programming languages such as Python or Java
  - Familiarity with cloud platforms (Azure) and their data services
  - Knowledge of data governance principles and data protection regulations (GDPR, HIPAA, CCPA)
- Soft skills:
  - Proven excellent analytical and problem-solving abilities
  - Solid communication and collaboration skills
  - Leadership experience and the ability to mentor junior team members
  - Proven proactive mindset with a commitment to continuous learning and improvement

Preferred Qualifications
- Relevant certifications
- Experience with version control systems (Git)

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
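For candidates wanting to picture the validation and cleansing work this role describes, here is a minimal pandas sketch; the schema, column names and rules are hypothetical, not Optum's actual pipeline:

```python
import pandas as pd

def validate_and_clean(df: pd.DataFrame) -> pd.DataFrame:
    """Apply basic quality checks and cleansing rules before loading."""
    # Reject the batch outright if a critical column is missing.
    required = {"member_id", "claim_date", "amount"}  # hypothetical schema
    missing = required - set(df.columns)
    if missing:
        raise ValueError(f"Missing required columns: {missing}")

    # Drop exact duplicates and rows without a primary identifier.
    df = df.drop_duplicates().dropna(subset=["member_id"])

    # Standardize types; coerce bad values to NaT/NaN so they can be audited.
    df["claim_date"] = pd.to_datetime(df["claim_date"], errors="coerce")
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")

    # Flag (rather than silently drop) rows that fail business rules.
    df["is_valid"] = df["claim_date"].notna() & (df["amount"] >= 0)
    return df

if __name__ == "__main__":
    raw = pd.DataFrame({
        "member_id": ["A1", "A1", None],
        "claim_date": ["2024-01-05", "2024-01-05", "bad-date"],
        "amount": [120.5, 120.5, -3.0],
    })
    print(validate_and_clean(raw))
```

Flagging invalid rows instead of dropping them keeps an audit trail, which matches the posting's emphasis on data quality reporting.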
Posted 1 month ago
3.0 - 7.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Overall Responsibilities:
- Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
- Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
- Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
- Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing runtime of ETL processes.
- Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
- Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem.
- Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes.
- Collaboration: Work closely with other data engineers, analysts, product managers, and other stakeholders to understand data requirements and support various data-driven initiatives.
- Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations.

Category-wise Technical Skills:
- PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
- Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
- Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
- Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
- Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
- Scripting and Automation: Strong scripting skills in Linux.

Experience:
- 3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.
- Proven track record of implementing data engineering best practices.
- Experience in data ingestion, transformation, and optimization on the Cloudera Data Platform.

Day-to-Day Activities:
- Design, develop, and maintain ETL pipelines using PySpark on CDP.
- Implement and manage data ingestion processes from various sources.
- Process, cleanse, and transform large datasets using PySpark.
- Conduct performance tuning and optimization of ETL processes.
- Implement data quality checks and validation routines.
- Automate data workflows using orchestration tools.
- Monitor pipeline performance and troubleshoot issues.
- Collaborate with team members to understand data requirements.
- Maintain documentation of data engineering processes and configurations.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- Relevant certifications in PySpark and Cloudera technologies are a plus.

Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent verbal and written communication abilities.
- Ability to work independently and collaboratively in a team environment.
- Attention to detail and commitment to data quality.
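For illustration, a minimal PySpark ETL skeleton of the kind this role describes — ingest raw files, transform, and load into a Hive-backed curated table. The paths, job name and table name are hypothetical; a real CDP deployment would take its configuration from Cloudera Manager:

```python
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("daily_orders_etl")   # hypothetical job name
    .enableHiveSupport()           # assumes a Hive metastore is available
    .getOrCreate()
)

# Ingest: raw CSV landed on HDFS by an upstream process.
raw = spark.read.option("header", True).csv("hdfs:///data/landing/orders/")

# Transform: cleanse types, derive columns, filter bad records.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write partitioned Parquet into the curated zone via Hive.
(orders.write
       .mode("overwrite")
       .partitionBy("order_date")
       .format("parquet")
       .saveAsTable("curated.orders"))
```

Partitioning by date keeps Hive/Impala queries pruned to the days they touch, which is the usual first lever in the performance tuning the posting mentions.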
Posted 1 month ago
5.0 - 8.0 years
27 - 42 Lacs
Bengaluru
Work from Office
Years of Exp: 5-12 Yrs
Location: PAN India

OFSAA Data Modeler
- Experience in designing, building and customizing the OFSAA data model, and validating the data model.
- Excellent knowledge of data model guidelines for Staging, Processing and Reporting tables.
- Knowledge of data model support for configuring UDPs, and subtype and supertype relationship enhancements.
- Experience on the OFSAA platform (OFSAAI) with one or more of the following OFSAA modules:
  - OFSAA Financial Services Data Foundation (Preferred)
  - OFSAA Data Integration Hub (Optional)
- Good in SQL and PL/SQL.
- Strong in data warehouse principles and ETL/data flow tools.
- Should have excellent analytical and communication skills.

OFSAA Integration SME - DIH/Batch run framework
- Experience in ETL processes; familiar with OFSAA.
- DIH setup: EDS, EDD, T2T, etc.
- Familiar with different seeded tables, SCDs, DIMs, hierarchies, lookups, etc.
- Worked with FSDF, knowing the STG, CSA and FACT table structures.
- Experience working with different APIs, out-of-the-box connectors, etc.
- Familiar with Oracle patching and SRs.
Posted 1 month ago
7.0 - 12.0 years
10 - 18 Lacs
Bengaluru
Hybrid
Job Goals
- Design and implement resilient data pipelines to ensure data reliability, accuracy, and performance.
- Collaborate with cross-functional teams to maintain the quality of production services and smoothly integrate data processes.
- Oversee the implementation of common data models and data transformation pipelines, ensuring alignment to standards.
- Drive continuous improvement in internal data frameworks and support the hiring process for new Data Engineers.
- Regularly engage with collaborators to discuss considerations and manage the impact of changes.
- Support architects in shaping the future of the data platform and help land new capabilities into business-as-usual operations.
- Identify relevant emerging trends and build compelling cases for adoption, such as tool selection.

Ideal Skills & Capabilities
- A minimum of 6 years of experience in a comparable Data Engineer position is required.
- Data Engineering Expertise: Proficiency in designing and implementing resilient data pipelines, ensuring data reliability, accuracy, and performance, with practical knowledge of modern cloud data technology stacks (Azure).
- Technical Proficiency: Experience with Azure Data Factory and Databricks, and skilled in Python, Apache Spark, or other distributed data programming frameworks.
- Operational Knowledge: In-depth understanding of data concepts, data structures, modelling techniques, and provisioning data to support varying consumption needs, along with accomplished ETL/ELT engineering skills.
- Automation & DevOps: Experience using DevOps toolchains for managing CI/CD and an automation-first mindset in building solutions, including self-healing and fault-tolerant methods (see the sketch below).
- Data Management Principles: Practical application of data management principles such as security and data privacy, with experience handling sensitive data through techniques like anonymisation/tokenisation/pseudo-anonymisation.
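One way to read the "self-healing and fault-tolerant" expectation above: transient failures in pipeline steps are retried with backoff instead of failing the whole run. A small, library-free sketch; the task, attempt limits and delays are illustrative, not a prescribed implementation:

```python
import logging
import random
import time

def run_with_retries(task, max_attempts=4, base_delay=2.0):
    """Retry a flaky pipeline step with exponential backoff and jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception:
            if attempt == max_attempts:
                logging.exception("Task failed after %d attempts", attempt)
                raise
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 1)
            logging.warning("Attempt %d failed; retrying in %.1fs", attempt, delay)
            time.sleep(delay)

def copy_source_to_lake():
    # Placeholder for an ingestion step (e.g., an ADF-triggered copy
    # or a Databricks job invocation via its REST API).
    if random.random() < 0.5:
        raise ConnectionError("transient source outage")
    return "copied"

print(run_with_retries(copy_source_to_lake))
```

In practice the orchestrator (ADF, Databricks Workflows) offers built-in retry policies; the point is the automation-first habit of designing for failure.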
Posted 1 month ago
5.0 - 10.0 years
13 - 23 Lacs
Pune, Bengaluru, Delhi / NCR
Hybrid
Role & Responsibilities (prefers immediate joiners; please complete the mandatory questions below to proceed further)

Job Summary: The Azure Data Engineer (Standard) is a senior-level role responsible for designing and implementing complex data processing solutions on the Azure platform. They work with other data engineers and architects to develop scalable, reliable, and efficient data pipelines that meet business requirements.

Core Skills:
- Proficiency in Azure data services such as Azure SQL Database, Azure Cosmos DB, and Azure Data Lake Storage.
- Experience with ETL (Extract, Transform, Load) processes and data integration.
- Strong SQL and database querying skills.
- Familiarity with data modeling and database design.
Posted 1 month ago
4.0 - 9.0 years
10 - 20 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
JD:
- Good experience in Apache Iceberg, Apache Spark and Trino
- Proficiency in SQL and data modeling
- Experience with an open Data Lakehouse using Apache Iceberg
- Experience with Data Lakehouse architecture built on Apache Iceberg and Trino
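As a rough sketch of what "open Data Lakehouse with Apache Iceberg" work looks like from the Spark side — assuming the Iceberg Spark runtime is on the classpath and a catalog named `demo` is configured; the namespace, table and data are invented. Trino would query the same table through its Iceberg connector:

```python
from pyspark.sql import SparkSession

# Assumes spark.sql.catalog.demo is configured as an Iceberg catalog
# (e.g., via --packages org.apache.iceberg:iceberg-spark-runtime-...).
spark = SparkSession.builder.appName("iceberg_demo").getOrCreate()

spark.sql("CREATE NAMESPACE IF NOT EXISTS demo.sales")

spark.sql("""
    CREATE TABLE IF NOT EXISTS demo.sales.events (
        event_id BIGINT,
        event_ts TIMESTAMP,
        amount   DOUBLE
    ) USING iceberg
    PARTITIONED BY (days(event_ts))
""")

spark.sql("""
    INSERT INTO demo.sales.events
    VALUES (1, TIMESTAMP '2024-06-01 10:00:00', 99.9)
""")

# Iceberg keeps snapshot history, enabling time travel from Spark or Trino.
spark.sql("SELECT * FROM demo.sales.events.snapshots").show(truncate=False)
```

The hidden partition transform (`days(event_ts)`) is one of the features that distinguishes Iceberg from plain Hive-style partitioning.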
Posted 1 month ago
4.0 - 5.0 years
10 - 14 Lacs
Bengaluru
Work from Office
About the Role
- We are seeking a highly skilled and experienced Senior Data Scientist to join our data science team.
- As a Senior Data Scientist, you will play a critical role in driving data-driven decision making across the organization by developing and implementing advanced analytical solutions.
- You will leverage your expertise in data science, machine learning, and statistical analysis to uncover insights, build predictive models, and solve complex business challenges.

Key Responsibilities
- Develop and implement statistical and machine learning models (e.g., regression, classification, clustering, time series analysis) to address business problems.
- Analyze large and complex datasets to identify trends, patterns, and anomalies.
- Develop predictive models for forecasting, churn prediction, customer segmentation, and other business outcomes.
- Conduct A/B testing and other experiments to optimize business decisions.
- Communicate data insights effectively through visualizations, dashboards, and presentations.
- Develop and maintain interactive data dashboards and reports.
- Present findings and recommendations to stakeholders in a clear and concise manner.
- Work with data engineers to design and implement data pipelines and data warehousing solutions.
- Ensure data quality and integrity throughout the data lifecycle.
- Develop and maintain data pipelines for data ingestion, transformation, and loading.
- Stay up-to-date with the latest advancements in data science, machine learning, and artificial intelligence.
- Research and evaluate new technologies and tools to improve data analysis and modeling capabilities.
- Explore and implement new data science techniques and methodologies.
- Collaborate effectively with data engineers, business analysts, product managers, and other stakeholders.
- Communicate technical information clearly and concisely to both technical and non-technical audiences.

Qualifications (Essential)
- 4+ years of experience as a Data Scientist or in a related data science role.
- Strong proficiency in statistical analysis, machine learning algorithms, and data mining techniques.
- Experience with programming languages like Python (with libraries like scikit-learn, pandas, NumPy) or R.
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Experience with data warehousing and data lake technologies.
- Excellent analytical, problem-solving, and communication skills.
- Master's degree in Statistics, Mathematics, Computer Science, or a related field.
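To ground the churn-prediction responsibility above, a compact scikit-learn sketch on synthetic data; the features, labels and generating rule are fabricated purely for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000

# Synthetic customer features: tenure (months), monthly spend, support tickets.
X = np.column_stack([
    rng.integers(1, 72, n),
    rng.normal(60, 20, n),
    rng.poisson(1.5, n),
])
# Fabricated rule: short tenure + many tickets -> higher churn probability.
churn_prob = 1 / (1 + np.exp(0.05 * X[:, 0] - 0.8 * X[:, 2]))
y = rng.random(n) < churn_prob

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Holdout ROC-AUC: {auc:.3f}")
```

Reporting ROC-AUC on a stratified holdout rather than raw accuracy matters for churn, where the positive class is usually the minority.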
Posted 1 month ago
5.0 - 6.0 years
8 - 13 Lacs
Hyderabad
Work from Office
About the Role
- We are seeking a highly skilled and experienced Senior Azure Databricks Engineer to join our dynamic data engineering team.
- As a Senior Azure Databricks Engineer, you will play a critical role in designing, developing, and implementing data solutions on the Azure Databricks platform.
- You will be responsible for building and maintaining high-performance data pipelines, transforming raw data into valuable insights, and ensuring data quality and reliability.

Key Responsibilities
- Design, develop, and implement data pipelines and ETL/ELT processes using Azure Databricks.
- Develop and optimize Spark applications using Scala or Python for data ingestion, transformation, and analysis.
- Leverage Delta Lake for data versioning, ACID transactions, and data sharing.
- Utilize Delta Live Tables for building robust and reliable data pipelines.
- Design and implement data models for data warehousing and data lakes.
- Optimize data structures and schemas for performance and query efficiency.
- Ensure data quality and integrity throughout the data lifecycle.
- Integrate Azure Databricks with other Azure services (e.g., Azure Data Factory, Azure Synapse Analytics, Azure Blob Storage).
- Leverage cloud-based data services to enhance data processing and analysis capabilities.

Performance Optimization & Troubleshooting
- Monitor and analyze data pipeline performance.
- Identify and troubleshoot performance bottlenecks.
- Optimize data processing jobs for speed and efficiency.
- Collaborate effectively with data engineers, data scientists, data analysts, and other stakeholders.
- Communicate technical information clearly and concisely.
- Participate in code reviews and contribute to the improvement of development processes.

Qualifications (Essential)
- 5+ years of experience in data engineering, with at least 2 years of hands-on experience with Azure Databricks.
- Strong proficiency in Python and SQL.
- Expertise in Apache Spark and its core concepts (RDDs, DataFrames, Datasets).
- In-depth knowledge of Delta Lake and its features (e.g., ACID transactions, time travel).
- Experience with data warehousing concepts and ETL/ELT processes.
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal skills.
- Bachelor's degree in Computer Science, Computer Engineering, or a related field.
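A small Delta Lake illustration of the versioning and time travel called out above. It assumes a Databricks cluster or a local Spark session configured with the delta-spark package; the path and data are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta_demo").getOrCreate()
path = "/tmp/delta/customers"  # hypothetical location

# Version 0: initial load.
spark.createDataFrame(
    [(1, "active"), (2, "active")], ["customer_id", "status"]
).write.format("delta").mode("overwrite").save(path)

# Version 1: an ACID overwrite; readers never see a half-written table.
spark.createDataFrame(
    [(1, "churned"), (2, "active"), (3, "active")], ["customer_id", "status"]
).write.format("delta").mode("overwrite").save(path)

# Time travel: reproduce what the table looked like at version 0.
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
v0.show()
```

The transaction log is what makes both properties work: every write is a new atomic version, and old versions stay queryable until vacuumed.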
Posted 1 month ago
8.0 - 11.0 years
35 - 37 Lacs
Kolkata, Ahmedabad, Bengaluru
Work from Office
Dear Candidate,

We are hiring a Data Platform Engineer to build scalable infrastructure for data ingestion, processing, and analysis.

Key Responsibilities:
- Architect distributed data systems.
- Enable data discoverability and quality.
- Develop data tooling and platform APIs.

Required Skills & Qualifications:
- Experience with Spark, Kafka, and Delta Lake.
- Proficiency in Python, Scala, or Java.
- Familiar with cloud-based data platforms.

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa Reddy
Delivery Manager
Integra Technologies
Posted 1 month ago
6.0 - 11.0 years
20 - 25 Lacs
Noida
Work from Office
Position Overview: Working with the Finance Systems Manager, the role will ensure that the ERP system is available and fit for purpose. The ERP Systems Developer will develop the ERP system, provide comprehensive day-to-day support and training, and evolve the current ERP system for the future.

Key Responsibilities:
- As a Sr. DW BI Developer, participate in the design, development, customization and maintenance of software applications.
- Analyse the different applications/products, and design and implement the DW using best practices.
- Rich data governance experience: data security, data quality, provenance / lineage.
- Maintain a close working relationship with the other application stakeholders.
- Experience of developing secure and high-performance web applications.
- Knowledge of software development life-cycle methodologies, e.g. Iterative, Waterfall, Agile, etc.
- Design and architect future releases of the platform.
- Participate in troubleshooting application issues.
- Work jointly with other teams and partners handling different aspects of the platform creation.
- Track advancements in software development technologies and apply them judiciously in the solution roadmap.
- Ensure all quality controls and processes are adhered to.
- Plan the major and minor releases of the solution; ensure robust configuration management.
- Work closely with the Engineering Manager on different aspects of product lifecycle management.
- Demonstrate the ability to independently work in a fast-paced environment requiring multitasking and efficient time management.

Required Skills and Qualifications:
- End-to-end lifecycle of data warehousing, data lakes and reporting.
- Experience with maintaining/managing data warehouses.
- Responsible for the design and development of large, scaled-out, real-time, high-performing Data Lake / Data Warehouse systems (including Big Data and Cloud).
- Strong SQL and analytical skills; proficiency in writing and debugging complex SQLs.
- Experience in Power BI, Tableau, QlikView, Qlik Sense, etc.
- Experience in Microsoft Azure services; experience in developing and supporting ADF pipelines.
- Experience in Azure SQL Server / Databricks / Azure Analysis Services; experience in developing tabular models.
- Experience in working with APIs.
- Minimum 2 years of experience in a similar role; 2-6 years of total experience in building DW/BI systems.
- Experience with data warehousing and data modelling; experience with ETL and working with large-scale datasets.
- Prior experience working with global clients.
- Hands-on experience with Kafka, Flink, Spark, Snowflake, Airflow, NiFi, Oozie, Pig, Hive, Impala and Sqoop.
- Storage such as HDFS, object storage (S3 etc.), RDBMS, MPP and NoSQL DBs.
- Experience with distributed data management and data failover, including databases (relational, NoSQL, Big Data), data analysis, data processing, data transformation, high availability, and scalability.
- Experience in end-to-end project implementation in the Cloud (Azure / AWS / GCP) as a DW BI Developer.
- Understanding of industry trends and products in DataOps, continuous intelligence, augmented analytics, and AI/ML.
Posted 1 month ago
10.0 - 16.0 years
25 - 27 Lacs
Chennai
Work from Office
We at Dexian India are looking to hire a Cloud Data PM with over 10 years of hands-on experience in AWS/Azure, DWH, and ETL. The role is based in Chennai with a shift from 2.00pm to 11.00pm IST.

Key qualifications we seek in candidates include:
- Solid understanding of SQL and data modeling
- Proficiency in DWH architecture, including EDW/DM concepts and Star/Snowflake schema
- Experience in designing and building data pipelines on the Azure Cloud stack
- Familiarity with Azure Data Explorer, Data Factory, Databricks, Synapse Analytics, Azure Fabric, Azure Analysis Services, and Azure SQL Data Warehouse
- Knowledge of Azure DevOps and CI/CD pipelines
- Previous experience managing scrum teams and working as a Scrum Master or Project Manager on at least 2 projects
- Exposure to on-premise transactional database environments like Oracle, SQL Server, Snowflake, MySQL, and/or Postgres
- Ability to lead enterprise data strategies, including data lake delivery
- Proficiency in data visualization tools such as Power BI or Tableau, and statistical analysis using R or Python
- Strong problem-solving skills with a track record of deriving business insights from large datasets
- Excellent communication skills and the ability to provide strategic direction to technical and business teams
- Prior experience in presales, RFP and RFI responses, and proposal writing is mandatory
- Capability to explain complex data solutions clearly to senior management
- Experience in implementing, managing, and supporting data warehouse projects or applications
- Track record of leading full-cycle implementation projects related to Business Intelligence
- Strong team and stakeholder management skills
- Attention to detail, accuracy, and ability to meet tight deadlines
- Knowledge of application development, APIs, Microservices, and Integration components

Tools & Technology Experience Required:
- Strong hands-on experience in SQL or PL/SQL
- Proficiency in Python
- SSIS or Informatica (one of the tools is mandatory)
- BI: Power BI or Tableau (one of the tools is mandatory)
Posted 1 month ago
2.0 - 5.0 years
2 - 4 Lacs
Mumbai, Mumbai Suburban, Mumbai (All Areas)
Work from Office
Role & Responsibilities
- 3 to 4+ years of hands-on experience in SQL database design, data architecture, ETL, Data Warehousing, Data Mart, Data Lake, Big Data, Cloud and Data Governance domains.
- Take ownership of the technical aspects of implementing data pipeline & migration requirements, ensuring that the platform is being used to its fullest potential through designing and building applications around business stakeholder needs.
- Interface directly with stakeholders to gather requirements and own the automated end-to-end data engineering solutions.
- Implement data pipelines to automate the ingestion, transformation, and augmentation of structured, unstructured and real-time data, and provide best practices for pipeline operations.
- Troubleshoot and remediate data quality issues raised by pipeline alerts or downstream consumers; implement Data Governance best practices.
- Create and maintain clear documentation on data models/schemas as well as transformation/validation rules.
- Implement tools that help data consumers extract, analyze, and visualize data faster through data pipelines.
- Implement data security, privacy, and compliance protocols to ensure safe data handling in line with regulatory requirements.
- Optimize data workflows and queries to ensure low latency, high throughput, and cost efficiency.
- Lead the entire software lifecycle including hands-on development, code reviews, testing, deployment, and documentation for batch ETLs.
- Work directly with our internal product/technical teams to ensure that our technology infrastructure is seamlessly and effectively integrated.
- Migrate current data applications & pipelines to the Cloud, leveraging new technologies in future.

Preferred Candidate Profile
- Graduate with an Engineering Degree (CS/Electronics/IT) / MCA / MCS or equivalent with substantial data engineering experience.
- 3+ years of recent hands-on experience with a modern programming language (Scala, Python, Java) is required; Spark/PySpark is preferred.
- Experience with configuration management and version control apps (e.g., Git) and experience working within a CI/CD framework is a plus.
- 3+ years of recent hands-on SQL programming experience in a Big Data environment is required.
- Working knowledge of PostgreSQL, RDBMS, NoSQL and columnar databases.
- Experience developing and maintaining ETL applications and data pipelines using big data technologies is required; Apache Kafka, Spark and Airflow experience is a must (see the orchestration sketch below).
- Knowledge of API and microservice integration with applications.
- Experience with containerization (e.g., Docker) and orchestration (e.g., Kubernetes).
- Experience building data solutions for Power BI and web visualization applications.
- Experience with Cloud is a plus.
- Experience in managing multiple projects and stakeholders with excellent communication and interpersonal skills.
- Ability to develop and organize high-quality documentation.
- Superior analytical skills and a strong sense of ownership in your work.
- Collaborate with data scientists on several projects; contribute to development and support of analytics including AI/ML.
- Ability to thrive in a fast-paced environment, and to manage multiple, competing priorities simultaneously.
- Prior Energy & Utilities industry experience is a big plus.

Experience (min-max in yrs.): 3+ years of core/relevant experience
Location: Mumbai (Onsite)
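Since the posting asks for Airflow alongside Kafka and Spark, here is a bare-bones Airflow 2.x DAG wiring ingest → transform → validate; the task bodies are stubs and the names are invented for illustration:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():      # e.g., consume a Kafka topic into the raw zone
    print("ingesting batch")

def transform():   # e.g., submit a Spark job against the raw data
    print("transforming batch")

def validate():    # e.g., run row-count and null-rate checks
    print("validating batch")

with DAG(
    dag_id="daily_ingest_pipeline",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="ingest", python_callable=ingest)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="validate", python_callable=validate)
    t1 >> t2 >> t3
```

The `>>` chaining is what turns individual scripts into the alert-driven, observable pipeline the responsibilities describe: a failed `validate` task blocks downstream consumers instead of silently shipping bad data.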
Posted 1 month ago
2.0 - 4.0 years
6 - 11 Lacs
Pune
Hybrid
What’s the role all about?
As a BI Developer, you’ll be a key contributor to developing Reports in a multi-region, multi-tenant SaaS product. You’ll collaborate with the core R&D team to build high-performance Reports to serve the use cases of several applications in the suite.

How will you make an impact?
- Take ownership of the software development lifecycle, including design, development, unit testing, and deployment, working closely with QA teams.
- Ensure that architectural concepts are consistently implemented across the product.
- Act as a product expert within R&D, understanding the product’s requirements and its market positioning.
- Work closely with cross-functional teams (Product Managers, Sales, Customer Support, and Services) to ensure successful product delivery.
- Design and build Reports for given requirements.
- Create design documents and test cases for the reports.
- Develop SQL to address ad-hoc report requirements and conduct analyses.
- Create visualizations and reports as per the requirements.
- Execute unit testing, functional & performance testing and document the results.
- Conduct peer reviews and ensure quality is met at all stages.

Have you got what it takes?
- Bachelor/Master of Engineering degree in Computer Science, Electronic Engineering or equivalent from a reputed institute.
- 2-4 years of BI report development experience.
- Expertise in SQL & any cloud-based databases; able to work with any DB to write SQL for any business need.
- Experience in any BI tools like Tableau, Power BI, MicroStrategy, etc.
- Experience working in an enterprise Data Warehouse / Data Lake system.
- Strong knowledge of analytical databases and schemas.
- Development experience building solutions that leverage SQL and NoSQL databases.
- Experience/knowledge of Snowflake is an advantage.
- In-depth understanding of database management systems, online analytical processing (OLAP) and the ETL (Extract, Transform, Load) framework.
- Experience working in functional testing, performance testing, etc.
- Experience with public cloud infrastructure and technologies such as AWS/Azure/GCP.
- Experience working in Continuous Integration and Delivery practices using industry-standard tools such as Jenkins.
- Experience working in an Agile methodology development environment and using work item management tools like JIRA.

What’s in it for you?
Join an ever-growing, market-disrupting, global company where the teams - comprised of the best of the best - work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr!

Enjoy NICE-FLEX!
At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Reporting into: Tech Manager
Role Type: Individual Contributor
Posted 1 month ago
6.0 - 11.0 years
15 - 30 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Warm Greetings from SP Staffing Services Private Limited!!

We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 6 - 15 Yrs
Location: Pan India

Job Description:
- Primarily looking for a data engineer with expertise in processing data pipelines using Databricks Spark SQL on Hadoop distributions like AWS EMR, Databricks, Cloudera, etc.
- Should be very proficient in doing large-scale data operations using Databricks and overall very comfortable using Python.
- Familiarity with AWS compute, storage and IAM concepts.
- Experience in working with S3 Data Lake as the storage tier.
- Any ETL background (Talend, AWS Glue, etc.) is a plus but not required.
- Cloud warehouse experience (Snowflake, etc.) is a huge plus.
- Carefully evaluates alternative risks and solutions before taking action; optimizes the use of all available resources.
- Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices and procedures of the corporation, department and business unit.

Skills:
- Hands-on experience with Databricks, Spark SQL and the AWS Cloud platform, especially S3, EMR, Databricks, Cloudera, etc.
- Experience in shell scripting.
- Exceptionally strong analytical and problem-solving skills.
- Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses.
- Strong experience with relational databases and data access methods, especially SQL.
- Excellent collaboration and cross-functional leadership skills.
- Excellent communication skills, both written and verbal.
- Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment.
- Ability to leverage data assets to respond to complex questions that require timely answers.
- Working knowledge of migrating relational and dimensional databases to the AWS Cloud platform.

Interested candidates can share their resume to sankarspstaffings@gmail.com with the below details inline:
Over All Exp :
Relevant Exp :
Current CTC :
Expected CTC :
Notice Period :
Posted 1 month ago
6.0 - 11.0 years
15 - 30 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Warm Greetings from SP Staffing Services Private Limited!!

We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 6 - 15 Yrs
Location: Pan India

Job Description:
- Candidate must be proficient in Databricks.
- Understands where to obtain the information needed to make appropriate decisions.
- Demonstrates ability to break down a problem into manageable pieces and implement effective, timely solutions.
- Identifies the problem versus the symptoms; manages problems that require the involvement of others to solve; reaches sound decisions quickly.
- Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices and procedures of the corporation, department and business unit.

Roles & Responsibilities:
- Provide innovative and cost-effective solutions using Databricks.
- Optimize the use of all available resources.
- Develop solutions to meet business needs that reflect a clear understanding of the objectives, practices and procedures of the corporation, department and business unit.
- Learn and adapt quickly to new technologies as per the business need.
- Develop a team of Operations Excellence, building tools and capabilities that the development teams leverage to maintain high levels of performance, scalability, security and availability.

Skills:
- The candidate must have 7-10 yrs of experience in Databricks Delta Lake.
- Hands-on experience on Azure.
- Experience in Python scripting.
- Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses.
- Strong experience with relational databases and data access methods, especially SQL.
- Knowledge of Azure architecture and design.

Interested candidates can share their resume to sankarspstaffings@gmail.com with the below details inline:
Over All Exp :
Relevant Exp :
Current CTC :
Expected CTC :
Notice Period :
Posted 1 month ago
12.0 - 16.0 years
35 - 60 Lacs
Pune
Work from Office
Job Summary
- 15+ years of experience as a Teamcenter Solution Architect with end-to-end solution implementation.
- Medical device experience will be an added advantage.
- Experience in Active Workspace customization.
- Experience in server-side customization.

Responsibilities
- Experience in BMIDE, workflows, ACLs, BOM management and configuration.
- Experience in Teamcenter integration (T4S, T4EA, SOA, SOAP and REST services, etc.).
- Experience in automation testing for Teamcenter.
- Experience in data lakes and reports (microservices).
- Expertise in ITK, RAC and AWC customization.
- In-depth knowledge of the Teamcenter data model.
- Experience in document management, BOM management, configuration management and change management.
- Familiarity with developing workflow handlers, REST APIs and SOA.

Certifications Required
- Certification in Teamcenter will be an added advantage.
Posted 2 months ago
8.0 - 11.0 years
35 - 37 Lacs
Kolkata, Ahmedabad, Bengaluru
Work from Office
Dear Candidate,

Seeking a Cloud Monitoring Specialist to set up observability and real-time monitoring in cloud environments.

Key Responsibilities:
- Configure logging and metrics collection.
- Set up alerts and dashboards using Grafana, Prometheus, etc.
- Optimize system visibility for performance and security.

Required Skills & Qualifications:
- Familiar with the ELK stack, Datadog, New Relic, or cloud-native monitoring tools.
- Strong troubleshooting and root cause analysis skills.
- Knowledge of distributed systems.

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager
Integra Technologies
Posted 2 months ago
5.0 - 10.0 years
15 - 25 Lacs
Pune, Ahmedabad
Work from Office
Job Role: Data Engineer - Senior

About the Job: We are looking for a highly skilled Data Engineer with experience working on complex software projects and strong problem-solving skills.

Role Overview: As a Senior Data Engineer, you manage and develop the solutions in close alignment with various business and Spoke stakeholders. You are responsible for the implementation of the IT governance guidelines.

Tasks:
- Create and manage data pipeline architecture for data ingestion, pipeline setup and data curation.
- Work with and create cloud data solutions.
- Assemble large, complex data sets that meet functional/non-functional business requirements.
- Implement the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using PySpark, SQL and AWS big data technologies.
- Build analytics tools that use the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Manipulate data at scale: getting data into a ready-to-use state in close alignment with various business and Spoke stakeholders.

What We Offer: CAXSOL is committed to fostering talent and offering exceptional career development opportunities in an innovative and collaborative environment. Our benefits package reflects our dedication to our team's well-being, providing competitive compensation and a supportive work culture. Interested in more? We can't wait to hear from you!
Posted 2 months ago
10.0 - 17.0 years
35 - 45 Lacs
Bengaluru
Work from Office
Dear Candidate,

Greetings!! Please find the details below.

Location: Bangalore
Notice period: 0-30 days
Experience: 10-15 Years
Mode of work: Work from Office

Database Architect (India):
- Deep database expertise; expert in architecting data lake solutions.
- Deep knowledge in designing and architecting database systems.
- Hands-on with Apache services like Arrow, Parquet, Spark, etc.
- Hands-on experience with databases (Postgres, Redis) and Apache services (Parquet, Arrow, etc.).
- Deep data lake expertise.
- Long-term maintenance and support of this DB system.

Mandatory skills: Data lake & Apache Arrow

Questions to check and answer when sharing your profile:
- Do you have 7-8 years of architecting & designing DB solutions?
- Have you designed and implemented a data lake architecture?
- Do you have hands-on experience using Apache Parquet for data storage optimization?
- Have you ever used Apache Arrow for in-memory data processing or analytics acceleration?
- Do you have experience in implementing data ingestion pipelines, either batch or streaming?
- Have you been responsible for performance tuning in large-scale data lake environments (e.g., partitioning, file sizing)?
Posted 2 months ago
4.0 - 5.0 years
3 - 7 Lacs
Mumbai, Pune, Chennai
Work from Office
Job Category: IT
Job Type: Full Time
Job Location: Bangalore / Chennai / Mumbai / Pune
Experience: 4 to 5 Years

JD: Azure Data Engineer with QA
- Must have: Azure Databricks, Azure Data Factory, Spark SQL.
- 4-5 years of development experience in Azure Databricks.
- Strong experience in SQL along with performing Azure Databricks quality assurance.
- Understand complex data systems by working closely with engineering and product teams.
- Develop scalable and maintainable applications to extract, transform, and load data in various formats to SQL Server, Hadoop Data Lake or other data storage locations.

Kind note: Please apply or share your resume only if it matches the above criteria.
Posted 2 months ago
3.0 - 5.0 years
10 - 15 Lacs
Pune
Work from Office
About the Role: Data Engineer

Core Responsibilities:
- The candidate is expected to lead one of the key analytics areas end-to-end. This is a pure hands-on role.
- Ensure the solutions built meet the required best practices and coding standards.
- Ability to adapt to any new technology if the situation demands.
- Requirement gathering with the business and getting this prioritized in the sprint cycle.
- Should be able to take end-to-end responsibility of the assigned task.
- Ensure quality and timely delivery.

Preference and Experience:
- Strong at PySpark, Python, and Java fundamentals.
- Good understanding of data structures.
- Good at SQL queries/optimization.
- Strong fundamentals of OOP programming.
- Good understanding of AWS Cloud and Big Data.
- Nice to have: Data Lake, AWS Glue, Athena, S3, Kinesis, SQL/NoSQL DB (see the Athena sketch below).

Academic Qualifications:
- Must be a technical graduate, B.Tech / M.Tech - Tier 1/2 colleges.
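For the AWS side mentioned above (Athena over S3), a short boto3 sketch of submitting a query and polling its state; the region, database, table and bucket are placeholders, and the script assumes AWS credentials are configured:

```python
import time

import boto3

athena = boto3.client("athena", region_name="ap-south-1")  # placeholder region

resp = athena.start_query_execution(
    QueryString="SELECT order_date, count(*) FROM orders GROUP BY 1",
    QueryExecutionContext={"Database": "analytics"},            # placeholder
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
qid = resp["QueryExecutionId"]

# Poll until the query finishes (a production job would add a timeout).
while True:
    status = athena.get_query_execution(QueryExecutionId=qid)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

print(state, "- results written under s3://my-athena-results/")
```

Athena is asynchronous by design: the query runs server-side against S3 data and the client only tracks the execution ID, which is why the poll loop (or an Airflow sensor) is the standard pattern.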
Posted 2 months ago
2.0 - 4.0 years
10 - 14 Lacs
Hyderabad
Work from Office
Overview
As a data engineering lead, you will be the key technical expert overseeing PepsiCo's data product build & operations and drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be empowered to create & lead a strong team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company.

Responsibilities
- Act as a subject matter expert across different digital projects.
- Oversee work with internal clients and external partners to structure and store data into unified taxonomies and link them together with standard identifiers.
- Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products.
- Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality and performance.
- Responsible for implementing best practices around systems integration, security, performance, and data management.
- Empower the business by creating value through the increased adoption of data, data science and the business intelligence landscape.
- Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions.
- Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners.
- Develop and optimize procedures to productionalize data science models.
- Define and manage SLAs for data products and processes running in production.
- Support large-scale experimentation done by data scientists.
- Prototype new approaches and build solutions at scale.
- Research state-of-the-art methodologies.
- Create documentation for learnings and knowledge transfer.
- Create and audit reusable packages or libraries.

Qualifications
- 7+ years of overall technology experience that includes at least 5+ years of hands-on software development, data engineering, and systems architecture.
- 4+ years of experience with Data Lake infrastructure, data warehousing, and data analytics tools.
- 4+ years of experience in SQL optimization and performance tuning, and development experience in programming languages like Python, PySpark, Scala, etc.
- 2+ years of cloud data engineering experience in Azure; fluent with Azure cloud services. Azure Certification is a plus.
- Experience in Azure Log Analytics.
- Experience with integration of multi-cloud services with on-premises technologies.
- Experience with data modelling, data warehousing, and building high-volume ETL/ELT pipelines.
- Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations (see the sketch below).
- Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
- Experience with at least one MPP database technology such as Redshift, Synapse or Snowflake.
- Experience with running and scaling applications on cloud infrastructure and containerized services like Kubernetes.
- Experience with version control systems like GitHub and deployment & CI tools.
- Experience with Azure Data Factory, Azure Databricks and Azure Machine Learning tools.
- Experience with statistical/ML techniques is a plus.
- Experience with building solutions in the retail or supply chain space is a plus.
- Understanding of metadata management, data lineage, and data glossaries is a plus.
- Working knowledge of agile development, including DevOps and DataOps concepts.
- Familiarity with business intelligence tools (such as Power BI).
- B.Tech/BA/BS in Computer Science, Math, Physics, or other technical fields.
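To make the data-quality requirement concrete, a minimal PySpark sketch of the kinds of checks tools like Deequ or Great Expectations formalize; the columns, thresholds and sample data are illustrative only:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()

# Inline sample standing in for a real curated dataset.
df = spark.createDataFrame(
    [(1, "NYC", 10), (2, "DEL", 5), (3, None, 7)],
    ["shipment_id", "destination", "quantity"],
)

total = df.count()
checks = {
    # Primary key must be present and unique.
    "pk_not_null": df.filter(F.col("shipment_id").isNull()).count() == 0,
    "pk_unique": df.select("shipment_id").distinct().count() == total,
    # Completeness: allow at most 50% missing destinations (toy threshold).
    "dest_complete": df.filter(F.col("destination").isNull()).count() <= total * 0.5,
    # Sanity: quantities must be positive.
    "qty_positive": df.filter(F.col("quantity") <= 0).count() == 0,
}

failed = [name for name, ok in checks.items() if not ok]
if failed:
    # In production this would page the on-call and halt downstream tasks.
    raise AssertionError(f"Data quality checks failed: {failed}")
print(f"All {len(checks)} checks passed on {total} rows")
```

The dedicated tools add what this sketch lacks: declarative rule definitions, metric history, and anomaly detection over time rather than fixed thresholds.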
Posted 2 months ago
12.0 - 20.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Job Title: Senior Software Engineer
Experience: 12-20 Years
Location: Bangalore

Requirements:
- Strong knowledge & hands-on experience in AWS Databricks.
- Nice to have: worked in the HP ecosystem (FDL architecture).
- Technically strong, to help the team on any technical issues they face during execution; owns the end-to-end technical deliverables.
- Hands-on Databricks + SQL knowledge.
- Experience in AWS S3, Redshift, EC2 and Lambda services.
- Extensive experience in developing and deploying Big Data pipelines.
- Experience in Azure Data Lake.
- Strong hands-on SQL development / Azure SQL, and in-depth understanding of optimization and tuning techniques in SQL with Redshift.
- Development in notebooks (like Jupyter, Databricks, Zeppelin, etc.).
- Development experience in Spark.
- Experience in a scripting language like Python and any other programming language.

Roles and Responsibilities:
- Candidate must have hands-on experience in AWS Databricks.
- Good development experience using Python/Scala, Spark SQL and DataFrames.
- Hands-on experience with Databricks and Data Lake, and SQL knowledge, is a must.
- Performance tuning, troubleshooting, and debugging Spark.

Process Skills: Agile - Scrum
Qualification: Bachelor of Engineering (Computer background preferred)
Posted 2 months ago
2.0 - 5.0 years
3 - 7 Lacs
Pune
Work from Office
Capgemini Invent
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design, to help CxOs envision and build what's next for their businesses.

Your role
- Use design thinking and a consultative approach to conceive cutting-edge technology solutions for business problems, mining core insights as a service model.
- Engage with project activities across the information lifecycle, often related to paradigms like: building & managing business data lakes and ingesting data streams to prepare data; developing machine learning and predictive models to analyse data; visualizing data; empowering information consumers with agile data models that enable self-service BI; specializing in business models and architectures across various industry verticals.
- Participate in business requirements / functional specification definition, scope management, data analysis and design, in collaboration with both business stakeholders and IT teams.
- Document detailed business requirements, develop solution design and specifications.
- Support and coordinate system implementations through the project lifecycle, working with other teams on a local and global basis.
- Work closely with the solutions architecture team to define the target detailed solution to deliver the business requirements.

Your profile
- B.E. / B.Tech. + MBA (Systems / Data / Data Science / Analytics / Finance) with a good academic background.
- Strong communication, facilitation, relationship-building, presentation, and negotiation skills.
- A flair for storytelling, able to present interesting insights from the data.
- Good soft skills: communication, proactivity, self-learning, etc.
- Flexibility to adapt to the dynamically changing needs of the industry.
- Good exposure to database management systems; knowledge of the big data ecosystem, like Hadoop, is good to have.
- Hands-on with SQL and good knowledge of NoSQL-based databases.
- Working knowledge of the R/Python language is good to have.
- Exposure to / knowledge about one of the cloud ecosystems: Google / AWS / Azure.

What you will love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
Posted 2 months ago
8.0 - 11.0 years
35 - 37 Lacs
Kolkata, Ahmedabad, Bengaluru
Work from Office
Dear Candidate,

We are hiring a Cloud Architect to design and oversee scalable, secure, and cost-efficient cloud solutions. Great for architects who bridge technical vision with business needs.

Key Responsibilities:
- Design cloud-native solutions using AWS, Azure, or GCP.
- Lead cloud migration and transformation projects.
- Define cloud governance, cost control, and security strategies.
- Collaborate with DevOps and engineering teams for implementation.

Required Skills & Qualifications:
- Deep expertise in cloud architecture and multi-cloud environments.
- Experience with containers, serverless, and microservices.
- Proficiency in Terraform, CloudFormation, or equivalent.
- Bonus: cloud certification (AWS/Azure/GCP Architect).

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager
Integra Technologies
Posted 2 months ago