15.0 - 20.0 years
10 - 14 Lacs
Hyderabad
Work from Office
Project Role : Application Lead
Project Role Description : Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills : Data Engineering
Good to have skills : NA
Minimum 5 year(s) of experience is required
Educational Qualification : 15 years full time education
Summary : As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team in implementing effective solutions. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that all stakeholders are informed and involved throughout the development process. Your role will be pivotal in driving innovation and efficiency within the application development lifecycle, fostering a collaborative environment that encourages creativity and problem-solving.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and implement necessary adjustments to meet deadlines.
Professional & Technical Skills:
- Must Have Skills: Proficiency in Microsoft Azure Databricks.
- Experience with cloud computing platforms and services.
- Strong understanding of application development methodologies.
- Ability to design and implement scalable solutions.
- Familiarity with data integration and ETL processes.
Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft Azure Databricks.
- This position is based at our Hyderabad office.
- A 15 years full time education is required.
Posted 5 days ago
2.0 - 7.0 years
17 - 20 Lacs
Hyderabad
Work from Office
Job Title - Data Eng, Mgmt. & Governance - Analyst, S&C Global Network
Management Level: 11
Location: Hyderabad
Must have skills: Proficiency and hands-on experience in Data Engineering technologies like Python, R, SQL, Spark, PySpark, Databricks, Hadoop, etc.
Good to have skills: Exposure to Retail, Banking, Healthcare projects; knowledge of Power BI & PowerApps is an added advantage.
Job Summary : As a Data Operations Analyst, you will be responsible for ensuring our esteemed business is fully supported in using business-critical AI-enabled applications. This involves solving day-to-day application issues and business queries, and addressing ad hoc data requests so that clients can extract maximum value from the AI applications.
WHAT'S IN IT FOR YOU
- An opportunity to work on high-visibility projects with top clients around the globe.
- Potential to co-create with leaders in strategy, industry experts, enterprise function practitioners, and business intelligence professionals to shape and recommend innovative solutions that leverage emerging technologies.
- Ability to embed responsible business into everything, from how you service your clients to how you operate as a responsible professional.
- Personalized training modules to develop your strategy & consulting acumen and grow your skills, industry knowledge, and capabilities.
- Opportunity to thrive in a culture that is committed to accelerating equality for all.
- Engage in boundaryless collaboration across the entire organization.
Roles & Responsibilities:
- Monitor and maintain pre-processing pipelines, model execution batches, and validation of model outputs. In case of deviations or model degradation, conduct detailed root cause analysis and implement permanent fixes.
- Debug issues related to data loads, batch pipelines, and application functionality, including special handling of data/batch streams.
- Perform initial triaging of code-related defects/issues, provide root cause analysis, and implement code fixes for permanent resolution of the defect.
- Design, build, test, and deploy small to medium-sized enhancements that deliver value to the business and enhance application availability and usability.
- Carry out sanity testing of use cases as part of pre-deployment and post-production activities.
- Take primary responsibility for application availability and stability by remediating application issues/bugs or other vulnerabilities.
- Data Operations Analysts evolve to become Subject Matter Experts as they mature in servicing the applications.
Professional & Technical Skills:
- Proven experience (2+ years) working as per the above job description is required.
- Experience/education in Statistics, Data Science, Applied Mathematics, Business Analytics, Computer Science, or Information Systems is preferable.
- Exposure to Retail, Banking, Healthcare projects is an added advantage.
- Proficiency and hands-on experience in Data Engineering technologies like Python, R, SQL, Spark, PySpark, Databricks, Hadoop, etc.
- Ability to work with large data sets and present findings/insights to key stakeholders; data management using databases like SQL.
- Experience with any of the cloud platforms like AWS, Azure, or Google Cloud for deploying and scaling language models.
- Experience with any of the data visualization tools like Tableau, QlikView, and Spotfire is good to have; knowledge of Power BI & PowerApps is an added advantage.
- Excellent analytical and problem-solving skills, with a data-driven mindset.
- Proficient in Excel, MS Word, PowerPoint, etc.
- Ability to solve complex business problems and deliver client delight.
- Strong writing skills to build points of view on current industry trends.
- Good client handling skills; able to demonstrate thought leadership & problem-solving skills.
Additional Information:
- The ideal candidate will possess a strong educational background in computer science or a related field.
- This position is based at our Hyderabad office.
About Our Company | Accenture
Qualification
Experience: Minimum 2 years of experience is required
Educational Qualification: Bachelor's or Master's degree in any engineering stream or MCA.
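The monitoring duties this posting describes (validating model outputs and catching deviations or model degradation) can be sketched as a simple threshold check. This is a minimal illustration, not the employer's actual tooling; the function name, score values, and the 20% relative tolerance are all assumptions for the example.

```python
from statistics import mean

def check_model_output(scores, baseline_mean, tolerance=0.2):
    """Flag a batch of model outputs whose mean drifts more than
    `tolerance` (relative) from the recorded baseline mean."""
    batch_mean = mean(scores)
    drift = abs(batch_mean - baseline_mean) / baseline_mean
    return {"batch_mean": batch_mean, "drift": drift, "degraded": drift > tolerance}

# A healthy batch stays within tolerance; a degraded one is flagged
# for the root-cause analysis step the posting mentions.
ok = check_model_output([0.81, 0.78, 0.80], baseline_mean=0.80)
bad = check_model_output([0.55, 0.52, 0.50], baseline_mean=0.80)
```

In practice such a check would run after every model execution batch, with flagged batches routed into the triage workflow rather than published to clients.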
Posted 5 days ago
2.0 - 5.0 years
5 - 9 Lacs
Mumbai
Work from Office
Project Role : Database Administrator
Project Role Description : Administer, develop, test, or demonstrate databases. Perform many related database functions across one or more teams or clients, including designing, implementing and maintaining new databases, backup/recovery and configuration management. Install database management systems (DBMS) and provide input for modification of procedures and documentation used for problem resolution and day-to-day maintenance.
Must have skills : Cloud Data Architecture
Good to have skills : Data & AI Solution Architecture, Modern Data Integration
Minimum 10+ year(s) of experience is required
Educational Qualification : Should have completed Graduation from a reputed College/University
Key Responsibilities :
- CPG/Retail industry background
- Strong understanding of the Modern Data Engineering Stack
- Hands-on experience with Big Data and data streaming
- Strong knowledge of architecture and design patterns
- Experience in designing architectural roadmaps and rapid POC development
- Knowledge of SQL, NoSQL and Graph databases
- Knowledge of data modeling, SCD types, data mining
- Experience with unified query engines, ETL tools, data virtualization, etc.
- Wide knowledge of modern data engineering design principles
Technical Experience :
- Primary skills : System design, Modern Data Engineering, Big Data
- Experience developing applications with high-volume transactions
- Passion for data and analytics; a philosophy of using technology to help solve business problems
- Strong verbal and written communication
- Ability to manage a team
- Willingness to travel to client sites in India based on need
Professional Attributes :
- Must have performed in client-facing roles
- Strong communication skills
- Leadership skills
- Team handling skills
- Analytical skills
- Presentation skills
- Ability to work under pressure
Educational Qualification : Should have completed Graduation from a reputed College/University
Additional Info : Industry experience in CPG, Auto, Retail, Manufacturing
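The "SCD types" knowledge asked for above refers to Slowly Changing Dimension handling in data modeling. A minimal SCD Type 2 sketch in plain Python follows; the record layout (`key`, `attrs`, `start_date`, `end_date`) and the helper name are illustrative assumptions, not a real system's schema.

```python
from datetime import date

def scd2_upsert(dimension, key, new_attrs, today):
    """SCD Type 2: close out the current open version of `key` when its
    attributes change, and append the new version with an open end_date."""
    current = next((r for r in dimension
                    if r["key"] == key and r["end_date"] is None), None)
    if current and current["attrs"] == new_attrs:
        return dimension          # no change: keep the open version as-is
    if current:
        current["end_date"] = today  # close the superseded version
    dimension.append({"key": key, "attrs": new_attrs,
                      "start_date": today, "end_date": None})
    return dimension

dim = [{"key": "C1", "attrs": {"city": "Mumbai"},
        "start_date": date(2023, 1, 1), "end_date": None}]
scd2_upsert(dim, "C1", {"city": "Pune"}, date(2024, 6, 1))
# dim now holds a closed Mumbai version and an open Pune version,
# preserving history instead of overwriting it (SCD Type 1 would overwrite).
```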
Posted 5 days ago
15.0 - 20.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role : Application Developer
Project Role Description : Design, build and configure applications to meet business process and application requirements.
Must have skills : Data Engineering
Good to have skills : NA
Minimum 5 year(s) of experience is required
Educational Qualification : 15 years full time education
Must Have:
- Azure Data Factory
- SQL
- Azure Data Lake Storage
- Data Engineering Solutions
- Structured and Unstructured Data Processing
- Data Modelling, Analysis, Design, Development, and Documentation
- Cloud Computing
- CI/CD
- DevOps
Good to Have (Minimum of 3):
- Azure DevOps
- PowerApps
- Azure Function App
- Databricks
- PowerBI (advanced knowledge)
- Python
- Airflow DAG
- Infra deployments (Azure, Bicep)
- Networking & security
- Scheduling for orchestration of workflows
- Agile frameworks (Scrum)
Qualification : 15 years full time education
Posted 5 days ago
15.0 - 20.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role : Application Developer
Project Role Description : Design, build and configure applications to meet business process and application requirements.
Must have skills : Data Engineering
Good to have skills : NA
Minimum 5 year(s) of experience is required
Educational Qualification : 15 years full time education
Summary : As a Senior Data Engineer, you bring over 10 years of experience in applying advanced concepts and technologies in production environments. Your expertise and skills make you an ideal candidate to lead and deliver cutting-edge data solutions.
Expertise:
- Extensive hands-on experience with Azure Databricks and modern data architecture principles.
- In-depth understanding of Lakehouse and Medallion Architectures and their practical applications.
- Advanced knowledge of Delta Lake, including data storage, schema evolution, and ACID transactions.
- Comprehensive expertise in working with Parquet files, including handling challenges and designing effective solutions.
- Working experience with the Unity Catalog.
- Knowledge of Azure Cloud services.
- Working experience with Azure DevOps.
Skills:
- Proficiency in writing clear, maintainable, and modular code using Python and PySpark.
- Advanced SQL expertise, including query optimization and performance tuning.
Tools:
- Experience with Infrastructure as Code (IaC), preferably using Terraform.
- Proficiency in CI/CD pipelines, with a strong preference for Azure DevOps.
- Familiarity with Azure Data Factory for seamless data integration and orchestration.
- Hands-on experience with Apache Airflow for workflow automation.
- Automation skills using PowerShell.
- Nice to have: basic knowledge of Lakehouse Apps and frameworks like Angular.js, Node.js, or React.js.
You also:
- Possess excellent communication skills, making technical concepts accessible to non-technical stakeholders.
- Nice to have: API knowledge.
- Nice to have: data modelling knowledge.
- Are open to discussing and tackling challenges with a collaborative mindset.
- Enjoy teaching, sharing knowledge, and mentoring team members to foster growth.
- Thrive in multidisciplinary Scrum teams, collaborating effectively to achieve shared goals.
- Have a solid foundation in software development, enabling you to bridge gaps between development and data engineering.
- Demonstrate a strong drive for continuous improvement and learning new technologies.
- Take full ownership of the build, run, and change processes, ensuring solutions are reliable and scalable.
- Embody a positive, proactive mindset, fostering teamwork and mutual support within your team.
Qualification : 15 years full time education
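The Medallion Architecture this posting names organizes data into bronze (raw), silver (validated/typed), and gold (business aggregates) layers. A sketch of the idea without Spark or Delta Lake follows; the sample records and layer functions are illustrative assumptions, not the posting's actual pipeline.

```python
# Bronze: raw ingested records, kept as-is (bad rows included).
bronze = [
    {"order_id": "1", "amount": "100.5", "country": "IN"},
    {"order_id": "2", "amount": "bad",   "country": "IN"},
    {"order_id": "3", "amount": "50.0",  "country": "NL"},
]

def to_silver(rows):
    """Silver: validated and typed — rows that fail parsing are dropped
    (a real pipeline would quarantine them for inspection)."""
    out = []
    for r in rows:
        try:
            out.append({**r, "amount": float(r["amount"])})
        except ValueError:
            pass
    return out

def to_gold(rows):
    """Gold: business-level aggregate — revenue per country."""
    totals = {}
    for r in rows:
        totals[r["country"]] = totals.get(r["country"], 0.0) + r["amount"]
    return totals

gold = to_gold(to_silver(bronze))  # {'IN': 100.5, 'NL': 50.0}
```

In Databricks the same layering is typically implemented as Delta tables, with schema evolution and ACID transactions handling what the try/except crudely approximates here.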
Posted 5 days ago
2.0 - 5.0 years
9 - 15 Lacs
Noida
Work from Office
Job Description: Data Engineer (Climate & Water Resources)
Position Overview
We are seeking a highly skilled Data Engineer with experience in climate-related and water resources data management. The ideal candidate will develop, optimize, and maintain data pipelines for hydrological, meteorological, and climate datasets. This role involves working with large geospatial and time-series data, supporting climate risk assessments, flood modeling, and water resource management projects.
Key Responsibilities
- Automate data collection from sources such as NOAA, ECMWF, IMD, NASA, and regional hydrological agencies.
- Process and clean large datasets, ensuring data integrity, accuracy, and accessibility.
- Handle large-scale geospatial and time-series datasets related to rainfall, river discharge, groundwater levels, and climate models.
- Work with climate projection datasets (e.g., CMIP6, ERA5) for impact assessments and forecasting.
- Develop efficient storage solutions using cloud platforms (AWS, GCP, or Azure).
- Support hydrological and hydraulic modeling teams by structuring input data for flood simulations, drought analysis, and climate impact assessments.
- Collaborate with data scientists to build machine learning models for climate risk assessment.
- Implement scalable data processing workflows using Python, SQL, and distributed computing tools.
- Optimize data workflows for efficient processing and visualization.
- Work closely with hydrologists, climate scientists, and engineers to understand data requirements.
- Document data pipelines, metadata, and workflows for reproducibility and transparency.
Required Qualifications & Skills
Education & Experience
- Bachelor's or Master's degree in Computer Science, Data Science, Environmental Engineering, Hydrology, Climate Science, or a related field.
- 3-5 years of experience in data engineering, preferably in climate science, hydrology, or environmental domains.
Technical Skills
- Strong programming skills in Python, SQL, or R.
- Experience with big data processing tools (Apache Spark, Dask, or Hadoop).
- Familiarity with climate and hydrological datasets and models (HEC-RAS, HEC-HMS, SWAT, GFS).
- Experience with geospatial data tools (GDAL, PostGIS, GeoPandas).
- Knowledge of cloud platforms (AWS S3, GCP BigQuery, Azure Data Lake).
Preferred Skills
- Experience in climate risk modeling and statistical analysis of environmental data.
- Familiarity with APIs for climate data access (Copernicus, NASA Earthdata).
- Understanding of hydrological models and remote sensing datasets.
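The time-series handling this role describes, such as structuring rainfall input data for flood or drought analysis, often starts with simple temporal rollups. A stdlib sketch follows; the `(date, mm)` record format is an illustrative assumption (real sources like ERA5 ship as NetCDF/GRIB and would be read with tools such as xarray).

```python
from collections import defaultdict
from datetime import date

def monthly_rainfall(daily):
    """Aggregate daily (date, millimetres) rainfall readings into monthly
    totals — the kind of rollup used to prepare hydrological model inputs."""
    totals = defaultdict(float)
    for day, mm in daily:
        totals[(day.year, day.month)] += mm
    return dict(totals)

readings = [(date(2024, 7, 1), 12.5), (date(2024, 7, 2), 0.0),
            (date(2024, 8, 1), 30.2)]
monthly_rainfall(readings)  # {(2024, 7): 12.5, (2024, 8): 30.2}
```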
Posted 5 days ago
5.0 - 10.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Project Role : Data Engineer
Project Role Description : Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills : Microsoft Azure Data Services
Good to have skills : NA
Minimum 5 year(s) of experience is required
Educational Qualification : 15 years full time education
Summary : As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. You will create data pipelines, ensure data quality, and implement ETL processes to migrate and deploy data across systems, and be involved in the end-to-end data management process.
Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Lead data architecture design and implementation.
- Optimize data delivery and re-design infrastructure for greater scalability.
- Implement data security and privacy measures.
- Collaborate with data scientists and analysts to understand data needs.
Professional & Technical Skills:
- Must Have Skills: Proficiency in Microsoft Azure Data Services.
- Strong understanding of cloud-based data solutions.
- Experience with data warehousing and data lakes.
- Knowledge of SQL and NoSQL databases.
- Hands-on experience with data integration tools.
- Good To Have Skills: Experience with Azure Machine Learning.
Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft Azure Data Services.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.
Qualification : 15 years full time education
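The extract, transform and load process this role centres on can be sketched in a few lines. This is a toy illustration of the pattern only; the in-memory source, field names, and the dict-based "warehouse" are assumptions (in the Azure services the posting names, the stages would map to Data Factory pipelines and a lake or warehouse target).

```python
def extract(source):
    """Extract: read raw rows from a source system (here, an in-memory list)."""
    return list(source)

def transform(rows):
    """Transform: normalise names and filter out inactive records."""
    return [{"id": r["id"], "name": r["name"].strip().title()}
            for r in rows if r.get("active")]

def load(rows, target):
    """Load: write into the target store keyed by id (an idempotent upsert,
    so re-running the pipeline does not duplicate rows)."""
    for r in rows:
        target[r["id"]] = r
    return target

source = [{"id": 1, "name": "  asha rao ", "active": True},
          {"id": 2, "name": "old record", "active": False}]
warehouse = load(transform(extract(source)), {})
# warehouse == {1: {'id': 1, 'name': 'Asha Rao'}}
```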
Posted 5 days ago
8.0 - 13.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Project Role : BI Engineer
Project Role Description : Develop, migrate, deploy, and maintain data-driven insights and knowledge engine interfaces that drive adoption and decision making. Integrate security and data privacy protection.
Must have skills : SAS Analytics
Good to have skills : NA
Minimum 5 year(s) of experience is required
Educational Qualification : 15 years full time education
Summary : As a SAS Base & Macros specialist, you will be responsible for building and designing scalable and open Business Intelligence (BI) architecture to provide cross-enterprise visibility and agility for business innovation. You will create industry and function data models used to build reports and dashboards, ensuring seamless integration with Accenture's Data and AI framework to meet client needs.
Roles & Responsibilities:
1. Data Engineer to lead or drive the migration of legacy SAS data preparation jobs to a modern Python-based data engineering framework.
2. Should have deep experience in both SAS and Python, strong knowledge of data transformation workflows, and a solid understanding of database systems and ETL best practices.
3. Should analyze existing SAS data preparation and data feed scripts and workflows to identify logic and dependencies.
4. Should translate and re-engineer SAS jobs into scalable, efficient Python-based data pipelines.
5. Collaborate with data analysts, scientists, and engineers to validate and test converted workflows.
6. Optimize performance of new Python workflows and ensure data quality and consistency.
7. Document migration processes, coding standards, and pipeline configurations.
8. Integrate new pipelines with Google Cloud Platform as required.
9. Provide guidance and support for testing, validation, and production deployment.
Professional & Technical Skills:
- Must Have Skills: Proficiency in SAS Base & Macros.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.
Additional Information:
- The candidate should have 8+ years of experience, with a minimum of 3 years of experience in SAS or Python data engineering.
Qualification : 15 years full time education
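The core migration work this posting describes, re-engineering a SAS data preparation job as a Python pipeline, often boils down to re-expressing a DATA step (filter plus derived columns). The SAS snippet in the comment and the field names below are illustrative assumptions, not a client's actual job.

```python
# SAS DATA step being migrated (illustrative):
#   data work.highvalue;
#     set work.orders;
#     where amount > 1000;
#     discount = amount * 0.05;
#   run;

def high_value_orders(orders):
    """Python equivalent of the DATA step above: the WHERE clause becomes
    a filter, the assignment statement becomes a derived field."""
    out = []
    for row in orders:
        if row["amount"] > 1000:
            out.append({**row, "discount": row["amount"] * 0.05})
    return out

orders = [{"id": "A", "amount": 1500.0}, {"id": "B", "amount": 800.0}]
high_value_orders(orders)
```

Validating the converted workflow (step 5 above) then means running both versions on the same input and comparing row counts and values before retiring the SAS job.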
Posted 5 days ago
8.0 - 13.0 years
14 - 19 Lacs
Coimbatore
Work from Office
Project Role : BI Architect
Project Role Description : Build and design scalable and open Business Intelligence (BI) architecture to provide cross-enterprise visibility and agility for business innovation. Create industry and function data models used to build reports and dashboards. Ensure the architecture and interface seamlessly integrates with Accenture's Data and AI framework, meeting client needs.
Must have skills : SAS Base & Macros
Good to have skills : NA
Minimum 3 year(s) of experience is required
Educational Qualification : 15 years full time education
Summary : As a SAS Base & Macros specialist, you will be responsible for building and designing scalable and open Business Intelligence (BI) architecture to provide cross-enterprise visibility and agility for business innovation. You will create industry and function data models used to build reports and dashboards, ensuring seamless integration with Accenture's Data and AI framework to meet client needs.
Roles & Responsibilities:
1. Data Engineer to lead or drive the migration of legacy SAS data preparation jobs to a modern Python-based data engineering framework.
2. Should have deep experience in both SAS and Python, strong knowledge of data transformation workflows, and a solid understanding of database systems and ETL best practices.
3. Should analyze existing SAS data preparation and data feed scripts and workflows to identify logic and dependencies.
4. Should translate and re-engineer SAS jobs into scalable, efficient Python-based data pipelines.
5. Collaborate with data analysts, scientists, and engineers to validate and test converted workflows.
6. Optimize performance of new Python workflows and ensure data quality and consistency.
7. Document migration processes, coding standards, and pipeline configurations.
8. Integrate new pipelines with Google Cloud Platform as required.
9. Provide guidance and support for testing, validation, and production deployment.
Professional & Technical Skills:
- Must Have Skills: Proficiency in SAS Base & Macros.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.
Additional Information:
- The candidate should have 8+ years of experience, with a minimum of 3 years of experience in SAS or Python data engineering.
Qualification : 15 years full time education
Posted 5 days ago
5.0 - 7.0 years
0 - 0 Lacs
Bengaluru
Work from Office
Job Title: Data Engineer
Experience: 5 to 6 years of Kafka development with KSQL programming experience.
Preferred Location: Bengaluru, India
Contractual - 1 year
Job Description: We are seeking a highly skilled and experienced Data Engineer to join our team. The ideal candidate will have a strong background in working with on-prem environments and hands-on experience with Kafka Debezium. This role requires expertise in configuring Kafka brokers, topics, sink connectors, etc. Knowledge of Kafka UI projects is a plus.
Responsibilities:
- Work in an on-prem environment to manage and maintain data architecture.
- Configure and manage KRaft / ZooKeeper.
- Perform Kafka topic configuration, create and test topics, and validate data flow from source to Kafka using sample tables.
- Ingest data from different databases.
- Deploy sink connectors and map Kafka topics to ODS tables with upsert logic.
- Set up Kafka High Availability.
- Manage Kafka through Provectus.
- Demonstrate knowledge of Kubernetes and virtual machines.
Preferred Skills:
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
- Ability to work independently and manage multiple tasks.
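The "map Kafka topics to ODS tables with upsert logic" step above can be sketched with plain Python: a sink connector consumes change messages and applies them to the operational data store keyed by primary key. The simplified message shape (`op`/`key`/`payload`, loosely modelled on Debezium-style change events) is an illustrative assumption.

```python
def apply_changes(ods_table, messages):
    """Apply a stream of change messages to an ODS table keyed by primary key:
    creates and updates both upsert; deletes remove the row. This mirrors what
    a sink connector in upsert mode does against the target table."""
    for msg in messages:
        key = msg["key"]
        if msg["op"] == "delete":
            ods_table.pop(key, None)
        else:  # "create" and "update" are both upserts
            ods_table[key] = msg["payload"]
    return ods_table

stream = [
    {"op": "create", "key": 1, "payload": {"id": 1, "status": "new"}},
    {"op": "update", "key": 1, "payload": {"id": 1, "status": "shipped"}},
    {"op": "delete", "key": 2},
]
ods = apply_changes({2: {"id": 2, "status": "stale"}}, stream)
# ods == {1: {'id': 1, 'status': 'shipped'}}
```

Because upserts are keyed, replaying the same stream is idempotent, which matters when a connector restarts and re-delivers messages.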
Posted 5 days ago
8.0 - 13.0 years
14 - 19 Lacs
Bengaluru
Work from Office
Project Role : BI Architect
Project Role Description : Build and design scalable and open Business Intelligence (BI) architecture to provide cross-enterprise visibility and agility for business innovation. Create industry and function data models used to build reports and dashboards. Ensure the architecture and interface seamlessly integrates with Accenture's Data and AI framework, meeting client needs.
Must have skills : SAS Base & Macros
Good to have skills : NA
Minimum 5 year(s) of experience is required
Educational Qualification : 15 years full time education
Summary : As a SAS Base & Macros specialist, you will be responsible for building and designing scalable and open Business Intelligence (BI) architecture to provide cross-enterprise visibility and agility for business innovation. You will create industry and function data models used to build reports and dashboards, ensuring seamless integration with Accenture's Data and AI framework to meet client needs.
Roles & Responsibilities:
1. Data Engineer to lead or drive the migration of legacy SAS data preparation jobs to a modern Python-based data engineering framework.
2. Should have deep experience in both SAS and Python, strong knowledge of data transformation workflows, and a solid understanding of database systems and ETL best practices.
3. Should analyze existing SAS data preparation and data feed scripts and workflows to identify logic and dependencies.
4. Should translate and re-engineer SAS jobs into scalable, efficient Python-based data pipelines.
5. Collaborate with data analysts, scientists, and engineers to validate and test converted workflows.
6. Optimize performance of new Python workflows and ensure data quality and consistency.
7. Document migration processes, coding standards, and pipeline configurations.
8. Integrate new pipelines with Google Cloud Platform as required.
9. Provide guidance and support for testing, validation, and production deployment.
Professional & Technical Skills:
- Must Have Skills: Proficiency in SAS Base & Macros.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.
Additional Information:
- The candidate should have 8+ years of experience, with a minimum of 3 years of experience in SAS or Python data engineering.
Qualification : 15 years full time education
Posted 5 days ago
3.0 - 8.0 years
5 - 9 Lacs
Hyderabad
Work from Office
Project Role : Application Developer Project Role Description : Design, build and configure applications to meet business process and application requirements. Must have skills : Databricks Unified Data Analytics Platform Good to have skills : NAMinimum 5 year(s) of experience is required Educational Qualification : 15 years full time education Summary :Data Engineering Sr. Advisor demonstrates expertise in data engineering technologies with the focus on engineering, innovation, strategic influence and product mindset. This individual will act as key contributor of the team to design, build, test and deliver large-scale software applications, systems, platforms, services or technologies in the data engineering space. This individual will have the opportunity to work directly with partner IT and business teams, owning and driving major deliverables across all aspects of software delivery.The candidate will play a key role in automating the processes on Databricks and AWS. They collaborate with business and technology partners in gathering requirements, develop and implement. The individual must have strong analytical and technical skills coupled with the ability to positively influence on delivery of data engineering products. The applicant will be working in a team that demands innovation, cloud-first, self-service-first, and automation-first mindset coupled with technical excellence. 
The applicant will be working with internal and external stakeholders and customers for building solutions as part of Enterprise Data Engineering and will need to demonstrate very strong technical and communication skills.Delivery Intermediate delivery skills including the ability to deliver work at a steady, predictable pace to achieve commitments, decompose work assignments into small batch releases and contribute to tradeoff and negotiation discussions.Domain Expertise Demonstrated track record of domain expertise including the ability to understand technical concepts necessary to do the job effectively, demonstrate willingness, cooperation, and concern for business issues and possess in-depth knowledge of immediate systems worked on.Problem Solving Proven problem solving skills including debugging skills, allowing you to determine source of issues in unfamiliar code or systems and the ability to recognize and solve repetitive problems rather than working around them, recognize mistakes using them as learning opportunities and break down large problems into smaller, more manageable ones& Responsibilities:The candidate will be responsible to deliver business needs end to end from requirements to development into production.Through hands-on engineering approach in Databricks environment, this individual will deliver data engineering toolchains, platform capabilities and reusable patterns.The applicant will be responsible to follow software engineering best practices with an automation first approach and continuous learning and improvement mindset.The applicant will ensure adherence to enterprise architecture direction and architectural standards.The applicant should be able to collaborate in a high-performing team environment, and an ability to influence and be influenced by others.Experience Required:More than 12 years of experience in software engineering, building data engineering pipelines, middleware and API development and automationMore than 3 years of 
experience in Databricks within an AWS environmentData Engineering experienceExperience Desired:Expertise in Agile software development principles and patternsExpertise in building streaming, batch and event-driven architectures and data pipelinesPrimary Skills: Cloud-based security principles and protocols like OAuth2, JWT, data encryption, hashing data, secret management, etc.Expertise in Big data technologies such as Spark, Hadoop, Databricks, Snowflake, EMR, GlueGood understanding of Kafka, Kafka Streams, Spark Structured streaming, configuration-driven data transformation and curationExpertise in building cloud-native microservices, containers, Kubernetes and platform-as-a-service technologies such as OpenShift, CloudFoundryExperience in multi-cloud software-as-a-service products such as Databricks, SnowflakeExperience in Infrastructure-as-Code (IaC) tools such as terraform and AWS cloudformationExperience in messaging systems such as Apache ActiveMQ, WebSphere MQ, Apache Artemis, Kafka, AWS SNSExperience in API and microservices stack such as Spring Boot, Quarkus, Expertise in Cloud technologies such as AWS Glue, Lambda, S3, Elastic Search, API Gateway, CloudFrontExperience with one or more of the following programming and scripting languages Python, Scala, JVM-based languages, or JavaScript, and ability to pick up new languagesExperience in building CI/CD pipelines using Jenkins, Github ActionsStrong expertise with source code management and its best practicesProficient in self-testing of applications, unit testing and use of mock frameworks, test-driven development (TDD)Knowledge on Behavioral Driven Development (BDD) approachAdditional Skills: Ability to perform detailed analysis of business problems and technical environmentsStrong oral and written communication skillsAbility to think strategically, implement iteratively and estimate financial impact of design/architecture alternativesContinuous focus on an on-going learning and development Qualification 
15 years full time education
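The "configuration-driven data transformation and curation" skill named above can be sketched in plain Python: transformation rules live in a config structure, so new curation logic is added by editing configuration rather than code. All field names, rule names, and values here are invented for illustration, not from any specific product.

```python
# Minimal sketch of configuration-driven transformation: each rule in the
# config names a target field and an operation; the engine applies them in
# order. All names and values below are illustrative only.

TRANSFORMS = {
    "trim": lambda v: v.strip(),
    "upper": lambda v: v.upper(),
    "to_float": lambda v: float(v),
}

def apply_config(record, config):
    """Apply each configured transform to its target field, in order."""
    out = dict(record)
    for rule in config:
        field, op = rule["field"], rule["op"]
        out[field] = TRANSFORMS[op](out[field])
    return out

config = [
    {"field": "customer", "op": "trim"},
    {"field": "country", "op": "upper"},
    {"field": "amount", "op": "to_float"},
]

raw = {"customer": "  acme corp ", "country": "in", "amount": "120.50"}
print(apply_config(raw, config))
# {'customer': 'acme corp', 'country': 'IN', 'amount': 120.5}
```

In real pipelines the same idea is typically expressed as Spark column expressions driven by a YAML/JSON config, but the control flow is the same.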
Posted 5 days ago
4.0 - 9.0 years
20 - 25 Lacs
Gurugram
Work from Office
Job Title - S&C Global Network - AI - Healthcare Analytics - Consultant
Management Level: 9 - Team Lead/Consultant
Location: Bangalore/Gurgaon
Must-have skills: R, Python, SQL, Spark, Tableau, Power BI
Good to have skills: Ability to leverage design thinking, business process optimization, and stakeholder management skills.
Job Summary: This role involves driving strategic initiatives, managing business transformations, and leveraging industry expertise to create value-driven solutions.
Roles & Responsibilities: Provide strategic advisory services, conduct market research, and develop data-driven recommendations to enhance business performance.
WHAT'S IN IT FOR YOU
An opportunity to work on high-visibility projects with top Pharma clients around the globe. Potential to co-create with leaders in strategy, industry experts, enterprise function practitioners, and business intelligence professionals to shape and recommend innovative solutions that leverage emerging technologies. Ability to embed responsible business into everything, from how you service your clients to how you operate as a responsible professional. Personalized training modules to develop your strategy and consulting acumen to grow your skills, industry knowledge, and capabilities. Opportunity to thrive in a culture that is committed to accelerating equality for all. Engage in boundaryless collaboration across the entire organization.
What you would do in this role
Support delivery of small to medium-sized teams to deliver consulting projects for global clients. Responsibilities may include strategy, implementation, process design, and change management for specific modules. Work with the team or as an individual contributor on the assigned project, drawing on a variety of skills from data engineering to data science. Provide subject matter expertise in various sub-segments of the LS industry.
Develop assets and methodologies, points of view, research, or white papers for use by the team and the larger community. Acquire new skills that have utility across industry groups. Support strategies and operating models focused on some business units and assess likely competitive responses. Also, assess implementation readiness and points of greatest impact. Co-lead proposals and business development efforts and coordinate with other colleagues to create consensus-driven deliverables. Execute a transformational change plan aligned with the client's business strategy and context for change. Engage stakeholders in the change journey and build commitment to change. Make presentations wherever required to a known audience or client on functional aspects of his or her domain.
Who are we looking for
Bachelor's or Master's degree in Statistics, Data Science, Applied Mathematics, Business Analytics, Computer Science, Information Systems, or another quantitative field. Proven experience (4+ years) in working on Life Sciences/Pharma/Healthcare projects and delivering successful outcomes. Excellent understanding of Pharma data sets: commercial, clinical, RWE (Real World Evidence), and EMR (Electronic Medical Records). Hands-on experience working across one or more areas such as real-world evidence data, R&D clinical data, and digital marketing data. Hands-on experience handling datasets such as Komodo, RAVE, IQVIA, Truven, and Optum. Hands-on experience in building and deploying statistical models/machine learning, including segmentation and predictive modeling, hypothesis testing, multivariate statistical analysis, time series techniques, and optimization. Proficiency in programming languages such as R, Python, SQL, and Spark. Ability to work with large data sets and present findings/insights to key stakeholders; data management using databases like SQL.
Experience with any of the cloud platforms like AWS, Azure, or Google Cloud for deploying and scaling language models. Experience with any of the data visualization tools like Tableau, Power BI, QlikView, or Spotfire is good to have. Excellent analytical and problem-solving skills, with a data-driven mindset. Proficient in Excel, MS Word, PowerPoint, etc. Ability to solve complex business problems and deliver client delight. Strong writing skills to build points of view on current industry trends. Good communication, interpersonal, and presentation skills.
Professional & Technical Skills: - Relevant experience in the required domain. - Strong analytical, problem-solving, and communication skills. - Ability to work in a fast-paced, dynamic environment.
Additional Information: - Opportunity to work on innovative projects. - Career growth and leadership exposure.
About Our Company | Accenture
Qualification
Experience: 4-8 Years
Educational Qualification: Bachelor's or Master's degree in Statistics, Data Science, Applied Mathematics, Business Analytics, Computer Science, Information Systems, or another quantitative field.
Posted 5 days ago
4.0 - 9.0 years
17 - 22 Lacs
Bengaluru
Work from Office
Job Title - S&C Global Network - AI - Healthcare Analytics - Consultant
Management Level: 9 - Team Lead/Consultant
Location: Bangalore/Gurgaon
Must-have skills: R, Python, SQL, Spark, Tableau, Power BI
Good to have skills: Ability to leverage design thinking, business process optimization, and stakeholder management skills.
Job Summary: This role involves driving strategic initiatives, managing business transformations, and leveraging industry expertise to create value-driven solutions.
Roles & Responsibilities: Provide strategic advisory services, conduct market research, and develop data-driven recommendations to enhance business performance.
WHAT'S IN IT FOR YOU
An opportunity to work on high-visibility projects with top Pharma clients around the globe. Potential to co-create with leaders in strategy, industry experts, enterprise function practitioners, and business intelligence professionals to shape and recommend innovative solutions that leverage emerging technologies. Ability to embed responsible business into everything, from how you service your clients to how you operate as a responsible professional. Personalized training modules to develop your strategy and consulting acumen to grow your skills, industry knowledge, and capabilities. Opportunity to thrive in a culture that is committed to accelerating equality for all. Engage in boundaryless collaboration across the entire organization.
Professional & Technical Skills: - Relevant experience in the required domain. - Strong analytical, problem-solving, and communication skills. - Ability to work in a fast-paced, dynamic environment.
What you would do in this role
Support delivery of small to medium-sized teams to deliver consulting projects for global clients. Responsibilities may include strategy, implementation, process design, and change management for specific modules.
Work with the team or as an individual contributor on the assigned project, drawing on a variety of skills from data engineering to data science. Provide subject matter expertise in various sub-segments of the LS industry. Develop assets and methodologies, points of view, research, or white papers for use by the team and the larger community. Acquire new skills that have utility across industry groups. Support strategies and operating models focused on some business units and assess likely competitive responses. Also, assess implementation readiness and points of greatest impact. Co-lead proposals and business development efforts and coordinate with other colleagues to create consensus-driven deliverables. Execute a transformational change plan aligned with the client's business strategy and context for change. Engage stakeholders in the change journey and build commitment to change. Make presentations wherever required to a known audience or client on functional aspects of his or her domain.
Who are we looking for
Bachelor's or Master's degree in Statistics, Data Science, Applied Mathematics, Business Analytics, Computer Science, Information Systems, or another quantitative field. Proven experience (4+ years) in working on Life Sciences/Pharma/Healthcare projects and delivering successful outcomes. Excellent understanding of Pharma data sets: commercial, clinical, RWE (Real World Evidence), and EMR (Electronic Medical Records). Hands-on experience working across one or more areas such as real-world evidence data, R&D clinical data, and digital marketing data. Hands-on experience handling datasets such as Komodo, RAVE, IQVIA, Truven, and Optum. Hands-on experience in building and deploying statistical models/machine learning, including segmentation and predictive modeling, hypothesis testing, multivariate statistical analysis, time series techniques, and optimization.
Proficiency in programming languages such as R, Python, SQL, and Spark. Ability to work with large data sets and present findings/insights to key stakeholders; data management using databases like SQL. Experience with any of the cloud platforms like AWS, Azure, or Google Cloud for deploying and scaling language models. Experience with any of the data visualization tools like Tableau, Power BI, QlikView, or Spotfire is good to have. Excellent analytical and problem-solving skills, with a data-driven mindset. Proficient in Excel, MS Word, PowerPoint, etc. Ability to solve complex business problems and deliver client delight. Strong writing skills to build points of view on current industry trends. Good communication, interpersonal, and presentation skills.
Additional Information: - Opportunity to work on innovative projects. - Career growth and leadership exposure.
About Our Company | Accenture
Qualification
Experience: 4-8 Years
Educational Qualification: Bachelor's or Master's degree in Statistics, Data Science, Applied Mathematics, Business Analytics, Computer Science, Information Systems, or another quantitative field.
Posted 5 days ago
3.0 - 6.0 years
9 - 13 Lacs
Bengaluru
Work from Office
About the job : - As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. - You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. - This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges. What You'll Do : - Design and develop data processing pipelines and analytics solutions using Databricks. - Architect scalable and efficient data models and storage solutions on the Databricks platform. - Collaborate with architects and other teams to migrate the current solution to Databricks. - Optimize the performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements. - Use best practices for data governance, security, and compliance on the Databricks platform. - Mentor junior engineers and provide technical guidance. - Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement. You'll Be Expected To Have : - Bachelor's or Master's degree in Computer Science, Engineering, or a related field. - 3 to 6 years of overall experience and 2+ years of experience designing and implementing data solutions on the Databricks platform. - Proficiency in programming languages such as Python, Scala, or SQL. - Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark. - Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services. - Proven track record of delivering scalable and reliable data solutions in a fast-paced environment. - Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills with the ability to work effectively in cross-functional teams. - Good to have experience with containerization technologies such as Docker and Kubernetes. - Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
Posted 5 days ago
5.0 - 7.0 years
10 - 14 Lacs
Mumbai
Work from Office
Summary : We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company. Responsibilities : Ontology Development : - Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards. - Collaborate with domain experts to capture and formalize domain knowledge into ontological structures. - Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes. Data Modeling : - Design and implement semantic and syntactic data models that adhere to ontological principles. - Create data models that are scalable, flexible, and adaptable to changing business needs. - Integrate data models with existing data infrastructure and applications. Knowledge Graph Implementation : - Design and build knowledge graphs based on ontologies and data models. - Develop algorithms and tools for knowledge graph population, enrichment, and maintenance. - Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems. Data Quality And Governance : - Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs. - Define and implement data governance processes and standards for ontology development and maintenance. 
Collaboration And Communication : - Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions. - Communicate complex technical concepts clearly and effectively to diverse audiences. Qualifications : Education : - Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Experience : - 5+ years of experience in data engineering or a related role. - Proven experience in ontology development using BFO and CCO or similar ontological frameworks. - Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL. - Proficiency in Python, SQL, and other programming languages used for data engineering. - Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus. Desired Skills : - Familiarity with machine learning and natural language processing techniques. - Experience with cloud-based data platforms (e.g., AWS, Azure, GCP). - Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon. - Strong problem-solving and analytical skills. - Excellent communication and interpersonal skills.
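The RDF knowledge graphs this role centers on reduce to subject-predicate-object triples queried by pattern matching. A tiny in-memory sketch of that model (entities, relations, and the pattern syntax here are invented for illustration; real work would use a triple store and SPARQL):

```python
# Minimal illustration of the triple model behind RDF knowledge graphs:
# facts are (subject, predicate, object) tuples, and a query is a pattern
# where None acts as a wildcard, loosely analogous to a SPARQL variable.
# All entities and relations are invented for illustration.

triples = {
    ("ACME", "is_a", "Organization"),
    ("WidgetLine", "is_a", "Process"),
    ("WidgetLine", "operated_by", "ACME"),
}

def match(pattern, store):
    """Return every triple matching the pattern; None matches anything."""
    return [
        t for t in store
        if all(p is None or p == v for p, v in zip(pattern, t))
    ]

# "Tell me everything about WidgetLine" -> subject fixed, rest wildcards
print(match(("WidgetLine", None, None), triples))
```

A SPARQL engine does the same matching over a persistent graph, with joins across multiple patterns; ontologies such as BFO/CCO then constrain which predicates may connect which classes of subject and object.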
Posted 5 days ago
6.0 - 9.0 years
9 - 13 Lacs
Kolkata
Work from Office
Experience : 6+ years as an Azure Data Engineer, including at least one end-to-end implementation in Microsoft Fabric. Responsibilities : - Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses. - Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions. - Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment. - Collaborate with stakeholders to translate business needs into actionable data solutions. - Troubleshoot and optimize existing Fabric implementations for enhanced performance. Skills : - Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized). - Design and implement scalable and efficient data pipelines using Data Factory (Data Pipeline, Dataflow Gen2, etc.) in Fabric, PySpark notebooks, Spark SQL, and Python. This includes data ingestion, data transformation, and data loading processes. - Experience ingesting data from SAP systems such as SAP ECC, S/4HANA, and SAP BW will be a plus. - Nice to have: the ability to develop dashboards or reports using tools like Power BI. Coding Fluency : - Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.
Posted 5 days ago
5.0 - 7.0 years
5 - 9 Lacs
Kolkata
Work from Office
We are looking for a skilled Data Engineer with strong hands-on experience in ClickHouse, Kubernetes, SQL, Python, and FastAPI, along with a good understanding of PostgreSQL. The ideal candidate will be responsible for building and maintaining efficient data pipelines, optimizing query performance, and developing APIs to support scalable data services. - Design, build, and maintain scalable and efficient data pipelines and ETL processes. - Develop and optimize ClickHouse databases for high-performance analytics. - Create RESTful APIs using FastAPI to expose data services. - Work with Kubernetes for container orchestration and deployment of data services. - Write complex SQL queries to extract, transform, and analyze data from PostgreSQL and ClickHouse. - Collaborate with data scientists, analysts, and backend teams to support data needs and ensure data quality. - Monitor, troubleshoot, and improve performance of data infrastructure. - Strong experience in ClickHouse - data modeling, query optimization, performance tuning. - Expertise in SQL - including complex joins, window functions, and optimization. - Proficient in Python, especially for data processing (Pandas, NumPy) and scripting. - Experience with FastAPI for creating lightweight APIs and microservices. - Hands-on experience with PostgreSQL - schema design, indexing, and performance. - Solid knowledge of Kubernetes - managing containers, deployments, and scaling. - Understanding of software engineering best practices (CI/CD, version control, testing). - Experience with cloud platforms like AWS, GCP, or Azure. - Knowledge of data warehousing and distributed data systems. - Familiarity with Docker, Helm, and monitoring tools like Prometheus/Grafana.
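The "window functions" skill called out here is the kind of query a candidate would be expected to write fluently: rank each user's events and keep only the latest per user. SQLite stands in for ClickHouse below purely so the example is self-contained; the table and columns are invented.

```python
import sqlite3

# Window-function sketch: ROW_NUMBER() partitioned by user, ordered by
# timestamp descending, keeps each user's latest event. SQLite stands in
# for ClickHouse for portability; the schema is invented for illustration.

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (user_id TEXT, ts INTEGER, amount REAL);
    INSERT INTO events VALUES
        ('u1', 1, 10.0), ('u1', 3, 30.0), ('u1', 2, 20.0),
        ('u2', 5, 50.0), ('u2', 4, 40.0);
""")

latest = conn.execute("""
    SELECT user_id, amount FROM (
        SELECT user_id, amount,
               ROW_NUMBER() OVER (
                   PARTITION BY user_id ORDER BY ts DESC
               ) AS rn
        FROM events
    ) WHERE rn = 1
    ORDER BY user_id
""").fetchall()

print(latest)  # [('u1', 30.0), ('u2', 50.0)]
```

In ClickHouse the same shape works, though `argMax(amount, ts)` with `GROUP BY user_id` is often the more idiomatic (and faster) formulation there.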
Posted 5 days ago
0.0 - 5.0 years
10 - 20 Lacs
Mumbai
Work from Office
• Data engineering - Source data from internal system sources and external (public/private) sources, using data pipelines or automations
• Data science & risk modelling - Develop risk models/algorithms for calculating client credit scores and credit loss, simulating stress scenarios, and LLM models to aid credit risk assessment
• Risk reporting & visualization - Develop risk reports and dashboards for credit risk officers to analyse and monitor risks in their portfolio
• Data governance - Organise and document data attributes, processes, and procedures
• Infrastructure management - Collaborate with internal technology teams to analyse the impact on data due to infrastructure changes
• Trade life cycle - Work closely with key departments across the organization, including Accounting, Trade Finance, Operations, and Deal Desk (amongst others), to understand data throughout the trade life cycle
Bachelor's degree in Computer Science, Finance, Mathematics, Statistics, or a related field. 3-5 years of experience in data engineering, data science, or a similar technical role. Proficiency in SQL and database management systems. Strong programming skills in Python, R, or similar languages for statistical analysis and machine learning. Experience with machine learning frameworks (scikit-learn, TensorFlow, PyTorch). Knowledge of statistical modeling, econometrics, and risk management methodologies. Experience with ETL/ELT tools and data pipeline frameworks. Familiarity with Large Language Models and Natural Language Processing techniques. Experience with data visualization tools (Tableau, Power BI, or similar). Excellent English verbal and written communication.
Key Responsibilities:
Data Engineering: Design, build, and maintain robust data pipelines to source data from internal system sources and external public/private data providers. Develop automated data ingestion processes to ensure timely and accurate data flow. Implement data quality controls and monitoring systems to
maintain data integrity.
Data Science & Risk Modelling: Develop and maintain sophisticated risk models and algorithms for calculating client credit scores. Build predictive models for credit loss estimation and expected credit loss calculations. Design and implement stress testing scenarios to evaluate portfolio resilience under adverse conditions. Create and deploy Large Language Model (LLM) solutions to enhance credit risk assessment processes.
Risk Reporting & Visualization: Develop comprehensive risk reports and interactive dashboards for credit risk officers. Create visualization tools that enable effective portfolio risk analysis and monitoring.
Data Governance: Organise and document data attributes, processes, and procedures.
Infrastructure Management: Collaborate with internal technology teams to analyse the impact on data due to infrastructure changes.
Trade Life Cycle: Work closely with key departments across the organization, including Accounting, Trade Finance, Operations, and Deal Desk (amongst others), to understand data throughout the trade life cycle.
Key Relationships: Global Credit team, risk technology, data scientists, and other internal departments (accounts, treasury, trade finance, operations, etc.); Head of Operational Risk Management.
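Credit-scoring models of the kind described above commonly convert a modelled probability of default (PD) into a points score via log-odds scaling. A minimal sketch of that mapping; the base score of 600 at 50:1 odds and 20 points-to-double-odds are illustrative scorecard parameters, not this firm's:

```python
import math

# Sketch of a points-based credit scorecard: a probability of default (PD)
# from a statistical model is mapped to a score via log-odds scaling.
# BASE_SCORE/BASE_ODDS/PDO are illustrative calibration choices only.

BASE_SCORE = 600.0
BASE_ODDS = 50.0   # good:bad odds that earn the base score
PDO = 20.0         # "points to double the odds"

def score_from_pd(pd):
    """Map a probability of default (0 < pd < 1) to a scorecard score."""
    odds = (1.0 - pd) / pd                      # good:bad odds
    factor = PDO / math.log(2.0)
    offset = BASE_SCORE - factor * math.log(BASE_ODDS)
    return offset + factor * math.log(odds)

# Lower probability of default -> higher score
for pd in (0.30, 0.10, 0.02):
    print(f"PD {pd:.0%} -> score {score_from_pd(pd):.0f}")
```

The PD itself would come from a fitted model (e.g. logistic regression or gradient boosting); this mapping only standardizes its output onto a familiar score scale.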
Posted 5 days ago
10.0 years
19 - 23 Lacs
Hyderabad
Work from Office
When our values align, there's no limit to what we can achieve. At Parexel, we all share the same goal - to improve the world's health. From clinical trials to regulatory, consulting, and market access, every clinical development solution we provide is underpinned by something special - a deep conviction in what we do. Each of us, no matter what we do at Parexel, contributes to the development of a therapy that ultimately will benefit a patient. We take our work personally, we do it with empathy and we're committed to making a difference. Senior Clinical Data Engineer provides expertise for the conduct of clinical trials, might act as an internal subject matter expert in specific areas providing technical support and expert advice, and works independently to support various activities related to electronic data, and/or the applications/systems within eClinical technologies. In addition, the Senior Clinical Data Engineer will serve as a Lead role on projects, and liaise with sponsors, Data Management Lead and other functional areas as required. General areas of responsibility also include: Aggregating applicable data from all sources and devices, managing external data, programming offline listings, trend analysis, data review, data transfers. Furthermore, responsibilities will include developing standards and libraries (e.g. SAS macros, templates or Programs) to drive efficiencies within the group. All tasks should be performed in accordance with corporate quality standards, SOPs/Work Instructions/Guidelines, ICH-GCP and/or other international regulatory requirements. Key Accountabilities: Manage Projects & Technology: Lead and implement the setup of Data Receipt Agreements with vendors by working with cross functional teams. Programming and setup of Import procedures to allow the ingestion of data either using SAS or alternative technology (e.g. “Workbench”). Programming of reconciliation checks to ensure appropriate transfer of data. 
Programming of offline listings and custom reports to allow better insights into all external data. Aggregate data across all sources. Handling missing values, reading raw data files, creating data structures, handling programming errors, accessing and managing data, appending and concatenating SAS datasets. Review of data using created outputs with the aim of providing insights to study teams and clients. Accountable for first-time quality on all deliverables. Provide input into and negotiate electronic data timelines. Ensure that timelines are adhered to by: actively assuming activities on a project as required; monitoring project resourcing and identifying changes in scope. Assist project teams in the resolution of problems encountered in the conduct of their daily work to ensure first-time quality. Provide technical support and advice to the internal team. Coordinate and lead a programming team to successful completion of a study within given timelines and budget. Manage the deployment of DM technology used for creation of offline listings (e.g. Workbench, R). Documentation: Maintain all supporting documentation for studies in accordance with SOPs/Guidelines/Work Instructions to ensure traceability and regulatory compliance. This includes the documentation of any deviations and dissemination of these to the rest of the project teams. Support Initiatives: Participate in the creation of standards, either through tools (e.g. SAS macros), libraries or processes, as required for GDO to ensure efficient, effective and optimal processes. Develop, improve and implement project-specific tools, including, but not limited to, standard project directories and subdirectories, document file names and status reports that result in improved efficiencies. Act as a mentor and/or SME: Provide relevant training to staff. Provide mentorship to staff and project teams as appropriate. Assist project teams in the resolution of problems encountered in the conduct of their daily work.
Assist in providing technical solutions to internal or external client enquiries. Maintain and expand local and international regulatory knowledge within the clinical industry. Support Business Development: Support bid defense meetings. Skills: Strong ability to lead and collaborate with global teams and work independently. Motivate/guide virtual teams across multiple time zones and cultures to work effectively. Strong interpersonal, oral and written communication skills, using concise phrasing tailored for the audience with a diplomatic approach. Swift understanding of new systems and processes, and the ability to function in an evolving technical environment. A flexible attitude with respect to work assignments and new learning; ability to adjust rapidly to changing environments. Customer focus to interact professionally and respectfully within Parexel and with all external colleagues to build rapport and trust. Commitment to first-time quality, including a methodical and accurate approach to work activities. Proficient presentation skills. Time management and prioritization skills in order to meet objectives and timelines. Proven problem-solving skills, including the capability to make appropriate decisions in ambiguous situations and the ability to conduct root cause analyses. Ownership and accountability relative to Key Accountabilities in the Job Description. Good business awareness/business development skills (including financial awareness). Ability to create, maintain and define strategies to improve the efficiency of running a clinical trial. Demonstrated commitment to refining quality processes. Demonstrated application of CRS concepts to achieve best practice and promote continuous improvement. Excellent analytical skills. Tenacity to work in an innovative environment. Ability to travel as required. Written and oral fluency in English. Knowledge and Experience: Demonstrated expertise in R programming, with substantial hands-on experience in professional settings.
Knowledge of SOPs/Guidelines/Work Instructions/System Life Cycle methodologies, ICH-GCP and any other applicable local and international regulations such as 21 CFR Part 11, and proven practical application. Experience working with at least two systems used to aggregate data within the clinical trial process (e.g. SAS, Workbench, Elluminate). Strong experience in the clinical research industry or a similar field is required. Education: Bachelor's degree (or equivalent) in a relevant science discipline is preferred, or equivalent work experience.
Posted 5 days ago
5.0 - 10.0 years
9 - 14 Lacs
Hyderabad
Work from Office
Responsibilities : - Design, develop, and maintain scalable and efficient ETL/ELT pipelines using appropriate tools and technologies. - Develop and optimize complex SQL queries for data extraction, transformation, and loading. - Implement data quality checks and validation processes to ensure data integrity. - Automate data pipelines and workflows for efficient data processing. - Integrate data from diverse sources, including databases, APIs, and flat files. - Manage and maintain data warehouses and data lakes. - Implement data modeling and schema design. - Ensure data security and compliance with relevant regulations. - Provide data support for BI and reporting tools (MicroStrategy, Power BI, Tableau, Jaspersoft, etc.). - Collaborate with BI developers to ensure data availability and accuracy. - Optimize data queries and performance for reporting applications. - Provide technical guidance and mentorship to junior data engineers. - Lead code reviews and ensure adherence to coding standards and best practices. - Contribute to the development of technical documentation and knowledge sharing. - Design and implement data solutions on cloud platforms (AWS preferred). - Utilize AWS data integration technologies such as Airflow and Glue. - Manage and optimize cloud-based data infrastructure. - Develop data processing applications using Python, Java, or Scala. - Implement data transformations and algorithms using programming languages. - Identify and resolve complex data-related issues. - Proactively seek opportunities to improve data processes and technologies. - Stay up-to-date with the latest data engineering trends and technologies. Requirements : Experience : - 5 to 10 years of experience in Business Intelligence and Data Engineering. - Proven experience in designing and implementing ETL/ELT processes. - Expert-level proficiency in SQL (advanced/complex queries).
- Strong understanding of ETL concepts and experience with ETL/Data Integration tools (Informatica, ODI, Pentaho, etc.). - Familiarity with one or more reporting tools (MicroStrategy, Power BI, Tableau, Jaspersoft, etc.). - Knowledge of Python and cloud infrastructure (AWS preferred). - Experience with AWS data integration technologies (Airflow, Glue). - Programming experience in Java or Scala. - Strong analytical and problem-solving skills. - Excellent communication and interpersonal skills. - Proven ability to take initiative and be innovative. - Ability to work independently and as part of a team. Education : - B.Tech / M.Tech / MCA (Must-Have).
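A core building block behind the ETL/ELT pipelines described above is an idempotent "upsert" load: re-running the same batch must not create duplicates. A sketch using SQLite's ON CONFLICT clause (the schema and data are invented; production pipelines would target a warehouse such as Redshift or a lake table, often via MERGE):

```python
import sqlite3

# Idempotent incremental load: each batch is applied as an upsert keyed on
# the natural key, so re-delivered rows update in place instead of
# duplicating. SQLite and the dim_customer schema are illustrative only.

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE dim_customer (
        customer_id TEXT PRIMARY KEY,
        name        TEXT,
        updated_at  INTEGER
    )
""")

def load_batch(rows):
    conn.executemany("""
        INSERT INTO dim_customer (customer_id, name, updated_at)
        VALUES (?, ?, ?)
        ON CONFLICT(customer_id) DO UPDATE SET
            name = excluded.name,
            updated_at = excluded.updated_at
    """, rows)

load_batch([("c1", "Acme", 1), ("c2", "Globex", 1)])
load_batch([("c1", "Acme Corp", 2)])  # re-delivered key updates in place

print(conn.execute(
    "SELECT customer_id, name, updated_at FROM dim_customer ORDER BY customer_id"
).fetchall())
# [('c1', 'Acme Corp', 2), ('c2', 'Globex', 1)]
```

Orchestrators such as Airflow then schedule and retry `load_batch`-style tasks safely precisely because each run is idempotent.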
Posted 5 days ago
6.0 - 8.0 years
22 - 25 Lacs
Bengaluru
Work from Office
We are looking for energetic, self-motivated, and exceptional Data Engineers to work on extraordinary enterprise products based on AI and Big Data engineering, leveraging the AWS/Databricks tech stack. He/she will work with a star team of Architects, Data Scientists/AI Specialists, Data Engineers, and Integration specialists.

Skills and Qualifications:
- 5+ years of experience in the DWH/ETL domain on the Databricks/AWS tech stack.
- 2+ years of experience building data pipelines with Databricks/PySpark/SQL.
- Experience writing and interpreting SQL queries, and designing data models and data standards.
- Experience with SQL Server, Oracle, and/or cloud databases.
- Experience with data warehouses and data marts, including Star and Snowflake models.
- Experience loading data into databases from databases and files.
- Experience analyzing and drawing design conclusions from data profiling results.
- Understanding of business processes and the relationships between systems and applications.
- Must be comfortable conversing with end-users.
- Must have the ability to manage multiple projects/clients simultaneously.
- Excellent analytical, verbal, and communication skills.

Role and Responsibilities:
- Work with business stakeholders to build data solutions that address analytical and reporting requirements.
- Work with application developers and business analysts to implement and optimize Databricks/AWS-based implementations that meet data requirements.
- Design, develop, and optimize data pipelines using Databricks (Delta Lake, Spark SQL, PySpark), AWS Glue, and Apache Airflow.
- Implement and manage ETL workflows using Databricks notebooks, PySpark, and AWS Glue for efficient data transformation.
- Develop and optimize SQL scripts, queries, views, and stored procedures to enhance data models and improve query performance on managed databases.
- Conduct root-cause analysis and resolve production problems and data issues.
- Create and maintain up-to-date documentation of the data model, data flows, and field-level mappings.
- Provide support for production problems and daily batch processing.
- Provide ongoing maintenance and optimization of database schemas, data lake structures (Delta Tables, Parquet), and views to ensure data integrity and performance.
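As an illustration of the extract-transform-load pattern these pipeline responsibilities describe, the sketch below walks through a minimal ETL step in plain Python with sqlite3. In a Databricks pipeline the same steps would be PySpark DataFrame operations over Delta tables; the table and column names here are invented for the example.

```python
import sqlite3

# Minimal ETL sketch: extract raw order rows, transform (dedupe, derive
# revenue), and load into a reporting table. sqlite3 stands in for the
# warehouse so the logic is runnable anywhere; names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (order_id INT, qty INT, unit_price REAL)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, 2, 10.0), (1, 2, 10.0), (2, 1, 5.5)],  # note the duplicate row
)

# Extract, deduplicating at the source.
rows = conn.execute(
    "SELECT DISTINCT order_id, qty, unit_price FROM raw_orders"
).fetchall()

# Transform: derive revenue per order.
transformed = [(oid, qty * price) for oid, qty, price in rows]

# Load into the reporting table.
conn.execute("CREATE TABLE fact_orders (order_id INT, revenue REAL)")
conn.executemany("INSERT INTO fact_orders VALUES (?, ?)", transformed)

total = conn.execute("SELECT SUM(revenue) FROM fact_orders").fetchone()[0]
print(total)  # 2*10.0 + 1*5.5 = 25.5
```

The dedupe-then-derive ordering matters: deduplicating before the transformation keeps the derived revenue from double-counting the repeated source row.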
Posted 5 days ago
5.0 - 10.0 years
12 - 22 Lacs
Gurugram
Remote
Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field; a master's degree is a plus.
- Proven experience as a Data Engineer or in a similar role, with a focus on ETL processes and database management.
- Proficiency in the Microsoft Azure data management suite (MSSQL, Azure Databricks, Power BI, Data Factory, Azure cloud monitoring, etc.) and Python scripting.
- Strong knowledge of SQL and experience with database management systems.
- Strong development skills in Python and PySpark.
- Experience with data warehousing solutions and data mart creation.
- Familiarity with big data technologies (e.g., Hadoop, Spark) is a plus.
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.

Preferred Qualifications:
- Databricks Certified Data Engineer Associate or Professional certification is good to have.
- Understanding of data modeling and data architecture principles.
- Experience with data governance and data security best practices.
Posted 5 days ago
5.0 - 8.0 years
6 - 10 Lacs
Pune
Work from Office
Key Responsibilities:
- Design, develop, and maintain data pipelines to support business intelligence and analytics.
- Implement ETL processes using SSIS (advanced level) to ensure efficient data transformation and movement.
- Develop and optimize data models for reporting and analytics.
- Work with Tableau (advanced level) to create insightful dashboards and visualizations.
- Write and execute complex SQL (advanced level) queries for data extraction, validation, and transformation.
- Collaborate with cross-functional teams in an Agile environment to deliver high-quality data solutions.
- Ensure data integrity, security, and compliance with best practices.
- Troubleshoot and optimize data workflows for performance improvement.

Required Skills & Qualifications:
- 5+ years of experience as a Data Engineer.
- Advanced proficiency in SSIS, Tableau, and SQL.
- Strong understanding of ETL processes and data pipeline development.
- Experience with data modeling for analytical and reporting solutions.
- Hands-on experience working in Agile development environments.
- Excellent problem-solving and troubleshooting skills.
- Ability to work independently in a remote setup.
- Strong communication and collaboration skills.
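The "complex SQL for validation" responsibility above can be sketched with a window-function query, a common pre-load data quality check. The example runs through Python's sqlite3 so it is self-contained; the table and column names are invented for illustration.

```python
import sqlite3

# Data-validation sketch: use a window function to flag rows whose
# business key appears more than once before loading downstream.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging_customers (customer_id INT, email TEXT)")
conn.executemany(
    "INSERT INTO staging_customers VALUES (?, ?)",
    [(1, "a@x.com"), (2, "b@x.com"), (1, "a2@x.com")],  # customer_id 1 repeats
)

# COUNT(*) OVER (PARTITION BY ...) attaches the per-key row count to every
# row, so the outer query can keep only the offending duplicates.
dupes = conn.execute("""
    SELECT customer_id, email, cnt FROM (
        SELECT customer_id, email,
               COUNT(*) OVER (PARTITION BY customer_id) AS cnt
        FROM staging_customers
    )
    WHERE cnt > 1
""").fetchall()
print(dupes)  # both rows for customer_id 1, each tagged with cnt = 2
```

Unlike a GROUP BY ... HAVING COUNT(*) > 1 check, the window form returns every offending row (with its other columns intact), which is usually what a remediation step needs.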
Posted 5 days ago
2.0 - 5.0 years
18 - 21 Lacs
Hyderabad
Work from Office
Overview: Annalect is currently seeking a Data Engineer to join our technology team. In this role you will build Annalect products that sit atop cloud-based data infrastructure. We are looking for people who share a passion for technology, design and development, and data, and for fusing these disciplines together to build cool things. You will work on one or more software and data products in the Annalect Engineering Team, participating in the technical architecture, design, and development of software products as well as the research and evaluation of new technical solutions.

Responsibilities:
- Design, build, test, and deploy scalable and reusable systems that handle large amounts of data.
- Collaborate with product owners and data scientists to build new data products.
- Ensure data quality and reliability.

Qualifications:
- Experience designing and managing data flows.
- Experience designing systems and APIs to integrate data into applications.
- 4+ years of Linux, Bash, Python, and SQL experience.
- 2+ years using Spark and other frameworks to process large volumes of data.
- 2+ years using Parquet, ORC, or other columnar file formats.
- 2+ years using AWS cloud services, especially services used for data processing, e.g., Glue, Dataflow, Data Factory, EMR, Dataproc, HDInsight, Athena, Redshift, BigQuery, etc.
- Passion for technology: excitement for new technology, bleeding-edge applications, and a positive attitude toward solving real-world challenges.
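Why columnar formats such as Parquet and ORC appear in qualifications like these can be sketched in plain Python: storing values column-by-column lets an aggregation touch only the field it needs instead of visiting whole records. The data below is invented for illustration.

```python
# Row-oriented layout: one dict per record, as a row store keeps them.
rows = [
    {"user": "a", "clicks": 3, "country": "IE"},
    {"user": "b", "clicks": 5, "country": "IN"},
    {"user": "c", "clicks": 2, "country": "US"},
]

# Columnar layout: one contiguous list per field, the shape Parquet and
# ORC persist on disk (plus compression and per-column statistics).
columns = {key: [r[key] for r in rows] for key in rows[0]}

# Summing "clicks" reads exactly one column in the columnar layout...
col_total = sum(columns["clicks"])
# ...but must visit every record (all fields) in the row layout.
row_total = sum(r["clicks"] for r in rows)
print(col_total, row_total)  # 10 10
```

The totals agree; the difference is how much data each layout had to touch, which is why columnar files dominate analytical scans over wide tables.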
Posted 5 days ago