2.0 - 4.0 years
4 - 6 Lacs
Hyderabad
Work from Office
What you will do In this vital role you will be part of Research's Semantic Graph Team, which is seeking a qualified individual to design, build, and maintain solutions for scientific data that drive business decisions for Research. The successful candidate will construct scalable and high-performance data engineering solutions for extensive scientific datasets and collaborate with Research partners to address their data requirements. The ideal candidate should have experience in the pharmaceutical or biotech industry, leveraging their expertise in semantics, taxonomies, and linked data principles to ensure data harmonization and interoperability. Additionally, this individual should demonstrate robust technical skills, proficiency with data engineering technologies, and a thorough understanding of data architecture and ETL processes. Roles & Responsibilities: Design, develop, and implement data pipelines, ETL/ELT processes, and data integration solutions Take ownership of data pipeline projects from inception to deployment, manage scope, timelines, and risks Develop and maintain semantic data models for biopharma scientific data, data dictionaries, and other documentation to ensure data accuracy and consistency Optimize large datasets for query performance Collaborate with global multi-functional teams including research scientists to understand data requirements and design solutions that meet business needs Implement data security and privacy measures to protect sensitive data Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions Collaborate with Data Architects, Business SMEs, Software Engineers and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions Identify and resolve complex data-related challenges Adhere to standard processes for coding, testing, and designing reusable code/components Explore new tools and technologies that will help to improve ETL platform performance Participate in sprint planning meetings and provide estimations on technical implementation Maintain comprehensive documentation of processes, systems, and solutions What we expect of you We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications and Experience: Doctorate Degree OR Master's degree with 2-4 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field OR Bachelor's degree with 4-6 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field OR Diploma with 7-9 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field Preferred Qualifications and Experience: 4+ years of experience in designing and supporting biopharma scientific research data analytics (software platforms) Functional Skills: Must-Have Skills: Proficiency in SQL and Python for data engineering, test automation frameworks (pytest), and scripting tasks Hands-on experience with data technologies and platforms such as Databricks, workflow orchestration, and performance tuning on big data processing.
Excellent problem-solving skills and the ability to work with large, complex datasets Good-to-Have Skills: A passion for tackling complex challenges in drug discovery with technology and data System administration skills, such as managing Linux and Windows servers, configuring network infrastructure, and automating tasks with shell scripting. Examples include setting up and maintaining virtual machines, troubleshooting server issues, and ensuring data security through regular updates and backups. Solid understanding of data modeling, data warehousing, and data integration concepts Solid experience using RDBMS (e.g. Oracle, MySQL, SQL Server, PostgreSQL) Knowledge of cloud data platforms (AWS preferred) Experience with data visualization tools (e.g. Dash, Plotly, Spotfire) Experience with diagramming and collaboration tools such as Miro, Lucidchart or similar tools for process mapping and brainstorming Experience writing and maintaining user documentation in Confluence Understanding of data governance frameworks, tools, and standard processes Professional Certifications: Databricks Certified Data Engineer Professional preferred Soft Skills: Excellent critical-thinking and problem-solving skills Good communication and collaboration skills Demonstrated awareness of how to function in a team setting Demonstrated presentation skills
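To illustrate the semantics and linked-data skills this role describes, here is a minimal sketch in Python using the rdflib library. The namespace, compound, and target names are invented for the example and are not taken from the posting.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

# Hypothetical namespace and terms for a small biopharma knowledge graph
EX = Namespace("http://example.org/research/")

g = Graph()
g.bind("ex", EX)

# Model a compound, its label, and a relationship to a biological target as RDF triples
g.add((EX.Aspirin, RDF.type, EX.Compound))
g.add((EX.Aspirin, RDFS.label, Literal("Aspirin")))
g.add((EX.COX1, RDF.type, EX.Target))
g.add((EX.Aspirin, EX.inhibits, EX.COX1))

# Serialize to Turtle so the model can be reviewed or loaded into a triple store
print(g.serialize(format="turtle"))
```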
Posted 1 month ago
6.0 - 10.0 years
2 - 6 Lacs
Pune
Work from Office
Req ID: 323909 We are currently seeking a Data Ingest Engineer to join our team in Pune, Maharashtra (IN-MH), India (IN). Job Duties: The Applications Development Technology Lead Analyst is a senior level position responsible for establishing and implementing new or revised application systems and programs in coordination with the Technology team. This is a position within the Ingestion team of the DRIFT data ecosystem. The focus is on ingesting data in a timely, complete, and comprehensive fashion while using the latest technology available to Citi. The ability to leverage new and creative methods for repeatable data ingestion from a variety of data sources, while always asking "is this the best way to solve this problem" and "am I providing the highest quality data to my downstream partners", is central to the role. Responsibilities: • Partner with multiple management teams to ensure appropriate integration of functions to meet goals as well as identify and define necessary system enhancements to deploy new products and process improvements • Resolve a variety of high-impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards • Provide expertise in area and advanced knowledge of applications programming and ensure application design adheres to the overall architecture blueprint • Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation • Develop comprehensive knowledge of how areas of business, such as architecture and infrastructure, integrate to accomplish business goals • Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions • Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary • Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency. Minimum Skills Required: • 6-10 years of relevant experience in an Apps Development or systems analysis role • Extensive experience in systems analysis and programming of software applications • Application development using Java, Scala, Spark • Familiarity with event-driven applications and streaming data • Experience with Confluent Kafka, HDFS, Hive, structured and unstructured database systems (SQL and NoSQL) • Experience with various schema and data types -> JSON, AVRO, Parquet, etc. • Experience with various ELT methodologies and formats -> JDBC, ODBC, API, webhook, SFTP, etc. • Experience working within Agile and version control tool sets (JIRA, Bitbucket, Git, etc.) • Ability to adjust priorities quickly as circumstances dictate • Demonstrated leadership and project management skills • Consistently demonstrates clear and concise written and verbal communication
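The posting centers on repeatable, streaming-friendly ingestion from sources such as Kafka. As a rough illustration only: the posting names Java, Scala, and Spark, whereas this sketch uses PySpark, and the broker address, topic, schema, and landing paths are all placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("ingest-sketch").getOrCreate()

# Hypothetical schema for an incoming JSON payload
schema = StructType([
    StructField("record_id", StringType()),
    StructField("source_system", StringType()),
    StructField("amount", DoubleType()),
])

# Read a Kafka topic as a streaming DataFrame (broker and topic names are placeholders)
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "ingest-topic")
       .load())

# Parse the binary Kafka value into typed columns
parsed = (raw
          .select(from_json(col("value").cast("string"), schema).alias("payload"))
          .select("payload.*"))

# Land the parsed records as Parquet; the checkpoint enables recovery after failures
query = (parsed.writeStream
         .format("parquet")
         .option("path", "/data/landing/ingest-topic")
         .option("checkpointLocation", "/data/checkpoints/ingest-topic")
         .start())

# query.awaitTermination()  # uncomment when running as a long-lived job
```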
Posted 1 month ago
10.0 - 15.0 years
14 - 18 Lacs
Bengaluru
Work from Office
Req ID: 323226 We are currently seeking a Digital Solution Architect Sr. Advisor to join our team in Bengaluru, Karnataka (IN-KA), India (IN). Key Responsibilities: Design data platform architectures (data lakes, lakehouses, DWH) using modern cloud-native tools (e.g., Databricks, Snowflake, BigQuery, Synapse, Redshift). Architect data ingestion, transformation, and consumption pipelines using batch and streaming methods. Enable real-time analytics and machine learning through scalable and modular data frameworks. Define data governance models, metadata management, lineage tracking, and access controls. Collaborate with AI/ML, application, and business teams to identify high-impact use cases and optimize data usage. Lead modernization initiatives from legacy data warehouses to cloud-native and distributed architectures. Enforce data quality and observability practices for mission-critical workloads. Required Skills: 10+ years in data architecture, with strong grounding in modern data platforms and pipelines. Deep knowledge of SQL/NoSQL, Spark, Delta Lake, Kafka, ETL/ELT frameworks. Hands-on experience with cloud data platforms (AWS, Azure, GCP). Understanding of data privacy, security, lineage, and compliance (GDPR, HIPAA, etc.). Experience implementing data mesh/data fabric concepts is a plus. Expertise in technical solutions writing and presenting using tools such as Word, PowerPoint, Excel, Visio etc. High level of executive presence to be able to articulate the solutions to CXO level executives. Preferred Qualifications: Certifications in Snowflake, Databricks, or cloud-native data platforms. Exposure to AI/ML data pipelines, MLOps, and real-time data applications. Familiarity with data visualization and BI tools (Power BI, Tableau, Looker, etc.).
Posted 1 month ago
4.0 - 7.0 years
10 - 20 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Must-Have Qualifications: AWS Expertise: Strong hands-on experience with AWS data services including Glue, Redshift, Athena, S3, Lake Formation, Kinesis, Lambda, Step Functions, EMR, and CloudWatch. ETL/ELT Engineering: Deep proficiency in designing robust ETL/ELT pipelines with AWS Glue (PySpark/Scala), Python, dbt, or other automation frameworks. Data Modeling: Advanced knowledge of dimensional (Star/Snowflake) and normalised data modeling, optimised for Redshift and S3-based lakehouses. Programming Skills: Proficient in Python, SQL, and PySpark, with automation and scripting skills for data workflows. Architecture Leadership: Demonstrated experience leading large-scale AWS data engineering projects across teams and domains. Pre-sales & Consulting: Proven experience working with clients, responding to technical RFPs, and designing cloud-native data solutions. Advanced PySpark Expertise: Deep hands-on experience in writing optimized PySpark code for distributed data processing, including transformation pipelines using DataFrames, RDDs, and Spark SQL, with a strong grasp of lazy evaluation, the catalyst optimizer, and the Tungsten execution engine. Performance Tuning & Partitioning: Proven ability to debug and optimize Spark jobs through custom partitioning strategies, broadcast joins, caching, and checkpointing, with proficiency in tuning executor memory, shuffle configurations, and leveraging the Spark UI for performance diagnostics in large-scale data workloads (>TB scale).
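As a sketch of the broadcast-join, partitioning, and caching techniques listed above; the bucket paths, column names, and partition count are illustrative, not taken from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

# Hypothetical inputs: a large fact table and a small dimension lookup
fact = spark.read.parquet("s3://example-bucket/events/")        # large, >TB scale
dim = spark.read.parquet("s3://example-bucket/dim_country/")    # small lookup table

# Broadcast the small side so the join avoids a full shuffle of the fact table
joined = fact.join(F.broadcast(dim), on="country_code", how="left")

# Repartition on the aggregation key before the wide transformation to reduce skew
agg = (joined
       .repartition(200, "country_code")
       .groupBy("country_code")
       .agg(F.count("*").alias("events"),
            F.avg("latency_ms").alias("avg_latency_ms")))

# Cache only because the result is reused by more than one downstream action
agg.cache()
agg.write.mode("overwrite").parquet("s3://example-bucket/marts/events_by_country/")
print(agg.count())
```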
Posted 1 month ago
4.0 - 6.0 years
15 - 25 Lacs
Noida
Work from Office
We are looking for a highly experienced Senior Data Engineer with deep expertise in Snowflake to lead efforts in optimizing the performance of our data warehouse to enable faster, more reliable reporting. You will be responsible for improving query efficiency, data pipeline performance, and overall reporting speed by tuning Snowflake environments, optimizing data models, and collaborating with Application development teams. Roles and Responsibilities Analyze and optimize Snowflake data warehouse performance to support high-volume, complex reporting workloads. Identify bottlenecks in SQL queries, ETL/ELT pipelines, and data models impacting report generation times. Implement performance tuning strategies including clustering keys, materialized views, result caching, micro-partitioning, and query optimization. Collaborate with BI teams and business analysts to understand reporting requirements and translate them into performant data solutions. Design and maintain efficient data models (star schema, snowflake schema) tailored for fast analytical querying. Develop and enhance ETL/ELT processes ensuring minimal latency and high throughput using Snowflake’s native features. Monitor system performance and proactively recommend architectural improvements and capacity planning. Establish best practices for data ingestion, transformation, and storage aimed at improving report delivery times. Experience with Unistore will be an added advantage
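A minimal sketch of the kind of Snowflake tuning work described above, using the snowflake-connector-python package. The account, credentials, table, and column names are placeholders, and the materialized view assumes an edition that supports them; real credentials would come from a secrets manager.

```python
import snowflake.connector

# Placeholder connection parameters for illustration only
conn = snowflake.connector.connect(
    account="xy12345", user="etl_user", password="***",
    warehouse="REPORTING_WH", database="ANALYTICS", schema="MARTS",
)
cur = conn.cursor()

# Add a clustering key aligned with the most common report filters (hypothetical table/columns)
cur.execute("ALTER TABLE fact_sales CLUSTER BY (sale_date, region_id)")

# Precompute an expensive aggregate as a materialized view for dashboard queries
cur.execute("""
    CREATE OR REPLACE MATERIALIZED VIEW mv_daily_sales AS
    SELECT sale_date, region_id, SUM(amount) AS total_amount
    FROM fact_sales
    GROUP BY sale_date, region_id
""")

# Inspect the slowest recent queries to pick the next tuning target
cur.execute("""
    SELECT query_text, total_elapsed_time
    FROM table(information_schema.query_history())
    ORDER BY total_elapsed_time DESC
    LIMIT 10
""")
print(cur.fetchall())

cur.close()
conn.close()
```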
Posted 1 month ago
4.0 - 6.0 years
20 - 25 Lacs
Noida
Work from Office
Technical Requirements SQL (Advanced level): Strong command of complex SQL logic, including window functions, CTEs, pivot/unpivot, and proficiency in stored procedure/SQL script development. Experience writing maintainable SQL for transformations. Python for ETL: Ability to write modular and reusable ETL logic using Python. Familiarity with JSON manipulation and API consumption. ETL Pipeline Development: Experienced in developing ETL/ELT pipelines, data profiling, validation, quality/health checks, error handling, logging and notifications, etc. Nice-to-Have Skills: Experience with AWS Redshift, Databricks, and Yellowbrick; knowledge of CI/CD practices for data workflows. Roles and Responsibilities Leverage expertise in AWS Redshift, PostgreSQL, Databricks, and Yellowbrick to design and implement scalable data solutions. Collaborate with data analysts and architects to develop and test robust ETL pipelines using SQL and Python in Databricks and Yellowbrick. Perform data quality checks and develop and maintain data validation frameworks to ensure high data quality and reliability. Optimize database queries to enhance performance and ensure cost-effective data processing.
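To make the "modular Python ETL plus advanced SQL" expectation concrete, here is a small hedged sketch; the API endpoint, field names, and staging table are invented for illustration and are not part of the posting.

```python
import json

import requests

# Hypothetical REST endpoint used purely for illustration
API_URL = "https://api.example.com/v1/orders"


def extract(since: str) -> list[dict]:
    """Pull raw JSON records from the source API."""
    resp = requests.get(API_URL, params={"updated_since": since}, timeout=30)
    resp.raise_for_status()
    return resp.json()["results"]  # assumed response shape


def transform(records: list[dict]) -> list[dict]:
    """Keep only the fields the warehouse model needs and normalise types."""
    return [
        {"order_id": r["id"],
         "customer_id": r["customer"]["id"],
         "amount": float(r["amount"])}
        for r in records
    ]


# A CTE plus a window function of the kind the posting describes:
# the latest order per customer, ranked by order date.
LATEST_ORDER_SQL = """
WITH ranked AS (
    SELECT order_id, customer_id, amount,
           ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_date DESC) AS rn
    FROM stg_orders
)
SELECT order_id, customer_id, amount
FROM ranked
WHERE rn = 1
"""

if __name__ == "__main__":
    rows = transform(extract("2024-01-01"))
    print(json.dumps(rows[:3], indent=2))
```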
Posted 1 month ago
9.0 - 14.0 years
10 - 20 Lacs
Chennai, Bengaluru, Mumbai (All Areas)
Hybrid
JD: Snowflake Implementer: Designing, implementing, and managing Snowflake data warehouse solutions, ensuring data integrity, and optimizing performance for clients or internal teams. Strong SQL skills: Expertise in writing, optimizing, and troubleshooting SQL queries. Experience with data warehousing: Understanding of data warehousing concepts, principles, and best practices. Knowledge of ETL/ELT technologies: Experience with tools and techniques for data extraction, transformation, and loading. Experience with data modeling: Ability to design and implement data models that meet business requirements. Familiarity with cloud platforms: Experience with cloud platforms like AWS, Azure, or GCP (depending on the specific Snowflake environment). Problem-solving and analytical skills: Ability to identify, diagnose, and resolve technical issues. Communication and collaboration skills: Ability to work effectively with cross-functional teams. Experience with Snowflake (preferred): Prior experience with Snowflake is highly desirable. Certifications (preferred): Snowflake certifications (e.g., Snowflake Data Engineer, Snowflake Database Administrator) can be a plus.
Posted 1 month ago
7.0 - 9.0 years
19 - 20 Lacs
Bengaluru
Hybrid
Hi all, We are hiring for the role Cloud Data Engineer. Experience: 7 - 9 years. Location: Bangalore. Notice Period: Immediate - 15 Days. Skills: Overall 7 to 9 years of experience in cloud data and analytics platforms such as AWS, Azure, or GCP, including:
• 3+ years of experience with Azure cloud analytical tools (a must)
• 5+ years of experience working with data & analytics concepts such as SQL, ETL, ELT, reporting and report building, data visualization, data lineage, data importing & exporting, and data warehousing
• 3+ years of experience working with general IT concepts such as integrations, encryption, authentication & authorization, batch processing, real-time processing, CI/CD, automation
• Advanced knowledge of cloud technologies and services, specifically around Azure data analytics tools: Azure Functions (Compute), Azure Blob Storage (Storage), Azure Cosmos DB (Databases), Azure Synapse Analytics (Databases), Azure Data Factory (Analytics), Azure Synapse Serverless SQL Pools (Analytics), Azure Event Hubs (Analytics - real-time data)
• Strong coding skills in languages such as SQL, Python, and PySpark
• Experience in data streaming technologies such as Kafka or Azure Event Hubs
• Experience in handling unstructured streaming data is highly desired
• Knowledge of Business Intelligence dimensional modelling, star schemas, and slowly changing dimensions
• Broad understanding of data engineering methodologies and tools, including data warehousing, DevOps/DataOps, data ingestion, ELT/ETL, and data visualization tools
• Knowledge of database management systems, data modelling, and data warehousing best practices
• Experience in software development on a team using Agile methodology
• Knowledge of data governance and security practices
If you are interested, drop your resume at mojesh.p@acesoftlabs.com Call: 9701971793
Posted 1 month ago
1.0 - 4.0 years
4 - 7 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Locations: Pune, Bangalore, Hyderabad, Indore. Contract duration: 6 months. Responsibilities:
- Must have experience working as a Snowflake Admin/Developer in Data Warehouse, ETL, and BI projects.
- Must have prior experience with end-to-end implementation of Snowflake cloud data warehouse and end-to-end data warehouse implementations on-premise, preferably on Oracle/SQL Server.
- Expertise in Snowflake data modelling, ELT using Snowflake SQL, implementing complex stored procedures, and standard DWH and ETL concepts.
- Expertise in Snowflake advanced concepts such as setting up resource monitors, RBAC controls, virtual warehouse sizing, query performance tuning, zero-copy clone, and time travel, and understanding how to use these features (see the sketch after this list).
- Expertise in deploying Snowflake features such as data sharing.
- Hands-on experience with Snowflake utilities, SnowSQL, Snowpipe, and big data modelling techniques using Python.
- Experience in data migration from RDBMS to Snowflake cloud data warehouse.
- Deep understanding of relational as well as NoSQL data stores, methods and approaches (star and snowflake, dimensional modelling).
- Experience with data security, data access controls, and design.
- Experience with AWS or Azure data storage and management technologies such as S3 and Blob.
- Build processes supporting data transformation, data structures, metadata, dependency, and workload management.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning and troubleshooting.
- Provide resolution to an extensive range of complicated data pipeline related problems, proactively and as issues surface.
- Must have experience of Agile development methodologies.
Good to have:
- CI/CD in Talend using Jenkins and Nexus.
- TAC configuration with LDAP, job servers, log servers, database.
- Job conductor, scheduler and monitoring.
- GIT repository, creating users & roles and providing access to them.
- Agile methodology and 24/7 Admin and Platform support.
- Estimation of effort based on the requirement.
- Strong written communication skills; effective and persuasive in both written and oral communication.
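The Snowflake admin features named above (resource monitors, RBAC, zero-copy clone, time travel) can all be exercised as SQL statements. A rough sketch follows; the account details, warehouse, database, role, and table names are placeholders chosen for illustration.

```python
import snowflake.connector

# Placeholder credentials; a real setup would use a secrets manager and an admin role
conn = snowflake.connector.connect(account="xy12345", user="admin_user", password="***")
cur = conn.cursor()

admin_statements = [
    # Resource monitor that suspends the warehouse when its monthly credit quota is hit
    """CREATE OR REPLACE RESOURCE MONITOR rm_reporting
       WITH CREDIT_QUOTA = 100 FREQUENCY = MONTHLY START_TIMESTAMP = IMMEDIATELY
       TRIGGERS ON 100 PERCENT DO SUSPEND""",
    "ALTER WAREHOUSE reporting_wh SET RESOURCE_MONITOR = rm_reporting",

    # Role-based access control: grant read access on a schema to an analyst role
    "GRANT USAGE ON DATABASE analytics TO ROLE analyst_role",
    "GRANT SELECT ON ALL TABLES IN SCHEMA analytics.marts TO ROLE analyst_role",

    # Zero-copy clone for a test environment, and time travel to recover yesterday's state
    "CREATE TABLE analytics.marts.fact_sales_test CLONE analytics.marts.fact_sales",
    "CREATE TABLE analytics.marts.fact_sales_restored CLONE analytics.marts.fact_sales AT (OFFSET => -86400)",
]

for stmt in admin_statements:
    cur.execute(stmt)

cur.close()
conn.close()
```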
Posted 1 month ago
1.0 - 4.0 years
2 - 6 Lacs
Mumbai, Pune, Chennai
Work from Office
Graph data Engineer required for a complex Supplier Chain Project. Key required Skills Graph data modelling (Experience with graph data models (LPG, RDF) and graph language (Cypher), exposure to various graph data modelling techniques) Experience with Neo4j Aura, optimizing complex queries. Experience with GCP stacks like BigQuery, GCS, Dataproc. Experience in PySpark, SparkSQL is desirable. Experience in exposing graph data to visualisation tools such as NeoDash, Tableau and Power BI The Expertise You Have: Bachelor's or Master's degree in a technology related field (e.g. Engineering, Computer Science, etc.). Demonstrable experience in implementing data solutions in the Graph DB space. Hands-on experience with graph databases (Neo4j (preferred), or any other). Experience tuning graph databases. Understanding of graph data model paradigms (LPG, RDF) and graph language; hands-on experience with Cypher is required. Solid understanding of graph data modelling, graph schema development, graph data design. Relational databases experience; hands-on SQL experience is required. Desirable (Optional) skills: Data ingestion technologies (ETL/ELT), Messaging/Streaming Technologies (GCP data fusion, Kinesis/Kafka), API and in-memory technologies. Understanding of developing highly scalable distributed systems using open-source technologies. Experience in Supply Chain data is desirable but not essential. Location: Pune, Mumbai, Chennai, Bangalore, Hyderabad
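As an illustration of the Cypher-based supplier-chain modelling this role involves, here is a small sketch assuming the Neo4j Python driver 5.x against an Aura instance; the URI, credentials, and supplier/part names are placeholders.

```python
from neo4j import GraphDatabase

# Placeholder Aura URI and credentials
driver = GraphDatabase.driver("neo4j+s://example.databases.neo4j.io", auth=("neo4j", "***"))


def load_supplier(tx, supplier: str, part: str, lead_time_days: int):
    # MERGE keeps the load idempotent when the pipeline is re-run
    tx.run(
        """
        MERGE (s:Supplier {name: $supplier})
        MERGE (p:Part {sku: $part})
        MERGE (s)-[r:SUPPLIES]->(p)
        SET r.lead_time_days = $lead_time_days
        """,
        supplier=supplier, part=part, lead_time_days=lead_time_days,
    )


with driver.session() as session:
    session.execute_write(load_supplier, "Acme Metals", "BRK-100", 14)

    # Simple traversal: which parts depend on a given supplier?
    result = session.run(
        "MATCH (s:Supplier {name: $name})-[:SUPPLIES]->(p:Part) RETURN p.sku AS sku",
        name="Acme Metals",
    )
    print([record["sku"] for record in result])

driver.close()
```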
Posted 1 month ago
6.0 - 11.0 years
3 - 7 Lacs
Bengaluru
Work from Office
We're looking for an experienced Senior Data Engineer to lead the design and development of scalable data solutions at our company. The ideal candidate will have extensive hands-on experience in data warehousing, ETL/ELT architecture, and cloud platforms like AWS, Azure, or GCP. You will work closely with both technical and business teams, mentoring engineers while driving data quality, security, and performance optimization. Responsibilities: Lead the design of data warehouses, lakes, and ETL workflows. Collaborate with teams to gather requirements and build scalable solutions. Ensure data governance, security, and optimal performance of systems. Mentor junior engineers and drive end-to-end project delivery. Requirements: 6+ years of experience in data engineering, including at least 2 full-cycle data warehouse projects. Strong skills in SQL, ETL tools (e.g., Pentaho, dbt), and cloud platforms. Expertise in big data tools (e.g., Apache Spark, Kafka). Excellent communication skills and leadership abilities. Preferred: Experience with workflow orchestration tools (e.g., Airflow), real-time data, and DataOps practices.
Posted 1 month ago
8.0 - 13.0 years
5 - 8 Lacs
Mumbai
Work from Office
Role Overview : Seeking an experienced Apache Airflow specialist to design and manage data orchestration pipelines for batch/streaming workflows in a Cloudera environment. Key Responsibilities : Design, schedule, and monitor DAGs for ETL/ELT pipelines Integrate Airflow with Cloudera services and external APIs Implement retries, alerts, logging, and failure recovery Collaborate with data engineers and DevOps teams Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Skills Required : Experience 3–8 years Expertise in Airflow 2.x, Python, Bash Knowledge of CI/CD for Airflow DAGs Proven experience with Cloudera CDP, Spark/Hive-based data pipelines Integration with Kafka, REST APIs, databases
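A minimal example of the DAG design, retries, and alerting described above, assuming Airflow 2.4+ (where the `schedule` argument replaces `schedule_interval`); the DAG id, schedule, commands, and alert address are illustrative only, and email alerting assumes SMTP is configured.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator

# Placeholder defaults: retries with a delay, plus email alerts on failure
default_args = {
    "owner": "data-eng",
    "retries": 3,
    "retry_delay": timedelta(minutes=10),
    "email": ["data-alerts@example.com"],
    "email_on_failure": True,
}


def validate_load(**context):
    # Hypothetical check; a real task might compare row counts against the source extract
    print("validating load for", context["ds"])


with DAG(
    dag_id="daily_sales_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",
    catchup=False,
    default_args=default_args,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo 'pull from source'")
    transform = BashOperator(task_id="transform", bash_command="echo 'spark-submit transform job'")
    validate = PythonOperator(task_id="validate", python_callable=validate_load)

    extract >> transform >> validate
```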
Posted 1 month ago
4.0 - 7.0 years
7 - 11 Lacs
Bengaluru
Work from Office
With 4-9 years of experience. Build ETL/ELT pipelines with Azure Data Factory, Azure Databricks (Spark), Azure Data Lake, Azure SQL Database, and Synapse. Minimum 3 years of hands-on development experience. Solid knowledge of data modelling, relational databases, and BI and data warehousing. Demonstrated expertise in SQL. Good to have: experience with CI/CD, cloud architectures, NoSQL databases, Azure Analysis Services, and Power BI. Working knowledge or experience in Agile and DevOps. Good written and verbal communication skills in English. Ability to work with geographically diverse teams via collaborative technologies.
Posted 1 month ago
3.0 - 7.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Skills required: Big Data workflows (ETL/ELT), hands-on Python, hands-on SQL, any cloud (GCP BigQuery preferred), Airflow (good knowledge of Airflow features, operators, scheduling, etc.). NOTE: Candidates will have a coding test (Python and SQL) as part of the interview process. This will be conducted through coders-pad; the panel will set it at run-time.
Posted 1 month ago
5.0 - 7.0 years
5 - 9 Lacs
Chennai
Work from Office
Design, develop, and maintain scalable data pipelines and systems to support the collection, integration, and analysis of healthcare and enterprise data. The primary responsibilities of this role include designing and implementing efficient data pipelines, architecting robust data models, and adhering to data management best practices. In this position, you will play a crucial part in transforming raw data into meaningful insights, through development of semantic data layers, enabling data-driven decision-making across the organization. The ideal candidate will possess strong technical skills, a keen understanding of data architecture, and a passion for optimizing data processes. What you will do Design and implement scalable and efficient data pipelines to acquire, transform, and integrate data from various sources, such as electronic health records (EHR), medical devices, claims data, and back-office enterprise data Develop data ingestion processes, including data extraction, cleansing, and validation, ensuring data quality and integrity throughout the pipeline Collaborate with cross-functional teams, including subject matter experts, analysts, and engineers, to define data requirements and ensure data pipelines meet the needs of data-driven initiatives Design and implement data integration strategies to merge disparate datasets, enabling comprehensive and holistic analysis Implement data governance practices and ensure compliance with healthcare data standards, regulations (e.g., HIPAA), and security protocols Monitor and troubleshoot pipeline and data model performance, identifying and addressing bottlenecks, and ensuring optimal system performance and data availability Design and implement data models that align with domain requirements, ensuring efficient data storage, retrieval, and delivery Apply data modeling best practices and standards to ensure consistency, scalability, and reusability of data models Implement data quality checks and validation processes to ensure the accuracy, completeness, and consistency of healthcare data Develop and enforce data governance policies and procedures, including data lineage, architecture, and metadata management Collaborate with stakeholders to define data quality metrics and establish data quality improvement initiatives Document data engineering processes, methodologies, and data flows for knowledge sharing and future reference Stay up to date with emerging technologies, industry trends, and healthcare data standards to drive innovation and ensure compliance Who you are 4+ years strong programming skills in object-oriented languages such as Python Proficiency in SQL Hands-on experience with data integration tools, ETL/ELT frameworks, and data warehousing concepts Hands-on experience with data modeling and schema design, including concepts such as star schema, snowflake schema and data normalization Familiarity with healthcare data standards (e.g., HL7, FHIR), electronic health records (EHR), medical coding systems (e.g., ICD-10, SNOMED CT), and relevant healthcare regulations (e.g., HIPAA) Hands-on experience with big data processing frameworks such as Apache Hadoop, Apache Spark, etc.
Working knowledge of cloud computing platforms (e.g., AWS, Azure, GCP) and related services (e.g., DMS, S3, Redshift, BigQuery) Experience integrating heterogeneous data sources, aligning data models and mapping between different data schemas Understanding of metadata management principles and tools for capturing, storing, and managing metadata associated with data models and semantic data layers Ability to track the flow of data and its transformations across data models, ensuring transparency and traceability Understanding of data governance principles, data quality management, and data security best practices Strong problem-solving and analytical skills with the ability to work with complex datasets and data integration challenges Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams Education Bachelor's or Master's degree in computer science, information systems, or a related field. Proven experience as a Data Engineer or similar role with a focus on healthcare data. Soft Skills: Attention to detail. Proficient in English communication, both written and verbal. Dedicated self-starter with excellent people skills. Quick learner and a go-getter. Effective time and project management. Analytical thinker and a great team player. Strong leadership, interpersonal & problem-solving skills
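To illustrate the data quality and validation work this role calls for, here is a small hedged sketch using pandas; the claims extract path, column names, and the simplified ICD-10 shape check are assumptions made for the example, not requirements from the posting.

```python
import pandas as pd

# Hypothetical claims extract; column names follow the assumed staging layout
claims = pd.read_parquet("s3://example-bucket/staging/claims/2024-06-01/")


def run_quality_checks(df: pd.DataFrame) -> dict:
    """Basic completeness and consistency checks before loading the fact table."""
    return {
        "row_count": len(df),
        "null_patient_ids": int(df["patient_id"].isna().sum()),
        "negative_amounts": int((df["claim_amount"] < 0).sum()),
        "duplicate_claims": int(df.duplicated(subset=["claim_id"]).sum()),
        # Simplified shape check: an ICD-10 code starts with a letter followed by two digits
        "invalid_icd10": int((~df["diagnosis_code"].str.match(r"^[A-Z][0-9]{2}", na=False)).sum()),
    }


report = run_quality_checks(claims)
failed = {k: v for k, v in report.items() if k != "row_count" and v > 0}
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")
print("Quality checks passed:", report)
```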
Posted 1 month ago
4.0 - 8.0 years
9 - 12 Lacs
Chennai
Work from Office
Job Title: Data Engineer Location: Chennai (Hybrid) Summary Design, develop, and maintain scalable data pipelines and systems to support the collection, integration, and analysis of healthcare and enterprise data. The primary responsibilities of this role include designing and implementing efficient data pipelines, architecting robust data models, and adhering to data management best practices. In this position, you will play a crucial part in transforming raw data into meaningful insights, through development of semantic data layers, enabling data-driven decision-making across the organization. The ideal candidate will possess strong technical skills, a keen understanding of data architecture, and a passion for optimizing data processes. Accountability Design and implement scalable and efficient data pipelines to acquire, transform, and integrate data from various sources, such as electronic health records (EHR), medical devices, claims data, and back-office enterprise data Develop data ingestion processes, including data extraction, cleansing, and validation, ensuring data quality and integrity throughout the pipeline Collaborate with cross-functional teams, including subject matter experts, analysts, and engineers, to define data requirements and ensure data pipelines meet the needs of data-driven initiatives Design and implement data integration strategies to merge disparate datasets, enabling comprehensive and holistic analysis Implement data governance practices and ensure compliance with healthcare data standards, regulations (e.g., HIPAA), and security protocols Monitor and troubleshoot pipeline and data model performance, identifying and addressing bottlenecks, and ensuring optimal system performance and data availability Design and implement data models that align with domain requirements, ensuring efficient data storage, retrieval, and delivery Apply data modeling best practices and standards to ensure consistency, scalability, and reusability of data models Implement data quality checks and validation processes to ensure the accuracy, completeness, and consistency of healthcare data Develop and enforce data governance policies and procedures, including data lineage, architecture, and metadata management Collaborate with stakeholders to define data quality metrics and establish data quality improvement initiatives Document data engineering processes, methodologies, and data flows for knowledge sharing and future reference Stay up to date with emerging technologies, industry trends, and healthcare data standards to drive innovation and ensure compliance Skills 4+ years strong programming skills in object-oriented languages such as Python Proficiency in SQL Hands-on experience with data integration tools, ETL/ELT frameworks, and data warehousing concepts Hands-on experience with data modeling and schema design, including concepts such as star schema, snowflake schema and data normalization Familiarity with healthcare data standards (e.g., HL7, FHIR), electronic health records (EHR), medical coding systems (e.g., ICD-10, SNOMED CT), and relevant healthcare regulations (e.g., HIPAA) Hands-on experience with big data processing frameworks such as Apache Hadoop, Apache Spark, etc.
Working knowledge of cloud computing platforms (e.g., AWS, Azure, GCP) and related services (e.g., DMS, S3, Redshift, BigQuery) Experience integrating heterogeneous data sources, aligning data models and mapping between different data schemas Understanding of metadata management principles and tools for capturing, storing, and managing metadata associated with data models and semantic data layers Ability to track the flow of data and its transformations across data models, ensuring transparency and traceability Understanding of data governance principles, data quality management, and data security best practices Strong problem-solving and analytical skills with the ability to work with complex datasets and data integration challenges Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams Education Bachelor's or Master's degree in computer science, information systems, or a related field. Proven experience as a Data Engineer or similar role with a focus on healthcare data.
Posted 1 month ago
4.0 - 8.0 years
15 - 22 Lacs
Hyderabad
Work from Office
Senior Data Engineer Cloud & Modern Data Architectures Role Overview: We are looking for a Senior Data Engineer with expertise in ETL/ELT, Data Engineering, Data Warehousing, Data Lakes, Data Mesh, and Data Fabric architectures . The ideal candidate should have hands-on experience in at least one or two cloud data platforms (AWS, GCP, Azure, Snowflake, or Databricks) and a strong foundation in building PoCs, mentoring freshers, and contributing to accelerators and IPs. Must-Have: 5-8 years of experience in Data Engineering & Cloud Data Services . Hands-on with AWS (Redshift, Glue), GCP (BigQuery, Dataflow), Azure (Synapse, Data Factory), Snowflake, Databricks . Strong SQL, Python, or Scala skills. Knowledge of Data Mesh & Data Fabric principles . Nice-to-Have: Exposure to MLOps, AI integrations, and Terraform/Kubernetes for DataOps . Contributions to open-source, accelerators, or internal data frameworks Interested candidates share cv to dikshith.nalapatla@motivitylabs.com with below mention details for quick response. Total Experience: Relevant DE Experience : SQL Experience : SQL Rating out of 5 : Python Experience: Do you have experience in any 2 clouds(yes/no): Mention the cloud experience you have(Aws, Azure,GCP): Current Role / Skillset: Current CTC: Fixed: Payroll Company(Name): Client Company(Name): Expected CTC: Official Notice Period: (if it negotiable kindly mention up to how many days) Serving Notice (Yes / No): CTC of offer in hand: Last Working Day (in current organization): Location of the Offer in hand: ************* 5 DAYS WORK FROM OFFICE ****************
Posted 1 month ago
3.0 - 5.0 years
5 - 7 Lacs
Bengaluru
Work from Office
Responsibilities The ideal candidate will be responsible for the entire SDLC and should have excellent communication skills and experience working directly with the business. They need to be self-sufficient and comfortable with building internal networks, both with the business and other technology teams. The ideal candidate will be expected to own changes all the way from inception to deployment in production. In addition to implementing new functionality, they need to use their experience in TDD and best practices to identify process gaps or areas for improvement with a constant focus on scalability and stability. Candidate should be self-motivated, results oriented and able to multi-task across different teams and applications. Further, the candidate needs to work effectively with remotely dispersed teams as the role will require constant communication across various regional teams. Technical and Professional Requirements: Expertise in workflow enhancement and designing macros. Able to integrate Alteryx with various other tools and applications as per business requirements. Knowledge in maintaining user roles and access provisions in Alteryx gallery Knowledge in version control systems like git Familiarity with multiple data sources compatible with Alteryx ranging from spreadsheets and flat files to databases. Advanced development and troubleshooting skills Documentation of Training and Support Strong understanding of SDLC methodologies with a track record of high-quality deliverables Excellent communication skills and experience working with global teams Strong knowledge of database query tools (SQL). Good understanding of data warehouse architecture Preferred Skills: Technology->DataAnalytics->Alteryx Additional Responsibilities: Strong working experience in Agile environment - Experience working and understanding of ETL / ELT, Data load process - Knowledge on Cloud Infrastructure and data source integrations - Knowledge on relational Databases - Self-motivated, be able to work independently as well as being a team player - Excellent analytical and problem-solving skills - Ability to handle and respond to multiple stakeholders and queries - Ability to prioritize tasks and update key stakeholders - Strong client service focus and willingness to respond to queries and provide deliverables within prompt timeframes. Educational Requirements Bachelor of Engineering Service Line Data & Analytics Unit * Location of posting is subject to business requirements
Posted 1 month ago
5.0 - 9.0 years
5 - 7 Lacs
Chennai
Work from Office
Roles & Responsibilities:- Develop, document, and maintain detailed test cases, user scenarios and test artifacts. Develop and execute test plans, test cases, and test scripts for DBT data models and pipelines. Validate data transformation logic, SQL models, and end-to-end data flows. Work closely with data engineers and analysts to ensure data accuracy and consistency across platforms. Perform data validation and reconciliation between source and target systems. Collaborate with data governance teams to ensure Collibra metadata is correctly mapped, complete, and up to date. Validate business glossaries, data lineage, and metadata assets within Collibra. Identify, document, and track data quality issues, providing recommendations for remediation. Participate in code reviews and DBT model validation processes. Automate QA processes where applicable, using Python, SQL, or testing frameworks. Support data compliance, privacy, and governance initiatives through rigorous QA practices. Knowledge and Experience:- Minimum of 5 years' experience as a software tester with proven experience in defining and leading QA cycles Strong experience with DBT (Data Build Tool) and writing/validating SQL models. Hands-on experience with Collibra for metadata management and data governance validation. Solid understanding of data warehousing concepts and ETL/ELT processes. Proficiency in SQL for data validation and transformation testing. Familiarity with version control tools like Git. Understanding of data governance, metadata, and data quality principles. Strong analytical and problem-solving skills with attention to detail.
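A sketch of how source-to-target reconciliation checks like those above can be automated with pytest. It uses an in-memory SQLite database as a self-contained stand-in; a real suite would open connections to the actual source system and the dbt target schema, and the table and column names here are invented for the example.

```python
import sqlite3

import pytest


@pytest.fixture
def warehouse():
    # In-memory stand-in for a source table and its dbt-built target model
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE src_orders (order_id INTEGER, amount REAL);
        CREATE TABLE fct_orders (order_id INTEGER, amount REAL);
        INSERT INTO src_orders VALUES (1, 10.0), (2, 25.5), (3, 7.25);
        INSERT INTO fct_orders VALUES (1, 10.0), (2, 25.5), (3, 7.25);
    """)
    yield conn
    conn.close()


def test_row_counts_reconcile(warehouse):
    src = warehouse.execute("SELECT COUNT(*) FROM src_orders").fetchone()[0]
    tgt = warehouse.execute("SELECT COUNT(*) FROM fct_orders").fetchone()[0]
    assert src == tgt, f"row count mismatch: source={src}, target={tgt}"


def test_amount_totals_reconcile(warehouse):
    src = warehouse.execute("SELECT ROUND(SUM(amount), 2) FROM src_orders").fetchone()[0]
    tgt = warehouse.execute("SELECT ROUND(SUM(amount), 2) FROM fct_orders").fetchone()[0]
    assert src == tgt, f"amount total mismatch: source={src}, target={tgt}"
```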
Posted 1 month ago
6.0 - 10.0 years
5 - 9 Lacs
Greater Noida
Work from Office
Design, develop, and maintain high-performance SQL and PL/SQL procedures, packages, and functions in Snowflake or other cloud database technologies. Apply advanced performance tuning techniques to optimize database objects, queries, indexing strategies, and resource usage. Develop code based on reading and understanding business and functional requirements following the Agile process Produce high-quality code to meet all project deadlines and ensuring the functionality matches the requirements Analyze and resolve issues found during the testing or pre-production phases of the software delivery lifecycle; coordinating changes with project team leaders and cross-work team members Provide technical support to project team members and responding to inquiries regarding errors or questions about programs Interact with architects, technical leads, team members and project managers as required to address technical and schedule issues. Suggest and implement process improvements for estimating, development and testing processes. Support the development of automated and repeatable processes for ETL/ELT, data integration, and data transformation using industry best practices. Support cloud migration and modernization initiatives, including re-platforming or refactoring legacy database objects for cloud-native platforms. BS Degree in Computer Science, Information Technology, Electrical/Electronic Engineering or another related field or equivalent A minimum of 7 years prior work experience working with an application and database development organization with deep expertise in Oracle PL/SQL or SQL Server T-SQL; must demonstrate experience delivering systems and projects from inception through implementation Proven experience writing and optimizing complex stored procedures, functions, and packages in relational databases such as Oracle, MySQL, SQL Server Strong knowledge of performance tuning, including query optimization, indexing, statistics, execution plans, and partitioning Understanding of data integration pipelines, ETL tools, and batch processing techniques. Possesses solid software development and programming skills, with an understanding of design patterns, and software development best practices Experience with Snowflake, Python scripting, and data transformation frameworks like dbt is a plus Work experience in developing Web Applications with Java, Java Script, HTML, JSPs. Experience with MVC frameworks Spring and Angular
Posted 1 month ago
2.0 - 6.0 years
3 - 7 Lacs
Hyderabad
Work from Office
Design, develop, and maintain high-performance SQL and PL/SQL procedures, packages, and functions in Snowflake or other cloud database technologies. Apply advanced performance tuning techniques to optimize database objects, queries, indexing strategies, and resource usage. Develop code based on reading and understanding business and functional requirements following the Agile process Produce high-quality code to meet all project deadlines and ensuring the functionality matches the requirements Analyze and resolve issues found during the testing or pre-production phases of the software delivery lifecycle; coordinating changes with project team leaders and cross-work team members Provide technical support to project team members and responding to inquiries regarding errors or questions about programs Interact with architects, technical leads, team members and project managers as required to address technical and schedule issues. Suggest and implement process improvements for estimating, development and testing processes. Support the development of automated and repeatable processes for ETL/ELT, data integration, and data transformation using industry best practices. Support cloud migration and modernization initiatives, including re-platforming or refactoring legacy database objects for cloud-native platforms. BS Degree in Computer Science, Information Technology, Electrical/Electronic Engineering or another related field or equivalent A minimum of 7 years prior work experience working with an application and database development organization with deep expertise in Oracle PL/SQL or SQL Server T-SQL; must demonstrate experience delivering systems and projects from inception through implementation Proven experience writing and optimizing complex stored procedures, functions, and packages in relational databases such as Oracle, MySQL, SQL Server Strong knowledge of performance tuning, including query optimization, indexing, statistics, execution plans, and partitioning Understanding of data integration pipelines, ETL tools, and batch processing techniques. Possesses solid software development and programming skills, with an understanding of design patterns, and software development best practices Experience with Snowflake, Python scripting, and data transformation frameworks like dbt is a plus Work experience in developing Web Applications with Java, Java Script, HTML, JSPs. Experience with MVC frameworks Spring and Angular
Posted 1 month ago
6.0 - 11.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Bachelor's degree in Engineering, Computer Science, Information Technology, or a related field. Must have a minimum of 6 years of relevant experience in IT Strong experience in Snowflake, including designing, implementing, and optimizing Snowflake-based solutions. Proficiency in DBT (Data Build Tool) for data transformation and modelling Experience with ETL/ELT processes and integrating data from multiple sources. Experience in designing Tableau dashboards, data visualizations, and reports Familiarity with data warehousing concepts and best practices Strong problem-solving skills and ability to work in cross-functional teams.
Posted 1 month ago
4.0 - 8.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Job Summary: We are seeking an experienced Data Engineer with expertise in Snowflake and PLSQL to design, develop, and optimize scalable data solutions. The ideal candidate will be responsible for building robust data pipelines, managing integrations, and ensuring efficient data processing within the Snowflake environment. This role requires a strong background in SQL, data modeling, and ETL processes, along with the ability to troubleshoot performance issues and collaborate with cross-functional teams. Responsibilities: Design, develop, and maintain data pipelines in Snowflake to support business analytics and reporting. Write optimized PLSQL queries, stored procedures, and scripts for efficient data processing and transformation. Integrate and manage data from various structured and unstructured sources into the Snowflake data platform. Optimize Snowflake performance by tuning queries, managing workloads, and implementing best practices. Collaborate with data architects, analysts, and business teams to develop scalable and high-performing data solutions. Ensure data security, integrity, and governance while handling large-scale datasets. Automate and streamline ETL/ELT workflows for improved efficiency and data consistency. Monitor, troubleshoot, and resolve data quality issues, performance bottlenecks, and system failures. Stay updated on Snowflake advancements, best practices, and industry trends to enhance data engineering capabilities. Required Skills: Bachelor's degree in Engineering, Computer Science, Information Technology, or a related field. Strong experience in Snowflake, including designing, implementing, and optimizing Snowflake-based solutions. Hands-on expertise in PLSQL, including writing and optimizing complex queries, stored procedures, and functions. Proven ability to work with large datasets, data warehousing concepts, and cloud-based data management. Proficiency in SQL, data modeling, and database performance tuning. Experience with ETL/ELT processes and integrating data from multiple sources. Familiarity with cloud platforms such as AWS, Azure, or GCP is an added advantage. Snowflake certifications (e.g., SnowPro Core, SnowPro Advanced) are a plus. Strong analytical skills, problem-solving abilities, and attention to detail. Excellent communication skills and ability to work effectively in a collaborative environment.
Posted 1 month ago
3.0 - 5.0 years
3 - 8 Lacs
Indore, Dewas, Pune
Work from Office
Key Responsibilities: Design, develop, and maintain scalable ETL/ELT pipelines for data ingestion and processing Work with stakeholders to understand data requirements and translate them into technical solutions Ensure data quality, reliability, and governance Optimize data storage and retrieval for performance and cost efficiency Collaborate with Data Scientists, Analysts, and Developers to support their data needs Maintain and enhance data architecture to support business growth Required Skills: Strong experience with SQL and relational databases (MySQL, PostgreSQL, etc.) Hands-on experience with Big Data technologies (Spark, Hadoop, Hive, etc.) Proficiency in Python/Scala/Java for data engineering tasks Experience with cloud platforms (AWS, Azure, or GCP) Familiarity with data warehouse solutions (Redshift, Snowflake, BigQuery, etc.) Knowledge of workflow orchestration tools (Airflow, Luigi, etc.) Good to Have: Experience with real-time data streaming (Kafka, Flink, etc.) Understanding of CI/CD and DevOps practices for data workflows Exposure to data security, compliance, and data privacy practices Qualifications: Bachelors/Master’s degree in Computer Science, IT, or a related field Minimum 3 years of experience in data engineering or related field
Posted 1 month ago
8.0 - 13.0 years
3 - 6 Lacs
Bengaluru
Work from Office
Location: Bengaluru. We are seeking a highly skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have extensive experience in data engineering, with a strong focus on Databricks, Python, and SQL. As a Data Engineer, you will play a crucial role in designing, developing, and maintaining our data infrastructure to support various business needs. Key Responsibilities Develop and implement efficient data pipelines and ETL processes to migrate and manage client, investment, and accounting data in Databricks Work closely with the investment management team to understand data structures and business requirements, ensuring data accuracy and quality. Monitor and troubleshoot data pipelines, ensuring high availability and reliability of data systems. Optimize database performance by designing scalable and cost-effective solutions. What's on offer Competitive salary and benefits package. Opportunities for professional growth and development. A collaborative and inclusive work environment. The chance to work on impactful projects with a talented team. Candidate Profile Experience: 8+ years of experience in data engineering or a similar role. Proficiency in Apache Spark. Databricks Data Cloud, including schema design, data partitioning, and query optimization. Exposure to Azure. Exposure to streaming technologies (e.g., Autoloader, DLT Streaming). Advanced SQL, data modeling skills and data warehousing concepts tailored to investment management data (e.g., transaction, accounting, portfolio data, reference data, etc.). Experience with ETL/ELT tools like SnapLogic and programming languages (e.g., Python, Scala, R programming). Familiarity with workload automation and job scheduling tools such as Control-M. Familiar with data governance frameworks and security protocols. Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills. Education Bachelor's degree in computer science, IT, or a related discipline.
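As a rough illustration of the Autoloader-style streaming ingestion mentioned above: the `cloudFiles` source is Databricks-specific, `trigger(availableNow=True)` assumes a recent runtime, and the paths, columns, and table name below are placeholders rather than anything specified in the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # on Databricks, `spark` is already provided

# Incrementally pick up new JSON files from a landing zone with Auto Loader
raw = (spark.readStream
       .format("cloudFiles")
       .option("cloudFiles.format", "json")
       .option("cloudFiles.schemaLocation", "/mnt/checkpoints/transactions/schema")
       .load("/mnt/landing/transactions/"))

# Light conformance before writing to a Delta table for downstream reporting
conformed = (raw
             .withColumn("ingested_at", F.current_timestamp())
             .withColumnRenamed("txn_id", "transaction_id"))

# Process everything currently available, then stop (batch-style scheduling of a stream)
(conformed.writeStream
 .format("delta")
 .option("checkpointLocation", "/mnt/checkpoints/transactions/stream")
 .trigger(availableNow=True)
 .toTable("bronze.transactions"))
```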
Posted 1 month ago