4.0 - 6.0 years
20 - 25 Lacs
Noida, Pune, Chennai
Work from Office
We are seeking a skilled and detail-oriented Data Engineer with 4 to 6 years of hands-on experience in Microsoft Fabric, Snowflake, and Matillion. The ideal candidate will play a key role in supporting Microsoft Fabric and migrating workloads from Microsoft Fabric to Snowflake and Matillion. Roles and Responsibilities Design, develop, and maintain scalable ETL/ELT pipelines using Matillion and integrate data from various sources. Architect and optimize Snowflake data warehouses, ensuring efficient data storage, querying, and performance tuning. Leverage Microsoft Fabric for end-to-end data engineering tasks, including data ingestion, transformation, and reporting. Collaborate with data analysts, scientists, and business stakeholders to deliver high-quality, consumable data products. Implement data quality checks, monitoring, and observability across pipelines. Automate data workflows and support CI/CD practices for data deployments. Troubleshoot performance bottlenecks and data pipeline failures with a root-cause analysis mindset. Maintain thorough documentation of data processes, pipelines, and architecture. Strong expertise with: Microsoft Fabric (Dataflows, Pipelines, Lakehouse, Notebooks, etc.) Snowflake (warehouse sizing, SnowSQL, performance tuning) Matillion (ETL/ELT orchestration, job optimization, connectors) Proficiency in SQL and data modeling (dimensional/star schema, normalization). Experience with Python or other scripting languages for data manipulation. Familiarity with version control tools (e.g., Git) and CI/CD workflows. Solid understanding of cloud data architecture (Azure preferred). Strong problem-solving and debugging skills.
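To make the Snowflake side of this role concrete, here is a minimal sketch of the kind of incremental-load step a Matillion job might orchestrate: a MERGE issued through the Snowflake Python connector, with a warehouse resize to illustrate basic sizing control. Account, warehouse, and table names are hypothetical, not taken from the posting.

```python
import snowflake.connector  # pip install snowflake-connector-python

# Hypothetical credentials and object names, for illustration only.
conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="***",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)

merge_sql = """
MERGE INTO analytics.core.orders AS tgt
USING staging.orders_delta AS src
  ON tgt.order_id = src.order_id
WHEN MATCHED THEN
  UPDATE SET tgt.status = src.status, tgt.updated_at = src.updated_at
WHEN NOT MATCHED THEN
  INSERT (order_id, status, updated_at)
  VALUES (src.order_id, src.status, src.updated_at)
"""

with conn.cursor() as cur:
    # Size the warehouse for the load window, then apply the incremental delta.
    cur.execute("ALTER WAREHOUSE LOAD_WH SET WAREHOUSE_SIZE = 'MEDIUM'")
    cur.execute(merge_sql)
conn.close()
```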
Posted 1 day ago
3.0 - 7.0 years
11 - 15 Lacs
Gurugram
Work from Office
Overview We are seeking an experienced Data Modeller with expertise in designing and implementing data models for modern data platforms. This role requires deep knowledge of data modeling techniques, healthcare data structures, and experience with Databricks Lakehouse architecture. The ideal candidate will have a proven track record of translating complex business requirements into efficient, scalable data models that support analytics and reporting needs. About the Role As a Data Modeller, you will be responsible for designing and implementing data models for our Databricks-based Modern Data Platform. You will work closely with business stakeholders, data architects, and data engineers to create logical and physical data models that support the migration from legacy systems to the Databricks Lakehouse architecture, ensuring data integrity, performance, and compliance with healthcare industry standards. Key Responsibilities Design and implement logical and physical data models for Databricks Lakehouse implementations Translate business requirements into efficient, scalable data models Create and maintain data dictionaries, entity relationship diagrams, and model documentation Develop dimensional models, data vault models, and other modeling approaches as appropriate Support the migration of data models from legacy systems to Databricks platform Collaborate with data architects to ensure alignment with overall data architecture Work with data engineers to implement and optimize data models Ensure data models comply with healthcare industry regulations and standards Implement data modeling best practices and standards Provide guidance on data modeling approaches and techniques Participate in data governance initiatives and data quality assessments Stay current with evolving data modeling techniques and industry trends Qualifications Extensive experience in data modeling for analytics and reporting systems Strong knowledge of dimensional modeling, data vault, and other modeling methodologies Experience with Databricks platform and Delta Lake architecture Expertise in healthcare data modeling and industry standards Experience migrating data models from legacy systems to modern platforms Strong SQL skills and experience with data definition languages Understanding of data governance principles and practices Experience with data modeling tools and technologies Knowledge of performance optimization techniques for data models Bachelor's degree in Computer Science, Information Systems, or related field; advanced degree preferred Professional certifications in data modeling or related areas Technical Skills Data modeling methodologies (dimensional, data vault, etc.) Databricks platform and Delta Lake SQL and data definition languages Data modeling tools (erwin, ER/Studio, etc.) Data warehousing concepts and principles ETL/ELT processes and data integration Performance tuning for data models Metadata management and data cataloging Cloud platforms (AWS, Azure, GCP) Big data technologies and distributed computing Healthcare Industry Knowledge Healthcare data structures and relationships Healthcare terminology and coding systems (ICD, CPT, SNOMED, etc.) Healthcare data standards (HL7, FHIR, etc.) Healthcare analytics use cases and requirements Optional: Healthcare regulatory requirements (HIPAA, HITECH, etc.)
Clinical and operational data modeling challenges Population health and value-based care data needs Personal Attributes Strong analytical and problem-solving skills Excellent attention to detail and data quality focus Ability to translate complex business requirements into technical solutions Effective communication skills with both technical and non-technical stakeholders Collaborative approach to working with cross-functional teams Self-motivated with ability to work independently Continuous learner who stays current with industry trends What We Offer Opportunity to design data models for cutting-edge healthcare analytics Collaborative and innovative work environment Competitive compensation package Professional development opportunities Work with leading technologies in the data space This position requires a unique combination of data modeling expertise, technical knowledge, and healthcare industry understanding. The ideal candidate will have demonstrated success in designing efficient, scalable data models and a passion for creating data structures that enable powerful analytics and insights.
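Since dimensional modeling is central to this role, here is a minimal star-schema sketch: one healthcare-flavored fact table joined to two dimensions. It uses Python's built-in sqlite3 purely so it runs anywhere; on a Databricks Lakehouse the equivalent DDL would create Delta tables. All table and column names are hypothetical.

```python
import sqlite3

# Minimal star schema: an encounter fact with patient and date dimensions.
ddl = """
CREATE TABLE dim_patient (
    patient_key  INTEGER PRIMARY KEY,   -- surrogate key
    patient_id   TEXT NOT NULL,         -- natural/source key
    birth_year   INTEGER,
    gender       TEXT
);
CREATE TABLE dim_date (
    date_key     INTEGER PRIMARY KEY,   -- e.g. 20240131
    full_date    TEXT NOT NULL,
    month        INTEGER,
    year         INTEGER
);
CREATE TABLE fact_encounter (
    encounter_key INTEGER PRIMARY KEY,
    patient_key   INTEGER REFERENCES dim_patient (patient_key),
    date_key      INTEGER REFERENCES dim_date (date_key),
    icd10_code    TEXT,                 -- diagnosis code
    charge_amount REAL
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(ddl)
print("star schema created")
```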
Posted 20 hours ago
15.0 - 20.0 years
16 - 20 Lacs
Gurugram, Bengaluru
Work from Office
Role Overview We are seeking a highly skilled and experienced Data Manager to lead the development, governance, and utilization of enterprise data systems. This is a strategic leadership role focused on ensuring seamless and secure flow of data across our platforms and teams, enabling timely and accurate access to actionable insights. The ideal candidate brings a strong foundation in data architecture, governance, and cloud-native systems, combined with hands-on experience managing cross-functional teams and implementing scalable, secure, and cost-efficient data solutions. Your Objectives Optimize data systems and infrastructure to support business intelligence and analytics. Implement best-in-class data governance, quality, and security frameworks. Lead a team of data and software engineers to develop, scale, and maintain cloud-native platforms. Support data-driven decision-making across the enterprise Key Responsibilities Develop and enforce policies for effective and ethical data management. Design and implement secure, efficient processes for data collection, storage, analysis, and sharing. Monitor and enhance data quality, consistency, and lineage. Oversee integration of data from multiple systems and business units. Partner with internal stakeholders to support data needs, dashboards, and ad hoc reporting. Maintain compliance with regulatory frameworks such as GDPR and HIPAA. Troubleshoot data-related issues and implement sustainable resolutions. Ensure digital data systems are secure from breaches and data loss. Evaluate and recommend new data tools, architectures, and technologies. Support documentation using Atlassian tools and develop architectural diagrams. Automate cloud operations using infrastructure as code (e.g., Terraform) and DevOps practices. Facilitate inter-team communication to improve data infrastructure and eliminate silos. Leadership & Strategic Duties Manage, mentor, and grow a high-performing data engineering team. Lead cross-functional collaboration with backend engineers, architects, and product teams. Facilitate partnerships with cloud providers (e.g., AWS) to leverage cutting-edge technologies. Conduct architecture reviews, PR reviews, and drive engineering best practices. Collaborate with business, product, legal, and compliance teams to align data operations with enterprise goals. Required Qualifications Bachelor's or Master's degree in Computer Science, Engineering, or related field. 10-15 years of experience in enterprise data architecture, governance, or data platform development. Expertise in SQL, data modelling, and modern data tools (e.g., Snowflake, dbt, Fivetran). Deep understanding of AWS cloud services (Lambda, ECS, RDS, DynamoDB, S3, SQS). Proficient in scripting (Python, Bash) and CI/CD pipelines. Demonstrated experience with ETL/ELT orchestration (e.g., Airflow, Prefect). Strong understanding of DevOps, Terraform, containerization, and serverless computing. Solid grasp of data security, compliance, and regulatory requirements Preferred Experience (Healthcare Focused) Experience working in healthcare analytics or data environments. Familiarity with EHR/EMR systems such as Epic, Cerner, Meditech, or Allscripts. Deep understanding of healthcare data privacy, patient information handling, and clinical workflows Soft Skills & Team Fit Strong leadership and mentoring mindset. Ability to manage ambiguity and work effectively in dynamic environments. Excellent verbal and written communication skills with technical and non-technical teams.
Passionate about people development, knowledge sharing, and continuous learning. Resilient, empathetic, and strategically focused.
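Given the posting's mix of AWS services (Lambda, S3, SQS) and Python scripting, here is a minimal sketch of the kind of freshness check such a team might automate: verify that an S3 extract landed and raise an alert on a queue if not. The bucket, key, and queue URL are hypothetical, and AWS credentials are assumed to be configured in the environment.

```python
import boto3  # pip install boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

BUCKET = "example-data-lake"  # hypothetical names
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/data-alerts"

# head_object raises ClientError if the day's extract has not landed.
try:
    meta = s3.head_object(Bucket=BUCKET, Key="raw/orders/2024-01-31.parquet")
    print("extract present:", meta["ContentLength"], "bytes")
except ClientError:
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody="orders extract missing for 2024-01-31",
    )
```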
Posted 20 hours ago
6.0 - 10.0 years
16 - 25 Lacs
Varanasi
Work from Office
Job Summary: We are looking for a highly skilled Senior Data Engineer with expertise in Snowflake, DBT (Data Build Tool), and SAP Data Services (SAP DS). The ideal candidate will be responsible for building scalable data pipelines, designing robust data models, and ensuring high data quality across enterprise platforms. Key Responsibilities: Design, build, and optimize data pipelines and ETL/ELT workflows using Snowflake and DBT Integrate and manage data from various sources using SAP Data Services Develop and maintain scalable data models, data marts, and data warehouses Work closely with data analysts, business stakeholders, and BI teams to support reporting and analytics needs Implement best practices in data governance, data lineage, and metadata management Monitor data quality, troubleshoot issues, and ensure data integrity Optimize Snowflake data warehouse performance (partitioning, caching, query tuning) Automate data workflows and deploy DBT models with CI/CD tools (e.g., Git, Jenkins) Document architecture, data flows, and technical specifications
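One listed responsibility is deploying DBT models through CI/CD tools such as Git and Jenkins. Below is a minimal sketch of a CI step that runs dbt from Python; it assumes the dbt CLI is installed and that a "prod" target exists in profiles.yml — both assumptions, not details from the posting.

```python
import subprocess
import sys

# Install dbt package dependencies, then build (run + test) against prod.
for step in (
    ["dbt", "deps"],
    ["dbt", "build", "--target", "prod"],
):
    result = subprocess.run(step, capture_output=True, text=True)
    print(result.stdout)
    if result.returncode != 0:
        # Fail the CI job if any model or test fails.
        print(result.stderr, file=sys.stderr)
        sys.exit(result.returncode)
```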
Posted 1 day ago
2.0 - 6.0 years
3 - 7 Lacs
Hyderabad
Work from Office
Design, develop, and maintain high-performance SQL and PL/SQL procedures, packages, and functions in Snowflake or other cloud database technologies. Apply advanced performance tuning techniques to optimize database objects, queries, indexing strategies, and resource usage. Develop code based on reading and understanding business and functional requirements, following the Agile process. Produce high-quality code to meet all project deadlines and ensure the functionality matches the requirements. Analyze and resolve issues found during the testing or pre-production phases of the software delivery lifecycle, coordinating changes with project team leaders and cross-work team members. Provide technical support to project team members and respond to inquiries regarding errors or questions about programs. Interact with architects, technical leads, team members, and project managers as required to address technical and schedule issues. Suggest and implement process improvements for estimating, development, and testing processes. Support the development of automated and repeatable processes for ETL/ELT, data integration, and data transformation using industry best practices. Support cloud migration and modernization initiatives, including re-platforming or refactoring legacy database objects for cloud-native platforms. BS Degree in Computer Science, Information Technology, Electrical/Electronic Engineering, or another related field, or equivalent. A minimum of 7 years of prior work experience with an application and database development organization with deep expertise in Oracle PL/SQL or SQL Server T-SQL; must demonstrate experience delivering systems and projects from inception through implementation. Proven experience writing and optimizing complex stored procedures, functions, and packages in relational databases such as Oracle, MySQL, and SQL Server. Strong knowledge of performance tuning, including query optimization, indexing, statistics, execution plans, and partitioning. Understanding of data integration pipelines, ETL tools, and batch processing techniques. Solid software development and programming skills, with an understanding of design patterns and software development best practices. Experience with Snowflake, Python scripting, and data transformation frameworks like dbt is a plus. Work experience in developing web applications with Java, JavaScript, HTML, and JSPs, and experience with MVC frameworks such as Spring and Angular.
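The tuning skills named here (indexing, execution plans, query optimization) can be illustrated with a self-contained example. This sketch uses Python's stdlib sqlite3 rather than Oracle or SQL Server so it runs as-is, but the workflow — inspect the plan, add an index, confirm the plan changed — carries over directly.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 1.5) for i in range(100_000)],
)

query = "SELECT SUM(total) FROM orders WHERE customer_id = 42"

# Without an index, the plan is a full table scan.
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

# With the index, the plan becomes an index search on customer_id.
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())
```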
Posted 1 day ago
6.0 - 10.0 years
16 - 25 Lacs
Lucknow
Work from Office
Job Summary: We are looking for a highly skilled Senior Data Engineer with expertise in Snowflake, DBT (Data Build Tool), and SAP Data Services (SAP DS). The ideal candidate will be responsible for building scalable data pipelines, designing robust data models, and ensuring high data quality across enterprise platforms. Key Responsibilities: Design, build, and optimize data pipelines and ETL/ELT workflows using Snowflake and DBT Integrate and manage data from various sources using SAP Data Services Develop and maintain scalable data models, data marts, and data warehouses Work closely with data analysts, business stakeholders, and BI teams to support reporting and analytics needs Implement best practices in data governance, data lineage, and metadata management Monitor data quality, troubleshoot issues, and ensure data integrity Optimize Snowflake data warehouse performance (partitioning, caching, query tuning) Automate data workflows and deploy DBT models with CI/CD tools (e.g., Git, Jenkins) Document architecture, data flows, and technical specifications
Posted 1 day ago
6.0 - 10.0 years
16 - 25 Lacs
Ludhiana
Work from Office
Job Summary: We are looking for a highly skilled Senior Data Engineer with expertise in Snowflake, DBT (Data Build Tool), and SAP Data Services (SAP DS). The ideal candidate will be responsible for building scalable data pipelines, designing robust data models, and ensuring high data quality across enterprise platforms. Key Responsibilities: Design, build, and optimize data pipelines and ETL/ELT workflows using Snowflake and DBT Integrate and manage data from various sources using SAP Data Services Develop and maintain scalable data models, data marts, and data warehouses Work closely with data analysts, business stakeholders, and BI teams to support reporting and analytics needs Implement best practices in data governance, data lineage, and metadata management Monitor data quality, troubleshoot issues, and ensure data integrity Optimize Snowflake data warehouse performance (partitioning, caching, query tuning) Automate data workflows and deploy DBT models with CI/CD tools (e.g., Git, Jenkins) Document architecture, data flows, and technical specifications
Posted 1 day ago
3.0 - 7.0 years
0 Lacs
navi mumbai, maharashtra
On-site
We are seeking a full-time, office-based Data Engineer to join the Information Technology team supporting our rapidly growing corporate activities. In this role, you will work collaboratively with a team on tasks and projects crucial to the company's success. If you are looking for a dynamic career opportunity to utilize your expertise and further develop and enhance your skills, this position is the perfect fit for you. Your responsibilities will include utilizing your skills in data warehousing, business intelligence, and databases such as Snowflake, ANSI SQL, SQL Server, and T-SQL. You will support programming/software development using ETL and ELT tools like dbt, Azure Data Factory, and SSIS. Designing, developing, enhancing, and supporting business intelligence systems primarily using Microsoft Power BI will be a key part of your role. Additionally, you will be responsible for collecting, analyzing, and documenting user requirements, participating in software validation processes, creating software applications following the software development lifecycle, and providing end-user support for applications. To qualify for this position, you should have a Bachelor's Degree in Computer Science, Data Science, or a related field, along with at least 3 years of experience in Data Engineering. Knowledge of developing dimensional data models, understanding of Star Schema and Snowflake schema designs, solid ETL development and reporting knowledge, and familiarity with Snowflake cloud data warehouse, Fivetran data integration, and dbt transformations are preferred. Proficiency in Python, REST API, SQL Server databases, and bonus knowledge of C#, Azure development is desired. Excellent analytical, written, and oral communication skills are essential for this role. Medpace is a full-service clinical contract research organization (CRO) dedicated to providing Phase I-IV clinical development services to the biotechnology, pharmaceutical, and medical device industries. With a mission to accelerate the global development of safe and effective medical therapeutics, Medpace leverages local regulatory and therapeutic expertise across various major areas. Headquartered in Cincinnati, Ohio, Medpace employs over 5,000 individuals across 40+ countries. At Medpace, you will be part of a team that makes a positive impact on the lives of patients and families facing various diseases. The work you do today will contribute to improving the lives of individuals living with illness and disease in the future. Medpace offers a flexible work environment, competitive compensation and benefits package, structured career paths for professional growth, company-sponsored employee appreciation events, and employee health and wellness initiatives. Medpace has been recognized by Forbes as one of America's Most Successful Midsize Companies and has received CRO Leadership Awards for expertise, quality, capabilities, reliability, and compatibility. If your qualifications align with the requirements of the position, a Medpace team member will review your profile, and if interested, you will be contacted with further details on the next steps. Join us at Medpace and be a part of a team driven by People, Purpose, and Passion to make a difference tomorrow.
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
You are an experienced Alteryx developer responsible for designing and developing new applications and enhancing existing models using Alteryx Designer. You will be involved in the entire Software Development Life Cycle (SDLC), requiring excellent communication skills and direct collaboration with the business. It is crucial that you are self-sufficient and adept at building internal networks within the business and technology teams. Your responsibilities include owning changes from inception to deployment, implementing new functionality, identifying process gaps for improvement, and focusing on scalability and stability. You must be results-oriented, self-motivated, and able to multitask across different teams and applications. Additionally, effective communication with remotely dispersed teams is essential for this role. Your technical expertise should include workflow enhancement, designing macros, integrating Alteryx with various tools, maintaining user roles in the Alteryx gallery, using version control systems like git, and working with multiple data sources compatible with Alteryx. You should possess advanced development and troubleshooting skills, document training and support, understand SDLC methodologies, have strong communication skills, be proficient in SQL database query tools, and comprehend data warehouse architecture. In addition to the technical requirements, you will need to have experience working in an Agile environment, managing ETL/ELT data load processes, knowledge of Cloud Infrastructure, and integration with data sources and relational databases. Being self-motivated, working independently, and collaborating as a team player are essential. Your analytical and problem-solving skills, ability to handle multiple stakeholders and queries, prioritize tasks, and meet prompt deadlines are crucial. A strong client service focus and willingness to respond promptly to queries and deliverables are expected. Preferred Skills: Data Analytics, Alteryx.
Posted 1 day ago
6.0 - 10.0 years
16 - 25 Lacs
Vadodara
Work from Office
Job Summary: We are looking for a highly skilled Senior Data Engineer with expertise in Snowflake, DBT (Data Build Tool), and SAP Data Services (SAP DS). The ideal candidate will be responsible for building scalable data pipelines, designing robust data models, and ensuring high data quality across enterprise platforms. Key Responsibilities: Design, build, and optimize data pipelines and ETL/ELT workflows using Snowflake and DBT Integrate and manage data from various sources using SAP Data Services Develop and maintain scalable data models, data marts, and data warehouses Work closely with data analysts, business stakeholders, and BI teams to support reporting and analytics needs Implement best practices in data governance, data lineage, and metadata management Monitor data quality, troubleshoot issues, and ensure data integrity Optimize Snowflake data warehouse performance (partitioning, caching, query tuning) Automate data workflows and deploy DBT models with CI/CD tools (e.g., Git, Jenkins) Document architecture, data flows, and technical specifications
Posted 1 day ago
6.0 - 10.0 years
16 - 25 Lacs
Agra
Work from Office
Job Summary: We are looking for a highly skilled Senior Data Engineer with expertise in Snowflake, DBT (Data Build Tool), and SAP Data Services (SAP DS). The ideal candidate will be responsible for building scalable data pipelines, designing robust data models, and ensuring high data quality across enterprise platforms. Key Responsibilities: Design, build, and optimize data pipelines and ETL/ELT workflows using Snowflake and DBT Integrate and manage data from various sources using SAP Data Services Develop and maintain scalable data models, data marts, and data warehouses Work closely with data analysts, business stakeholders, and BI teams to support reporting and analytics needs Implement best practices in data governance, data lineage, and metadata management Monitor data quality, troubleshoot issues, and ensure data integrity Optimize Snowflake data warehouse performance (partitioning, caching, query tuning) Automate data workflows and deploy DBT models with CI/CD tools (e.g., Git, Jenkins) Document architecture, data flows, and technical specifications
Posted 1 day ago
3.0 - 8.0 years
15 - 25 Lacs
Bengaluru
Work from Office
Postgres DBA: 3-8 years of hands-on experience in developing and administering Azure Data Factory solutions for complex data integration projects. Proven experience in migrating databases from MSSQL to PostgreSQL in an Azure environment. Strong proficiency in SQL and experience working with both MSSQL and PostgreSQL databases. Experience with various ADF connectors, transformations, and control flow activities. Understanding of data warehousing concepts and ETL/ELT methodologies. Familiarity with application migration processes and integration patterns, including experience with platforms like PEGA is highly desirable. Experience with scripting languages such as Python or PowerShell for automation tasks.
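A minimal sketch of the MSSQL-to-PostgreSQL data movement this posting describes: batch rows out of a SQL Server source via pyodbc and into a PostgreSQL target via psycopg2. Connection strings and table names are hypothetical, and a production migration would typically use bulk COPY plus schema-conversion tooling rather than row batches.

```python
import pyodbc      # pip install pyodbc (SQL Server source)
import psycopg2    # pip install psycopg2-binary (PostgreSQL target)

# Hypothetical connection details, for illustration only.
src = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=mssql-host;DATABASE=sales;UID=reader;PWD=***"
)
tgt = psycopg2.connect(host="pg-host", dbname="sales", user="writer", password="***")

BATCH = 5_000
src_cur = src.cursor()
src_cur.execute("SELECT id, customer_id, total FROM dbo.orders")

# psycopg2's connection context manager commits on clean exit.
with tgt, tgt.cursor() as tgt_cur:
    while True:
        rows = src_cur.fetchmany(BATCH)
        if not rows:
            break
        tgt_cur.executemany(
            "INSERT INTO orders (id, customer_id, total) VALUES (%s, %s, %s)",
            [tuple(r) for r in rows],
        )
```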
Posted 1 day ago
6.0 - 10.0 years
16 - 25 Lacs
Nagpur
Work from Office
Job Summary: We are looking for a highly skilled Senior Data Engineer with expertise in Snowflake, DBT (Data Build Tool), and SAP Data Services (SAP DS). The ideal candidate will be responsible for building scalable data pipelines, designing robust data models, and ensuring high data quality across enterprise platforms. Key Responsibilities: Design, build, and optimize data pipelines and ETL/ELT workflows using Snowflake and DBT Integrate and manage data from various sources using SAP Data Services Develop and maintain scalable data models, data marts, and data warehouses Work closely with data analysts, business stakeholders, and BI teams to support reporting and analytics needs Implement best practices in data governance, data lineage, and metadata management Monitor data quality, troubleshoot issues, and ensure data integrity Optimize Snowflake data warehouse performance (partitioning, caching, query tuning) Automate data workflows and deploy DBT models with CI/CD tools (e.g., Git, Jenkins) Document architecture, data flows, and technical specifications
Posted 1 day ago
6.0 - 10.0 years
16 - 25 Lacs
Jaipur
Work from Office
Job Summary: We are looking for a highly skilled Senior Data Engineer with expertise in Snowflake, DBT (Data Build Tool), and SAP Data Services (SAP DS). The ideal candidate will be responsible for building scalable data pipelines, designing robust data models, and ensuring high data quality across enterprise platforms. Key Responsibilities: Design, build, and optimize data pipelines and ETL/ELT workflows using Snowflake and DBT Integrate and manage data from various sources using SAP Data Services Develop and maintain scalable data models, data marts, and data warehouses Work closely with data analysts, business stakeholders, and BI teams to support reporting and analytics needs Implement best practices in data governance, data lineage, and metadata management Monitor data quality, troubleshoot issues, and ensure data integrity Optimize Snowflake data warehouse performance (partitioning, caching, query tuning) Automate data workflows and deploy DBT models with CI/CD tools (e.g., Git, Jenkins) Document architecture, data flows, and technical specifications
Posted 1 day ago
6.0 - 10.0 years
16 - 25 Lacs
Faridabad
Work from Office
Job Summary: We are looking for a highly skilled Senior Data Engineer with expertise in Snowflake, DBT (Data Build Tool), and SAP Data Services (SAP DS). The ideal candidate will be responsible for building scalable data pipelines, designing robust data models, and ensuring high data quality across enterprise platforms. Key Responsibilities: Design, build, and optimize data pipelines and ETL/ELT workflows using Snowflake and DBT Integrate and manage data from various sources using SAP Data Services Develop and maintain scalable data models, data marts, and data warehouses Work closely with data analysts, business stakeholders, and BI teams to support reporting and analytics needs Implement best practices in data governance, data lineage, and metadata management Monitor data quality, troubleshoot issues, and ensure data integrity Optimize Snowflake data warehouse performance (partitioning, caching, query tuning) Automate data workflows and deploy DBT models with CI/CD tools (e.g., Git, Jenkins) Document architecture, data flows, and technical specifications
Posted 1 day ago
8.0 - 12.0 years
35 - 50 Lacs
Hyderabad
Work from Office
Job Description: Senior Data Analyst Location: Hyderabad, IN - Work from Office Experience: 7+ Years Role Summary We are seeking an experienced and highly skilled Senior Data Analyst to join our team. The ideal candidate will possess a deep proficiency in SQL, a strong understanding of data architecture, and a working knowledge of the Google Cloud Platform (GCP)-based ecosystem. They will be responsible for turning complex business questions into actionable insights, driving strategic decisions, and helping shape the future of our Product/Operations team. This role requires a blend of technical expertise, analytical rigor, and excellent communication skills to partner effectively with engineering, product, and business leaders. Key Responsibilities Advanced Data Analysis: Utilize advanced SQL skills to query, analyze, and manipulate large, complex datasets. Develop and maintain robust, scalable dashboards and reports to monitor key performance indicators (KPIs). Source Code Management: Proven ability to effectively manage, version, and collaborate on code using codebase management systems like GitHub. Responsible for upholding data integrity, producing reproducible analyses, and fostering a collaborative database management environment through best practices in version control and code documentation. Strategic Insights: Partner with product managers and business stakeholders to define and answer critical business questions. Conduct deep-dive analyses to identify trends, opportunities, and root causes of performance changes. Data Architecture & Management: Work closely with data engineers to design, maintain, and optimize data schemas and pipelines. Provide guidance on data modeling best practices and ensure data integrity and quality. Reporting & Communication: Translate complex data findings into clear, concise, and compelling narratives for both technical and non-technical audiences. Present insights and recommendations to senior leadership to influence strategic decision-making. Project Leadership: Lead analytical projects from end to end, including defining project scope, methodology, and deliverables. Mentor junior analysts, fostering a culture of curiosity and data-driven problem-solving. Required Skills & Experience Bachelor's degree in a quantitative field such as Computer Science, Statistics, Mathematics, Economics, or a related discipline. 5+ years of professional experience in a data analysis or business intelligence role. Expert-level proficiency in SQL with a proven ability to write complex queries, perform window functions, and optimize queries for performance on massive datasets. Strong understanding of data architecture, including data warehousing, data modeling (e.g., star/snowflake schemas), and ETL/ELT principles. Excellent communication and interpersonal skills, with a track record of successfully influencing stakeholders. Experience with a business intelligence tool such as Tableau, Looker, or Power BI to create dashboards and visualizations. Experience with internal Google/Alphabet data tools and infrastructure, such as BigQuery, Dremel, or Google-internal data portals. Experience with statistical analysis, A/B testing, and experimental design. Familiarity with machine learning concepts and their application in a business context. A strong sense of curiosity and a passion for finding and communicating insights from data.
Proficiency with scripting languages for data analysis (e.g., Apps Script, Python, or R) would be an added advantage. Responsibilities Lead a team of data scientists and analysts to deliver data-driven insights and solutions. Oversee the development and implementation of data models and algorithms to support new product development. Provide strategic direction for data science projects ensuring alignment with business goals. Collaborate with cross-functional teams to integrate data science solutions into business processes. Analyze complex datasets to identify trends and patterns that inform business decisions. Utilize generative AI techniques to develop innovative solutions for product development. Ensure adherence to ITIL V4 practices in all data science projects. Develop and maintain documentation for data science processes and methodologies. Mentor and guide team members to enhance their technical and analytical skills. Monitor project progress and adjust strategies to meet deadlines and objectives. Communicate findings and recommendations to stakeholders in a clear and concise manner. Drive continuous improvement in data science practices and methodologies. Foster a culture of innovation and collaboration within the data science team. Qualifications Possess strong experience in business analysis and data analysis. Demonstrate expertise in generative AI and its applications in product development. Have a solid understanding of ITIL V4 practices and their implementation. Exhibit excellent communication and collaboration skills. Show proficiency in managing and leading a team of data professionals. Display a commitment to working from the office during day shifts.
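Expert-level SQL with window functions is the core requirement here; below is a minimal sketch of how such a query might be run against BigQuery from Python. The project, dataset, and table names are hypothetical, and application-default credentials are assumed.

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()  # uses application-default credentials

# Rank each day's revenue within its month and compute a 7-day moving average.
sql = """
SELECT
  order_date,
  daily_revenue,
  RANK() OVER (PARTITION BY DATE_TRUNC(order_date, MONTH)
               ORDER BY daily_revenue DESC) AS rank_in_month,
  AVG(daily_revenue) OVER (ORDER BY order_date
                           ROWS BETWEEN 6 PRECEDING AND CURRENT ROW) AS revenue_7d_avg
FROM `my_project.analytics.daily_revenue`   -- hypothetical table
ORDER BY order_date
"""

for row in client.query(sql).result():
    print(row.order_date, row.rank_in_month, round(row.revenue_7d_avg, 2))
```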
Posted 1 day ago
2.0 - 6.0 years
0 Lacs
karnataka
On-site
You should have strong knowledge of SQL and Python. Experience in Snowflake is preferred. Additionally, you should have knowledge of AWS services such as S3, Lambda, IAM, Step Functions, SNS, SQS, ECS, and DynamoDB. It is important to have expertise in data movement technologies like ETL/ELT. Good-to-have skills include knowledge of DevOps and Continuous Integration/Continuous Delivery with tools such as Maven, Jenkins, Stash, Control-M, and Docker. Experience in automation and REST APIs would be beneficial for this role.
Posted 3 days ago
6.0 - 10.0 years
0 Lacs
hyderabad, telangana
On-site
As a Data Engineering Manager at Micron Technology Inc., you will play a crucial role within the Technology Solutions group of the Smart Manufacturing and AI organization. Your responsibilities will involve working closely with Micron's Front End Manufacturing and Planning Ops business area, focusing on data engineering, Machine Learning, and advanced analytics solutions. We are seeking a leader with a strong technical background in Big Data and Cloud Data warehouse technologies, particularly in Cloud data warehouse platforms like Snowflake and GCP, monitoring solutions such as Splunk, and automation and machine learning using Python. Your primary tasks will include leading a team of Data Engineers, providing technical and people leadership, and ensuring the successful delivery of critical projects and production support. You will engage team members in their career development, maintain a positive work culture, and participate in the design, architecture review, and deployment of big data and cloud data warehouse solutions. Additionally, you will collaborate with key project stakeholders, analyze project needs, and translate requirements into technical specifications for the team of data engineers. To excel in this role, you should have a solid background in developing, delivering, and supporting big data engineering and advanced analytics solutions, with at least 10 years of experience in the field. Managing or leading data engineering teams for 6+ years and hands-on experience in building Cloud Data-centric solutions in GCP or other cloud platforms for 4-5 years is essential. Proficiency in Python programming, experience with Spark, ELT or ETL techniques, database management systems like SQL Server and Snowflake, and strong domain knowledge in Manufacturing Planning and Scheduling data are highly desired. Furthermore, you should possess intermediate to advanced programming skills, excellent communication abilities, and a passion for data and information. Being self-motivated, adaptable to a fast-paced environment, and having a Bachelor's degree in Computer Science, Management Information Systems, or related fields are prerequisites for this role. Micron Technology is a pioneering industry leader in memory and storage solutions, dedicated to transforming how information enriches lives globally. Our commitment to innovation, technology leadership, and operational excellence drives us to deliver high-performance memory and storage products through our Micron and Crucial brands, fueling advancements in artificial intelligence and 5G applications. If you are motivated by the power of data and eager to contribute to cutting-edge solutions, we encourage you to explore career opportunities with us at micron.com/careers.
Posted 3 days ago
6.0 - 10.0 years
0 Lacs
karnataka
On-site
As an Informatica Developer with a specialization in Informatica Intelligent Data Management Cloud (IDMC), you will be responsible for designing, developing, and maintaining data pipelines and integrations using Informatica IDMC. Your role will involve working on Cloud Data Integration (CDI) and Cloud Application Integration (CAI) modules, building and optimizing ETL/ELT mappings, workflows, and data quality rules in a cloud setup, as well as deploying and monitoring data jobs using IDMC's operational dashboards and alerting tools. You will collaborate closely with data architects and business analysts to understand data integration requirements and write and optimize SQL queries for data processing. Strong hands-on experience with Informatica IDMC, proficiency in CDI, CAI, and cloud-based data workflows, as well as a solid understanding of ETL/ELT processes, data quality, and data integration best practices are essential for this role. Expertise in SQL and working with Oracle/SQL Server, strong analytical and problem-solving skills, along with excellent communication and interpersonal abilities, will be key to your success in this position. Your responsibilities will also include troubleshooting and resolving integration issues efficiently, ensuring performance tuning, and high availability of data solutions. This is a full-time position that requires you to work in person at locations in Bangalore, Cochin, or Trivandrum. If you are passionate about Informatica IDMC, Cloud Data Integration, and Cloud Application Integration, and possess the technical skills required for this role, we encourage you to apply and be part of our dynamic team.
Posted 4 days ago
5.0 - 10.0 years
7 - 11 Lacs
Pune
Work from Office
Sr Data Engineer 2 We are seeking a Sr. Data Engineer to join our Data Engineering team within our Enterprise Data Insights organization to build data solutions, design and implement ETL/ELT processes, and manage our data platform to enable our cross-functional stakeholders. As part of our Corporate Engineering division, our vision is to spearhead technology and data-led solutions and experiences to drive growth and innovation at scale. The ideal candidate will have a strong Data Engineering background, advanced Python knowledge, and experience with cloud services and SQL/NoSQL databases. You will work closely with our cross-functional stakeholders in Product, Finance, and GTM, along with Business and Enterprise Technology teams. As a Senior Data Engineer, you will: Collaborate closely with various stakeholders to prioritize requests, identify improvements, and offer recommendations. Take the lead in analyzing, designing, and implementing data solutions, which involves constructing and designing data models and ETL processes. Cultivate collaboration with corporate engineering, product teams, and other engineering groups. Lead and mentor engineering discussions, advocating for best practices. Actively participate in design and code reviews. Access and explore third-party data APIs to determine the data required to meet business needs. Ensure data quality and integrity across different sources and systems. Manage data pipelines for both analytics and operational purposes. Continuously enhance processes and policies to improve SLA and SOX compliance. You'll be a great addition to the team if you: Hold a B.S., M.S., or Ph.D. in Computer Science or a related technical field. Possess over 5 years of experience in Data Engineering, focusing on building and maintaining data environments. Demonstrate at least 5 years of experience in designing and constructing ETL/ELT processes, managing data solutions within an SLA-driven environment. Exhibit a strong background in developing data products, APIs, and maintaining testing, monitoring, isolation, and SLA processes. Possess advanced knowledge of SQL/NoSQL databases (such as Snowflake, Redshift, MongoDB). Are proficient in programming with Python or other scripting languages. Have familiarity with columnar OLAP databases and data modeling. Have experience building ELT/ETL processes using tools like dbt, Airflow, Fivetran, CI/CD using GitHub, and reporting in Tableau. Possess excellent communication and interpersonal skills to effectively collaborate with various business stakeholders and translate requirements. Added bonus if you also have: A good understanding of Salesforce & NetSuite systems. Experience in SaaS environments. Designed and deployed ML models. Experience with events and streaming data.
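The posting calls out building ELT processes with tools like dbt, Airflow, and Fivetran. Here is a minimal Airflow 2.x DAG sketch with two dependent tasks; the dag_id, schedule, and task bodies are hypothetical placeholders rather than anything from the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder for pulling data from a source API or database.
    print("pull from source")

def load():
    # Placeholder for loading the extracted data into the warehouse.
    print("load into warehouse")

with DAG(
    dag_id="example_elt",            # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # Airflow 2.4+ parameter name
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load  # load runs only after extract succeeds
```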
Posted 4 days ago
4.0 - 9.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Role Purpose The purpose of this role is to design, test, and maintain software programs for operating systems or applications which need to be deployed at a client end and ensure they meet 100% quality assurance parameters. Must-have technical skills: 4+ years on Snowflake with advanced SQL expertise. 4+ years of data warehouse experience with hands-on knowledge of the methods to identify, collect, manipulate, transform, normalize, clean, and validate data; star schema; normalization/denormalization; dimensions; aggregations, etc. 4+ years working in reporting and analytics environments: development, data profiling, metric development, CI/CD, production deployment, troubleshooting, query tuning, etc. 3+ years on Python with advanced Python expertise. 3+ years on any cloud platform (AWS preferred); hands-on experience on AWS with Lambda, S3, SNS/SQS, and EC2 is the bare minimum. 3+ years on any ETL/ELT tool (Informatica, Pentaho, Fivetran, dbt, etc.). 3+ years developing functional metrics in any specific business vertical (finance, retail, telecom, etc.). Must-have soft skills: Clear communication, written and verbal, especially around time off, delays in delivery, etc. Team player: works in the team and works with the team. Enterprise experience: understands and follows enterprise guidelines for CI/CD, security, change management, RCA, on-call rotation, etc. Nice to have: Technical certifications from AWS, Microsoft, Azure, GCP, or any other recognized software vendor. 4+ years on any ETL/ELT tool (Informatica, Pentaho, Fivetran, dbt, etc.). 4+ years developing functional metrics in any specific business vertical (finance, retail, telecom, etc.). 4+ years of team lead experience. 3+ years in a large-scale support organization supporting thousands of users.
Posted 5 days ago
2.0 - 6.0 years
3 - 7 Lacs
Gurugram
Work from Office
We are looking for a PySpark Developer who loves solving complex problems across a full spectrum of technologies. You will help ensure our technological infrastructure operates seamlessly in support of our business objectives. Responsibilities Develop and maintain data pipelines implementing ETL processes. Take responsibility for Hadoop development and implementation. Work closely with a data science team implementing data analytic pipelines. Help define data governance policies and support data versioning processes. Maintain security and data privacy, working closely with the Data Protection Officer internally. Analyse a vast number of data stores and uncover insights. Skillset Required Ability to design, build, and unit test applications in PySpark. Experience with Python development and Python data transformations. Experience with SQL scripting on one or more platforms: Hive, Oracle, PostgreSQL, MySQL, etc. In-depth knowledge of Hadoop, Spark, and similar frameworks. Strong knowledge of Data Management principles. Experience with normalizing/de-normalizing data structures and developing tabular, dimensional, and other data models. Knowledge of YARN, clusters, executors, and cluster configuration. Hands-on experience with different file formats like JSON, Parquet, CSV, etc. Experience with the CLI on Linux-based platforms. Experience analysing current ETL/ELT processes and defining and designing new processes. Experience analysing business requirements in a BI/Analytics context and designing data models to transform raw data into meaningful insights. Good to have: knowledge of Data Visualization. Experience in processing large amounts of structured and unstructured data, including integrating data from multiple sources.
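To ground the listed skills (PySpark transformations, file formats like JSON and Parquet), here is a minimal batch-pipeline sketch: read raw JSON, de-duplicate and type-cast, aggregate, and write partitioned Parquet. The bucket paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Read raw JSON, clean it, and write a partitioned Parquet table.
raw = spark.read.json("s3a://example-bucket/raw/orders/")  # hypothetical path

cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .filter(F.col("total") > 0)   # drop zero/negative totals
)

daily = cleaned.groupBy(F.to_date("order_ts").alias("order_date")).agg(
    F.count("*").alias("orders"),
    F.sum("total").alias("revenue"),
)

daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://example-bucket/curated/daily_orders/"
)
```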
Posted 5 days ago
6.0 - 11.0 years
8 - 12 Lacs
Pune
Work from Office
What You'll Do The Global Analytics & Insights (GAI) team is looking for a Senior Data Engineer to lead our build of the data infrastructure for Avalara's core data assets, empowering us with accurate data to make data-backed decisions. As a Senior Data Engineer, you will help architect, implement, and maintain our data infrastructure using Snowflake, dbt (Data Build Tool), Python, Terraform, and Airflow. You will immerse yourself in our financial, marketing, and sales data to become an expert in Avalara's domain. You will have deep SQL experience, an understanding of modern data stacks and technology, a desire to build things the right way using modern software principles, and experience with data and all things data related. What Your Responsibilities Will Be You will architect repeatable, reusable solutions to keep our technology stack DRY. Conduct technical and architecture reviews with engineers, ensuring all contributions meet quality expectations. You will develop scalable, reliable, and efficient data pipelines using dbt, Python, or other ELT tools. Implement and maintain scalable data orchestration and transformation, ensuring data accuracy and consistency. Collaborate with cross-functional teams to understand complex requirements and translate them into technical solutions. Build scalable, complex dbt models. Demonstrate ownership of complex projects and calculations of core financial metrics and processes. Work with Data Engineering teams to define and maintain scalable data pipelines. Promote automation and optimization of reporting processes to improve efficiency. You will report to a Senior Manager. What You'll Need to be Successful Bachelor's degree in Computer Science or Engineering, or related field. 6+ years of experience in the data engineering field, with advanced SQL knowledge. 4+ years of working with Git, and demonstrated experience collaborating with other engineers across repositories. 4+ years of working with Snowflake. 3+ years working with dbt (dbt core). 3+ years working with Infrastructure as Code (Terraform). 3+ years working with CI/CD, and a demonstrated ability to build and operate pipelines. AWS certified. Terraform certified. Experience working with complex Salesforce data. Snowflake and dbt certified.
Posted 5 days ago
3.0 - 7.0 years
5 - 9 Lacs
Bengaluru
Work from Office
We are looking for a strategic thinker with the ability to grasp new technologies and to innovate, develop, and nurture new solutions; a self-starter able to work in a diverse and fast-paced environment to support, maintain, and advance the capabilities of the unified data platform. This is a global role that requires partnering with the broader JLLT team at the country, regional, and global level by utilizing in-depth knowledge of cloud infrastructure technologies and platform engineering experience. Responsibilities Developing ETL/ELT pipelines using Synapse pipelines and data flows Integrating Synapse with other Azure data services (Data Lake Storage, Data Factory, etc.) Building and maintaining data warehousing solutions using Synapse Design and contribute to information infrastructure and data management processes Develop data ingestion systems that cleanse and normalize diverse datasets Build data pipelines from various internal and external sources Create structure for unstructured data Develop solutions using both relational and non-relational databases Create proof-of-concept implementations to validate solution proposals Sounds like you? To apply you need to have: Experience & Education Bachelor's degree in Information Science, Computer Science, Mathematics, Statistics, or a quantitative discipline in science, business, or social science. Experience working with a cloud delivery team to provide a technical solutions and services roadmap for customers. Knowledge of creating IaaS and PaaS cloud solutions on the Azure platform that meet customer needs for scalability, reliability, and performance.
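The "cleanse and normalize diverse datasets" responsibility might look like this in practice — a small pandas sketch over hypothetical data (in Synapse, the same logic could run in a Spark notebook or a data flow). Note the format="mixed" argument assumes pandas 2.0 or later.

```python
import pandas as pd

# Hypothetical raw extract with inconsistent column names, types, and casing.
raw = pd.DataFrame({
    "Customer ID": ["001", "002", "002", None],
    "Signup Date": ["2024-01-05", "05/01/2024", "05/01/2024", "2024-02-01"],
    "Country": ["IN", "in ", "in ", "India"],
})

# Normalize column names to snake_case.
df = raw.rename(columns=lambda c: c.strip().lower().replace(" ", "_"))

# Drop rows missing the key, then drop exact duplicates.
df = df.dropna(subset=["customer_id"]).drop_duplicates()

# Coerce mixed date formats to datetime (pandas >= 2.0 for format="mixed").
df["signup_date"] = pd.to_datetime(df["signup_date"], format="mixed")

# Standardize country codes.
df["country"] = df["country"].str.strip().str.upper().replace({"INDIA": "IN"})
print(df)
```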
Posted 5 days ago
15.0 - 20.0 years
10 - 14 Lacs
Hyderabad
Work from Office
Project Role: Application Lead Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact. Must have skills: Microsoft Fabric Good to have skills: NA. Minimum 7.5 year(s) of experience is required. Educational Qualification: 15 years full time education Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that project goals are met, facilitating discussions to address challenges, and guiding your team through the development process. You will also be responsible for maintaining communication with stakeholders to provide updates and gather feedback, ensuring that the applications align with business needs and technical requirements. Your role will require a balance of technical expertise and leadership skills to drive successful project outcomes. Roles & Responsibilities: - Expected to be an SME. - Collaborate and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute on key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Facilitate training and development opportunities for team members to enhance their skills. - Monitor project progress and implement necessary adjustments to meet deadlines. Professional & Technical Skills: Should have exposure to Azure Data components: Azure Data Factory and Azure Data Lake Building ETL processes to extract, transform, and load data into the data models Developing and maintaining data pipelines and integration workflows Troubleshooting and resolving issues related to data models, ETL processes, and reporting Design and implement data pipelines for ingestion, transformation, and loading (ETL/ELT) using Fabric Data Factory Pipeline and Dataflows Gen2. Develop scalable and reliable solutions for batch data integration across various structured and unstructured data sources. Oversee the development of data pipelines for smooth data flow into the Fabric Data Warehouse. Implement and maintain data solutions in Fabric Lakehouse and Fabric Warehouse. Additional Information: - The candidate should have a minimum of 7.5 years of experience in Microsoft Fabric. - This position is based in Hyderabad. - A 15 years full time education is required.
Posted 5 days ago